title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Temporally Consistent Atmospheric Turbulence Mitigation with Neural Representations | Accept (poster) | Summary: The study focuses on Video Atmospheric Turbulence Mitigation (ATM), which aims to restore videos that are affected by distortions caused by atmospheric turbulence. Specifically, the proposed ConVRT introduces a neural video representation that decouples spatial and temporal information, allowing targeted regularization of the network's temporal representation capability. Also, this paper integrates supervised and self-supervised learning, significantly improving the temporally consistent mitigation of ATM methods on diverse real-world data.
Strengths: The study introduces a novel framework called ConVRT for video ATM, which addresses the challenge of maintaining temporal consistency in turbulence mitigation. It proposes a neural video representation that decouples spatial and temporal information, allowing targeted regularization and effective mitigation of turbulence-induced temporal frequency variations. Furthermore, the study integrates supervised and self-supervised learning, improving the temporally consistent mitigation of ATM methods on diverse real-world data.
Weaknesses: 1. As a neural representation method, ConVRT is designed to handle video clips with a limited number of frames, making it challenging to handle larger video sequences and more significant motions without compromising accuracy.
2. The related work section lacks completeness. More test-time methods should be carefully reviewed. Also, no SOTA methods were involved in the experimental section for comparison, which is unacceptable.
3. In addition to VRT, more SOTA transformer-based methods, as well as recently popular diffusion-based models, should be added for analysis.
4. ConVRT employs test-time optimization, which can be computationally intensive. Therefore, model efficiency should be discussed.
5. Some intermediate results should be given.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the detailed comments!
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the detailed comments!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **`R4-Q1`**: **Capability of handling longer video sequences and significant motion**.
Our method effectively handles videos with significant motion. We've included additional experimental results and a detailed analysis in our global response to **`shared question A`**. These results demonstrate our approach's robustness across various dynamic scenarios.
**`R4-Q2`**: **The related work section lacks completeness. More test-time methods should be carefully reviewed. Also, no SOTA methods were involved in the experimental section for comparison, which is unacceptable.**
We'd like to highlight that our experimental section includes a comprehensive setup covering SOTA and advanced methods in the related fields:
* TMT (TCI’23) [1]: The pioneering video-based ATM method utilizing transformer architecture and temporal attention.
* DATUM (CVPR’24) [2]: The current SOTA video-based ATM method, trained on an advanced turbulence simulator dataset, achieving 10x faster inference time and superior performance compared to TMT.
* TurbNet (ECCV’22) [3]: An effective image-based method trained on an advanced turbulence simulator.
* VRT (TIP’24) [4]: A notable video method for general deblurring.
Comprehensive qualitative and quantitative results are presented in Figure 1, Figure 5 and Table 1 of the main paper, with additional comparisons shown in **`attached file Figure 1`**. These results demonstrate our method's significant performance improvements in addressing the residual distortion (temporal inconsistency) that persists in current SOTA methods.
Table 1 and subsection 1.1 in the main paper present a comprehensive taxonomy of current ATM methods, analyzing their strengths and limitations relative to our approach.
We've added two more unsupervised and test-time optimization-based turbulence removal methods for comparison and discussion in our global response to **`shared question C`**.
[1] Zhang, Xingguang, et al. "Imaging through the atmosphere using turbulence mitigation transformer." IEEE Transactions on Computational Imaging (2024).
[2] Zhang, Xingguang, et al. "Spatio-Temporal Turbulence Mitigation: A Translational Perspective." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[3] Mao, Zhiyuan, et al. "Single frame atmospheric turbulence mitigation: A benchmark study and a new physics-inspired transformer model." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[4] Liang, Jingyun, et al. "Vrt: A video restoration transformer." IEEE Transactions on Image Processing (2024).
**`R4-Q3`**: **In addition to VRT, more SOTA transformer-based methods, as well as recently popular diffusion-based models, should be added for analysis**.
Regarding the comparison with diffusion-based models, we've prepared a comprehensive analysis highlighting our method's unique advantages. Please see our response to **`R2-Q5`** for an in-depth discussion on this topic.
**`R4-Q4`**: **Discussion about Efficiency**.
We appreciate your suggestion on our model's efficiency. Our global response to **`R1-Q5`** provides a detailed breakdown of our method's computational performance, including comparisons with other test-time optimization approaches for turbulence removal.
**`R4-Q5`**: **Intermediate results**.
These results are compiled in our global response to **`shared question D`** for a clearer understanding of our approach.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: I appreciate the author's rebuttal. However, I am still concerned about the capability for large motion and moving objects.
Therefore, I have decided to maintain my original rating of "Borderline reject".
---
Reply to Comment 1.1.1:
Title: Follow-up Request
Comment: Would you mind elaborating on your concern?
As illustrated in Figure 1 of the response, our approach can effectively reconstruct scenes with substantial motion: over the course of the video the person moves across nearly half the field-of-view (see the long diagonal band in the x-t slice). | Summary: This paper proposes a method for improving the temporal consistency of turbulence-affected videos. The proposed method uses neural representations (MLP layers) to separately model the spatial and temporal deformations caused by air turbulence and is able to improve the temporal consistency of restoration results. It seems that the proposed method needs to be used in conjunction with another base turbulence restoration method. The proposed method (by combining with several other SOTA approaches) is evaluated on an existing real turbulent video dataset, and a small dataset collected by the authors. Temporal consistency, especially for videos with moving objects, is apparently improved after applying the proposed method.
Strengths: - The paper is generally well-written and easy to follow.
- The MLP-based network for mitigating spatial and temporal deformations is new and seems to be effective, especially on maintaining temporal consistency.
- Qualitative evaluation is performed on videos with moving objects and have shown apparent improvement.
- A small real-world dataset was captured and used for testing, but details about the dataset are not clear.
Weaknesses: - It seems that the method is an add-on approach for regularizing the temporal consistency of videos restored by another turbulence restoration method. Since turbulent degradation is more complex than temporal inconsistency and spatial distortions (e.g., there might be blurriness and color aberration beyond the spatial and temporal deformations), being able to handle only these two types of artifacts seems quite limited for turbulence mitigation.
- The quantitative evaluation results (Table 2) are confusing. This table shows metric scores for using the proposed method as a stand-alone (i.e., using the original turbulent video as input). However, the "no base results" are not demonstrated in any visual comparison figures (not even in supplementary materials). It would be useful to see a visual comparison between "no base results" and others. What really confuses me about this table is that the metric scores of the original turbulent images (denoted as "ori") are even better than processed results in many cases (for example, its PSNR is higher than DATUM results for the HeatChamber dataset, and there are many such cases). But according to visual results, most processed results have apparent improvement. Besides, after applying the proposed method, some metric scores become much worse (for example, the slice_tv scores for most datasets and method combinations). There should be some discussion explaining the metric results.
- The paper missed some relevant prior works (see below). These works either use MLPs for modeling turbulent distortions or use a similar idea to enforce temporal consistency, and they should be discussed and compared with the proposed method.
Li et al., "Unsupervised Non-Rigid Image Distortion Removal via Grid Deformation," ICCV 2021.
Thapa et al. "Learning to Remove Refractive Distortions from Underwater Images," ICCV 2021.
Technical Quality: 3
Clarity: 3
Questions for Authors: - It seems that the motions in the demonstrated results are quite slight and slow. Is the method also robust to large motions? How is this related to turbulence strength?
- I'm interested in knowing more details about the airport dataset acquired by the authors, such as the size of the dataset, type of scenes, turbulence strength, etc.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Two limitations are discussed: 1. the length of the input video cannot be too long, and 2. the processing time is somewhat long (the running time of the method is not reported). Turbulence strength and motion scale could also be discussed, if they are limiting factors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **`R3-weakness 1`**: **Forward Model of Turbulence.**
Indeed, mitigating turbulence requires overcoming color aberrations, blurriness, deformations, and other effects. Fortunately, existing techniques have successfully addressed many of these challenges, and arguably the largest limitation of existing approaches is that their results lack temporal consistency. Our method addresses this challenge.
Recent advancements in the field, from TMT (2023) to DATUM (2024), have leveraged increasingly large datasets created by advanced simulators and more complex networks. While these improvements have significantly enhanced sharpness in real-world cases, as evidenced in **`main paper Figure 1`** and **`main paper Figure 5`**, poor temporal consistency remains a glaring issue. This inconsistency is not only noticeable to the human eye but also detrimental to downstream tasks such as video segmentation.
This persistent problem suggests that video-based ATM methods struggle to improve temporal consistency relative to other types of degradation. The primary challenge lies in poor generalization to the random and complex deformations caused by real-world turbulence.
Our method addresses this critical gap by providing an effective refinement solution for video-based ATM methods when applied to real-world videos. We believe that tackling the temporal inconsistency issue represents a significant contribution to the video ATM field at this stage, as it targets a key weakness in current state-of-the-art approaches.
**`R3-weakness 2`**: **Qualitative result of ours trained on the distorted video (No Base Method).**
Even without a base method to provide partially restored frames, our approach effectively improves temporal consistency by a significant margin, as demonstrated in **`attached file Figure 1`**. For clear evidence of this improvement, please observe the x-t slice presented there.
**`R3-weakness 2`**: **Table 2 is confusing. Why do video-based ATM methods have worse PSNR and PSNR_x-t compared to image-based methods on the HeatChamber dataset?**
Apologies for any confusion caused by the content in the main table. Below is a detailed explanation of our results. We will clarify in the revision.
When reconstructing static scenes, image-based reconstruction methods have a significant advantage over video-based methods. This is because image-based methods effectively impose a prior assumption that the scene is completely static, while video-based methods do not. Consequently, it's not surprising that image-based methods perform well on the static HeatChamber dataset.
Without this strong static prior, it's understandable that video-based methods are less effective on static datasets. However, this situation reverses for dynamic scenes, where the static scene prior would be invalid. In these cases, image-based methods perform much worse than video-based methods.
**`R3-weakness 2`**: **Why do some metric scores (tv_slice) appear much worse?**
This is a typo; a lower slice_tv indicates better performance. We will fix it in the camera-ready version.
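As a concrete, hedged illustration of why lower is better here (the paper's exact slice_tv definition is not reproduced in this exchange, so this is an assumed total-variation style score computed on x-t slices):

```python
import numpy as np

# Hedged sketch of a "slice TV" style metric: total variation along the
# time axis of an x-t slice. A temporally consistent video has smooth
# trajectories in the slice, so its score is low; turbulent jitter adds
# frame-to-frame differences and raises the score. The function name and
# definition are our assumptions, not the paper's code.
def slice_tv(video: np.ndarray, y: int) -> float:
    xt = video[:, y, :]                        # (T, W) x-t slice at row y
    return float(np.abs(np.diff(xt, axis=0)).sum())

T, H, W = 10, 8, 8
static = np.ones((T, H, W))                    # perfectly consistent video
jitter = static + np.random.default_rng(0).normal(0, 0.1, (T, H, W))

# The static video scores exactly 0; the jittered one scores higher.
```
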
**`R3-weakness 3`**: **Missing comparisons with two unsupervised methods**
At present, no open-source implementation of [3] is available. We have provided comparisons with Mao's [1] and Li's [2] methods in our global response to the **`shared question C`**. Our approach handles videos with moving objects, whereas Li's and Mao's methods cannot.
[1] Mao, Zhiyuan, Nicholas Chimitt, and Stanley H. Chan. "Image reconstruction of static and dynamic scenes through anisoplanatic turbulence." IEEE Transactions on Computational Imaging 6 (2020): 1415-1428.
[2] Li, Nianyi, et al. "Unsupervised non-rigid image distortion removal via grid deformation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
[3] Thapa, Simron, Nianyi Li, and Jinwei Ye. "Learning to remove refractive distortions from underwater images." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
**`R3-Q1`** : **Capability on large motion**.
Yes, our method can handle moving objects with large motion. For the qualitative results, please see our global response to the **`shared question A`**.
---
Rebuttal Comment 1.1:
Comment: My concerns are mostly addressed by the rebuttal. I would support the acceptance of this paper, given its strength on handling moving objects and improving the temporal consistency of turbulence restoration. In the final version, the author should clarify quantitative results and include comparisons with the methods by Li et al. and Mao et al.
---
Rebuttal 2:
Comment: Thank you for your thoughtful review and for recognizing the potential of our paper.
In the revised version, we will include both quantitative and qualitative comparisons with unsupervised and test-time optimization methods (Li et al. and Mao et al.). Additionally, we will expand our discussion section to incorporate your insightful questions and our exchange, helping readers better understand our contribution in the atmospheric turbulence mitigation field.
Again, we greatly appreciate your valuable time and insightful suggestions. | Summary: This paper introduced ConVRT, a novel method for video atmospheric turbulence mitigation.
This paper has a good structure and is well-written.
Strengths: This paper proposed a new method to deal with turbulence mitigation.
Weaknesses: 1. limited real-world case visualization
2. limited proof of algorithm effectiveness
3. limited comparison with classic algorithm
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Test cases with visualization are so limited, even in the supplemental material. Please show more real-world cases to show the effectiveness of the algorithm.
2. As the representative unsupervised method, "Unsupervised non-rigid image distortion removal via grid deformation," it is important to compare with it, no matter from the algorithm or qualitative result.
3. Lack of ablation studies to prove the effectiveness of the proposed algorithm.
4. Can the method deal with the video with moving objects?
5. No matter whether it is for a single image-based or multiple frames-based turbulence mitigation, most existing algorithms can deal with them very well. With the help of the diffusion model, the resulting image can be refined further. It means that it could generate a good final result in a short time. Then, what is the advantage of your algorithm?
6. It is important to visualize the temporal and spatial field to verify the algorithm's effectiveness.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Same with question
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **`R2-Q1 & R2-Q4`**: **Capability of moving object on more cases**
Yes, our method can handle moving objects. More than half of the cases in our main paper and supplementary materials are dynamic videos. This is evident as the lines in X-t Slice or Y-t Slice are not perfectly vertical or horizontal, indicating the objects are moving along the temporal dimension. Please also view the html file in the supplement which includes video reconstructions of our scenes.
We have also added additional experimental results, with large object motion, in our global response to **`shared question A`**. Our approach can effectively reconstruct scenes with significant object motion.
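For readers unfamiliar with the x-t and y-t slice visualizations referenced throughout this discussion, a small sketch (our own illustration, not the paper's code) shows how object motion appears as tilted streaks in a slice:

```python
import numpy as np

# An x-t slice fixes an image row y and stacks it over time into a (T, W)
# image; a y-t slice does the same for a column x. Static content produces
# straight vertical streaks; motion along the row tilts them diagonally.
def xt_slice(video: np.ndarray, y: int) -> np.ndarray:
    """video has shape (T, H, W); returns the (T, W) slice at row y."""
    return video[:, y, :]

def yt_slice(video: np.ndarray, x: int) -> np.ndarray:
    """Returns the (T, H) slice at column x."""
    return video[:, :, x]

# Toy example: a bright dot moving one pixel right per frame.
T, H, W = 8, 16, 16
video = np.zeros((T, H, W))
for t in range(T):
    video[t, 8, 4 + t] = 1.0

s = xt_slice(video, y=8)        # shape (8, 16)
rows, cols = np.nonzero(s)
# The bright pixels trace a diagonal: the column index advances with time,
# which is exactly the "not perfectly vertical" pattern described above.
```
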
**`R2-Q2`**: **Comparison with Li’s method**
We have added comparisons with Li's [1] in our global response to **`shared question C`**. Our approach handles videos with moving objects, whereas Li's method cannot.
[1] Li, Nianyi, et al. "Unsupervised non-rigid image distortion removal via grid deformation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
**`R2-Q3`**: **More ablation study.**
We conducted an ablation study on temporal resolution (T) and the presence of L_temp, as shown in **`Table below`**, with corresponding qualitative results in **`attached file Figure 4`**.
The first row of the Table shows TurbNet's performance. Equipping TurbNet with our method yields a 0.76 dB improvement in PSNR, demonstrating the effectiveness of our representation field design in regularizing irregular turbulence motion. Decreasing T from 15 to 5 (implying stronger regularization) further improves PSNR by 0.94 dB, SSIM by 0.03, and PSNR_slicext by 1.28 dB. Adding our proposed loss function achieves additional performance gains.
**`Attached file Figure 4`** illustrates qualitative results. Removing the proposed loss function and relaxing regularization leads to noisier and more jittery x-t slices compared to our full method.
| Method Name | Resolution of T| Our Feature Maps | L_temp | PSNR | SSIM | PSNRx-t |
|--------------|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|
| TurbNet | | | | 23.46 | 0.68 | 26.08 |
| Ours | 15 | ✔️ | | 24.22 | 0.69 | 26.63 |
| Ours | 8 | ✔️ | | 24.88 | 0.70 | 27.46 |
| Ours | 5 | ✔️ | | 25.16 | 0.72 | 27.91 |
| Ours | 5 | ✔️ | ✔️ | 25.29 | 0.73 | 27.99 |
**`R2-Q5`**: **Advantage of the proposed method over diffusion model.**
Our method in video ATM offers several advantages over diffusion models: It is generalizable, physics-grounded, temporally consistent, and free from hallucinations.
Diffusion models can learn incredibly accurate priors on images and, more recently, videos. Long term, they could be an effective method to mitigate turbulence. However, they also come with severe limitations. Diffusion models require extensive training data and often suffer from hallucination biases inherent in that data. These models are purely data-driven and do not consider domain knowledge or the physics of turbulence. In contrast, our method addresses residual temporal inconsistency issues from existing physics-based turbulence removal models. By observing turbulence motion features, we design the mitigation based on temporal motion regularity, achieving effective results without any training. Moreover, our method can be applied to any existing physics-based ATM method, making it more grounded in the physics of turbulence rather than relying solely on training data quality.
To date, there has been no exploration of diffusion models for video-based ATM. The only diffusion model method, PiRN, for image-based ATM is a closed-source approach trained on limited synthetic image datasets. There is no evidence it can reconstruct temporally consistent dynamic videos. In contrast, our method significantly improves temporal consistency.
**`R2-Q6`**: **Visualization of the temporal and spatial field.**
Please see our global response to the **`shared question D`**
---
Rebuttal Comment 1.1:
Title: Solves most of my concerns
Comment: For 'Advantage of the proposed method over diffusion model', I mean that it could be used to further improve the turbulence mitigation results of some algorithms.
---
Reply to Comment 1.1.1:
Title: Further discussion with Reviewer EL8T (denoted as R2)
Comment: Thank you for your further explanation.
Applying diffusion models as add-ons to ATM methods presents challenges with hallucination and may worsen temporal consistency. Current restoration-focused diffusion models still struggle with hallucination. Using these models on ATM video outputs has a high likelihood of exacerbating temporal inconsistency, as turbulence-induced random local motion in each frame may result in inconsistent texture restoration, potentially degrading video quality. We will include a small figure in our revised version to further illustrate the limitations of the diffusion model as an add-on.
However, your suggestion points to a valuable but unexplored direction for future ATM research. To effectively use diffusion models as add-ons to refine current ATM methods, future work might consider developing a specialized turbulence module within video-based diffusion models. Our work could inspire such developments, potentially including a regularized temporal attention module.
We appreciate your response. Please let us know if you have any further questions or concerns. We'll respond promptly to your comments. | Summary: This paper presents an implicit neural representation (INR) framework for taking a pre-trained supervised video atmospheric turbulence mitigation (ATM) model and regularizing its output to be more temporally consistent. The main components are (1) an INR called the temporal deformation field; and (2) a subsequent INR called the spatial content field to output RGB intensity at a pixel in the video (at a certain time). These two INRs are trained on the output of a pre-trained ATM model, and regularized using a disparity loss (with MiDas pre-trained network) for temporal consistency, and a similarity loss for the content of the video. Experiments are conducted on real-world datasets with comparison to state-of-the-art ATM models recently proposed as well as some simulated ablation studies.
Strengths: + Method can improve a variety of existing state-of-the-art ATM models, and the use of INRs with different feature representations used as inputs seems like an original contribution (at least to this application field, if not video representation in general)
+ The use of KLT tracker for visualizing the temporal variability across the frames is a good visualization and helps show the improvement of the method in a qualitative way
+ Supplemental videos on the website show the method stabilizing video in the presence of turbulence
+ Extensive quantification of the method shown in Table 2 to illustrate the effectiveness of ConVRT
Weaknesses: - There is little detail about the spatial feature map M, temporal feature map N, and canonical spatial feature map C. What does it mean to call these feature maps, and why are they chosen the way they are? For instance, why the Hadamard product for M and N, and not just learning a 3D feature map directly in that place instead? I also don't see how the C map is "canonical" in any obvious way (for instance, you could change the dimensions of Q1 and Q2, and I don't see why that couldn't work in the method?).
- The method seems to be focused primarily on fixing errors for supervised ATM methods. However, some of the classical approaches such as [Mao 2020] that utilize lucky frames, would they have this problem of temporal variability? I'm not necessarily asking for a comparison or new experiments, but it would be good to discuss if this problem primarily is for supervised methods.
Reference: Zhiyuan Mao, Nicholas Chimitt, and Stanley H. Chan, ‘‘Image Reconstruction of Static and Dynamic Scenes through Anisoplanatic Turbulence’’, IEEE Transactions on Computational Imaging, vol. 6, pp. 1415-1428, Oct. 2020
- Table 3 shows a very modest improvement. Does L_temp really help? A qualitative example would really help clear up whether L_temp is working (show a figure with and without L_temp).
- How would the method handle issues such as camera shake (common in long-range videos that are shot with high optical zoom)?
Minor suggestions:
- Line 187 - TurbNet, shouldn't it be whatever ATM method you are comparing with?
- Line 205 - One shouldn't be capitalized
- Line 109 - unresolved citation
- Table 1 - more stylistic, but I don't think it's necessary to put down the venue into the table. We shouldn't judge methods based on their venue, but on the content of the method itself, so having the citation alone is enough to let readers draw their own conclusions about the papers. I would remove this column from the table.
- Table 3 - there is a typo in the PSNR_Img column where the lower number is bolded rather than the higher one
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the authors explain (1) how the feature maps M, N, C are novel compared to other INR video representations? Is it the use of the Hadamard product, and the two-stage architecture? If so, an ablation study comparing the Hadamard product of M and N to just a 3D spatial feature map directly is warranted. (2) It would be good to visualize these features after optimization (what do they look like), and what information/interpretability can be gleaned from them?
2. There are missing details: what is the details of the MLP layers, any positional encoding? These should be added to a supplemental file.
3. I am interested if L_temp is the key factor that improves the method, or its a minor improvement. Showing a qualitative example as discussed earlier would be beneficial here.
4. Can there be a discussion about issues involving camera shake? I assume the method would require stabilized videos first, and if there is any residual motion leftover, this would cause errors in the reconstruction.
5. What is the wall clock time of the method? How long does it take (in actual seconds) from start to finish? The paper only states 80 epochs, I'm curious how long that actually takes.
6. In line 134 - it says an additional enhancement module is applied after S_field. This was never discussed again. How important is this module? What's the performance with and without the module?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **`R1-Q1 / R1 - weakness 1`** : **Visualization of Canonical Spatial Field and Representation Field Design**
The canonical spatial field C serves as a base spatial representation, containing all spatial content of the video. We can obtain a canonical image by deriving it from this field without applying the delta x and y offsets. This canonical spatial field functions similarly to a key frame in video compression, providing a reference from which other frames in the sequence can be derived. For more detailed results and discussion on this topic, please refer to our global response to **`shared question D`**.
Our feature maps M and N, combined with the Hadamard product, provide an efficient representation strategy for 3D feature volumes represented by INR. This tensor decomposition strategy is commonly used to parameterize 3D volumes in INR, enhancing their ability to represent 3D signals while reducing the number of required parameters [1][2][3]. The Hadamard product in our two-stage architecture efficiently combines spatial and temporal information, offering a balance between computational efficiency and representational power, particularly suited for video ATM tasks.
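As a hedged illustration of this factorization (the toy shapes and names below are our assumptions, not the paper's implementation), a spatial map and a temporal map combined by a Hadamard product can stand in for a dense 3D feature grid at a fraction of the parameter count:

```python
import numpy as np

# Sketch of a factorized feature volume: a spatial map M of shape
# (H, W, C) and a temporal map N of shape (T, C) combine via a Hadamard
# (elementwise) product into an implicit (T, H, W, C) feature volume,
# instead of storing that volume directly.
H, W, T, C = 64, 64, 25, 16
rng = np.random.default_rng(0)
M = rng.standard_normal((H, W, C))   # spatial feature map
N = rng.standard_normal((T, C))      # temporal feature map

def query(t: int, y: int, x: int) -> np.ndarray:
    """Feature for pixel (y, x) at frame t: M[y, x] ⊙ N[t]."""
    return M[y, x] * N[t]

full_params = T * H * W * C          # a dense 3D feature grid would need this
factored_params = (H * W + T) * C    # the two maps together need only this
# Here the factorization stores roughly T times fewer parameters, which is
# the parameter-efficiency argument made for tensor decompositions in INRs.
```
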
**`R1-Q3 / R1 - weakness 3`**: **More ablation study on loss function**
Both representation and loss design are effective in our method. The neural representation design plays a more crucial role, bringing a 0.95dB improvement in performance, while the L_temp loss contributes an additional 0.15dB improvement, as demonstrated in our ablation study. For more details, please refer to our response to **`R2-Q3`**.
**`R1-Q4 / R1-weakness 4`** : **Camera shake**
Thank you for the interesting question. Please see our global response to **`shared question B`**.
**`R1-Q5`**: **Wall clock time**
We conducted experiments on a 25-frame video with 540x540 resolution. Our total running time consists of two parts: the preprocessing using the ATM model (DATUM) and our test-time optimization approach. For comparison, the running time for other unsupervised test-time optimization methods (Li's [2] and Mao’s [1] methods) only includes the methods themselves, as they do not rely on preprocessing.
As shown in the **`Table`** below, even with the additional DATUM preprocessing time, our method is more than 15 times faster than Mao’s method and around 30 times faster than Li's. Our method's efficiency stems from combining the advantages of supervised models with test-time optimization. We use faster supervised methods to address major issues in video turbulence mitigation and apply the slower test-time optimization only to residual temporal inconsistencies. This approach makes our method significantly more efficient than pure test-time optimization methods that must solve the entire problem independently.
| Method | Running Time (min) |
| :----------------: | :------: |
| Mao | 165 |
| Li | 300 |
| DATUM + Ours | 10 |
**`R1-Q2`**: **Missing details about MLP**
Our network architecture consists of two MLPs: a content MLP with 3 fully-connected layers (input: Q1, hidden: 128, output: 2) and a temporal MLP with 8 fully-connected layers (input: Q2, hidden: 128, output: 3 RGB). We encode position information in the optimizable Canonical Spatial Feature Map C and Spatial Feature Map M, both with dimensions H x W matching the image. For each RGB image location, we query the corresponding location in the Spatial Feature Map M, perform a Hadamard product, and feed the result to the MLP. In the Temporal Feature Map, multiple frames share neighboring features with different weights due to explicit regularization. We will provide additional illustrations in the revision to further clarify these concepts.
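A minimal numpy sketch of the two MLPs described above; the layer counts, widths, and output sizes follow the description, while the ReLU activation, the initialization, and the feature dimensions of Q1 and Q2 are our assumptions:

```python
import numpy as np

# Build an MLP as a list of (weight, bias) pairs; `sizes` lists the layer
# widths, so len(sizes) - 1 fully-connected layers are created.
def make_mlp(sizes, rng):
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:          # ReLU on hidden layers only (assumed)
            x = np.maximum(x, 0.0)
    return x

rng = np.random.default_rng(0)
d_q1, d_q2 = 16, 16                      # feature dims of Q1, Q2 (assumed)

# Content MLP: 3 fully-connected layers, hidden width 128, output size 2.
content_mlp = make_mlp([d_q1, 128, 128, 2], rng)
# Temporal MLP: 8 fully-connected layers, hidden width 128, 3-channel output.
temporal_mlp = make_mlp([d_q2] + [128] * 7 + [3], rng)

out2 = forward(content_mlp, rng.standard_normal(d_q1))   # shape (2,)
rgb = forward(temporal_mlp, rng.standard_normal(d_q2))   # shape (3,)
```
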
**`R1-Q6`**: **The missing enhanced module**
The "enhanced module" refers to a network feature that ablation studies caused us to remove. Referencing it in the manuscript was an oversight; apologies for the confusion. This reference will be removed.
**`R1 - weakness 2`**: **Add comparison with Mao’s method**
Our method substantially outperforms Mao et al. [4] and Li et al. [5] on dynamic video content. This is because their approaches are effective only for static-scene turbulence removal and fail when applied to videos containing moving objects. For a qualitative comparison and discussion, please see our global response to **`shared question C`**.
**`R1 - weakness 6`**: **Term involved in the loss function**
At line 187, the loss is calculated between the output of our method and the output of the base method (TurbNet, TMT, DATUM, etc.). We will clarify this description in the camera-ready version.
**`R1 - weakness 5/7/8/9`**: **Typos**
Thank you for pointing out the typos. We will fix them in the camera-ready version.
[1] Chen, Anpei, et al. "Tensorf: Tensorial radiance fields." European conference on computer vision. Cham: Springer Nature Switzerland, 2022.
[2] Fridovich-Keil, Sara, et al. "K-planes: Explicit radiance fields in space, time, and appearance." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Zhou, Haowen, et al. "Fourier ptychographic microscopy image stack reconstruction using implicit neural representations." Optica 10.12 (2023): 1679-1687.
[4] Mao, Zhiyuan, Nicholas Chimitt, and Stanley H. Chan. "Image reconstruction of static and dynamic scenes through anisoplanatic turbulence." IEEE Transactions on Computational Imaging 6 (2020): 1415-1428.
[5] Li, Nianyi, et al. "Unsupervised non-rigid image distortion removal via grid deformation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Thank you for the rebuttal, it satisfied most of my concerns.
One remaining question: I am still confused about M and N. Are they learned feature maps (optimized)? Perhaps you can visualize them as you did C, so the reader has a better understanding of what these maps look like (probably good for supplemental material).
A few follow-ups for the final camera-ready paper (if accepted):
1) I think it is important that the paper acknowledge that the Hadamard product decomposition is inspired by other papers (such as [1-3]) by explicitly putting those references and sentence into a revised version of the paper.
2) Visualizing C in the main paper greatly assists the reader in understanding what this map is. I would endeavor to include it in one of your figures, or as a small additional figure. Your text description should also be incorporated so that the reader can more clearly understand C. Also if M and N are able to be visualized (see question above), it would be good to point readers to understand those feature maps as well.
3) If the loss function only adds 0.15dB, please revise the language in that section to say a "modest" improvement. It is not a major difference, and do not want to overclaim the contributions of that one loss function. If you want to keep the original language, then I still think a qualitative figure with and without the loss function is needed to convince readers the 0.15dB is significant.
4) For the wall-clock latency time, I think it would be good to breakdown how much time was due to DATUM (which is not your method), and how much was due to your optimizations. That gives the readers some understanding that if they add your module to their base ATM method, this is the expected overhead. Please add this discussion to either the main paper or the supplemental material.
---
Reply to Comment 1.1.1:
Title: Further discussion with Reviewer DNNj (denoted as R1)
Comment: We greatly appreciate your prompt response. Thank you for your follow-up question and suggestions.
**`Suggestion 2 & Further Concern`** **More details about the feature map M and N**
Yes, M (Spatial Feature Map), N (Temporal Feature Map), and C (Canonical Spatial Feature Map) are all optimizable during training. M and N are learnable parameters with shapes [H,W,Q1] and [T_res,Q1], respectively. The optimizable parameters in our pipeline include all these feature maps (M,N,C) and two MLPs (content and deformation MLP), as mentioned in line 136 of the main paper. We will emphasize the details of each component by adding more legends and markers to the pipeline figure in the revision.
Regarding the visualization of M (spatial feature map) and N (temporal feature map), we will include visualizations of the deformation grid (delta x and delta y) and M in the revised version. The delta x and delta y serve as a sampling grid, indicating how each frame is sampled from the canonical field (similar to a keyframe in video compression). This visualization will help clarify the product of M and N for specific x, y, and t coordinates.
**`Suggestion 1`** **Acknowledge the tensor decomposition**
Thank you for your suggestion. In the revised version, we will reorganize the related work section about INR and add a new subsection to the methods part, explaining the inspiration behind our method's design.
**`Suggestion 3`** **Description of effectiveness brought by the loss function.**
Thank you for your thoughtful suggestion. We will add the qualitative results from our ablation study on loss functions, as shown in the **`attached file Figure 4`**, to the revised paper. This addition will help readers better understand the effectiveness of each component. Additionally, we will revise our language to more clearly explain the effectiveness of each component. We appreciate your feedback, which will improve the clarity and comprehensiveness of our paper.
**`Suggestion 4`** **Running Time Breakdown**
Thank you for the suggestion. In the revised version, we will include a detailed runtime discussion, breaking down the time for DATUM (0.5 mins) and our method (9.5 mins). Our method is the first refinement strategy for supervised video-based ATM methods struggling with temporal consistency. Therefore, running time efficiency compared to supervised methods is not our primary focus in this work. However, we agree that discussing the runtime of each method, particularly with the breakdown for DATUM, is crucial. This information will help readers understand the computational cost of incorporating our method into existing approaches.
We appreciate your timely response to our rebuttal. Please let us know if you have further concerns; we will respond to your comments as quickly as we can.
---
Rebuttal 2:
Comment: I am satisfied with these revisions and will raise my rating to weak accept for this paper. The main justification behind my rating is the experimental results shown in the paper (KLT tracking, enhanced temporal consistency) including some moving scenes in the supplemental html page.
---
Rebuttal Comment 2.1:
Comment: We greatly appreciate your positive evaluation and recognition. All your suggestions will be incorporated in the revised version. Thank you again for your valuable time and review. | Rebuttal 1:
Rebuttal: **`Shared Question A `** : **How does the proposed method handle large object motion?**
Since our method relies on moderating the temporal regularity of motion in the video, it is natural to ask whether it can distinguish between large object motion and turbulence motion. We provided additional experimental results on large motion cases from the Augmented URG-T Dataset [1], featuring videos with both strong turbulence and fast-moving objects.
As shown in **`attached file Figure 1`**, our approach remains effective for scenes with both large object motion and strong turbulence. This is evidenced by the long KLT trajectories in our restored videos, which successfully capture the original large object motion. Our approach achieves this because its MLP implicitly penalizes temporally irregular motion caused by turbulence while effectively fitting large but regular motion due to the scene itself. This behavior is illustrated in the X-t slices presented in Figure 1; turbulence motion appears irregular (jittering regions), while large object motion, despite being significant, maintains a smooth shape.
Fundamentally, the difference between large object motion and turbulence is that large object motion results in large optical flow magnitude, but the optical flow direction in a local region remains regular. In contrast, turbulence motion causes irregular optical flow directions. Our method (implicitly) allows large optical flows as long as they are locally regular in direction, while suppressing irregular optical flows within a local region. This is why our method can handle large object motion while mitigating turbulence.
[1] Saha, Ripon Kumar, et al. "Turb-Seg-Res: A Segment-then-Restore Pipeline for Dynamic Videos with Atmospheric Turbulence." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
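The distinction between regular and irregular flow direction can be illustrated numerically (this is our own toy example, not part of the paper's pipeline): the circular variance of the per-pixel flow angle over time is near zero for large-but-regular object motion and near one for turbulence-like jitter:

```python
import numpy as np

def direction_irregularity(flow):
    """Per-pixel circular variance of optical-flow direction over time.

    flow: [T, H, W, 2] flow vectors over T frames. Returns values in [0, 1]:
    ~0 for a consistent direction (object motion), ~1 for jitter (turbulence).
    """
    angles = np.arctan2(flow[..., 1], flow[..., 0])          # [T, H, W]
    resultant = np.abs(np.mean(np.exp(1j * angles), axis=0)) # mean direction vector length
    return 1.0 - resultant

T, H, W = 30, 8, 8
rng = np.random.default_rng(0)

# Large but regular motion: constant rightward flow of magnitude 5.
regular = np.zeros((T, H, W, 2))
regular[..., 0] = 5.0

# Turbulence-like jitter: unit flow with a random direction every frame.
theta = rng.uniform(0.0, 2.0 * np.pi, (T, H, W))
jitter = np.stack([np.cos(theta), np.sin(theta)], axis=-1)

print(direction_irregularity(regular).mean())  # 0: large magnitude, regular direction
print(direction_irregularity(jitter).mean())   # close to 1: irregular direction
```

Note that the measure is invariant to flow magnitude, which mirrors the argument above: a fast-moving object with a steady direction scores as regular.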
**`Shared Question B `** : **What is the relationship between turbulence, camera shake, and object motion?**
As discussed in **`Shared Question A`**, in the X-t slice (a way to visualize optical flow over time), turbulence motion appears as an irregular jittering shape due to irregular optical flow direction over time, while object motion, even if large, results in a smooth shape in the X-t slice. Similarly, as shown in the X-t slice in **`Figure 2 in attached file`**, camera shake also manifests as irregular jittering in a local region. Accordingly, by mitigating irregular motion between frames, our method can simultaneously compensate for both turbulence and camera shake.
We simulated Brownian motion to represent camera shake/translation and used MiDaS [2] depth to simulate the effect of translational optical flow decreasing inversely proportional to scene depth under camera motion. In **`Figure 2`**, we present results for scenes with camera shake alone and with a combination of heavy turbulence and camera shake. Our method effectively mitigates camera shake, since shake exhibits irregular motion patterns over time. Additionally, for combined camera shake and turbulence, our method enhances temporal consistency even when residual camera shake and temporal inconsistencies remain after DATUM.
[2] Ranftl, René, et al. "Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer." IEEE Transactions on Pattern Analysis and Machine Intelligence 44.3 (2020): 1623-1637.
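A toy sketch of the shake simulation described above (all function names, step sizes, and shapes are our own assumptions; the actual experiment uses MiDaS depth maps rather than this synthetic one):

```python
import numpy as np

def simulate_shake_flow(depth, n_frames, step=0.5, seed=0):
    """Per-frame translational optical flow induced by Brownian camera shake.

    A 2-D random walk gives the camera translation at each frame; the
    resulting image-plane flow is scaled inversely with scene depth
    (nearer pixels move more), as described in the rebuttal.
    depth: [H, W] depth map; returns flow of shape [n_frames, H, W, 2].
    """
    rng = np.random.default_rng(seed)
    # Cumulative sum of Gaussian steps = Brownian camera path in 2-D.
    shake = np.cumsum(rng.normal(0.0, step, (n_frames, 2)), axis=0)
    # flow[t, y, x, :] = camera translation at frame t divided by depth(y, x).
    return shake[:, None, None, :] / depth[None, :, :, None]

depth = np.linspace(1.0, 10.0, 16).reshape(4, 4)  # toy depth map (stand-in for MiDaS)
flow = simulate_shake_flow(depth, n_frames=25)
```

Because the Brownian path changes direction randomly from frame to frame, the induced flow is temporally irregular, which is exactly the motion pattern the method is designed to suppress.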
**`Shared Question C`** : **Comparison with Unsupervised and Test-time Optimization Baselines.**
We have added additional baselines with two popular unsupervised/test-time optimization methods: Mao's [3] (designed only for grayscale frames) and Li's [4]. As shown in **`attached file Figure 3`**, both methods are designed for static scenes and fail to capture object motion while removing turbulence. This is evident as both methods replace moving human body parts with an averaged static background.
These unsupervised and test-time optimizations are only effective for static scene turbulence removal and fail with videos containing moving objects. To our knowledge, our method is the only unsupervised and test-time optimization approach designed to address temporal consistency in video turbulence mitigation involving moving objects. This is the key difference and novelty of our method compared to other unsupervised and test-time optimization turbulence mitigation methods.
[3] Mao, Zhiyuan, Nicholas Chimitt, and Stanley H. Chan. "Image reconstruction of static and dynamic scenes through anisoplanatic turbulence." IEEE Transactions on Computational Imaging 6 (2020): 1415-1428.
[4] Li, Nianyi, et al. "Unsupervised non-rigid image distortion removal via grid deformation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
**`Shared Question D`** : **Feature Visualization**.
We visualize the canonical image by inputting the canonical spatial feature map into the content MLP without applying $\Delta x$ and $\Delta y$, as shown in the **`attached file Figure 4`**. The canonical image contains most of the video's content, providing a base representation from which other frames can be derived. Consequently, the canonical spatial field functions similarly to a key frame in video compression, serving as a reference for other frames in the sequence to query information.
While directly visualizing the N Temporal Feature Map (dimension [T_res, Q1]) is challenging, we demonstrate its effects through ablation studies. We control the strength of regularization by adjusting T_res. As shown in Table R2-Q3, decreasing T_res achieves better regularization of temporal inconsistency.
Pdf: /pdf/a5e57e47e62129616ede9f505d53a4b7bc320f65.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Aligning Large Language Models with Representation Editing: A Control Perspective | Accept (poster) | Summary: In their paper, the authors introduce RE-CONTROL, a novel approach designed to align Large Language Models (LLMs) through representation editing. They view LLMs as discrete-time stochastic dynamical systems and propose the insertion of control signals into the internal representations. This technique allows for precise manipulation of the model's outputs during test time token by token. The experiments show that this method increases the win rate on HH dataset and does not need significant inference time.
Strengths: - Viewing LLMs as a dynamical system and interpreting the steering vector as a kind of control signal for aligning models is innovative.
- Makes LLMs adjustable during the generation process; evaluation does not have to wait until the entire sentence is generated.
- They empirically show that their method outperforms some test-time alignment methods and does not add significant inference time, which makes the method more practically usable.
Weaknesses: - Some parts of the paper are confusing, especially certain expressions. For example, they did not clarify some notations like a_t, V_{phi} etc.. The legend in figure 1 seems mismatched. And some figures are not mentioned in the paper.
- I think the performance of this method depends heavily on the value model. However, the paper does not discuss the reliability of the value model, which is crucial since it must assess the alignment quality of the entire result based on each newly generated token, before the full result is generated.
- The theoretical analysis and interpretation of their method is interesting, but lacks rigor. E.g., the generated token (y_t) should be determined by the logits (o_t), which are part of the state in the dynamical system. However, the paper interprets the generated token as a kind of random variable or random noise (w_t).
Technical Quality: 4
Clarity: 2
Questions for Authors: Please refer to the part of weaknesses.
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: Limitation are sufficiently discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Some parts of the paper are confusing, especially certain expressions. For example, they did not clarify some notations like a_t, V_{phi} etc.. The legend in figure 1 seems mismatched. And some figures are not mentioned in the paper.**
A: $a_t$ is a typo; we meant $u_t$, which is the control signal. $V_{\phi}$ represents the value function, parameterized by a neural network with parameters $\phi$. We will correct this typo and clarify the notation in our revised version. Additionally, we will polish the figures.
**Q2: I think the performance of this method is highly depend on the value model. However, the paper does not discuss the reliability of the value model, which is crucial since it needs to assess the alignment effectiveness of the entire result based on each newly generated token and do so before the results are generated.**
A: There are three pieces of evidence that demonstrate the reliability of our value model. First, comprehensive evaluation metrics show that RE-Control outperforms the baselines. Second, we test RE-Control and the most competitive baselines on out-of-distribution data. Figure 4 illustrates that our method generalizes well to new input distributions beyond the training distribution of the value function. Third, the uploaded PDF in the general response shows the validation loss during the training process, which is very smooth. This further indicates the reliability of our value model training.
**Q3: The theoretical analysis and interpretation of their method is interesting, but lack rigor. e.g. the generated token (y_t) should be determined by logits (o_t), which is a part of state in the dynamic system. However, the paper interprets the generated token as kind of random variable or random noise (w_t).**
A: This depends on how one defines the language dynamical system, and we believe both perspectives are mathematically correct. In our work, we emphasize representation editing, adding control signals only to the representation space, thus defining the system's state within this space. Mathematically, the generated token functions similarly to the random component in traditional stochastic dynamical systems, as both are functions of the state. Indeed, one could also define the generated token as part of the state. We will include this alternative view in our revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. It addressed some questions. I will maintain my score. | Summary: The paper suggests editing language model features for alignment tasks. The authors first learn a value function of a language model from a human-preference dataset. They then increment feature representations in model layers to maximize test-time utility. Empirical evidence shows that this feature editing method surpasses both test-time and training-time alignment baselines.
Strengths: The proposed method, RE-CONTROL, is a useful middle ground between current training-time and test-time alignment methods:
- RE-CONTROL, unlike existing training-time methods, does not alter a language model’s parameters, reducing training costs. Instead, it learns a value function offline.
- RE-CONTROL, unlike existing test-time methods, employs a learned value function to inject feature increments into features of language models.
The experiments are extensive in that they compared RE-CONTROL with both training-time and test-time alignment methods.
Weaknesses: While the paper is technically well-executed, I believe it has three main limitations: (i) the lack of compute--performance tradeoff analysis (ii) the lack of details in comparing RE-CONTROL with training-time alignment baselines. (iii) the limitation in application scope.
First, a compute-performance tradeoff analysis would clarify the behavior of RE-CONTROL. RE-CONTROL is more compute-intensive than other test-time decoding alternatives because it requires gradient ascent steps at decoding time (Section 4.4). These steps add up and can become quite intensive for generating long text. Therefore, comparing RE-CONTROL with test-time alignment alternatives while considering compute time would be informative. For instance, the authors could display the win rate of different test-time decoding methods on the y-axis and their wallclock time on the x-axis.
Second, I think the performance comparison between RE-CONTROL and training-time alignment methods in Section 6.1 seems very preliminary. There, the authors empirically show that the test-time alignment method RE-CONTROL *outperforms* training-time alignment methods like PPO, by concluding that
>We observe that RE-CONTROL achieves a higher GPT-4 win rate and average reward compared to both PPO and DPO. Furthermore, RE-CONTROL also outperforms these methods in terms of diversity and coherence.
I'm puzzled by how to interpret the results here. Should the take-home message here be "Decoding-time RE-CONTROL is better than training-time PPO in alignment. Period." or are there qualifications to this statement? I strongly suspect that some qualification is needed. To some extent, RE-CONTROL is a decoding-time approximation of PPO. Both methods use a learned value function to steer the model's behavior. At decoding time, RE-CONTROL does this in a more lossy (due to test-time gradient ascent) and shallower (because not all parameters are updated) way. Thus, with adequate training, I expected PPO to yield better results than RE-CONTROL. Note that this doesn't undermine RE-CONTROL's capability, as it is more lightweight than PPO.
Thirdly, while RE-CONTROL is technically sound, its application scope seems narrow. To my understanding, RE-CONTROL is most appealing to users who are unwilling to train a language model offline, who are willing to train a value function offline, who aim to save computing power during training, and who don't mind using more compute during decoding. These intersections of users seem limiting. This raises the question: Is it better to simply use a similar compute budget for efficient alignment (e.g., LoRa) of the LM model using standard methods (DPO, PPO, etc.) and avoid ongoing compute costs during decoding?
Technical Quality: 3
Clarity: 3
Questions for Authors: As mentioned above, in my opinion, it is surprising that decoding-time RE-CONTROL outperforms training-time PPO. To compare PPO and RE-CONTROL more carefully, could the authors consider some ablation studies? For example, you could use the same value function for both PPO and RE-CONTROL, one at training time to fine-tune the model parameters and the other at decoding time to produce the feature increment and compare the results.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See the "Weaknesses" section above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. First, a compute-performance tradeoff analysis would clarify the behavior of RE-CONTROL. RE-CONTROL is more compute-intensive than other test-time decoding alternatives because it requires gradient ascent steps at decoding time (Section 4.4). These steps add up and can become quite intensive for generating long text. Therefore, comparing RE-CONTROL with test-time alignment alternatives while considering compute time would be informative. For instance, the authors could display the win rate of different test-time decoding methods on the y-axis and their wallclock time on the x-axis.**
A: Actually, RE-Control is significantly faster than controlled decoding. We compared the inference times of all methods using the best hyperparameters selected on the validation set in our paper. Please refer to the inference time (in hours) column in Table 1 of our paper. As shown, ARGS ([1], referred to as controlled decoding in the paper) and controlled decoding with a value function [2] are much slower than RE-Control. This discrepancy arises because controlled decoding needs to evaluate multiple candidate tokens and repeatedly perform forward passes through the entire reward model. In contrast, RE-Control only requires optimization through a value function, which is a two- or three-layer neural network, making it significantly faster.
Additionally, we provide more analysis on the inference speed in the general response. Specifically, we plot the inference time under different batch sizes and the compute-performance tradeoff. The results further verify that RE-Control is significantly faster than ARGS. Please see the uploaded PDF and the general response for a detailed discussion.
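The test-time optimization step described above can be sketched roughly as follows (a toy illustration with assumed dimensions and hyperparameters, not the authors' implementation): the hidden state is nudged by gradient ascent through a small value network, while the LLM's own weights stay frozen.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden_dim = 32  # toy size; a real LLM hidden state is much larger

# Small value network (the rebuttal describes a two- or three-layer MLP).
value_fn = nn.Sequential(nn.Linear(hidden_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def edit_hidden_state(h, n_steps=5, step_size=0.05):
    """Test-time representation editing: gradient-ascend the hidden state
    to raise the predicted value; the LLM's parameters are never touched."""
    h = h.detach().clone().requires_grad_(True)
    for _ in range(n_steps):
        # Gradient of the predicted value w.r.t. the hidden state itself.
        (grad,) = torch.autograd.grad(value_fn(h).sum(), h)
        h = (h + step_size * grad).detach().requires_grad_(True)
    return h.detach()

h0 = torch.randn(2, hidden_dim)      # hidden states for a batch of 2 positions
h_edited = edit_hidden_state(h0)     # edited states are then decoded as usual
```

Since each step only backpropagates through this tiny MLP rather than through candidate forward passes of a full reward model, the per-token cost stays low, which is consistent with the speed comparison above.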
**Q2. I'm puzzled by how to interpret the results here. Should the take-home message here be "Decoding-time RE-CONTROL is better than training-time PPO in alignment. Period." or are there qualifications to this statement? ... At decoding time, RE-CONTROL does this in a more lossy (due to test-time gradient ascent) and shallower (because not all parameters are updated) way. Thus, with adequate training, I expected PPO to yield better results than RE-CONTROL. Note that this doesn't undermine RE-CONTROL's capability, as it is more lightweight than PPO.**
A: First, since our paper focuses on test-time alignment, our comparison is primarily with other test-time alignment methods. When comparing with PPO and DPO, we follow the setup in ARGS [1]. Table 4 in Section 4 of [1] demonstrates that ARGS can outperform both PPO and DPO. Since RE-Control outperforms ARGS, it is unsurprising that RE-Control slightly outperforms PPO and DPO as well. We suspect this is because we use LoRA for PPO and DPO, similar to the ARGS paper, which introduces approximation errors. However, we lack the computational resources to perform a direct comparison with full fine-tuning; this cost is precisely the motivation for test-time alignment methods.
Second, we want to clarify that RE-Control does not update model parameters at test time but optimizes the hidden representation directly. Existing works on representation editing [3,4] also show that it can outperform fine-tuning methods for other tasks.
That said, we agree that it is not appropriate to conclude that test-time alignment is better. Therefore, in lines 296-297, we state that RE-Control can achieve competitive performance compared to PPO and DPO. We will further soften our arguments in the revised version, emphasizing that the use of LoRA for PPO and DPO may introduce additional approximation errors. Additionally, we will add a discussion on when to use test-time alignment methods versus fine-tuning methods, as addressed in Q3 below.
**Q3. Thirdly, while RE-CONTROL is technically sound, its application scope seems narrow. ...This raises the question: Is it better to simply use a similar compute budget for efficient alignment (e.g., LoRa) of the LM model using standard methods (DPO, PPO, etc.) and avoid ongoing compute costs during decoding?**
A: This is a broad question: do we need test-time alignment methods? Given the "no free lunch" principle, test-time alignment methods like guided decoding do increase inference time. However, as we mentioned in Q1, RE-Control already shows a significant speed-up in inference time compared to controlled decoding. This represents an important advancement that can make test-time alignment practical in real-world applications.
We also want to highlight that even with LoRA, fine-tuning methods still require substantially more computing resources. For example, in our experiments, we used LoRA with rank 8 for both PPO and DPO. PPO training takes about 3 days on three A100 GPUs, while DPO takes about 1.5 days on three A100 GPUs. In contrast, training the value function only takes around 3 hours on one A100 GPU.
For users who prioritize real-time inference, amortizing the computation into the training process might still be preferable. However, for those without the resources to fine-tune at all, test-time alignment is a better choice and can easily adapt to different alignment objectives. We will include this discussion in our revised version.
**Q4. To compare PPO and RE-CONTROL more carefully, ... one at training time to fine-tune the model parameters and the other at decoding time to produce the feature increment and compare the results.**
PPO also needs to train a value function to estimate the advantage function. However, since PPO is an online algorithm that trains its value function iteratively alongside the policy, it is difficult to employ a pre-trained value function.
## References:
[1] ARGS: Alignment As Reward-Guided Search, ICLR 2024
[2] Controlled Decoding from Language Models, ICML 2024
[3] Inference-Time Intervention: Eliciting Truthful Answers from a Language Model, NeurIPS 2023
[4] ReFT: Representation Finetuning for Language Models, arXiv
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your response. It addressed some of my questions, and I raised my score.
> A: Actually, RE-Control is significantly faster than controlled decoding. We compared the inference times of all methods using the best hyperparameters selected on the validation set in our paper. Please refer to the inference time (in hours) column in Table 1 of our paper.
Apologies for overlooking the inference time mentioned in Table 1. The results are impressive!
> We suspect this is because we use LoRA for PPO and DPO, similar to the ARGS paper, which introduces approximation errors. However, we lack the computational resources to perform a direct comparison with full fine-tuning and this is exactly the motivation why we want test-time alignment methods.
Thank you for the clarification. As the authors mentioned, I recommend they add a qualification to their statements in Section 6.1, such as: "Overall, the results indicate that our approach is a competitive alternative to **parameter-efficient / LoRa-based** training-time alignment methods." Without this qualification, readers might question the rigor of the experimental results.
> We also want to highlight that even with LoRA, fine-tuning methods still require substantially more computing resources. For example, in our experiments, we used LoRA with rank 8 for both PPO and DPO. PPO training takes about 3 days on three A100 GPUs, while DPO takes about 1.5 days on three A100 GPUs. In contrast, training the value function only takes around 3 hours on one A100 GPU.
I see. These numbers are very impressive. I suggest the authors include a similar discussion in their paper or code repository, as doing so will make the method more appealing to practitioners.
> However, since PPO is an online algorithm that requires iterative training of the value function, it is hard to employ a pre-trained value function.
I'm not sure I understand this. Wouldn't it be possible to take a value function trained with PPO and use it within the authors' decoding framework? This approach could allow for a more direct comparison between PPO and the authors' method, as they use the same underlying value function. A similar approach for reusing the PPO value function is discussed here: https://openreview.net/forum?id=QaODpeRaOK.
---
Rebuttal 2:
Title: Thanks for your reply!
Comment: Thank you for your reply and for raising the score! We would like to provide some comments as follows:
**Thank you for the clarification. As the authors mentioned, I recommend they add a qualification to their statements in Section 6.1, such as: "Overall, the results indicate that our approach is a competitive alternative to parameter-efficient / LoRa-based training-time alignment methods." Without this qualification, readers might question the rigor of the experimental results.**
Thank you for the suggestion. We will explicitly clarify that our comparisons are focused on LoRa-based training-time alignment methods, and we demonstrate competitive performance relative to them.
**I suggest the authors include a similar discussion in their paper or code repository, as doing so will make the method more appealing to practitioners.**
Thank you for your suggestion. We will include the training time in our comparisons with LoRa-based training-time alignment methods in the revised version.
**I'm not sure I understand this. Wouldn't it be possible to take a value function trained with PPO and use it within the authors' decoding framework? This approach could allow for a more direct comparison between PPO and the authors' method, as they use the same underlying value function. A similar approach for reusing the PPO value function is discussed here: https://openreview.net/forum?id=QaODpeRaOK.**
Thank you for suggesting this paper. The training objective of the value function in PPO is slightly different from ours. In PPO (as Equation 1 in your suggested paper), regularization is incorporated into the reward function. In contrast, our value function estimates the true reward (see Line 168), and we introduce regularization during testing time by tuning the hyperparameters, such as the number of iterations and step size. This difference means that using the PPO value function could complicate hyperparameter tuning, as it would require retraining the value function each time we adjust the regularization strength.
That said, we agree that it would be interesting to explore the performance of using the value function from PPO. Additionally, examining how the PPO value function performs in baseline-controlled decoding scenarios would also be interesting.
However, with only about a day remaining in the discussion period, it would be challenging to conduct these new experiments, especially since tuning the hyperparameters in this context would require retraining the PPO model. We plan to explore this further in our camera-ready version and hope this is understandable to you. Moreover, since our primary focus is on test-time alignment methods, we believe this does not affect our main conclusions.
Again, thanks for this interesting suggestion!
Best,
RE-Control authors | Summary: The paper introduces an alternative procedure for LLM alignment that does not fine-tune LLM weights, but instead learns a separate value function that is used to update hidden states. The value function is learned using a variation of temporal difference, then applied at inference time to modify hidden states by gradient ascent, maximizing the predicted state value. Authors evaluate their approach with multiple 7B LLMs on HH-RLHF data, comparing against both RLHF and training-free baselines. The paper also analyzes OOD generalization to HarmfulQA.
Strengths: - Authors propose an interesting approach that can be used to alter LLM behavior in general
- When experimenting with HH-RLHF dataset, authors evaluate against multiple types of baselines and provide additional analysis that was interesting to read
- The paper is generally well-written and easy to follow
- Authors made the code available, in a (mostly) serviceable state
Weaknesses: **1a. Motivation for the choice of baselines.**
In your work, you cite, among others, ARGS[26], DeAL [22], Value Augmented Sampling [21] that also learn value functions and use them to steer model outputs (in other ways), but, to the best of my knowledge, you do not compare against them as baselines, instead choosing a relatively older work on controlled decoding. While [21] may be dismissed as concurrent work, the other works appear to be a relevant alternative and it is not clear why they were not chosen as baselines.
If there is a reason why these works will, beyond reasonable doubt, fail at the task that you evaluate on, I would recommend that you explain this in the paper. If there is no such reason, the paper would benefit from comparing against them.
**1b. Motivation for the choice of models**
Your paper focuses on Llama, Vicuna and Falcon models, of the 7B variety. While these are indeed LLMs, the original Llama was released circa 1.5 years ago and since then, LLMs improved **significantly** across tasks.
Picking older LLMs appears counterintuitive, as their generally worse quality makes it harder to measure possible drawdowns introduced by LLM alignment.
If you have a reason for choosing these models, please explain why you focus on older LLMs as compared to, for example, Llama 3 8B (or 70B), Qwen2, Gemma or other models near the top of https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard . If there is no such reason, the paper would benefit from switching to more accurate models.
**2. Inference time exploration**
LLM use cases are often sensitive to inference throughput (tokens per second) and latency (time to first / next token).
To the best of my understanding, RE-Control introduces an iterative optimization step to each forward pass during autoregressive inference. Depending on the configuration, this may result in a significant slowdown, which may limit the practical significance of your approach.
I would argue that the work would benefit from analyzing this difference in speed in different settings (e.g. single-sequence vs batch inference, etc).
**3. Main experiments are limited to one dataset and relatively small past generation LLMs, ranked by GPT-4**
This is definitely not a fault on the authors' side, but the paper makes its main conclusions based on 7B models, using reward functions trained on a single dataset. This could result in accidental false conclusions if it turns out that, for instance, RE-Control harms the quality of stronger models or if it is somehow implicitly overfitting on GPT-4 opinions.
The standard way to minimize this risk is to diversify the experiments: try alternative alignment datasets (e.g. webgpt_comparisons, oasst1, etc), try larger models (llama-3 70B), introduce human rankings in some setups, etc. I understand that not all of these evaluations may be available to the authors, but for a NeurIPS publication, I would expect more variation in the experiments and, if there is a confounder that could not be eliminated (e.g. using GPT4 and no human eval), it should be stated in the limitations section.
Technical Quality: 2
Clarity: 3
Questions for Authors: **Questions on the definition of state**
To the best of my (possibly wrong) understanding, when you apply Bellman equation, you assume that the dynamic system's state satisfies Markov assumption. [If not, please explain why not]
Since LLMs use attention to previous hidden states, the hidden vectors for a specific step do not satisfy the Markov assumption: the LLM's next-token probability depends not only on them but on the more distant past as well. In contrast, a fully Markovian state would need to contain all previous hidden vectors, or the current hidden vectors and all past KV projections, or the sequence of all previous tokens (no hidden vectors necessary).
In other words, **when you define V(s), does s refer to just the current token's hiddens or a full state with Markov assumption?**
If you mean the latter state, then the test-time intervention (S4.4) needs to modify all previous hidden states of an LLM. This is important because modifying past hidden states may result in a very inefficient LLM inference algorithm.
If only the current state, you seem to apply policy iteration (S4.2-4.3) to a non-markov state. Please explain how you make sure that this algorithm still has the guarantees of optimal policy. If it doesn't, please clearly explain that the algorithm is a heuristic inspired by PI rather than actual PI.
### On reproducibility
To reiterate, the fact that you publish the code is great. None of my complaints below affected the final score.
The codebase lacks library versions (requirements.txt / dockerfile / list them in the readme), which makes it difficult to reproduce, especially in the future. While I ultimately managed to run the code by choosing the libraries with an educated guess (and minor modifications to the code), I am still not sure if I got the method to work "as intended" and not introduce silent errors.
For legal reasons, it would be best to direct the users to a version of Llama 7B that contains its original license, at least in the final version of the paper.
Using GPT-4 opinion means that the experiments would be difficult to reproduce after it is cycled out.
### Typos / minor:
> L16 LLama
The capitalization for the first version was LLaMA, second and third are Llama.
> supplementary code: intervented_model
you may have meant “intervened”
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The "Limitations and future work" appendix can be significantly improved. Currently, it focuses on future work and omits some limitations of the experiments, such as:
- using GPT-4 as the primary metric will make the results irreproducible once OpenAI cycles out GPT4, a closed-source model
- evaluating only on relatively weaker models (pre-previous gen, 7B) may miss some caveats or synergies from more capable LLMs
- using a single training dataset makes it possible that the proposed method is uniquely powerful in this one scenario but not others
The quality of the limitation section did not affect my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1a: Choice of baselines**
A: **We have compared our work with ARGS [26]**. Both [26] and [39] are controlled decoding methods. Specifically, [26] directly uses a pre-trained reward model, while [39] further trains a value function that can predict the reward from partial responses. In our paper, we refer to ARGS [26] as controlled decoding, and [39] as controlled decoding with prefix. Lines 242-247 provide a detailed description of this naming strategy. If this causes confusion, we are open to using the original names in our revised version.
DeAL [22] does not provide source code and has not been published yet, making it difficult to reproduce their method. Value Augmented Sampling is a concurrent work [21].
**Q1b: Choice of models**
A: We want to clarify that Vicuna and Falcon are based on Llama 2 instead of LLaMA 1. Using the Llama 2 family as the base model was still the most common choice in academic research when we submitted this paper, since Llama 3 was released just one month before the NeurIPS deadline. That being said, we are running additional experiments using Llama 3 and expect to share the results with you around this weekend.
As for the model size, we agree that it would be ideal to test on larger models beyond 7B. However, as a small research lab in academia, we lack the computing resources to test on 70B models. Testing on a 7B model is the best we can manage with our current resources. We believe that using a 7B model for testing is also common in LLM papers from academic groups, such as ARGS and DeAL. For example, DeAL uses Falcon 7B and MPT 7B, and ARGS uses LLaMA 1 7B. We will include this point in our limitations section.
**Q2: Inference Speed**
A: We want to clarify that RE-Control is actually much faster than controlled decoding [26,39]. This is because controlled decoding needs to evaluate multiple candidate tokens and repeatedly perform forward passes through the entire reward model. In contrast, RE-Control only requires optimization through a value function, which is merely a two- or three-layer neural network, making it significantly faster.
We provide the inference time under different batch sizes in the uploaded PDF in the general response. As shown, increasing the batch size reduces the discrepancy between RE-Control and the base model, becoming negligible at a batch size of 32. Furthermore, ARGS [26] (referred to as controlled decoding in the original paper) does not support batch generation, resulting in significantly slower inference speeds compared to RE-Control. Additionally, we provide the compute-performance tradeoff analysis. Please refer to the general response for a detailed discussion.
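As a hedged illustration of the comparison described above (not the authors' actual benchmark), the following harness times a dummy decoder loop with and without a per-token value-ascent step across batch sizes. Every function and constant here is a placeholder; only the measurement pattern is the point.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)) / 8.0

def forward(h):                        # stand-in for one decoder step
    return np.tanh(h @ W)

def value_step(h, iters=5, lr=0.1):    # stand-in for the test-time ascent
    for _ in range(iters):
        h = h - lr * 2.0 * h           # ascend a dummy quadratic value -||h||^2
    return h

def sec_per_token(batch, n_tokens=50, intervene=False):
    h = rng.normal(size=(batch, 64))
    t0 = time.perf_counter()
    for _ in range(n_tokens):
        h = forward(h)
        if intervene:
            h = value_step(h)
    return (time.perf_counter() - t0) / (batch * n_tokens)

for b in (1, 8, 32):
    base, ctrl = sec_per_token(b), sec_per_token(b, intervene=True)
    print(f"batch={b:2d} overhead ~x{ctrl / base:.2f}")
```

With per-token work amortized over the batch, the relative overhead of the extra step typically shrinks as batch size grows, which is the trend the response reports.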
**Q3. Main experiments are limited to one dataset and relatively small past generation LLMs, ranked by GPT-4.**
We have added experimental results on the SHP dataset using Vicuna. The results show that RE-Control also outperforms the baselines on this new dataset. Please see the general response for a detailed discussion.
Regarding the model choice, please refer to our answer in Q1b. Additionally, we are conducting experiments using Llama3 8B and expect to share the results this weekend.
Regarding the evaluation metrics, we follow those used in ARGS, which is the most important baseline for our paper. As investigated in [69], using GPT-4 as a proxy aligns with human evaluations over 80% of the time for quality assessments, providing a scalable method to approximate human preferences. Additionally, in Section 6.4 of the DPO paper, they demonstrated that "Humans agree with GPT-4 about as much as they agree with each other" for alignment tasks. We do not use any information from GPT-4 during training, so we do not anticipate any overfitting issues related to GPT-4. In addition to the GPT-4 evaluation, we also assess average reward, coherence, and diversity.
We have also conducted human evaluations. We sampled 100 response pairs from RE-Control and ARGS on HH-RLHF using Vicuna as the backbone, and then asked humans to evaluate which response was more helpful, less harmful, and of overall higher quality. An option was provided for cases where the two responses were equally good. Two volunteers participated, each evaluating 50 comparisons without knowing which model generated the responses. The results were as follows: RE-Control vs. ARGS: Win: 31%, Tie: 52%, Lose: 17%.
**Q4: Definition of state.**
A: We want to clarify that policy iteration requires the input to the value function to be the full state, but it does not require the control signals to be added to the full state. This means we can train the value function on the full state and backpropagate through it with respect to partial inputs at test time. We will make this clear in the final version of our paper.
**Q5: Reproducibility**
A: We did include the requirements.txt file in our submitted code. We will provide a more detailed README in the revised version. Additionally, we will add a link to a version of Llama 2 7B that contains its original license.
---
Rebuttal Comment 1.1:
Title: Results on Llama3
Comment: Dear Reviewer XYhX,
We have added new results using Llama3 on the SHP dataset. For more details, please refer to the general response. If you have any further questions or concerns, please feel free to let us know.
---
Rebuttal Comment 1.2:
Title: Review update
Comment: I apologize for a delayed response and thank authors for a detailed response. Authors have answered my questions in full, suggested reasonable updates to the paper and provided additional experiments. Based on these updates, I have increased my score in the original review.
---
Rebuttal 2:
Title: Thanks for your review
Comment: Dear Reviewer XYhX
Thank you for your time and effort in helping us improve our work. As we approach the end of the discussion period, we wanted to check in to see if you have any further questions or comments. We are more than happy to address any additional concerns you may have.
Best,
RE-Control authors | Summary: The paper "Aligning Large Language Models with Representation Editing: A Control Perspective" proposes a method for aligning large language models (LLMs) with human objectives through representation editing. Unlike fine-tuning, which is resource-intensive and unstable, or test-time alignment techniques like prompting that rely on the original model's capabilities, this method introduces external control signals into the hidden states of a pre-trained LLM. The method treats the LLM as a discrete-time stochastic dynamical system and applies control theory to train a value function on the hidden states, optimizing control signals at test time. The experiments show that this method, named RE-CONTROL, outperforms existing test-time alignment techniques and requires fewer resources compared to fine-tuning methods.
Strengths: Innovative Approach: The use of control theory to introduce control signals into the hidden states of LLMs is novel and provides a new perspective on alignment.
Resource Efficiency: RE-CONTROL is less resource-intensive than traditional fine-tuning methods, making it more practical for large-scale applications.
Empirical Success: The experiments demonstrate that RE-CONTROL outperforms existing test-time alignment methods, showing strong generalization and alignment capabilities.
Flexibility: The method offers more flexibility than prompting or guided decoding as it perturbs the representation space dynamically during the generation process
Weaknesses: Complexity: The method involves sophisticated control theory and optimization techniques, which might be challenging to implement and understand for practitioners without a strong background in these areas.
Dependency on Value Function: The success of the method heavily relies on the accuracy and training of the value function, which might introduce additional challenges in terms of training and performance.
Technical Quality: 3
Clarity: 4
Questions for Authors: What are the specific challenges encountered during the training of the value function, and how can they be mitigated?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limited Scope: The paper primarily focuses on aligning LLMs for helpfulness and minimizing harmfulness. It might not address other important alignment objectives comprehensively.
Potential Overfitting: The reliance on a specific value function and control signals might lead to overfitting to the training data or specific tasks, limiting the method's generalizability.
Evaluation Metrics: The evaluation metrics, while comprehensive, might not capture all aspects of alignment, especially in diverse and dynamic real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Complexity: The method involves sophisticated control theory and optimization techniques, which might be challenging to implement and understand for practitioners without a strong background in these areas.**
A: Though our work is theoretically grounded, the implementation of our method is actually very simple. At training time, we only need to train a basic value function, typically a two- or three-layer neural network. At test time, optimizing through the value function involves only about 10 lines of code in the forward pass of the language model.
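The test-time step described here can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the value function below is a dummy quadratic rather than a learned value head, and `target`, `n_iters`, and `step_size` are hypothetical stand-ins for the tunable hyperparameters.

```python
import numpy as np

def value_fn(h, target):
    # Stand-in for the learned value head: value rises as the hidden
    # state approaches a preferred region of representation space.
    return -np.sum((h - target) ** 2)

def intervene(h, target, n_iters=10, step_size=0.1):
    """Gradient ascent on the value estimate, applied to one hidden state."""
    for _ in range(n_iters):
        grad = -2.0 * (h - target)     # analytic d value_fn / d h
        h = h + step_size * grad       # ascend the value landscape
    return h

rng = np.random.default_rng(0)
target = rng.normal(size=8)            # hypothetical "aligned" direction
h0 = rng.normal(size=8)                # hidden state from one decoder step
h1 = intervene(h0, target)
assert value_fn(h1, target) > value_fn(h0, target)
```

In the actual method the gradient would come from backpropagating through the trained value network rather than a closed form, but the loop structure is the same.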
**Q2: Dependency on Value Function: The success of the method heavily relies on the accuracy and training of the value function, which might introduce additional challenges in terms of training and performance. What are the specific challenges encountered during the training of the value function, and how can they be mitigated?**
A: We did not encounter any significant challenges when training the value function, as it is underpinned by the Bellman equation. One minor challenge we faced was that the learning rate could affect the quality of the value function. However, this can be easily addressed by selecting the model based on the loss on the validation set. In the uploaded PDF in the general response, we also show the validation loss during the training process of the value function, which is very smooth. This is also an indicator that our value function is reliable.
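The Bellman-equation training mentioned here can be illustrated with the simplest possible case: tabular TD(0) on a three-state deterministic chain with a single terminal reward. This toy stands in for the paper's setup (where the value function is a small MLP over LLM hidden states), but the fixed-point logic is the same.

```python
import numpy as np

# Tabular TD(0) on the chain 0 -> 1 -> 2 -> terminal, reward 1 on the
# final transition. The Bellman fixed point is V = [1, 1, 1].
V = np.zeros(3)
alpha = 0.5
for _ in range(200):
    for s in range(3):
        r = 1.0 if s == 2 else 0.0           # reward only at the last step
        v_next = V[s + 1] if s < 2 else 0.0  # terminal state has value 0
        V[s] += alpha * (r + v_next - V[s])  # TD(0) update

assert np.allclose(V, [1.0, 1.0, 1.0], atol=1e-6)
```

Because each update regresses toward a bootstrapped target, the error contracts geometrically, which matches the smooth validation loss the authors report.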
**Q3. The paper primarily focuses on aligning LLMs for helpfulness and minimizing harmfulness. It might not address other important alignment objectives comprehensively.**
A: We follow the experimental setup of previous work on test-time alignment (ARGS [26]), focusing on helpfulness and harmfulness, which are the most common tasks in LLM alignment literature. The main goal is to demonstrate that dynamic representation editing can outperform guided decoding methods. While we agree that testing on other alignment objectives would be interesting, we leave this for future research.
**Q4. Potential Overfitting: The reliance on a specific value function and control signals might lead to overfitting to the training data or specific tasks, limiting the method's generalizability.**
A: In Section 6.2, we test RE-Control and the most competitive baselines on out-of-distribution data. Figure 4 demonstrates that our method can generalize well to new input distributions beyond the training distribution of the value function.
**Q5. Evaluation Metrics: The evaluation metrics, while comprehensive, might not capture all aspects of alignment, especially in diverse and dynamic real-world scenarios.**
A: We follow the evaluation metrics used in previous work on guided decoding (ARGS [26]), which is the most important baseline for our method. While we agree that proposing evaluation metrics for dynamic real-world scenarios is important, it is beyond the scope of this paper.
---
Rebuttal Comment 1.1:
Comment: Thank you so much for the detailed response, the clarification helps. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback and the time they spent on our manuscript. We would like to highlight that all reviewers agree that RE-Control is an innovative approach and that viewing an LLM as a dynamical system is novel. Additionally, all reviewers have noted that RE-Control is technically sound. We provide additional experimental results (please see the uploaded PDF) in this general response and offer individual responses to each reviewer in detail below.
**More Results on an Additional Dataset.** We conducted further experiments on the SHP dataset (https://huggingface.co/datasets/stanfordnlp/SHP) using Vicuna as the backbone. As shown in Table 1 of the uploaded PDF, RE-Control continues to outperform all the baselines on this dataset. Additionally, we are also running experiments using the latest Llama3 8B. We expect to have the results by this weekend and will keep updating them.
**Inference Speed under Different Batch Sizes:** We provide additional analysis on inference time in Figure 1, which compares the inference time under different batch sizes. As shown, increasing the batch size reduces the discrepancy between RE-Control and the base model, becoming negligible at a batch size of 32. Furthermore, ARGS [26] (referred to as controlled decoding in the original paper) does not support batch generation, resulting in significantly slower inference speeds compared to RE-Control. This is because controlled decoding needs to evaluate multiple candidate tokens and repeatedly perform forward passes through the entire reward model. In contrast, RE-Control only requires optimization through a value function, which is a simple two- or three-layer neural network, making it significantly faster.
**Compute-performance Tradeoff at Test Time:** Figure 2 illustrates the compute-performance tradeoff between RE-Control and ARGS. For RE-Control, we vary the number of iterations when optimizing through the value function at test time, while for ARGS, we adjust the number of candidate tokens. As shown, the performance of RE-Control initially increases with more computing time but eventually decreases. This decline occurs because a large number of iterations at test time can lead to reward hacking, reducing the quality of the generated sentences. As discussed in Section 6.3, this hyperparameter can be selected based on the validation set. Since ARGS does not support batch generation, its inference speed is significantly slower than RE-Control. Even when RE-Control does not use batch generation, it outperforms ARGS when using the same computing resources. For example, when the inference time is around 155 minutes, the win rate of RE-Control is 75%, while ARGS is only 62%.
**Validation Loss Curve During Training of the Value Function:** We provide the validation loss versus the number of training steps for the value function in Figure 3. As shown, the training of the value function is very smooth, indicating its reliability.
Pdf: /pdf/bec31f592a6406a7dd553201e40a22b64b309b0d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exogenous Matching: Learning Good Proposals for Tractable Counterfactual Estimation | Accept (poster) | Summary: The manuscript introduces exogenous matching, an importance sampling method for efficient estimation of counterfactual expressions in various settings. This method transforms the variance minimization problem into a conditional distribution learning problem, allowing integration with existing modeling approaches. The authors validate their theoretical findings through experiments with different Structural Causal Models (SCMs), showing competitive performance in a range of counterfactual estimation tasks. They also examine the impact of structural prior knowledge and demonstrate the method's unbiased estimates and practical applicability in identifiable proxy SCMs.
Update: revising score upward following author rebuttal.
Strengths: The topic is timely and important, as counterfactual estimation has become an increasingly popular subject in statistics and machine learning. The proposal builds on recent results in neural causal models, specifically with normalizing flows. The ability to incorporate prior knowledge in the form of Markov boundaries is especially welcome, since computing counterfactuals is often intractable without such constraints. The theoretical results appear sound (though I confess I did not go closely through the proofs) and the empirical results are compelling.
Weaknesses: The manuscript is not always clear, probably because a great deal of material has been moved to the appendix to accommodate page count. The result is a somewhat disjointed text that would likely be better served by a journal publication than a conference paper. That said, I am generally supportive of this submission and would be willing to revise my score upward if my questions are adequately addressed (see below).
Technical Quality: 3
Clarity: 2
Questions for Authors: When we say the causal model is “not fully specified”, does that just refer to the structural equations or to the graphical structure as well? In general, I was not always certain just how much causal information is used as input to this method.
I’m a bit confused by Eq. 5. If this is meant to be a variance estimator, then presumably the RHS should be something like $\mathbb{E}[X^2] – (\mathbb{E}[X])^2$, where $X$ denotes the likelihood ratio $p(u) / q(u)$, correct? This is almost but not quite what we find here. Looking at the appendix, I don’t see why Eq. 48 follows from Eq. 47. Why do we get to drop the square from the first term?
What does the constant $c$ denote in Eq. 6? Is it just the entropy of $P$? A brief word on this would help with intuition.
Should there definitely be a negative both in front of the expectation *and* within it? I assume the first summand should just be $-\mathbb{E} [ \log Q(u \mid Y_* (u)) ] $? This would look more like the classic cross entropy formula, as indicated in the following line. Eq. 7 also suggests this.
On “augmented graphs” – does this “reverse projection” of the ADMG always work? Different DAGs can have the same ADMG, for instance if two latent variables have all the same endogenous children. Perhaps there’s an unstated minimality assumption at work here?
What is $m$ in Eq. 13?
It is not clear to me from Sect. 5 what the sample size and data dimensionality are for these tasks. In general, this section appears rushed. The performance metrics are also somewhat surprising. If data is simulated, then we presumably have ground truth with respect to counterfactual probabilities. If so, then why not just compute the mean square error of the proposed estimator, perhaps as a function of sample size?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the patience and valuable feedback. We acknowledge that, due to page limitations, we had to move some material to the appendix, which may have affected the clarity of the manuscript. We understand that this may have made the paper appear somewhat disorganized. To address this issue, we will reorganize the content to ensure a clearer and more coherent presentation. We will introduce more transitional text to improve the flow of the manuscript and make the key points more understandable.
Below are our responses to the questions:
> **Question 1**: When we say the causal model is “not fully specified”, does that just refer to the structural equations or to the graphical structure as well? How much causal information is used as input to this method?
**Response**: “Fully specified” means that $\mathcal{F}$ is completely known, which means that we can write out the explicit expression of any $f \in \mathcal{F}$ (i.e., the structural equation). This also implies that the graphical structure is known, as the graph structure can be derived from $\mathcal{F}$. On the other hand, “not fully specified but $\mathcal{F}$ is evaluable” means that given an input $\mathbf{u} \in \Omega_\mathbf{U}$, we can obtain the output $\mathcal{F}(\mathbf{u})$. In other words, $\mathcal{F}$ is allowed to be regarded as a black box, such as a neural network.
> **Question 2**: Confusion by Eq. 5 and inadequate derivations in the appendix.
**Response**: We apologize for the confusion caused by the omission of the relevant steps in the appendix. Specifically, the subscript of the expectation operator has changed:
$\mathbb{E}_Q\left[\left(\frac{P(\mathbf{u})}{Q(\mathbf{u})}\right)^2 f^2(\mathbf{u})\right]=\sum\_{\mathbf{u}\in\Omega\_{\mathbf{U}}} Q(\mathbf{u}) \left(\frac{P(\mathbf{u})}{Q(\mathbf{u})}\right)^2 f^2(\mathbf{u})$ $= \sum\_{\mathbf{u} \in \Omega\_{\mathbf{U}}} \frac{P^2(\mathbf{u})}{Q(\mathbf{u})} f^2(\mathbf{u})$ $= \sum\_{\mathbf{u} \in \Omega\_{\mathbf{U}}} P^2(\mathbf{u}) \left(\frac{f^2(\mathbf{u})}{Q(\mathbf{u})}\right)$ $= \sum\_{\mathbf{u} \in \Omega\_{\mathbf{U}}} P(\mathbf{u}) \left(\frac{P(\mathbf{u})}{Q(\mathbf{u})}\right) f^2(\mathbf{u})$ $= \mathbb{E}_P\left[\frac{P(\mathbf{u})}{Q(\mathbf{u})} f^2(\mathbf{u})\right].$
Similar steps and the same conclusion also apply to continuous distributions.
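The change of measure above is easy to sanity-check numerically on a toy discrete space; the distributions and integrand below are arbitrary placeholders, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(10); p /= p.sum()   # target distribution P
q = rng.random(10); q /= q.sum()   # proposal distribution Q
f = rng.normal(size=10)            # arbitrary integrand

# E_Q[(P/Q)^2 f^2]  vs.  E_P[(P/Q) f^2] -- both reduce to sum(P^2/Q * f^2)
lhs = np.sum(q * (p / q) ** 2 * f ** 2)
rhs = np.sum(p * (p / q) * f ** 2)
assert np.isclose(lhs, rhs)
```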
> **Question 3**: What does the constant $c$ denote in Eq. 6? Is it just the entropy of $P$? A brief word on this would help with intuition.
**Response**: As described in the appendix (line 1018), $c$ is composed of the entropy of $P$ and $\log\kappa$. We will include a brief explanation of this in the main text.
> **Question 4**: Typo in $-\mathbb{E}\left[\log q(\mathbf{u}\mid\mathbf{Y}_*(\mathbf{u}))\right]$.
**Response**: We thank the reviewer for the careful review and correction. This was indeed a typo, and we will fix it in the updated version.
> **Question 5**: On “augmented graphs” – does this “reverse projection” of the ADMG always work? Perhaps there’s an unstated minimality assumption at work here?
**Response**: In recursive SCMs, there is not always a "reverse projection" between ADMGs and their corresponding augmented DAGs; as the reviewer mentioned, different DAGs may correspond to the same ADMG. We have tried to understand why the reviewer might have this question. We believe it may be due to some ambiguity in our presentation in the "Markov boundary" section, which might have led readers to think that "ADMG -> DAG -> MB" is implied. However, we only emphasize "DAG -> MB". We thank the reviewer for this question and will clarify this information.
> **Question 6**: What is $m$ in Eq. 13?
**Response**: In Appendix C.1, we explained $m$, but we forgot to explain it in the main text. We will add the missing explanation of $m$ in the main text; the correct notation is $\le m$ instead of $\ge m$. Here, $m$ is a threshold value: if the proportion of valid samples in a test is less than $m$, it is considered a failure.
> **Question 7**: It is not clear to me from Sect. 5 what the sample size and data dimensionality are for these tasks? In general, this section appears rushed. The performance metrics are also somewhat surprising. If data is simulated, then we presumably have ground truth with respect to counterfactual probabilities. If so, then why not just compute the mean square error of the proposed estimator, perhaps as a function of sample size?
**Response**: The experimental settings in Section 5 are detailed in App. C.4. For sample sizes, RS was set to $10^6$, while the other methods used $10^3$. For variable dimensions, this is related to $|s|$ and the SCM settings; for example, LARGEBD-NLIN with 9 variables results in 45 dimensions when $|s|=5$. As for the choice of metrics, our use of ESP and LL is motivated by the intuition that indicator functions in counterfactual probability estimators are more challenging to satisfy (that is, to equal $1$). Therefore, we refer to metrics from rare event sampling, which are computationally convenient, dimension-independent, and naturally range from 0 to 1. Another reason for using this metric is that as dimensions increase, counterfactual probabilities decrease exponentially, making it nearly impossible for RS to sample effective samples in high-dimensional cases. RS is a tractable method for estimating ground truth in general cases, so the ground truths are almost unattainable in such situations.
Considering the reviewer's feedback, we note that metrics based on the deviation from ground truth are suitable for counterfactual effect estimation, given the low dimensionality in the experimental setting. Therefore, we have updated the section "Counterfactual Estimation on Proxy SCMs" to use the bias with respect to the true values ($|\hat{\mathbb{E}} - \mathbb{E}|$) as a metric of unbiasedness. The updated results can be found in the Author Rebuttal PDF (Tab. 1).
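The exponential failure of rejection sampling mentioned above can be sketched numerically. This is a toy illustration only: the per-coordinate threshold 1.28 (tail probability roughly 0.1) and the sample count are assumptions for the sketch, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Acceptance rate of naive rejection sampling decays exponentially with the
# number of coordinates that must each satisfy an ~0.1-probability condition.
rates = {}
for d in (1, 3, 5, 9):
    u = rng.normal(size=(10**5, d))
    rates[d] = np.all(u > 1.28, axis=1).mean()  # P(N(0,1) > 1.28) ~ 0.10
    print(d, rates[d])                          # roughly 0.1 ** d
```

At $d=9$, the expected acceptance rate is on the order of $10^{-9}$, so with $10^5$ draws essentially no valid sample survives, matching the point that RS-based ground truth becomes unattainable in high dimensions.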
---
Rebuttal Comment 1.1:
Title: Re: Author rebuttal
Comment: Many thanks to the authors for their detailed replies to my comments. I will be revising my score upward in light of these clarifications.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for reading our rebuttal and updating the score accordingly. If there are any further questions, we would be happy to provide additional information. | Summary: This paper introduces an importance sampling method for efficient estimation of counterfactual expressions within general settings. It transforms the variance minimization problem into a conditional distribution learning issue, allowing integration with existing modeling approaches. The paper also explores the impact of incorporating structural prior knowledge, i.e. Markov boundaries, and applies the method to identifiable proxy SCMs, proving the unbiasedness of estimates and illustrating the method's practical applicability.
Strengths: 1. The paper is well-structured, with many subsections and bullet points summarizing paragraphs.
2. The approach proposed in this paper has clear intuition and is easy to implement.
Weaknesses: 1. Contributions are not disentangled well. All three points involve experimental or empirical findings.
2. Some results of the ablation study are abnormal. First, the results show that the approach proposed is not robust. Under setting SIMPSON-NLIN and M, the inclusion of Markov Boundary Mask significantly improves ESP. However, under setting LARGEBD-NLIN and NAPKIN, the Markov Boundary Mask harms the performance, especially with backbone SOSPF. Second, the ESPs under setting LARGEBD-NLIN with backbone SOSPF are almost 0, even when $|s|=1$, which is hard to explain if including Markov Boundary Mask is effective. Third, the variance under setting LARGEBD-NLIN with backbone NICE is extremely large.
3. Insufficient explanation or legend for figures, making it difficult for readers to understand. For example, in Figure 1, $\mathcal{M}$ with a subscripted hammer is not explained. In Figure 3, the legend does not indicate what different colors mean.
4. Hard to follow. Some terminologies need explanation or reference. For example, in line 234, *faithfulness* is not defined.
Technical Quality: 3
Clarity: 2
Questions for Authors: I would like to know whether Theorem 2 and 3 are novel. Have the papers cited (like 81, 3, 94, 111) and other papers proposed methods to obtain Markov boundaries? If so, what is the improvement of the method proposed in this paper?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately discussed the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for highlighting the issues and providing valuable feedback to help us improve the manuscript. Below are our responses to the identified weaknesses and questions:
> **Weakness 1**: Contributions are not disentangled well. All three points involve experimental or empirical findings.
**Response**: We thank the reviewer for the suggestion. We will reorganize the presentation of our contributions by placing the experimental and empirical conclusions in the third point, while the first two points will separately address the two main contributions of our work: i) the introduction of EXOM and its theoretical guarantees, and ii) the injection of Markov boundaries into neural networks for conditioning.
> **Weakness 2**: Some results of the ablation study are abnormal. First, the results show that the approach proposed is not robust. Second, it is hard to explain if including Markov Boundary Mask is effective under LARGEBD-NLIN with backbone SOSPF. Third, the variance under setting LARGEBD-NLIN with backbone NICE is extremely large.
**Response**: We have found that the abnormality observed is not due to a lack of robustness in the method but rather is caused by the width of the hidden layers in the neural network. Specifically, the injection of Markov boundaries relies on masking connections in the neural network, which means that at the same width, the injection of Markov boundaries results in fewer weights being used during inference. In experiments where the performance was normal, the injection of Markov boundaries still exhibited superiority even with fewer weights. Moreover, as the Markov boundaries become smaller (as seen with NAPKIN and LARGEBD-NLIN), the masks become sparser, resulting in even fewer weights being involved in inference. When the number of parameters of the conditional model to be inferred is very large (e.g., SOSPF), the neural network's representational capacity becomes significantly insufficient, leading to markedly poor performance.
In the previous ablation studies, to ensure fairness, we used a hidden layer width of 64 across all settings, but this was inadequate for SOSPF. We have now increased the width to 256, and the result has reversed, with the experimental results on SOSPF showing consistency with results from other settings, as detailed in the Author Rebuttal PDF (Fig. 1).
In the follow-up, we will also update the relevant results in the appendix. We will include results in App. C.7 and App. C.8 with a hidden layer width of 256, then add a discussion about the recommended hyperparameter settings and their theoretical rationale in App. C.4.
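The capacity effect described above can be sketched with a back-of-the-envelope count of effective weights. The sizes below (45 exogenous dimensions, a Markov boundary of size 5, hidden widths 64 and 256) are illustrative assumptions echoing the LARGEBD-NLIN discussion, not the actual implementation.

```python
import numpy as np

def masked_layer_weights(width, mask):
    """Effective weight count of a hidden layer whose input connections are
    pruned by a binary Markov-boundary mask over the input features."""
    return width * int(mask.sum())

# Hypothetical sizes: 45 input dimensions, sparse Markov boundary of size 5.
full_mask = np.ones(45, dtype=int)
sparse_mask = np.zeros(45, dtype=int)
sparse_mask[:5] = 1

print(masked_layer_weights(64, full_mask))     # 2880: all inputs connected
print(masked_layer_weights(64, sparse_mask))   # 320: sparse mask starves capacity
print(masked_layer_weights(256, sparse_mask))  # 1280: widening restores capacity
```

This is why, at a fixed width of 64, a sparser mask can leave too few weights for a parameter-heavy backbone, and increasing the width to 256 resolves the anomaly.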
> **Weakness 3**: Insufficient explanation or legend for figures, making it difficult for readers to understand. For example, in Figure 1, $\mathcal{M}$ with a subscripted hammer is not explained. In Figure 3, the legend does not indicate what different colors mean.
**Response**: In Fig. 1, $\mathcal{M}$ with a subscripted hammer represents the SCM under intervention, i.e., the submodel. In Fig. 3, the different colors indicate different submodels. We will address faithfulness after the d-separation section, where faithfulness refers to the conditional independences in the probability distribution that correspond to d-separations in the DAG.
> **Weakness 4**: Hard to follow. Some terminologies need explanation or reference. For example, in line 234, faithfulness is not defined.
**Response**: We apologize for the confusion caused by the omission of the conceptual explanation in the main text. We have explained the concept of faithfulness in the appendix (line 1115, 1116). We will revise the manuscript to include the discussion of faithfulness in the main text, following the section on d-separation. This is because faithfulness and d-separation are highly related; it signifies that every conditional independence present in the distribution is entailed by the d-separation in the DAG.
Regarding the issue of lacking explanations for legends and terminology, we will thoroughly review the entire manuscript to identify any figures, concepts, or terms that may be missing explanations. We will ensure that these are properly addressed and clarified in the updated version.
> **Question 1**: Whether Theorem 2 and 3 are novel? Have the papers cited (like 81, 3, 94, 111) and other papers proposed methods to obtain Markov boundaries? If so, what is the improvement of the method proposed in this paper?
**Response**: In fact, the cited works and the proposed method are used for completely different tasks. The cited works are used to learn Markov boundaries among observable variables from data, while the proposed method is specifically designed to derive the counterfactual Markov boundaries (Def. 2) of exogenous variables from an augmented graph. The cited works cannot be directly applied to learn the counterfactual Markov boundary unless the exogenous variables are also observed in the data.
---
Rebuttal Comment 1.1:
Comment: Thanks; the rebuttal by the authors addresses my concerns. I will update my rating accordingly.
---
Reply to Comment 1.1.1:
Comment: We are glad that the rebuttal addressed the reviewer's concerns, and we appreciate the reviewer for updating the rating accordingly. | Summary: This paper presents Exogenous Matching (EXOM), a new importance sampling method for estimating counterfactual probabilities in Structural Causal Models (SCMs). EXOM transforms variance minimization into a conditional distribution learning problem, providing an upper bound on counterfactual estimator variance as per Theorem 1. It outperforms existing methods across various SCM settings and integrates well with identifiable neural proxy SCMs for practical applications. By incorporating prior knowledge through Markov boundaries, EXOM further enhances performance, demonstrating its potential as an efficient tool for counterfactual estimation in diverse scenarios.
Strengths: 1. EXOM provides a tractable and efficient approach for counterfactual estimation in general settings, including scenarios with discrete or continuous exogenous variables and various observations and interventions. This flexibility makes it applicable to a wide range of causal inference problems.
2. The method is built on solid theoretical grounds, with the authors deriving an optimizable variance upper bound for counterfactual estimators.
3. The authors incorporate structural prior knowledge, specifically Markov boundaries, into the neural networks used for parameter optimization. They empirically validate the effectiveness of this approach across various scenarios.
4. EXOM consistently outperforms other importance sampling methods in various SCM settings, as demonstrated by the experimental results. Its compatibility with identifiable neural proxy SCMs further enhances its practical applicability.
Weaknesses: 1. Theorem 1 relies on the assumption that the density ratio $q(\mathbf{u}|\mathbf{y}_ {\ast})/q(\mathbf{u}|\mathbf{y}_ {\ast}^\prime)\leq \kappa$ holds for all $\mathbf{u} \in \Omega_{\mathbf{u}}$ and $y_{\ast}$, $y_ {\ast} ^\prime \in \Omega_{\mathbf{Y}_{\ast}}$. This assumption may be overly stringent, as probability measures with infinite support sets might easily violate it. Could the authors elaborate on this assumption and provide examples of distributions that satisfy it?
2. In the Sampling and Optimization section, the distribution of the exogenous variable $\mathbf{U}$ is assumed to be known. However, in practical scenarios, $\mathbf{U}$ is often unknown, necessitating additional efforts to estimate $\mathbf{P_U}$ [A]. Could the authors provide further clarification on this assumption and discuss potential methods for estimating $\mathbf{P_U}$?
3. I understand the authors only consider models that provide identifiability results. However, it is encouraged to include neural proxy SCM methods based on VAE and DDPM as experimental baselines. While these may lack identifiability guarantees, comparing against them would further illustrate the superiority of the proposed method in relation to current state-of-the-art techniques.
4. While the method shows good performance on the tested SCMs, it's unclear how well it scales to larger, more complex causal models. The experiments are conducted on relatively small SCMs, and scalability to high-dimensional or densely connected causal graphs isn't thoroughly addressed.
[A] Ren, Shaogang, and Xiaoning Qian. "Causal Bayesian Optimization via Exogenous Distribution Learning." *arXiv preprint arXiv:2402.02277* (2024).
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In Table 1, the EXOM method with MAP shows significantly better performance on the SIMPSON-NLIN and NAPKIN datasets compared to EXOM with GMM, whereas the performance on the FAIRNESS-XW dataset is similar for both methods. Could the authors provide further explanation for this discrepancy? Why does the GMM approach underperform in these specific cases, and what factors contribute to the similar performance on FAIRNESS-XW?
2. In the ablation study investigating the impact of injecting Markov boundaries, could the authors please include the performance results of EXOM without Markov boundaries? This comparison would further illustrate the benefit of incorporating Markov boundaries.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors discuss the limitations of their work in Section 6 and Section D.3. Notably, this work does not have a negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their patience in reading our manuscript and for the attention to issues concerning assumptions, generalization, scalability, and experiments. Below are our responses to these concerns:
> **Weakness 1**: Could the authors elaborate on the assumption about density ratio in Theorem 1 and provide examples of distributions that satisfy it?
**Response**: We agree with this concern; this is indeed not a weak assumption, especially for probability measures with infinite support sets. This is because the proof of Thm. 1 relies on the linearity of Lebesgue integration, which requires Lebesgue integrable functions to be bounded. Some alternative formulations with more relaxed assumptions are as follows:
1. **By Sufficient Condition**: If the importance weights satisfy $\kappa^{-\frac{1}{2}} \le p(\mathbf{u}) / q(\mathbf{u} \mid \mathbf{y}_\*) \le \kappa^\frac{1}{2}$ with probability 1, then, almost surely, $q(\mathbf{u} \mid \mathbf{y}'\_\*) / q(\mathbf{u} \mid \mathbf{y}\_\*) \le \kappa$, and hence Thm. 1 holds.
2. **By Concentration Inequality**: If the importance weights satisfy $\kappa^{-\frac{1}{2}} \le p(\mathbf{u}) / q(\mathbf{u} \mid \mathbf{y}\_\*) \le \kappa^\frac{1}{2}$ with probability $\zeta < 1$, then by introducing the assumption that the second moment of the weights $\mathbb{E}\_w^2$ is bounded, and using Thm. 1 from [1], we can derive an inequality of the form $\mathbb{V} \le \widehat{\mathbb{V}} + f(n, \zeta, \kappa, \delta, \mathbb{E}\_w^2)$ with probability $(1-\delta) \cdot \zeta^n$, where $\widehat{\mathbb{V}}$ is computed from $n$ i.i.d. samples $\mathbf{u}^{(1)}, \dots, \mathbf{u}^{(n)}$ with bounded weights. In this case, for $\widehat{\mathbb{V}}$, almost surely $q(\mathbf{u}^{(i)} \mid \mathbf{y}'\_\*) / q(\mathbf{u}^{(i)} \mid \mathbf{y}\_\*) \le \kappa$, so Thm. 1 holds for $\widehat{\mathbb{V}}$, and consequently, it also holds for $\mathbb{V}$ by the inequality relationship.
3. **By Approximation**: Similar to [2], consider only the approximation in bounded regions. If the importance weights satisfy $\kappa^{-\frac{1}{2}} \le p(\mathbf{u}) / q(\mathbf{u} \mid \mathbf{y}\_\*) \le \kappa^\frac{1}{2}$ in $\Omega\_{\mathbf{U}}^+$, let $\mathcal{Y}^\*\_+ = \\{\mathbf{Y}^*(\mathbf{u}) \mid \mathbf{u} \in \Omega\_{\mathbf{U}}^+ \\} \cap \mathcal{Y}\_\*$, and if $P(\mathcal{Y}\_\*) \approx P(\mathcal{Y}^\*\_+)$, then the theorem holds for $\mathcal{Y}^\*\_+$ and approximately for $\mathcal{Y}\_\*$.
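As a toy illustration of estimation with bounded importance weights (in the spirit of relaxation 1 above), the sketch below estimates a rare probability under a Gaussian target with a shifted proposal. The target $N(0,1)$, proposal $N(3,1)$, and the value of $\kappa$ are all assumptions for the sketch, not the EXOM estimator itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p = N(0,1), proposal q = N(3,1) shifted toward the rare event U > 3.
p_pdf = lambda u: np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
q_pdf = lambda u: np.exp(-(u - 3.0)**2 / 2) / np.sqrt(2 * np.pi)

u = rng.normal(3.0, 1.0, size=100_000)
w = p_pdf(u) / q_pdf(u)                          # importance weights
kappa = 1e12
w_bounded = np.clip(w, kappa**-0.5, kappa**0.5)  # enforce the bounded-weight condition

# Importance sampling estimate of the rare probability P_p(U > 3) ~ 1.35e-3
est = np.mean((u > 3.0) * w_bounded)
print(est)
```

With a loose $\kappa$, the clipping rarely binds and the estimate stays close to the true tail probability; a tight $\kappa$ would trade bias for variance, which is the tension the relaxed formulations address.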
> **Weakness 2**: Could the authors provide further clarification on the assumption that $P_\mathbf{U}$ is known and discuss potential methods for estimating $P_\mathbf{U}$?
**Response**: We agree with the viewpoint that obtaining the exogenous distribution $P^*_\mathbf{U}$ of the true SCM $\mathcal{M}^*$ is challenging. Literature [A] addresses this by introducing additional parameter assumptions for the causal mechanism $\mathcal{F}$ to recover the exogenous distribution. However, for counterfactual tasks, recovering the true exogenous distribution may not be necessary, since identifiable counterfactual results can still be provided by neural proxy SCMs. Since neural proxy SCMs are deep generative models, their exogenous (or latent) distribution $P_\mathbf{U}$ is typically either pre-specified or trainable, so the assumption that $P_\mathbf{U}$ is known is weak in this context.
> **Weakness 3**: It is encouraged to include neural proxy SCM methods based on VAE and DDPM to further illustrate the superiority.
**Response**: We thank the reviewer for the suggestion. Although in our experiments we only selected models that provide identifiability results as baselines, theoretically, the proposed method is not limited to the type of SCMs. We will consider applying it to neural proxy SCMs with VAE or DDPM as the backbone in future work for larger, more complex tasks.
> **Weakness 4**: While the method shows good performance on the tested SCMs, it's unclear how well it scales to larger, more complex causal models.
**Response**: In principle, we do not impose any restrictions on the size, complexity, or dimensionality of variables for SCM. However, in practice, it is well-known that computational complexity is a challenge when using normalizing flows for high-dimensional data. Therefore, scalability is an aspect worth investigating in future research.
> **Question 1**: Could the authors provide further explanation for the discrepancy of performance on the SIMPSON-NLIN and NAPKIN datasets? Why does the GMM approach underperform in these specific cases, and what factors contribute to the similar performance on FAIRNESS-XW?
**Response**: The FAIRNESS dataset is discrete and finite, whereas the SIMPSON-NLIN dataset is continuous and diffeomorphic. Thus, the sampling space of the former is smaller, which is why it did not show a significant difference in performance. The performance of GMM is as expected since its expressive capacity is weaker than that of MAF. We will include a brief discussion of these results in the main text to facilitate reader understanding.
> **Question 2**: Could the authors please include the performance results of EXOM without Markov boundaries in the ablation study?
**Response**: We have included results for "EXOM without Markov boundaries" (i.e., the blue bars) in both Fig. 4 and App. C.8 (Fig. 9), and compared them with "EXOM with Markov boundaries" (i.e., the orange bars). These results demonstrate the advantages of incorporating Markov boundaries. We will further improve the display of the legends in the figures to enhance readability.
----
[1] [Cortes, C., Mansour, Y., & Mohri, M. (2010). Learning Bounds for Importance Weighting.](https://papers.nips.cc/paper_files/paper/2010/hash/59c33016884a62116be975a9bb8257e3-Abstract.html)\
[2] [Tengchao Yu, Linjun Lu, & Jinglai Li. (2019). A weight-bounded importance sampling method for variance reduction.](https://arxiv.org/abs/1811.09436)
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the detailed responses. Regarding Weakness 1, in addition to the relaxed assumptions, could the authors kindly provide some concrete examples of distributions with infinite support sets that satisfy these assumptions?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the attention to this weakness. In fact, without concentration inequalities or approximations, it is challenging to find concrete examples that fully satisfy this assumption on infinite support sets. Even the density ratios between Gaussian distributions often violate boundedness asymptotically, as illustrated in [1] (Example 4.1). The best example we could find is from [3] (Eq. 11), which provides upper and lower bounds between the binomial distribution and the Poisson distribution (with infinite support) under certain conditions.
Of course, once concentration inequalities or approximations are employed (as in relaxed formulations 2 and 3), the weakness regarding infinite support becomes less acute, provided we prove or assume that the probability measure of $P_\mathbf{U}$ on the unbounded portion is zero or very close to zero.
----
[3] [Dümbgen, L., Samworth, R., & Wellner, J. (2021). Bounding distributional errors via density ratios.](https://arxiv.org/abs/1905.03009) | Summary: Based on the importance sampling methods, the authors propose an exogenous matching approach to estimate counterfactual probability in general settings. They derive the variance upper bound of counterfactual estimators and transform it into the conditional learning problem. They also employ the Markov boundaries information in the inference to improve the learning performances further. Extensive experiments validate the superiority and practicality of their method.
Strengths: - This paper is clearly and well written.
- The authors give a theoretical analysis of their estimator, its log-variance upper bound in the general settings, and the counterfactual Markov boundary, and they also perform extensive experiments to demonstrate effectiveness in several cases: with two types of stochastic counterfactual processes, with three categories of fully specified SCMs, etc.
Weaknesses: - This paper makes it clear in lines 140-144 about the assumptions needed for the proposed method. Regarding assumption ii), I think it is not mild, and I am wondering if the proposed method would be sensitive to the specified distribution $P_{\textbf{U}}$ of $\textbf{U}$. The authors might have performed such experiments, but it is not quite clear.
- Are the Markov boundaries learned from the observational data via the d-separation, or they are given prior? The authors claimed that such Markov boundaries are structural prior knowledge in line 223, whereas they gave Theorem 3 to demonstrate how to obtain them.
- It is suggested to offer the whole procedure or pseudo code of their proposed algorithm somewhere.
- In line 239, “augmentied” might be a typo error.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see my questions in Weaknesses.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer’s positive evaluation and valuable feedback. Below are our responses to the questions raised in the Weaknesses section:
> **Weakness 1**: Regarding assumption ii), if the proposed method would be sensitive to the specified distribution $P_\mathbf{U}$ of $\mathbf{U}$?
**Response**: The proposed method is theoretically insensitive to the exogenous distribution $P_\mathbf{U}$, as we do not impose any restrictions on the parameters or specific form of $P_\mathbf{U}$. However, we acknowledge that in our experiments, we only considered specific exogenous distributions. This is because, in practice, neural proxy SCMs, as deep generative models, typically use a form-specific and relatively simple distribution for latent variables to facilitate inference.
> **Weakness 2**: Are the Markov boundaries learned from the observational data via the d-separation, or they are given prior?
**Response**: We would like to emphasize that the counterfactual Markov boundary (Def. 2) for counterfactual estimation tasks serves as prior knowledge, which is not learned from data but is known before estimation. The Markov boundary here refers to the counterfactual Markov boundary of an exogenous variable, which can be directly derived from other prior knowledge (i.e., the augmented graph) as described in Thm. 2 and Thm. 3. Specifically, if the augmented graph is provided, we can use graph algorithms (Thm. 3) to effectively derive the Markov boundary of any exogenous variable. For learning the counterfactual Markov boundary from data, some methods cited in the manuscript (e.g., 3, 94, 111) cannot be directly applied unless the data includes observations of the exogenous variables. We thank the reviewer for the question and will refine the text to clarify this information.
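For intuition, deriving a Markov boundary purely from graph structure (parents, children, and co-parents) can be sketched as follows. This is a plain-DAG illustration of graph-based derivation, not the paper's Thm. 3 algorithm for counterfactual Markov boundaries on augmented graphs; the tiny DAG is a made-up example.

```python
def markov_blanket(node, parents):
    """Markov blanket of `node` in a DAG given as {child: set_of_parents}:
    its parents, its children, and its children's other parents (spouses)."""
    children = {c for c, ps in parents.items() if node in ps}
    spouses = {p for c in children for p in parents.get(c, set())} - {node}
    return (parents.get(node, set()) | children | spouses) - {node}

# Tiny illustrative DAG with exogenous roots: U1 -> X -> Y <- U2
dag = {"X": {"U1"}, "Y": {"X", "U2"}}
print(markov_blanket("U1", dag))  # {'X'}
print(markov_blanket("U2", dag))  # {'X', 'Y'}
```

Such a purely structural computation needs only the graph as prior knowledge, which is the sense in which no data-driven Markov boundary learning is required.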
> **Weakness 3, 4**: It is suggested to offer the whole procedure or pseudo code of their proposed algorithm somewhere; In line 239, “augmentied” might be a typo error.
**Response**: We will carefully correct the typo errors in the manuscript and, in response to the reviewer's feedback, we will include pseudo code describing the proposed algorithm in the updated version.
Rebuttal: We thank all the reviewers for their valuable feedback, which will help us improve the manuscript. Here, we summarize and address some common concerns, then list the changes to be made in the next updated manuscript.
### **Discussion on Assumptions**
- **Assumptions Required for Exogenous Matching**
This refers to the necessary inputs for the Exogenous Matching algorithm. As indicated in lines 140-144 of the manuscript and as mentioned by reviewer 8MDB, these assumptions consist of two parts:
1. An exogenous distribution $P_\mathbf{U}$ that is both sampleable and has computable density or probability.
We do not impose any restrictions on its parameters or form. Please refer to our response to reviewer 8MDB for more details.
Moreover, in practice, assuming the exogenous distribution is known is a weak assumption for using neural proxy SCMs for counterfactual estimation. See our response to reviewer aexG for further clarification.
2. An evaluable causal mechanism $\mathcal{F}$.
Here, "evaluable" means that given an input $\mathbf{u} \in \Omega_\mathbf{U}$, we can obtain the output $\mathcal{F}(\mathbf{u})$ without needing to know the specific expressions of all structural equations $f \in \mathcal{F}$. This leads to the concept of "fully specified," which is elaborated in our response to reviewer rY6N.
These assumptions are weak for proxy SCMs; therefore, we have emphasized in the limitations section that our method does not directly estimate from data in practice but requires first training a proxy SCM, as demonstrated in our experiment "Counterfactual Estimation on Proxy SCMs."
- **Assumptions in Theorem 1**
We acknowledge that the original description in the main text is not weak. To address this, we have introduced 3 more relaxed formulations. Please refer to our response to reviewer aexG for details. We will briefly incorporate these relaxations into the main text and provide their proofs in the appendix.
### **Clarification for Injecting Markov Boundaries**
In response to questions from reviewers 8MDB, 6eu3, and rY6N, we need to clarify some fundamental facts about "Injecting Markov Boundaries". Firstly, the "Markov Boundaries" here actually refer to counterfactual Markov Boundaries (Def. 2). Additionally, methods for learning Markov Boundaries from observational data cannot be directly applied to learning counterfactual Markov Boundaries. Therefore, we need Thm. 2 and Thm. 3 to derive counterfactual Markov Boundaries from the augmented graph. We will supplement this explanation in the updated version.
### **Discussion on Experiments**
- **Ablation**
According to the feedback from reviewer 6eu3, the ablation experiments related to the Markov boundary exhibited anomalies, which could lead readers to perceive our method as not robust. We identified that these anomalies stem from the width of the hidden layers, which is a practical engineering issue and can be reasonably explained. For a detailed explanation, please refer to our response to reviewer 6eu3.
We have updated the experiments accordingly, and the corresponding part of Fig. 4 in the main text will be revised to Fig. 1 in the Author Rebuttal PDF. We will also continue to refine the relevant experiments in the appendix, adding results with increased hidden layer widths, and include a discussion on hyperparameter selection and its impact.
- **Counterfactual Estimation on Proxy SCMs**
Based on reviewer rY6N's suggestion, a more intuitive metric should be the deviation from the ground truth. We have elaborated on the intuition and rationale for choosing ESP and LL as metrics, which is due to the difficulty of obtaining the ground truth in high-dimensional settings in our experimental setup. For more details, please refer to our response to reviewer rY6N.
Of course, for low-dimensional cases, such as in counterfactual effect estimation experiments, the ground truth is easily accessible. Therefore, we have updated the metrics for these experiments, and Tab. 2 in the main text will be revised to Tab. 1 in the Author Rebuttal PDF.
### **Discussion on Presentation**
Due to page limitations, we did indeed move some content to the appendix and inadvertently omitted some key information from the main text. We will carefully review and correct any typos, complete the missing explanations and results in the main text, and reorganize key sections with additional transitions to improve readability.
### **List of Changes**
Based on the reviewer’s suggestions, we will carry out the following changes:
- **Regarding the presentation in the main text:**
1. Correct typo errors, reorganize the description of contributions, and ensure that legends clearly describe the elements in the figures. For example, eliminate the ambiguity caused by the missing legends in Fig. 1 and 2.
2. Assess the inclusion of concepts, methods, and experimental results in the main text, and consider transferring relevant content from the appendix to the main text where appropriate, such as the explanation of the concept of faithfulness.
3. Reorganize the presentation of some key sections, using smoother transitions or more intuitive ways to connect different parts to improve readability.
- **Regarding further clarification of assumptions:**
1. Add additional discussion about the assumptions required for Exogenous Matching.
2. Clarify the motivation and significance of (counterfactual) Markov boundaries, distinguish the proposed method from data-driven learning approaches.
3. Add some relaxations to the assumptions in Thm. 1.
- **Regarding modifications to experimental figures:**
1. Replace the corresponding part of Fig. 4 in the main text with Fig. 1 from the PDF.
2. Replace Tab. 2 in the main text with Tab. 1 from the PDF.
Pdf: /pdf/82ff03b1642241551f153e10c236f14757442f89.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
$SE(3)$ Equivariant Ray Embeddings for Implicit Multi-View Depth Estimation | Accept (poster) | Summary: This paper introduces an SE(3)-equivariant multi-view depth estimation model based on the Perceiver IO framework. Specifically, each feature ray is treated as a token, and the feature vector of each ray is concatenated with an equivariant positional embedding. To achieve equivariance, the authors propose using spherical harmonics to encode the ray poses. Ray features are treated as type-0 (rotation-invariant) irreps. These equivariant ray encodings are processed through several equivariant self-attention layers and aggregated into global features and a canonical reference frame. The camera pose encoding is first inverse-transformed into this inferred canonical frame, resulting in an SE(3)-invariant query. A series of cross-attention layers between the encoded global features and the query features is then used to predict pixel colors. The authors demonstrate the effectiveness of the proposed approach on the ScanNet and DeMoN datasets.
Strengths: 1. To the best of the reviewer's knowledge, this is the first paper to address SE(3)-equivariant positional embedding in the transformer/Perceiver IO framework for multi-view applications. While Fuchs et al. [1] and Liao et al. [2,3] have addressed SE(3)-equivariant attention for GNNs, their methods are more complex and less computationally efficient than the proposed approach.
2. The proposed method shows competitive benchmark results compared to state-of-the-art methods across multiple datasets. The ablation study convincingly demonstrates the significance of the equivariant embedding.
3. In the appendix, the authors put significant effort to make the concepts accessible to beginners, including detailed visualizations for how the computations are done. This contrasts with typical papers on SE(3)-equivariance, which often include difficult equations that can be a barrier to entry for newcomers.
[1] Fuchs et al., “SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks,” NeurIPS’20
[2] Liao et al., “Equiformer: Equivariant graph attention transformer for 3d atomistic graphs,” ICLR’23
[3] Liao et al., “Equiformerv2: Improved equivariant transformer for scaling to higher-degree representations,” ICLR’24
Weaknesses: 1. The authors introduced a new equivariant nonlinearity inspired by [4], but the motivation and benefits are not clearly demonstrated. What is the distinctive advantage of this new nonlinearity, compared to existing SE(3)-equivariant nonlinearities?
2. The number of parameters was not fixed during the ablation experiments regarding the maximum spherical harmonics degrees. A recent study [5] claimed that the reported increase in performance due to incorporating higher-type irreps in various works could actually be due to the increased number of parameters. It is essential to control the number of parameters to be similar between the ablated models and the proposed model.
[4] Deng et al., "Vector Neurons: A General Framework for SO(3)-Equivariant Networks,” ICCV’21
[5] Wang et al., “Rethinking the Benefits of Steerable Features in 3D Equivariant Graph Neural Networks,” ICLR’24
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. According to the equations in Appendix F, different types of irreps do not mix in the self-attention layer. They also do not mix in the proposed nonlinearities in Appendix F. It seems like in the proposed method, each of the irreps (except for type-0) can only indirectly modulate other irreps of different types via attention. Am I correct?
2. Subtracting the mean of the center is not stable under the addition or removal of camera points. Is it possible to use relative positional encoding, similar to rotary embedding, to achieve translational equivariance without relying on centroid subtraction?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. Spherical harmonics can only encode the orientation of $t_i-\bar{t}$ and not the length $|t_i-\bar{t}|$. Therefore, typical SE(3)-equivariant networks address this by incorporating additional length encoding. However, in this paper, the distance information $|t_i-\bar{t}|$ is discarded.
2. Using irreps features inevitably introduces a band-limit. Increasing this band-limit is difficult because the feature dimension increases quadratically, which is also discussed by the authors.
3. The authors also mentioned that higher-degree spherical harmonics caused instability in training. However, this might be due to the choice of equivariant nonlinearities. Liao et al. [3] reported that certain nonlinearities cause instability in training.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *W1. The authors introduced a new equivariant nonlinearity inspired by [4], but the motivation and benefits are not clearly demonstrated. What is the distinctive advantage of this new nonlinearity, compared to existing SE(3)-equivariant nonlinearities?*
Unlike norm and gate nonlinearities, ours can change the direction of the feature by reducing or keeping the projection of the tensor onto an equivariant subspace. Meanwhile, compared with nonlinearities that change feature direction by applying Fourier and inverse Fourier transforms [1, 2], ours is more computationally efficient.
[1] Weiler and Cesa, "General E(2)-equivariant Steerable CNNs." NeurIPS 2019
[2] Poulenard and Guibas, "A Functional Approach to Rotation Equivariant Non-linearities for Tensor Field Networks." CVPR 2021
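For concreteness, a minimal sketch (toy 3-vectors in pure Python, with a fixed rather than learned direction `d`; an illustration in the spirit of Vector Neurons [4], not the paper's exact layer) of a direction-changing equivariant nonlinearity:

```python
import math

def rotate_z(v, theta):
    # rotate a 3-vector about the z-axis
    c, s = math.cos(theta), math.sin(theta)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

def direction_nonlinearity(v, d):
    """Keep v when its projection onto the (in practice, learned) direction d
    is non-negative; otherwise remove that projection. Rotating v and d
    together commutes with this map, so it is equivariant, yet on the
    negative branch it genuinely changes the direction of the feature."""
    dot = sum(a * b for a, b in zip(v, d))
    if dot >= 0.0:
        return tuple(v)
    dn2 = sum(a * a for a in d)
    return tuple(a - (dot / dn2) * b for a, b in zip(v, d))
```

Norm and gate nonlinearities only rescale the feature vector; the negative branch here rotates the feature away from `d`, which is the distinctive behavior described in the answer above.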
*W2. The number of parameters was not fixed during the ablation experiments regarding the maximum spherical harmonics degrees. A recent study [5] claimed that the reported increase in performance due to incorporating higher-type irreps in various works could actually be due to the increased number of parameters. It is essential to control the number of parameters to be similar between the ablated models and the proposed model.*
Thank you for the great suggestion. We agree that this is an essential variable to control in these experiments. We therefore include additional ablation experiments on the ScanNet benchmark in which the parameter counts of the ablated models (those with lower maximum spherical harmonic orders) are increased to match that of our model. These were conducted using a short training schedule of 100K training steps, and the results are shown in Table 1 of the rebuttal PDF. We observe a similar trend as reported in our submission (Table 3), showing that these improvements do not come from the increased model complexity. We will include these additional results in the camera-ready version of the paper.
*Q1. According to the equations in Appendix F, different types of irreps do not mix in the self-attention layer. They also do not mix in the proposed nonlinearities in Appendix F. It seems like, in the proposed method, each of the irreps (except for type-0) can only indirectly modulate other irreps of different types via attention. Am I correct?*
You are correct. When generating key, query, and value features, we do not mix different feature types. While calculating the attention matrix, as shown in Appendix F, we use all types of features, which makes them indirectly modulate other irreps of different types.
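A toy sketch of the structure described above (hypothetical shapes and values, not the paper's layer): the attention logits pool inner products over all irrep types, each an invariant scalar, so every type influences the shared mixing weights, while the value vectors of different types are never linearly combined with one another.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def typewise_attention(queries, keys, values):
    """Toy attention over tokens whose features are dicts {irrep_type: vector}.
    Logits sum inner products over ALL types, so higher-type irreps modulate
    the mixing weights of every type, but each type's values are only ever
    averaged with values of the same type."""
    out = []
    for q in queries:
        logits = [sum(dot(q[t], k[t]) for t in q) for k in keys]
        w = softmax(logits)
        out.append({
            t: [sum(wi * v[t][i] for wi, v in zip(w, values))
                for i in range(len(q[t]))]
            for t in q
        })
    return out
```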
*Q2. Subtracting the mean of the center is not stable under the addition or removal of camera points. Is it possible to use relative positional encoding, similar to rotary embedding, to achieve translational equivariance without relying on centroid subtraction?*
We thank the reviewer for pointing that out. When there are two cameras (which is the primary setting explored in our experiments), the relative position is exactly twice our current translation (the translation after subtracting the centroid). We use spherical harmonics for our current translational positional embedding, which is equivariant to $SO(3)$, while rotary embedding is not. For multiple cameras, introducing a relative positional encoding would be a good idea; however, this approach introduces the problem of permutation equivariance with respect to the camera order in the Perceiver IO architecture (the index of the camera impacts the index of the relative translation). We believe there is a more elegant solution to the translation equivariance problem, and hope to look into it in future work.
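A minimal sketch of the centroid subtraction discussed above (pure Python, 3-vectors): subtracting the mean translation makes the encoded positions invariant to a global shift, and with exactly two cameras the relative position is twice the centered translation.

```python
def center_translations(translations):
    """Subtract the centroid of the camera translations; the result is
    unchanged if all cameras are shifted by the same global offset."""
    n = len(translations)
    mean = tuple(sum(t[i] for t in translations) / n for i in range(3))
    return [tuple(t[i] - mean[i] for i in range(3)) for t in translations]
```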
*L1. Spherical harmonics can only encode the orientation of $t_i-\bar{t}$ and not the length $|t_i-\bar{t}|$. Therefore, typical SE(3)-equivariant networks address this by incorporating additional length encoding. However, in this paper, the distance information $|t_i-\bar{t}|$ is discarded.*
As stated in Section 3.3.1, we incorporate an additional radial component $||r||^l$ into the original “order-l” spherical harmonics of the corresponding degree. Despite this adaptation, these functions retain their fundamental characteristics and are still referred to as spherical harmonics. We provide an introduction and discussion of spherical harmonics in Appendix A.4, as well as the design of incorporating the invariant length.
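To make the radial adaptation concrete: at degree $l=1$, multiplying the unit-sphere harmonics by $||r||^1$ yields, up to a constant, the components of $r$ themselves, a linear and hence rotation-equivariant encoding that retains the length. A pure-Python sketch (the real-basis ordering here is an illustrative choice; conventions may differ from the paper's):

```python
import math

K1 = math.sqrt(3.0 / (4.0 * math.pi))  # degree-1 normalization constant

def rotate_z(v, theta):
    c, s = math.cos(theta), math.sin(theta)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

def solid_harmonic_l1(r):
    """Degree-1 real spherical harmonics of r/|r| scaled by |r|^1: up to the
    constant K1 this is just a reordering of r's components, i.e. a linear,
    rotation-equivariant encoding that still carries the length |r|."""
    x, y, z = r
    return (K1 * y, K1 * z, K1 * x)  # real-basis ordering (m = -1, 0, +1)
```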
*L2. Using irreps features inevitably introduces a band-limit. Increasing this band-limit is difficult because the feature dimension increases quadratically, which is also discussed by the authors.*
Thank you for the insightful comment. Yes, the dimension of the spherical harmonics is a limitation of our approach. That is why we address it in the decoder by predicting an equivariant frame followed by a conventional decoder, which enables higher-frequency embeddings for the query.
*L3. The authors also mentioned that higher-degree spherical harmonics caused instability in training. However, this might be due to the choice of equivariant nonlinearities. Liao et al. [3] reported that certain nonlinearities cause instability in training.*
Thank you for the insightful remark. We have indeed observed the same behavior stated in [3] about the instability of the $S^2$ activation, which treats the features as Fourier coefficients on $S^2$, processing scalars and higher-order features together. They addressed this behavior by separating the scalars and higher-order features. On the other hand, our approach separates different orders of features and processes them independently (as mentioned in Q1). Replacing our nonlinearity with the norm nonlinearity did not stabilize training, suggesting that this instability is not solely due to the choice of nonlinearity.
---
Rebuttal Comment 1.1:
Title: Major concerns addressed, raised the score accordingly.
Comment: Thank you for the clarification. I have raised my score as the major concerns have been addressed.
**[Summary of Concerns Resolved by the Rebuttal]**
1. Lack of motivation for the new nonlinearity => Addressed with explanations of computational efficiency and training stability.
2. Number of parameters not controlled in the ablation study => New experiments in the rebuttal paper have controlled the number of parameters. | Summary: This paper introduces a ray embedding representation with rotational and translational equivariance, integrating the existing Perceiver IO architecture to achieve robust multi-view implicit depth estimation. The paper first utilizes the mean shift and spherical harmonics to achieve translational equivariance, and then builds upon this to use spherical harmonics to achieve a rotationally equivariant representation, ultimately combining to obtain a three-dimensional transformation embedding with equivariance. By further designing equivariant encoders and decoders, the paper realizes robust estimation of depth from new perspectives. Experiments on the ScanNet and DeMoN datasets demonstrate the effectiveness of the proposed method.
Strengths: -The motivation is clear, the algorithm design makes sense, and the experimental results are complete.
Weaknesses: -Ablation study: Since the equivariance consists of two parts, namely translation and rotation, what would be the qualitative and quantitative impact of removing these two parts respectively?
Technical Quality: 3
Clarity: 4
Questions for Authors: -The task setting of implicit depth estimation seems to be very compatible with the existing sparse view NeRF/GS methods. Although the focus of the two is different, with NeRF/GS focusing more on rendering images, while DeFiNe and EPIO mainly focus on geometry, there is a possibility of mutual exchange between the two. Can you report the comparative results with such methods? For example, ENeRF.
Lin, et al. "Efficient Neural Radiance Fields for Interactive Free-viewpoint Video", SIGGRAPH-ASIA 2022.
-DeFiNe can synthesize novel view images, can EPIO do the same? What are the results like?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: see questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *W1. Since the equivariance consists of two parts, namely translation and rotation, what would be the quantitative impact of removing these two parts respectively?*
Thank you for the valuable suggestion. We conducted an ablation study where we individually integrated only rotation equivariance and translation equivariance into the model. These were conducted using a short training schedule of 100K training steps due to the time limit. The results, displayed in Table 1 of the rebuttal PDF, show that models without translation or rotation equivariance perform worse than our complete model.
*Q1/2. The task setting of implicit depth estimation seems to be very compatible with the existing sparse view NeRF/GS methods. DeFiNe can synthesize novel view images, can EPIO do the same?*
Thank you for pointing that out; this is a great direction that we would like to explore in follow-up works. EPIO and DeFiNe were designed for “single-query” predictions, without relying on volumetric rendering (like NeRF) or explicit 3D structures (like 3DGS). Single-query novel view synthesis is challenging, especially in the generalizable setting, and DeFiNe itself does not report results on this task, but rather shows that by jointly learning novel view synthesis and depth estimation it can marginally improve depth estimation. Our proposed EPIO architecture could be extended to novel view synthesis, especially because it accepts traditional non-equivariant decoders (i.e., from DeFiNe), and we indeed show results on a toy novel depth synthesis experiment in Appendix O.2. Unfortunately, extending EPIO for an additional task and retraining the model is impractical in the rebuttal period, but we will aim to provide initial results (similar to the novel depth synthesis experiments) in the camera-ready version.
Having said that, there is a follow-up work to DeFiNe (DeLiRa) [1] which explores the use of Perceiver IO for volumetric rendering, focusing on novel view synthesis with depth estimation guidance. We believe EPIO could be used in this setting as well, and that would be a very interesting extension.
[1] Guizilini et al. “DeLiRa: Self-Supervised Depth, Light, and Radiance Fields”, ICCV 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response | Summary: This paper presents a SE(3) rotational and translational equivariant variation of Perceive IO for multi-view depth estimation with known camera poses. The authors first encode both the pixel-wise ray direction and the camera translation using spherical harmonics as the position encoding, and then to maintain equivariance under global transformations through the network's forward pass, the authors modify several components, including the linear projection, the latent array construction, and the output decoding. To demonstrate the effectiveness of the proposed method, the authors conducted experiments on several RGBD datasets, including ScanNet, SUN3D, TUM-RGBD, and Scene11, and achieved better performance than existing implicit multi-view depth estimations, such as DeFiNe, and multi-view stereo (MVS) models, such as DPSNet.
Strengths: - The authors introduce the problem well, explaining the importance of equivariance to the task of multi-view depth estimation effectively. They also provide a brief yet sufficient review of existing works, clearly positioning this work within the field.
- The authors have carefully designed several novel equivariant components:
- A SE(3) equivariant positional encoding, where besides rotation, the authors smartly encode camera translation also using spherical harmonics.
- An equivariant linear projection layer where the linear projection is applied to each group of features that corresponds to position embedding derived from the spherical harmonics of a specific order.
- Equivariant latent array construction and the reversal of the rotation from the latent array before being cross-attended to the output queries.
These designs, along with the adoption of existing equivariant components through the Perceive IO pipeline, ensure good performance and can be inspiring for other tasks that require equivariance.
- The experiments are sufficient and demonstrate the equivariance of the output and the overall accuracy.
Weaknesses: The major weakness of this paper lies in its presentation and organization, which makes the paper difficult to read:
- Many important details from Sections 3.4 to 3.6 are placed in the appendix, making the main paper not self-contained. For instance, details in Appendices A.3 and E would be better suited in the main paper.
- Sections 3.4 to 3.6 are organized into fragmented components, where the holistic process of the Perceiver IO is missing. Specifically, the authors should introduce each modification in the order of the Perceiver IO pipeline.
- The descriptions of individual components are also confusing:
- It is better to only briefly discuss components that are equivariant themselves, such as attention, and to discuss only how the input to the attention is made equivariant, such as the latent array in Section 3.5.1. Otherwise, it might be misleading to suggest that there are new equivariant attention modules themselves.
- Why is only rotation sampled and encoded when constructing the latent array in Section 3.5.2 and Figure 4, while the inputs have the encoded camera translation?
- Similarly, in Section 3.6, only reverse rotation is applied to the latents after several self-attention transformation blocks, while the translation is omitted.
- Line 261-262: "which allows us to leverage higher frequency information beyond the dimensional constraints of SPH." The authors indicate that the Fourier encoding is not equivariant but use it for the output query, therefore, the authors should elaborate more on the insight behind this choice and provide sufficient proof to support this design.
- Many illustrations (Figures 8-11) in the appendix are confusing and do not help to clarify the equations.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations have been sufficiently discussed in Appendix Q.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *W1. Many important details from Sections 3.4 to 3.6 are placed in the appendix, making the main paper not self-contained.*
We appreciate and thank the reviewer for the valuable feedback. We decided to organize the paper this way not only due to limited space but also because we want readers to have a clear understanding of our main proposed components, especially those without a prior equivariance background. Thus, we leave standard components (e.g., equivariant linear and normalization layers in appendix A.3) in the appendix, where readers unfamiliar with the field can get a better understanding through visualization and more detailed analysis and proofs.
Having said that, we agree that a more balanced structure might be better, and will move more technical details and visualization from the appendix E to the main paper for the camera-ready version.
*W2. Sections 3.4 to 3.6 are organized into fragmented components, where the holistic process of the Perceiver IO is missing.*
Thank you for the suggestion. We decided to introduce fundamental attention layers in Section 3.4 before discussing the encoder in Section 3.5 and the decoder in Section 3.6 because attention mechanisms are essential components of both modules (encoder and decoder). By first explaining these fundamental operations, we aim to help readers build an understanding going from the basics to the overall more complex structures.
If this approach is causing confusion, we propose to instead first give a brief overview of our holistic method with the illustration in Figure 2, then present the input design of the encoder, followed by an introduction of the encoder, the “canonicalization” of the encoder output to serve as input to the decoder, the design of the decoder, and finally the prediction process. We believe this new order would create a better flow that more closely follows the holistic nature of the Perceiver IO architecture, from input to output.
*W3. It is better to only briefly discuss components that are equivariant itself, such as attention, and discuss only how they made the input to the attention equivariant.*
Thank you for the suggestion. The reason we provide an overview of the attention mechanism and compare our module structure with previous work is to emphasize the equivariant operation of the proposed module and to help readers without a relevant background understand why the conventional operation is not equivariant while ours is.
However, to avoid any potential confusion, we will move more details from Section 3.4 (especially the descriptions of previous works) to the appendix for reference, so the focus is on our contributions. We will also move Appendix I to Section 3.5.1 to provide more information about the input construction for the encoder.
*W4. Why is only rotation sampled and encoded when constructing the latent array in Section 3.5.2 and Figure 4, while the inputs have the encoded camera translation?*
The translations of the two cameras are opposites because we subtract the mean of the translations to achieve translational invariance. As discussed in Section 3.5.2, we need to average the positional encodings (PE) of the two cameras to obtain the latent array. Since averaging the PE of two opposite translations might produce zero values, we discard the translation PE when constructing the latent array. Note that we keep the spherical harmonics PE for translation in the inputs, as a way to preserve translational information.
*W5. In Section 3.6, only reverse rotation is applied to the latents after several self-attention transformation blocks, while the translation is omitted.*
As stated in Section 3.3.2, after we subtract the cameras’ central position, our model becomes translationally invariant. Because the hidden features are now rotationally equivariant and translationally invariant, we apply the reverse rotation to the latent array to make it rotationally invariant. Note that when we provide query cameras for the decoder, their positions also have the central position subtracted, which makes the whole model translationally invariant.
*W6. The authors indicate that the Fourier encoding is not equivariant but use it for the output query.*
As stated in L334-339, a limitation in the use of spherical harmonics (SH) is that dimensionality grows linearly (2x) with increasing orders, which constrains the highest frequency that can be used in the positional encodings. Therefore, we instead learn an **equivariant** frame of reference designed to make the input to the decoder **invariant**, which enables us to use traditional decoders without SH, which can then reach higher frequencies. It is important to note that, even though we use traditional Fourier positional encodings for the decoder, the model is still **equivariant**, since the input to the decoder is **invariant** by design. The theoretical proof that guarantees an invariant input for the decoder is provided in Appendix J. There is also an ablation study in Table 3, where “EquiDecoder” indicates the use of an equivariant decoder with equivariant positional encoding for the query; our design outperforms the equivariant decoder, which supports the effectiveness of this choice.
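A toy planar sketch (angles only, a hypothetical simplification of the SE(3) case) of why a predicted equivariant frame makes the decoder input invariant, so a conventional non-equivariant Fourier encoding remains safe to use:

```python
import math

def canonicalize(query_angle, frame_angle):
    """Express a (toy, planar) query pose in the predicted frame; the
    result is invariant to a global rotation g applied to both inputs."""
    return (query_angle - frame_angle) % (2.0 * math.pi)

def fourier_features(x, num_freqs=3):
    # conventional (non-equivariant) Fourier positional encoding; applied
    # to the canonicalized angle, its output is nevertheless invariant
    feats = []
    for k in range(1, num_freqs + 1):
        feats += [math.sin(k * x), math.cos(k * x)]
    return feats
```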
*W7. Many illustrations (Figures 8-11) in the appendix are confusing.*
Thank you for pointing that out. We have enhanced these figures and included the updated versions in the rebuttal PDF (Figures 1, 2, 3, and 4). Any additional feedback would be highly appreciated, so we can further improve the quality of our submission.
---
Rebuttal Comment 1.1:
Comment: Dear authors, thank you for the rebuttal, which clearly addressed my concerns.
---
Reply to Comment 1.1.1:
Comment: Thank you for the reply, and for taking the time to consider and analyze our rebuttal. If it has clearly addressed your concerns, do you mind raising your score accordingly? We are also happy to answer any other questions you might have in the meantime.
---
Rebuttal 2:
Comment: Dear Reviewer KQ34
Thanks for reviewing this work. Would you mind checking the authors' feedback to see whether it resolves your concerns, or whether you have further comments?
Best wishes
AC | null | null | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for the valuable comments and positive feedback regarding our submission.
As mentioned by reviewer **xRYu**, we are the “first to address SE(3)-equivariance in the transformer-based Perceiver IO architecture for multi-view applications.” They also praise the significant effort put into our appendix, in an attempt to make the complex topic of neural network equivariance accessible to beginners. Reviewer **KQ34** states that our proposed method “can be inspiring for other tasks that require equivariance”, and that we “introduce the problem well, effectively explaining the importance of equivariance to the task of multi-view depth estimation”. Reviewer **QCgu** mentions that our general multi-view architecture can be extended beyond depth estimation to also improve novel view synthesis, which is an exciting direction for future work that we will explore. All reviewers equally praise the motivation behind our work, our algorithmic and design contributions, which include several novel equivariance components, and our state-of-the-art results in multi-view depth estimation, evaluated in multiple benchmarks, as well as convincing ablation studies.
We address each point raised by the reviewers in their respective replies and will include all proposed modifications and additional experiments in the revised version of our manuscript. In particular, reviewer **KQ34** mentioned that our appendix contains important information that should be included in the main paper to improve reader experience. We are committed to balancing the main paper and appendix in order to give readers a fluent understanding of our motivation and contributions, as well as improving figures to help those not familiar with fundamental equivariance concepts. We also would like to emphasize that reviewer **QCgu** awarded us with an Excellent score for presentation.
Pdf: /pdf/cb1a70174d7b7576d560b4837724a2c98ee1b87d.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Diffusion-based Curriculum Reinforcement Learning | Accept (poster) | Summary: The paper presents an intuitive way to apply curriculum learning using diffusion based models to learn a goal distribution that can interpolate between the state-visitation distribution to states with high-value and high-intrinsic reward. As a result, the curriculum generates goals that lie at the edge of the states with non-zero occupancy, and higher value/ closeness to the target-goal.
The technical details are mostly complete and seem sound upon initial reading; I did not delve into the proofs/derivations in the appendix. But the exposition of how we go from diffusion models, to AIM, to visitation-count modelling, and to the newly proposed DiCuRL method, is mostly clear.
Although there are multiple points of improvement, I think many practitioners will appreciate the authors' work.
Strengths: The technical details are quite clear upon initial reading, even if one is not familiar with AIM or diffusion models. I.e., the new method should be clear enough to reproduce from reading the paper.
Weaknesses: ### Major comments
- The introduction lumps too much related work together, making it hard to find the actual point and criticism that the authors want to make about the current state of the field.
- Before reading the background/technical details, the motivation for DiCuRL is unclear: 1) how is noising/denoising uniquely helpful for exploration? Why can't another method (like a VAE or GAN) do this by modelling the state-visitation distribution? Aren't we just choosing another, perhaps more powerful, generative method? **After reading the full paper:** I disagree that this is a sound motivation; any stochastic method over the state-visitation distribution could achieve this. I agree that modelling the state-visitation distribution is useful, as it allows learning of goals that the agent has seen and can reach.
- 4.0 Line 222, it is not clear from the text what problem the authors are trying to solve through the graph construction and the optimization of the curriculum goal (Eq. 12). How is the 'optimal' curriculum goal even defined? Eq. 12 of course shows the objective, but why do we need this? How is the graph even constructed (meaning the edges); is it fully connected? Initial reading of this paragraph gives the impression of severe over-engineering of the goal sampler.
- Figure 1 overlaps with Table 1 and contains too many overlapping lines to draw a conclusion. This must be improved for presentation. Reduce the number of unnecessary baselines and show these in the appendix.
- The results section spends most of its time speculating about why the baselines perform in a certain way but does not focus on the authors' method. Line 281 states that there is a difference between OUTPACE and DiCuRL; however, neither method statistically significantly outperforms the other. Too much of the experimental setup is moved to the appendix.
- It is unclear from figure 3 at what point during training this plot was made. Now the baseline methods look arbitrarily bad compared to the authors' method. It is color-coded, but maybe add a colorbar to figure 3 indicating the training episodes.
### Technical comments
- 3.3 Slight confusion on the reward $r^\pi_\phi$, it's good to mention that you're actually learning $f(s)$ and using this to compute $r$.
- 4.0 Explanation on the mixing parameter $\bar{\alpha}_k$ is omitted. Shortly state it in the main text.
- 4.0 The definition of $g_d$ is too hidden. I infer from Alg.2 that this is supposed to represent the *true* goal distribution.
- Results, figure 1, table 2. Why plot the standard-deviations? Why not a non-parametric tolerance interval to get a sense of spread, or plot a confidence interval for the expected success-rate?
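To illustrate the suggestion in the last point (a hedged sketch; the data and function names are hypothetical), a nonparametric bootstrap confidence interval for the mean success rate over seeds could replace raw standard deviations:

```python
import random

def bootstrap_ci(rates, level=0.95, n_boot=2000, seed=0):
    """Nonparametric bootstrap confidence interval for the mean success
    rate, given one final success rate per training seed."""
    rng = random.Random(seed)
    n = len(rates)
    means = sorted(
        sum(rng.choice(rates) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int((1 - level) / 2 * n_boot)]
    hi = means[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi
```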
### Minor comments
- Intro paragraph 1 should be split into separate paragraphs making distinct points. Not a lumpsum of information.
- Intro paragraph 1, maybe make a distinction between hierarchical RL + curriculum RL for goal-generation. Even if HRL can implicitly generate curriculums, the motivation is often slightly different.
- Direct reference to papers should be done with the author: 'Person et al., (year) showed ...', not '[1, 2] showed ...'. Or you could write, 'Other studies [1, 2, 3], investigated ...' or something similar.
- Intro paragraph 2 is not a paragraph but 1 sentence.
- Figure 3, since DiCuRL is mostly on par with OUTPACE this should be compared in the plot for comparing curriculum goals
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) Could the authors revise the current version and improve upon (most) of my critiques, then I'd be willing to raise my score.
2) Will the authors share code?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors shortly discuss the limitations of their method, which I mostly agree with.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: The introduction dumps too much related work together [...]
We will restructure the introduction to separate and clarify the related work.
**W2**: [...] Why can't another method (like a VAE, or GAN) do this through modelling the state-visitation distribution? [...]
We acknowledge that other methods could effectively model the state distribution. However, we are cautious not to speculate on which method would yield the most successful output without further comparative analysis, which may be an interesting line of future research.
We believe that the noising/denoising mechanism of the diffusion model is particularly beneficial for the following reasons:
* *Denoising* (lines 5-7 in Alg. 1): Gaussian noise is incrementally reduced by the neural network according to a specific variance schedule. This process is inherently imperfect: due to the neural network's limitations in precisely matching the sampled noise, a small degree of randomness remains. We believe that this residual noise introduces a subtle variability in the curriculum goals, which aids exploration by generating slightly varied goals.
* *Noising* (lines 8-9 in Alg. 1): Original data sampled from a distribution are intentionally corrupted with Gaussian noise based on the sampled timestep k. The “noised” data are then processed by the neural network, and the loss is calculated (see Eq. 10 of the paper).
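The noising side described above can be sketched in a few lines of pure Python (a linear variance schedule is one common choice, not necessarily the paper's): $\bar{\alpha}_k = \prod_{i \le k}(1-\beta_i)$ mixes the clean sample with Gaussian noise, and the denoising network is trained to predict that noise.

```python
import math

def make_alpha_bar(K, beta_start=1e-4, beta_end=0.02):
    """alpha_bar_k = prod_{i<=k} (1 - beta_i) for a linear beta schedule;
    it decreases from ~1 (little noise) toward 0 (nearly pure noise)."""
    alpha_bar, prod = [], 1.0
    for k in range(K):
        beta = beta_start + (beta_end - beta_start) * k / (K - 1)
        prod *= 1.0 - beta
        alpha_bar.append(prod)
    return alpha_bar

def noise_sample(x0, alpha_bar_k, eps):
    # forward "noising" step: x_k = sqrt(abar_k) * x0 + sqrt(1 - abar_k) * eps
    a = math.sqrt(alpha_bar_k)
    b = math.sqrt(1.0 - alpha_bar_k)
    return [a * x + b * e for x, e in zip(x0, eps)]
```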
**W3**: [...] it is not clear from the text what problem the authors are trying to solve through the graph construction and the optimization of the curriculum goal [...]
We employ two different strategies for generating curriculum goals:
* **Bipartite Graph Optimization Strategy**: In this strategy, as outlined in the paper and also utilized in baseline methods like HGG and OUTPACE, we sample a mini-batch $b$ from the replay buffer $B$. This batch contains many states from different timesteps, which we provide to the curriculum goal generator, i.e., the diffusion model. The diffusion model generates a distribution of curriculum goal candidates $G_c$ from which we select the optimal curriculum goal $g_c$ using bipartite graph optimization. This optimization maximizes the cost function given in Eq. 12.
- *Graph Construction*: The vertices of the bipartite graph can be divided into two disjoint sets $V_x$ and $V_y$, where $V_x$ consists of the generated curriculum goals and $V_y$ includes the desired goals sampled from the desired goal distribution. The graph is not fully connected: not every pair of distinct vertices is joined by an edge; instead, each vertex in $V_x$ is connected only to vertices in $V_y$.
- *Purpose of Eq. 12*: The objective here is to select K curriculum goals that are most diverse and beneficial for training, ensuring that the goals selected cover a broad spectrum of possible scenarios. We then randomly sample a single $g_c$ from these K goals for each episode.
* **Single State Strategy** (see Sec. B of the supp. mat.): This method involves feeding only the state from the last timestep into the diffusion model, which then generates a single curriculum goal. The rationale behind using the last timestep is the assumption that the final state is closer to the desired goal. However, this does not always hold true as agents might not always progress linearly towards the goal, sometimes moving backwards or sideways. In other words, if the last state does not accurately represent progress towards the desired goal, the generated curriculum goals might not optimally guide the agent, potentially leading to decreased sample efficiency and slower learning rates. This effect is demonstrated in Fig. 4 in Section B of our supp. mat.
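The bipartite selection step above can be sketched as a toy assignment problem. This is a hypothetical stand-in: the paper uses a Minimum Cost Maximum Flow formulation over Eq. 12, while here `scipy.optimize.linear_sum_assignment` with plain Euclidean costs plays the analogous role of matching candidate goals in $V_x$ to desired goals in $V_y$; all sizes and values are made up for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
G_c = rng.uniform(0, 10, size=(8, 2))  # candidate curriculum goals (V_x) from the diffusion model
G_d = rng.uniform(0, 10, size=(3, 2))  # desired goals (V_y); K = 3 in this toy example

# Pairwise Euclidean costs between every candidate and every desired goal.
cost = np.linalg.norm(G_c[:, None, :] - G_d[None, :, :], axis=-1)  # shape (8, 3)

# Solve the assignment: one candidate per desired goal, minimizing total cost
# (a simplified surrogate for the min-cost max-flow optimization in the paper).
rows, cols = linear_sum_assignment(cost)
selected = G_c[rows]                          # the K selected curriculum goals
g_c = selected[rng.integers(len(selected))]   # sample a single g_c per episode
```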
**W4**: Fig. 1 overlaps with table 1 and contains too many overlapping lines to draw a conclusion [...]
We will revise Fig. 1 by reducing the number of baseline methods shown and displaying only the baselines that match our proposed method. Additionally, we will relocate some of the comparisons to the supp. mat.
**W5**: [...] states that there is a difference between OUTPACE and DiCuRL, however, neither method statistically significantly outperforms the other [...]
We acknowledge that our analysis might have focused excessively on the baselines’ performances. We will revise the discussion accordingly and move part of the discussion from the Appendix to the main text (given the possibility of including an extra page in case of acceptance).
Concerning the comparison between OUTPACE and DiCuRL, we have conducted a statistical analysis using the Wilcoxon rank-sum test to compare the no. of timesteps needed by the two methods to achieve a success rate greater than 0.99 across five different seeds for training. Here are the detailed test results for three specific environments:
* PointSpiralMaze: p=0.04
* PointNMaze: p=0.44
* PointUMaze: p=0.04
For PointSpiralMaze and PointUMaze, there is statistically significant evidence to reject the null hypothesis (p<0.05), suggesting that DiCuRL statistically outperforms OUTPACE in these environments. Conversely, for PointNMaze, this is not the case. We note, however, that with only 5 samples this analysis may be limited.
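For reference, such a comparison can be reproduced with `scipy.stats.ranksums`; the per-seed timestep counts below are made up for illustration, since the rebuttal reports only the resulting p-values.

```python
from scipy.stats import ranksums

# Illustrative only: hypothetical timesteps-to-success for 5 seeds per method.
dicurl  = [1.10e5, 1.20e5, 1.00e5, 1.30e5, 1.15e5]
outpace = [1.60e5, 1.70e5, 1.50e5, 1.80e5, 1.65e5]

# Two-sided Wilcoxon rank-sum test: with these toy numbers, the samples
# are fully separated, so the difference comes out significant (p < 0.05).
stat, p = ranksums(dicurl, outpace)
```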
**W6**: It is unclear from Fig. 3 at what point during training this plot was made [...]
We will add a color bar to Fig. 3 to indicate the timestep corresponding to each different color of the curriculum goals. You can refer to the new plot layout in Fig. 15 of the **attached PDF**, which shows the intermediate goals for OUTPACE.
**Technical and minor comments**
Due to space limitations, we cannot provide detailed replies. However, we acknowledge these comments and we'll do our best to handle them in the final version of the paper.
**Q1**: Could the authors revise the current version and improve upon (most) of my critiques, then I'd be willing to raise my score.
We did (and will do) our best to address all the comments.
**Q2**: Will the authors share code?
Of course! Please see the link reported at L264.
---
Rebuttal Comment 1.1:
Title: Good rebuttal, but confusion remains around the goal optimization
Comment: Thank you for replying to most of my comments. The additional plots are highly appreciated.
Not all of my concerns are fully resolved yet though. Could the authors still comment on major concerns below?
---
### Major concerns
> W3
Your answer clarifies some of my confusions around the goal optimization, and the results in appendix B also support the motivation. But this still remains a rough edge of the paper.
It was not clear from the main text that the baselines also utilize this method; this is important and must be mentioned and cited.
Remaining questions:
* The objective of Eq. 12 is a sum of distances to an average goal. Could the authors comment on what $\bar{g}$ means, and whether this is a pitfall of the method? For example, in figure 2, if one goal $g_1$ is on the upper side of the task and $g_2$ in the bottom, would $\bar{g}$ be inside a wall?
* How does this objective transfer to tasks where a $l_2$ distance over states doesn't work (e.g., atari)?
* Can the term inside the square root of Eq. 12 become negative?
* Is there prior work on other methods? Couldn't we e.g., use random network distillation to score $w$ based on approximate uncertainty or state-occupancy?
* After line 12 in Algorithm 2, where is $g_c$ used? Is this the set in Line 16?
* Line 12 algorithm 2, shouldn't $g_c$ be a set $\mathcal{G}_c$?
---
### Remaining minor remarks
> W2
I see what you mean now on the advantage of diffusion methods for goal generation over e.g., GANs. I think the paper would greatly benefit from this discussion and the points you raise, including a short comment on how we observe this effect in figure 2.
> W4
Thank you for addressing this, I saw that figure 4 in the appendix B also seems to have the plot-title cropped.
> W5
I believe that doing statistical tests on the final performance of the tested methods slightly misses the point of my comment. My issue was that a majority of the discussion in Section 5 reads as speculative, whereas the results did not show a very strong difference.
Also, the use of p-values and statistical tests is not recommended in RL, since a) the sample sizes are always too small, b) there are too many confounders for the algorithms (implementation parity, hyperparameter settings, or even the hardware), and c) results are often biased in favor of the designer. See also: Patterson et al., 2023 https://arxiv.org/abs/2304.01315.
---
Rebuttal 2:
Comment: Thanks a lot for engaging in this discussion. We really appreciate your comments.
**W3** [...] this is important and must be mentioned and cited.
We will clarify in the revised manuscript that the baselines (OUTPACE and HGG) also utilize the Minimum Cost Maximum Flow algorithm [a].
[a] Ravindra, K. A., et al. (1993) Network Flows: Theory, Algorithms, and Applications
* [...] Could the authors comment on…\
$\bar{g}$ represents the mean of the generated curriculum goals $\mathcal{G_c}$. It is true that $\bar{g}$ could potentially fall inside a wall, as our algorithm does not specifically prevent this: we designed it to be environment agnostic. However, we mitigate this issue through the use of diffusion loss (Eq. 10) in our total loss formulation (Eq. 11). This approach ensures that our generated curriculum goals are representative of the state-visitation distribution. Since the agent may collide with maze walls (it does not pass through them) during exploration, the state-visitation distribution typically does not include locations inside the walls. Consequently, the goals selected by minimizing the objective function are usually outside the walls.\
Despite these measures, the algorithm is not infallible. As illustrated in Figures 3a, 5a, and 6a, a small number of goals may still be generated inside the walls (in fact, the same problem also occurs in OUTPACE).
* [...] How does this objective transfer to tasks [...]\
Indeed, the current objective function may not be suitable for tasks where a distance over states is not applicable, such as in Atari games. However, as detailed in Section B of our supp. mat., we have demonstrated that our algorithm is also effective without relying on this specific objective function.
* [...] Can the term inside the square root of Eq. 12 become negative?\
In our implementation, we calculate the square of the difference $(g_i -\bar{g})^2$, to ensure non-negativity (see L200-201 in our repo https://anonymous.4open.science/r/diffusioncurriculum/hgg/hgg.py). However, it appears we inadvertently omitted the exponent in the paper. We will correct this to accurately reflect the calculation as $(g_i -\bar{g})^2$ in Eq. 12.
* [...] Is there prior work on other methods? [...]\
Indeed, Eq. 12 can be designed more effectively and represents a promising area for future research. As you suggested, integrating methods based on uncertainty or entropy is a viable approach. For instance, the baseline method OUTPACE effectively employs curriculum goal generation that targets states of uncertainty, as demonstrated in Figure 2 of the OUTPACE paper. We are open to exploring similar enhancements to our model to further refine its capabilities by incorporating such uncertainty or state-occupancy based scoring mechanisms in future work.
* [...] After line 12 in Algorithm 2, where is $g_c$ used? [...]\
In our algorithm, $g_c$ is utilized in several key parts: it serves as input to the policy network (line 7) and the Q network (line 13), and it is also used in the computation of the AIM reward (line 14). Once $g_c$ is determined, it defines the goal that the agent aims to achieve. At the end of the episodes, we assess the algorithm's success rate by sampling a desired goal from the goal distribution $\mathcal{G}$ and observing how many of the test rollouts are successfully completed by our agent in achieving the given desired goal.
* [...] Line 12 algorithm 2, shouldn't $g_c$ be a set $\mathcal{G_c}$?\
Indeed, at the end of the process, we select a single curriculum goal, referred to as $g_c$. More specifically, given the set $\mathcal{G_c}$ in Eq. 12, we select K curriculum goals that minimize the equation. We then randomly sample a single $g_c$ from these K goals for each episode. We will revise Line 12 in Algorithm 2 to more clearly reflect this process and ensure that the line is accurate and unambiguous.
**W2** [...] I think the paper would greatly benefit [...]
We will certainly add this to the final version of the paper.
**W4** [...] figure 4 in the appendix B [...]
We chose to crop the plot titles and included the necessary information in the captions under each subplot for clarity. We will adjust the layout in the final version of the paper.
**W5** I believe that doing statistical tests [...]
Thanks a lot for pointing us to this reference. Honestly we were also a bit hesitant in showing the statistical analysis because of the small sample size, as we admitted in the previous reply. That’s why in the paper we reported the results in terms of curves and descriptive statistics on the number of timesteps needed to reach a given success rate, as also done in other papers in the field.
Concerning the way we discussed the results in the results section, we will do our best to revise the text to make it read less speculative, highlighting the similar performance of the methods whenever appropriate.
---
Rebuttal Comment 2.1:
Title: Thanks for the clarification
Comment: The author's response to my additional questions clears up most of my confusions around the graph-optimization part of their goal generation.
I've raised my score from 5 to 6.
---
Rebuttal 3:
Comment: Thank you! | Summary: This paper studies curriculum reinforcement learning (RL) in the context of multi-goal RL, which aims to generate a series of goals with increasing difficulty to facilitate guiding learning policies. To this end, the paper proposes a framework that employs a conditional diffusion model that learns to generate a goal conditioned on the current state. The experiments in three maze navigation tasks show that the proposed method can reliably solve the tasks and perform them comparably to existing methods. This work studies a meaningful problem and proposes a reasonable framework. Yet, I am concerned with the limited domain (navigation) and tasks (maze) used for evaluation, the significance of the results, and the limited applicability beyond multi-goal RL, etc. Therefore, I am slightly leaning toward rejecting this paper, but I am willing to adjust my score if the rebuttal addresses my concern.
Strengths: **Motivation and intuition**
- The motivation for studying curriculum learning for multi-goal RL is convincing.
- Leveraging diffusion models to generate goals is reasonable.
**Clarity**
- The overall writing is clear. The authors utilize figures well to illustrate the ideas.
**Related work**
- The authors provide comprehensive descriptions of existing works in curriculum RL.
**Experimental results**
- The experimental results show that the proposed method performs comparably to existing methods.
**Reproducibility**
- The code is provided, which helps understand the details of the proposed framework.
Weaknesses: **Clarity**
- The first paragraph of the introduction is unnecessarily long, making it very difficult to follow.
- While the related work section describes several existing works in detail, it fails to clearly differentiate these works from the proposed method.
**Limited to goal-conditioned RL**
- The proposed method is limited to multi-goal RL, which requires a given goal. However, in many real-world applications, specifying a goal could be difficult or even impossible, making the proposed method inapplicable. I feel it is entirely possible to extend the proposed method to the general RL setup, where only the current state is given. This would greatly increase the applicability of the proposed method.
**Evaluation is limited to the Maze navigation**
- The proposed method was only compared to existing methods in the Maze navigation tasks, where goals are represented as coordinates. It would be a lot more convincing if the evaluation was also conducted in other domains, such as robot arm manipulation, locomotion, and games. Additionally, evaluating in grid-world navigation tasks can add value to the paper by exploring discrete state and action spaces.
**Significance of the results**
- According to Figure 1, I am not entirely convinced that the proposed method performs significantly better than the baselines. Also, the plotting scheme makes it difficult to interpret when many curves overlap.
**Related work**
- The related work section focuses on existing works in curriculum RL yet fails to discuss many works that use diffusion models for RL or imitation learning, including but not limited to
- "Learning Universal Policies via Text-Guided Video Generation"
- "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion"
- "Learning to Act from Actionless Video through Dense Correspondences"
- "Goal-conditioned imitation learning using score-based diffusion policies"
- "Diffusion model-augmented behavioral cloning"
- "Imitating human behaviour with diffusion models"
**Algorithm 2**
- While Algorithm 2 is titled RL Training, Lines 15-21 are for evaluation/testing, which is a bit confusing.
**Minor errors**
- L282: It seems that a non-break newline is used here, which gives no space between this paragraph and the next paragraph starting from Line 283.
Technical Quality: 3
Clarity: 2
Questions for Authors: See above
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: The first paragraph of the introduction is unnecessarily long [...]
We will restructure the introduction to ensure it is more concise and better organized, which we believe will significantly improve the clarity and readability of the paper.
**W2**: While the related work section describes several existing works in detail, it fails to differentiate these works from the proposed method exactly.
We address this comment under **W6**.
**W3**: [...] I feel it is entirely possible to extend the proposed method to the general RL setup, where only the current state is given. This will greatly increase the applicability of the proposed method.
We recognize that the need to specify a goal can restrict the applicability of our approach in real-world scenarios where such goals are not readily available or definable.
Indeed, extending our method to a general RL setup, where the system operates solely based on the current state without explicit goal definitions, is a valuable direction for future research. Inspired by your suggestion and valuable references, we are considering an approach similar to that described in [a]: in this approach, a desired goal is defined using text encoding, and then a planner generates a series of future frames that illustrate the actions. Control actions are then derived from this generated video, enabling the agent to navigate without the need for specifically predefined goals. Additionally, we could adapt our policy (possibly our Q function using diffusion models), as demonstrated in [b]. Alternatively, we might explore learning directly from the environment using techniques such as RGB video, as proposed in [c], or through point clouds.
**W4**: [...] It would be a lot more convincing if the evaluation was also conducted in other domains, such as robot arm manipulation, locomotion, and games [...]
Given the time constraints, we have carried out additional experiments on two robot manipulation tasks, although it would be entirely possible to apply our method to other domains such as locomotion and games, as well as to discrete state and action spaces. Please see our **General comment** and its **attached PDF** (Fig. 16). These experiments hopefully demonstrate the applicability of our method to tasks that are quite different from maze navigation.
**W5**: According to Fig. 1, I am not entirely convinced that the proposed method performs significantly better than the baselines. Also, the plotting scheme makes it difficult to interpret when many curves overlap.
We will provide clearer plots in our supp. mat. by creating different subplots and plotting together groups of 2 to 3 baseline methods. We hope that comparing only 2 or 3 baseline methods in each subplot will make the comparison clearer and easier to understand.
Concerning the comparison between OUTPACE and DiCuRL, we have conducted a statistical analysis using the Wilcoxon rank-sum test to compare the no. of timesteps needed by the two methods to achieve a success rate greater than 0.99 across five different seeds for training. Here are the detailed test results for three specific environments:
- PointSpiralMaze: p=0.04
- PointNMaze: p=0.44
- PointUMaze: p=0.04
For PointSpiralMaze and PointUMaze, there is statistically significant evidence to reject the null hypothesis (p<0.05), suggesting that DiCuRL statistically outperforms OUTPACE in these environments. Conversely, for PointNMaze, this is not the case. We note, however, that with only 5 samples this analysis may be limited.
**W6**: The related work section focuses on existing works in curriculum RL yet fails to discuss many works that use diffusion models for RL or imitation learning, including but not limited to [...]
We acknowledge our oversight and appreciate your detailed suggestions. We will thoroughly review these studies and will update the related work section to discuss them extensively, highlighting how they relate to our research.
**W7**: While Algorithm 2 is titled RL Training, Lines 15-21 are for evaluation/testing, which is a bit confusing.
Thank you for pointing this out. We will update our algorithm title to "Algorithm 2: RL Training and Testing".
**W8**: L282: It seems that a non-break newline is used here, which gives no space between this paragraph and the next paragraph starting from Line 283.
Thank you for noticing this layout issue. We will fix it in the revised version of the paper.
[a] Du, Y., et al. (2024), Learning universal policies via text-guided video generation
[b] Chi, C., et al. (2023), Diffusion policy: Visuomotor policy learning via action diffusion
[c] Ko, P-C, et al. (2023), Learning to act from actionless videos through dense correspondences
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for the rebuttal with additional experiments, which address some of my concerns, including fixing Algorithm 2, and the significance of the results.
**Clarity \& minor errors**: Given that NeurIPS does not allow updating submissions during the rebuttal period, it is difficult for me to assume that the authors would completely fix the issues. I would still say the paper in its current form is not ready for publication.
**Limited to goal-conditioned RL**: I appreciate the discussion of how the proposed method can apply beyond the goal-conditioned RL scenario. However, without seeing the results, it is not convincing.
**Related work**: While the authors made a promise to discuss how their work differs from the references I provided, I am unsure how this will go without seeing the actual revised paper, which, again, unfortunately, is not possible during the NeurIPS rebuttal.
**Evaluation is limited to the Maze navigation**: I really appreciate the additional results of FetchPush and FetchPickAndPlace. However, I still believe the evaluation is limited. As I suggested, including navigation tasks with discrete state and action spaces, locomotion tasks, and image-based games would significantly strengthen the contributions of this work.
I have mixed feelings about this work. On the one hand, I like the idea of leveraging diffusion models to generate goals for multi-goal RL; on the other hand, I hope to see an improved version of this work with clear writing, detailed discussion of related works, and evaluations in diverse domains and beyond goal-conditioned RL, which could present significant contributions to the community. Hence, I recommend rejecting this paper in its current form, and I encourage the authors to improve this submission and give it another shot if it eventually gets rejected. That said, I won't fight to reject this paper if my fellow reviewers decide to champion it.
---
Rebuttal 2:
Comment: > Thank you for the rebuttal with additional experiments, which address some of my concerns, including fixing Algorithm 2, and the significance of the results.
Thanks for acknowledging our rebuttal and for pointing out the significance of the results.
> Clarity & minor errors: Given that NeurIPS does not allow updating submissions during the rebuttal period, it is difficult for me to assume that the authors would completely fix the issues. I would still say the paper in its current form is not ready for publication.
As far as we can see, the point of the NeurIPS rebuttal process is to engage in a scientific discussion with peers to receive suggestions on how papers can be improved to meet the quality standards of the conference. From the conference website: “Authors may not submit revisions of their paper or supplemental material, but may post their responses as a discussion in OpenReview. This is to reduce the burden on authors to **have to revise their paper in a rush during the short rebuttal period**.” The entire spirit of the rebuttal process is based on the idea that authors can revise an accepted paper right after the rebuttal period (**not** in a rush). After all, if only papers that are already good in their current form were to be accepted, what would be the point of the rebuttal phase?
Given this, as said we honestly promised that we will fix those issues, and we will do it. It would be against our professional ethics not to do that.
> Limited to goal-conditioned RL: I appreciate the discussion of how the proposed method can apply beyond the goal-conditioned RL scenario. However, without seeing the results, it is not convincing.
We limited our experimentation to goal-conditioned scenarios to align our experimental setup with that of our baselines, and compare against state-of-the-art results. Testing our method on tasks beyond goal-conditioned scenarios has never been in the scope of this paper, but as we said this can certainly be an interesting direction for future research for the whole community.
> Related work: While the authors made a promise to discuss how their work differs from the references I provided, I am unsure how this will go without seeing the actual revised paper, which, again, unfortunately, is not possible during the NeurIPS rebuttal.
We are a bit puzzled also by this comment, to be honest. We, as authors, are obviously devoted to maintaining a highly ethical scientific conduct, as we have always done. As said in our previous reply, we will analyze these works and include them in our discussion in the revised version of the paper (as a matter of fact, we have started working on it, please see below our reply on the evaluation).
> Evaluation is limited to the Maze navigation: I really appreciate the additional results of FetchPush and FetchPickAndPlace. However, I still believe the evaluation is limited. As I suggested, including navigation tasks with discrete state and action spaces, locomotion tasks, and image-based games would significantly strengthen the contributions of this work.
We politely disagree on this. Our evaluation setup is actually well aligned with most of the works from the state of the art in the field, including the papers introducing our baselines and the papers suggested by the reviewer in the previous comment.
Considering the papers introducing the baselines we used in our experiments:
* ACL [18] → 3 tasks (synthetic language modelling on text generated by n-gram models, repeat copy, and the bAbI tasks)
* GoalGAN [14] → only Ant Maze tasks
* HGG [19] → 4 robot manipulation and 4 hand manipulation tasks
* ALP-GMM [17] → BipedalWalker
* VDS [16] → 4 robot manipulation tasks, 9 hand manipulation tasks, 3 maze tasks
* CURROT [23] → Bipedal Walker and Maze tasks
* GRADIENT [13] → FetchPush and Maze tasks
* OUTPACE [12] → 3 maze with points agent, 1 maze with Ant, 2 robotic tasks
Concerning the papers suggested by the reviewer:
* Learning Universal Policies via Text-Guided Video Generation → combinatorial robot planning tasks (real robotic system as well)
* Diffusion Policy: Visuomotor Policy Learning via Action Diffusion → 5 Robotic manipulation tasks (real robotic system as well)
* Learning to Act from Actionless Video through Dense Correspondences → 6 different robot manipulation tasks + 4 different tasks + 3 real world tasks
* Goal-conditioned imitation learning using score-based diffusion policies → 3 different robotic tasks (no real world task)
* Diffusion Model-Augmented Behavioral Cloning → 1 Maze, 1 FetchPick, 1 Hand, 1 cheetah, 1 walker, 1 AntReach
* Imitating human behaviour with diffusion models → CLAW environment, Kitchen environments, CSGO
> I have mixed feelings about this work [...]
We sincerely appreciate your engagement in the rebuttal, although of course we would hope for a score improvement. We are aware that the paper can be improved and as said we will do our best to do that if the paper were to be accepted.
---
Rebuttal Comment 2.1:
Comment: Thank you for the further clarification.
**Clarity**: The clarity of this paper is not ready for publication, and I cannot simply count on the author's promise and accept this paper. To be clear, I do trust that the authors will do their best to improve the clarity; however, I am still unsure if the revised paper will be ready. I believe this paper needs significant reorganization and revision, not just fixing a few sentences, so I have to see it to believe it.
**Related work**: I understand that the authors promised to discuss the relevant works I provided. However, it's not just about discussing them. What really matters is how the authors discuss them and differentiate their work from these works. Again, without seeing the actual writing, I just cannot say my concern is addressed.
**Limited evaluation**: In short, listing existing methods that do not sufficiently evaluate their methods, at least in my opinion, does not alleviate my concern. I am not asking the authors to evaluate their method in complex real-world tasks. Setting up and evaluating their method in the simulated tasks in different domains should be totally reasonable.
In sum, I stand by my evaluation and recommend rejecting this paper in its current form.
In my opinion, the rebuttal is for clarifying what reviewers misunderstand, not for the authors to make promises and urge reviewers to take them.
---
Rebuttal 3:
Comment: Dear Reviewer,
Thanks again for the engaging discussion. Please note that we have thoroughly revised the Introduction and the Related Work sections (see the comments we posted under the General comment section).
Specifically, in the Introduction we clearly highlighted the novel aspects of our algorithm (see the **Contributions** paragraph), and reorganized the overall text to make it more effective (splitting the first paragraph in a more coherent way). In the Related Work section, according to your suggestions, we included the references you kindly provided and added a brief paragraph highlighting the limitations of current works and the distinctive elements of our paper.
We do hope that the revised versions of Section 1 and Section 2 address your concerns regarding **Clarity** and **Related work**. We would really appreciate it if you could acknowledge this reply.
Best regards,
The authors
---
Rebuttal Comment 3.1:
Title: Re: Official Comment by Authors
Comment: Thank you for the revised introduction and the related work, which are easier to follow while containing sufficient details. I will increase my score to 5.
---
Reply to Comment 3.1.1:
Comment: Thank you! | Summary: This work presents a novel diffusion model-based curriculum learning approach, called DiCuRL, for multi-goal reinforcement learning, namely goal-conditioned RL. The proposed conditional diffusion model leverages a Q-function and a learned reward function based on the Adversarial Intrinsic Motivation principle to incentivize goals that are reachable yet challenging to an RL agent. The paper evaluates DiCuRL against state-of-the-art curriculum learning approaches in maze environments with differing maps. In PointUMaze and PointNMaze, DiCuRL matches or slightly outperforms OUTPACE, which seems to be the best-performing method in these maze environments. In the most challenging map, PointSpiralMaze, DiCuRL outperforms OUTPACE, while the rest of the methods fail to yield an optimal policy by the end of training.
Strengths: - The related work section is extensive in terms of content and covers most of the recent advances in automatic curriculum learning for RL. The background and methodology sections are also detailed, and the problem setting and the proposed approach are explained clearly.
- The proposed curriculum learning approach is novel as it employs a conditional diffusion model. The idea of leveraging a Q-function and a learned intrinsic reward function to select achievable but challenging goals is intuitive, as well.
- Table 1 highlights the advantages of DiCuRL, and the introduction section also supports this table.
- The curricula generated by DiCuRL in Figures 2 and 3 (as well as the ones in the appendix) illustrate how DiCuRL yields optimal policies and outperforms some existing methods in the evaluated environments.
Weaknesses: - The introduction section should be improved in terms of writing. Content-wise, it is informative but also too dense. Some of the paragraphs are either too long or too short. Restructuring this section and making it more to the point would improve the readers' experience immensely.
- OUTPACE is the second best-performing automatic curriculum learning method in the evaluated environments. However, the paper does not demonstrate the curricula generated by OUTPACE, unlike the curricula of GRADIENT and HGG in Figure 3, which do not perform as well.
- All environments (point maze domain in MuJoCo with different maps) in the empirical validation section have the same dynamics, low-dimensional state, and action spaces. Although DiCuRL's advantages seem apparent as the map gets more complex, the empirical validation is insufficient to conclude that DiCuRL can outperform state-of-the-art methods in various goal-conditioned domains.
- The roles of loss components related to the Q-function and AIM reward function sound intuitive, yet they are explained briefly. I suggest the authors run an ablation study to highlight their separate contributions.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How do Q and AIM rewards differ in a goal-conditioned environment that provides a (sparse) reward for reaching the goal? Could you please give me an illustrative example to highlight how including both in the loss function of the diffusion model is better?
- What is g_d that initializes g_c in Algorithm 2?
- What do colors in figures illustrating curricula stand for specifically?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I don't see any explicit limitations regarding the proposed approach and the problem setting of interest other than those discussed by the authors in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: The introduction section should be improved in terms of writing [...]
We agree with the reviewer's suggestions: we will restructure the introduction to ensure it is more concise and better organized.
**W2**: [...], the paper does not demonstrate the curricula generated by OUTPACE [...]
In the **attached PDF**, we show the generated curriculum goals of OUTPACE for all maze environments (Fig. 15). Additionally, we have added a color bar to indicate which colors of the curriculum goals correspond to which timesteps. These figures will be included in the final version of the paper. Furthermore, we will update Fig. 3 in the paper to include the color bars.
**W3**: [...] the empirical validation is insufficient to conclude that DiCuRL can outperform state-of-the-art [...]
We have additionally tested our approach on two robotic manipulation tasks. Please refer to our **General comment** and the **attached PDF** (Fig. 16) for more details.
**W4**: The roles of loss components related to the Q-function and AIM reward [... ]
We have conducted an ablation study to explore the individual contributions of the Q function and the AIM reward when integrated with the diffusion loss $L_d$, as outlined in Eq. 11 in the paper. The success rate results from this study are presented in Fig. 13a in the **attached PDF**.
Concerning the roles of the Q function and AIM reward, here’s our intuitive explanation:
* Loss $L_d$: Minimizing this component helps us accurately capture the state distribution. It ensures that our generated curriculum goals are representative of the state-visitation distribution.
* Q-function and AIM reward: The Q-function predicts the cumulative reward starting from a state, following the policy, while the AIM reward estimates how close an agent is to achieving its goal. We integrate both terms in the loss function by inverting their signs because our objective is to maximize Q and the AIM reward. By doing this, the diffusion model focuses on generating goals that not only minimize $L_d$ but also maximize the expected Q and AIM rewards.
This approach ensures that our generated curriculum goals are neither overly simplistic nor excessively challenging and progress towards the desired goal.
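To make the sign inversion concrete, here is a minimal illustrative sketch of a combined objective in the spirit of Eq. 11 (this is not our actual implementation; `lambda_q` and `lambda_aim` are placeholder names for the weighting coefficients):

```python
def curriculum_loss(l_d, q_value, aim_reward, lambda_q=1.0, lambda_aim=1.0):
    # Minimize the diffusion loss L_d while maximizing Q and the AIM
    # reward: the signs of Q and AIM are inverted so that gradient
    # descent on this scalar pushes both of them upward.
    return l_d - lambda_q * q_value - lambda_aim * aim_reward
```

A goal candidate with higher Q and AIM values thus yields a lower loss, biasing the diffusion model toward reachable yet rewarding curriculum goals.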
We have provided a detailed analysis with various visualizations for the PointUMaze environment in Sec. D in our supp. mat. If further details are required, we can provide a similar analysis for the PointSpiralMaze environment as well.
**Q1**: How do Q and AIM rewards differ in a goal-conditioned environment [...]
To demonstrate the effects of using either the AIM reward or the $Q_\phi$ function in conjunction with the $L_d$ component in Eq. 11 in the paper, we have provided illustrative examples in the **attached PDF**. In particular, Fig. 13c and 13d display the generated curriculum goals using $L_d$ with only $Q_\phi$ and $L_d$ only with the AIM reward, respectively. The generated curriculum goals reveal that omitting the AIM reward results in suboptimal performance, whereas the absence of the $Q_\phi$ function leads to the agent's inability to accomplish the task.
Additionally, we have implemented our method within a sparse reward, goal-conditioned RL framework across two different robotic manipulation tasks. We compared our method with HGG [a] and HER [b], as detailed in Fig. 16 in the **attached PDF**.
In this setting, the curriculum goals are integrated into our policy and the Q network (see line 7 in Algorithm 2 for the policy and line 10 in Algorithm 1 for the Q function). We utilize the AIM reward for both training RL algorithms and generating curriculum goals. This reward function, also a feature of OUTPACE, is a trainable neural network parametrized by $\varphi$ that is initialized randomly and trained simultaneously with the RL algorithms. It is specifically trained to minimize the Wasserstein distance between state visitation and the desired goal distribution, as detailed in Section 3.3.
The distinction between training RL algorithms using the sparse (binary) reward and the AIM reward is significant. For instance, consider an agent at position (x,y) with a goal at (m,n). In a sparse reward setting, such as that used in the HER methodology, the reward is calculated based on the Euclidean distance between the agent's position and the goal. If this distance is greater than a predefined threshold (set to 0.05m in Gymnasium-Robotics), the agent receives a reward of 0; otherwise, it receives a reward of 1. In other words, the agent receives a positive reward only when it reaches the goal within the threshold distance. In contrast, both our method and OUTPACE utilize a neural network-based reward—the AIM reward—which provides a continuous reward value. To illustrate this concept, we have included a 2D color map of the AIM reward function for the PointUMaze environment, showing how the reward evolves during different training episodes. These visualizations can be found in Fig. 8a, 10a, and 12a of our supp. mat. Additionally, the changes in the AIM reward for the PointSpiralMaze environment are illustrated in Fig. 13b in the **attached PDF**.
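The sparse goal-reaching reward described above can be sketched in a few lines (a hedged sketch of the binary scheme, with the 0.05 m threshold from Gymnasium-Robotics; function and argument names are ours):

```python
import math

def sparse_reward(agent_pos, goal_pos, threshold=0.05):
    # Binary goal-reaching reward: 1 when the agent is within the
    # distance threshold of the goal, 0 otherwise.
    dist = math.dist(agent_pos, goal_pos)
    return 1.0 if dist <= threshold else 0.0
```

The AIM reward, by contrast, is a trained network evaluated at (state, goal) pairs, so it returns a continuous value rather than this 0/1 signal.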
**Q2**: What is g_d that initializes g_c in Algorithm 2?
$g_d$ is the desired goal, and we initialize $g_c$ with $g_d$. At the beginning of training, our diffusion algorithm has not yet been run, so we initially provide the agent with $g_d$. Then, our curriculum goal generation algorithm generates curriculum goals for the agent.
**Q3**: What do colors in figures illustrating curricula stand for specifically?
We will add a color bar to Fig. 3 to illustrate the corresponding timesteps for each different color of the curriculum goals. You can refer to the new plot in Fig. 15 of the **attached PDF**, which shows the intermediate goals for OUTPACE.
[a] Andrychowicz, M., et al. (2017), Hindsight experience replay
[b] Ren, Z., et al. (2019), Exploration via hindsight goal generation
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for responding to all of my comments. My concerns are addressed to a large extent. I believe the new results and ablation studies provided in the global response improve the validity of the proposed approach. Of course, the rest of the baselines should be evaluated in the new environments for the final version.
This paper showcases a nice implementation of diffusion for curriculum learning, an important novelty. Although the results do not clearly demonstrate a superiority over the existing state-of-the-art, and the introduction section is not ready for the final version, I will raise my score from 5 to 6.
If the authors pinpoint the changes in the introduction section, I will raise my score again.
---
Reply to Comment 1.1.1:
Comment: Dear `FHis`,
Please find the revised version of the Introduction under the **General comment** section. We hope that the intro is now more effective and easy to read (we do believe it has improved indeed). Any further feedback is welcome of course.
Best regards,
The authors | Summary: This work introduces DiCuRL, a novel approach that uses diffusion models to generate curriculum goals for reinforcement learning agents. The method trains a model to capture the distribution of visited states, focusing on those with higher Q-values and intrinsic motivation rewards (i.e., AIM rewards). This approach aims to generate goals at an appropriate difficulty level while guiding the curriculum closer to the desired final goal. DiCuRL employs the Minimum Cost Maximum Flow algorithm to solve a bipartite matching problem to select curriculum goals.
Strengths: - Strong empirical evaluation against competitors (Fig. 1)
- The paper is information-dense but reasonably well-written. It helps with the comprehension of the proposed ideas
Weaknesses: - The approach is quite complicated and possibly unnecessarily so. I'd like to emphasize that I did not find any faults with the proposed method. It's just that I do not see how it will scale to more challenging, realistic environments.
- They missed citing a rich literature on exploration and curriculum RL. For example, see papers [1-5].
- The reward function for the Maze envs is not provided. Is this dense or sparse reward env? Note that, dense reward would not be a justifiable choice in this case.
*References*
1. Riedmiller, M., Hafner, R., Lampe, T., Neunert, M., Degrave, J., Wiele, T., Mnih, V., Heess, N., and Springenberg, J. T. (2018). Learning by playing solving sparse reward tasks from scratch. In International conference on machine learning, pages 4344–4353. PMLR.
2. Hertweck, T., Riedmiller, M., Bloesch, M., Springenberg, J. T., Siegel, N., Wulfmeier, M., Hafner, R., and Heess, N. (2020). Simple sensor intentions for exploration. arXiv preprint arXiv:2005.07541.
3. Nair, A. V., Pong, V., Dalal, M., Bahl, S., Lin, S., and Levine, S. (2018). Visual reinforcement learning with imagined goals. Advances in neural information processing systems, 31.
4. Korenkevych, D., Mahmood, A. R., Vasan, G., and Bergstra, J. (2019). Autoregressive policies for continuous control deep reinforcement learning. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 2754–2762.
5. Narvekar, S., Peng, B., Leonetti, M., Sinapov, J., Taylor, M. E., & Stone, P. (2020). Curriculum learning for reinforcement learning domains: A framework and survey. Journal of Machine Learning Research, 21(181), 1-50.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Fig. 3, what do the colours represent? Please be more elaborate. It is not clear at all at the moment
- In the appendix, it's mentioned that "The agent starts each episode from an initial state of [0, 0]." In RL environments, environmental resets can implicitly help exploration [1]. How would DiCuRL + fixed start state fare against SAC only + random start states?
- How does SAC only perform in the comparisons in Fig. 1?
- How important is the AIM reward? It is a bit weird to sum the Q value and one-step intrinsic motivation reward. This results in different scales/magnitudes of values, which is why the authors needed to tune the coefficients.
- To ask the previous question differently, can the AIM reward be substituted with simpler intrinsic motivation rewards like RND [2] or TD-error?
- It seems SAC + HER would be a lot simpler to use computationally and algorithmically. How does DiCuRL compare against SAC + HER?
*References*
1. Vasan, G., Wang, Y., Shahriar, F., Bergstra, J., Jagersand, M., & Mahmood, A. R. (2024). Revisiting Constant Negative Rewards for Goal-Reaching Tasks in Robot Learning. arXiv preprint arXiv:2407.00324.
2. Burda, Y., Edwards, H., Storkey, A., & Klimov, O. (2018). Exploration by random network distillation. arXiv preprint arXiv:1810.12894.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please suggest how this work can be extended to challenging environments with larger state-action spaces.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: The approach is quite complicated [...]
We carried out additional experiments on two robot manipulation tasks, please see the **General comment** and its **attached PDF** (Fig. 16) for more details.
**W2**: They missed citing a rich literature [...]
We will carefully review these papers and integrate them appropriately into the related work section to ensure a comprehensive discussion of existing research.
**W3**: The reward function for the Maze envs is not provided [...]
We are using a trainable reward function both for training the RL algorithm and for generating curriculum goals. The trainable reward function, see Eq. 4, is learned by minimizing the Wasserstein distance between state visitation and the desired goal distribution. Given that we are working with maze environments, using Euclidean distance would not be appropriate for a reward function. Following the same reward approach as OUTPACE, we use the AIM reward function trained simultaneously with the RL algorithms. As demonstrated in the supp. mat. and Fig. 13b in the **attached PDF**, the reward function converges to the desired goal as training progresses. Therefore, we integrated the reward function into our curriculum goal generation mechanism, as it is useful in Eq. 11 and has the potential to guide the agent toward the desired goal. In summary, we do not use a sparse or pre-defined dense reward equation. Instead, we utilize a trainable reward function which is randomly initialized and specifically trained to minimize the Wasserstein distance between state-visitation and the desired goal distribution.
**Q1**: In Fig. 3, what do the colours represent?
We will add a color bar to Fig. 3, which illustrates the corresponding timesteps for each different color of the curriculum goals. You can refer to our new plot in Fig. 15 in the **attached PDF** to have a preview of the modified figure.
**Q2**: [...] How would DiCuRL + fixed start state fare against SAC only + random start states?
**Q3**: How does SAC only perform in the comparisons in Fig. 1?
To address Q2 and Q3 jointly, we removed the curriculum goal generation part from the code and provided only the final desired goal to agents in the maze tasks, which corresponds to training the agent solely with SAC. We obtained the first set of results by starting the agent in a fixed initial state at $(0,0)$. For the second set of results, we uniformly sampled the initial position of the agent randomly in the environment. To avoid starting the agent inside the walls of the maze, we performed an infeasibility check, such that if the initial state sampled was inside the walls, we continued sampling until a feasible initial state was found. We compared our DiCuRL approach with both SAC + fixed initial state and SAC + random initial state across all maze environments. The success rate is shown in Fig. 14 of the **attached PDF**.
We observe that including curriculum goals significantly improves the success rate in simpler tasks such as PointUMaze, helping the agent achieve the task successfully across different training seeds. Without the curriculum (SAC + fixed initial position), the agent struggles to achieve complex tasks consistently, resulting in high variance in performance. With SAC + random initial state, the agent often reaches the desired goal, at least in some scenarios. This success can be attributed to trial and error (without requiring curriculum goals) because the random starting positions help the agent avoid becoming stuck in maze walls, thus enhancing its ability to navigate the environment effectively.
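The infeasibility check reduces to simple rejection sampling; a minimal sketch (our illustrative names, assuming a hypothetical `is_inside_wall` collision test for the maze):

```python
import random

def sample_feasible_start(is_inside_wall, low, high, seed=0):
    # Rejection sampling: propose uniform (x, y) initial states and
    # resample until one lies outside the maze walls.
    rng = random.Random(seed)
    while True:
        state = (rng.uniform(low, high), rng.uniform(low, high))
        if not is_inside_wall(state):
            return state
```

This guarantees every episode in the SAC + random initial state baseline starts from a physically valid position.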
**Q4**: How important is the AIM reward? [...]
You are absolutely right; this is why we need to use different coefficients. To illustrate the effects of the Q and AIM reward functions, we conducted an ablation study. Please see our **General comment** and its **attached PDF** (Fig. 13a) for more details.
**Q5**: [...] can the AIM reward be substituted with simpler intrinsic motivation rewards [...]
Yes, we believe that both RND and TD-error can potentially be substituted with the AIM reward. Indeed, in the AIM reward paper [a], the authors compare this approach with RND and show that RND performs worse compared to the AIM reward in certain tasks. Based on this evidence, we preferred to use the AIM reward for our method.
**Q6**: [...] How does DiCuRL compare against SAC + HER?
Given the timeframe we had, we couldn’t test SAC+HER on the maze tasks. Instead, to demonstrate that our approach can be extended to robotic manipulation and sparse reward tasks, and compare it to HER, we used the official repository of [b], where the authors compare their approach with HER in a sparse reward setting. As detailed in the **General comment**, we implemented DiCuRL using HER settings in sparse reward robotic manipulation tasks. Please note that we used DDPG instead of SAC, which is another off-policy RL algorithm. All methods, including ours, use sparse rewards for training DDPG. However, since our method is based on the AIM reward, we only use the AIM reward for generating curriculum goals. You can find the comparison in Fig. 16 of the **attached PDF**.
Limitations: Please suggest how this work can be extended to challenging environments with larger state-action spaces.
To demonstrate that our proposed method can be extended to more complex environments such as robotic manipulation tasks, we compared our approach with HGG and HER. The success rates are shown in Fig. 16 of the **attached PDF**. We will be happy to include these additional experiments, along with a brief discussion of these new results, in the final version of our paper.
[a] Durugkar, I., et al. (2021), Adversarial intrinsic motivation for reinforcement learning
[b] Ren, Z., et al. (2019), Exploration via hindsight goal generation
---
Rebuttal Comment 1.1:
Comment: Dear `c6He`,
Given that the author-reviewer discussion period will close soon, we would be really grateful if you could acknowledge our response and let us know if we addressed all of your concerns. Of course, we are happy to provide more details if you need them. We hope we can engage in an active discussion with you as we are doing with the other reviewers.
Thanks a lot.
The authors.
---
Rebuttal 2:
Comment: Dear Reviewer,
Please note that we have revised the Related Work section as well, including the references you suggested (see the comments we posted under the General comment section).
Best regards,
The authors | Rebuttal 1:
Rebuttal: ### **General Comment**
We sincerely thank all reviewers for the time and effort devoted to reviewing our manuscript. To address the key points raised, we have provided detailed responses to each reviewer. All responses are organized into questions and weaknesses. For example, **Q1** refers to the first question, while **W1** to the first weakness. Where needed, we also included a reply concerning the limitations.
For some responses, we have included additional results, which are available in the **attached PDF** (Fig. 13-16) included in this General comment.
We have been diligently working on improving the paper on several fronts, addressing all comments to the best of our capacity. We hope that the reviewers and chairs will appreciate our efforts and we wish to engage in a fruitful discussion during the rebuttal period.
Below, we summarize the main changes made:
* We conducted an **ablation study** to investigate the impact of the AIM reward function and Q function in generating curriculum goals with our method. For that, we omitted, separately, the reward function and the Q function from Eq. 11 in the paper, and plotted the success rate (with three different seeds) in Fig. 13a for the most challenging maze environment, PointSpiralMaze. The results indicate that the agent performs worse without the AIM reward function and fails to achieve the task without the Q function. The generated curriculum goals without the reward or Q function are shown in Fig. 13c and 13d, respectively. Fig. 13b, instead, illustrates the AIM reward value across different training episodes in a clockwise direction. Specifically, the first row and first column in Fig. 13b represent the reward values at the very beginning of training. As training progresses, the reward values shift towards the left corner of the maze environment (1st row, 2nd column). In the middle of training, the reward values are concentrated around the left corner of the maze environment (2nd row, 2nd column), and, by the end of training, the reward values converge to the desired goal area (2nd row, 1st column). This progression explains why the generated curriculum goals are not guiding the agent effectively but are instead distributed in the corner points shown in Fig. 13d. We have also demonstrated the behavior of the AIM reward function across different training episodes in our supp. mat. for the PointUMaze environment.
* Additionally, we **examined the impact of SAC with a fixed initial state [0,0] and SAC with a random initial state**. To do that, we removed the curriculum goal generation mechanism and assigned the desired goal, and then trained the agent using either SAC with a fixed initial state [0,0] or SAC with a random initial state. For the random initial state, we sampled initial states uniformly at random in the environment. To avoid starting the agent inside the maze walls, we performed an infeasibility check, resampling the initial state until it was feasible. We compared our approach using three different seeds with both the fixed initial state + SAC and the random initial state + SAC across all maze environments. The success rates are shown in Fig. 14 in the **attached PDF**. Note that the success rate for the PointUMaze environment in Fig. 14a is shown up to timestep $10^5$, whereas it was shown up to $10^6$ in the paper.
* We also **displayed the generated curriculum goals by the baseline method OUTPACE** in Fig. 15, with a color bar indicating the corresponding timesteps.
* To demonstrate the applicability of our method to different tasks, particularly in robot manipulation tasks, we implemented our approach using the official Hindsight Goal Generation (HGG) repository. We converted the HGG code from TensorFlow to PyTorch to integrate it with our diffusion model, which is based on PyTorch. We selected **two robot manipulation tasks, FetchPush and FetchPickAndPlace**, and increased the environment difficulty by expanding the desired goal area. This is shown in Fig. 16c and 16d, where the yellow area indicates the object sampling region and the blue area indicates the desired goal sampling region. The action space is four-dimensional, where three dimensions represent the Cartesian displacement of the end effector, and the last dimension controls the opening and closing of the gripper. The state space is 25-dimensional, including the end-effector position, the position and rotation of the object, the linear and angular velocity of the object, and the left and right gripper velocities. More detailed information regarding the action space and observation space of these robotic tasks can be found in the Gymnasium library documentation. For these additional experiments, we compared our method with HGG and HER using the DDPG algorithm, to ensure alignment with the baselines, using five different seeds. The results are shown in Fig. 16a and Fig. 16b, respectively for FetchPush and FetchPickAndPlace. Note that in this setting, all RL algorithms (including ours) use a binary reward (i.e., a sparse reward). However, since our curriculum goal generation algorithm is based on the AIM reward, we implemented the AIM reward function solely to generate curriculum goals while still using the sparse reward setting to train the DDPG algorithm with the diffusion model generating curriculum goals.
**NOTE**: Reviewer `7Mgp` asked about GAN. We did compare our method with GOAL-GAN, where a goal generator proposes goal regions, and a goal discriminator is trained to evaluate if a goal is at the right level of difficulty for the policy. While GOAL-GAN does not consider the target distribution as our approach does, we still consider it a pertinent baseline.
### **Reproducibility**
We have created a new (anonymous) repository containing the code for the additional experiments:
https://anonymous.4open.science/r/HER_diffusion-EB4F
The original (anonymous) codebase is available at (see L264 in the paper):
https://anonymous.4open.science/r/diffusioncurriculum/.
Pdf: /pdf/0cb3bc92972c7edd6c6c5d4e476e13f81cf0fdee.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generative Modeling of Molecular Dynamics Trajectories | Accept (poster) | Summary: The paper suggests a flow-based generative framework on molecular trajectories, with various downstream tasks such as forward simulation and transition path sampling. Additionally, the model is trained in a transferable setting, across tetrapeptides.
Strengths: 1. Extensive experiments over various downstream tasks
2. Transferable settings for tetrapeptides
Weaknesses: 1. Experiment baselines
The baselines of the experiments are mostly Markov State Models. I think it would also be good if there were some comparisons with other models, though I understand that many prior works targeted alanine dipeptide rather than tetrapeptides.
- Forward simulation: ITO$^{[1]}$, Timewarp$^{[2]}$
- Interpolation (Transition path sampling): PIPS$^{[3]}$
2. (Minor) Necessity of additional tasks
The necessity of the additional tasks seems relatively weak compared to tasks such as forward simulation and TPS, especially the inpainting design. Rather than additional tasks, might ablations for stability be a better choice? One can obviously see that scaling to long trajectories shows the stability against the time scale, and protein simulation shows the stability against space complexity.
**Minor typos, suggestions**
- The definition of S-MPNN only exists in the Appendix. It would be great to point out that more details are in the appendix, in the first paragraph of Section 4.4
- Figure 6 is not referenced in the main paper, only the appendix
- Figure 2F, reference of that blue indicates the side chains and orange indicates the backbones seems to be missing
[1] Implicit transfer operator learning: Multiple time-resolution surrogates for molecular dynamics, NIPS 2023
[2] Timewarp: transferable acceleration of molecular dynamics by learning time-coarsened dynamics, NIPS 2023
[3] Stochastic Optimal Control for Collective Variable Free Sampling of Molecular Transition Paths, NIPS 2023
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Difference between downstream tasks
Could the upsampling task be seen as a superset of interpolation? Upsampling with two given frames would be the same as interpolation.
2. Training on one tetrapeptide
Just curious, though the authors have presented a transferable setting, are there any results when the model is trained for a specific tetrapeptide and tested on downstream tasks?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: 1. Unconditional generation
As the authors have mentioned, unconditional generation is impossible since the model relies on key frames.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Experimental baselines**
We now provide new results comparing our work to Timewarp and ITO. Emphatically, these comparisons are limited to the forward simulation task as Timewarp and ITO are not capable of solving the other tasks.
* For **Timewarp**, we use the 4AA model with weights from the authors and sample 100 ns trajectories by running 2000 inference steps with timestep 50 ps. We do not use MH acceptance steps as the authors found exploration of the energy landscape to be much more effective without them.
* For **ITO**, transferable models across tetrapeptides are not available. We, therefore, train ITO ourselves on our tetrapeptide dataset with timesteps of 500 ps. We then run 200 inference steps to sample 100 ns trajectories.
For both methods, we observe that trajectories are unstable without further intervention (see **Figure 2** of the PDF attached to the global response). To bolster the baselines, we run OpenMM relaxation steps between each timestep to proceed with further analysis. We note that the Timewarp authors, in lieu of relaxation, rejected steps with an energy increase of 300 kJ / mol; however, we found that this strategy would reject the majority of proposed steps on generic tetrapeptides.
In **Figure 3** of the PDF attached to the global response, we visualize the free energy surfaces and torsion angle distributions for several peptides from Timewarp and ITO compared with MDGen. Our model obtains much better results across the board.
Quantitatively, we obtain the following Jensen-Shannon divergences from the forward simulation rollouts (c.f. Table 2):
| C.V. | MDGen | Timewarp | ITO |
| :- | :-: | :-: | :-: |
| Torsions (bb) | **0.130** | 0.325 | 0.564 |
| Torsions (sc) | **0.093** | 0.427 | 0.462 |
| Torsions (all) | **0.109** | 0.383 | 0.505 |
| TICA-0 | **0.230** | 0.265 | 0.538 |
| TICA-0,1 joint | **0.316** | 0.419 | 0.756 |
| MSM states | 0.235 | **0.222** | 0.414 |
| Runtime (s) | **60** | 599 | 2083 |
To evaluate the dynamics, we also compare the Pearson correlation of predicted torsional relaxation times (c.f. Figure 2F). We could not compute these quantities for ITO as the vast majority of torsions did not decorrelate within the ITO rollouts.
| C.V. | MDGen | Timewarp | ITO |
| :- | :-: | :-: | :-: |
| Torsion relaxation (bb) | **0.55** | 0.04 | --- |
| Torsion relaxation (sc) | **0.97** | 0.77 | --- |
Altogether, the results confirm that MDGen is more successful and efficient at modeling thermodynamics and kinetics than Timewarp or ITO.
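For reference, the Jensen-Shannon divergences reported above can be computed from discretized distributions (e.g., normalized torsion-angle histograms) with a short sketch like this; the base-2 version is bounded in [0, 1]:

```python
import math

def js_divergence(p, q):
    # Jensen-Shannon divergence (base 2) between two discrete
    # distributions p and q, each summing to one.
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical distributions give 0, and fully disjoint ones give 1, which is why the tabulated values fall between these extremes.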
**PIPS** We note that PIPS is a method for steering simulations towards a desired target state, whereas MDGen aims to learn from existing unsteered simulations in a transferable manner to sample transitions of unseen systems. Due to the difference in tasks, a direct comparison with PIPS is difficult to formulate.
**Additional tasks vs ablations for stability**
A central point of our paper is to highlight the novel capabilities afforded by our trajectory modeling framework. We chose several tasks out of reach of previous methods, and hence less studied in ML papers, but all of which we believe are scientifically meaningful. With that said, we acknowledge that different tasks will appeal to different readers, and appreciate your input about their relative importance.
We are happy to consider suggestions for further experiments for stability (time permitting) beyond the existing experiments you referenced.
**Difference between the downstream tasks**
Yes, upsampling could be seen as a superset of interpolation in terms of the conditioning input. However, their conceptual aims are somewhat different: In the interpolation setting, we choose endpoints in different (often distant) macrostates of the energy landscape to study how they are connected. On the other hand, in upsampling the conditioning frames are chosen based on timestep, irrespective of whether they exhibit interesting transitions.
**Training on one tetrapeptide**
Apart from the Hyena experiment (Section 4.4), we opted to focus on the transferable setting as we believe this is the arena in which ML-based MD emulators will be useful in practice. Additionally, since our model generates 10 ns at a time, we do not anticipate that learning from a single 100 ns trajectory would be meaningful, as this amounts to only 10 statistically independent training examples.
**Unconditional generation**
While it is true that the use of key frames impacts the ability to do unconditional generation, we nevertheless opted for this design choice as unconditional generation of trajectories is (to our knowledge) not a problem of scientific interest. On the other hand, the key frames considerably simplify and clarify the conditional modeling problems, leading to the strong results shown in our experiments. Hence, the use of key frames can instead be considered a technical insight that enables our model to contribute to real-world, impactful scientific problems, at the cost of problems of lesser interest.
**Minor typos, suggestions**
Thank you for noting these; we will incorporate them in the revision!
---
We hope the new discussion and results address your concerns! Please let us know if there are further opportunities to improve the score.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response and additional experiments. The additional experiments on forward simulation show better performance of MDGen over Timewarp and ITO, where prior works fail at longer simulation times. All questions have been resolved, and I have raised my score accordingly. | Summary: The authors propose MDGen -- a generative model to sample molecular dynamics trajectories conditioned on key frames. This is a direct application of video generation techniques to solve domain challenges in protein modeling. Specifically, SiT and flow matching models are used to sample SE(3)-invariant representations of all-atom protein structures. This work demonstrates the effectiveness of MDGen primarily on tetrapeptide systems, where the authors showcase four downstream application tasks including forward simulation, interpolation, upsampling, and dynamics-conditioned inpainting.
In general, I find this manuscript well-written and easy to follow. The model performance looks reasonable on tetrapeptides, yet the results are proof-of-concept in nature and generalization to larger proteins remains challenging. However, this is one of the pioneering works in AI protein modeling to directly emulate MD simulation trajectories using data-driven approaches. To that end, I think it would be beneficial for this work to gain visibility across the research community to inspire future studies.
Strengths: - It is one of the pioneering works to adopt video generation techniques for MD trajectory generation. Although conceptually straightforward, good practices for generating time-coherent MD trajectories across different protein systems remain underexplored.
- The authors demonstrated a variety of downstream tasks using the same model architecture. The underlying modeling framework seems versatile and transferable across different applications.
- The performance benchmarks and analysis on tetrapeptides are comprehensive and provide insights into modeling these peptide systems.
- I think it is a good idea to model residue offsets relative to the key frames in order to bypass the need to learn sequence-to-structure mapping. MD simulations always start from a seed structure, so I do not think this is a key limitation as mentioned in L#310-312.
Weaknesses: - Benchmark and evaluation results on tetrapeptides, although comprehensive, are proof-of-concept in nature. It may not be sufficient to demonstrate transferability to general protein systems.
- Performance on ATLAS (i.e., larger proteins instead of short peptides) does not seem promising. MDGen performance is worse than AlphaFlow in Table 4. I wonder if the main bottleneck is training data quality/availability, or model architecture?
Technical Quality: 3
Clarity: 3
Questions for Authors: - L#141, when $K > 1$, how to ensure roto-translation prediction consistency across $K$ key frames and obtain a final $\hat{g}_j$?
- Table 2, with 100 ns being the ground truth, the non-zero JSD in the last column originates from subsampling the simulation trajectory?
- Figure 2F. My understanding is that sidechains exhibit faster dynamics while backbone motions are slower. The low correlation for backbone suggests that MDGen is not good at learning slower dynamics, which are typically more interesting to researchers?
- Temporal coherence between generated protein conformations is mainly evaluated using auto-correlation in this work. Is it possible to show other metrics to capture detailed structural quality and variation during time evolution?
- Why is MDGen more effective at sequence recovery than MPNN? More explanation and analysis would be helpful here.
- Would it be possible to emulate MD simulation trajectory of the 12 fast folding proteins from [Shaw 2009](https://dl.acm.org/doi/abs/10.1145/1654059.1654126)? They are smaller than ATLAS proteins and longer than tetrapeptides, with much longer simulation time and rich dynamics.
- It would be nice to see if MDGen could infer a trajectory given an [apo/holo pair](https://arxiv.org/abs/2304.02198).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Limited to tetrapeptides**
We have focused on tetrapeptides as model systems in this work for two key reasons:
* We can run simulations for thousands of systems in order to properly test the generalization abilities of our model.
* We can build Markov State Models for each system, allowing careful benchmarking of forward simulation and interpolation capabilities.
Both of these aspects are important for the thorough, careful benchmarking of the new capabilities we demonstrate. We opted to prioritize these careful studies to support the core claims of our work, in lieu of expanding its scope to larger and more diverse systems, which we think are best left to future work.
**Comparison with AlphaFlow**
It is true that the results on ATLAS simulations are worse than AlphaFlow. However,
* MDGen learns the harder task of providing temporal consistency across structures, which is beyond the capabilities of AlphaFlow. However, our metrics are obtained from the AlphaFlow paper and only assess the ensemble similarity of unordered sets of conformations, favoring their method.
* To more concretely demonstrate learned temporal consistency, we now show an MDGen trajectory connecting apo and holo states of adenylate kinase in **Figure 4** in the PDF attached to the global response.
* AlphaFlow is a much larger model with significant transfer learning from O(10^5) structures via AlphaFold, and has a much better pretrained understanding about likely protein conformations. On the other hand, MDGen is trained from scratch on only O(1000) proteins.
With that said, we do not anticipate that this will be the final, definitive architecture for trajectory modeling of protein systems. Rather, our experiments serve as a proof-of-concept that these capabilities _can_ be demonstrated on proteins, even with the extremely limited available data. We anticipate that a combination of expanded datasets and further architectural exploration can improve results on ATLAS in future work.
**Additional structural quality and variation metrics**
We note that in addition to autocorrelation, our torsion angle distributional metrics and free energy surfaces also assess the quality of structures in the trajectory. With that said, we are happy to report additional metrics as requested.
To more stringently assess structural quality in MDGen forward simulation rollouts, we compute the distributions of
* The closest distance between nonbonded atoms
* Nonbonded energy (Coulomb + Lennard-Jones)
* Torsional energy
* Heavy atom bond lengths
* Radius of gyration
These distributions are shown and compared to the ground truth in **Figure 1** in the PDF attached to the global response. We find that the vast majority of MDGen structures are of high quality (i.e., clashes are rare) and adhere closely to the ground truth distributions.
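As a rough illustration of two of these metrics, a minimal NumPy sketch is given below (the function names and bonded-pair bookkeeping are our own assumptions, not the analysis code used for the figure):

```python
import numpy as np

def radius_of_gyration(coords, masses=None):
    """Mass-weighted radius of gyration for an (N, 3) coordinate array."""
    coords = np.asarray(coords, dtype=float)
    if masses is None:
        masses = np.ones(len(coords))
    masses = np.asarray(masses, dtype=float)
    center = np.average(coords, axis=0, weights=masses)
    sq_dist = np.sum((coords - center) ** 2, axis=1)
    return np.sqrt(np.average(sq_dist, weights=masses))

def closest_nonbonded_distance(coords, bonded_pairs):
    """Minimum pairwise distance over atom pairs that are not bonded.

    Small values flag steric clashes; bonded pairs are excluded since
    their short distances are expected.
    """
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)  # unique pairs only
    for i, j in bonded_pairs:
        mask[min(i, j), max(i, j)] = False
    return dists[mask].min()
```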
To assess structural variation across time, we report the aligned RMSD between frames spaced at regular intervals, and compare to the same metric computed from the reference simulation. This gives an idea of how fast the structures should be moving. We observe that the trajectories have dynamics that closely resemble the ground truth.
| Interval | MDGen | Reference MD |
|:-| :-: | :-: |
| 10 ps | 1.02 A | 0.99 A |
| 20 ps | 1.19 A | 1.17 A |
| 50 ps | 1.45 A | 1.44 A |
| 100 ps | 1.65 A | 1.67 A |
| 200 ps | 1.86 A | 1.89 A |
| 500 ps | 2.11 A | 2.18 A |
| 1000 ps | 2.28 A | 2.37 A |
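The aligned RMSD in this table can be sketched with a standard Kabsch alignment (an illustrative NumPy implementation; the actual tooling used for the analysis may differ):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) structures after optimal rigid alignment."""
    P = P - P.mean(axis=0)  # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    P_rot = P @ R.T
    return np.sqrt(np.mean(np.sum((P_rot - Q) ** 2, axis=1)))

def rmsd_vs_interval(traj, interval):
    """Mean aligned RMSD between frames spaced `interval` steps apart
    in a (T, N, 3) trajectory."""
    vals = [kabsch_rmsd(traj[t], traj[t + interval])
            for t in range(len(traj) - interval)]
    return float(np.mean(vals))
```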
Please let us know if you had other metrics in mind; we are happy to analyze the structures further!
**Other questions**
>L#141, when K>1, how to ensure roto-translation prediction consistency across K key frames and obtain a final g^j?
We predict all-atom coordinates using each set of key frames (and corresponding predictions) and average these coordinates.
>Table 2, with 100 ns being the ground truth, the non-zero JSD in the last column originates from subsampling the simulation trajectory?
Similar to the other columns, the MD baselines indicate replicate simulations, i.e., an independent 100 ns simulation.
>Figure 2F. My understanding is that sidechains exhibit faster dynamics while backbone motions are slower.
It is true that the slower dynamics are typically more interesting; unsurprisingly, they are also harder to simulate and learn. Our method does not perfectly recover these dynamics, but it is the first of its kind to report good results on this task.
>Why is MDGen more effective at sequence recovery than MPNN? More explanation and analysis would be helpful here.
MDGen is trained to use more input information about the peptide, namely the intermediate dynamics of the unmasked residues. Our inverse folding baselines are not able to make use of these partially observed intermediate structures.
>Would it be possible to emulate MD simulation trajectory of the 12 fast folding proteins from Shaw 2009?
The DESRES trajectories are unusual in that they were simulated at elevated temperatures to induce unfolding events. In terms of absolute displacement, these are vastly larger motions away from the starting structure than those seen in ATLAS simulations. As such, we do not think our architecture, with its dependence on key frames, would be optimal for modeling such trajectories. Earlier in the project, we considered implementing a distogram-based architecture for the DESRES proteins, but opted to focus on the transferable setting with ATLAS instead.
>It would be nice to see if MDGen could infer a trajectory given an apo/holo pair.
Thanks for the suggestion! We have newly trained an interpolation model on ATLAS data and visualize a trajectory between the apo and holo states of adenylate kinase (1AKE / 4AKE) in **Figure 4** of the PDF attached to the global response.
---
We hope the new discussion and results address your concerns! Please let us know if there are further opportunities to improve the score.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Dear authors,
Thanks for your response. Most of my concerns have been properly addressed. I think this pioneering work on generative modeling for protein/peptide dynamics should be shared with the community to inspire follow-up work. I have raised my scores accordingly. | Summary: The paper presents a new framework for generating trajectory of molecular geometries, ie, generative modeling for molecular dynamics. The paper proposes tokenization methods to tokenize the trajectory and learn flow models on the data. Experiments demonstrate the effectiveness of several tasks including forward sampling, interpolation, and up sampling.
Strengths: 1. The paper tackles a new problem in molecular dynamics generation, which has not been explored in existing literature.
2. The paper is in good structure and easy to follow.
3. The paper provides a detailed analysis of several domain tasks on interested molecular structures, which demonstrate the critical usage in some scenarios.
Weaknesses: 1. Limited ML technical contribution, as all components exist in previous molecular generative models.
2. The experiment is comprehensive from a domain perspective. However, I feel the experiments lack some benchmarking comparison with state-of-the-art molecular generative models for related tasks. See my question below.
Technical Quality: 3
Clarity: 3
Questions for Authors: I think existing methods can also tackle several tasks. For example, for the forward sampling task, previous generative MD models like Timewarp (Klein et al., 2024) and ITO (Schreiner et al., 2024) can also be used for the task. A numerical comparison with these baselines can help to justify the effectiveness of the proposed method.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitation is nicely discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Limited ML technical contribution**
In writing the paper, we opted to place more emphasis on the experimental results. Nonetheless, we respectfully disagree that our work has limited ML technical contribution.
We highlight that the following points are novel from a modeling perspective and, to our knowledge, have not been used in any prior molecular generative model:
* The reduced representation of residue all-atom coordinates into $\mathbb{R}^{21}$.
* The use of key frames to obtain SE(3)-invariant tokens from molecular trajectories.
* The use of scalable vanilla transformers for molecular generation while respecting the problem symmetries (i.e., without breaking equivariance like molecular conformer fields or AF3).
Indeed, as we write in the Introduction, our approach provides a novel strategy for how to circumvent the expensive architectures normally used for equivariant molecular modeling, a technical contribution that could be useful for many future works.
Additionally, we believe that the insight of using trajectory generation to solve inverse problems like interpolation or upsampling in itself constitutes methodological novelty, akin to the novelty appreciated in the many surprising applications of diffusion models in RL.
**Comparison with Timewarp and ITO**
We now provide new results comparing our work to Timewarp and ITO. Emphatically, these comparisons are limited to the forward simulation task as Timewarp and ITO are not capable of solving the other tasks.
* For **Timewarp**, we use the 4AA model with weights from the authors and sample 100 ns trajectories by running 2000 inference steps with timestep 50 ps. We do not use MH acceptance steps as the authors found exploration of the energy landscape to be much more effective without them.
* For **ITO**, transferable models across tetrapeptides are not available. We, therefore, train ITO ourselves on our tetrapeptide dataset with timesteps of 500 ps. We then run 200 inference steps to sample 100 ns trajectories.
For both methods, we observe that trajectories are unstable without further intervention (see **Figure 2** of the PDF attached to the global response). To bolster the baselines, we run OpenMM relaxation steps between each timestep to proceed with further analysis. We note that the Timewarp authors, in lieu of relaxation, rejected steps with an energy increase of 300 kJ / mol; however, we found that this strategy would reject the majority of proposed steps on generic tetrapeptides.
In **Figure 3** of the PDF attached to the global response, we visualize the free energy surfaces and torsion angle distributions for several peptides from Timewarp and ITO compared with MDGen. Our model obtains much better results across the board.
Quantitatively, we obtain the following Jensen-Shannon divergences from the forward simulation rollouts (c.f. Table 2):
|C.V.| MDGen | Timewarp | ITO |
|:-| :-: | :-: | :-: |
|Torsions (bb)| **0.130** | 0.325 | 0.564 |
|Torsions (sc) | **0.093** | 0.427 | 0.462 |
|Torsions (all) | **0.109** | 0.383 | 0.505 |
|TICA-0 | **0.230** | 0.265 | 0.538 |
|TICA-0,1 joint | **0.316** | 0.419 | 0.756 |
|MSM states | 0.235 | **0.222** | 0.414 |
|Runtime (s)| **60** | 599 | 2083 |
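For reference, the Jensen-Shannon divergences in this table can be estimated from torsion-angle samples roughly as follows (a base-2, histogram-based sketch; the bin count and conventions are our assumptions rather than the paper's exact pipeline):

```python
import numpy as np

def jensen_shannon_divergence(samples_p, samples_q, bins=100):
    """JSD (base 2, bounded in [0, 1]) between two sample sets,
    estimated via shared histograms over the torsion range [-pi, pi]."""
    edges = np.linspace(-np.pi, np.pi, bins + 1)
    p, _ = np.histogram(samples_p, bins=edges)
    q, _ = np.histogram(samples_q, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)  # mixture distribution

    def kl(a, b):
        # KL divergence, skipping empty bins of a (0 * log 0 := 0).
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```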
To evaluate the dynamics, we also compare the Pearson correlation of predicted torsional relaxation times (c.f. Figure 2F). We could not compute these quantities for ITO as the vast majority of torsions did not decorrelate within the ITO rollouts.
|C.V.| MDGen | Timewarp | ITO |
|:-| :-: | :-: | :-: |
|Torsion relaxation (bb)| **0.55** | 0.04 | --- |
|Torsion relaxation (sc)| **0.97** | 0.77 | --- |
Altogether, the results confirm that MDGen is more successful and efficient at modeling thermodynamics and kinetics than Timewarp or ITO.
---
We hope the new discussion and results address your concerns! Please let us know if there are further opportunities to improve the score. | Summary: In this work, the authors proposed MDGen, a new framework that aims to model molecular dynamics trajectories via generative modeling techniques. By properly encoding protein MD trajectories according to the characteristics of key frames, MDGen adopts flow matching techniques (both continuous and discrete flow matching) to generatively model MD trajectories. As a unified framework, MDGen is able to perform diverse tasks including forward simulation, interpolation, upsampling, and inpainting. Extensive experiments are conducted to demonstrate the effectiveness of MDGen.
Strengths: 1. The problem this work aims to tackle is of great significance in scientific domains like computational biology.
2. The formulation of molecular (protein) trajectories by using key frame references is reasonable and compact for reducing the modeling difficulties.
3. The experiments are comprehensive.
4. The paper is well-written and easy to follow.
Weaknesses: 1. Lack of discussion on related works. This work does not discuss related works on the same topic. Some works are mentioned in the Introduction section, but I still recommend that there should be an independent Related Works section for comprehensive discussion. Here are also several works that are worth discussing: (1) EGNO, which uses neural operator learning approach to also model the trajectory dynamics of molecules; (2) DiffMD, which uses diffusion models to simulate molecular dynamics. The quality of this work should be further improved if the authors could carefully discuss the differences between MDGen and these works and the strengths of MDGen compared to these works.
2. Lack of ablation studies. MDGen is composed of several parts, including the design of the backbone model, the design choices of flow matching framework, and the adoption of Hyena architecture for efficiency consideration. In addition to the aimed tasks, it would further improve the quality of this work if the authors could conduct ablation studies on these aspects to help readers know what the influence of each part of MDGen is.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors carefully discuss the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Further discussion on related works**
Thanks for mentioning these works. We are happy to discuss them in the revision alongside the existing related work in the Background which we will also expand as per your suggestion. Briefly:
* EGNO is a Fourier neural operator that, given a molecular structure and a time delta, *deterministically* predicts the structure after the time delta. The time delta is constrained to lie between 0 and a maximum time, which in practice is 5 fs; this is 20,000,000 times smaller than the duration of our trajectories in forward simulation. Notably, the possible structures after a time delta (in the absence of the initial velocity) form a distribution, which EGNO fails to model since it is not a generative model.
* DiffMD adds noise to molecular structures and denoises them to generate a future structure. Repeating the process autoregressively yields a trajectory.
When producing trajectories on timescales similar to MDGen's, both methods operate autoregressively, which limits them to forward simulation as an application and prevents past frames from being informed by future frames. Meanwhile, MDGen generates all frames in a trajectory jointly, which can improve temporal coherence and enables additional tasks such as interpolation, upsampling, and inpainting.
**Lack of ablation studies**
We now provide new experimental results ablating components of the method. The ablations are as follows:
* **No IPA embedding**: the model is not provided IPA embeddings of the key frames and amounts to a vanilla Scalable Interpolant Transformer
* **No $SE(3)$ invariance**: there is no SE(3)-invariant tokenization and the model operates directly over the frame representations $g_j$ rather than frame offsets $g_i^{-1}g_j$
* **No frames/torsions**: the model operates directly over all-atom coordinates rather than the reduced representation in $\mathbb{R}^{21}$
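For reference, the frame-offset construction removed in the second ablation can be illustrated with a minimal sketch showing why tokens built from $g_i^{-1}g_j$ are invariant to a global rigid motion (our own illustration, not the paper's code):

```python
import numpy as np

def frame_offset(R_i, t_i, R_j, t_j):
    """Relative rigid transform g_i^{-1} g_j for frames g = (R, t).

    Using the composition rules for rigid transforms,
    g^{-1} = (R^T, -R^T t), so
    g_i^{-1} g_j = (R_i^T R_j, R_i^T (t_j - t_i)).
    """
    R_rel = R_i.T @ R_j
    t_rel = R_i.T @ (t_j - t_i)
    return R_rel, t_rel
```

Applying a global rotation and translation to both frames leaves the offset unchanged, which is the invariance this ablation removes.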
We run these ablations on the forward simulation experiment and obtain the following Jensen-Shannon divergences (c.f. Table 2). For a fair comparison, we use the baseline model after training for the same number of epochs as the ablations. The ablations all perform worse than the baseline model.
|C.V.| Baseline | No IPA embedding | No $SE(3)$ invariance | No frames/torsions |
|:-| :-: | :-: | :-: | :-: |
|Torsions (bb)| **0.161** | 0.339 | 0.195 | 0.537 |
|Torsions (sc) | **0.106** | 0.249 | 0.262 | 0.502 |
|Torsions (all) | **0.130** | 0.287 | 0.233 | 0.517 |
|TICA-0 | **0.245** | 0.375 | 0.298 | 0.510 |
|TICA-0,1 joint | **0.345** | 0.500 | 0.416 | 0.657 |
|MSM states | **0.237** | 0.250 | 0.278 | 0.408 |
---
We hope the new discussion and results address your concerns! Please let us know if there are further opportunities to improve the score.
---
Rebuttal 2:
Comment: Thank you for your clarifications. Most of my concerns have been addressed. I choose to keep my positive rating. | Rebuttal 1:
Rebuttal: # Overall Response
We thank all reviewers for their time taken in providing constructive feedback!
In addition to the individual responses, we also provide new **figures and visualizations in the PDF file** attached to this global response.
* In Figure 1, we compare the distributions of additional **structural and energy metrics** (e.g., clashes) from MDGen rollouts and from reference MD simulations, as suggested by **Reviewers QAiG** and **hVZ7**. We find that the vast majority of MDGen structures are of high quality and closely resemble the ground truth distribution on these metrics.
* In Figures 2 and 3, we provide **comparisons with Timewarp and ITO** as requested by **Reviewers zaG3** and **b8Jp**. In Figure 2, we show that Timewarp and ITO suffer from unstable inference rollouts out-of-the-box on generic tetrapeptides. However, we correct for these instabilities ourselves to obtain a meaningful comparison. In Figure 3, we visualize free energy surfaces and torsion angle distributions obtained from Timewarp and ITO compared with MDGen. We find that MDGen obtains better results across the board.
* In Figure 4, we visualize an **interpolation path between apo / holo states** of adenylate kinase (1AKE / 4AKE) as suggested by **Reviewer hVZ7**, obtained from a newly trained ATLAS interpolation model.
Pdf: /pdf/052b8743fe2470dd0617a299d119c48ad8db90b9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents a novel generative model for molecular dynamics (MD) trajectories called MDGEN. This model aims to serve as a flexible surrogate for MD simulations by generating entire trajectories conditioned on initial frames. It addresses tasks such as forward simulation, transition path sampling, trajectory upsampling, and dynamics-conditioned molecular design. The model is evaluated on tetrapeptide simulations and demonstrates its capability to generate reasonable ensembles of protein monomers.
Strengths: Novelty and Scope: The approach introduces a novel paradigm for surrogate modeling of MD, extending the capabilities of existing models to handle a variety of tasks that are not straightforward with current methods.
Generative Framework: The use of generative modeling for entire MD trajectories is a significant advancement, as it allows for a broader range of applications including forward and inverse problems.
Comprehensive Evaluation: The paper evaluates MDGEN on several tasks, demonstrating its effectiveness in forward simulation, interpolation, upsampling, and inpainting. The results show promising performance in terms of distributional similarity, dynamical content, and computational efficiency.
Technical Implementation: The detailed description of the tokenization process and the flow model architecture provides a clear understanding of how the model operates. The use of SE(3)-invariant tokens and the scalable interpolant transformer (SiT) backbone are well-motivated choices.
Weaknesses: Complexity and Accessibility: The model’s complexity might pose challenges for reproducibility and accessibility for researchers who are not deeply familiar with both molecular dynamics and advanced generative modeling techniques.
Evaluation on Larger Systems: While the paper provides proof-of-concept evaluations on proteins, the primary focus remains on smaller tetrapeptides. The model's scalability and effectiveness on larger and more complex molecular systems need further exploration.
Dependence on Key Frames: The reliance on key frames for conditional generation limits the model’s ability to perform unconditional generation or inpainting of residue roto-translations, which could be a significant limitation in certain applications.
Computational Resources: The paper lacks detailed information on the computational resources required for training and inference, which is crucial for understanding the practical implications of using MDGEN in various research settings.
Technical Quality: 3
Clarity: 3
Questions for Authors: How can the model be adapted or improved to reduce its reliance on key frames?
Exploring techniques for unconditional generation or alternative ways to handle the roto-translations without predefined key frames could enhance the model's flexibility and applicability.
What architectural changes or enhancements could improve the model's performance on larger molecular systems such as proteins?
Investigating more scalable architectures or hybrid approaches that combine the current method with other techniques tailored for large systems could address this limitation.
How does the computational cost of training the model compare to traditional MD simulations, and what are the implications for its practical use?
Providing detailed information on computational requirements and potential optimizations could help in assessing the model's feasibility for widespread use.
What alternative tokenization strategies could be explored to extend the model's applicability to a wider range of molecular systems?
Research into tokenization methods that can handle diverse molecular structures and dynamics could broaden the model's utility.
How can additional conditioning types (e.g., textual descriptions, experimental data) be incorporated into the model, and what benefits might they provide?
Experimenting with and integrating various forms of conditioning could enhance the model's ability to generate more accurate and contextually relevant trajectories.
What are the potential impacts of data quality and availability on the model's performance, and how can these challenges be mitigated?
Addressing data-related challenges through techniques like data augmentation, transfer learning, or synthetic data generation could improve the model's robustness and applicability.
Can additional evaluation metrics be developed to provide a more comprehensive assessment of the generated trajectories' quality?
Identifying and implementing new evaluation criteria could offer deeper insights into the strengths and limitations of the model's output.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Reliance on Key Frames:
The model relies on key frames for parameterizing the roto-translations, which means it cannot perform unconditional generation or inpainting of residue roto-translations. This dependency might limit its applicability to scenarios where key frames are not easily obtainable or where full trajectories need to be generated from scratch.
Scalability to Larger Systems:
The architecture shows weaker performance on larger systems such as proteins compared to smaller peptides. This suggests that the current model and architecture might not be well-suited for handling the complex motions and larger size of protein structures without further modifications or enhancements.
Computational Resources:
While the paper mentions significant speedups compared to traditional MD simulations, the computational resources required for training the model (e.g., GPU hours) are not explicitly discussed. This information is crucial for understanding the practicality and scalability of the approach.
Generalization to Diverse Systems:
The current tokenization and modeling strategies are tailored to peptides and proteins. For more diverse molecular systems such as organic ligands, materials, or explicit solvent systems, alternative tokenization strategies might be necessary. This limits the immediate applicability of the model to a broader range of molecular simulations.
Limited Exploration of Additional Conditioning:
The paper primarily explores conditioning on initial frames and residue identities. Other types of conditioning, such as textual or experimental descriptors, are not explored but could open up further applications and improve the model's utility.
Data Availability and Quality:
The success of the model heavily depends on the availability of high-quality MD trajectory data. For many complex systems, obtaining such data can be challenging, which could limit the model's applicability and performance.
Evaluation Metrics:
While the paper uses several rigorous metrics for evaluation, the choice of metrics may not fully capture all aspects of the generated trajectories' quality. Additional metrics or more diverse evaluation criteria could provide a more comprehensive assessment.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Complexity and accessibility**
We have aimed to provide a clear and reproducible method and exposition accessible to the average reader familiar with molecular machine learning. We aimed to make modeling choices that were as simple as possible, i.e.,
* using a simple stochastic interpolants framework rather than diffusion
* using a vanilla unmodified transformer from previous work rather than complex frame- or pair-based architectures
* generating invariant tokens over Euclidean space rather than diffusing over the space of frames and tokens.
To promote reproducibility, we have
* provided pseudocode for our architecture in Appendix A
* provided experimental details sufficient for replication, along with conceptual justifications for design choices and metrics, in Appendix B
* and will provide full source code in the revision
While our work bridges molecular dynamics and generative models, it should not require a deep background in either as we have distilled the essential aspects of the relevant literature in Section 2. With that said, if there are any specific aspects of the method that remain unclear despite our best efforts, please let us know.
**Reliance on key frames**
While it is true that the use of key frames impacts the ability to do unconditional generation, we nevertheless opted for this design choice, as unconditional generation of trajectories is (to our knowledge) not a problem of scientific interest. On the other hand, the key frames allow the conditional modeling problems to be considerably simplified and clarified, leading to the strong results shown in our experiments. Hence, the use of key frames could instead be considered a technical insight of our model that enables it to contribute to real-world, impactful scientific problems, at the cost of problems of lesser interest.
**Larger and more diverse systems**
We have focused on tetrapeptides as model systems in this work for two key reasons:
* We can run simulations for thousands of systems in order to properly test the generalization abilities of our model.
* We can build Markov State Models for each system, allowing careful benchmarking of forward simulation and interpolation capabilities.
Both of these aspects are important for the thorough, careful benchmarking of the new capabilities we demonstrate. We opted to prioritize these careful studies to support the core claims of our work, in lieu of expanding its scope to larger and more diverse systems, which we think are best left to future work.
**Conditioning**
We agree that further types of conditioning would be exciting to explore. However, the conditional settings we have already provided represent significant conceptual shifts relative to existing work in learning surrogate models of molecular dynamics. Additional types of conditioning would further expand the technical scope of the work, and the ones we mention (i.e., text conditioning) are highly speculative with large uncertainty with regards to scientific utility, at least at present. With that in mind, it is not clear to us which additional settings are being referenced as weaknesses or limitations. If there are specific areas in mind, please let us know.
**Data availability and quality**
We acknowledge that MD data is required to train the method, and obtaining long trajectories can be time-consuming. However, compared to molecular ML modalities requiring _experimental_ data, such as those that train on crystal structures, it is much easier to obtain high-quality data by running simulations than by running wet-lab experiments. Hence, works like ours that train on _simulated_ data actually help mitigate the challenges of data availability and quality in molecular machine learning.
**Evaluation metrics**
We have designed and implemented thorough and principled metrics that assess the ability of our model to produce trajectories:
* With good distributional properties over structures, assessed by the Jensen-Shannon divergences and Markov state occupancies
* With good dynamical properties over motions, assessed by the various transition path metrics, the decorrelation times, flux matrix correlations, autocorrelation functions, and dynamical content.
While other metrics are certainly possible, it is not clear to us which areas are concretely being referenced as weaknesses or limitations of the current evaluations. If there are specific areas in mind, please let us know.
**Computational resources**
We report our training time for each model as follows, measured on A6000 GPUs:
* Forward simulation: 412 GPU-hrs
* Interpolation: 272 GPU-hrs
* Upsampling: 292 GPU-hrs
We have reported inference time comparisons with MD simulations in Table 2 on the forward simulation tasks, all on one A6000 GPU. Our method is much faster than the baseline simulations, and much more accurate than an abridged simulation of the same wall-clock time.
Emphatically, the training time is amortized across all systems since our method is _transferable_ by design. That is, for the fixed training cost we obtain a set of models that can be applied, with the reported accuracy, to _any_ tetrapeptide system at test time. Thus, our model training offers a clear and overwhelming advantage over running long MD simulations for individual tetrapeptide systems.
---
We hope the new discussion and results address your concerns! Please let us know if there are further opportunities to improve the score.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarifications
Comment: Some of my concerns have been addressed. I raised my score. | Summary: The authors introduce MDGen as a novel approach for modeling MD trajectories. They demonstrate the capabilities of this method in tasks such as interpolation, upsampling, and inpainting of small peptides. The accuracy as well as speed of the new approach compared to the ground truth baseline is quantitatively evaluated. Initial experiments toward upscaling to small proteins are shown.
Strengths: The idea of MDGen is novel and very well presented in this manuscript. The results are convincing and interesting.
Weaknesses: 1. Parts of Sections 3.1 and 3.2 are very condensed and hard to follow. A more detailed description in the SI would be helpful, where the most important aspects of the cited work are also repeated.
2. The suitability of the chosen representation for longer amino acid chains is questionable. This is also mentioned in the manuscript, but nonetheless, proteins are mentioned many times (more than 30) in the manuscript, while almost all experiments are actually performed on very small peptides. It should be stated in a more prominent place that upscaling to proteins is not trivial.
3. The representation limits the model to learn MD trajectories of natural amino acids, as no all-atom representation is used directly. This should be made clearer in the manuscript.
Minor points: A lot of figures have no proper axis labels (e.g. Fig 3, 4, 5, 6). This should be fixed. The best models in Table 4 should be indicated in bold.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. How often do clashes and other high-energy structures occur in the generated trajectories?
2. When comparing to other methods and approaches in the experimental section - do all of them use a similar reduced representation or do the other methods generate all-atom representations?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The approach is limited to peptides. Transfer to any other molecules is questionable due to a lack of suitable representation/tokenization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Parts of Section 3 hard to follow**
Due to space limitations, the description of our method in Section 3 was indeed a bit condensed. We will expand the exposition with the extra page allotted in the revision.
**Suitability for proteins and non-AA molecules**
We sought to qualify our results on protein simulations and apologize if a different impression was conveyed. In the revision we will increase emphasis that the focus of the paper is on peptides and remove prominent mention of proteins from the abstract and introduction, as suggested.
With that said, we have focused on tetrapeptides as model systems in this work for two key reasons:
* We can run simulations for thousands of systems in order to properly test the generalization abilities of our model.
* We can build Markov State Models for each system, allowing careful benchmarking of forward simulation and interpolation capabilities.
Both of these aspects are important for the thorough, careful benchmarking of the new capabilities we demonstrate. We opted to prioritize these careful studies to support the core claims of our work, in lieu of expanding its scope to larger and more diverse systems, which we think are best left to future work.
**Clashes in the generated structures**
To assess the frequency of clashes or high-energy structures in MDGen forward simulation rollouts, we compute the distributions of
* The closest distance between nonbonded atoms
* Nonbonded energy (Coulomb + Lennard-Jones)
* Torsional energy
* Heavy atom bond lengths
* Radius of gyration
These distributions are shown and compared to the ground truth in **Figure 1** in the PDF attached to the global response. We find that the vast majority of MDGen structures are of high quality (i.e., clashes are rare) and adhere closely to the ground truth distributions.
**Representations of other methods**
The peptide representations of the other methods are summarized as follows:
* For most evaluations we compare to Markov state models, which emit discretized representations of the trajectory. These are much more coarse-grained than our reduced representation.
* AlphaFlow emits residue frames and torsion angles, an equivalent representation to ours.
* The inpainting baselines are inverse folding models and do not emit trajectories of any kind.
* Ground truth molecular dynamics provides all-atom trajectories.
**Minor points**
Thank you for noting these; we will incorporate them in the revision.
---
We hope the new discussion and results address your concerns! Please let us know if there are further opportunities to improve the score.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. This answers all my questions and I encourage that all updates announced here will be integrated in the final version. As the changes do not change the main message and claims, I will keep my score as it is, so I continue to support acceptance of the paper. | null | null | null | null |
Toward a Stable, Fair, and Comprehensive Evaluation of Object Hallucination in Large Vision-Language Models | Accept (poster) | Summary: This paper explores the stable evaluation of object hallucinations, which is a crucial challenge in large vision-language models. The authors provide the first systematic analysis of the underlying mechanism through which instructions affect hallucinations, based on comprehensive experiments. They report a linear correlation between the length of descriptions and the levels of object hallucinations. Furthermore, the authors propose a curve-based framework that incorporates description lengths to enable a stable evaluation of hallucinations. What I find particularly novel is that the slope of the curve is incorporated as a metric, which achieves a more comprehensive evaluation.
Strengths: 1. This work might provide valuable insights to the community. Firstly, while the impact of instructions on hallucinations is widely recognized, this work unveils a crucial aspect by demonstrating that instructions exert their influence through the modification of description lengths. This finding illuminates the previously unexplored mechanism underlying instruction-affected hallucinations. Secondly, they employ a curve-based evaluation method instead of relying solely on a single metric, which goes a new way in addressing hallucination evaluation. Thus, this work has the potential to inspire further research and exploration in hallucination evaluation.
2. The proposed curve-based hallucination evaluation method in this paper is intuitively reasonable, and the author provides substantial experimental evidence to support the motivation behind this method. The experimental results are clearly presented, and the corresponding analyses further enhance the persuasiveness of this work. Overall, the combination of the intuitive approach, extensive experiments, clear presentation of results, and insightful analyses makes this work convincing.
Weaknesses: 1. The proposed method realizes consistent evaluation by calculating the hallucination rate at a uniform length. However, the length distributions of descriptions generated by different LVLMs exhibit variations. In other words, some models tend to produce shorter descriptions while others generate longer ones. In light of this, I have concerns regarding the ability of this method to maintain its effectiveness under such circumstances.
2. In my view, the hallucination evaluation of an LVLM in practice requires a large instruction set that can simulate real-world applications of the LVLM. If the authors can build such a large instruction set as the benchmark, it would be a significant contribution to the community.
3. The authors claim that their proposed evaluation method is fairer compared to other evaluation methods. However, the paper appears to lack experimental results to support this assertion.
4. The analysis of the stability of the “LeHaCE_GR” is lacking.
5. The selection of instructions may have a substantial impact on the fitted curve. It would be beneficial for the authors to provide further discussion on this aspect.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Considering that shorter descriptions tend to have fewer hallucinations, have the authors explored the possibility of generating multiple concise descriptions with distinct focuses for the same image, and subsequently merging them into a comprehensive and detailed description?
2. What factors determine the slope of the length-hallucination curve for the model?
3. Since the authors introduce the slope of the length-hallucination curve as a valuable evaluation metric, it raises the question of what the intercept of the curve signifies. Is it feasible to incorporate the intercept into the evaluation framework?
4. Why does the average length of the image description generated by the Otter model, specifically under instruction I12, amount to only 2? Is there any misunderstanding here?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The paper briefly mentioned limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**W1:** The proposed method realizes consistent evaluation by calculating the hallucination rate at a uniform length. However, the length distributions of descriptions generated by different LVLMs exhibit variations. In other words, some models tend to produce shorter descriptions while others generate longer ones. In light of this, I have concerns regarding the ability of this method to maintain its effectiveness under such circumstances.
**Response to W1:** Thanks for your comment. In fact, calculating the hallucination rate at each model's respective average length is the practice of the average-based framework, not of our method. When the output lengths of different models are inconsistent, the average-based framework indeed suffers from evaluation inconsistency and unfairness. In our LeHaCE framework, by constructing the length-hallucination curve, we can evaluate the hallucination degree of different models at a specified uniform description length, thereby improving the consistency and fairness of the evaluation.
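To make the curve-based idea concrete, here is a minimal sketch (with made-up numbers, not the paper's actual code): fit a linear length-hallucination curve for each model from per-instruction (average length, hallucination rate) pairs, then compare all models at the same description length.

```python
import numpy as np

def lehace_score(lengths, halluc_rates, eval_length):
    # Fit a linear length-hallucination curve (least squares) and
    # evaluate it at a uniform description length; the slope itself
    # measures how quickly hallucination grows with length.
    slope, intercept = np.polyfit(lengths, halluc_rates, deg=1)
    return slope * eval_length + intercept, slope

# Two hypothetical models with different output-length tendencies;
# evaluating both at length 50 removes the length bias.
score_a, slope_a = lehace_score([20, 40, 60], [0.10, 0.20, 0.30], eval_length=50)
score_b, slope_b = lehace_score([60, 80, 100], [0.25, 0.30, 0.35], eval_length=50)
```

With these toy numbers, model B scores lower at the common length (0.225 vs. 0.25) even though its raw hallucination rates look higher, which is exactly the length bias the response describes.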
***
>**W2:** In my view, the hallucination evaluation of a LVLM in practical requires a large instruction set that could simulate real-world applications of the LVLM. If the authors can build such a large instruction set as the benchmark, it would yield a significant contribution to the community.
**Response to W2:** Thank you for your constructive suggestion. We will give it serious consideration.
***
>**W3:** The authors claim that their proposed evaluation method is fairer compared to other evaluation methods. However, the paper appears to lack experimental results to support this assertion.
**Response to W3:** Thanks for your comment. In Figure 4 of our paper, we present real data to visually demonstrate the fairness of our LeHaCE method. Our LeHaCE method constructs a length-hallucination curve to evaluate the hallucination levels of LVLMs at a uniform description length. This effectively mitigates the issue of length bias in hallucination level evaluation, providing a fairer evaluation.
***
>**W4:** The analysis of the stability of the “LeHaCE_GR” is lacking.
**Response to W4:** Following your suggestion, we supplemented an experiment on the stability of LeHaCE_GR. The results are reported in Table 1 of the author rebuttal PDF. From the results, we can observe that the consistency of LeHaCE_GR increases with the number of instructions.
***
>**W5:** The selection of instructions may have a substantial impact on the fitted curve. It would be beneficial for the authors to provide further discussion on this aspect.
**Response to W5:** Thanks for your valuable question. Regarding the selection of instructions, a set of instructions that produces significant differences in the model's output lengths will aid in fitting the length-hallucination curve.
***
>**Q1:** Considering that shorter descriptions tend to have fewer hallucinations, have the authors explored the possibility of generating multiple concise descriptions with distinct focuses for the same image, and subsequently merging them into a comprehensive and detailed description?
**Response to Q1:** Thank you for your insightful suggestion. We have indeed tried this method. Specifically, we first had the LVLMs list the objects in the image, then prompted the LVLMs to generate descriptions for these objects individually, and finally summarized the information using the LVLM. However, due to the limited ability of LVLMs in listing objects and summarizing, this approach did not yield ideal results. Utilizing multimodal agent technology to achieve this idea with multiple different models is our future research direction.
***
>**Q2:** What factors determine the slope of the length-hallucination curve for the model?
**Response to Q2:** We believe that various factors influence the length-hallucination curve of LVLMs, such as the training data, the visual encoder and the language model in LVLMs.
***
>**Q3:** Since the authors introduce the slope of the length-hallucination curve as a valuable evaluation metric, it raises the question of what the intercept of the curve signifies. Is it feasible to incorporate the intercept into the evaluation framework?
**Response to Q3:** We believe that the intercept of the length-hallucination curve is not practically meaningful. This is because the output lengths of large vision-language models (LVLMs) are always positive, and for image descriptions, they typically contain at least a dozen words. Given this context, the intercept holds little practical significance.
***
>**Q4:** Why does the average length of the image description generated by the Otter model, specifically under instruction I12, amount to only 2? Is there any misunderstanding here?
**Response to Q4:** This is because the Otter model always returns the names of the objects in the image rather than a description of the image when given the I12 instruction. | Summary: This work aims to establish a stable, fair, and comprehensive evaluation method for object hallucinations in large vision-language models. The authors discovered a positive correlation between the length of image descriptions and the degree of object hallucination. Building upon this observation, they developed a hallucination evaluation method named LeHaCE by fitting a length-hallucination curve. LeHaCE enables the evaluation at any given image description length, ensuring stability and fairness in the evaluation process. Additionally, LeHaCE involves the curve slope as a metric to evaluate the influence of image description length on the degree of object hallucination, thereby achieving a comprehensive evaluation. The motivation behind this work is reasonable, and the authors provide many experiments to support their claims. However, it is worth considering that the use of the linear fitting scheme, although straightforward, does somewhat diminish the novelty of the proposed method.
Strengths: The experimental analysis conducted on instructions and hallucination is compelling and provides strong support for the main argument that the hallucination degree is positively correlated with the length of the description. While previous research (Yifan et al., 2023) has already shown the influence of instructions on hallucinations, this work takes it a step further by proposing that instructions indirectly influence hallucinations through the length of image descriptions. This sheds light on the reason behind the limitations of previous approaches that relied on average-based methods. Overall, this paper offers valuable insights into the evaluation of consistent hallucinations.
Weaknesses: 1. Although the rationale behind the length-hallucination curve is compelling, it is fitted using a relatively simplistic linear approach. Exploring more flexible and intricate fitting approaches is worth considering, as it has the potential to achieve higher fitting accuracy and more effective hallucination evaluation.
2. Since the proposed method relies on a fitted curve, it needs at least two instructions to evaluate LVLMs and cannot be used with just one instruction. The authors should discuss this limitation.
3. Lack of in-depth discussion on the shortcomings of the proposed method. For instance, as shown in Table 2, why does LeHaCE exhibit poor stability on a few LVLMs when the number of instructions is three?
4. It seems that the selection of instructions might affect the stability of LeHaCE. It would be helpful to include more discussion on this aspect.
5. The current paper seems to have lots of results and experiments. As a reader, it is not very easy for me to get the main conclusion for each experiment. It would be good to highlight the conclusions so that the readers can understand the point easier.
6. Some typos need to be corrected: Line 79: lrv-instruction -> LRV-instruction. Line 92 Nope -> NOPE. Line 81 chatgpt -> ChatGPT. Table 2: Minigpt-4 -> MiniGPT-4.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Does the complexity of the image content, such as the number of objects, influence the extent of hallucination in the model? It would be valuable to investigate additional factors that impact hallucination degrees.
2. Intuitively, the average-based framework can also be effective as long as there are enough instructions, such as 200 instructions. I'm wondering if this viewpoint is accurate?
3. Is the relative standard deviation an appropriate approach to evaluate stability, considering that stability in this context essentially refers to the consistency of multiple evaluation results?
4. Why does this work exclusively focus on object hallucinations? Is this a choice made by the authors or a limitation of the proposed method?
5. In Figure 5, why does LeHaCE show higher instability on LLaVA and Qwen-VL when the image description length is less than 20 words?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The paper includes a simple discussion of the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** Although the rationale behind the length-hallucination curve is compelling, it is fitted using a relatively simplistic linear approach. Exploring more flexible and intricate fitting approaches is worth considering, as it has the potential to achieve higher fitting accuracy and more effective hallucination evaluation.
**Response to W1:** Following your suggestion, we supplement a comparative experiment for LeHaCE based on linear fitting versus quadratic and cubic polynomial fitting. The results are reported in Table 3 of the author rebuttal pdf. These results show that linear fitting is significantly superior to polynomial fitting, especially when the number of instructions is small.
***
>**W2:** Since the proposed method relies on a fitted curve, it needs at least two instructions to evaluate LVLMs and cannot be used with just one instruction. The authors should discuss this limitation.
**Response to W2:** In the typical practice of evaluating hallucination levels in LVLMs, multiple instructions are usually used to enhance the stability of the evaluation results. Although LeHaCE cannot be used with just one instruction, this limitation does not affect its ability to provide stable evaluations. We will discuss this limitation in the final version.
***
>**W3:** Lack of indepth discussion on the shortcomings of the proposed method. For instance, as shown in Table 2, why does LeHaCE exhibit poor stability on a few LVLMs when the number of instructions is three?
**Response to W3:** Thank you for your valuable suggestions. LeHaCE exhibits instability with a low number of instructions because such a small sample is insufficient for an adequate fit of the length-hallucination curve. However, when the number of instructions is increased to five or more, LeHaCE consistently demonstrates improved stability.
***
>**W4:** It seems that the selection of instructions might affect the stability of LeHaCE. It would be helpful to include more discussion on this aspect.
**Response to W4:** Regarding the selection of instructions, a set of instructions that produces significant differences in the model's output lengths will aid in fitting the length-hallucination curve.
***
>**W5:** The current paper seems to have lots of results and experiments. As a reader, it is not very easy for me to get the main conclusion for each experiment. It would be good to highlight the conclusions so that the readers can understand the point easier
**Response to W5:** We will revise the paper to highlight the main conclusions of each experiment, making it easier for readers to grasp the key points.
***
>**W6:** Some typos need to be corrected: Line 79: lrv-instruction -> LRV-instruction. Line 92 Nope -> NOPE. Line 81 chatgpt -> ChatGPT. Table 2: Minigpt-4 -> MiniGPT-4.
**Response to W6:** Thanks for your careful review and we will revise these typos in our final version.
***
>**Q1:** Does the complexity of the image content, such as the number of objects, influence the extent of hallucination in the model? It would be valuable to investigate additional factors that impact hallucination degrees.
**Response to Q1:** Following your suggestion, we supplemented an experiment on the correlation between the hallucination rates and the number of objects in the images for the Gemini, LLaVA, and MiniGPT-4 models on the COCO dataset; the correlations were 0.08, -0.12, and 0.06, respectively. This indicates that the number of objects has almost no correlation with the hallucination degree of the models. Additionally, we found that LVLMs are more prone to hallucinations on some black-and-white images, suggesting that the style or domain of the images can affect the hallucination degree of LVLMs.
***
>**Q2:** Intuitively, the average-based framework can also be effective as long as there are enough instructions, such as 200 instructions. I'm wondering if this viewpoint is accurate?
**Response to Q2:** No. Considering that different LVLMs produce outputs of varying lengths for the same instructions, increasing the number of instructions still cannot alleviate the length bias of the average-based framework.
***
>**Q3:** Is the relative standard deviation an appropriate approach to evaluate stability, considering that stability in this context essentially refers to the consistency of multiple evaluation results?
**Response to Q3:** We believe that the Relative Standard Deviation (RSD) is appropriate because the means of different metrics vary, making direct comparisons of standard deviations meaningless. Using RSD as the evaluation metric eliminates the impact of the mean.
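As a sketch, the RSD discussed here is just the standard deviation normalized by the mean (the coefficient of variation); the numbers below are illustrative:

```python
import statistics

def relative_std_dev(values):
    # RSD = sample standard deviation / mean; normalizing by the mean
    # lets metrics on different scales be compared for stability.
    return statistics.stdev(values) / statistics.mean(values)

# Same absolute spread, different means: the larger-mean metric is
# relatively more stable, which plain standard deviation would hide.
rsd_small_mean = relative_std_dev([10, 12, 14])
rsd_large_mean = relative_std_dev([100, 102, 104])
```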
***
>**Q4:** Why does this work exclusively focus on object hallucinations? Is this a choice made by the authors or a limitation of the proposed method?
**Response to Q4:** Object hallucination is the most common type of hallucination in LVLMs. Following your suggestion, we evaluate the relation hallucinations and attribution hallucinations in image descriptions generated by Qwen-VL and InternLM-XComposer under 25 instructions. We conducted a model-based evaluation using Gemini-1.5-flash. The results are shown in Figure 2 of the Author Rebuttal attachment, from which we can observe that the degree of relation and attribution hallucination increases with the length of the descriptions. This demonstrates that relation hallucination and attribution hallucination are also influenced by length bias, suggesting that our findings and methods are applicable to these types of hallucinations as well. Expanding to other tasks will be the focus of our future work.
***
>**Q5:** In Figure 5, why does LeHaCE show higher instability on LLaVA and Qwen-VL when the image description length is less than 20 words?
**Response to Q5:** LLaVA and Qwen-VL generally produce long outputs, with minimum lengths of 19 and 17 words, respectively (shown in Figure 3). This causes greater fitting deviations in LeHaCE's length-hallucination curve at shorter lengths, resulting in poorer consistency.
Strengths: - The observation is intuitive and validate by extensive experiments
- The paper is clearly written and easy to follow
- The evaluation is comprehensive in terms of numerous instructions and LVLMs
Weaknesses: - Although the paper observes a linear relation between the length of the image description and object hallucination, there are still unanswered questions regarding the justification of this claim. Please see the questions below.
- Some minor inconsistent typos, for example, AEF vs. ABF in Figure 4.
- The evaluation only uses CHAIR scores; scores for other aspects are not evaluated, for example, the detail or the coverage of the real objects in the description, as in AMBER.
Technical Quality: 2
Clarity: 3
Questions for Authors: - The paper grouped 25 different instructions into 5 instruction sets. What's the grouping strategy? How did the authors group these instructions?
- The paper claimed that object hallucination is primarily influenced by the length of image descriptions, with instructions only indirectly affecting hallucinations through their effect on description lengths. How is this claim validated? Specifically, how do the authors validate that the length of the image description is the primary cause, and that instructions are not also affecting hallucinations indirectly through some hidden factors? The observation could be due to spurious correlation.
- Does the increased length of the image description also capture more real objects, or does it mainly consist of rephrasing and hallucinatory sentences?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes, the author adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**W1:** Although the paper observes a linear relation between the length of the image description and object hallucination, there are still unanswered questions regarding the justification of the claim. Please see the questions below.
>**Q2:** The paper claimed that object hallucination is primarily influenced by the length of image descriptions, with instructions only indirectly affecting hallucinations through their effect on description lengths. How is this claim validated? Specifically, how do the authors validate that the length of the image description is the primary cause, rather than also affecting hallucinations indirectly through some hidden factors? The observation could be due to a spurious correlation.
**Response to W1 and Q2:** Thanks for your insightful comment. Intuitively, longer descriptions are more prone to hallucinations because the latter parts of a description can be influenced by the earlier parts, leading to cumulative hallucinations. Experimentally, we conducted extensive experiments across different models, datasets, and decoding strategies, and consistently observed a strong correlation between output length and the degree of hallucination. Theoretically establishing the causal relationship between output length and the degree of hallucination will be a future direction of our work.
***
>**W2**: Some minor inconsistent typos, for example, the AEF and ABF in Figure 4.
**Response to W2:** Thanks for your careful review and we will revise these typos.
***
>**W3**: The evaluation only uses CHAIR scores; other aspects are not evaluated, for example, the detail or the coverage of real objects in the description, as in AMBER.
>**Q3:** Does the increased length of the image description also capture more real objects, or does it mainly consist of rephrasing and hallucinatory sentences?
**Response to W3 and Q3:** Thanks for your valuable comment. The increased length of the image description also captures more real objects. Following your suggestion, we analyzed the Coverage Ratio of image descriptions generated by 12 LVLMs under 25 instructions, defining the Coverage Ratio as |{real objects in description}|/|{all real objects in figure}|. The results are shown in Figure 1 of the Author Rebuttal attachment, from which we can observe that longer image descriptions result in a higher Coverage Ratio, capturing more real objects. This suggests that the Coverage Ratio is also influenced by length bias.
Furthermore, we evaluated the stability of the Coverage Ratio when applying LeHaCE and the average-based framework. The results are in Table 2 of the author rebuttal pdf. From the results, we can observe that LeHaCE demonstrates greater stability compared to the average-based framework, and its stability further improves with the addition of more instructions.
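For concreteness, the Coverage Ratio defined above can be sketched in a few lines of Python; the function name, the toy description, and the naive word-level matching below are our own illustrative assumptions, not the evaluation code used in the rebuttal (a real evaluator would, as in AMBER, need synonym and plural handling):

```python
def coverage_ratio(description: str, real_objects: set) -> float:
    """Fraction of annotated real objects mentioned in the description.

    Naive word-level matching for illustration only; a practical
    evaluator would handle synonyms and plurals (e.g. via a synonym
    list as in AMBER).
    """
    words = set(description.lower().split())
    covered = {obj for obj in real_objects if obj in words}
    return len(covered) / len(real_objects)

# Hypothetical example: the description mentions 2 of 4 annotated objects.
ratio = coverage_ratio(
    "a dog sits on a sofa next to a lamp",
    {"dog", "sofa", "table", "person"},
)
```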
***
>**Q1:** The paper grouped 25 different instructions into 5 instruction sets. What is the grouping strategy? How did the authors group these instructions?
**Response to Q1:** Thanks for your comment. As described in Section 4.4, first paragraph: **“Specifically, LVLMs are prompted by three sets of different instructions to generate three sets of image descriptions. Each instruction set consists of multiple instructions randomly drawn from a pool of 25 instructions, with no overlap between instructions in different sets.”** We randomly selected three non-overlapping instruction sets from the 25 instructions to ensure reliable evaluation.
***
---
Rebuttal Comment 1.1:
Comment: The additional experimental evaluations on coverage ratio and the possibilities of future work for the theoretical analysis on the relationship, which might be able to explain the phenomenon of the observed relationship, address my initial concerns. I have raised my overall score to 5 (borderline accept) accordingly.
---
Rebuttal 2:
Comment: Thank you for taking the time and effort to evaluate our rebuttal and for adjusting the score. | Summary: This work presents comprehensive experiments to study the relationship between description lengths and hallucinations in LVLMs. Based on the observed positive correlation, the authors propose an approach of fitting a length-hallucination curve to evaluate object hallucinations. Specifically, the curve allows for fair comparisons that are not influenced by varying lengths, by providing the hallucination degree corresponding to any given description length. Furthermore, the curve slope reflects the extent to which an LVLM's hallucination degree is affected by description lengths. The evaluation, considering both the value and slope, demonstrates stability and comprehensiveness, as supported by the conducted experiments. The authors' thorough and meticulous research on this issue is highly convincing, and the proposed method clearly demonstrates its effectiveness.
Strengths: Hallucinations evaluation is a realistic and crucial task in the field of LVLMs, as hallucinations usually introduce misleading conclusions or even have disastrous outcomes. In this context, the authors perform a detailed experimental analysis on the impact of instructions on hallucinations, providing convincing evidence to support their motivation. Moreover, the proposed curve-based method is a simple yet effective approach, which is well-motivated by the observed linear correlation between description lengths and hallucination rates. The paper is well-written and effectively communicates its main contributions and techniques. Overall, the paper exhibits technical solidity.
Weaknesses: 1. The authors conduct experiments using only the beam search setting. Although I understand that beam search is widely used in hallucination evaluation of LVLMs/LLMs, it remains uncertain whether the observed correlation between the hallucination degree and the description length holds true under different decoding strategies. Thus, I recommend that the authors explore additional commonly used decoding strategies, such as greedy decoding, to provide a more comprehensive analysis.
2. The paper lacks a study about the influence of the instruction number on the length-hallucination curve. The fitted curve is directly affected by the number of samples, which corresponds to the number of instructions provided. It is therefore essential to thoroughly investigate the minimum number of instructions necessary for the proposed method.
3. The authors mention in the paper that the proposed method can "evaluate object hallucinations at any given image description length." In reality, when the given length deviates too much from the existing data, the fitting is likely to fail, leading to inaccurate results. The authors should use more cautious wording.
4. In my opinion, the impact of length might be mitigated by simply controlling the maximum generation lengths. The authors only mention this method in a footnote and believe it does not align with the actual usage scenarios of LVLMs. More in-depth discussion should be provided.
5. Some minor errors need to be corrected. For example, in line 42, "Figure 2&3" should be "Figures 2&3".
6. It appears inappropriate to represent a variable using only two letters. Consider replacing "hr" with "h_r".
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why is the proposed method limited to large vision-language models? Could it be extended to large language models as well? It would be beneficial for the authors to provide a clear explanation or justification for this limitation.
2. Similarly, are the findings and method presented in this paper applicable to other forms of hallucination beyond object hallucinations, or to other tasks, such as VQA?
3. What could potentially explain the phenomenon observed in Figure 2, where longer output lengths result in higher object hallucination degrees?
4. How are the 25 instructions used in the experiments designed? Are they generated randomly or based on specific rules? Besides, why is it 25, and what difference would there be with more or fewer instructions?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1**: The authors conduct experiments using only the beam search setting. Although I understand that beam search is widely used in hallucination evaluation of LVLMs/LLMs, it remains uncertain whether the observed correlation between the hallucination degree and the description length holds true under different decoding strategies. Thus, I recommend that the authors explore additional commonly used decoding strategies, such as greedy decoding, to provide a more comprehensive analysis.
**Response to W1**
Thanks for your valuable comment. Following your suggestion, we evaluate the hallucination degree of Qwen-VL and InternLM-XComposer under the greedy decoding strategy. The results are shown in Image 3 of the Author Rebuttal attachment, from which we can observe that the correlation between the hallucination degree and description length remains valid under the greedy decoding setting.
Furthermore, we supplement a comparison of the average-based framework and LeHaCE under the greedy decoding setting. The results are reported in Table 4 of the Author Rebuttal attachment. The results show that LeHaCE's stability still surpasses that of the average-based method under the greedy decoding strategy.
> **W2**: The paper lacks a study about the influence of the instruction number on the length-hallucination curve. The fitted curve is directly affected by the number of samples, which corresponds to the number of instructions provided. It is therefore essential to thoroughly investigate the minimum number of instructions necessary for the proposed method.
**Response to W2:**
Thanks for your comment. We analyze the impact of the number of instructions on the effectiveness of the LeHaCE framework in the second paragraph of Section 4.4: **In Table 2, we observe that when the number of instructions is very low, such as three, the stability of LeHaCE is compromised due to the difficulty in accurately fitting the length-hallucination curve. However, with just four or five instructions, LeHaCE consistently exhibits superior stability.** LeHaCE requires at least two instructions, but too few instructions can affect its consistency. When the number of instructions is greater than or equal to five, LeHaCE demonstrates excellent consistency.
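For illustration, the per-instruction curve fitting discussed above can be sketched as follows; the synthetic (length, hallucination-rate) points and the reference length are invented, and we assume a plain least-squares line fit, so this is a hedged sketch rather than the authors' implementation:

```python
import numpy as np

# Hypothetical per-instruction measurements for one LVLM: mean description
# length and mean hallucination rate (e.g. CHAIR) under each instruction.
lengths = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
halluc = np.array([0.10, 0.14, 0.18, 0.22, 0.26])

# Fit the length-hallucination curve h = a * L + b by least squares.
a, b = np.polyfit(lengths, halluc, deg=1)

# LeHaCE-style quantities: the hallucination degree at a common reference
# length (length-fair comparison across models), and the slope itself.
ref_length = 90.0
score_at_ref = a * ref_length + b
```

With too few points (e.g. three instructions) the fit becomes fragile, which matches the stability discussion above.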
> **W3**: The authors mention in the paper that the proposed method can "evaluate object hallucinations at any given image description length." In reality, when the given length deviates too much from the existing data, the fitting is likely to fail, leading to inaccurate results. The authors should use more cautious wording.
**Response to W3:**
We will change the statement to "evaluate object hallucinations at any given image description length within a large range."
>**W4**: In my opinion, the impact of length might be mitigated by simply controlling the maximum generation lengths. The authors only mention this method in a footnote and believe it does not align with the actual usage scenarios of LVLMs. More in-depth discussions should be provided.
**Response to W4:**
Thanks for your comment. Firstly, truncating the output to make the response length uniform does not reflect real usage scenarios, as users typically prefer complete responses. Secondly, truncating the output does not accurately control the description length, as some outputs may not reach the threshold length.
>**W5**: Some minor errors need to be corrected. For example, in line 42, "Figure 2&3" should be "Figures 2&3".
**Response to W5:**
Thanks for your careful review and we will revise these typos in our final version.
>**W6**: It appears inappropriate to represent a variable using only two letters. Consider replacing "hr" with "h_r".
**Response to W6:**
Thank you for your suggestion; we will revise it.
>**Q1**: Why is the proposed method limited to large vision-language models? Could it be extended to large language models as well? It would be beneficial for the authors to provide a clear explanation or justification for this limitation.
**Response to Q1:**
Thanks for this insightful question. The causes of hallucinations in LVLMs and LLMs share similarities, such as contradictions between the knowledge embedded in the parameters and the information in the context. Therefore, we believe that the proposed method has the potential to be extended to LLMs, which will be our future research direction.
>**Q2**: Similarly, are the findings and method presented in this paper applicable to other forms of hallucination beyond object hallucinations, or to other tasks, such as VQA?
**Response to Q2:**
Following your suggestion, we evaluate the relation hallucinations and attribution hallucinations in image descriptions generated by Qwen-VL and InternLM-XComposer under 25 instructions. We conducted a model-based evaluation using Gemini-1.5-flash. The results are shown in Figure 2 of the Author Rebuttal attachment, from which we can observe that the degree of relation and attribution hallucination increases with the length of the descriptions, indicating that our findings and method are applicable to other forms of hallucination. Expanding to other tasks will be the focus of our future work.
>**Q3**: What could potentially explain the phenomenon observed in Figure 2, where longer output lengths result in higher object hallucination degrees?
**Response to Q3:**
In Appendix 6.2, we further explore this phenomenon and find that MLLMs are more likely to employ hallucinogenic words in generating lengthy and detailed image descriptions, resulting in a higher hallucination rate.
> **Q4**: How are the 25 instructions used in the experiments designed? Are they generated randomly or based on specific rules? Besides, why is it 25, and what difference would there be with more or fewer instructions?
**Response to Q4:**
To obtain a diverse and extensive set of instructions, we referred to those from existing works and additionally designed some of our own. The value of 25 was not meticulously chosen. | Rebuttal 1:
Rebuttal: We thank all the reviewers and area chairs for your time and effort during the review process. We are encouraged to hear that our work has **clear and well-written presentations** (by all Reviewers), **good motivation** (by Reviewers Pvzh and Ffbd), **convincing analysis** (by Reviewers Pvzh and onEV), **novel** (by Reviewer onEV) **and effective** (by Reviewer Ffbd) **technical contributions**, **valuable insights** (by Reviewers Pvzh and onEV), **extensive experiments** (by Reviewers Pvzh and onEV), and **comprehensive evaluation** (by Reviewer XTeq).
During the rebuttal phase, we give meticulous point-by-point responses to your comments and add additional experiments and figures to the one-page supplementary PDF. In particular,
- we provided extensive evaluations of LeHaCE on more hallucination evaluation metrics, including coverage ratio, relation hallucination, attribute hallucination, and more decoding strategies (greedy decoding), which further validate the effectiveness and versatility of our findings and method.
- We also provided comprehensive ablation experiments on the fitting methods for LeHaCE, which validated the reasonableness of using linear fitting.
- Furthermore, we conducted a more comprehensive and in-depth analysis of our findings.
We hope that our responses adequately address all your concerns and meet the expectations of the conference committee.
Pdf: /pdf/e4aa3ed6923855cc1f11d4442d0edef47e17574f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Local Curvature Smoothing with Stein's Identity for Efficient Score Matching | Accept (poster) | Summary: The paper proposes a novel score matching variant called Local Curvature Smoothing with Stein’s Identity (LCSS). This method addresses the computational challenges associated with the Jacobian trace in score matching, particularly for high-dimensional data, by leveraging Stein’s identity. LCSS aims to bypass the expensive computation of the Jacobian trace, offering both regularization benefits and efficient computation. The method is validated through experiments on synthetic and real datasets.
Strengths: 1. the idea of LCSS is novel
2. Jacobian is not computed directly, but implicitly respected.
3. Experiments on high and low resolution are performed.
Weaknesses: 1. In lines 161-162, interchangeability is assumed. However, in the analysis, interchangeability requires some properties of the interested function. The reason why the assumption holds is missing.
2. This paper does not approximate the Jacobian but instead circumvents the Jacobian. The empirical and theoretical differences against the method using Jacobian should be discussed, such as the difference in the estimated error bound.
3. In Tab. 3, the improvement seems to be marginal, while in figures, such as Fig. 4, the selected picture is much better under LCSS. The discrepancy should be discussed.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weakness
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for thoroughly reading our paper and asking important questions, which we believe will clarify the contributions of our work.
-----------------
Response to Q.1
-----------------
Interchangeability holds when the score function $S_{\theta}$ is both integrable and differentiable.
$S_{\theta}$ is implemented by a neural network, and we assume these conditions are met.
We noted that if there are correlations between the dimensions of $x'$ over which expectations are taken, interchangeability does not always hold.
However, because $x'$ is a sample from a diagonal Gaussian, $x' \sim \mathcal{N}(x, \sigma^2 \mathbb{I} _ {d})$, in our case, there are no correlations between dimensions, thus fulfilling this condition.
We plan to include an extra sentence in the camera-ready version for better reader comprehension.
-----------------
Response to Q.2
-----------------
The loss curve in Fig. 8 (Appendix B) empirically demonstrates that our LCSS has a lower variance than SSM (and even DSM).
The theoretical explanation is provided below, (essentially identical to the response given to Reviewer tfBV's question).
We replaced the Jacobian computation with Stein's identity, approximating the expectation with a single sample.
Compared to Hutchinson's trick, which approximates the Jacobian trace by random projection in the existing methods (SSM and FD-SSM), we show that our approximation error is smaller with high probability when the variance of the Gaussian, $\sigma^{2}$, is small.
---
### 2-1. Error in single sampling approximation
Let $x$ be a $d$-dimensional vector and $S(x)$ be an $L$-Lipschitz (score) function, $S: \mathbb{R}^{d} \rightarrow \mathbb{R}$.
We approximate $M := \mathbb{E} _ { x' \sim N(x, \sigma^{2} I_d) } \left[ \mathcal{J} _ \text{SM}^{s} (\theta, x') \right]$
by its single-sample estimate,
$M' := \mathcal{J} _ \text{SM}^{s} (\theta, x')$.
Then, the Chernoff bound for a Gaussian variable tells us that
$\Pr[\, |M - M'| \geq \delta \,] \leq 2 \exp \left(- \frac{\delta^{2}}{2 L^{2} \sigma^{2}}\right), \quad \forall \delta \geq 0.$
Letting $p = 2 \exp \left(- \frac{\delta^{2}}{2 L^{2} \sigma^{2}} \right)$, we obtain
$|M - M'| \leq \delta = \sqrt{2 L^{2} \sigma^{2} \log \left(\frac{2}{p}\right)}$
with probability at least $1 - p$.
(See Thm. 2.4 in [5], for example.)
- [5] Wainwright, M. (2015). "Mathematical Statistics: Chapter 2 Basic tail and concentration bounds."
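As a quick numerical sanity check of this tail bound, the following sketch uses an exactly $1$-Lipschitz linear test function as a stand-in for the loss term; the choices of $L$, $\sigma$, $\delta$, and the function itself are illustrative assumptions, not the paper's actual objective:

```python
import numpy as np

rng = np.random.default_rng(0)

L, sigma, d = 1.0, 0.5, 16
x = np.zeros(d)

# An exactly L-Lipschitz test function standing in for the loss term:
# f(x') = L * x'[0]; its mean under N(x, sigma^2 I) is L * x[0] = 0.
def f(xp):
    return L * xp[..., 0]

# Error of the single-sample approximation |M - M'| over many trials.
samples = x + sigma * rng.standard_normal((200_000, d))
errors = np.abs(f(samples) - L * x[0])

delta = 2.0 * L * sigma
# Empirical tail probability vs. the bound 2 exp(-delta^2 / (2 L^2 sigma^2)).
empirical_tail = np.mean(errors >= delta)  # true tail is 2*Phi(-2), about 0.0455
chernoff_bound = 2.0 * np.exp(-delta**2 / (2.0 * L**2 * sigma**2))  # = 2e^{-2}
```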
---
### 2-2. Upper bound of Hutchinson's trick error
We denote the Jacobian matrix of $S(x)$, $\nabla _ {x} S(x)$, by $A$.
The error between the true trace of $A$, $\text{Tr}(A)$, and the estimate by Hutchinson's trick, $\tilde{T}$, is bounded as
$|\text{Tr}(A) - \tilde{T}| \leq |A| _ {F}$, where $|\cdot| _ {F}$ is the Frobenius norm.
The upper bound of Hutchinson's trick error is thus $|A| _ {F}$.
(See line 94-96 of our paper.)
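For reference, Hutchinson's trick itself can be sketched in a few lines; the fixed matrix standing in for the Jacobian, the Rademacher probes, and the sample count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed 5x5 matrix standing in for the Jacobian of the score network.
A = rng.standard_normal((5, 5))
true_trace = np.trace(A)

# Hutchinson's trick: E[v^T A v] = Tr(A) for Rademacher probe vectors v.
v = rng.choice([-1.0, 1.0], size=(100_000, 5))
estimates = np.einsum("ni,ij,nj->n", v, A, v)

mean_estimate = estimates.mean()       # converges to true_trace
single_probe_spread = estimates.std()  # error scale of a single probe
```

The spread of a single probe is on the Frobenius-norm scale discussed above, which is why single-probe estimates are noisy even though their mean is exact.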
---
### 2-3. Comparison
We compare the upper bound of $|M - M'|$ to $|A| _ {F}$.
By substituting $|A| _ {F}$ for $\delta$ in the above, we know that
$|M - M'| \leq |A| _ {F}$
holds with probability at least $q := 1 - p = 1 - 2 \exp \left( - \frac{|A| _ {F}^{2}}{2 L^{2} \sigma^{2}} \right)$.
In the case of small $\sigma$ (a region particularly crucial for SDM training), $q$ is nearly 1, indicating that a single-sample approximation of the expectation over a Gaussian distribution almost always yields smaller errors than Hutchinson's trick.
-----------------
Response to Q.3
-----------------
The evaluation in Table 3 was conducted using large models like DDPM++ at a low resolution of $32 \times 32$, where LCSS and DSM showed comparable performance.
In contrast, the experiments in Figs. 4-6 were conducted with the smaller NCSNv2 model at $256 \times 256$ resolution, presenting more challenging conditions.
The qualitative evaluation of Figs. 4-6 indicates that LCSS significantly outperforms DSM under such severe conditions.
The results demonstrate that LCSS operates stably even when the model capacity is small.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for further clarifications.
1-2 is good for me.
3 is kind of mysterious to me. Perhaps I have missed something. In some sense, I think giving up the Jacobian should be considered an approximation. If so, when model capacity is high, it should reach comparable performance, while when capacity is small, the approximation should cause more deviation. This is different from your findings; why?
I noticed that Reviewer AvFd also mentioned non-affine SDE. Is it possible to do an empirical comparison?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer j6aU
Comment: We appreciate the reviewer's questions to ensure accurate understanding and hope the responses below will contribute to the reviewer's clarity.
---
## Regarding Q.3
The table below illustrates a highly simplified comparison of the relative performance between the performance of LCSS (ours) and DSM.
| Case | Model | Resolution | Results | LCSS | DSM |
| :-----: | :--------: |:--------------:|:-----------: |:--------: |:--------:|
| #1 | Large | 32 x 32 | Table 3 | good | good |
| #2 | Small | 256 x 256 | Figs. 4-6 | good | poor |
Under the stricter conditions of Case #2 compared to Case #1, LCSS performance remained stable, whereas DSM performance degraded.
While LCSS, SSM, and FD-SSM approximate the Jacobian in their respective ways, DSM does not approximate it but circumvents it by replacing the true score, $\nabla _ {\bf x} \log p({\bf x})$, with the score of the perturbed distribution
corrupted by Gaussian noise, $\nabla _ {\bf x} \log q _ {\sigma} (\tilde{{\bf x}}|{\bf x})$, as the learning target.
The drawbacks of DSM noted in Lines 104-108 are all caused by this replacement.
We regularly monitored the quality of generated images during model training.
In the experiments of Case #2, as noted in lines 256-258,
although the quality of generated images was improving up to a certain stage (around 210k iterations, for example), it suddenly deteriorated.
Frequent spikes in loss values were observed during training, which appeared to be a trigger for the deterioration.
Although the exact cause was not precisely identified,
we attribute this phenomenon to the replacement by $\nabla _ {\bf x} \log q _ {\sigma} (\tilde{{\bf x}}|{\bf x})$, as it was not observed in other score matching methods.
We argue that the instability of DSM becomes apparent under stringent training conditions, such as those in Case #2.
This is our response; to ensure clarity for the reviewers, please request clarification if any uncertainties remain.
---
## Regarding non-affine SDE
Designing non-affine SDEs demands an in-depth understanding of SDEs, which we are in the process of developing.
As such, we are not currently ready to conduct empirical comparisons, and we leave proposing non-linear SDEs leveraging LCSS for future work.
---
Rebuttal 2:
Comment: Thanks for the reply:
1 is interesting to read.
2 is okay.
1. Perhaps I have missed something. Is there any work involving the accurate Jacobian? If so, how well does your method approximate the accurate one empirically? This was the key concern in my previous 3rd question.
2. I think it's good for readability to involve a table to present the difference between your method and the existing one. This can highlight your contribution. Also, the summary table you present is good for presenting the paper.
Anyway, Although I am not an expert in this field, I think this paper is worth reading. I have raised my score to 6.
---
Rebuttal Comment 2.1:
Title: Response to Reviewer j6aU (2)
Comment: ### Response to 1
Thanks to this question, we have understood the insight behind the original 3rd question.
The original score matching (SM) proposed in [7] does not approximate the Jacobian.
In [8], the losses of SM and SSM-VR (= SSM in our paper) are compared on low-dimensional tabular datasets (dimensionality 11 to 22), demonstrating that the SSM approximation is nearly equivalent to the original SM.
The experimental results in our paper, along with the error-bound inequality above in response to Q2, demonstrate that LCSS approximates better than SSM, indirectly indicating a minor discrepancy between LCSS and SM.
After receiving this question, we conducted an experiment on the Checkerboard dataset akin to those in the paper.
The need to drastically reduce the batch size to avoid out-of-memory errors during Jacobian computation probably led to the unsuccessful density estimation, so no performance comparison could be made.
Instead, we compare the computational efficiencies.
The training times (sec.) measured over 20k iterations with a batch size of 10,000 are presented below.
| SSM | DSM | LCSS | SM (no approx.) |
| :--------: |:--------------:|:-----------: |:--------: |
| 26.13 | 22.59 | 21.75 | 645.61 |
It shows that even on a mere 2-dimensional Checkerboard dataset, the computational cost is about 25 times greater than that of the other methods.
This underscores that, in score matching training for high-dimensional data, such as images with dimensions up to several hundreds of thousands, Jacobian approximation for acceleration is indispensable in today’s typical computational environments.
- [7] Hyvärinen, A., & Dayan, P. (2005). Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4).
- [8] Song, Y., Garg, S., Shi, J., & Ermon, S. (2020, August). Sliced score matching: A scalable approach to density and score estimation. In Uncertainty in Artificial Intelligence (pp. 574-584). PMLR.
---
### Response to 2
We agree with the reviewer's suggestion. Through this rebuttal, we have recognized that presenting the differences from existing methods in a tabular comparison will highlight our contributions.
---
Lastly, we deeply appreciate the reviewer for dedicating time to the discussion and for the inquiries to clarify the ambiguities. | Summary: This paper provides a new way for score matching with the purpose of resolving some of the limitations of the existing methods such as high variance of sliced score matching and Gaussian constraints of denoising score matching (DSM). The new method is based on the local curvature smoothing proposed in [15]. A new score matching objective function is proposed by combining the Stein's Identity with the local curvature smoothing. The authors empirically show that the new method is more efficient in training than DSM and also has comparable performance to DSM.
Strengths: Although DSM is the default method used nowadays for score matching, the authors provide a nice novel alternative which may have some advantages over DSM. I'm interested to see more theoretical study in the future of this new method.
Weaknesses: I think some parts of the paper are not stated clearly and further clarification is needed. See questions for more details.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In section 2.4, the authors criticize the DSM method for having a Gaussian constraint. However, later there is no clarification showing how the new method is different from DSM in this regard. Can you please clarify this?
- In line 108, the authors criticize the DSM for having 0 numerator and denominator. However, in the final LCSS (equation (16)), the denominator can also be 0 and be problematic. Can the authors provide more discussion on why the new method is better in this regard?
- In Corollary 2, there is an assumption that an integral must be 0. How restrictive is this assumption? It seems to me that later on when designing LCSS objective, formula (14) is directly used without any further discussion on this assumption. Can the authors explain why this assumption can be dropped?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The detailed questions from the reviewer reflect her/his careful reading of our paper.
We are grateful for the constructive questions and hope our responses address their concerns.
-----------------
Response to Q.1
-----------------
The loss function of DSM includes $\nabla _ {x} \log q _ {\sigma _ {t}} ( \tilde{x} _ {t} | x _ {0} )$.
To compute it for any $0 \leq t \leq T$, $\nabla _ {x} \log q _ {\sigma _ {t}} ( \tilde{x} _ {t} | x _ {0} )$ must be Gaussian, necessitating a Gaussian prior and constraining the SDE to be linear.
In contrast, our LCSS
does not include $\nabla _ {x} \log q _ {\sigma _ {t}} ( \tilde{x} _ {t} | x _ {0} )$ in the loss function, allowing for a non-Gaussian prior and the design of nonlinear SDEs.
We note the following two:
- The LCSS loss function involves sampling from a Gaussian, $x' \sim \mathcal{N}(x, \sigma^2 \mathbb{I} _ {d})$,
but this stems from curvature smoothing regularization and does not relate to or constrain the prior distribution form or SDE design.
- Like our LCSS, SSM and FD-SSM do not impose Gaussian constraints, yet, as shown in the paper, SSM and FD-SSM fail to learn effectively at resolutions larger than $32 \times 32$.
-----------------
Response to Q.2
-----------------
Actual model training of SDMs with our method minimizes Eq.(18), which incorporates Eq.(16).
In Eq.(18), the coefficient $\lambda(t)=g^2(t)$ and for the VE SDE, $\lambda(t)=g^2(t)=\sigma_{t}^2$, effectively canceling out $\sigma_{t}^2$ in the denominator of Eq. (18) and avoiding unstable situations where the denominator could become zero.
For other SDE types (VP and sub VP), $\lambda(t)$ is more elaborate but similarly cancels out $\sigma_{t}^2$ in the denominator.
We note that, similarly, in training SDMs with DSM, applying the coefficient $\lambda(t)=g^2(t)$ allows for the cancellation of $\sigma_{t}^2$ in the denominator, thus circumventing the weakness of DSM the authors mentioned in line 108.
For fairness, we plan to add this point to the camera-ready version.
-----------------
Response to Q.3
-----------------
The assumption in Corollary 2 holds almost surely.
The assumption can be confirmed if $\lim _ {\lVert x \rVert \rightarrow \infty} s_{\theta_{i}}(x) Q(x) = 0$, as noted in [6].
Because $Q$ is defined as a Gaussian probability density function in Eq. (14), $Q(x)$ tends to zero as $\lVert x \rVert \rightarrow \infty$, so this condition is satisfied.
For clarity, let us also include this explanation in the camera-ready version.
- [6] Liu, Q., Lee, J., & Jordan, M. A kernelized Stein discrepancy for goodness-of-fit tests. (ICML 2016)
---
Rebuttal Comment 1.1:
Comment: 1. Please make sure to include an explanation somewhere in Section 3. Right now, the whole non-affine thing is missing in this section.
2. That makes sense.
3. This is good.
Thanks to the authors for the clarification; I have raised my score.
Strengths: - The paper is well-written and the main idea is clear and easy to understand.
- Other works which the paper builds upon are referenced and fairly attributed.
- Experiments on small-scale data clearly demonstrate the effectiveness of the approach.
- Also on larger datasets, the method appears to give strong empirical results.
Weaknesses: - Approximating Jacobian trace through Stein's identity potentially leads to an estimator with large variance -- I found the claims that it solves Hutchinson's high variance problem to be a bit misleading.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can there be a formal argument that the proposed estimator has lower variance than random projections? Essentially, the gradient is estimated through random (zero-order) sampling, which is not exactly low-variance?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: All limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's meaningful question; addressing it will clarify our paper's contributions. For reader comprehension, we plan to include the following argument in the camera-ready version (maybe in the appendix).
----
## Question: a formal argument that the proposed estimator has lower variance than random projections?
The expectation in Eq.(16) is approximated by a single sample, which leads to an error, as the reviewer points out.
However, the key lies in taking the expectation over Gaussian samples, which results in sufficiently small errors.
Below, we compare the error of a single-sample approximation of Stein's identity with the error caused by random projections (Hutchinson's trick).
----
### 1. Error in single sampling approximation
Let $x$ be a $d$-dimensional vector and $S(x)$ be an $L$-Lipschitz (score) function, $S: \mathbb{R}^{d} \rightarrow \mathbb{R}$.
In our implementation, we approximate $M := \mathbb{E} _ { x' \sim N(x, \sigma^{2} I_d) } \left[ \mathcal{J} _ \text{SM}^{s} (\theta, x') \right]$
by a single sample,
$M' := \mathcal{J} _ \text{SM}^{s} (\theta, x')$.
Then, Chernoff bound for Gaussian variable tells us that
$
\Pr[ |M - M'| \geq \delta] \leq 2 \exp \left(- \frac{\delta^{2}}{2 L^{2} \sigma^{2}}\right),
\forall \delta \geq 0.
$
Letting $p = 2 \exp \left(- \frac{\delta^{2}}{2 L^{2} \sigma^{2}} \right)$, we obtain
$
|M - M'| \leq \delta = \sqrt{2 L^{2} \sigma^{2} \log \left(\frac{2}{p}\right)}
$
with probability at least $1 - p$.
(See Thm. 2.4 in [5], for example.)
- [5] Wainwright, M. (2015). "Mathematical Statistics: Chapter 2 Basic tail and concentration bounds."
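As an illustrative check of this bound (ours, not from the paper), the sketch below empirically verifies the concentration behavior: using the norm function as a 1-Lipschitz stand-in for $\mathcal{J} _ \text{SM}^{s}(\theta, \cdot)$, the fraction of single Gaussian samples within $\delta$ of the expectation respects the $1-p$ guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, L, p = 10, 0.1, 1.0, 0.05
x = rng.standard_normal(d)

# f is 1-Lipschitz; it stands in for the L-Lipschitz objective J_SM(theta, .)
f = lambda v: np.linalg.norm(v, axis=-1)

# Draw many single samples x' ~ N(x, sigma^2 I) and evaluate f at each
vals = f(x[None, :] + sigma * rng.standard_normal((200_000, d)))
M = vals.mean()  # Monte Carlo estimate of the expectation E[f(x')]

# The Chernoff/Lipschitz bound: |f(x') - M| <= delta with probability >= 1 - p
delta = np.sqrt(2 * L**2 * sigma**2 * np.log(2 / p))
coverage = np.mean(np.abs(vals - M) <= delta)
print(coverage)  # empirical coverage; at least 1 - p
```

Note that $\delta$ scales with $\sigma$, which is why the error is small in the small-$\sigma$ regime discussed in Section 3 of the comparison.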
----
### 2. Upper bound of Hutchinson's trick error
We denote the Jacobian matrix of $S(x)$, $\nabla _ {x} S(x)$, by $A$.
The error between the true trace of $A$, $\text{Tr}(A)$, and the estimate by Hutchinson's trick, $\tilde{T}$, is bounded as
$|\text{Tr}(A) - \tilde{T}| \leq |A| _ {F} $ where $ |\cdot|_{F}$ is the Frobenius norm.
The upper bound of Hutchinson's trick error is thus $|A| _ {F}$.
(See line 94-96 of our paper.)
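For intuition about the quantity being compared (a sketch under our own assumptions, not the paper's code), Hutchinson's trick with Rademacher probes is unbiased for the trace, but a single probe has high variance; here $A$ is a random stand-in for the Jacobian:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
A = rng.standard_normal((d, d))  # random stand-in for the Jacobian A

def hutchinson(A, n_probes, rng):
    # Hutchinson's trick: E[v^T A v] = Tr(A) for Rademacher probes v
    V = rng.choice([-1.0, 1.0], size=(n_probes, A.shape[0]))
    return np.einsum('ni,ij,nj->n', V, A, V).mean()

single = np.array([hutchinson(A, 1, rng) for _ in range(2000)])
print(abs(single.mean() - np.trace(A)))  # small: the estimator is unbiased
print(single.std())                      # large: one probe is high-variance
```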
----
### 3. Comparison
We compare the upper bound of $|M - M'|$ to $|A| _ {F}$.
By substituting $|A| _ {F}$ into $\delta$ in the above, we know that
$
|M - M'| \leq |A| _ {F}
$
holds with probability at least $q := 1 - p = 1 - 2 \exp \left( - \frac{|A| _ {F}^{2}}{2 L^{2} \sigma^{2}} \right)$.
In the case of small $\sigma$ (a region particularly crucial for SDM training), $q$ is nearly 1, indicating that a single sampling approximation of the expected value on a Gaussian distribution almost consistently yields smaller errors compared to Hutchinson's trick.
---
We note that empirically, the stability of the loss curve in Fig. 8 in Appendix B indicates that the proposed LCSS has lower variance than SSM (random projection) and even DSM. | Summary: This manuscript proposes a new score matching method that bypasses the Jacobian trace by applying Stein’s identity, enabling effective regularization and efficient computation.
Strengths: 1. The method is computationally efficient compared to other SSM variants.
2. Experimental results demonstrate the effectiveness of the proposed method.
Weaknesses: 1. The advantage of the proposed method compared to denoising score matching (DSM) is unclear. The manuscript mentions that it restricts the SDE to be affine, but it does not clarify the benefit of using a non-affine SDE. Furthermore, the influence of the SDE on the generative model needs to be elaborated.
2. The experimental results do not show significant improvements over DSM. The proposed method achieves comparable sample quality, as shown in Table 3.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer taking the time to read our paper thoroughly.
We hope the following responses clarify our contribution. Additionally, our response to Reviewer tfBV below provides an argument about the advantage over SSM and FD-SSM, which we would like the reviewer to examine.
-----------------
Response to Q.1
-----------------
The design of SDE directly influences the performance of score-based diffusion models, as demonstrated in [1][2]. The benefits of non-linear SDE, particularly highlighted in [3], enable more accurate alignment of scores with the ground-truth data distributions than affine SDE and thus enhance the quality of generated samples. (Fig. 2 in [3] illustrates this.)
Our LCSS allows for the engineering of non-linear SDEs with DSM-equivalent performance.
This paper does not cover the creation of new non-linear SDEs based on LCSS, which is noted as a limitation; instead, we focus on describing LCSS itself as a novel score-matching method.
- [1] Dockhorn, T., Vahdat, A., & Kreis, K. Score-based generative modeling with critically-damped langevin diffusion. (ICLR 2022)
- [2] Karras, T., Aittala, M., Aila, T., & Laine, S. Elucidating the design space of diffusion-based generative models. (NeurIPS 2022)
- [3] Kim, D., Na, B., Kwon, S. J., Lee, D., Kang, W., & Moon, I. C. Maximum likelihood training of implicit nonlinear diffusion model. (NeurIPS 2022)
-----------------
Response to Q.2
-----------------
The evaluations in Table 3 utilize large models, such as DDPM++, at a low resolution of $ 32 \times 32$, where the results show that LCSS performs comparably to DSM, as the reviewer stated.
However, qualitative evaluations presented in Figs. 4-6 conducted with the smaller model NCSNv2 at $256 \times 256$ resolution demonstrate noticeable differences between LCSS and DSM; DSM exhibits poor image generation performance under these conditions, as suggested in [4], whereas LCSS can still produce high-quality images.
The stable performance of LCSS with limited model capacity is its advantage over DSM.
- [4] Song, Y., & Ermon, S. (2020). Improved techniques for training score-based generative models. (NeurIPS 2020) | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AUC Maximization under Positive Distribution Shift | Accept (poster) | Summary: Due to a positive distribution shift, training and test distributions are not identical. However, existing AUC maximization methods don’t take it into account. To address this shift, this paper theoretically shows a new way to maximize the AUC on the test distribution by using positive and unlabeled data in the training distribution and unlabeled data in the test distribution. Finally, four real-world datasets validate the effectiveness of the proposed method.
Strengths: - The proposed setting is novel and practical in AUC optimization. The distribution of negative data is generally stable but the distribution of positive data is more diverse or time-varying in medical diagnosis, intrusion detection, and visual inspection.
- The method presentation is easy to understand. This paper first introduces basic AUC fundamental knowledge. Then, it gives the problem setting of the proposed positive distribution shift. Based on this setting, the final expression is obtained through some intuitive and simple derivation. To be specific, the AUC maximization on the test distribution can be accomplished by using positive and unlabeled data in the training distribution and unlabeled data in the test distribution.
Weaknesses: - The effect of the proposed methods on the MNIST and Fashion MNIST datasets is not significant, which is inconsistent with the results on the other datasets. The authors don’t give any explanation.
- The authors do not fully compare their method with the latest ones. For example,
- Positive-Unlabeled Learning with Label Distribution Alignment. (TPAMI 2023)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective. (CVPR 2022)
- Positive-unlabeled learning using random forests via recursive greedy risk minimization. (NeurIPS 2022)
- All theoretical derivations are only based on the sigmoid surrogate loss. As far as I know, square loss is also popular. Can the theoretical results extend to the other losses?
- There are some typos. For example,
- In line 105, “However, these all methods assume that” should be “However, all these methods assume that”.
Technical Quality: 4
Clarity: 4
Questions for Authors: please refer to Weakness
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments and constructive feedback.
> The effect of the proposed methods on MINST and Fashion MINST datasets is not significant, which is inconsistent with those on the other datasets. The authors don’t give any explanation.
As described in Line 279, since MNIST and Fashion MNIST are relatively simple datasets, the performance of our method and that of PURR, which does not consider class imbalance, might be similar. More specifically, if the supports of the positive and negative conditional densities are separated (i.e., the data are simple), the losses for both positive and negative data can be minimized without conflict, even when there is a class imbalance. Thus, class imbalance might not be a severe problem in this case. In contrast, when the positive and negative conditional densities overlap (i.e., the data are complex), minimizing losses for positive data located in the overlapped region increases losses for negative data in the same region. Thus, when there is a class imbalance, the few positive data can often be ignored, and class imbalance becomes a more severe problem. Since SVHN and CIFAR10 are more complex than MNIST and Fashion MNIST, our method worked well on them. We will clarify this.
> The authors do not fully compare their method with the latest ones.
Thank you for sharing relevant works.
We additionally compared our method with the method [a] in your comment (PURDA).
The results are described in Table 5 of the PDF file in the global response.
PURDA used positive and unlabeled data in the training distribution in this experiment since it is designed for ordinary PU learning.
Margin $\rho$ is selected from $\\{0.1,1,10\\}$ by validation data.
Our method performed better than PURDA.
This is because PURDA does not consider the distribution shift.
None of the methods in your comment consider the distribution shift, and thus they are unsuitable for our setting.
In the final version, we will include this result and add all related works in your comments in Section 2.
[a] Positive-Unlabeled Learning with Label Distribution Alignment. (TPAMI 2023)
> All theoretical derivations are only based on the sigmoid surrogate loss. As far as I know, square loss is also popular. Can the theoretical results extend to the other losses?
This is an insightful question. As long as we use symmetric losses (i.e., a loss $l$ satisfying $l(z)+l(-z)=K$ for any $z \in \mathbb{R}$, where $K$ is a constant) [6], we can derive the same final loss function in Eq. 13. This is because the second term in Eq. 10 becomes constant, and thus, we can ignore it.
We note that symmetric losses include a wide range of losses such as sigmoid, ramp, unhinged, and zero-one losses, although the squared loss is not included [6].
We will include this discussion in the final version.
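To make the symmetry condition concrete, here is a quick check (illustrative, not from the paper) that the sigmoid and ramp losses satisfy $l(z)+l(-z)=1$ while the squared loss does not:

```python
import math

def sigmoid_loss(z):   # l(z) = sigmoid(-z)
    return 1.0 / (1.0 + math.exp(z))

def ramp_loss(z):      # l(z) = max(0, min(1, (1 - z) / 2))
    return max(0.0, min(1.0, (1.0 - z) / 2.0))

def squared_loss(z):   # l(z) = (1 - z)^2 / 4 -- not symmetric
    return (1.0 - z) ** 2 / 4.0

for z in (-3.0, -0.5, 0.0, 0.5, 3.0):
    assert abs(sigmoid_loss(z) + sigmoid_loss(-z) - 1.0) < 1e-12
    assert abs(ramp_loss(z) + ramp_loss(-z) - 1.0) < 1e-12

# squared loss: l(z) + l(-z) = (1 + z^2) / 2 depends on z, so it is not symmetric
print(squared_loss(2.0) + squared_loss(-2.0))  # 2.5, not a constant
```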
> There are some typos.
Thank you for pointing out the typos. We will again carefully proofread and prepare the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal! The authors have clarified all of my concerns. Hence, I decide to keep my rating and increase my confidence.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response. We are pleased to hear that you have increased the confidence level to 5. We sincerely appreciate the time and effort you have dedicated to reviewing and providing feedback on our paper. | Summary: The paper proposes a method for AUC maximization in binary classification problems under positive distribution shift. They introduce their method, which is simple and easy to implement/understand, and then show it works well in some experiments.
Strengths: - The paper is well written and easy to understand;
- The paper proposes a well-motivated method and show how it can be easily implemented in practice;
- The experiments are convincing.
Weaknesses: - The authors do not discuss how the classification threshold can be chosen in a practical situation under positive distribution shift.
Technical Quality: 3
Clarity: 4
Questions for Authors: How should the practitioner choose classification threshold after training their classifiers using your method?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments and constructive feedback.
> How should the practitioner choose classification threshold after training their classifiers using your method?
Thank you for the insightful question.
In practical use, there are many situations where it is beneficial just to be able to sort data in score order. For example, in anomaly detection, experts or operators can check data with high scores within the cost they can spend or until anomalous data do not appear. In disease diagnosis, patients with higher scores can be given priority for detailed examination.
In recommendation systems, products can be presented to users in order of score.
However, when a classification threshold is required, we can use the estimated class-prior in the test distribution to determine the threshold.
Specifically, we first extract negative data in unlabeled data from the training distribution by applying (off-the-shelf) PU learning to PU data in the training distribution. Since PU data and the class-prior in the training distribution are available in our setting, we can perform it. Next, since the negative distribution does not change in our setting, the extracted negative data can be regarded as negative data in the test distribution. We can estimate the class-prior in the test distribution by applying existing class-prior estimation methods [40, 59, 14] to the extracted negative and unlabeled data in the test distribution. The threshold can be set to the top $N_{{\rm te}} \pi_{{\rm te}}^{{\rm est}}$-th score of unlabeled data, where $\pi_{{\rm te}}^{{\rm est}}$ is the estimated class-prior. We will include this discussion.
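The threshold rule sketched above could be implemented as follows (a hypothetical helper; `scores` are the model's scores on the test unlabeled data and `pi_est` is the estimated test class-prior):

```python
import numpy as np

def prior_based_threshold(scores, pi_est):
    """Return the score of the (N * pi_est)-th highest instance, used as the
    decision threshold: instances scoring at or above it are labeled positive."""
    s = np.sort(np.asarray(scores, dtype=float))[::-1]  # descending order
    k = max(1, int(round(len(s) * pi_est)))             # expected #positives
    return s[k - 1]

# Example: 10 unlabeled test scores, estimated prior 0.3 -> top 3 are positive
scores = [0.10, 0.90, 0.50, 0.80, 0.30, 0.70, 0.20, 0.60, 0.40, 0.05]
print(prior_based_threshold(scores, 0.3))  # 0.7
```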
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply!
1. I agree with you sorting the data; that's a good point;
2. I am a bit confused with your suggestion here. What is the purpose/rationale of setting the threshold to the $N_{te}\pi_{te}$-th score?
---
Reply to Comment 1.1.1:
Title: Thank you for your prompt response.
Comment: Thank you for your prompt response!
We are pleased to hear that you agree with sorting the data. We will address this point in the final version.
Here, we will explain the threshold in more detail.
Now, we have $N_{{\rm te}}$ unlabeled data in the test distribution. When the true positive class prior (the ratio of positive data in unlabeled data) in the test distribution is $\pi_{{\rm te}}$, we can regard that $N_{{\rm te}} \pi_{{\rm te}}$ positive data are included in the $N_{{\rm te}}$ unlabeled data.
Thus, when the $N_{{\rm te}}$ unlabelled data are sorted by score, the top $N_{{\rm te}} \pi_{{\rm te}}$ instances can be considered positive (assuming this scoring is accurate), and the score of the $N_{{\rm te}} \pi_{{\rm te}}$-th instance can be the boundary separating positive and negative. Thus, we can use the score of the $N_{{\rm te}} \pi_{{\rm te}}$-th instance as the threshold. Since true prior $\pi_{{\rm te}}$ is unknown, we used the estimated prior $\pi _{{\rm te}}^{{\rm est}}$ instead.
Does this response meet your satisfaction?
If you have any further questions or suggestions, please do not hesitate to contact us. | Summary: This paper considers AUC maximization when the conditional probability distribution of the positive class changes in the test phase. To this end, the unbiased loss function is derived. The loss is approximated by positive and unlabeled data from training distribution, unlabeled data from test distribution, and class-prior of training distribution. In experiments, the proposed method outperformed the existing methods over the four benchmark datasets.
Strengths: - This is the first study on AUC maximization for positive distribution shift.
- The proposed method outperformed the existing methods.
- The proposed method does not require the class-prior of the test distribution.
Weaknesses: - Unlike the existing study [15, 42], the negative distribution is not considered.
- It lacks theoretical analyses of the proposed method.
- The extension of the proposed method is discussed but not evaluated in the experiments.
Technical Quality: 2
Clarity: 2
Questions for Authors: Is the proposed loss function unbiased to its supervised counterpart?
It would be valuable if there were discussion or experimental results showing the effect of the number of unlabeled data from the test distribution. In some applications, collecting a lot of unlabeled data from the test distribution might be difficult. In such a situation, the experimental results would help practitioners understand how many samples are necessary to collect.
According to the literature, the non-negative risk estimator plays a crucial role in training deep neural networks. However, the proposed method does not mention the non-negativity of the risk estimator. Did the authors encounter that the risk estimator went to a large negative value in experiments? If not, what points in the proposed method avoid the issue?
Regarding the Extension in Section 4.4, it would be nice to cite the existing work.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations are discussed in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and constructive feedback.
> Unlike the existing study [15, 42], the negative distribution is not considered.
The method in [15] assumes the positive distribution shift as in our method. Thus, it does not consider the negative distribution change.
The method in [42] assumes the covariate shift, i.e., $p _{{\rm te}} (x) \neq p _{{\rm tr}} (x)$ but $p _{{\rm te}}(y|x) = p _{{\rm tr}} (y|x)$. Although the covariate shift has been traditionally studied, the assumption of $p _{{\rm te}}(y|x) = p _{{\rm tr}} (y|x)$ is often restrictive in practice.
One of the main contributions of our work is to highlight an important problem (both class imbalance and positive distribution shift occur) that has been overlooked despite its many applications, as described in Section 1.
> The extension of the proposed method is discussed but not evaluated in the experiments.
We have already evaluated the extension of our method (Eq. 16) in Table 6 of Section C.4.
The extension of our method ($\alpha=0.999$) tended to perform slightly better than our original method ($\alpha=1.0$) by using additional labeled negative data.
> Is the proposed loss function unbiased to its supervised counterpart?
Yes. As long as the assumption of positive distribution shift (Eq. 7) is satisfied, the derived AUC in Eq. 12 is equivalent to the original AUC in Eq. 8.
> It would be valuable if there were discussion or experimental results showing the effect of the number of unlabeled data from the test distribution.
Thank you for the constructive comment. We additionally evaluated our method with a small number of
unlabeled data from the test distribution.
The results are described in Table 3 of the PDF file in the global response.
As expected, the performance of our method tended to increase as the number of unlabeled data $N_{{\rm te}}$ increased. Nevertheless, our method tends to outperform puAUC, which does not use unlabeled test data, even when $N _{{\rm te}}=500$. Since many unlabeled data are often easy to collect, we believe that our method is useful in practice. We will include these results in the final version.
> According to the literature, the non-negative risk estimator plays a crucial role in training deep neural networks. However, the proposed method does not mention the non-negativity of the risk estimator.
Thank you for the insightful comment. In our preliminary experiment, we evaluated our method with the non-negative loss correction (Ours w/ nn), which prevents the empirical loss from being smaller than zero. However, the results with and without the loss correction (Ours w/ nn and Ours) were almost identical, as described in Table 4 of the PDF file. Thus, we proposed the current simpler loss function.
We describe the non-negative correction used in this experiment.
For clarity, we consider to minimize the AUC risk, $R _{\rho} (s) := \mathbb{E} _{{\bf x} ^{{\rm p}} \sim p ^{{\rm p}} _{{\rm te}} ({\bf x} )} \mathbb{E} _{{\bf x} ^{{\rm n}} \sim p ^{{\rm n}} _{{\rm te}} ({\bf x} )} [ f({\bf x} ^{{\rm n}}, {\bf x} ^{{\rm p}}) ] $, which is equivalent to maximize the AUC in Eq. 8 since $ {\rm AUC} _{\rho} (s) = 1 - R _{\rho} (s) $. The minimum value of this risk is zero. Then, the corresponding loss for Eq. 13 becomes ${\cal L} _{{\rm risk}}(s) := \mathbb{E} _{{\bf x} \sim p _{{\rm te}} ({\bf x}) } \mathbb{E} _{{\bar {\bf x} } \sim p _{{\rm tr} } ({\bf x}) } [ f({\bar {\bf x}} ,{\bf x}) ] - \pi _{{\rm tr} } \mathbb{E} _{{\bf x} \sim p _{{\rm te}} ({\bf x}) } \mathbb{E} _{{\bf x} ^{{\rm p}} \sim p ^{{\rm p}} _{{\rm tr}} ({\bf x}) } [ f({\bf x} ^{{\rm p}} ,{\bf x}) ] $.
Since this loss is derived from the AUC risk, $\mathbb{E} _{{\bf x} \sim p _{{\rm te}} ({\bf x} )} \mathbb{E} _{{\bf x} ^{{\rm n}} \sim p ^{{\rm n}} _{{\rm te}} ({\bf x} )} [ f({\bf x} ^{{\rm n}}, {\bf x}) ]$, it should not take negative values. Thus, to prevent negative values of its empirical estimate ${\hat {\cal L} _{{\rm risk}}} (s) $, we used the loss with the absolute value function $|{\hat {\cal L} _{{\rm risk}}} (s)| $ for the optimization. This correction is successfully used in PU learning [15] or other weakly supervised learning [a].
By the way, if class-prior $\pi _{{\rm te}}$ is known, we can derive a tighter lower bound on the AUC risk. Specifically, we can obtain $\mathbb{E} _{{\bf x} \sim p _{{\rm te}} ({\bf x} )} \mathbb{E} _{{\bf x} ^{{\rm n}} \sim p ^{{\rm n}} _{{\rm te}} ({\bf x} )} [ f({\bf x} ^{{\rm n}}, {\bf x}) ] \geq (1-\pi _{{\rm te}})/2$. Here, we used the definition of $p _{{\rm te}} ({\bf x} )$ and the fact that the AUC risk between the same densities is $1/2$ as described in Lines 163--164. By substituting Eq. 11 for this, we can obtain ${\cal L} _{{\rm risk}}(s) \geq (1-\pi _{{\rm te}})(1-\pi _{{\rm tr}})/2 =: b > 0$. As a result, we can use $| {\hat {\cal L} _{{\rm risk}}} (s) - b| + b$ for the optimization. Our method with this correction (Our w/ b) tended to enhance the performance of it without the correction (Ours) in Table 3. However, note that Ours has the strong advantage of not requiring the class-prior and performed better than existing methods. We will include this result in the final version.
[a] Lu, Nan, et al. "Mitigating overfitting in supervised classification from two unlabeled datasets: A consistent risk correction approach." AISTATS2020.
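As a rough sketch of the quantities discussed here (our reading of the rebuttal's formulas, not the authors' implementation; all names are hypothetical), the empirical risk averages the pairwise sigmoid surrogate $f({\bar {\bf x}}, {\bf x}) = \sigma(s({\bar {\bf x}}) - s({\bf x}))$ over score arrays, and the "Ours w/ b" correction keeps it above the lower bound $b = (1-\pi _{{\rm te}})(1-\pi _{{\rm tr}})/2$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def empirical_risk(s_tr_unl, s_tr_pos, s_te_unl, pi_tr):
    # L_risk(s): all-pairs surrogate between training (unlabeled / positive)
    # scores and test unlabeled scores, minus the weighted positive term.
    f_u = sigmoid(s_tr_unl[:, None] - s_te_unl[None, :]).mean()
    f_p = sigmoid(s_tr_pos[:, None] - s_te_unl[None, :]).mean()
    return f_u - pi_tr * f_p

def corrected_risk(risk, pi_tr, pi_te):
    # "Ours w/ b": reflect the empirical risk above its lower bound b
    b = (1.0 - pi_te) * (1.0 - pi_tr) / 2.0
    return abs(risk - b) + b

rng = np.random.default_rng(0)
r = empirical_risk(rng.normal(0, 1, 100), rng.normal(1, 1, 30),
                   rng.normal(0, 1, 80), pi_tr=0.3)
print(corrected_risk(r, pi_tr=0.3, pi_te=0.4) >= (1 - 0.4) * (1 - 0.3) / 2)
```

By construction, the corrected value can never fall below $b$, which is the role of the correction in training.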
> Regarding the Extension in Section 4.4, it would be nice to cite the existing work.
Thank you for the suggestion. Since the extension in Section 4.4 uses positive, negative, and unlabeled data in the training distribution, it is related to semi-supervised learning.
A few studies [41,b] especially use the PU learning for semi-supervised learning.
However, semi-supervised learning does not consider the distribution shift. The final version will include a more detailed discussion and citations.
[b] Sakai et al. Semi-supervised classification based on classification from positive and unlabeled data. In ICML2017
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer.
Some of my questions and concerns are resolved.
> The method in [42] assumes the covariate shift
The paper [42] discussed the negative distribution shift and the risk to adapt the shift.
> We have already evaluated the extension of our method (Eq. 16) in Table 6 of Section C.4.
It would be better to put Table 6 in the main body.
> Eq. 12 is equivalent to the original AUC in Eq. 8.
It is nice to mention this for clarification.
> the results with and without the loss correction (Ours w/ nn and Ours) were almost identical
If there is an interpretation of why this happens, the contribution of this paper will become stronger.
---
Reply to Comment 1.1.1:
Comment: We appreciate your reply.
> The paper [42] discussed the negative distribution shift and the risk to adapt the shift.
Thank you for pointing this out. As you mentioned, the paper [42] discussed the negative distribution shift as well as the covariate shift.
Specifically, this paper considers the case, $p _{{\rm te}} ^{{\rm p}} ({\bf x}) = p _{{\rm tr}} ^{{\rm p}} ({\bf x}) $ but $p _{{\rm te}} ^{{\rm n}} ({\bf x}) \neq p _{{\rm tr}} ^{{\rm n}} ({\bf x}) $.
When positive and unlabeled data in the training distribution and unlabeled data in the test distribution are available, this paper showed that the original PU learning without re-weighting can deal with the negative distribution shift. This method requires class-prior $\pi _{{\rm te}}$.
We illustrate that the same can be said for AUCs easily. That is, negative distribution shifts can be addressed by the existing AUC maximization studies [41,54,55]. Specifically, the AUC in the test distribution is
${\rm AUC}_{\sigma} (s) = \mathbb{E} _{{\bf x} ^{{\rm p}} \sim p ^{{\rm p}} _{{\rm te}} ({\bf x} )} \mathbb{E} _{{\bf x} ^{{\rm n}} \sim p ^{{\rm n}} _{{\rm te}} ({\bf x} )} \left[ f({\bf x} ^{{\rm p}}, {\bf x} ^{{\rm n}}) \right] = \frac{1}{1-\pi _{{\rm te}}} \mathbb{E} _{{\bf x} ^{{\rm p}} \sim p ^{{\rm p}} _{{\rm tr}} ({\bf x} )} \mathbb{E} _{{\bf x} \sim p _{{\rm te}} ({\bf x} )} \left[ f({\bf x} ^{{\rm p}}, {\bf x}) \right] + C$,
where $C$ is a constant, and in the second equal sign, we used $p^{{\rm n}} _{{\rm te}} ({\bf x} ) = \frac{1}{1-\pi _{{\rm te}}} [ p _{{\rm te}}({\bf x}) - \pi _{{\rm te}} p _{{\rm te}} ^{{\rm p}}({\bf x}) ]$ and $p _{{\rm te}} ^{{\rm p}} ({\bf x}) = p _{{\rm tr}} ^{{\rm p}} ({\bf x})$, and the fact that the AUC between the same densities is a constant as described in Lines 163--164. The form of the derived AUC is equivalent to that in existing studies [41,54,55] and can be maximized with test unlabeled and positive training data. Unlike ordinary PU learning, test class-prior $\pi _{{\rm te}}$ is not required since it does not affect the optimization.
We will include this discussion in the final version.
> It would be better to put Table 6 in the main body.
Thank you for the suggestion. We will put Table 6 in the main body of the final version.
> It is nice to mention this for clarification.
We agree with you. We will mention this in the final version for clarity.
> If there is an interpretation of why this happens, the contribution of this paper will become stronger.
The reason for the ineffectiveness of Ours w/ nn is that the non-negative constraint is insufficient/weak in our setting. Specifically, in Ours w/ nn, the non-negativity of the empirical loss, ${\hat {\cal L} _{{\rm risk}}} (s)$, is derived from the non-negativity of the corresponding expected loss, ${\cal L} _{{\rm risk}} (s) = (1-\pi _{{\rm tr}}) \mathbb{E} _{{\bf x} \sim p _{{\rm te}} ({\bf x} )} \mathbb{E} _{{\bf x} ^{{\rm n}} \sim p ^{{\rm n}} _{{\rm te}} ({\bf x} )} [ f({\bf x} ^{{\rm n}}, {\bf x}) ]$.
Here, $f({\bf x} ^{{\rm n}}, {\bf x}) := \sigma (s({\bf x} ^{{\rm n}}) - s({\bf x})) \geq 0$, and when $s({\bf x}) \gg s({\bf x} ^{{\rm n}})$, $f({\bf x} ^{{\rm n}}, {\bf x})$ becomes zero.
However, since $p _{{\rm te}} ^{{\rm n}} ({\bf x})$ is contained in $p _{{\rm te}} ({\bf x})$ $(= \pi _{{\rm te}} p _{{\rm te}} ^{{\rm p}} ({\bf x}) + (1-\pi _{{\rm te}}) p _{{\rm te}} ^{{\rm n}} ({\bf x}))$, the minimum value of the expected loss ${\cal L} _{{\rm risk}} (s)$ can actually be greater than zero. Thus, the empirical loss could not be sufficiently constrained by the non-negativity, and the performance did not improve.
Additional information is required to know better/tighter constraints of the expected loss ${\cal L} _{{\rm risk}} (s)$.
When class-prior in the test distribution $\pi _{{\rm te}}$ is known, the tighter constraint can be derived, ${\cal L} _{{\rm risk}}(s) \geq (1-\pi _{{\rm te}})(1-\pi _{{\rm tr}})/2 =: b > 0$, and using this (Ours w/ b) tends to enhance the performance of Ours in Table 3 in the PDF file. (Note that Ours has the advantage of not requiring $\pi _{{\rm te}}$ and performed better than the existing methods even without the loss correction).
In the final version, we will include this discussion of why non-negativity is ineffective in our method.
---
Rebuttal 2:
Comment: Thank you for your response.
We investigated the learning process for the training loss, validation loss, and test AUC. Here, we used ${\hat {\cal L} _{{\rm risk}}} (s)$ for the loss. Note that the validation loss is the value of ${\hat {\cal L} _{{\rm risk}}} (s)$ calculated with validation data (PU data in the training distribution and U data in the test distribution).
The training loss of our method became smaller than the lower bound $b$ as learning progressed and took negative values on simple datasets (MNIST and Fashion MNIST). The validation loss (test AUC) tended to decrease (increase) initially but gradually increased (decreased) or stopped improving as learning progressed. These trends are consistent with those in ordinary PU learning, such as in the study [22].
However, the validation loss and test AUC were well correlated, and our method was able to select good models by using early-stopping with the validation loss. This is one of the reasons for the good performance of our method without the loss correction. Early-stopping was effective in preventing overfitting in the PU learning context.
Note that all methods in our experiments used early-stopping.
We will include this result and discussion in the final version.
---
Rebuttal Comment 2.1:
Comment: We would like to kindly inform you that we have sent a response to your question. We may have posted it just as the OpenReview email notifications temporarily ceased, so we wanted to ensure you received it by notifying you again. We apologize for the inconvenience and appreciate your understanding. | Summary: This paper addresses the challenge of maximizing the Area Under the Receiver Operating Characteristic Curve (AUC) in imbalanced binary classification problems under a positive distribution shift, that is, a shift in which the negative data remain constant but the positive data vary. A new method is proposed that utilizes labeled positive and unlabeled data from the training distribution, along with unlabeled data from the test distribution, to maximize the AUC effectively in the presence of such shifts.
Strengths: This paper introduces a new loss function designed for AUC maximization under positive distribution shifts. Previous research has focused separately on AUC maximization and positive distribution shifts, but this study found the intersection of these two areas. The authors have successfully identified and explored this new research niche. The proposed loss function, derived from mathematical foundations, can be readily integrated into neural network training, offering a practical application for enhancing model performance. This paper is well-structured and clearly written, making it easy to follow.
Weaknesses: Despite its strengths, this research primarily offers a simple proposal of a loss function, suggesting its contributions to the field might be limited. An expansion to include various metrics, such as F-1 and G-mean of TPR and TNR, which are also relevant for imbalanced data classification, could enrich this paper. Additionally, the experimental validation is somewhat restricted, utilizing only four datasets, all of which are image datasets. A more comprehensive evaluation using a broader range of datasets is necessary to fully assess the proposed loss function's effectiveness. Therefore, the reviewer believes that the contribution of this research may not be substantial enough for acceptance at a top-tier conference.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1: Please specify the scenarios where both class imbalance and positive distribution shift occur. Providing detailed examples will help readers grasp the practical significance of this research problem.
Q2: Why did the authors choose to conduct their experiments exclusively with image datasets? Are there any other real-world problems?
Q3: AUC maximization can be implemented not just as a loss function for neural networks, but across various machine learning methods. Why did you choose to focus on proposing a loss function?
Q4: The reviewer is not convinced that Lines 4-5 in Algorithm 1 sufficiently demonstrate the training process. There needs to be a more detailed and mathematical explanation of how model parameters are updated using the proposed loss function.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: L1: An expansion to include various metrics, such as F-1 and G-mean of TPR and TNR, which are also relevant for imbalanced data classification, could enrich this paper.
L2: For extremely imbalanced cases, training difficulties are likely to arise. It is necessary to address how the proposed loss function can be effectively minimized in these scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and constructive feedback.
> An expansion to include various metrics, such as F-1 and G-mean of TPR and TNR, which are also relevant for imbalanced data classification, could enrich this paper.
Thank you for the suggestion. We agree that extensions to maximize other metrics such as F-1 could enrich our paper. However, AUC is the most representative metric for imbalanced data, and many papers (including top conference and journal papers such as ICLR, AAAI, ICCV, ICDM, NeurIPS, and TPAMI) focus solely on AUC maximization [12, 29, 41, 54, 55, 57, 58, 60, 62]. We, therefore, believe that the current proposal is still worthy of publication. Extending our problem setting to maximize other metrics is an interesting future work.
As a supplement, we evaluated F1 scores of each method.
The results are described in Table 1 of the PDF file in the global response.
Our method also outperformed the others on F1 even though it maximizes the AUC.
This result may imply that AUC maximization can also help improve other evaluation metrics for imbalanced data.
We will include this result and the above discussion (future work) in the final version.
> the experimental validation is somewhat restricted, utilizing only four datasets, all of which are image datasets. A more comprehensive evaluation using a broader range of datasets is necessary to fully assess the proposed loss function's effectiveness.
Thank you for the constructive comment.
We additionally evaluated our method on two tabular datasets (Hospital Readmission and Hypertension [a]).
In Hospital Readmission, the task is to predict the 30-day readmission of diabetic hospital patients.
In Hypertension, the task is a hypertension diagnosis for high-risk age.
Both datasets are widely used for distribution shift adaptation studies [a].
We used positive-shifted data to construct the positive distribution shift.
The experimental settings, such as the number of data, are the same as the submitted paper.
The results are described in Table 2 of the PDF file.
Our method outperformed the others. These results show that our method works well in tabular datasets. We will include these results in the final version.
[a] Gardner et al. "Benchmarking distribution shift in tabular data with tableshift." NeurIPS2023.
> Q1: Please specify the scenarios where both class imbalance and positive distribution shift occur. Providing detailed examples will help readers grasp the practical significance of this research problem.
We have described the motivating examples (cyber security, medical diagnosis, and visual inspection) where both class imbalance and positive distribution shifts occur in the first and second paragraphs of Section 1. For example, in cyber security, malicious data (positive data) are much smaller than benign data (negative data). In addition, malicious adversaries often change their attacks (malicious data) to bypass detection systems, while benign data do not change much [9, 15, 63]. Thus, our problem setting is suitable for this application.
> Q3: AUC maximization can be implemented not just as a loss function for neural networks, but across various machine learning methods. Why did you choose to focus on proposing a loss function?
We focus on the loss function because it has a high degree of generality: it can be combined with any (differentiable) models such as linear models, kernel models, and neural networks, as described in Lines 66--67. This characteristic is beneficial in practice because the available computing resources vary depending on the application site.
For example, when resources such as GPUs are scarce, which is often the case, our loss function can be used with lightweight models such as linear models. When ample resources are available, huge models such as large neural networks can also be used.
> Q4: The reviewer is not convinced that Lines 4-5 in Algorithm 1 sufficiently demonstrate the training process. There needs to be a more detailed and mathematical explanation of how model parameters are updated using the proposed loss function.
Sorry for the lack of explanation.
We describe a more detailed and mathematical explanation of Lines 4 and 5 in Algorithm 1 below:
- Let $\\{ {\bar x} _{{\rm tr},m} ^{\rm p} \\} _{m=1}^{P}$ be sampled positive data from $X _{{\rm tr}} ^{\rm p}$, and let $\\{ {\bar x} _{{\rm tr},m} \\} _{m=1} ^{M _{{\rm tr}}} \cup \\{ {\bar x} _{{\rm te},n} \\} _{n=1} ^{M _{{\rm te}}}$ be sampled unlabeled data from $X _{{\rm tr}} \cup X _{{\rm te}}$. Then, in Line 4, we calculate the loss in Eq. 14 on the sampled data. That is,
${\hat {\mathcal L}} _{{\rm sampled}} (\theta) := - \frac{1}{M _{{\rm te}} M _{{\rm tr}}} \sum _{n,m=1} ^{M _{{\rm te}}, M _{{\rm tr}}} f({\bar {\bf x}} _{{\rm te}, n}, {\bar {\bf x}} _{{\rm tr}, m}) + \frac{\pi _{{\rm tr}}}{M _{{\rm te}} P} \sum _{n,m=1} ^{M _{{\rm te}}, P } f({\bar {\bf x}} _{{\rm te}, n}, {\bar {\bf x}} _{{\rm tr}, m} ^{{\rm p}})$,
where $\theta$ denotes the parameters of the score function $s$.
- In Line 5, we update $\theta$ by using a stochastic gradient descent. That is,
$\theta \leftarrow \theta - \mu \frac{\partial {\hat {\mathcal L}} _{{\rm sampled}}}{\partial \theta} (\theta)$,
where $\mu \in \mathbb{R} _{>0}$ is a learning rate.
We will add this explanation in the final version.
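To make Lines 4 and 5 above fully concrete, below is a small illustrative sketch. It is not our actual implementation: the linear score function, the sigmoid pairwise surrogate standing in for $f(\cdot,\cdot)$, and all variable names are assumptions made only for illustration; the point is the two-step structure of computing a minibatch loss and taking one stochastic gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sampled_loss_and_grad(w, X_te, X_tr, X_tr_p, pi_tr):
    """Line 4: evaluate the sampled loss for a linear score s(x) = w @ x,
    with an illustrative pairwise surrogate f(a, b) = sigmoid(s(a) - s(b))."""
    def pair_term(A, B):
        U = (A @ w)[:, None] - (B @ w)[None, :]       # s(a_n) - s(b_m), shape (N, M)
        S = sigmoid(U)
        # analytic gradient of mean sigmoid(w @ (a - b)) w.r.t. w
        W = (S * (1.0 - S))[..., None]                # shape (N, M, 1)
        G = (W * (A[:, None, :] - B[None, :, :])).mean(axis=(0, 1))
        return S.mean(), G
    l_u, g_u = pair_term(X_te, X_tr)                  # unlabeled-unlabeled term
    l_p, g_p = pair_term(X_te, X_tr_p)                # unlabeled-positive term
    return -l_u + pi_tr * l_p, -g_u + pi_tr * g_p

# toy minibatch: test unlabeled, train unlabeled, train positive data
d = 5
X_te = rng.normal(size=(8, d))
X_tr = rng.normal(size=(10, d))
X_tr_p = rng.normal(size=(4, d))

w, mu = np.zeros(d), 0.1
loss, grad = sampled_loss_and_grad(w, X_te, X_tr, X_tr_p, pi_tr=0.3)
w = w - mu * grad                                     # Line 5: SGD update
```

The same two steps apply unchanged to any differentiable score function; only the gradient computation (here analytic, for the linear model) changes.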
> L2: For extremely imbalanced cases, training difficulties are likely to arise. It is necessary to address how the proposed loss function can be effectively minimized in these scenarios.
As you mentioned, highly imbalanced data will be difficult to learn for all methods, including ours. This is due to the difficulty in extracting positive data information from unlabeled data. However, in Table 4 of Section C.2, we confirmed that our method outperformed the others when small class-priors on the training distribution (e.g., $\pi _{{\rm tr}}=0.01$) are used. | Rebuttal 1:
Rebuttal: Dear all reviewers,
Thank you very much for the detailed and constructive feedback on our paper. We would like to revise the paper based on the comments. A pdf file with additional experiments is attached to this global response.
Best regards,
Authors
Pdf: /pdf/b619aba9f8d582053efbff25b4b1a9f22bbb635e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Simple and Optimal Approach for Universal Online Learning with Gradient Variations | Accept (poster) | Summary: This paper studies universal Online Convex Optimization (OCO) with gradient-variation-dependent regret bounds. That is, the goal is to design a single algorithm that is unaware of, but able to adapt to, the following two unknowns: 1) the type of curvature: the loss functions could be convex, strongly convex, or exp-concave; 2) the curvature coefficient: the exp-concavity $\alpha$ or strong convexity $\lambda$. As a result, the regret guarantee achieved by the algorithm scales with the cumulative gradient variation $V_T$ (which depends on the loss function sequence), rather than the time horizon $T$, as well as the corresponding curvature type of the underlying loss functions.
This paper proposes a new, simple algorithm that for the first time achieves optimal gradient-variation bounds for all three curvature types. Note that gradient-variation bounds immediately imply small-loss (aka first-order) regret bounds as well as worst-case bounds. The number of base learners is also improved from $(\log T)^2$ to $\log T$ due to the two-layer structure. The main result also finds broad applications, including the SEA model and dynamic regret bounds.
On the technique side, the improvement comes from an alternative way to analyze the empirical gradient variation w.r.t. surrogate losses, utilizing a negative Bregman divergence term (due to linearization), which is often omitted in the analysis, to cancel other positive terms.
Strengths: I overall like such results. The authors present their observations and insights (from the regret analysis) in detail, leading to improved (and indeed optimal) regret bounds and even (conceptually) simpler algorithm design.
Weaknesses: I didn’t spot any significant technical issues, and I'm just suggesting some minor “weakness”.
1. When the authors introduce the notion of $F_T$ and the small-loss bound for the first time (around Eq. (1.2)), they may want to add that the loss functions are now assumed non-negative (which I think should be necessary for all small-loss/first-order bounds?). Obviously, one can't take the square root or logarithm of a negative number.
2. In the application to dynamic regret, the problem setup is not clearly defined. What is the type of loss function? Is the strong-convexity/log-concavity known? It is particularly confusing since it’s right after the universal OCO setup.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The idea of utilizing negative Bregman divergence terms also appeared in other problems, such as high-probability regrets in adversarial bandits [1]. Could the authors comment on the connection (if any) between the use therein and this work?
2. Under the universal OCO setup, is it possible to handle time-varying curvatures, just like in [2]?
3. It seems that a universal OCO algorithm cannot be anytime? The reason is that the number of base learners (for discretization) depends on $T$.
References
[1] Lee, Chung-Wei, Haipeng Luo, Chen-Yu Wei, and Mengxiao Zhang. "Bias no more: high-probability data-dependent regret bounds for adversarial bandits and mdps." Advances in neural information processing systems 33 (2020): 15522-15533. https://arxiv.org/abs/2006.08040
[2] Luo, Haipeng, Mengxiao Zhang, and Peng Zhao. "Adaptive bandit convex optimization with heterogeneous curvature." In Conference on Learning Theory, pp. 1576-1612. PMLR, 2022. https://proceedings.mlr.press/v178/luo22a/luo22a.pdf
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and appreciation of our work! We answer your questions in the following.
**Q1.** When the authors introduce the notion of $F_T$ and small-loss bound for the first time (around Eq. (1.2)), they may want to add that now the loss functions are non-negative (which I think should be necessary for all small-loss/first-order bounds?). Obviously, one can’t take squared root or logarithmic to a negative number.
**A1.** Thank you for the advice. Actually, the non-negative assumption is not necessary. Specifically, the non-negativity is only used in the self-bounding property, i.e., $\|\nabla f(x)\|^2 \le 2L f(x)$ to achieve small-loss bounds. Without non-negativity, the self-bounding property still holds as $\|\nabla f(x)\|^2 \le 2L (f(x) - \inf_x f(x))$ [A Modern Introduction to Online Learning, Theorem 4.23]. In this case, we can correspondingly define a slightly complicated notion of small loss. We will emphasize that the non-negativity assumption is only adopted here for simplicity in the revised version.
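For completeness, the shifted self-bounding property follows from $L$-smoothness in one line: writing $f^{\star} := \inf_{x} f(x)$ and plugging $y = x - \frac{1}{L} \nabla f(x)$ into the smoothness upper bound $f(y) \le f(x) + \langle \nabla f(x), y - x \rangle + \frac{L}{2} \|y - x\|^2$ gives
$f^{\star} \le f(x - \frac{1}{L} \nabla f(x)) \le f(x) - \frac{1}{L} \|\nabla f(x)\|^2 + \frac{1}{2L} \|\nabla f(x)\|^2 = f(x) - \frac{1}{2L} \|\nabla f(x)\|^2$,
which rearranges to $\|\nabla f(x)\|^2 \le 2L (f(x) - f^{\star})$, and reduces to $\|\nabla f(x)\|^2 \le 2L f(x)$ whenever $f \ge 0$ (since then $f^{\star} \ge 0$).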
---
**Q2.** In the application to dynamic regret, the problem setup is not clearly defined. What is the type of loss function? Is the strong-convexity/log-concavity known? It is particularly confusing since it’s right after the universal OCO setup.
**A2.** Thank you for the advice. Our results for dynamic regret minimization hold for ***convex*** functions, and we aim to validate the generality of our proposed technique in this application. We will clarify the problem setup more clearly in the revised version.
---
**Q3.** The idea of utilizing negative Bregman divergence terms also appeared in other problems, such as high-probability regrets in adversarial bandits [1]. Could the authors comment on the connection (if any) between the use therein and this work?
**A3.** Thank you for the reference. Lemma B.13 in [1] lower-bounds a quantity of $D_{\Psi}(u,x_t)$, where $u$ is the comparator and $x_t$ is the learner's decision. The similarity to our work is that both use a negative Bregman divergence term for cancellation purposes. However, the difference is that their Bregman divergence is defined on a normal barrier $\Psi(\cdot)$, a special self-concordant barrier, which is a widely used concept in bandit convex optimization problems. In contrast, our Bregman divergence is defined on the online function $f_t(\cdot)$ and focuses on the full-information setup.
---
**Q4.** Under the universal OCO setup, is it possible to handle time-varying curvatures, just like in [2]?
**A4.** Thank you for the insightful question. Our work and [2] investigate two orthogonal problems regarding function curvatures. Concretely, we study the case where the curvatures are ***unknown and homogeneous***, while [2] focuses on ***known but heterogeneous*** curvatures ('known' means that the learner can access the curvature coefficients after submitting her decisions). Handling time-varying curvatures in universal problems (i.e., ***both unknown and heterogeneous*** curvature) is extremely hard and it seems impossible to deal with such a challenging problem at the moment.
---
**Q5.** Seems that a universal OCO algorithm cannot be anytime? The reason is that the number of base learner (for discretization) depends on $T$.
**A5.** Thank you for the question. As the reviewer said, achieving anytime algorithms is hard since the number of base learners relies on the time horizon $T$. To handle unknown $T$ (an easier case), a possible solution that we could imagine may be using the doubling trick. However, it may import additional logarithmic factors in the final regret bound, which are unacceptable for exp-concave and strongly convex functions since $\log T$ factors would ruin the desired $\log V_T$ regret. This is also one of the open problems raised in the previous work of Zhang et al. [2022].
---
Rebuttal 2:
Comment: I thank the authors for the reply. Currently, I do not have any other concerns and I maintain my scores. | Summary: The paper studied the problem of regret minimization of a set of functions $\{f_t\}_{t=1}^{T}$ over a compact and convex constraint set $\mathcal{X}$, i.e.,
$\sum_{t=1}^{T} f_{t}(x_t) - \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_{t}(x),$
where $x_t$ is the output of the proposed algorithm at round $1\leq t\leq T$.
The set of functions $\{f_t\}_{t=1}^{T}$ potentially satisfies certain curvature assumptions, e.g., strong convexity, convexity, or exp-concavity. In the paper, it is unknown which curvature assumption the functions satisfy. The main goals of the paper are the following:
1. To construct a universal algorithm that adaptively acts on the curvature property of the function and achieves a proper regret bound.
2. For the case where the function is $\lambda$-strongly convex or $\alpha$-exp-concave, the algorithm should adapt with respect to the curvature parameter, $\lambda$ or $\alpha$.
3. The algorithm should achieve a good problem-dependent regret bound: The goal of the paper is to attain a regret bound that depends on the following quantities:
$V_T = \sum_{t=1}^{T} \sup_{x \in \mathcal{X}} \| \nabla f_{t}(x) - \nabla f_{t-1}(x) \|^2, \quad \text{and} \quad F_T = \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x).$
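As a toy illustration (not from the paper) of why $V_T$ can be much smaller than $T$: for linear losses $f_t(x) = \langle g_t, x \rangle$ the gradient is constant in $x$, so the sup is trivial and $V_T = \sum_{t} \|g_t - g_{t-1}\|^2$. The sketch below compares arbitrarily varying gradients with slowly drifting ones.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 1000, 3

def gradient_variation(G):
    """V_T for linear losses f_t(x) = <g_t, x>, where grad f_t(x) = g_t
    for all x, so the sup over x in the definition is trivial."""
    return float(sum(np.linalg.norm(G[t] - G[t - 1]) ** 2 for t in range(1, len(G))))

G_adv = rng.normal(size=(T, d))                              # arbitrarily varying gradients
G_slow = np.cumsum(rng.normal(size=(T, d)) * 0.01, axis=0)   # slowly drifting gradients

V_adv = gradient_variation(G_adv)
V_slow = gradient_variation(G_slow)
# slowly varying environments give V_T << T, which is exactly where
# gradient-variation bounds beat the worst-case sqrt(T)-type bounds
```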
The proposed algorithm of the paper is a modification of the algorithm proposed by [1]. Similar to the approach introduced by [1], in Algorithm 1 (page 5) of the paper, the authors propose *base learners* that are aggregated by a meta-algorithm, which produces the final output of the algorithm at round $t$, $x_t$. The contribution of the paper mainly concerns the technical aspects that improve upon [1] from the following points of view:
1. The paper improves the number of required base learners from $(\log T)^2$ (in [1]) to $\log T$.
2. The resulting algorithm improves upon the regret of [1] by a logarithmic factor in the situation where the loss functions $f_t$ are convex.
[1] Y.-H. Yan, P. Zhao, and Z.-H. Zhou. Universal online learning with gradual variations: A multi-layer online ensemble approach. In Advances in Neural Information Processing Systems 36 (NeurIPS), 2023.
Strengths: The paper uses a simple but interesting technique that yields tighter bounds in the case of convex losses. Inspired by [2], the authors exploit the imposed smoothness assumption on the loss function and the negative Bregman divergence term arising from linearization of the loss function, as explained in Section 3.2.
[2] P. Joulani, A. Raj, A. Gyorgy, and C. Szepesvari. A simpler approach to accelerated optimization: iterative averaging meets optimism. In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
Weaknesses: The main weakness of the paper lies in its presentation. The content is too dense, and the last section on dynamic regret could be moved to the appendix. Some key parts of the paper are not well explained. For instance, it is unclear how the authors managed to reduce the number of required base learners in [1] by a logarithmic factor. Was this achieved through the application of the negative Bregman divergence term?
The contribution of the paper is limited to a simple technical improvement that enhances the achieved regret up to a logarithmic factor for convex functions.
The optimality of the result with respect to $V_T$ and $F_T$ has not been discussed by the authors.
[1] Y.-H. Yan, P. Zhao, and Z.-H. Zhou. Universal online learning with gradual variations: A multi-layer online ensemble approach. In Advances in Neural Information Processing Systems 36 (NeurIPS), 2023.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. I do not understand the comment on the small $\alpha$ and $\lambda$ in lines 146-147. Can these cases be considered convex? For the convex case, the rate is of the order of $\sqrt{T}$. How do the optimal minimax results hold for this regime, which indicates that the regret is linear?
2. I would appreciate it if the authors could explain the question I raised in the weakness section and outline the main difference that helps them improve the number of base learners.
3. I do not understand the comment in line 305: "which can be easily canceled by the negative term from curvatures in the meta regret." For this cancellation to occur, the coefficient in the equation after line 304 really matters. Could the authors explain this further?
4. Could the authors explain if the final result is optimal with respect to $V_T$ and $F_T$?
Minor Points:
The terms $\sigma_{\max}$ and $\Sigma_{\max}$ are not defined in Theorem 3.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and appreciation of our work! Due to the 6,000-character limit of the rebuttal, we address your major questions below and respond to other minor issues in the next reply after the discussion period starts.
---
**Q1.** I do not understand the comment on the small $\alpha$ and $\lambda$ in lines 146-147. Can these cases be considered convex? For the convex case, the rate is of the order of $\sqrt{T}$. How do the optimal minimax results hold for this regime, which indicates that the regret is linear?
**A1.** Thank you for the question. We take $\alpha$-exp-concavity as an example for illustration. Since **exp-concave functions are also convex**, universal methods actually guarantee the final rate as $\min\\{\frac{d}{\alpha} \log V_T, \sqrt{V_T}\\}$, thus safeguarding $\sqrt{V_T}$ even when $\alpha = 1/T$. Similarly, a bound of $\min\\{\frac{d}{\alpha} \log T, \sqrt{T}\\}$ holds for minimax regret for exp-concave functions. We will discuss this issue in the next version. Thank you for highlighting it for improvement.
---
**Q2.** It is unclear how the authors managed to outperform the number of required base learners in [1] by a logarithmic factor. Was this achieved through the application of a Bregman divergence negative term?
**A2.** Thank you for the question. We clarify that the ***base learner number is only determined by the algorithm design***. Previous $\log^2 T$ base learners come from their *multi-layer (three-layer)* algo. with a *two-layer meta learner*, as explained in Line 90 and Lines 233-237. By contrast, we have only $\log T$ base learners by using ***the single Optimistic ADAPT-ML-PROD as the meta learner***, resulting in *two layers (not three)*. We clarify that simply using Optimistic ADAPT-ML-PROD in previous methods (e.g., in [3]) cannot achieve the optimal rates. The simplicity of our meta algo. is owing to our novel analysis to bypass the stability cancelation in [3].
---
**Q3.** I do not understand the comment in line 305: "which can be easily canceled by the negative term from curvatures in the meta regret." For this cancellation to occur, the coefficient in the equation after line 304 really matters. Could the authors explain this further?
**A3.** Thank you for the insightful observation. We have chosen appropriate coefficients in the detailed proofs in the appendices and we omitted them from the main paper only for clarity. For more details, please kindly refer to the proof of Theorem 1. For example, in the 'Regret Analysis' part on Page 17, the coefficients from $C_2$ to $C_7$ are used to carefully balance the positive and negative terms such that they can be canceled. And we clarify that these coefficients only exist in analysis and thus can be chosen arbitrarily.
---
**Q4.** Could the authors explain if the final result is optimal with respect to $V_T$ and $F_T$?
**A4.** Thank you for the great question. We take $V_T$-bounds as an example for clarification.
* For known curvatures (strongly convex/exp-concave/convex), the SOTA rates are $\log V_T$, $d \log V_T$ and $\sqrt{V_T}$ , and can recover minimax optimal $\log T$, $d \log T$ and $\sqrt{T}$. For convex functions, an $\Omega(\sqrt{V_T})$ lower bound is known in [Regret Bounded by Gradual Variation for Online Convex Optimization, Remark 5]. Nevertheless, for the other two cases, though the current rates are believed to be optimal, the precise problem-dependent lower bounds are still unclear.
* In our paper, for "optimality", we mean that our universal bounds match the same best-known rates as if the curvature information were known. We will revise the paper to make this point clearer to readers.
---
**Q5.** The contribution of the paper is limited to a simple technical improvement that enhances the achieved regret up to a logarithmic factor for convex functions.
**A5.** We would like to take this opportunity to further highlight our contributions, including the problem, techniques, and applications. We will improve the presentation in the revised version.
* **Problem:** Universal online learning is important for its robustness, and gradient variation for its profound connections with stochastic/adversarial optimization, game theory, etc. Studying how to achieve gradient-variation regret in universal online learning is therefore essential. We contribute to ***achieving the optimal rates in this fundamental problem***, thereby solving the ***major open problem in [3]*** (please kindly refer to their conclusion).
* **Techniques:** ***Our technique's simplicity is advantageous for its generality***. It succeeds in avoiding controlling the algorithm stability of $\|x_t - x_{t-1}\|^2$, the reason for the suboptimality and inefficiency of [3]. It is ***the first alternative solution to gradient-variation regret since it was first proposed in [1]***. We believe this technical insight is useful for broader applications and take ***dynamic regret minimization*** as an example, where a much simpler method using our technique obtains the same SOTA dynamic regret as [2].
* **Applications:** We validate the significance of our results in the ***stochastically extended adversarial (SEA) model*** and ***dynamic regret minimization*** and achieve ***SOTA*** rates therein. Note that [3] only achieved suboptimal bounds in the SEA model (Table 3 in [3]) and cannot be used in dynamic regret minimization.
---
We will carefully revise the paper according to your suggestions and questions. If our responses have satisfactorily addressed your concerns, we would appreciate it so much if you could re-evaluate our work. Thank you!
**References:**
[1] Online Optimization with Gradual Variations, COLT 2012 (Best Student Paper)
[2] Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization, JMLR 2024
[3] Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach, NeurIPS 2023
---
Rebuttal Comment 1.1:
Title: Response to other comments
Comment: In this reply, we respond to presentation/definition issues, hoping to help you better understand our work.
**Q6.** The content is too dense, and the last section on dynamic regret could be moved to the appendix.
**A6.** Thank you for the advice. We use dynamic regret minimization as an example to validate the generality and significance of our technique and believe it is worth listing in the main part. Given that an extra page is allowed in the camera-ready version, we will provide more detailed explanations of our method and applications to help readers understand them better.
**Q7.** The terms $\sigma_{\max}$ and $\Sigma_{\max}$ are not defined in Theorem 3.
**A7.** Thank you for pointing it out. $\sigma_{\max}^2 \triangleq \max_{t \in [T]} \max_{\mathbf{x} \in \mathcal{X}} \mathbb{E}\_{f_t \sim \mathcal{D}\_t} [\\|\nabla f_t(\mathbf{x}) - \nabla F_t(\mathbf{x})\\|^2]$ and $\Sigma_{\max}^2 \triangleq \max_{t \in [T]} \sup_{\mathbf{x} \in \mathcal{X}} \\|\nabla F_t(\mathbf{x}) - \nabla F_{t-1}(\mathbf{x})\\|^2$. We will add their definitions in the revised version. | Summary: This paper investigates the problem of universal online convex optimization to achieve problem dependent regret guarantees for different classes of convex functions (strongly convex, exp-concave, and convex) simultaneously. Problem/function/data dependent regret guarantees have become popular in literature to bridge stochastic and adversarial guarantees.
Strengths: S1) The paper is well written and easy to understand.
S2) The literature review is comprehensive and up to date.
S3) Simplicity of the incorporation of Bregman divergence is a plus.
Weaknesses: W1) The contribution seems limited in that the improvement is only logarithmic for both efficiency and regret results.
W2) While the regret analysis is novel, algorithmic contribution is very limited, which leads me to believe this paper is more suitable to be a technical note.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1) Why is $\log^2 T$ computational complexity claimed to be inefficient throughout the paper? In Table 1, the number of gradient queries and base learners are given as part of efficiency; however, a decrease in the number of queries seems much more significant to me.
Q2) Improvement over the results of Yan et al. [2023] seems incremental. Are there scenarios where this improvement becomes significant?
Q3) Is your approach the same as Zhang et al. [2022] but using Optimistic ADAPT-ML-PROD [Wei et al., 2016] instead of ADAPT-ML-PROD [Gaillard et al., 2014]?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No significant limitations. Necessary assumptions about the problem setting are properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your feedback. Below, we aim to address your concerns about the number of base learners, the significance of our contributions, and the algorithmic improvements.
---
**Q1-a.** Why is $\log^2 T$ computational complexity claimed to be inefficient throughout the paper?
**A1-a.** Thank you for the question. We clarify that the $\log^2 T$ complexity is ***less efficient*** compared with methods with only $\log T$ base learners. We will revise the statements of "inefficient" in the next version.
**Q1-b.** In Table 1, the number of gradient queries and base learners are given as part of efficiency, however, a decrease on the number of queries seems much more significant to me.
**A1-b.** Thank you for the comment. Online ensemble runs a meta learner over multiple base learners and each base learner runs an online learning algo. separately. Consider a simple case: each base learner runs OGD $x_{t+1} = \Pi_{D}[x_t - \eta \nabla f_t(x_t)]$, where $\nabla f_t(x_t)$ is the gradient and $\Pi_D[\cdot]$ is the projection onto the domain. Since computing gradients and projections is usually time-consuming, and each base learner has to conduct such operations in their own update, it is essential to reduce the number of them. In this work, we reduce the gradient query number to one per round by using surrogate functions and reduce the projection number by reducing the number of base learners. We will revise the paper to make this clearer to readers.
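For illustration, a minimal sketch of such a base learner (a generic projected OGD over a Euclidean ball, not our actual algorithm): every round pays one gradient evaluation and one projection, and each base learner incurs this cost separately, which is why reducing their number matters.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius}."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def ogd_step(x, grad, eta, radius=1.0):
    """One projected OGD step: x <- Pi_D[x - eta * grad]."""
    return project_ball(x - eta * grad, radius)

# toy run on linear losses f_t(x) = <g_t, x>; each iteration below
# costs one gradient query and one projection -- per base learner
rng = np.random.default_rng(0)
x = np.zeros(3)
for t in range(1, 101):
    g_t = rng.normal(size=3)                    # gradient query for this round
    x = ogd_step(x, g_t, eta=1.0 / np.sqrt(t))  # standard 1/sqrt(t) step size
```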
---
**Q2.** The contribution seems limited in that the improvement is only logarithmic for both efficiency and regret results. Are there scenarios where the improvement over the results of Yan et al. [2023] becomes significant? (response to W1 and Q2)
**A2.** Thank you for the question. We clarify the significance of our paper from the aspects of the problem, applications, and techniques below.
* **Problem:** OCO is fundamental in online learning due to its generality. Curvatures are important to the best attainable results in OCO. However, traditional methods require knowing them in advance to select suitable algorithms, which is cumbersome in practice. ***Universal*** methods do not require such prior knowledge and can achieve the same optimal rates as if the curvatures were known. Furthermore, the ***gradient variation*** is essential in modern online learning due to its profound connections with stochastic/adversarial optimization, game theory, etc. Therefore, studying how to achieve gradient-variation regret in universal online learning is an essential problem. We contribute to ***achieving the optimal rates in this fundamental problem***, thereby solving the ***major open problem in [3]*** (please kindly refer to their conclusion).
* **Applications:** We validate the significance of our results in the ***stochastically extended adversarial (SEA) model*** and ***dynamic regret minimization*** and achieve ***SOTA*** rates therein. Note that [3] only achieved suboptimal bounds in the SEA model (Table 3 therein) and cannot be used in the dynamic regret minimization problem.
* **Techniques:** We have provided a novel technical perspective to gradient-variation regret. Instead of controlling the algorithm stability of $\|x_t - x_{t-1}\|^2$, which is the reason for the suboptimality and inefficiency of [3]'s method, we propose leveraging the Bregman divergence-related negative terms. This is ***the first alternative solution to gradient-variation regret since it was first proposed in [1]***. We believe this technical insight is useful for broader applications and take ***dynamic regret minimization*** as an example, where a much simpler method using our technique obtains the same SOTA dynamic regret as [2]. It cannot be done via the techniques of [3].
At last, we clarify that our improvement in efficiency is not just reducing previous ones by a log factor, but ***simplifying the previous complicated three-layer algorithm of [3] to a simpler two-layer one***, as explained in Lines 223-237.
---
**Q3.** While the regret analysis is novel, algorithmic contribution is very limited. Is your approach the same as Zhang et al. [2022] but using Optimistic ADAPT-ML-PROD [Wei et al., 2016] instead of ADAPT-ML-PROD [Gaillard et al., 2014]? (response to W2 and Q3)
**A3.** Thank you for the question. Below, we clarify the differences between our method and that of Zhang et al. [2022].
* **Meta Algorithm:** We use Optimistic ADAPT-ML-PROD while they used ADAPT-ML-PROD. We clarify that simply using Optimistic ADAPT-ML-PROD in previous methods (e.g., in [3]) will ***not*** lead to the same optimal rates as we did. The simplicity of our meta algorithm is owing to our novel analysis, which bypasses the stability cancellation in [3].
* **Base Algorithm:** We use ***surrogate functions***, defined in eq. (3.1), for the base learner update in Algorithm 1 (they directly optimized $f_t(\cdot)$). This is also the key to ensuring our method requires only one gradient query per round.
Besides, as shown in our title, one of our contributions is proposing a *simple and optimal* algorithm. In our humble opinion, designing a simple and optimal algorithm is also a non-negligible contribution, because ***simple algorithms are usually more efficient and reflect the essence of the problem***, which is also acknowledged by Reviewer #y66n as one of our contributions.
---
We will carefully revise the paper to ensure our contributions are clear to readers. If our responses have properly addressed your concerns, we would appreciate it so much if you could re-evaluate our work. Thank you!
**References:**
[1] Online Optimization with Gradual Variations, COLT 2012 (Best Student Paper)
[2] Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization, JMLR 2024
[3] Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach, NeurIPS 2023
---
Rebuttal Comment 1.1:
Comment: I acknowledge the author's rebuttal. Thank you for a detailed response. I suggest you revise the manuscript with these explanations. I have no further questions.
---
Reply to Comment 1.1.1:
Title: Thanks for the feedback
Comment: We appreciate your feedback and acknowledgment of our work. We will emphasize more about the computational efficiency, the significance of our improvements over Yan et al. [2023], and the algorithm contributions in the revised version. Thanks again for your valuable and helpful review. | Summary: The authors study the regret minimization problem in online convex optimization without access to curvature information. They tackle the task of achieving problem-dependent optimal regret while requiring no prior knowledge of the function class (convex, exp-concave, or strongly convex). They propose an efficient-to-implement two-layer online ensemble structure that requires only one gradient query within each round. Their main technical novelty lies in providing a novel approach for gradient-variation bounds.
Strengths: The authors tackle a very interesting problem in online convex optimization. The paper is well-written and the presentation makes it easy to follow. The main novelty lies in Sections 3.2 and 3.3, where they provide a new way of tackling gradient variations by utilizing the Bregman divergence term. They also make clever use of Proposition 1 in their analysis. The overall method utilizes techniques from several existing works and cleverly combines them to achieve an impressive bound on the regret.
Weaknesses: The proposed approach seems reasonable to me. While I have not gone through the technical details very carefully, I seek one clarification on the proof of Theorem 1. In my opinion, the bottleneck of the proof is in showing the existence of an appropriate choice of $C_3$ and $C_4$ (page 17, line 594, 596). Can the authors comment if such a setting always exists? I would at least expect certain conditions like $\alpha_i^* > G^2/9L $ or $\lambda_i^* > 1/9L$ for results to hold.
Another small thing: I understand the authors ignore very small terms like $\log \log T$ from the order notation. It might be good to put a note in the introduction about it while presenting the result. I understand that it is there in Section 3.1 -- it might be good to move it earlier.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and appreciation of our work! In the following, we answer your questions about feasible analytical parameters and the statements of the $\log \log T$ term.
---
**Q1.** In my opinion, the bottleneck of the proof is in showing the existence of an appropriate choice of $C_3$ and $C_4$ (page 17, line 594, 596). Can the authors comment if such a setting always exists? I would at least expect certain conditions like $\alpha_i^* > G^2 / 9L$ or $\lambda_i^* > 1/9L$ for results to hold.
**A1.** Thank you for the great question! We checked the proof carefully again and realized that the current choices of these parameters may not be favorable enough. To fix this, we provide another choice below.
For simplicity, we omit unimportant constants (we omit $C_0$ also since $C_0 \approx \log \log T$) and consider satisfying the following two conditions simultaneously: $\frac{1}{C_3} + \frac{1}{C_6} - \alpha_i^* \le 0$ and $\frac{1}{C_6} - 1 \le 0$. The new choice of parameters is $C_6 = \max\{1, \frac{2}{\alpha_i^*}\}$ and $C_3 = \frac{2}{\alpha_i^*}$ such that $\frac{1}{C_6} \le \frac{\alpha_i^*}{2}$, $\frac{1}{C_6} \le 1$, and $\frac{1}{C_3} = \frac{\alpha_i^*}{2}$. Note that these constants only exist in analysis and thus can be chosen arbitrarily. As a cost, we need to analyze the positive terms depending on $C_3$ and $C_6$, i.e., $O(C_3 + \ln C_6)$, as shown in the equation above Line 594. Fortunately, the positive terms are also acceptable since $C_3 = \frac{2}{\alpha_i^*} \approx \frac{1}{\alpha}$ and $\ln C_6 \approx \ln (1+\frac{1}{\alpha}) \le \frac{1}{\alpha}$. And $\frac{1}{\alpha}$ is a constant independent of time horizon $T$ and can be absorbed into the final bound of $\frac{d}{\alpha}\log V_T$. We will use this new choice of analytical parameters in the revised version. Thanks for highlighting this point for improvement.
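As a quick sanity check, the two conditions in A1 can be verified numerically across a range of curvature values. This is only a sketch of the analysis step, with the analytical constants $C_3$ and $C_6$ taken from the response (they exist only in the proof, not in the algorithm):

```python
# Verify that the choice C6 = max(1, 2/alpha), C3 = 2/alpha satisfies
#   1/C3 + 1/C6 - alpha <= 0   and   1/C6 - 1 <= 0
# for any positive curvature value alpha (alpha_i^* in the response).
for alpha in [0.01, 0.5, 1.0, 2.0, 5.0, 100.0]:
    C6 = max(1.0, 2.0 / alpha)
    C3 = 2.0 / alpha
    # 1/C3 = alpha/2 and 1/C6 = min(1, alpha/2), so the sum is at most alpha
    assert 1.0 / C3 + 1.0 / C6 - alpha <= 1e-12, alpha
    assert 1.0 / C6 - 1.0 <= 1e-12, alpha
```

The residual cost $O(C_3 + \ln C_6) \approx O(1/\alpha)$ is indeed independent of $T$, as the response argues.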
---
**Q2.** I understand the authors ignore very small terms like $\log \log T$ from the order notation. It might be good to put a note in the introduction about it while presenting the result. I understand that it is there in Section 3.1 -- it might be good to move it earlier.
**A2.** Thank you for the advice. We will add a statement about the $\log \log T$ term in the caption of Table 1 in the revised version. Thanks for highlighting this point for improvement.
---
We believe that our work offers valuable contributions to the community. We hope our response addresses your concerns and we are happy to provide further clarifications if needed during the following author-reviewer discussions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The new settings of parameters seem to work well. Could you comment on a couple more things:
1. The construction of the algorithm is dependent on knowing the time horizon $T$. Is it possible to extend this to scenarios when the time horizon is not known?
2. The extension to dynamic regret bound seems interesting to me. If I am correct this part does not utilize any curvature information, likely due to $P_T$ and $V_T$ being the dominant terms in the final expression. Is that correct? Also, in Theorem 5, is there an $\mathcal{O}(V_T)$ term missing from the final expression?
---
Reply to Comment 1.1.1:
Title: Response to the follow-up questions
Comment: Thanks for your feedback and the follow-up questions. Below, we answer your questions about the unknown time horizon and the dynamic regret bound, hoping to help you gain a better understanding of our work.
---
**Q3.** The construction of the algorithm is dependent on knowing the time horizon $T$. Is it possible to extend this to scenarios when the time horizon is not known?
**A3.** Thanks for this insightful question. As shown in Eq. (2.1) on Page 4, setting the number of base learners requires knowing $T$ in advance. To remove this dependence, a possible solution that we could imagine is the ***doubling trick***. However, this would possibly introduce an additional $\log T$ factor into the final regret bound, which would ruin the desired $\log V_T$ rate for exp-concave and strongly convex functions. Indeed, as explained in the response **A5** for Reviewer #y66n, making our algorithm anytime is a challenging open problem worth exploring in future work. This challenge is also noted in previous studies of universal online learning like Zhang et al. [2022].
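The doubling trick mentioned in A3 can be sketched generically. Here `run_with_horizon` is a hypothetical interface standing in for any fixed-horizon algorithm's regret guarantee, not the authors' method:

```python
import math

def doubling_trick(run_with_horizon, total_rounds):
    """Restart a fixed-horizon algorithm with guesses T = 1, 2, 4, ...
    run_with_horizon(T) is a hypothetical callback returning the regret
    bound the algorithm guarantees over T rounds; the total regret is
    at most the sum of the per-epoch bounds."""
    t, k, regret = 0, 0, 0.0
    while t < total_rounds:
        horizon = 2 ** k                      # current guess of the horizon
        regret += run_with_horizon(horizon)   # pay each epoch's bound
        t += min(horizon, total_rounds - t)
        k += 1
    return regret

# An O(sqrt(T)) base bound survives the trick with only a constant factor,
# while an O(log T) base bound picks up an extra factor from summing over
# the ~log T epochs -- the degradation the response points out.
sqrt_total = doubling_trick(lambda T: math.sqrt(T), 1000)
log_total = doubling_trick(lambda T: math.log2(T + 1), 1000)
```

This makes concrete why the trick is harmless for $\sqrt{V_T}$-type rates but ruins $\log V_T$-type rates.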
---
**Q4.** The extension to dynamic regret bound seems interesting to me. If I am correct, this part does not utilize any curvature information, likely due to $P_T$ and $V_T$ being the dominant terms in the final expression. Is that correct? Also, in Theorem 5, is there an $O(V_T)$ term missing from the final expression?
**A4.** Thank you for the great question. Below, we answer your two questions separately.
* **Curvature Information:** In this part, we aim to validate the effectiveness and generality of our technique in dynamic regret minimization for ***convex*** functions. It means that we do ***not*** consider curvature information such as exp-concavity or strong convexity in this part. We will make more clarifications about the problem setup of this part in the revised version.
* **Missing $O(V_T)$:** In Theorem 5, there is no missing $O(V_T)$ term, and our obtained $O(\sqrt{(1+V_T+P_T)(1+P_T)})$ dynamic regret has matched the state-of-the-art rate, see Theorem 4 in [1].
[1] Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization, JMLR 2024
---
We greatly appreciate your thorough review and helpful feedback. We will incorporate the above clarifications in the revised version. Please let us know if you have any more questions, and we are happy for further discussions. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Lips Are Lying: Spotting the Temporal Inconsistency between Audio and Visual in Lip-Syncing DeepFakes | Accept (poster) | Summary: This paper tackles the deepfake detection problem with audio-visual data, focusing on lip-syncing fakes, which are generally higher-quality fake data. To that end, the paper proposes a dataset and a method. The dataset (AVLips) is built from available datasets and three lip-syncing generation methods. The method (LipFD) extracts global and local features. For global features, a transformer is utilized to get context from all regions. For local features, the video is cropped into different face areas (face + background, face, and lips), and a feature is extracted from each area. Weighting is used to select which cropped areas are more important.
Strengths: 1. Contributions (dataset and method) in addressing lipsync based deepfake look sufficient.
2. Fine-grained features are considered.
3. Analysis in real scenarios is interesting.
Weaknesses: 1. An ablation removing one or two of the 3 branches for local feature extraction is missing. Figure 8 only shows the importance weights extracted from the overall framework; it does not show exactly how much the performance drops if the branches are not included.
2. Details are not clear (see Questions).
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. Line 267: are there statistics on how often latency below 100 ms happens?
2. Does this work utilize audio data or not? Figure 4 bottom-left and Line 83-84 indicate audio data is used but I can't find anything related to audio input in the equations.
3. Eq. (5): Where is j index in the equation? And what is none-cropped region?
4. Line 185: \textit{notice} from where?
5. Figure 8: why lip is not as important in real data? I assume synchronized lip is also sign of real.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 4
Limitations: I can't find the limitations section. According to the checklist, there should be a limitations section; however, the reference to that section is broken.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Ze8K, we are genuinely grateful for your valuable feedback. We sincerely hope our clarifications below can address your concerns.
---
**W1:** Ablation removing one or two out of the 3 branches for local feature extraction is missing.
**R1:** Thank you for your suggestion. Following your comment, we have conducted ablation experiments for the three branch structures.
|Removed|ACC|AP|FPR|FNR|
|-|-|-|-|-|
|None|95.27|93.08|0.04|0.04|
|Head|90.63|84.15|0.00|0.18|
|Face|86.32|84.12|0.23|0.04|
|Lip|89.22|87.61|0.19|0.03|
|Face + Lip|64.15|62.11|0.69|0.03|
|Head + Lip|82.97|82.39|0.33|0.01|
|Head + Face|74.05|65.78|0.03|0.47|
- **Head:** Contains features present in real videos. Retaining only this branch increases the FPR as the model classifies more samples as real. This region covers a larger area and contains more coarse-grained features, leading to lower model accuracy.
- **Face:** Contains features from both real and forged videos, with more real features. It provides rich facial (e.g., expressions) and lip movement information. The model's accuracy is lower when retaining only this branch compared to retaining two branches, showing the need for additional information from other regions.
- **Lip:** Contains features specific to forged videos. A near-zero FPR shows the model captures LipSync synthesis features well. However, lip features alone don't cover all lip-sync dynamics (e.g., the correlation between lip movements and head poses), and blurriness introduced by video cropping can also increase the FNR.
These three branches significantly impact the model's performance. Experiments show that retaining more branches increases accuracy. The Region Awareness module dynamically weights the three regions, adjusting the model's attention to effectively utilize features from real and fake samples and achieving a balanced performance.
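A hypothetical sketch of the three nested crops discussed above (head, face, and lip). The crop ratios here are illustrative assumptions, not the paper's exact crop settings:

```python
import numpy as np

def crop_regions(frame, face_box):
    """Cut three nested regions from a frame given a face bounding box
    (x, y, w, h). Ratios are illustrative, not LipFD's actual values."""
    H, W = frame.shape[:2]
    x, y, w, h = face_box
    # Head: the face box expanded by half its size to include background
    x0, y0 = max(0, x - w // 2), max(0, y - h // 2)
    x1, y1 = min(W, x + w + w // 2), min(H, y + h + h // 2)
    head = frame[y0:y1, x0:x1]
    # Face: the detected bounding box itself
    face = frame[y:y + h, x:x + w]
    # Lip: the lower-middle portion of the face box
    lip = frame[y + 2 * h // 3:y + h, x + w // 4:x + 3 * w // 4]
    return head, face, lip

frame = np.zeros((256, 256, 3), dtype=np.uint8)   # dummy 256x256 frame
head, face, lip = crop_regions(frame, (64, 64, 128, 128))
```

Each crop would then be encoded by its own branch, with the Region Awareness module weighting the three resulting feature series.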
---
**Q1:** Is there statistics how often the latency below 100ms happen?
**R2:**
Thanks for the insightful question. According to statistical data from Akamai and Ookla, average global internet latency for broadband connections often ranges from 30 ms to 60 ms [1]. Practical observations in video conferencing support this:
- Microsoft Teams: Reports network latency around 50 ms to 100 ms in optimal conditions. [2]
- Zoom: Reports network latency often around 100 ms, with a recommended latency of 150 ms or less to avoid noticeable delays between video and audio. [3]
[1] Ookla. "Speedtest Global Index – Internet Speed around the world – Speedtest Global Index." Speedtest. 2024.
[2] Microsoft. "Media Quality and Network Connectivity Performance in Microsoft Teams." Microsoft. 2021.
[3] Zoom Support. "Accessing meeting and phone statistics." 2024.
---
**Q2:** Does this work utilize audio data or not?
**R3:** Thank you for the insightful question! We followed Yang et al. [1] for data preprocessing to encode both image and audio spectrum. Here's an outline of our process:
- Convert audio to spectrum using librosa.
- Convert the power spectrum (amplitude squared) to dB units.
- Calculate the magnification ratio $r$ for spectral alignment based on the number of video frames $N$, pixel length per frame $W$, and spectrum length $L$. Formula: $r=\frac{N\cdot W}{L}$.
- Resize the spectrum and align 5 frames with the corresponding audio spectrum.
As a result, the input samples ($I$ in Eq. (1) and Image in Fig. 4 bottom-left) contain both video and audio information. We will modify Eq. (1) to make it clearer in our revision.
[1] W. Yang et al., "AVoiD-DF: Audio-Visual Joint Learning for Detecting Deepfake." TIFS. 2023
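The preprocessing outline above can be sketched with numpy standing in for librosa (the paper's pipeline uses librosa; the STFT parameters, frame width, and nearest-neighbour resize here are illustrative assumptions):

```python
import numpy as np

def power_to_db(S, eps=1e-10):
    # dB conversion of a power spectrum (analogue of librosa.power_to_db)
    return 10.0 * np.log10(np.maximum(S, eps))

def align_spectrum(audio, n_frames, frame_width, n_fft=512, hop=128):
    """Sketch of the outlined steps: power spectrum -> dB -> resize by
    the magnification ratio r = N*W/L so the spectrum length matches
    n_frames video frames of frame_width pixels each."""
    # Magnitude STFT via sliding windows, then power spectrum
    windows = np.lib.stride_tricks.sliding_window_view(audio, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(windows * np.hanning(n_fft), axis=1)) ** 2
    spec_db = power_to_db(spec.T)                 # shape (freq_bins, L)
    L = spec_db.shape[1]
    r = n_frames * frame_width / L                # magnification ratio
    # Nearest-neighbour resize along time to length N*W
    idx = np.minimum((np.arange(n_frames * frame_width) / r).astype(int), L - 1)
    return spec_db[:, idx]

audio = np.random.default_rng(0).normal(size=16000)   # 1 s dummy clip
spec = align_spectrum(audio, n_frames=5, frame_width=224)
```

After this step, each of the 5 frames lines up with a 224-pixel-wide slice of the spectrum, so a single input sample carries both modalities.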
---
**Q3:** Where is j index in Eq. (5)? What is none-cropped region?
**R4:** Thanks for pointing it out. Here is the revised formula:
\begin{equation}L_{RA}(\theta_{GR},\theta_{RA})=\sum_{j=1}^N\sum_{i=1}^{T}\frac{k}{\exp([\omega_j^i]_{max}-[\omega_j^i]_h)}\end{equation}
Index $i$ denotes the frame number within one sample, and index $j$ indicates the sample currently being processed in the batch. We accumulate the losses of the $N$ samples in a batch from $1$ to $N$.
The non-cropped region is the "Head" region (i.e., the head series of images in Fig. 4).
---
**Q4:** Line 185: notice from where?
**R5:** Our model is primarily designed for detecting LipSync forgery. Since LipSync precisely synthesizes the shape and movement of a person's lips based on a given audio, the alterations are primarily focused on the lower part of the face [1, 2]. Therefore, we artificially constrain the model's focus, directing it to concentrate more on the regions where LipSync makes high-frequency modifications, namely the face and lip regions.
[1] Guan et al. "Stylesync: High-fidelity generalized and personalized lip sync in style-based generator." CVPR. 2023.
[2] Ki et al. "StyleLipSync: Style-based personalized lip-sync video generation." CVPR. 2023.
---
**Q5:** Figure 8: why lip is not as important in real data?
**R6:**
As mentioned in response to your W1, both the Head and Face regions contain features necessary for determining real videos. For real samples, the lip region remains important, but its weight decreases due to the increased importance of the facial and head regions. The Region Awareness module assigns higher weights to the Head and Face regions to preserve features specific to real videos. Lip region features are maintained at a similar weight to the Face region to retain traces of forgery.
---
**L1:** I can't find the limitations section.
**R1:** Thank you for pointing this out! We apologize for our formatting oversight. We have discussed our limitations in Appendix E. Here are more details:
- LipFD performs slightly worse on Chinese samples compared to English, likely due to pronunciation differences and training on an English dataset. We plan to include more Chinese data in future work.
- The model underperformed on FF++ and DFDC due to being trained solely on the LipSync dataset. We plan to add DeepFake data to improve general detection capabilities.
---
Rebuttal 2:
Comment: Thank you authors for the clear rebuttal. I am increasing my rating to weak accept and I hope the paper is modified based on the rebuttal for more clarity to the readers.
---
Rebuttal 3:
Title: Thank You for Your Positive Feedback!
Comment: Thank you so much for your positive feedback! We will ensure all your suggested modifications and additional experiments are properly included in our revision. Thank you again for the precious time and positive feedback. It encourages us a lot! | Summary: This paper focuses on a new setting in Deepfake detection called lip-syncing fraud, which only contains fewer minor cues on the leap region. To tackle this issue, the authors provide a novel method called LipFD to obtain the features from both a global view and a regional view. Also, with the new AVLips dataset, this method shows a SOTA result compared to the recent methods.
Strengths: 1. This work provides a new Deepfake setting with the AVLips dataset, which contains a large number of high-quality samples.
2. The method mainly focuses on extracting features from the lips region, which is novel.
Weaknesses: Although the proposed method shows good results, there are some confusing expressions that may make the paper hard for readers to follow:
1. For equation 3, what does $RA(\cdot)$ mean? What does $[F_G|\{F_R\}^i_j]$ mean? There is no explanation of these operations.
2. It will be better to have an ablation study on the selection of a vision transformer. Including the pretrain, the structure, etc.
3. It could be better to have more details about the dataset, including the number of samples, the visualization of samples with different methods, etc.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: No, the authors did not include a limitations section. The authors should address limitations.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 9Sa1, we sincerely thank you for your valuable time and feedback. We are encouraged by your positive comments on our novel explorations, insightful investigations, extensive experiments, and good motivation. We sincerely hope our following clarifications and new experiments can address your concerns.
---
**W1:** For equation 3, what is $RA(\cdot)$ mean? What is $[F_G| \\{ F_R \\} {_j^i}]$ means? There lack an explanation of these operations.
**R1:** Thank you for your valuable comments. $RA(\cdot)$ is our Region Awareness Module (line 183) that takes $[F_G| \\{ F_R \\} {_j^i}]$ as input and computes weights based on their content (lines 178-183). $[F_G| \\{ F_R \\} {_j^i}]$ denotes the global feature ($F_G$, line 169) concatenated with three series of region features ($F_R$, line 175) to represent the relationship between temporal features and region visual features (Fig. 4b). We will add more details in our revision.
---
**W2:** It will be better to have an ablation study on the selection of a vision transformer. Including the pretrain, the structure, etc.
**R2:** Thank you for your valuable comments. Following your helpful suggestions, we conducted an experiment regarding different pretraining datasets and structures.
- With the same architecture, the parameter count has a relatively small impact on final performance. Larger pretraining datasets and more challenging pretraining tasks lead to superior model performance and more balanced recognition capabilities (reflected in small differences between the False Positive Rate and False Negative Rate). **This aligns with our statement in the paper: "To effectively carry out its task (capture temporal features), the encoder necessitates extraordinary representational capacity, which can be attained through exposure to a vast number of images" (lines 158-159).**
- Under the same pretraining dataset, more advanced model architectures typically lead to better final performance. For example, Swin Transformer achieves better results than vanilla ViTs. This is possibly because the window-based approach employed by Swin Transformer is more suitable for capturing long-term dependencies in video data, assisting in better identification of temporal features. We will further explore more effective model architectures in our future work.
| ViTs | ACC | AP | FPR | FNR |
| ---------------- | ----- | ----- | ---- | ---- |
| CLIP:ViT/L14 | 95.27 | 93.08 | 0.04 | 0.04 |
| CLIP:ViT/B16 | 95.00 | 92.05 | 0.03 | 0.07 |
| ImageNet:ViT/L16 | 93.28 | 91.13 | 0.09 | 0.04 |
| ImageNet:ViT/B16 | 93.27 | 91.13 | 0.09 | 0.04 |
| ImageNet:Swin-B | 94.66 | 92.53 | 0.06 | 0.04 |
| ImageNet:Swin-S | 94.59 | 90.71 | 0.02 | 0.09 |
---
**W3:** It could be better to have more details about the dataset, including the number of samples, the visualization of samples with different methods, etc.
**R3:** Thank you for your valuable feedback. We have provided a detailed description and visualization of the dataset in Appendix A. Here, we continue to supplement with additional information:
- Data volume:
- Videos: 12,000
- Audios: 12,000
- Number of samples: 600,000 (up to a fifty-fold expansion ratio)
- Forgery methods:
- Dynamic: Wav2Lip, TalkLip
- Static: MakeItTalk
- Perturbation methods: blockwise, contrast adjustment, saturation adjustment, Gaussian blur, pixelation, compression
- Covered scenes:
- Common datasets: LRS2, FF++, DFDC
- Real-world scenarios: video calls, streaming media
Further data visualization and statistical analyses will be included in the revision.
---
**L1:** The authors should address limitations.
**R4:** Thank you for pointing this out! We are deeply sorry for our formatting oversight. We have discussed our limitations in Appendix E. Here are some more detailed explanations:
- During real-world evaluation of WeChat video calls, we found that LipFD performs slightly worse on Chinese samples compared to English. We believe this is due to pronunciation differences between Chinese and English, as well as our model being trained on a purely English dataset. In future work, we plan to incorporate more Chinese corpus for training to build a general LipSync detector.
- Our model did not achieve optimal performance on FF++ and DFDC primarily due to being trained solely on the LipSync dataset, which exhibits a significant domain gap with DeepFake. To enhance our performance, we plan to incorporate DeepFake data into our training set to develop a more comprehensive detector.
---
Rebuttal Comment 1.1:
Title: Thanks to Reviewer 9Sa1
Comment: Dear Reviewer 9Sa1:
Please allow us to sincerely thank you again for reviewing our paper and the valuable feedback, and in particular for recognizing the strengths of our paper in terms of new setting, novel method, new dataset, and high effectiveness.
Please kindly let us know if our response and the new experiments have properly addressed your concerns. We are more than happy to answer any additional questions during the discussion period. Your feedback will be greatly appreciated!
Best,
Paper6542 Authors
---
Rebuttal 2:
Title: Review after Rebuttal
Comment: Thanks for your nice explanation of my concerns. I think this is good work and I will keep my rate as weakly accept.
---
Rebuttal Comment 2.1:
Title: Thank You for Your Positive Feedback!
Comment: Thank you so much for your positive feedback! It encourages us a lot! | Summary: The proposed work introduces a pioneering method for detecting lip-syncing forgery, an often overlooked threat in current research. By leveraging discrepancies between lip movements and audio signals, a dual-headed detection architecture significantly enhances detection accuracy. This work also contributes to the first large-scale audio-visual LipSync dataset, comprising nearly one hundred thousand samples, and conducts extensive experiments that demonstrate our method's efficacy. Results show up to 94% average accuracy in LipSync detection, with robust performance in real-world scenarios.
Strengths: 1. this work proposes a new research problem -- lip forgery detection, which is meaningful and useful. A dataset for this research problem is also proposed.
2. The anonymous github makes this work very convincing.
3. The real-life applications shown in Fig. 6 is very impressive.
Weaknesses: The proposed algorithm, LipFD, does not have strong technical novelty in learning region and global features from the multi-modal input.
Technical Quality: 4
Clarity: 4
Questions for Authors: N/A
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: This method might be an under-optimized solution for facial forgery detection.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Qots, we sincerely appreciate your precious time and valuable comments. Your positive comments of our interesting and relevant topic, clear and simple presentation, novel ideas, convincing experimental evaluation, and impressive application are very encouraging to us. We sincerely hope our following clarifications can address your concerns.
---
**W1:** The proposed algorithm, LipFD, does not have strong technical novelty in learning region and global features from the multi-modal input.
**R1:** Thanks for your valuable comments. We recognize that our method builds on widely used global features and multi-modal inputs. However, it differs in the following aspects:
- First, we focus on the LipSync forgery detection problem, which is different from what previous literature has explored. While we fully agree that learning region and global features from multi-modal input is a widely used strategy for different tasks [3, 4, 5, 6], we arrived at this method in a motivated, principled way, driven by our novel observations on both global temporal inconsistencies and forgery traces in local regions of LipSync forgeries. These fundamental differences set our method apart from existing techniques, even though we ultimately converged to similar feature extraction strategies.
- Regarding local features, previous methods for identifying deepfakes mainly rely on single-frame images, using a single encoder to extract detailed features [1, 2]. We innovatively introduce a novel Region Awareness module to dynamically weight different region features, allowing the model to automatically focus on the more informative parts. Experimental results demonstrate that our strategy achieves excellent results.
- Regarding global features, previous methods mostly utilize a dual-headed architecture, using separate video and audio encoders to extract features from both modalities. However, this separation of audio and video inherently introduces alignment issues between audio and video features, making it challenging for encoders to accurately align specific video segments with audio segments. We align audio and images at the fine-grained level during the preprocessing stage for the first time and use ViT's patching technique to capture the inconsistencies between the temporal lip movements and audio spectra, achieving good results.
- We also want to respectfully mention that, beyond LipFD, we also introduced the first large-scale audio-visual LipSync dataset into the community, and offered an in-depth investigation of the unique properties of LipSync forgeries. We provide these findings for the first time, and we hope our datasets and analysis can help the community better understand the unique properties and challenges associated with LipSync forgeries. We will add more discussions on this aspect in our revision.
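As a generic, illustrative sketch of the dynamic region weighting mentioned under R1 (this is not the paper's actual Region Awareness module; the shapes and the scoring vector below are hypothetical stand-ins for learned parameters), region features can be pooled through softmax attention so that higher-scoring regions dominate:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
regions = rng.normal(size=(4, 8))   # 4 facial-region features, 8-dim each
w = rng.normal(size=8)              # stand-in for a learned scoring vector

scores = regions @ w                # one relevance score per region
weights = softmax(scores)           # attention weights, summing to 1
pooled = weights @ regions          # informative regions dominate the pool
```

Because the weights are input-dependent, the pooled feature automatically emphasizes whichever regions score highest for a given face.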
Thank you again for the helpful comments.
[1] Haliassos, Alexandros, et al. "Lips don't lie: A generalisable and robust approach to face forgery detection." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.
[2] Haliassos, Alexandros, et al. "Leveraging real talking faces via self-supervision for robust forgery detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[3] Oorloff, Trevine, et al. "AVFF: Audio-Visual Feature Fusion for Video Deepfake Detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[4] Wang, Rui, et al. "AVT2-DWF: Improving Deepfake Detection with Audio-Visual Fusion and Dynamic Weighting Strategies." arXiv preprint arXiv:2403.14974 (2024).
[5] Wang, Kai, et al. "Region attention networks for pose and occlusion robust facial expression recognition." IEEE Transactions on Image Processing 29 (2020): 4057-4069.
[6] Peng, Ziqiao, et al. "Synctalk: The devil is in the synchronization for talking head synthesis." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
---
Rebuttal Comment 1.1:
Title: Thanks to Reviewer Qots
Comment: Dear Reviewer Qots:
Please allow us to sincerely thank you again for reviewing our paper and the valuable feedback, and in particular for recognizing the strengths of our paper in terms of pioneering method, novel, meaningful and useful research problem, convincing results, and impressive application scenarios.
Please kindly let us know if our response and the clarifications have properly addressed your concerns. We are more than happy to answer any additional questions during the discussion period. Your feedback will be greatly appreciated!
Best,
Paper6542 Authors | Summary: The paper introduces a novel method, LipFD, dedicated to detecting lip-syncing forgeries by exploiting temporal inconsistencies between lip movements and audio signals. This unique approach addresses a significant gap in existing DeepFake detection methods. Experimental results demonstrate that LipFD achieves high accuracy across multiple datasets, showcasing its effectiveness and robustness.
Strengths: - This paper addresses a novel problem by focusing on specific DeepFake types that are challenging to detect with current DeepFake detection algorithms, yet are produced convincingly by state-of-the-art generation models.
- The paper is well-written and easy to follow. Experimental results indicate the effectiveness of the proposed method.
- The proposed dataset provides a solid foundation for further research in this field.
Weaknesses: - The diversity of fake videos in the training set is limited, as it only includes three methods: MakeitTalk, Wav2Lip, and TalkLip. This limitation can lead to overfitting, as the classifier may easily learn the distinct patterns of these methods. For example, Wav2Lip produces blurry lip images and shows obvious artifacts when fusing lip and facial images. To demonstrate generalizability, testing on additional state-of-the-art generation methods is encouraged.
- While the method performs well on the proposed LipSync dataset, there is some variability in performance across different datasets like FF++ and DFDC. This indicates potential limitations in generalizability across diverse datasets, possibly due to the limited variety of fake videos in the training set. A robust model should be capable of detecting both LipSync and general DeepFake videos effectively.
Technical Quality: 3
Clarity: 3
Questions for Authors: - There is a question about the spectrogram in Figure 2. How is the spectrogram obtained? From Figure 4, it seems to be the audio spectrogram. However, the audio of the fake video is real, so why are there unexpected characters like "the darkest part of the spectrum"?
- What is the meaning of "static" and "dynamic" in Line 122?
- There is a typo: LRS2 in Table 1 should be AVLips.
- Why use MakeitTalk? MakeitTalk generates the whole face instead of only the lip region, which does not align with the definition of LipSync as outlined in this paper.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer K6BS, we sincerely thank you for your valuable time and comments. We are encouraged by your positive comments on our novel task, interesting idea, good writing and high effectiveness. We sincerely hope our clarifications and new experiments can address your concerns. We are happy to answer more questions and conduct more experiments if needed.
---
**W1:** The diversity of fake videos in the training set is limited which can lead to overfitting. Testing on additional SOTA methods is encouraged.
**R1:** Thank you for the constructive comment! We totally understand your concerns about the diversity of fake videos in our datasets. We hope the following explanations can address your concerns:
- We chose MakeitTalk, Wav2Lip, and TalkLip for evaluation because they are the most representative and influential LipSync methods, covering audio-driven single image talking face generation [1, 2], contrastive learning-based approaches [3, 4], and audio-driven video lip-syncing [5, 6]. Our method, **trained only on Wav2Lip forged data**, performed well on these methods (Table 2).
- Regarding generalizability, we **retrained LipFD on MakeitTalk, Wav2Lip, and TalkLip**, then evaluated it on the recent SOTA LipSync method, SadTalker. The results suggest our model maintains high transferability to unseen forgery methods. We will evaluate LipFD on more SOTA methods in our revision.
|AUC|AP|FPR|FNR|
|-|-|-|-|
|94.53|99.00|0.09|0.01|
[1] Zhang, Wenxuan, et al. "Sadtalker: Learning realistic 3d motion coefficients for stylized audio-driven single image talking face animation." CVPR. 2023.
[2] Wang et al. Audio2head: Audio-driven one-shot talking-head generation with natural head motion. IJCAI. 2021.
[3] Zhou et al. Pose-controllable talking face generation by implicitly modularized audio-visual representation. CVPR. 2021.
[4] Wang et al. "Seeing what you said: Talking face generation guided by a lip reading expert." CVPR. 2023.
[5] Guan et al. "Stylesync: High-fidelity generalized and personalized lip sync in style-based generator." CVPR. 2023.
[6] Peng et al. "Synctalk: The devil is in the synchronization for talking head synthesis." CVPR. 2024.
---
**W2:** LipFD has some variability in performance across FF++ and DFDC.
**R2:** Thanks for your insightful comment! We hope our following explanations can address your concerns:
- We admit that our model does not achieve the best performance on FF++ and DFDC. This is mainly because: (1) our method is primarily targeted towards LipSync detection, so we only include videos that involve LipSync in our AVLips dataset; (2) our dataset was constructed based on LRS2, which only includes head data, while the fake videos in FF++ and DFDC typically involve the whole body as well as backgrounds. Due to these domain gaps, training solely on our AVLips may lead to suboptimal performance on general DeepFakes.
- The response to reviewer Ze8K W1 shows that features contained in face and lip regions are both utilized by LipFD, so we believe our model can detect general DeepFakes by simply enriching the training set. To verify this, we fine-tuned LipFD using 400 DFDC samples and tested it on unseen DFDC data. The result shows that LipFD achieves better performance with only a small amount of additional DeepFake data.
|AUC|AP|FPR|FNR|
|-|-|-|-|
|92.50|89.31|0.07|0.07|
---
**Q1:** Fig. 2: how is the spectrum obtained? Why are there unexpected characters like "the darkest part of the spectrum"?
**R3:** Thank you for the insightful question. We are sorry that our submission may have led to some misunderstanding. We hope the following clarifications can address your question:
- The spectrogram is extracted using the librosa library. It is a standard library for audio processing that is widely utilized by previous works.
- Please kindly note that we do not mean that the audio of the synced video is fake. Typically, if the spectrum is dark at some moment, there is silence or very low volume at that moment [1] (i.e., the person is not talking). However, the LipSync forgery may misallocate lip-opening frames (Figure 2b, frame 3) that contradict the spectrum information, leading to temporal inconsistency between audio and video. Our LipFD leverages this inconsistency as a feature to detect LipSync forgeries.
[1] Ilyas, Hafsa, Ali Javed, and Khalid Mahmood Malik. "AVFakeNet: A unified end-to-end Dense Swin Transformer deep learning model for audio-visual deepfakes detection." Applied Soft Computing 136 (2023): 110124.
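As a self-contained illustration of this point (using plain NumPy rather than librosa, purely to keep the sketch dependency-free; the signal is synthetic), a log-magnitude spectrogram makes silent intervals show up as low-energy, i.e. dark, regions:

```python
import numpy as np

def stft_magnitude_db(signal, n_fft=512, hop=256):
    """Hann-windowed framing + FFT: log-magnitude spectrogram
    (rows = time frames, columns = frequency bins)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return 20 * np.log10(mag + 1e-10)   # floor avoids log(0)

sr = 16000
t = np.arange(sr) / sr
voiced = np.sin(2 * np.pi * 440 * t)    # 1 s of a voiced-like tone
silence = np.zeros(sr)                  # 1 s of silence
spec = stft_magnitude_db(np.concatenate([voiced, silence]))

# Frames over the silent second carry far less energy, i.e. look darker:
voiced_energy = spec[: spec.shape[0] // 2].mean()
silent_energy = spec[spec.shape[0] // 2 + 2 :].mean()
```

Comparing these dark (silent) frames against frames where the mouth is open is exactly the kind of audio-video inconsistency cue described above.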
---
**Q2:** What is the meaning of "static" and "dynamic"?
**R4:** Thank you for the insightful question. Dynamic methods refer to those that take a video as input and generate a forged video (e.g., Wav2Lip, TalkLip). Static methods, on the other hand, use a single image as input to generate a video through their models (e.g., MakeItTalk, Wav2Lip with the `--static` flag enabled).
---
**Q3:** There is a typo: LRS2 in Table 1 should be AVLips.
**R5:** Thanks for the helpful comments. We will modify them accordingly and conduct a careful proofreading to avoid other typos in our revision.
---
**Q4:** Why use MakeitTalk?
**R6:** Thanks for the insightful question! We categorize LipSync methods into static (generate from a single image) and dynamic (generate from a video). MakeitTalk, a leading static method, generates realistic lip movements, expressions, and head movements. It is defined as a talking face generation method that includes LipSync [1] and is considered a LipSync method in follow-up papers [2, 3]. Therefore, we include MakeitTalk as a LipSync method in our work.
[1] Zhou et al. "Makelttalk: speaker-aware talking-head animation." TOG. 2020.
[2] Zhang et al. "Sadtalker: Learning realistic 3d motion coefficients for stylized audio-driven single image talking face animation." CVPR. 2023.
[3] Guan et al. "Stylesync: High-fidelity generalized and personalized lip sync in style-based generator." CVPR. 2023.
---
Rebuttal Comment 1.1:
Title: Thanks to Reviewer K6BS
Comment: Dear Reviewer K6BS:
Please allow us to sincerely thank you again for reviewing our paper and the valuable feedback, and in particular for recognizing the strengths of our paper in terms of novel problem, novel and unique approach, great significance on filling the gap, good writing, and high effectiveness and robustness.
Please kindly let us know if our response and the new experiments have properly addressed your concerns. We are more than happy to answer any additional questions during the discussion period. Your feedback will be greatly appreciated!
Best,
Paper6542 Authors
---
Rebuttal Comment 1.2:
Title: Review after rebuttal
Comment: Thanks for the authors' detailed feedback. After reviewing the rebuttal, I find that the authors have addressed most of my concerns. Consequently, I have decided to maintain my initial rating.
---
Reply to Comment 1.2.1:
Title: Thank You for Your Positive Feedback!
Comment: Thank you so much for your positive feedback! It encourages us a lot! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Safe Time-Varying Optimization based on Gaussian Processes with Spatio-Temporal Kernel | Accept (poster) | Summary: The authors propose a time-varying extension of SAFEOPT to overcome the problems of time-varying rewards under time-varying safety constraints.
Under stationarity conditions, optimality guarantees are provided, and the numerical simulation shows a (favorable) comparison to SAFEOPT.
Strengths: 1. The paper is very well written and easy to follow.
2. Based on related work, the problems of time-varying rewards under time-varying safety constraints are an open problem in the literature, and this paper addresses that.
3. The paper provides formal safety guarantees for their TVSAFEOPT algorithm.
Weaknesses: 1. *Some delineation to related work seems rather vague and requires stronger justification.* An example for TVSBO: the time-variable and temporal aspect of the kernel can just as well be interpreted as context using existing results. Perhaps a table would help here to highlight key aspects.
2. *Lack of real-world data experiments and comparison to related work.* To support the downsides of existing approaches, an empirical comparison to existing TVSBO approaches mentioned in the related work section would be needed.
3. The *empirical results could be more convincing* by adding a variety of initial safe sets and revised plots. The current plots/results are hard to parse.
4. It would be beneficial if the *theoretical/technical challenge of extending safety to the time-varying case were more detailed*. This would streamline the presentation and help in assessing the impact of the contribution.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. *In the Appendix, a spatio-temporal SE kernel is introduced. How is this construction different from using an SE-ARD kernel with a composite variable $z = [x^T,t]^T$?* If I am not mistaken, for the SE-ARD kernel there would be no different than defining a single kernel with $z$.
2. It is mentioned that the Lipschitz constants are to be known beforehand. However, while commonly assumed, *how do you get a hold of an RKHS norm bound $B$ (related to Assumption 2.1) to compute the UCB?*
3. *Could you provide Figure 1 sooner in the manuscript?* It would be super helpful to see this central illustration already on page 2.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. Practicality of the safety guarantee: requiring many Lipschitz constants for both for space and time while also requiring an RKHS norm bound.
2. Theoretical and empirical impact: Lack of comparisons to TVSBO approaches makes the implications of the contribution unclear both theoretically and empirically.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**
We thank the reviewer for the insight. We refer the reviewer to Table 1 in the PDF. The main difference between our method and context-based methods lies in how time is handled in the safe sets. Context-based methods require a safe seed to be available for every context or at every iteration, which may be impractical, as time is not a decision variable. Conversely, our TVSafeOPT ensures safety by tightening the definition of the safe set, and thus even just an initial safe seed is sufficient for the algorithm to find a safe optimum.
**W2**
We provide a comparison of the performance of our proposed algorithm, TVSafeOPT, with the solution of an approximate optimization problem used in practice (baseline) and SafeOPT, in the PDF. Contextual approaches with time as a context would require a safe seed for every context, which is unavailable for the unknown changes in a compressor. Similarly, detecting changes due to degradation in turbomachinery is notoriously difficult [10]. The performance of event-triggered SafeOPT relies on the chosen event detection method, thus making the comparison of performance in the compressor case study inaccurate.
The results of the comparison are shown in Fig. 3 in the attached PDF.
**W3**
We added a comparison of safe sets computed by TVSafeOPT (top row in Fig. 1) and SafeOPT (bottom row in Fig. 1). Because TVSafeOPT takes the possible changes in time into consideration, the safe sets computed by TVSafeOPT are contained in the ground-truth safe regions, while those computed by SafeOPT have multiple violations.
**W4**
Technically, the challenge in the non-stationary case arises from the fact that a safe/unsafe action at one moment might become unsafe/safe in the future. This means that a TVSBO algorithm should be aware of such information in order to find a sub-optimal solution while guaranteeing safety. ETSafeOPT [13] aims to detect the change with an event trigger. However, in general, when the reward or safety functions change continuously or frequently, ETSafeOPT suffers from poor performance and even loses its empirical safety guarantee. In contrast, context-based methods [1,3] handle this issue well, with prior knowledge of the time change infused through the temporal kernel. However, existing context-based methods usually require an initial guess of the safe set for each context value, which is often unrealistic. Our algorithm does not rely on change detection and overcomes the challenge related to safe set initialization at every iteration. This is done by propagating the initial safe set to the future using the temporal kernel.
Theoretically, proving the safety and near-optimality guarantees for TVSBO algorithms is challenging even in the stationary case, because the common proof techniques rely on the monotonicity of the lower bounds, upper bounds, and thus the safe sets, maintained by the algorithm. We lose such monotonicity by design of the algorithm. Instead, we circumvent this issue by proposing a lower bound of the safe set which is non-shrinking and converging. Furthermore, the near-optimality guarantee in the non-stationary case is a very promising yet challenging problem. The first open-ended question to answer is what kind of convergence property, and under what conditions, we can prove for the non-stationary case.
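The tightening mechanism can be illustrated with a toy 1-D sketch (all quantities below are hypothetical, not the paper's actual bounds): subtracting a worst-case temporal drift $L_t \Delta t$ from the lower confidence bound of the safety function can only shrink the certified safe set, which is what preserves safety without re-initialization:

```python
import numpy as np

xs = np.linspace(0, 1, 101)          # 1-D decision space
lcb = 1.0 - 4 * (xs - 0.5) ** 2      # stand-in LCB of the safety function at time t
h = 0.2                              # safety threshold: x is safe iff g(x, t) >= h
L_t, dt = 0.5, 0.3                   # time Lipschitz constant and elapsed time

safe_now = lcb >= h                  # safe set certified at time t
safe_next = lcb - L_t * dt >= h      # tightened set propagated to time t + dt
# safe_next is a subset of safe_now: safety holds under worst-case drift.
```

The propagated set is conservative by construction, which matches the trade-off discussed above: safety is preserved at the possible expense of optimality.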
**Q1**
Indeed, there is no technical difference between the two notations; we kept $x$ and $t$ separate to emphasise that $t$ indicates time and, as such, is not a decision variable.
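This equivalence is easy to check numerically: with per-dimension lengthscales, the product of a spatial SE kernel and a temporal SE kernel coincides with a single SE-ARD kernel evaluated on the composite input $z = [x^T, t]^T$ (the inputs and lengthscales below are arbitrary illustrative values):

```python
import numpy as np

def se_ard(a, b, lengthscales):
    """Squared-exponential kernel with per-dimension (ARD) lengthscales."""
    d = (np.asarray(a) - np.asarray(b)) / np.asarray(lengthscales)
    return float(np.exp(-0.5 * np.dot(d, d)))

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=3), rng.normal(size=3)   # two spatial inputs
t1, t2 = 0.5, 2.0                                 # two time stamps
ls_x, ls_t = [1.0, 0.7, 1.3], 0.4                 # illustrative lengthscales

# Product of a spatial SE kernel and a temporal SE kernel ...
k_product = se_ard(x1, x2, ls_x) * se_ard([t1], [t2], [ls_t])

# ... equals one SE-ARD kernel on the composite input z = [x, t]:
k_composite = se_ard(np.append(x1, t1), np.append(x2, t2), ls_x + [ls_t])
```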
**Q2**
Indeed, the Lipschitz constant is usually unknown in realistic optimization scenarios. Its value can be obtained based on information about the capacity of the chosen kernel [7]. Such information, however, may be unavailable, making the choice a challenge. Oftentimes, however, an educated guess allows overcoming the aforementioned challenge. The authors in [12] have shown that in practice, it is often sufficient to choose an upper bound on $B$ [7], $\beta\geq 2$. This is needed to approximate the Lipschitz constant by an optimistic upper bound. Given that our method relies on Gaussian processes, we made a localized approximation of what has been proposed in [8].
**Q3**
Thanks. We will apply this change.
**L1**
We refer the reviewer to Q2.
**L2**
To make our contribution clear, we have now added an overview of the key aspects of existing methods for time-varying safe BO in Table 1 in the PDF. See also Figs. 2 and 3.
[1] F. Berkenkamp, A. Krause, A. P. Schoellig, "Bayesian optimization with safety constraints: safe and automatic parameter tuning in robotics," Machine Learning, 2021;
[2] A. Holzapfel, P. Brunzema, S. Trimpe, "Event-triggered safe Bayesian optimization on quadcopters," L4DC, 2024;
[3] C. König, M. Turchetta, J. Lygeros, A. Rupenyan, A. Krause, "Safe and efficient model-free adaptive control via Bayesian optimization," ICRA, 2021;
[4] C. König, M. Ozols, A. Makarova, E. C. Balta, A. Krause, A. Rupenyan, "Safe risk-averse Bayesian optimization for controller tuning," IEEE Robotics and Automation Letters, 2023;
[5] D. Widmer, D. Kang, B. Sukhija, J. Hübotter, A. Krause, S. Coros, "Tuning legged locomotion controllers via safe Bayesian optimization," CoRL, 2023.
[7] J. Bergstra, R. Bardenet, Y. Bengio, B. Kégl, "Algorithms for hyper-parameter optimization," NIPS, 2011.
[8] F. Berkenkamp, A. P. Schoellig, A. Krause, "Safe controller optimization for quadrotors with Gaussian processes," ICRA, 2016.
[10] Y. Li, P. Nikitsaranont, "Gas turbine performance prognostic for condition-based maintenance," 2009.
[12] C. König, M. Turchetta, J. Lygeros, A. Rupenyan, A. Krause, "Safe and efficient model-free adaptive control via Bayesian optimization," ICRA, 2021.
[13] A. Holzapfel, P. Brunzema, S. Trimpe, "Event-triggered safe Bayesian optimization on quadcopters," L4DC, 2024.
[14] M. Gupta, R. Wadhvani, A. Rasool, "Comprehensive analysis of change-point dynamics detection in time series data: A review," 2024.
---
Rebuttal Comment 1.1:
Title: Reviewer Reply for Submission20789 by Reviewer yo4J
Comment: I thank the authors for the rebuttal and the answers to my questions and concerns, and also appreciate them including a table for related work.
Since I was already on the positive side in my initial review, so I prefer to keep my overall score with an increases in parts of evaluation. | Summary: This paper presents a safe Bayesian optimization algorithm TVSAFEOPT with a spatial-temporal kernel and time Lipschitz constants, which improves on SAFEOPT with time-varying reward and safety constraints. The optimality guarantee is proved for the stationary case and the safety guarantee for more general settings. The method is tested on a synthetic problem and gas compressors.
Strengths: 1. The use of a spatio-temporal kernel in Bayesian optimization for time-varying safety constraints is novel.
2. A formal proof of safety and optimality guarantee under certain assumptions.
Weaknesses: 1. More discussion on how to make a tradeoff between optimality and safety is encouraged.
2. Will this conservatism in safety become too large in high-dimensional problems?
3. The method to choose the proper initial safe set and kernel parameters is unclear.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How to find an initial safe set for complex problems?
2. How to find the kernel parameter for each task?
3. What is the computational complexity compared to other BO baselines?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The societal impact is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful feedback, and the positive assessment of the paper. We provide a point-by-point answers to all raised suggestions, comments, and questions.
**W1**
We thank the reviewer for this suggestion. In the paper we focus on safety-critical systems, where satisfying the safety constraints takes the highest priority over finding the optimum. The focus on safety is illustrated in Fig. 1 in the paper, where the proposed algorithm follows the safe set, even though the safe set changes over time. In particular, at time $t=170$, the algorithm takes decisions within a safe set that is smaller than the initial safe set, further emphasizing safety at the expense of optimality. We will discuss these aspects in Appendix C of the revised paper.
**W2**
We thank the reviewer for raising this question. As our approach is suitable for safety-critical conditions, the focus is put on maintaining safety under change; therefore, safety considerations "dictate" the optima. If the dimensionality of the problem increases, safety constraints might arise across multiple dimensions, from multiple directions, at the price of optimality. This clarification will be included in Appendix C.
**W3**
Indeed, the choice of the safe set and the kernel is part of ongoing research [6] and typically requires some extended knowledge about the underlying functions [7,8]. A common choice for the kernel is a squared exponential kernel, which intuitively puts emphasis on points that are close to each other. The meaning of "close" is further defined by hyperparameters, which can be obtained, for instance, through maximum likelihood estimation [9].
Regarding the choice of the safe set, a single safe point is often already sufficient [Ch. 16, 10]. One of our assumptions is that at least one safe point is available in the initial safe set, and this is most often the case in practice. In most applications it is reasonable to assume some prior knowledge is available, which in turn can be exploited in the initialization phase. This assumption is a direct consequence of the underlying principle that both the objective and the constraint functions can be measured. For example, in the compressor optimization case, the safe point is obtained from domain knowledge: an equal distribution of the gas is a safe point, but it may be far from optimal if the compressors are dissimilar [11].
We further point out that most of the algorithms in this domain require the safe set to be reinitialized after each iteration (see Table 1 in the PDF and the associated references at the end of this paragraph). In contrast, TVSafeOPT requires only one initial safe set at $t=0$. As indicated, we will dedicate a subsection in Appendix C to address these points.
**Q1**
In complex settings it is still reasonable to assume that at least one safe point is available. As we also mention in the answer to Weakness 3, this is already sufficient for TVSafeOPT to run and to guarantee optimality within the safe region, which can either enlarge or shrink over time (see Fig. 1 in the submitted manuscript).
For example, in the compressor optimization case, the safe point is obtained from domain knowledge - an equal distribution of the gas is a safe point, but may be far from optimal if the compressors are dissimilar [11].
**Q2**
The choice of the kernel and the hyperparameters of the algorithm is part of ongoing research [6] and typically requires some extended knowledge about the underlying functions [7,8]. A common choice for the kernel is a squared exponential kernel, which intuitively puts emphasis on points that are close to each other. The meaning of "close" is further defined by hyperparameters, which can be obtained, for instance, through maximum likelihood estimation [10]. One way to adjust the hyperparameters is to collect data in advance and optimize the hyperparameters via maximum likelihood optimization, following standard GP regression methods.
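A minimal sketch of that maximum-likelihood step (synthetic 1-D data and an illustrative lengthscale grid, not the setup used in the paper): evaluate the zero-mean GP log marginal likelihood for an SE kernel over a grid and keep the best lengthscale:

```python
import numpy as np

def log_marginal_likelihood(X, y, lengthscale, noise_var=1e-2):
    """Zero-mean GP log marginal likelihood with a 1-D SE kernel."""
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d2 / lengthscale**2) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 5, 40))
y = np.sin(X) + 0.1 * rng.normal(size=40)     # smooth signal, noise std 0.1

grid = np.linspace(0.05, 3.0, 60)
lmls = [log_marginal_likelihood(X, y, l) for l in grid]
best = grid[int(np.argmax(lmls))]             # ML estimate of the lengthscale
```

In practice one would optimize all hyperparameters (lengthscales, signal and noise variances) jointly with a gradient-based optimizer, but the grid version shows the principle.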
**Q3**
From observations during the simulations, the computational complexity of each iteration of TVSafeOpt is only slightly higher than that of SafeOpt. The sample complexity of the algorithm is similar to SafeOpt [1] in the stationary case and scales as $\mathcal{O}(\varepsilon^{-2})$, where $\varepsilon$ is the accuracy threshold for the optimization. In the non-stationary case, bounding the sample complexity is still an open problem.
**References**
[1] F. Berkenkamp, A. Krause, A. P. Schoellig, "Bayesian optimization with safety constraints: safe and automatic parameter tuning in robotics," Machine Learning, 2021;
[2] A. Holzapfel, P. Brunzema, S. Trimpe, "Event-triggered safe Bayesian optimization on quadcopters," L4DC, 2024;
[3] C. König, M. Turchetta, J. Lygeros, A. Rupenyan, A. Krause, "Safe and efficient model-free adaptive control via Bayesian optimization," ICRA, 2021;
[4] C. König, M. Ozols, A. Makarova, E. C. Balta, A. Krause, A. Rupenyan, "Safe risk-averse Bayesian optimization for controller tuning," IEEE Robotics and Automation Letters, 2023;
[5] D. Widmer, D. Kang, B. Sukhija, J. Hübotter, A. Krause, S. Coros, "Tuning legged locomotion controllers via safe Bayesian optimization," CoRL, 2023.
[6] C. Fiedler, J. Menn, L. Kreisköther, S. Trimpe, "On safety in safe Bayesian optimization," arXiv, 2024.
[7] J. Bergstra, R. Bardenet, Y. Bengio, B. Kégl, "Algorithms for hyper-parameter optimization," NIPS, 2011.
[8] F. Berkenkamp, A. P. Schoellig, A. Krause, "Safe controller optimization for quadrotors with Gaussian processes," ICRA, 2016.
[9] Y. Sui, A. Gotovos, J. Burdick, A. Krause, "Safe exploration for optimization with Gaussian processes," PMLR, 2015.
[10] Y. Li, P. Nikitsaranont, "Gas turbine performance prognostic for condition-based maintenance," Applied Energy, 2009.
[11] B. G. Liptak, Instrument Engineers' Handbook, vol. 2, 2005.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. Based on the responses I am comfortable with my current score for this paper. | Summary: The paper introduces the TVSAFEOPT algorithm, which is based on Gaussian processes with spatio-temporal kernels, designed specifically for optimizing time-varying rewards under time-varying safety constraints. The algorithm provides formal safety guarantees in a general time-varying setting, ensuring safety even when exploring non-stationary safe regions. It robustly subtracts safety margins to prevent unsafe decisions, adapting in real-time to changing environments. Furthermore, they provide optimality guarantees for locally stationary optimization problems, ensuring near-optimal solutions when the optimization problem becomes stationary.
Strengths: They provide formal safety guarantees in dynamic environments, ensuring safe decision-making even in non-stationary settings.
Additionally, the algorithm offers optimality guarantees for stationary optimization problems, enhancing its reliability and performance
Extensive numerical simulations were provided to validate the proposed approach.
Weaknesses: They extend the SafeOpt algorithm from the literature. However, it is unclear what the additional contributions and differences between these two approaches are.
Technical Quality: 2
Clarity: 2
Questions for Authors: -
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: They extend the SafeOpt algorithm from the literature. However, it is unclear what the additional contributions and differences between these two approaches are.**
We thank the reviewer for the positive assessment of our paper, and for their constructive feedback. We now provide a table (see Table 1 in the PDF and refer to References) to give a clear overview of the contributions of our proposed approach TVSafeOPT, compared with other methods associated with time-varying safe optimization. In contrast to safe learning methods based on contextual Bayesian optimization, such as Contextual SafeOPT [1], which rely on using time as a context and thus require a safe seed for every context, our TVSafeOPT requires only an initial safe seed. Adaptive Goal Oriented Safe Exploration (A-GoOSE) can also handle a single initial seed, but without providing theoretical guarantees on safety across time [3,4]. Furthermore, compared to event-triggered methods such as ETSafeOPT [2], our method requires the initial safe set to be provided only once, at the initial time instant, $t=0$. In contrast, ETSafeOPT requires a new safe state initialisation after each iteration. Moreover, we are able to provide both safety and convergence guarantees to the optimal solution within the safe set, while ETSafeOPT guarantees only safety.
**References**
[1] F. Berkenkamp, A. Krause, A. P. Schoellig, "Bayesian optimization with safety constraints: safe and automatic parameter tuning in robotics," Machine Learning, 2021;
[2] A. Holzapfel, P. Brunzema, S. Trimpe, "Event-triggered safe Bayesian optimization on quadcopters," L4DC, 2024;
[3] C. König, M. Turchetta, J. Lygeros, A. Rupenyan, A. Krause, "Safe and efficient model-free adaptive control via Bayesian optimization," ICRA, 2021;
[4] C. König, M. Ozols, A. Makarova, E. C. Balta, A. Krause, A. Rupenyan, "Safe risk-averse Bayesian optimization for controller tuning," IEEE Robotics and Automation Letters, 2023;
[5] D. Widmer, D. Kang, B. Sukhija, J. Hübotter, A. Krause, S. Coros, "Tuning legged locomotion controllers via safe Bayesian optimization," CoRL, 2023.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for the clarifications. I would like to maintain my score.
Rebuttal: Dear Chairs,
Dear Reviewers,
Thank you for the thoughtful feedback on our manuscript. All three reviewers found our results of interest to the wide readership of NeurIPS. In particular, the reviewers appreciated the theoretical guarantees for safety and optimality of our proposed TVSafeOPT algorithm.
The main reservation of the reviewers concerns the choice of the initial safe set and the robustness of our algorithm with respect to it, together with a clearer explanation of how the proposed algorithm differs from current time-varying safe learning algorithms, with a direct comparison. Other suggestions to further improve the manuscript ask for a clarification of the technical challenges of the time-varying setting and a discussion on trading off safety and optimality.
We have now revised the manuscript to address all the comments from the reviewers, in particular:
1) We show via additional experiments that our algorithm is robust to perturbations of the initial safe set;
2) We make it clear how our algorithm departs from other time-varying optimization algorithms in the literature concerning optimization in safety-critical settings, and provide Table 1 to give an overview of the state of the art and the contribution of our paper.
We also show, experimentally, the contribution of our algorithm with respect to the baseline SafeOPT and to a time-varying BO baseline, event-triggered BO;
3) We describe the technical challenges in providing theoretical guarantees for the case in which the objective function does not reach a steady state.
We hope that these revisions, as well as the individual answers provided to all comments and questions, improve the presentation of our algorithm. We appreciate the opportunity to resubmit our manuscript for potential publication in NeurIPS and thank you in advance for your time.
Yours sincerely,
The authors
Pdf: /pdf/8d1da74a03ec898b6d4a1d1d3c0e5673bea0092b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion | Accept (poster) | Summary: This work presents Diffusion Forcing, a new framework for probabilistic sequence modeling that combines diffusion models with Bayesian filtering. This framework builds on state of the art approaches to sequence modeling using diffusion models, but has several novel contributions.
First, it allows the model to use *independent* noise levels per element in the sequence, which is a key factor for the stability of autoregressive generation and conditional, guided generation.
Second, this work adapts the proposed method to sequential decision making by defining a new guidance technique that allows the next token to be generated with guidance on the full distribution of future tokens.
Third, the authors go to great lengths in demonstrating empirically that their proposed framework is general and can be applied beyond text generation, as opposed to related work.
Diffusion forcing relies on simple ideas: noising is understood as (partial) masking, and it is applied to sequential data, giving rise to a causal variant of diffusion forcing. In practical terms, we have a dynamical system modeled with a simple RNN, in which hidden states follow the Markovian principle: the next hidden state depends on the previous hidden state and a current observation. Previous, next, and current refer to indices in the sequence. Observations are obtained by running a diffusion model with independent noise levels per sequence index, and noisy observations can be used to transition to the next hidden state. A connection with Bayesian filtering is made clear in the paper. Then, we end up with an observation model (for state transitions) and a denoising model (for the diffusion of the observations).
The authors provide a sound development of the training procedure and objective, by showing that their training algorithm optimizes a weighted ELBO on the expected log-likelihood.
Strengths: * This work presents substantial improvement over the literature on the joint application of diffusion models and autoregressive models
* The proposed methodology is technically sound, and well supported by intuition, formal proofs and a wide range of experiments
* The experimental section expands over the literature by focusing on several domains including long-range video generation, planning, compositional generation and multivariate time series forecasting
Weaknesses: * The intuition of the effects of noising on long-horizon generation (appendix B.2) is very similar to the ideas described in a related work AR-Diffusion [62]. This does not highlight the contribution of *independent* noise levels per sequence index
* Experiments do not compare (at least to the best of my understanding) Causal Diffusion Forcing to AR-Diffusion, which would be the natural competitor. Nevertheless, I understand that this would require considerable amount of adaptation work, since AR-Diffusion tackles language modeling mainly
* I liked Appendix B.6, but it is not referenced in the main text, and I think it would be more helpful than the figures in sec 3.1
Technical Quality: 3
Clarity: 3
Questions for Authors: Q.1: could you please provide a clear summary of why *independent* noise levels are key for your method, and substantiate the difference with respect to AR-Diffusion [62]? I have read Appendix B.2 and Appendix C, where you attempt to clarify this, but I think the benefits for stability and for conditioning on corrupted observations are not spelled out sufficiently
Q.2: is there a way to compare your work to AR-Diffusion that would not require a substantial re-factoring of their code, such that it can be applied to one (e.g. video) use case in your experiments? Another way to go would be to modify your CDF method and use linearly dependent noise levels, to ablate on the requirement for independent noise levels
Minor (no need to answer):
* typos: I could spot one typo in line 186: $[x_1^0, x_2^0, x_3^{K/2}]$
* please check the proofs in the appendix as there are some typos that slipped there, as well as the text in the appendix that has several grammar problems, missing verbs and the like
====== POST REBUTTAL MESSAGE ======
Thank you for the rebuttal. I have raised my score.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Independent noise vs AR-Diffusion
We thank the reviewer for highlighting the need for a more explicit discussion relative to AR Diffusion.
We would first like to clarify that the stabilization discussion in Appendix B.2 is orthogonal to AR-diffusion. The key to AR-Diffusion is training and sampling **future** tokens with linearly growing noise similar to the pyramid sampling of Diffusion Forcing. On the other hand, stabilization is a technique of assigning non-zero noise levels to **context** tokens (history) instead of future tokens. In fact, our stabilization experiment uses autoregressive diffusion, not pyramid noise like AR-Diffusion.
Furthermore, AR-diffusion style pyramid sampling is just one of the possible sampling schemes supported by diffusion forcing **after training once**. In online decision-making like our robot experiment, we can use autoregressive diffusion to achieve the best speed, use pyramid sampling to achieve better guidance, or use many other schemes. In **Figure 1 of rebuttal pdf**, we further show that non-causal diffusion forcing can do frame interpolation by assigning noise level 0 to keyframes at arbitrary time steps, while AR-diffusion’s non-causal variant, Rolling Diffusion, lacks this flexibility.
Finally, we found that Diffusion Forcing beats AR-Diffusion even using AR-Diffusion’s own sampling scheme as we show and analyze in our response to the next question. This shows that independent noise, just like standard techniques such as image pre-training, plays an important role in enhancing the underlying representation.
> Experimental comparison to AR-Diffusion
We are happy to report that we have re-implemented and benchmarked against AR-Diffusion for video generation, as well as a concurrent work named Rolling Diffusion [1], which is essentially non-causal AR-Diffusion. As shown in **Figure 4 of the rebuttal pdf**, both AR-Diffusion and Rolling Diffusion performed worse on the FVD metric compared to diffusion forcing. Our main insight is twofold: 1. while both use linearly growing noise levels along the time axis, both seem to be sensitive to the slope of this linear growth. We spent a fair amount of time tuning the two baselines, but since one has to use the same slope during both training and testing, it required us to re-train many times to find one slope that’s reasonable for our data. 2. We observe that training with linearly growing noise has too much redundancy. The trajectories tend to have a higher signal-to-noise ratio, which made the task too easy. We tried the boundary condition in Rolling Diffusion [1] and multiple tunings to mitigate this, but still failed to match Diffusion Forcing. In fact, Table 2 of Rolling Diffusion [1] shows that linearly growing noise isn’t necessarily better than normal diffusion on a large video dataset, aligning with our observation. In addition, a fixed slope at training time creates more inconvenience for DDIM sampling due to rounding, creating practical pain for any user who wants to tune the number of DDIM steps.
> Reference Appendix B.6 in the main paper
Thank you for your suggestion - we will definitely reference Appendix B.6 in the main paper, and will attempt to move it to the main paper entirely, space permitting after incorporating all reviewer feedback!
> Typo
Thank you for catching the typo, we have fixed it!
> Grammar and spelling in appendix
We have done a full pass over the appendix and polished grammar and spelling, thanks for pointing this out!
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Dear Authors,
thank you for the rebuttal, which answered all my questions clearly. Also, thank you for the additional experiments provided, as well as the thorough discussions with the other reviewers.
For all these reasons, I will raise my score.
---
Rebuttal 2:
Comment: Dear Reviewer, thank you for your insightful feedback! We will carefully revise our paper to incorporate your suggestions and the latest results from our discussion.
Title: Thank you to the reviewers | Summary: The authors introduce Diffusion Forcing (DF), a method for diffusion of sequential data where the noise level at each token can be different (“independent”). The authors show that DF provides more flexible steerability properties and more stable rollouts compared to full-sequence diffusion and teacher forcing. Experimentally, these enable stable video prediction along several timesteps, improved performance on planning tasks, and more robust robotic visuomotor control, relative to the respective relevant baselines.
The following is a more detailed summary.
### Overview of the method
Diffusion Forcing (DF) denoises a sequence of noisy tokens $x^k\_1, \cdots, x^k\_T$ at noise level $k$, starting at $k=K$ (maximum noise) and finishing at $k = 0$. A sequence of hidden states $z\_1, \cdots, z\_T$ is also maintained throughout. Importantly, different tokens can be denoised by different amounts at each denoising step.
The architecture has two main components:
an encoder $p\_\theta(z\_t | z\_{t-1}, x^{k\_t}\_t, k\_t)$ mapping the previous hidden state $z\_{t-1}$, the current noisy token $x^{k\_t}\_t$ and the noise level $k$ to the new value of the current hidden state $z\_t$;
a denoiser $\epsilon\_\theta(z\_t, x^{k\_t}\_t, k\_t)$, which is used to denoise $x^{k\_t}\_t$.
At training time, the noise levels $(k\_t)\_{1 \leq t \leq T}$ are sampled independently, and the encoder and denoiser are trained jointly using the usual diffusion loss on the output of $\epsilon\_\theta$.
At inference time, the tokens are initialized with independent Gaussians. They are then denoised by first computing hidden states from left to right (via an RNN, in this case) using $p\_\theta$, and then by updating the values of the tokens using their current values and the hidden states.
The authors provide an ELBO interpretation for their loss function in the appendix.
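To make the training and inference procedure concrete, here is a minimal toy sketch of a training step with *independent* per-token noise levels. The `encoder`, `denoiser`, and noise schedule below are placeholder stand-ins chosen for illustration, not the authors' architecture or schedule:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, K = 8, 4, 10  # sequence length, token dimension, max noise level

def encoder(z_prev, x_noisy, k):
    # toy stand-in for p_theta(z_t | z_{t-1}, x_t^{k_t}, k_t)
    return np.tanh(z_prev + x_noisy + k / K)

def denoiser(z, x_noisy, k):
    # toy stand-in for eps_theta(z_t, x_t^{k_t}, k_t)
    return x_noisy - z

def alpha_bar(k):
    # toy linear schedule; strictly positive so sqrt is defined
    return 1.0 - k / (K + 1)

def training_step(x_clean):
    # key idea: sample an INDEPENDENT noise level per token
    ks = rng.integers(0, K + 1, size=len(x_clean))
    z = np.zeros(D)
    loss = 0.0
    for t, k in enumerate(ks):
        eps = rng.standard_normal(D)
        ab = alpha_bar(k)
        x_noisy = np.sqrt(ab) * x_clean[t] + np.sqrt(1.0 - ab) * eps
        # hidden states roll forward through noisy tokens (left to right)
        z = encoder(z, x_noisy, k)
        # usual diffusion noise-prediction loss, averaged over tokens
        loss += np.mean((denoiser(z, x_noisy, k) - eps) ** 2)
    return loss / len(x_clean)

loss = training_step(rng.standard_normal((T, D)))
```

At inference time the same machinery runs with a *chosen* per-token noise-level schedule instead of a random one, which is what gives the method its sampling flexibility.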
### Features of Diffusion Forcing
- The authors highlight the following features of DF:
- It supports classifier guidance, like ordinary diffusion;
- It allows for keeping the noise level higher for future tokens. This makes intuitive sense in an auto-regressive setting, where future tokens depend on past tokens.
- It supports a flexible planning horizon, as tokens are denoised sequentially.
- It supports a more flexible form of classifier guidance (or reward guidance): past tokens can be guided by rewards that depend on future tokens, due to DF’s autoregressive architecture.
When doing reward guidance, the authors propose drawing many samples of possible future trajectories, and averaging their rewards, rather than using a single sample as in ordinary classifier guidance. They term this approach Monte Carlo Tree Guidance (MCTG).
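As I understand it, this guidance amounts to averaging reward gradients over many sampled futures rather than following a single realization. A toy sketch (the rollout and reward below are hypothetical stand-ins; in the paper the futures are generated by the model itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_future(x_t, horizon, noise=0.5):
    # hypothetical stochastic rollout: a random walk starting at token x_t
    x = x_t
    for _ in range(horizon):
        x = x + noise * rng.standard_normal(x_t.shape)
    return x

def mc_guidance_grad(x_t, horizon, n_samples=256):
    # hypothetical reward log c(x_end) = -||x_end||^2; in this linear toy
    # rollout the endpoint is a differentiable function of x_t with unit
    # Jacobian, so grad_{x_t} log c(endpoint) = -2 * endpoint per sample.
    # Monte Carlo guidance averages that gradient over many futures.
    grads = [-2.0 * sample_future(x_t, horizon) for _ in range(n_samples)]
    return np.mean(grads, axis=0)

x_t = np.array([1.0, -1.0])
g = mc_guidance_grad(x_t, horizon=5)
# with enough samples, g concentrates around -2 * x_t (the noise averages out)
```

The averaging is what makes this an empirical estimate of the gradient of the *expected* future reward, rather than the gradient of one sampled trajectory's reward.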
### Overview of experimental findings
- The authors evaluate Diffusion Forcing on video prediction, planning and robotics tasks. Their findings can be summarized as:
- In video prediction (datasets: Minecraft gameplay and DMLab), DF provides more stable rollouts than full-sequence diffusion and teacher forcing. In particular, DF’s rollouts do not diverge as the number of tokens increases.
- In planning (environment: Maze2d from D4RL), DF produces more consistent trajectories, and executing the generated actions indeed produces a trajectory similar to that given by the generated states.
- In addition, DF with MCTG significantly outperforms Diffuser on Maze2d environments.
- In robotics, DF is robust to missing or noisy observations and can perform imitation learning with memory (as it maintains a hidden state, rather than directly mapping observations to actions).
In the appendix, the authors provide additional experiments on compositionality and time series prediction.
Strengths: 1. The authors propose an original and performant method combining strengths of diffusion (steerability, robustness to noise, high-quality gradual sample generation) and auto-regressive sequence modelling (flexible horizons, temporal causality, memory in the case of RNNs).
1. In addition, the authors provide a theoretical justification of their loss function in terms of an evidence lower bound (ELBO).
1. The paper is written clearly, providing a clear motivation for the authors’ approach, contextualizing DF relative to existing work (especially Diffuser, AR-Diffusion and Diffusion Policy), and highlighting the main contributions of the method conceptually and experimentally.
1. Trajectory inconsistency is a major limitation of Diffuser, which I have contended with in my own research. Mitigating this limitation is an important enabler of bringing the strengths of diffusion to bear in sequential decision making.
1. Monte Carlo Tree Guidance can be seen as maximizing an empirical estimate of the expected future reward. From a policy optimization perspective, this seems more principled than doing gradient ascent on the realized cumulative reward of a given trajectory, as is done in full-sequence diffusion (e.g. Diffuser). As the authors explain in Appendix B.3, this technique relies on the architecture of DF to be effective.
1. The results on video prediction, available in an anonymized project website provided in the abstract, are particularly impressive in terms of stability and 3D consistency. This, together with results on planning and robotics, indicates DF might contribute to advances in diffusion world models; a research area of established relevance that has received significant attention recently.
Weaknesses: 1. Clarification on classifier guidance term $\nabla\_x \log c (x^{\textrm{new}}\_{1:H})$: If this term is to be understood as the gradient of $x \mapsto \log c(x)$ evaluated at $x^{\textrm{new}}\_{1:H}$, then the gradients of $c$ on future tokens would not flow to previous tokens, as the inputs $x$ are “frozen” before being fed into $\log c$. It seems that what the authors mean to say is that future tokens are treated as a differentiable function of past tokens when computing the gradients. It would strengthen the exposition if the authors either clarify this point in the paper, or update the notation to avoid confusion, as the current notation might lead the reader to believe that the gradients from future tokens do not flow into past ones.
1. The naming of Monte Carlo Tree Guidance seems to misleadingly suggest a similarity with Monte Carlo Tree Search (MCTS). However, the method consists of sampling several future trajectories independently and averaging their guidance gradients, which seems quite divorced from MCTS, which involves actual search on a tree of states and actions and backpropagation of rewards through this tree. As such, I believe naming the technique Monte Carlo Guidance would be more appropriate.
1. High-dimensional control evaluation: Janner et al. (2022) evaluate Diffuser on high-dimensional control locomotion tasks from D4RL. It would be interesting to see an evaluation of Diffusion Forcing in this setting, in particular regarding the consistency between states and actions. I recall from my own experience that executing longer plans from Diffuser in these locomotion environments in an open-loop fashion (i.e. no re-planning) led to trajectories diverging from the generated states, as noted by the authors. It would be interesting to see whether this is addressed by Diffusion Forcing on these higher-dimensional environments.
1. The compositional generation environment referenced in Section 4.3 is very similar (if not identical) to the one used by Janner et al. (2022) in Figure 1b of their paper. I believe it is likely worth mentioning this in Section 4.3.
1. Minor formatting problems
1. Line 186: $x^{K/2\_3}$ -> $x^{K/2}\_3$
1. Table 1 caption: “Diffusion Forcingkeeps” -> “Diffusion Forcing keeps”; “Diffusion Forcingachieves” -> “Diffusion Forcing achieves”
1. Line 495: “in full abstraction” -> “in full generality”
1. Line 503: “likelihood for likelihood of all” -> “likelihood for all”
1. Equation A.3: superscript $k\_2$ on the LHS should be $k\_s$
1. Line 516: revise the bracketing of the expression involving $p\_\theta$.
1. Line 522: “under uniform levels” -> “under uniformly sampled levels”
1. Line 524: “in the sequel” -> “in the following section”
1. Equation A.5: $s \leq T$ -> $1 \leq s \leq T$
1. Line 592: specify range for $s$ on the first expectation
1. Line 598: correct superscripts $t\_k$ to $k\_t$
1. Line 608: revise bracketing of the numerator inside the $\ln$
1. Line 616: In the last and penultimate lines, replace $\frac{\ln p(...)}{q(...)}$ by $\ln \frac{p(...)}{q(...)}$
1. Line 628: “we” -> “we have”
1. Line 631: correct superscript of $x\_t$ on the second line
1. Line 634: expression with $p\_\theta$ broken between lines
1. Line 635: capitalize Dirac
1. Equation B.1: include \left[ \right] in the brackets
1. Line 664: “we are” -> “we use”
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Does the flexible planning horizon (line 209) of Diffusion Forcing not derive from the choice of an RNN as the architecture, rather than e.g. a UNet? Would implementing existing methods such as Diffuser (Janner et al. 2022) not allow for a similar property?
1. In the paragraph “Benefit of Modeling Causality”, the authors highlight that states and actions produced by DF are causally related, which does not hold in practice for Diffuser. Do the authors claim this is due to DF explicitly incorporating temporal structure into its architecture? Could it not also be due to the use of an observation model $p\_\theta(x^0\_t|z\_t)$ to predict the noise-free token $x^0\_t$ from a hidden state $z\_t$?
1. At first sight and in its current form, the method seems tailored to the use of an RNN architecture, rather than a Transformer. For example, the denoiser is applied token-wise, with the information from previous tokens affecting the current token only via the hidden states $z\_t$. How would the method have to be adapted, if at all, to work with transformers, in case one wants to scale up Diffusion Forcing?
1. Janner et al. (2022) show in Section 5.4 how to apply Diffuser with a variable planning budget, and study how the resulting performance varies with the planning budget. Can Diffusion Forcing also be run with a variable planning budget, through warm-starting (as for Diffuser) or otherwise? If so, it would strengthen the paper if the authors described how, and included a similar budget vs. performance analysis, especially in planning and robotics tasks.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors address relevant limitations in Section 5 and social impacts in the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their in-depth review - we are particularly happy to be able to address some of the limitations of Diffuser that the reviewer had to contend with themselves in the past!
> Clarification on Classifier Guidance Term
Sorry about the confusion. The reviewer is exactly correct in that future tokens are a differentiable function of past tokens via the posterior model, and that this enables us to perform guidance. We adopted the notation for simplicity and to avoid clutter/accommodate space restrictions, but we will absolutely clarify the meaning of that gradient in the text of the “Guidance” section in our subsequent revision. On the other hand, in our transformer implementation of diffusion forcing, any token can attend to the entire history, allowing us to reproduce the guided planning results as well.
> The naming of Monte Carlo Tree Guidance is misleading
Thank you for your comment. Amusingly, we had an internal discussion among the authors on exactly this topic - we report that one of the parties feels highly vindicated by your comment. We will follow your advice and rename the method into Monte Carlo Guidance!
> Consistency of action and state in high-dim control environments
We tried CDF on one high-dimensional control environment (hopper-medium-expert) without guidance to exclude confounding factors. Unfortunately, the observation still eventually diverges from the ground truth without feedback, as no model is perfect even in deterministic environments. However, it’s clearly better than Diffuser, just like in our visualizations for the 2D maze experiment. In the generic RL setting, the environment can be stochastic, and therefore open-loop execution of a plan is expected to diverge from the plan even with an oracle model. Therefore, we believe a thorough evaluation of dynamics is better reflected by the result in Section 4.5, which shows CDF is competitive for high-dimensional time-series data.
> Similarity of compositional generation environment with Janner et al. (2022).
Thank you for highlighting this relationship - we will absolutely add a reference to Janner et al. (2022) to Section 4.3!
> Minor formatting problems
We apologize for these mistakes and have fixed all of them for the camera-ready!
> Does the flexible horizon property derive from RNN
Flexible horizon planning doesn’t depend on the RNN. We’ve reimplemented diffusion forcing with both a transformer and a 3D-Unet (with temporal attention), showing flexible horizon planning is fully reproducible with attention instead of an RNN. For causal attention, this is obvious. For non-causal variants, diffusion forcing allows us to mark padded future entries as pure noise to achieve a flexible horizon. The only ability that depends on the RNN is infinite rollout **without** a sliding window, because state space models have more invariance, although transformers can roll out longer **with** a sliding window.
> Could combining Diffuser with RNNs also lead flexible planning horizon?
Diffuser’s maze planning result is strictly fixed horizon. While Diffuser made an argument about flexible horizons via 1D convolution, its maze planning results critically depend on the replacement technique we mentioned in Appendix B.4. That is, Diffuser fixed the last token to be at the goal throughout the diffusion process to generate a fixed-length plan of a carefully chosen length. If the objective is to reach the goal as soon as possible, this approach would fail, as one doesn’t know which token should be replaced with the goal. We implemented this on Diffuser and cannot reproduce their numbers. We suspect that the main reason Diffuser got high scores is that its fixed-length objective coincidentally encourages very slow plans that make the agent stay near the goal for longer, an out-of-distribution behavior that doesn’t exist in the dataset.
> Source of causality in DF vs. Diffuser: architecture or observation model
It’s most likely the architecture itself. Our transformer implementation is direct empirical evidence that a separate observation model is not the key. It also follows theoretically: consider an alternative formulation of $x_t=[o_t, a_t, r_t]$ like Diffuser instead of $x_t=[o_{t+1}, a_t, r_t]$ used in our paper. Even though the hidden state and observation model are untouched, causality is mathematically broken - at time step t, one wants to diffuse action $a_t$ given the current $o_t$, but the process of diffusing $a_t$ samples another $o_t$, which might differ from the existing observation. Such an inconsistency suggests the temporal structure is critical.
> scale up Diffusion Forcing for transformers
Between the initial draft and rebuttal, we’ve already reimplemented diffusion forcing with architectures like Transformers or 3D-Unet and reproduced results. We also find Diffusion Forcing to work well in latent diffusion settings for higher-resolution videos (**Figure 1 of rebuttal pdf**).
> Can Diffusion Forcing, like the Diffuser, be run with a variable planning budget, through warm-starting or otherwise?
Unlike Diffuser[1], diffusion forcing implements DDIM sampling which already achieves good speed without Diffuser’s warm-starting technique. Therefore, we provide an alternative planning budget analysis here in terms of frequency of replanning. In **Figure 2 of rebuttal pdf**, we show the performance when we replan 1x, 0.5x, 0.25x, 0.125x of episode length and for 50 steps. We found that diffusion forcing’s performance worsens as we replan less frequently. We’d like to clarify that this is partially due to the dataset itself - the maze dataset never contains the behavior of staying at the goal once reached, and the agent always walks away from the goal even after reaching it. Therefore, diffusion forcing is supposed to follow this suboptimal behavior if there is less replanning.
[1] Michael Janner, Planning with Diffusion for Flexible Behavior Synthesis, 2022
---
Rebuttal Comment 1.1:
Title: Thank you to the authors
Comment: Thanks to the authors for taking my suggestions into consideration. I consider the achievements of this paper to be impressive and highly relevant for the line of work at the intersection of control and generative modeling. Hence, I maintain my position of strongly recommending acceptance.
---
Reply to Comment 1.1.1:
Title: Thank you to the reviewers
Comment: Dear Reviewer, thank you for your insightful feedback! We will carefully revise our paper to incorporate your suggestions and the latest results from our discussion. | Summary: This paper proposes to augment autoregressive models with diffusion. Specifically, rather than generating every token in one shot (one neural network evaluation), the paper proposes to gradually denoise the tokens following an autoregressive order. That is, every token is given a different noise level (lower for former tokens and higher for latter ones), and the tokens are jointly denoised to generate better samples. Compared to pure autoregressive prediction, diffusion forcing allows the model to refine the samples through the diffusion process. Compared to diffusion models, the proposed model is capable of variable-length generation and extrapolation.
The authors also demonstrate additional potential generation tasks that can be done by diffusion-forcing models such as guided autoregressive sampling.
Empirical results demonstrate that diffusion forcing performs well on video prediction and various planning tasks.
Strengths: This paper proposes an interesting combination of autoregressive models and diffusion models and demonstrates that the combination of both outperforms both individual models in terms of performance. Further, the diffusion-forcing paradigm offers many more applications that are otherwise impossible. For example, while doing variable-length generation, the model can leverage classifier-based/-free conditions. This provides much better flexibility to inference-demanding tasks such as planning and control.
The authors propose a training objective of diffusion forcing models based on noise prediction. The objective is proved to be a reweighted version of the evidence lower bound and thus is sound.
Diffusion forcing achieves much better performance compared to autoregressive models and diffusion models in long-horizon generation tasks.
Weaknesses: A more detailed discussion of the noise schedule is desired to better understand the effectiveness of diffusion forcing. Is it necessary to use different noise schedules in different tasks to achieve good performance? Further, can we train the model with various/arbitrary noise schedules and at evaluation time find a good schedule? If either of these is possible, it would greatly reduce the training complexity and extend diffusion forcing to more applications.
Theorem 3.1 states that the proposed objective is equivalent to a reweighting of the evidence lower bound. However, it is unclear how the noise schedule biases the reweighting since a very badly balanced ELBO can render the training process unstable.
How diffusion forcing balances efficiency and performance. In the extreme case where only one denoising step per token is allowed, diffusion forcing reduces to autoregressive generation. How much performance gain can we expect if we allow for more computation time?
Technical Quality: 3
Clarity: 3
Questions for Authors: How easy or difficult can diffusion forcing be applied to non-autoregressive generation? Although diffusion forcing improves the performance in the autoregressive generation regime, some tasks (e.g., constrained text generation) require awareness of future tokens to generate the current ones. I wonder if diffusion forcing can be extended to this regime.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As discussed by the authors, one of the main limitations is that diffusion forcing is only tested with RNN base models but not other autoregressive models such as autoregressive transformers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback. We respond to your comments below:
> Can we train with arbitrary noise schedules and find a good schedule at evaluation time?
Yes - at the core of Diffusion Forcing lies exactly the idea that by training with **arbitrary, random** noise schedules, we can experiment with arbitrary noise schedules at evaluation time to find the optimal schedule for each task! In practice, we found this to be very useful, as users can try different noise schemes at sampling time for different applications after training once, whereas for various baselines even the slightest tuning would require retraining. The proof of the ELBO extends to arbitrary noise schedules as well.
> Is it necessary to use different noise schedules in different tasks?
In short, yes. For example, we found that pyramid sampling with causal diffusion forcing (CDF) is best for MCTG planning, while autoregressive diffusion offers the best consistency for causal video generation. When conditioning on ground-truth context frames, CDF works best without stabilization; in contrast, when the context is previously generated, CDF works best with stabilization, as shown in **Figure 3 of the rebuttal pdf**. Furthermore, even if we commit to one specific schedule such as pyramid sampling, it is critical to tune the rate at which uncertainty grows with time via the noise schedule. DF allows us to do this easily after training once, while AR-Diffusion [2] (suggested by Reviewers 35x2 and 2PA2) makes this difficult, as this rate is a training hyperparameter and fixed at test time.
We will use the extra page from the camera ready to include this discussion of noise schedules!
> Reduce the training complexity
Surprisingly, we found that random, independent noise schedules at training time do not significantly add to the training complexity. Due to limited space, please refer to the experiment in the general response’s “Improved Model Performance” section, which shows that the overhead of independent noise is insignificant when considering the overall training compute.
We speculate that random, independent noise levels at training time can be seen as a form of data augmentation [3], leading to a bigger “bang for the buck” from the training data. Further, they might in expectation improve the quality of gradient flow backward in time.
> Balance between performance and computation time
The suggested sampling scheme reduces to autoregressive diffusion with a single DDIM step, which mainly captures only the low-frequency part of the data. With a larger compute budget, we can sample autoregressively with more DDIM steps and obtain much higher-quality samples. As shown in **Figure 2 of the rebuttal pdf**, the main video metric, FVD, improves (decreases) as sampling time increases. A similar trend holds for maze planning.
> reweighted ELBO can be poorly balanced
Indeed, the formal version of Theorem 3.1 in the Appendix, Theorem A.1, states the weighting explicitly: the ELBO on the expected log-likelihoods, taken over the randomness of the noise schedule, corresponds to an objective that reweights the contribution of the terms from denoising step k by the step number k. Thus, the reweighting factor is at most a factor of the total number of denoising steps, K. Note that this weighting by k is independent of the noise schedule (that is, of the variance of the noise at each step k) and depends only on the position k of the noising step (as well as on our decision to select noise levels uniformly and independently). We will add this as a remark in the revision.
Though we could choose to upweight gradients from earlier steps by more (as Theorem A.1 makes the weights explicit), not reweighting the gradients according to the ELBO corresponds to effectively weighting gradients at smaller denoising steps k more, which may be desirable, as these steps capture the “higher resolution” features of the model.
Lastly, as noted in the remarks following Corollary A.2, our ELBO derivation is sufficiently tight that, given a sufficiently representative neural network, the optima of all (non-trivial) reweightings of our ELBO are also optima of the desired expected likelihood. Hence, the weighting does not meaningfully bias the training objective. We do acknowledge that the weighting may be salient for the dynamics of optimization, but empirically, we find that our decision not to compensate for the step-wise weighting of the ELBO leads to reliable training. We will clarify the remarks following Corollary A.2 to emphasize this point further.
> Main result is RNN
We’ve reimplemented diffusion forcing with both transformer and 3D-Unet architectures after the draft submission. Please see the general response for details due to limited space.
> Apply diffusion forcing to non-autoregressive generation
In the maze planning experiments, we already show how guidance can be used to achieve certain future outcomes. In fact, the reconstruction guidance discussed in Appendix B.4 has been widely used to achieve consistency with context through guidance. Alternatively, one can simply use a non-causal neural network architecture, like our implementation of the 3D-UNet, and then - as discussed in the paper - vary the per-token noise level at training time. One can further mask out padded entries by marking them as full noise. We are actively pursuing this direction in follow-up work: initial results in **Figure 1 of the rebuttal pdf** showcase frame interpolation results of Diffusion Forcing without expensive reconstruction guidance. However, a detailed discussion of this setting would exceed the capacity of the present paper, though we will cover this direction in the discussion section.
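As a toy illustration of the "mark padded or unknown tokens as full noise" idea (a hypothetical helper of our own, not code from the paper), frame interpolation can be phrased purely as a choice of per-token noise levels:

```python
import numpy as np

def interpolation_levels(T, K, known):
    """Observed frames are pinned at noise level 0 (clean); frames to be
    generated -- and any padded slots -- start at full noise K."""
    k = np.full(T, K, dtype=int)
    k[list(known)] = 0
    return k
```

Sampling would then denoise only the positions whose level is K, with the clean frames acting as context.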
[1] Michael Janner et al., “Planning with Diffusion for Flexible Behavior Synthesis”, ICML 2022.
[2] Tong Wu et al., “AR-Diffusion: Auto-regressive Diffusion Model for Text Generation”, NeurIPS 2023.
[3] Diederik P. Kingma et al., “Understanding Diffusion Objectives as the ELBO with Simple Data Augmentation”, NeurIPS 2023.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response and for adding more experiments and ablations, which strengthen the paper. Therefore, I will maintain my positive rating.
---
Reply to Comment 1.1.1:
Title: Thank you to the reviewers
Comment: Dear Reviewer, thank you for your insightful feedback! We will carefully revise our paper to incorporate your suggestions and the latest results from our discussion. | Summary: This paper introduces Diffusion Forcing, a novel training paradigm for sequential generative modeling using diffusion models. Diffusion Forcing learns from sequential tokens with varying independent noise levels, enabling more flexible sampling strategies and general capabilities such as guidance. The experimental results demonstrate that Diffusion Forcing outperforms existing methods, including full sequence diffusion and teacher forcing, across various tasks.
Strengths: 1. The proposed Diffusion Forcing method is general and flexible, making it applicable to various tasks.
2. The paper provides a comprehensive discussion on the capabilities of Diffusion Forcing.
3. The experiments are well-designed and effectively demonstrate the proposed method's effectiveness.
Weaknesses: 1. **Writing clarity and organization.**
The writing style impacts readability, making the paper challenging to follow. It would benefit from a clearer organization. The paper primarily covers three points: (a) the proposed Diffusion Forcing (DF) method with independent noise levels and its theoretical analysis, (b) the capabilities of DF, including flexible sampling strategies, and (c) experimental results on various tasks. However, the current structure does not clearly present these points, particularly the DF method. Separating the design of DF and the intuitive explanation from the Bayesian filtering perspective, and listing the resulting capabilities in a separate section, would enhance clarity.
2. **Clarity of figures.**
The figures are not well-explained and are difficult to understand without referring to the text. For instance, Figure 1 omits latent states in the sampling process for both Diffusion Forcing and Teacher Forcing, which is confusing.
3. **Minor issues and typos.**
- Line 97: missing a ")"
- Line 139: "nevel" should be "level"
- Line 186: "$x^{K/2_3}$" should be "$x^{K/2}_3$"
- Line 178, 184, etc.: paragraph titles are inconsistently formatted
- Line 522: missing a "("
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. **Consistency between training and sampling algorithms.**
In Algorithms 1 and 2, there appear to be inconsistencies between the training and sampling algorithms. Can the authors provide an intuitive explanation for these inconsistencies? Specifically:
- During training, the predicted noise $\hat{\epsilon}_t$ is calculated using the latent from the previous step $z_{t-1}$, whereas during sampling, $\hat{\epsilon}_t$ is calculated using the latent from the current step $z_t^{\text{new}}$.
- Similarly, during training, $\hat{\epsilon}_t = \epsilon_\theta(z, x_t^{k_t}, k_t)$ uses the same noise level $k_t$ as the noisy observation $x_t^{k_t}$, but during sampling, $x_t$ has a noise level $\mathcal{K}_{m+1,t}$ instead of $k = \mathcal{K}_{m,t}$.
2. **Stabilizing auto-regressive generation.**
The authors propose conditioning on the (latent of) slightly noisy previous tokens with a noise level $0 < k \ll K$ to stabilize the auto-regressive generation. How were the values of $k$ chosen in the experiments? Could the authors provide ablation studies on the impact of using this trick?
**I promise to raise the score once all the weaknesses/questions are solved.**
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback! We respond to your comments below:
> Writing clarity and style & typos
Sorry about the confusion. It’s true that we opted out of an independent “Method” section in order to introduce the intuitions first. With the extra page for the camera-ready version, we promise to structure the paper more clearly. In particular, following your advice, we will make (1) a “Method” section that discusses DF in one subsection and its intuition from Bayesian filtering in another, (2) a “Capabilities” section discussing the novel capabilities, and (3) a “Results” section. Thank you for taking the time to list the typos; we will make sure to fix them in the camera-ready.
> Figure clarity
Thank you for the constructive advice. With the extra page of the camera-ready version, we will caption the figures in more detail so that readers can understand them without referring to the main text. Regarding the sampling scheme in Figure 2 (assuming the figure index in your comment is a typo), we were trying to illustrate a scheme that covers both the transformer and the RNN, as mentioned in the caption. We will therefore likely remove the latent from the training subfigure, to keep it consistent with the sampling figure, and add a new figure in the appendix illustrating the RNN variant with the latent state. We provide a preview of this modification in **Figure 5 of the rebuttal pdf**.
> Consistency between training and sampling algorithms.
Thank you for pointing this out. We will fix the algorithm boxes carefully:
1. $z_{t-1}$ in line 8 of the training algorithm should be changed to $z_{t}$.
2. In line 7 of the sampling algorithm, $k$ should indeed be corrected to $\mathcal{K}_{m+1,t}$; we will fix line 5 to reflect this.
3. Beyond the two errors you pointed out, some $\alpha$ are accidentally indexed by $t$ or $k$ instead of $k_t$.
> Details about stabilization and ablating $k$
In our implementation, we choose $k=20$ for $K=1000$, but this value largely depends on the difficulty of the task. In **Figure 3(a)** of the rebuttal pdf, we present the requested ablation of this stabilization value. We chose a slightly harder setting for this ablation, reducing the frame rate by half and using fewer DDIM steps to make compounding error more apparent. One can observe that the FVD metric improves as we gradually increase the stabilization level from 0, reaching its optimum around 100 before FVD starts to rise again. We found stabilization to be extremely important for highly stochastic datasets such as the BAIR robot pushing dataset, shown in **Figure 3(b) of the rebuttal pdf**: without it, we cannot achieve stable autoregressive rollout even with a very large number of DDIM steps. Stabilization is a fundamental technique for getting this dataset to work.
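The stabilization trick itself is simple to sketch (illustrative NumPy code under our own naming, assuming a DDPM-style cumulative-alpha array; not the paper's implementation):

```python
import numpy as np

def stabilize_context(frames, k, alphas_cumprod, rng):
    """Re-noise previously generated frames to a small level k (e.g. k=20
    out of K=1000) before conditioning on them, so the model treats its own
    slightly imperfect outputs as noisy observations rather than ground
    truth, damping compounding error during autoregressive rollout."""
    a = alphas_cumprod[k]
    noise = rng.standard_normal(frames.shape)
    return np.sqrt(a) * frames + np.sqrt(1.0 - a) * noise
```

Training with independent per-token noise is what makes this valid: the model has already seen context at every noise level, including small nonzero ones.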
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I highly appreciate the contribution of your work, including novel frameworks, capabilities and extensive experiments, and I expect the writing could be refined in further versions. I raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you to the reviewers
Comment: Dear Reviewer, thank you for your insightful feedback! We will carefully revise our paper to incorporate your suggestions and the latest results from our discussion. | Rebuttal 1:
Rebuttal: ## General Response
We thank the reviewers for their comments and suggestions. We are pleased that the reviewers find our paper original & interesting (Reviewers 35x2,tz82), general & flexible (Reviewers a6Fo, tz82), that it has great performance (Reviewers a6Fo, tz82,35x2) with substantial improvement over prior methods (Reviewers 2PA2) backed by well-designed experiments (Reviewers a6Fo,2PA2) and theoretical justification (Reviewers tz82,35x2,2PA2).
The outstanding concerns center around comparison to prior methods, scaling up beyond RNNs (Reviewers tz82,35x2), the need for independent noise (Reviewers tz82, 2PA2), and more ablations to help understanding (Reviewers a6Fo, tz82,35x2,2PA2). We present further discussion, supported by additional experimental results, to address these concerns.
> Comparison to new baselines and variants
We reimplemented AR-Diffusion [1] and its non-causal variant, Rolling Diffusion [2], with multiple rounds of tuning. As detailed in Appendix C, AR-Diffusion uses linearly growing noise during training and sampling. Its sampling scheme closely resembles Diffusion Forcing’s pyramid sampling, so we also present a result of Diffusion Forcing using AR-Diffusion’s own pyramid sampling scheme. In **Table 4(a) of the rebuttal pdf**, we find that Diffusion Forcing has a clear advantage over all baselines, causal or non-causal. Furthermore, the fact that Diffusion Forcing can beat the new baselines with their own sampling schemes further highlights the importance of independent noise, as we detail below.
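For intuition, a pyramid-style noise-level matrix $\mathcal{K}_{m,t}$ of the kind being compared here can be constructed as follows (our own illustrative construction; the exact schedules in the paper and in AR-Diffusion differ in their details):

```python
import numpy as np

def pyramid_schedule(T, K, rate=1.0):
    """Noise level sched[m, t] for token t at sampling round m: later tokens
    begin denoising later, so uncertainty grows along the sequence. `rate`
    sets how fast uncertainty grows; with independent-noise training it can
    be tuned freely at test time, whereas AR-Diffusion fixes it at training."""
    M = K + int(np.ceil(rate * (T - 1)))  # total sampling rounds
    sched = np.empty((M + 1, T), dtype=int)
    for t in range(T):
        offset = int(round(rate * t))     # token t starts `offset` rounds late
        for m in range(M + 1):
            sched[m, t] = int(np.clip(K - (m - offset), 0, K))
    return sched
```

Every column starts at K (pure noise) and ends at 0 (clean), with later tokens lagging behind earlier ones by an amount set by `rate`.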
> The need for independent noise
We’ve included additional investigative experiments that demonstrate the importance of independent noise, both for **improved model performance** as well as **numerous model capabilities**. We summarize them here and provide detailed discussions in the individual responses.
### Improved Model Performance
- In **Table 4(a) of the rebuttal pdf**, we trained with Diffusion Forcing but used AR- and Rolling-Diffusion’s sampling schemes for inference: this yields improvements over those baselines, suggesting that the benefit lies in training with independent noise.
- We present experimental evidence that, when adopting a standard technique for video diffusion models - image pre-training - the added complexity of independent noise is well warranted: we pretrain Diffusion Forcing and all baselines with images with no temporal structure, before training on video data. While this pretraining improves the performance of all baselines, Diffusion Forcing still (1) performs the best, and (2) its performance in earlier training iterations (20k) is superior to that of baselines at convergence. This shows that the overhead of independent noise is justified when considering the overall training compute (equivalent to 100k video training steps)
- We see a strong practical benefit of independent noise when tuning hyperparameters such as the uncertainty growth rate in pyramid sampling, which is painfully inconvenient in AR-Diffusion since it requires retraining.
### Model Capabilities
- As noted in the main text, the use of independent noise confers a number of additional capabilities in our model, including stabilization of autoregressive rollout (Sec. 4.1), modeling causal uncertainty (Sec. 4.2), and removing the need for expensive reconstruction guidance when conditioning on context (Appendix B.4). None of these capabilities can be achieved by full-sequence diffusion and AR-diffusion can only achieve the first and third one. To demonstrate yet another capability, we present preliminary results on using Diffusion Forcing for frame interpolation without reconstruction guidance in video prediction **Figure 1 of rebuttal pdf**.
> Further ablations
We’ve added the following ablations.
1. An ablation of the stabilization level $k$ for our ability to stabilize autoregressive generation. As shown in **Figure 3(a) of the rebuttal pdf**, video metrics monotonically improve as one goes from no stabilization up to a certain level, and monotonically worsen as the value increases further. We further present a qualitative visualization in **Figure 3(b) of the rebuttal pdf** for a more stochastic dataset, where stabilization is indispensable for preventing the rollout from blowing up.
2. Multiple ablations of performance vs. compute budget across domains. In **Figure 2 of the rebuttal pdf**, we found that Diffusion Forcing can trade speed for quality by varying the number of DDIM sampling steps for video and the replanning frequency for planning.
> Scaling up beyond RNNs
It is straightforward to adapt Diffusion Forcing to transformers, and no changes to the proposed framework have to be made. One simply uses any architecture - causal or non-causal, and then - as discussed in the paper - varies the per-token noise level at training time.
After the initial draft, we reimplemented Diffusion Forcing with alternative architectures such as a transformer and a 3D-UNet with temporal attention, and found that our approach works similarly well. In addition, we integrated Diffusion Forcing with modern techniques such as latent diffusion, allowing it to scale up to much longer video sequences (300 frames) and much higher resolutions (512x512). In **Figure 1 of the rebuttal pdf**, we present non-cherry-picked samples of Diffusion Forcing on RE10K, a hard, high-resolution dataset, with latent diffusion and temporal attention. We have also successfully reproduced the maze planning results using transformers. These results show positive signs for scaling up Diffusion Forcing. However, a detailed quantitative evaluation of these new variants would exceed the capacity of the present paper, though we will discuss this direction and its interesting implications in the discussion section.
[1] Tong Wu et al., “AR-Diffusion: Auto-regressive Diffusion Model for Text Generation”, NeurIPS 2023.
[2] David Ruhe et al., “Rolling Diffusion Models”, arXiv 2024.
Pdf: /pdf/f3641fe140310ec62b6a7beea077ed61700d1081.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Transferable Adversarial Attacks on SAM and Its Downstream Models | Accept (poster) | Summary: This work discusses an interesting security issue of deploying a model fine-tuned from a large foundation model in private downstream applications. It proposes a universal meta-initialized and gradient-robust adversarial attack (UMI-GRAT) to break the powerful SAM and its various downstream models, without requiring prior knowledge of the specific downstream task and data distribution. The authors explore the challenges associated with launching transfer-based adversarial attacks without task-related prior knowledge and provide theoretical insights on the deviation in updating the adversarial perturbation when using the open-sourced model as the surrogate model. An extensive evaluation of UMI-GRAT's performance, transferability, and efficiency was conducted across five datasets and three different downstream tasks (medical image segmentation, shadow segmentation, and camouflaged object segmentation), demonstrating the high effectiveness of the UMI-GRAT approach.
Strengths: 1. This work discusses a critical adversarial issue of deploying large foundation models in real-world applications and, for the first time, considers a more challenging and practical scenario in which the adversarial attacker breaks SAM and its downstream models without prior knowledge of the task and data distribution.
2. This work provides an in-depth analysis of the challenge of launching transferable adversarial attacks via the open-sourced SAM and proposes corresponding theoretical insights and solutions.
3. The work establishes a detailed experimental framework, and the proposed UMI-GRAT shows superior performance in misleading various SAM downstream models compared with previous methods, serving as a preliminary exploration for future research.
Weaknesses: 1. It’s recommended to give more comprehensive analysis of the UMI noise, including the size of the natural image dataset and the effect of various hyperparameters.
2. There are more metrics such as $E_\phi$, $F_\beta^\omega$ in the camouflaged object detection task. It would be beneficial if the author could provide further data pertaining to these evaluation metrics to enrich the analysis.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Figure 3, the peak cosine similarity of the generated adversarial perturbation is observed between the 20th and 30th iterations. Will increasing the number of iterative steps for generating the adversarial perturbation enhance transferability?
2. Model ensemble is an effective method to enhance the adversarial attacks’ transferability. Will the ensemble of different SAMs benefit the UMI-GRAT?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The experimental results are all based on SAMs and their downstream models. It would provide more valuable insights to expand the scope of the analysis and assess whether this adversarial threat also applies to other large foundation models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer aXPE,
Thank you so much for taking the time to read this paper and giving constructive feedback. Please find our response below.
> Give a more comprehensive analysis of the UMI noise.
Following your valuable suggestion, we conducted an evaluation to analyze the impact of the natural dataset size $N$ and the total meta iteration times $T_m$ on the UMI using the COD10K dataset. We presented the results in the following table.
| Hyperparameters | $S_\alpha\downarrow$ | MAE$\uparrow$ |
| :---------------: | :------------------: | :-----------: |
| $T_m=7,N=20,000$ | 36.11 | **24.74** |
| $T_m=7,N=200$ | 37.54 | 20.83 |
| $T_m=7,N=5,000$ | 36.81 | 23.23 |
| $T_m=3,N=20,000$ | 36.43 | 24.30 |
| $T_m=5,N=20,000$ | **36.01** | 24.58 |
| $T_m=11,N=20,000$ | 36.59 | 24.59 |
The results indicate that the dataset size $N$ has a significant impact on the UMI: a smaller dataset introduces substantial bias into the generated UMI, thereby degrading its performance. The effect of $T_m$ becomes minor once it is larger than 5.
> Provide further data pertaining to $E_\Phi $ and $F_\omega ^\beta $.
We presented the performance of $E_\Phi $ and $F_\omega ^\beta $ on the COD10K dataset under five different attacks in the following table:
| Attack strategy | $E_\Phi\downarrow$ | $F_\omega ^\beta\downarrow$ |
| :-------------: | :----------------: | :-------------------------: |
| Without attack | 0.918 | 0.801 |
| MI-FGSM | 0.569 | 0.019 |
| PGN | 0.537 | 0.021 |
| BSR | 0.513 | 0.058 |
| ILPD | 0.502 | **0.017** |
| UMI-GRAT | **0.478** | 0.019 |
The results show that our method achieves the best attack performance in terms of the E-measure ($E_\Phi$) and the second-best performance on the weighted F-measure ($F_\omega ^\beta$). We will add these metrics to the tables of our revised paper for better clarity.
> Will increasing the iterative step of generating adversarial perturbation enhance the transferability?
We conducted comparative experiments under 5 different attacks with iteration times $T_a=20$ and the results are reported in **Table R2** of our submitted PDF file.
The results indicate that increasing the number of attack iterations **greatly enhances** the adversarial examples when attacking the camouflaged SAM, where the **domain gap from natural images is minor**. However, for tasks characterized by a substantial domain gap, increasing the iterations brings no performance gain.
> Will the ensemble of different SAMs benefit the UMI-GRAT?
Following your valuable suggestion, we explored enhancing adversarial attacks by the ensemble under two types of scenarios:
1. The transferability from pre-trained SAM to Medical-SAM.
2. The transferability from the pre-trained SAM ViT-B to SAM ViT-H.
We adhered to the experimental settings outlined in Section 6.1 and employed an ensemble of **SAM ViT-B and SAM ViT-L**. We reported the mDSC for Medical SAM and mIOU for the original SAM. The results are presented in the following tables:
| | Medical-SAM(mDSC$\downarrow$ ) | SAM ViT-H(mIOU$\downarrow$ ) |
| :-----------------: | :----------------------------: | :--------------------------: |
| UMI-GRAT | 5.22 | 15.41 |
| UMI-GRAT + Ensemble | 13.54 | 12.09 |
The results indicate that the ensemble yields performance gains on the general transfer task while degrading performance on the pre-trained-to-fine-tuned transfer task. Since the fine-tuned model inherits information solely from its pre-trained counterpart, incorporating uncorrelated gradient information introduces unnecessary deviation and thus degrades performance.
> Expanding the scope of analysis to assess whether this adversarial threat also applies to other large foundation models.
Following your good suggestion, we evaluated the effectiveness of our proposed UMI-GRAT on two new scenarios:
1. **The transferability of UMI-GRAT to other pre-training methods.** We attacked the **MAE ViT-S and ViT-B that are fine-tuned on Chexpert [2]** using solely **the pre-trained MAE ViT-S** and report the results in **Table R3** of our submitted PDF file.
2. **The effectiveness of UMI-GRAT in the general transfer attack setting.** We attacked the **MAE ViT-B and DenseNet-121 fine-tuned on Chexpert [2]** using an **MAE ViT-S fine-tuned** on the same dataset and report the results in **Table R4**.
We utilized models provided by [1] and followed the same experimental setting. We evaluate 8 attack methods on the Chexpert [2] dataset, where the model needs to diagnose five predefined diseases in the chest X-ray images. We reported the mean Area Under the Curve (mAUC) to evaluate the performance.
The results in **Table R3** show that the proposed **UMI-GRAT remains the best method** for attacking both the fine-tuned **MAE ViT-S and MAE ViT-B** models. In **Table R4**, we evaluated the transferability between different ViTs and from a ViT to a CNN. The experimental results indicate that our UMI-GRAT remains effective on general transfer tasks, demonstrating its good generalizability.
------
We do appreciate your constructive feedback. We will add those experiments and analyses mentioned above in the appendix of our revised version.
[1] Delving into masked autoencoders for multi-label thorax disease classification. In WACV 2023.
[2] Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In AAAI 2019.
---
Rebuttal Comment 1.1:
Title: Response to the Authors
Comment: After carefully reviewing the comments from the other reviewers and the author's rebuttal, all of my concerns have been adequately addressed. Therefore, I decide to raise my score.
---
Reply to Comment 1.1.1:
Title: Response to reviewer aXPE
Comment: Dear Reviewer aXPE,
Thank you so much for your positive and constructive feedback, which is very helpful and makes our paper stronger!
We are glad that our responses address your concern. We are always available and eager to address any additional questions you might have during our discussion.
Best regards,
The Authors | Summary: In this paper, the authors present a new approach for adversarial attacks on Segment Anything Model (SAM)-based downstream models, addressing the challenge of attacking without prior knowledge of the downstream task or data distribution. Their key contribution is a universal meta initialization-based algorithm that exposes inherent vulnerabilities in the foundation model. The authors also introduce a gradient robust loss, which simulates uncertainty through gradient-based noise augmentation. This loss is derived from a theoretical formulation of adversarial update deviation between the open-sourced SAM and its fine-tuned downstream models. The authors provide an analytical demonstration of how their proposed method enhances attack transferability. The effectiveness of their approach is thoroughly validated through comprehensive experiments.
Strengths: Originality: This is the first work to explore the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM). The introduction of a universal meta initialization-based algorithm to uncover intrinsic vulnerabilities in foundation models is both effective and efficient. Additionally, the formulation of adversarial update deviation and the proposal of a gradient robust loss that simulates uncertainty with gradient-based noise augmentation further enhance the transferability of adversarial examples.
Quality and Clarity: The writing is generally clear but has room for improvement. The methodology and results are well-structured, though some technical sections could benefit from additional clarification.
Significance: This work is highly significant given the increasing prevalence of foundation models like SAM. The proposed methods for enhancing attack transferability have important implications for AI system security and could influence future directions in both offensive and defensive strategies in adversarial machine learning for SAM.
Weaknesses: 1 - My major concern is related to the novelty of the proposed approach. Although I agree that this is the first work in the context of SAMs, the main components, such as downstream agnostic adversarial examples and meta learning-based fast initialization, have already been proposed in the literature.
2 - The authors, in line 45, briefly highlight downstream agnostic examples in just one line. They should clarify in the related work section how their work is different from references 55 and 56 of the main paper, beyond just applying it to SAM. Similarly, another related work that the authors missed is [1] (given below), in which the generated adversarial examples are agnostic to downstream tasks.
3 - Similarly, the authors did not mention any work related to meta-learning-based adversarial examples in the paper. There are multiple works that use meta-learning to craft universal adversarial examples, such as [1, 2] below. The authors use these meta-learning-based methods for initialization of adversarial examples, but this has already been explored in [3] below. The authors should mention these meta-learning-based approaches in their paper and discuss how their method is different from these approaches, beyond just the application to SAMs.
4 - It is not clear to me when the authors claim in line 8 that they are attacking "without accessing the downstream task." What is the task here? Is it not the segmentation task? In [1], their task-agnostic adversarial examples are effective against classification, detection, and segmentation. Since the downstream task here is segmentation-based, is it not obvious what the task is? Please clarify this.
5 - The authors should include some specific aspects of SAM to make their attack more unique. Currently, they are utilizing the SAM image encoder, which, in my opinion, is not much different from the previous works listed below.
6 - For experiments, why have the authors compared their method with intermediate-level feature-based approaches? They should also compare it with different downstream agnostic adversarial approaches as listed below.
7 - In Equation 8, how did the authors choose the threshold lambda?
[1] A Self-supervised Approach for Adversarial Robustness (CVPR 2020)
[2] Learning to Generate Image Source-Agnostic Universal Adversarial Perturbations (IJCAI 2022)
[3] Meta Adversarial Perturbations (AAAI 2022 Workshop)
[4] Adversarial Initialization with Universal Adversarial Perturbation: A New Approach to Fast Adversarial Training
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weakness section. While the paper presents an approach to attacking SAM-based downstream models, it largely combines existing methods rather than introducing new techniques. The current strategy, though effective, does not fully exploit SAM's unique architecture.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer iuQV
Thank you so much for taking the time to read this paper and giving constructive feedback. Please find our response below.
> Q1, Q2, and Q3: The novelty of the proposed method. The authors should discuss how their method is different from meta-learning-based approaches [2,3,4]. Clarify how their work is different from references 55 and 56. Another work[1] is missing.
Although using the UAP as an initialization is discussed in [2,3,4] and downstream-agnostic attacks are discussed in [1, 55, 56], the core methodological design of our UMI differs significantly from previous work. Moreover, the proposed GRAT for mitigating gradient misalignment is unique, novel, and effective. The main differences are:
1. **Compared with meta-learning approaches:** [3] utilizes the UAP to generate AEs with a **one-step update**, and [4] uses UAP for **adversarial training**. [2] enhances the existing UAP under the **few-shot learning** scenarios: learning a UAP with a few examples to attack the same victim model. Different from previous work, we first utilize the UMI to extract the intrinsic vulnerability **inherent in the pre-trained foundation model** and utilize it as the prior knowledge to enhance the **attack on fine-tuned downstream models**.
2. **Compared with model-agnostic approaches:** [1] first proposed a representation-based adversarial attack to enhance the downstream-agnostic adversarial training. [55] proposed the first framework for generating downstream-agnostic UAP on self-learning models, and [56] extends this attack to multimodality. However, those methods do not consider the utilization of **intrinsic vulnerability in the pre-trained model and the gradient misalignment brought by fine-tuning**, and thus do not work well when the downstream dataset **exhibits a distinctive domain gap** from the pre-trained dataset (e.g., from a natural dataset to a medical dataset).
The novelty of our work is two-fold:
1. **Exploitation of intrinsic vulnerability via UMI**: we first utilize the UMI to extract the **intrinsic vulnerability inherent in the pre-trained foundation model** and utilize it to enhance the attack on fine-tuned downstream models.
2. **Rectification of gradient misalignment via GRAT**: Inspired by our **Proposition 1, which formulates the deviation** that occurs when attacking the unknown fine-tuned model, we propose **GRAT to effectively mitigate this misalignment**. The experimental results in Figure 4 and Table 2 demonstrate that **when the fine-tuned model has a pronounced gradient update, the proposed GRAT greatly rectifies the deviation and brings a great performance gain**.
We will discuss and cite all the mentioned work above in our revised paper.
> Q2 and Q6: Why compare with intermediate-level feature-based approaches? They should also compare it with different downstream agnostic approaches.
**The reason for comparing with ILPD**: as shown in Equation (7) of the submitted paper, our attack aims to **maximize the feature distance**. We thus discuss and compare our method with the SOTA feature-based attack.
**Comparison with [1]**: according to Section 3.1 and Equation (4) in [1], the attack of [1] directly perturbs the input to **maximize feature distortion via MI-FGSM**. According to Equation (7) of the submitted paper, all methods compared in Table 1 aim to **maximize the feature distortion of SAM's image encoder**. This means **the compared MI-FGSM in Table 1 is totally the same as the attack algorithm proposed in [1]**. We will add more explanations in our revised version to dispel this confusion.
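To illustrate the shared feature-distortion objective (momentum sign ascent that maximizes the distance between perturbed and clean features), here is a toy sketch; the linear stand-in "encoder" `W`, the analytic gradient, and all hyperparameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mi_fgsm_feature_attack(x0, W, eps=10/255, alpha=2/255, steps=5, mu=1.0, seed=0):
    """Momentum sign ascent that *maximizes* the feature distance ||W x - W x0||^2.

    A linear map W stands in for the image encoder so the gradient is analytic;
    this is only a toy illustration of the feature-distortion objective.
    """
    rng = np.random.default_rng(seed)
    x = x0 + rng.uniform(-eps, eps, size=x0.shape)        # random start inside the eps-ball
    g = np.zeros_like(x0)
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ (x - x0))                 # d/dx ||W(x - x0)||^2
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum accumulation
        x = x0 + np.clip(x + alpha * np.sign(g) - x0, -eps, eps)  # ascent + projection
    return x
```

With `mu=1.0` the L1-normalized gradients accumulate across steps, which is the momentum stabilization that characterizes MI-FGSM-style updates.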
**Comparison with [55, 56]**: as [55, 56] focus on **universal adversarial perturbation**, directly comparing them with **the input-specific attacks** is somewhat unfair to them. We evaluated the Adv-PER [55] in attacking **normally trained and adversarial-trained Medical SAMs** and reported the experimental results in **Table R5** of our submitted PDF file, indicating that the Adv-PER does not work well when the downstream dataset exhibits a significant domain gap from the pre-trained dataset.
> Q4. Clarify the claim "without accessing the downstream task".
Our attack perturbs the feature encoder of SAM, which, by fine-tuning the decoder, can transfer zero-shot and few-shot to various **downstream tasks, such as edge prediction**. **Without accessing the task, the distortion in the feature encoder** will adversely affect the performance across different types of decoders. To substantiate this and demonstrate the generalization, we conducted a new experiment on attacking the MAE model fine-tuned on Chexpert [5], a **task for diagnosing chest X-ray diseases**. We reported the results in **Tables R3 and R4** of our submitted PDF, demonstrating the effectiveness and generalization of our method across different downstream tasks.
> Q5. The authors should include some specific aspects of SAM to make their attack more unique.
As stated in lines 548-549 of our submitted paper, our proposed UMI-GRAT is not contingent upon prior knowledge of the model's architecture, suggesting its potential applicability across various model paradigms. The experiments on attacking the fine-tuned MAE ViTs and CNNs in Tables R3 and R4 demonstrate its good generalization. Following your suggestion, we will clarify this in our revised version.
> Q7. How to choose the threshold lambda in Equation 9?
Thanks for your good suggestion, we will add the discussion below in our revised paper:
"We initialize the $\lambda$ with a value of 0.05, which is easily satisfied by most inputs, at the first epoch and increase it by a factor of 2 if at least 50% of the inputs meet this threshold."
---
We greatly appreciate your constructive feedback and will add all the experiments with analyses and cite all mentioned papers in our revised version.
[5] Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In AAAI 2019.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed most of my concerns. While this is the first work in the context of SAM, components like meta-learning-based adversarial examples already exist in the literature and should be properly credited. Additionally, related work [1] should be properly cited. I have raised the score and hope the authors will open-source their code for reproducibility.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer iuQV
Comment: Thank you for your positive feedback. We are glad that our responses address your concerns. In our revised version, we will properly cite all the mentioned papers [1,2,3,4]. All codes will be open-sourced for reproducibility.
We are always available and eager to address any further questions you may have during our discussion. | Summary: This paper proposes an adversarial attack against fine-tuned derivatives of a publicly available foundation model, such as the Segment Anything Model (SAM). In the proposed threat model, attackers can potentially manipulate these downstream models even without knowing the specific task or data they are used for. Under this threat model, the paper proposes a new attack method called UMI-GRAT (Universal Meta-initialized and Gradient Robust Adversarial Attack). Through a bi-level optimization procedure, this method leverages the information from the open-source SAM to create adversarial examples that can mislead the original SAM and its fine-tuned versions. Finally, this paper demonstrates the effectiveness of the proposed UMI-GRAT attack against SAM through extensive experiments.
Strengths: 1. The paper is motivated by real-world safety concerns for fine-tuning a public foundation model on private domain-specific datasets.
2. The figures and tables are well-polished and generally reflect the overall message of the paper.
3. The proposed UMI-GRAT attack method is unique and backed by theoretical analysis.
Weaknesses: 1. The effectiveness of the proposed attack is only demonstrated by attacking the SAM model. However, more experiment settings (e.g. against pretrained MAE models) are warranted to demonstrate the generalizability of the proposed attack.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the domain gap between the natural image dataset used to obtain the universal adversarial trigger and the downstream dataset influence the effectiveness of the attack?
2. How effective is the proposed method against adaptive defense? For example, if the downstream victim model has gone through adversarial training, how effective would the adversarial trigger obtained on the unguarded pretrained SAM be against the guarded victim model?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors acknowledge the limitations of this work in the appendix. They candidly acknowledge the limitations in evaluations as the proposed attack is only evaluated against SAM. I appreciate the authors openly acknowledging this limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Q28B,
Thank you so much for taking the time to read this paper and giving constructive feedback. Please find our response below.
> 1. More experiment settings (e.g. against pretrained MAE models) are warranted to demonstrate the generalizability.
Following your good suggestion, we evaluated the effectiveness of our proposed UMI-GRAT on two new scenarios:
1. **The transferability of UMI-GRAT to other pre-trained methods.** We attacked the **MAE ViT-S and ViT-B that are fine-tuned on Chexpert [2]** using solely **the pre-trained MAE ViT-S** and reported the results in **Table R3** of our submitted PDF file.
2. **The effectiveness of UMI-GRAT in the general transfer attack setting.** We attacked the **MAE ViT-B and DenseNet-121** **fine-tuned on Chexpert [2]** using an **MAE ViT-S fine-tuned** on the same dataset and reported the results in **Table R4**.
We utilized models provided by [1] and followed the same experimental setting. In both scenarios, we evaluate 8 attack methods on the Chexpert [2] dataset, where the model needs to diagnose five predefined diseases in chest X-ray images. Following [1], we report the mean Area Under the Curve (mAUC) to evaluate the performance. Each experiment is run 5 times.
The results in **Table R3** show that the proposed **UMI-GRAT remains the best method** for attacking both the **MAE-ViT-S and MAE-ViT-B** models fine-tuned on the downstream data. Moreover, combining UMI-GRAT with the second-best method BSR [3] brings a further enhancement to the performance.
In **Table R4**, we evaluated the transferability between different ViTs and the transferability from ViTs to CNNs. The experimental results indicate that our UMI-GRAT maintains effectiveness on general transfer tasks (UMI-GRAT achieves **the second-best** attack performance compared to other SOTA methods), demonstrating its good generalizability.
> 2. How does the domain gap between the natural image dataset used to obtain the universal adversarial trigger and the downstream dataset influence the effectiveness of the attack?
The efficacy of the universal adversarial trigger increases as the domain gap between the downstream dataset and the natural image dataset narrows. As mentioned in lines 334-340 of the submitted paper, the UMI extracts the intrinsic vulnerability inherent in the foundation model, thus being more effective when the victim model inherits significant information from the pre-trained model. The following table presents the performance of the SAM before and after being fine-tuned on downstream tasks:
| | Medical SAM (mDSC) | Camouflaged SAM (MAE) |
| -------------------------------- | :----------------: | :-------------------: |
| Performance pre/post-fine-tuning | 1.39/81.88 | 0.050/0.025 |
| Attack performance gain by UMI | 3.29 | **2.98** |
| Attack performance gain by GRAT | **34.49** | 0.37 |
The results show that fine-tuning brings a significant performance gain for Medical SAM but a minor performance gain for Camouflaged SAM, indicating that the domain gap between the medical and natural datasets is huge and the model is accompanied by a pronounced gradient update, whereas the domain gap between the camouflaged and natural datasets is small, so the gradient modification following fine-tuning is minor. The performance gains shown in the remaining rows demonstrate that the UMI derives a greater advantage than the GRAT when the domain gap is small, and GRAT displays the contrary trend, which aligns with our analysis in Section 4.
> 3. How effective is the proposed method against adaptive defense?
Following your constructive feedback, we conduct comparative experiments on attacking the **adversarial trained Medical SAM**. As our attacks aim to maximize the feature distance, we thereby consider two types of adversarial training (AT) mechanisms:
1. **Feature-wise AT**: the defender is **aware of feature-wise attacks**, thus assuming an attacker whose objective is to **maximize the distance of feature embedding** during AT, and hence the defender minimizes the adversarial loss.
2. **Output-wise AT**: the defender is **unaware of feature-wise attacks**, thus assuming an attacker whose objective is to **maximize the final segmentation loss** during AT, and hence the defender minimizes the adversarial segmentation loss.
We optimize the Medical SAM by incorporating the adversarial loss into the training loss with a weight hyperparameter $\tau$. We evaluate $\tau=0.1, 0.5$ and use MI-FGSM with iterations $T_a = 1, 5$ and bound $\epsilon=10$ as the attacking strategy.
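As a small illustration of this $\tau$-weighted objective (the function name and the mode flag distinguishing the two AT variants are our own simplification, not the paper's code):

```python
def adversarial_training_loss(clean_loss, adv_feature_loss, adv_seg_loss,
                              tau=0.1, mode="feature"):
    """Total AT objective: clean loss plus a tau-weighted adversarial term.

    mode="feature": defender anticipates feature-distance-maximizing attacks.
    mode="output":  defender anticipates segmentation-loss-maximizing attacks.
    """
    adv = adv_feature_loss if mode == "feature" else adv_seg_loss
    return clean_loss + tau * adv

# e.g. feature-wise AT with tau = 0.5 on toy loss values
total = adversarial_training_loss(1.0, 2.0, 3.0, tau=0.5, mode="feature")  # 2.0
```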
We ran 9 different attacks on 6 different AT models on CT-Scan and presented the results in **Table R5**. The results demonstrate that:
1. The proposed UMI-GRAT remains the most effective method even if the model has a certain robustness via AT, demonstrating the effectiveness of our proposed UMI-GRAT.
2. Feature-wise AT surpasses output-wise adversarial training for those feature maximizing attacks in most scenarios.
3. The robustness of AT varies drastically with different training hyperparameters. Enlarging the weight $\tau$ benefits the robustness. An intriguing finding is that using a stronger gradient attack during AT (e.g., increasing MI-FGSM from $T_a = 1$ to $5$) may damage the robustness towards **adversarial examples generated from the pre-trained model**.
---
We greatly appreciate your constructive feedback. We will add those experiments and analyses mentioned above in the appendix of our revised version.
[1] Delving into masked autoencoders for multi-label thorax disease classification. In WACV 2023.
[2] Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In AAAI 2019.
---
Rebuttal Comment 1.1:
Title: Thanks for Your Response & Missing PDF
Comment: Thank you for your detailed responses and follow-up experiments! However, I couldn't find the PDF file you submitted. If you could provide me with a pointer to the revised PDF, I will make sure to go over it in the upcoming days. Thank you!
---
Rebuttal 2:
Title: The missing PDF
Comment: Dear Reviewer,
Thank you so much for your follow-up. The PDF file is attached to the overall Author Rebuttal. However, it seems that there is a bug in the OpenReview website that makes this rebuttal invisible to reviewers. We believe that NeurIPS will address this issue soon, and you can find the submitted PDF in the Author Rebuttal section.
Best regards,
The Authors
---
Rebuttal Comment 2.1:
Title: Comments Adequately Addressed
Comment: Thank you for the clarification! I can now see your follow-up PDF. I think my comments are adequately addressed, so I raise my score to seven.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer Q28B
Comment: Dear Reviewer Q28B,
Thank you so much for your positive and constructive feedback, which is very helpful and makes our paper stronger!
We are glad that our responses address your concern. We are always available and eager to address any additional questions you might have during our discussion.
Best regards,
The Authors | Summary: This paper investigates the vulnerability of Segment Anything Model (SAM) and its downstream models to transferable adversarial attacks. The authors propose a novel attack method called Universal Meta-Initialized and Gradient Robust Adversarial attack (UMI-GRAT) that leverages the open-sourced SAM to generate adversarial examples effective against fine-tuned downstream models, even without access to the downstream task or dataset.
Strengths: 1. The paper tackles a practical and challenging problem of attacking downstream models fine-tuned from a publicly available foundation model without knowledge of the downstream task or data.
2. The proposed UMI-GRAT method is well-motivated and technically sound. The authors provide theoretical insights into the gradient deviation problem and propose a robust solution using gradient noise augmentation.
3. The paper presents extensive experiments demonstrating the effectiveness of UMI-GRAT in attacking SAM and its downstream models
Weaknesses: See Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the performance of UMI-GRAT vary with different choices of hyperparameters, such as the perturbation bound ε and the number of iterations in UMI and LGR. Especially, line 278 mentions that the perturbation bound is 10, which is a bit too large.
2. According to the latest benchmark **[R1]**, baselines used in the paper are not SOTA methods. How does it compare with NCS **[R2]**, ANDA **[R3]**, DeCowA **[R4]** and L2T **[R5]**?
3. Is the proposed method a universal transfer attack method? Although this question is mentioned on line 548, can the performance of UMI-GRAT and SOTA be compared under a general transfer attack test setting?
4. UMI-GRAT is a gradient-based attack method. How does the proposed method perform when the model has a certain robustness (such as adversarial training)?
---
**[R1]** Delving into Adversarial Transferability on Image Classification: A Review, Benchmark and Evaluation.
**[R2]** Enhancing Adversarial Transferability Through Neighborhood Conditional Sampling.
**[R3]** Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning. CVPR. 2024.
**[R4]** Boosting Adversarial Transferability across Model Genus by Deformation-Constrained Warping.
**[R5]** Learning to Transform Dynamically for Better Adversarial Transferability
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Qxcd,
Thank you so much for taking the time to read this paper and giving constructive feedback. Please find our response below.
> 1. How does the performance of UMI-GRAT vary with different hyperparameters, such as the bound $\epsilon$ and iterations?
Following your good suggestion, we conducted experiments using two additional sets of hyperparameters: bound $\epsilon=4$ with iteration $T_a=10$ and bound $\epsilon=10$ with iteration $T_a=20$. We evaluated 5 different attacks and reported the experimental results in **Table R2** of our submitted PDF file. The results indicate that:
1. In datasets that exhibit a large domain gap from natural image datasets (e.g., medical and shadow datasets), the attack bound $\epsilon$ is critical for transferability. Reducing the bound leads to a substantial performance decline for all attack algorithms. For the camouflaged object segmentation that shares a small domain gap with the original SAM's task, reducing the norm bound only causes a marginal performance drop.
2. Increasing the attack iterations markedly enhances the AEs when the surrogate and victim domains are proximate. However, for tasks characterized by a substantial domain gap, increasing the iterations helps little.
> 2. Is the proposed method a universal transfer attack method? Can the UMI-GRAT and SOTA be compared under a general transfer attack test setting?
Following your good suggestion, we evaluated the effectiveness of our proposed UMI-GRAT on two new scenarios:
1. **The transferability of UMI-GRAT to other pre-trained methods.** We attacked the **MAE ViT-S and ViT-B that are fine-tuned on Chexpert [2]** using solely **the pre-trained MAE ViT-S** and reported the results in **Table R3** of our submitted PDF file.
2. **The effectiveness of UMI-GRAT in the general transfer attack setting.** We attacked the **MAE ViT-B and DenseNet-121** **fine-tuned on Chexpert [2]** using an **MAE ViT-S fine-tuned** on the same dataset and reported the results in **Table R4**.
We utilized models provided by [1] and followed the same experimental setting. In both scenarios, we evaluate 8 attack methods on the Chexpert [2] dataset, where the model needs to diagnose five predefined diseases in the chest X-ray images. We reported the mean Area Under the Curve (mAUC) to evaluate the performance.
The results in **Tables R3 and R4** demonstrate the great generalizability of our proposed method. **Table R3** shows that the proposed **UMI-GRAT remains the best method** for attacking both the fine-tuned **MAE-ViT-S and MAE-ViT-B** models. In **Table R4**, the evaluation of the transferability between different ViTs and from ViTs to CNNs indicates that our UMI-GRAT maintains effectiveness (**the second-best method**) on general transfer tasks.
> 3. How does it compare with NCS **[R2]**, ANDA **[R3]**, DeCowA **[R4]** and L2T **[R5]**
Following your good suggestion, we evaluated the two currently open-sourced methods, **ANDA [R3] and L2T [R5]**. We conducted experiments considering the above two scenarios along with attacking the **normally trained and adversarially trained Medical SAM**. We set n_ens=5 for ANDA and num_scale=3 for L2T and keep the rest of the hyperparameters the same. We reported the experimental results in **Tables R3, R4, and R5** of our submitted PDF file.
The results show that L2T performs well in both transfer tasks on MAE-based models while failing to attack the SAM and the adversarially trained SAM. ANDA performs well on the general transfer tasks while failing on all pre-trained-to-fine-tuned transfer tasks. We hypothesize the reasons for their failures are:
1. Due to the great misalignment in the gradient discussed in Definition 1 and Proposition 1, the transfer between pre-trained and fine-tuned models is much harder than the general one.
2. The feature-oriented attack objective may potentially undermine those output-oriented attack methods.
> 4. How does the proposed method perform when the model has a certain robustness (such as adversarial training)?
Following your constructive feedback, we conducted comparative experiments on attacking the **adversarial trained Medical SAM**. As our attacks aim to maximize the feature distance, we thereby consider two types of adversarial training (AT) mechanisms:
1. **Feature-wise AT**: the defender is **aware of feature-wise attacks**, thus assuming an attacker to **maximize the distance of feature embedding** during AT, and the defender thereby minimizes the adversarial loss.
2. **Output-wise AT**: the defender is **unaware of feature-wise attacks**, thus assuming an attacker to **maximize the final segmentation loss** during AT, and the defender thereby minimizes the adversarial segmentation loss.
We optimize the Medical SAM by incorporating the adversarial loss into the training loss with a weight factor $\tau$. We used $\tau=0.1, 0.5$ and took MI-FGSM with iterations $T_a = 1, 5$ and the same bound $\epsilon=10$ as the attacking strategy.
We ran **9 different attacks on 6 different AT models** and presented the results in **Table R5**, demonstrating that:
1. The proposed UMI-GRAT remains the most effective method even if the model has a certain robustness via AT, demonstrating the effectiveness of our proposed method.
2. Feature-wise AT surpasses output-wise AT towards those feature maximizing attacks in most scenarios.
3. Enlarging the weight $\tau$ benefits the robustness. An intriguing finding is that using a stronger gradient attack during AT (e.g., 5-step MI-FGSM) may damage the robustness.
---
We greatly appreciate your constructive feedback and will add all those experiments with analyses and cite all mentioned papers in our revised version.
[1] Delving into masked autoencoders for multi-label thorax disease classification. In WACV 2023.
[2] Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In AAAI 2019. | Rebuttal 1:
Rebuttal: We express our sincere appreciation to all the reviewers for their elaborate and constructive feedback. We summarize our rebuttal as follows:
1. As suggested by reviewer **pDnz**, we conducted the **randomness experiment** and presented the experimental results in **Table R1** of the PDF document attached below.
2. As suggested by reviewer **Qxcd and aXPE**, we discussed the effect of different hyperparameters for different attack algorithms and presented the experimental results in **Table R2** of the PDF document attached below.
3. As suggested by reviewer **Qxcd**, we compared our method with **ANDA and L2T** and presented the experimental results in **Table R3, R4 and R5** of the PDF document attached below.
4. As suggested by reviewers **Qxcd, Q28B, and aXPE**, we evaluated our proposed method on **MAE pre-trained model and on a general transfer adversarial attack task**. The results are presented in **Tables R3 and R4** of the PDF document attached below.
5. As suggested by reviewers **Qxcd and Q28B**, we evaluated our proposed method on the **adversarial-trained Medical SAM** and reported the results in **Table R5** of the PDF document attached below.
6. As suggested by reviewer **iuQV**, we evaluated the mentioned **model-agnostic method** and presented the results in **Table R5** of the PDF document attached below.
Pdf: /pdf/594462e8c941de50a9e3e5ddc3fd51a30361f62d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this paper, the authors propose an adversarial attack method that can contaminate downstream tasks from the perspective of adversarial transferability. They address the problem that SAM models do not share similar optimisation routes after fine-tuning for different downstream tasks by designing universal meta-initialization (UMI) noise. The authors introduce the idea of meta-learning to allow their algorithm to quickly adapt to different situations, i.e., downstream tasks.
Strengths: 1. The theoretical part of this paper is detailed and the experiments are sufficient. The comparison with other methods shows the sophistication of their approach.
2. The attacks proposed in this paper are novel. It contributes to the topic of attacking downstream tasks of large models. A discussion on adversarial transferability is introduced under this topic.
Weaknesses: 1. The readability of the Methodology section of this article is somewhat poor. The authors define the problem to be solved through the form of propositions. If the authors could summarise the formulas as a Theorem and put the proof process (both formulas and reasoning) in the supplementary material, it would make the article more coherent.
2. The randomness of the experimental results is unknown. I understand that due to the larger computational effort, it is not practical to report error lines on all major experiments. But it would be better for the authors to report a set of randomness on a smaller dataset and simpler settings, which will influence the reviewers' opinion of the results of this method.
Technical Quality: 3
Clarity: 2
Questions for Authors: Two questions listed, see Weaknesses for details. Note that if the authors can demonstrate the randomness of their algorithms, that will help to get a higher rating.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors correctly list the implications of their work for the use of large models such as SAM in downstream tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer pDnz,
Thank you so much for taking the time to read this paper and giving constructive feedback. Please find our response below.
> 1.The readability of the Methodology section of this article is somewhat poor. The authors define the problem to be solved through the form of propositions. Similarly, if the authors could summarise the formulas as Theorem and put the proof process (both formulas and reasoning) specifically in the supplementary material, it would make the article more coherent.
Thanks for your constructive feedback. We will consolidate the Proposition and Equations (15) through (18) into a Theorem and put the remaining formulas and explanatory text in the appendix.
> The randomness of the experimental results is unknown. It would be better to report a set of randomness experiments on a smaller dataset and simpler settings.
Following your good suggestion, we evaluated **10** attack methods presented in our paper over **5** random seed runs on the subset of SAM's downstream tasks and reported the mean performance with its standard deviation. We use the same experimental setting provided in Section 6.1 of our submitted paper.
The details of each subset are:
1. For the Medical SAM, we selected 'case0008' (comprising 148 images with a resolution of 512x512) from the validation set of the CT-Scan dataset.
2. For the Shadow SAM, we randomly selected 100 images with a resolution of 1024x1024 from the test set of the ISTD dataset.
3. For the Camouflaged SAM that is evaluated across three different datasets, we randomly selected 40 images with a resolution of 1024x1024 from each dataset (COD10k, CAMO, CHAME), with a total of 120 images.
The results are shown in **Table R1** of our submitted **PDF file**, which demonstrates that the randomness is small and similar among all attacking methods. The uncertainty level of UMI-GRAT in mean Hausdorff Distance (mHD) is marginally higher compared to the other methods. This can account for the higher mHD value achieved by the UMI-GRAT.
We greatly appreciate your constructive feedback. We will conduct the randomness evaluation over the entire dataset and update the results in Table 1 and Table 2 of our paper in the revised version.
---
Rebuttal Comment 1.1:
Title: Response to the Authors
Comment: It looks like the randomness of most of the data is controlled. The authors also promised to fix the mentioned errors in the next draft. Therefore I decided to raise my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer pDnz
Comment: Dear Reviewer pDnz,
Thank you so much for your positive and constructive feedback, which is very helpful and makes our paper stronger!
We are glad that our responses address your concern. We are always available and eager to address any additional questions you might have during our discussion.
Best regards,
The Authors | null | null | null | null | null | null |
Stepping Forward on the Last Mile | Accept (poster) | Summary: **Context**. The focus of the present paper is on-device fine-tuning (gradient computation and weight update **starting from a pre-trained model**) under a limited memory budget. One way to cut the memory cost of storing the computational graph for gradient computation by standard backprop is the Memory Efficient Zeroth Order (MeZO) optimizer [Malladi et al, 2023], whereby a directional gradient is computed via weight perturbation (a.k.a. SPSA): the difference in loss $L$ between two forward passes with weights differing by $\epsilon u$ estimates $\nabla L \cdot u$. Since it is a purely forward procedure, it obviates the need to cache activations to execute a backward pass.
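For readers less familiar with MeZO, the two-forward-pass estimate described above can be sketched in a few lines (a toy illustration with a made-up quadratic loss, not the authors' implementation):

```python
import numpy as np

def spsa_grad(loss_fn, w, eps=1e-3, seed=0):
    """Estimate the directional gradient (grad L . u) u with two forward passes.

    Only the perturbation seed and two scalar losses need to be kept,
    so no activations are cached for a backward pass.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(w.shape)             # random direction
    loss_plus = loss_fn(w + eps * u)             # forward pass 1
    loss_minus = loss_fn(w - eps * u)            # forward pass 2
    proj = (loss_plus - loss_minus) / (2 * eps)  # ~ grad L . u
    return proj * u                              # directional gradient estimate

# Toy quadratic loss L(w) = ||w||^2 / 2, whose true gradient is w itself.
w = np.array([1.0, -2.0, 3.0])
g = spsa_grad(lambda v: 0.5 * np.dot(v, v), w)
```

Averaged over many random directions $u$, this estimate is unbiased for the true gradient, which is why MeZO can trade extra forward passes for memory.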
**Core contribution**. The present paper proposes a quantized version of MeZO where weight perturbation, gradient computation and weight update are carried out on quantized quantities. The proposed algorithm, coined QZO-FF (Alg. 1), is tested against a variety of fine-tuning tasks (few-shot learning, cross-domain and in-domain adaptation), modalities (image and audio data), and architectures (convolutional, attention-based, recurrent), with several variants being explored (with fp8 / fp32 activations) and benchmarked against standard backprop. The efficiency of QZO-FF, both in terms of resulting performance and memory usage, is demonstrated.
**Paper outline**. More precisely:
- Section 2 provides background knowledge on memory-efficient backprop (2.1), forward-mode differentiation (2.2) and quantized training (2.3).
- Sections 3.1 and 3.2 further formalize "forward gradients" (3.1) and the SPSA / weight perturbation procedure to estimate them (3.2). A hardware-friendly extension of SPSA, coined "Sign-m-SPSA", which estimates $\text{sign}(\nabla L \cdot u) u$, is introduced along with the resulting SGD update (3.2).
- Section 3.3 presents the core algorithmic contribution by combining SPSA / weight perturbation and weight quantization (Alg. 1). More precisely, weights and perturbations are statically, symmetrically quantized (i.e., their ranges are estimated and set once, before fine-tuning), with one scale for each ($\Delta_w$ and $\Delta_z$). Therefore: i) $\Delta_w$ and $\Delta_z$ are fixed, with weights and perturbations quantized with 16 and 8 bits respectively, ii) the integer part of the perturbed weights is accumulated in 32 bits, iii) the dequantized perturbed weights are quantized-dequantized back into 16 bits using the same $\Delta_w$ scale (Eq. 6). The Sign-m-SPSA gradient estimator is applied and quantized-dequantized using the perturbation scale ($\Delta_z$, Eq. 7). Finally, the weight update itself is quantized, such that it happens in the quantized integer domain and is rescaled by $\Delta_w$ (Eq. 8). Alg. 1 summarizes the procedure in the case where the number of perturbed directions in use is 1 ($m=1$).
- Section 3.4 presents several algorithmic "enhancements" of the QZO-FF algorithm to improve the optimization procedure itself or its memory footprint.
- Section 4 presents experimental results. First, few-shot learning is considered (4.1) on visual and audio data. Here, "FF" is short for "QZO-FF". A quantized version of FF, where 8-bit activations are used, is also tested. On vision, three architectures are tested (ResNet12, ResNet18 and ViT tiny) on 5 different standard few-shot learning datasets. Two scenarios are considered: full fine-tuning and linear probing. It is shown overall that FF always yields better performance than the zero-shot baseline and stays within 5% accuracy of the BP baseline in 26/30 experiments, and that the ViT backbone yields the least degradation. On audio, a similar experiment is done with two architectures (CRNN, AST) on two audio datasets. In 11/16 experiments, FF accuracy is within 5% of the BP baseline. Then, a cross-domain adaptation task (4.2) is considered, where the different algorithmic enhancements previously introduced (e.g. quantized FF, gradient averaging, the "sharpness-aware" scheme...) are tested. Most importantly, it is observed that quantizing weights to 8 bits jeopardizes the FF algorithm. Finally, Section 4.3 presents in-domain OOD adaptation using the same fine-tuning schemes (LP, D-VPT) with three levels of corruption of the CIFAR-10 dataset as OOD datasets. In this setting, FF achieves comparable performance with BP.
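As a rough illustration of the static quantize-dequantize steps summarized for Section 3.3 above, here is a minimal sketch (scale values, bit-widths, and rounding conventions are invented for illustration; the paper's exact scheme may differ):

```python
import numpy as np

def quantize(x, delta, num_bits):
    """Static symmetric quantization: x is mapped to an integer q with x ~ delta * q."""
    qmax = 2 ** (num_bits - 1) - 1
    # int32 is used here only as a convenient container for the low-bit values
    return np.clip(np.round(x / delta), -qmax - 1, qmax).astype(np.int32)

def dequantize(q, delta):
    return q.astype(np.float64) * delta

# int16 weights on a fixed grid delta_w; int8 perturbation on grid delta_z
w = np.array([0.50, -1.25, 0.75])
delta_w, delta_z = 1e-4, 1e-2           # static scales, set once before fine-tuning
w_q = quantize(w, delta_w, num_bits=16)
z_q = quantize(np.array([0.3, -0.1, 0.2]), delta_z, num_bits=8)

# Perturb, then quantize-dequantize back onto the same delta_w grid
eps = 1.0                               # perturbation magnitude (hypothetical value)
w_pert = dequantize(w_q, delta_w) + eps * dequantize(z_q, delta_z)
w_pert_q = quantize(w_pert, delta_w, num_bits=16)
```

Because both scales are fixed, the perturbed weights land back on the original int16 grid without recomputing any ranges, which is the point of the static scheme.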
Strengths: - The problem tackled is highly relevant to on-device training, pragmatic and builds upon recent work [Malladi et al, 2023].
- The proposed algorithm is sound and well-explained.
- There are a lot of experimental settings, data modalities and architectures being explored.
- The proposed technique is effective in providing a learning signal, effectively training models and yielding relatively good performance compared to the BP baseline.
Weaknesses: - It is unclear what is kept in full precision in the proposed procedure (see my questions below).
- On a related note, it is also unclear that the proposed algorithm enhancements don't offset the advantages of manipulating statically quantized quantities (see my questions below).
- The experiments aren't all sufficiently well explained, either in the main text or in the appendix, which is frustrating because there is a lot of work done there and we fail to deeply understand the proposed setups. I would even say that there are almost too many different experimental setups. Under a constrained time budget for writing the paper, I would have prioritized fewer, better-detailed experiments rather than many that are insufficiently explained.
- **There aren't any error bars in any table and figures**, although the authors ticked in their checklist that they reported error bars and provided appropriate information about the statistical significance of their experiments (L. 520). For lack of this, it is very hard to draw any clear conclusion in terms of comparison between the different algorithms at use, e.g. is there a statistically significant gap here, or are these two results within error bar? We don't know.
- I don't understand what the 2D plot of the loss landscape really brings here in terms of insights.
Technical Quality: 3
Clarity: 3
Questions for Authors: - L. 135: "in order to mitigate the noisy component of forward gradients estimated by SPSA, we propose sign-m-SPSA": do you have evidence that sign-m-SPSA results in less noisy gradients?
- Section 3.3: could you please clarify what is kept in full precision? I see at least three different quantities not being quantized: i) the scales $\Delta_w$ and $\Delta_z$, ii) the loss for each perturbed weights and therefore its difference, iii) the averaged gradient (Eq. 7). Most importantly, do you confirm that you need to accumulate gradients across each direction ($i=1 \cdots m$) in higher precision (32 bits I guess?) and then quantize-dequantize it using the scale of the perturbation $\Delta_z$? Your pseudo-algorithm only treats the case $m=1$ so it remains unclear how this all work when $m>1$ and you need to average gradients. **Could you please write a new pseudo-algorithm**, alike Alg.1, **in the case $m>1$**, highlighting with **two different color codes** the quantized (int8 and int16) and full precision (fp32) quantities?
- L.193-198 (momentum-guided sampling): I think that incorporating momentum into your approach is crucial. However, I also don't understand how it works. What do you mean by "as training progresses, a history of the momentum $z$ is incorporated to guide the new sampling process"?
- L.199 (sharpness-aware perturbation): you mean an "extra step of **directional** gradient ascent"?
- L.204-210 (sparse updates): which sparsity scheme did you employ? A top-k magnitude-based scheme may be quite costly if it boils down to ranking all the weights by magnitude.
- L.211 (kernel-wise normalization): in this case we agree that $\hat{g}$ needs to be stored in full precision? Also, computing the norms of $z$ and $w$ is computationally expensive ($O(d)$, where $d$ denotes the dimension of $w$ or $z$), as expensive as it would be to dynamically recompute $\Delta_z$ and $\Delta_w$, which you avoided by statically quantizing them. Don't you lose the advantage of using static scales if you need to perform these $O(d)$ operations anyway?
- L.216 (few-shot learning): "a few labeled samples are available", but how many? Could you please clarify the experimental setup?
- L.222: "16w8a" means 16 bits for weights and 8 for activations, correct? This shouldn't be taken for granted; please clearly define this notation.
- Table 2: I would rather compute the **relative** accuracy degradation ((acc_BP - acc_qFF) / acc_BP) rather than the **absolute** accuracy degradation.
- Table 2: the accuracy degradation when employing FF in the FT setting compared to BP in the same setting is quite severe (11.08% gap), although you are using a relatively small architecture (ResNet12) with relatively small input dimensionality (32x32). Why is this the case?
- Table 2: **there are not any error bars**, which makes it hard to make any sense of a $\sim 0.2/0.5$ difference between two experiments.
- Could you please define precisely what you mean by "zero shot" (I assume no training at all?), "linear probing" (I assume only the last linear layer is learned?) and "full fine-tuning" (all parameters are learned)?
- L. 238 (audio benchmark): could you please detail the few-shot setup for this task, and the tasks themselves? It is important for people not familiar with this literature.
- L. 252: I really did not understand what "cross-domain adaptation" really is about. Could you please explain better what it is?
- L.256: what is "visual-prompt tuning with deep prompts"?
- Fig. 2: except for large discrepancies between bars, it is difficult to draw any conclusion from this figure **for lack of error bars**. Could you please add them?
- Which conclusions / insights do you really gain from plotting the 2D contours of the loss landscape in the different settings?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses and questions above.
If I had detailed description of each of the experimental setups tackled, a precise knowledge of what exactly is kept in full precision, how some of the algorithmic enhancements really work and error bars on all figures, I would be prone to increasing my score. I really want to encourage the authors to do so because I do believe that the core of this work is of interest.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful review of our paper, their interest in the core idea of our work, and valuable suggestions. The comments regarding the detailed experimental setups, precision used in the algorithm, explanations of enhancement mechanisms and notations are well taken, and the manuscript will be revised accordingly. Given the large number of experiments across multiple benchmark datasets, backbones, finetuning methods, and different modalities, we need additional time to report the error bar metrics. However, we will include error bars across as many settings as possible in the final paper. Our responses to the questions are given as follows.
**Question 1 (L. 135):**
The design of sign-m-SPSA addresses gradient noise in two key ways.
__1. Sign-based optimizers:__ Sign-based optimizers, such as sign-SGD (Bernstein et al. [2018]), have shown good practical performance for gradient compression. In our case, the magnitude of loss changes due to perturbed weights can be quite noisy. Therefore, incorporating the sign(.) operation in m-SPSA largely reduces the noise and enhances training stability.
__2. Quantization-friendly:__ Sign-m-SPSA is designed to be compatible with quantization. It constrains the range of gradient values to be the same as perturbation z for static quantization. This maintains consistency in gradient estimation within the quantized space.
**Question 2 (precision):**
We have updated the pseudo-algorithm with gradient averaging, and highlighted different precisions with color codes. Specifically,
__1. Quantization scaling factors:__ The scaling factors $\Delta_w$ and $\Delta_z$ are floating-point values, indicating the minimum representation power of the quantization space. In fixed-point engines, these scaling values are approximated using a multiplication ($\times m$) and a right-shift ($\gg k$) operation in the re-quantization stage through a post-processing hardware block, as detailed in Appendix A. For instance, $\Delta_z \approx \frac{m}{2^k}$.
__2. Loss calculation:__ The loss can be computed either in floating-point precision or in quantized space, depending on the implementation and hardware support. Only the sign of the loss difference is necessary for gradient computation.
__3. Gradient accumulation:__ When $m>1$, higher precision is used to accumulate gradients, followed by a re-quantization step. For example, if $m = 4$, we require at least 2 additional bits to store the intermediate values. Techniques such as right-shifting the gradients ($g \gg 2$) can be employed to manage memory usage while still allowing accurate accumulation. There is a trade-off between memory efficiency and accuracy.
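One possible reading of the $m>1$ accumulation described above can be sketched as follows (a hypothetical illustration with invented scales and a toy loss, not the authors' Alg. 1; the wider int32 accumulator and the right-shift averaging follow the text of this response):

```python
import numpy as np

def sign_m_spsa_quantized(loss_fn, w_q, delta_w, m=4, eps_q=1, seed=0):
    """Sketch of sign-m-SPSA in the integer domain with gradient averaging.

    w_q holds integer weights on the grid delta_w. Each direction contributes
    sign(L+ - L-) * z_q; partial sums are kept in a wider int32 accumulator,
    then averaged with a right shift when m is a power of two.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros(w_q.shape, dtype=np.int32)        # wider accumulator
    for _ in range(m):
        z_q = rng.integers(-127, 128, size=w_q.shape).astype(np.int32)  # int8-range direction
        loss_plus = loss_fn((w_q + eps_q * z_q) * delta_w)    # dequantized forward pass
        loss_minus = loss_fn((w_q - eps_q * z_q) * delta_w)
        s = int(np.sign(loss_plus - loss_minus))              # only the sign is used
        acc += s * z_q
    return acc >> 2 if m == 4 else acc // m          # averaged integer gradient

# Toy loss L(w) = ||w||^2 / 2 with hypothetical int16 weights on grid delta_w
w_q = np.array([5000, -12500, 7500], dtype=np.int32)
g_q = sign_m_spsa_quantized(lambda v: 0.5 * np.dot(v, v), w_q, delta_w=1e-4)
```

Since each direction adds at most $\pm 127$ per coordinate, four directions need 2 extra bits before the shift, matching the trade-off described above.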
**Question 3 - 6 (enhancement techniques):**
We would like to address various enhancement techniques in the common rebuttal section.
**Question 7 (few-shot learning):**
We use a 5-way 5-shot setting (5 labeled samples) for vision tasks and a 5-way-1-shot setting (1 labeled sample) for audio tasks.
**Question 8 (16w8a):**
Yes, “16w8a” denotes 16-bit weights and 8-bit activations. We will clarify the term.
**Question 9 (relative metric):**
Thanks for the valuable suggestion. We agree that relative accuracy degradation is a more informative metric for assessing the impact of the change, while the absolute values intuitively highlight the performance gap.
**Question 10 (accuracy degradation):**
The accuracy gap between BP and FF can vary based on factors such as backbone architecture, dataset and task difficulty. For instance, on the CIFAR-100 dataset, which contains only low-resolution images (32x32), FF faces some challenges. Using a stronger backbone such as ViT can help bridge this accuracy gap. This indicates that while FF may show more degradation with smaller architectures and low-resolution inputs, performance improvements can be achieved with more advanced models.
**Question 11 and 16 (error bars):**
It is generally expected that BP outperforms FF in terms of accuracy in most tasks. However, our goal is to narrow this gap while leveraging the memory benefits of FF. We acknowledge the importance of including error bars. We will include error bars across as many settings as possible in the updated manuscript.
**Question 12 (terms of training methods):**
All terms are correct. We will clearly define these terms to avoid any ambiguity.
**Question 13 (L. 238):**
We will provide more detailed task information in the audio benchmark section to ensure clarity for readers who may not be familiar with this literature.
**Question 14 (L. 252):**
1. "Cross-domain" refers to fine-tuning on tasks with data distribution significantly different from those of the pre-trained model. For example, a model pre-trained on the ImageNet might be adapted to perform tasks on the VWW dataset.
2. "In-domain", on the other hand, refers to fine-tuning a model on tasks with data distributions more closely related to the pre-training data, but with some variations. For example, a model pre-trained on CIFAR-100 might be adapted to handle corrupted versions of the same dataset.
**Question 15 (VPT-deep):**
This refers to VPT-deep, a fine-tuning method for Transformer models, as introduced by (Jia et al. [2022]).
**Question 17 (2D contours):**
Plotting the 2D contours of the loss landscape provides valuable insights into the training dynamics of different methods. These plots visualize how the loss evolves over training epochs and how the optimization paths differ.
From the contours, we observe that both FF and BP exhibit locally smooth loss landscapes, with trajectories generally following the gradient descent direction. However, compared to BP, FF tends to take more conservative steps at the beginning of training, resulting in slower convergence. Despite this, both methods converge to a local minimum after 100 epochs, indicating that FF, while slower, ultimately reaches a comparable solution to BP.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: Dear authors,
Thank you very much for answering my questions and adding the detailed pseudo-algorithm in the global rebuttal.
I am happy to increase my score to accept!
---
Reply to Comment 1.1.1:
Title: Replying to reviewer's comment
Comment: We sincerely appreciate the reviewer's recognition of the significance of our work and the increased score.
Thank you! | Summary: This paper explores the feasibility of on-device training using fixed-point forward gradients. The authors propose methods including sign-m-SPSA, Momentum Guided Sampling, Sharpness-aware Perturbation, Sparse Update, and Kernel-wise Normalization to reduce memory footprint and accuracy gaps and conduct experiments across various deep learning tasks in vision and audio domains. Key contributions of this paper include formulating forward gradients in the quantized space, demonstrating the feasibility of on-device training, and visualizing the neural loss landscape during training. The study shows that training with fixed-point forward gradients might be a practical approach for model customization on edge devices.
Strengths: ++ This paper proposes an improved method for forward gradients, called Quantized Zeroth-order Forward Gradient (QZO-FF), which enables forward gradients training using quantization.
++ QZO-FF is quantized and does not require backpropagation, thereby reducing memory overhead and eliminating the need for processors to have training capabilities. However, I doubt this, because even though forward gradients do not require backpropagation, they still need to update weights and possibly save momentum, and QZO-FF needs to perform additional quantization for $z$. Therefore, we may need some hardware adaptation to assist feed-forward training.
++ The experiments across various benchmarks show that there is only a slight degradation in accuracy while the memory cost is reduced.
Weaknesses: 1. Some results are missing in the experiment. For example, (1) the memory cost of (BP, LP, fp16) is not measured. I think the memory cost of LP is important because it seems that the reduction of memory cost mainly comes from LP instead of FF and Quant in Figure 3 and Figure 4, and I think the claim that "this number is further reduced to only 0.28MB" and "the saving increases to 8.1× when sparse update and fixed-point are enabled" in Appendix B is totally misleading and unfair. (2) The accuracy of (BP, LP, quant) is not measured so there is no baseline for (FF, LP, Quant). (3) The accuracy of (FF, FT, Quant) and (BP, FT, Quant) is not measured. (BP, FT, Quant) should be some BP fixed-point training methods like Quantization-Aware Scaling (QAS) mentioned in related work.
2. Lack of ablation studies. The effects of techniques proposed in Section 3.4 are not well-studied. (1) There is no ablation study for Section 4.1. (2) The effect of sharpness-aware and kernel-wise normalization is not measured separately in Section 4.2. (3) I want to know __which__ of these techniques work in __what__ experiment settings. I believe that, as a new algorithm with many enhancement techniques, the authors should inform the readers about which parts of the algorithm are useful under which circumstances.
3. The model size (100K - 80M) is somewhat small compared to the concept of "pretrained models". How does the proposed method perform for larger models and how does the model size affect the effectiveness of the method?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Although one can understand the meaning of these symbols after a careful reading, the notation in equation (6) is somewhat confusing because $1_q$ and $\epsilon_q$ have the same subscript but different scaling factors. I think it would be better to add a notation related to the scaling factor above them.
2. typo in line 271: extenteded
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations are discussed well by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful review of our paper, their interest in the core idea of our work, and valuable suggestions. The comments regarding more detailed explanations of techniques, experimental setups and comparisons are well taken, and the manuscript will be revised accordingly. Our responses to the questions are given as follows.
**Question 1 and 2 (notation in equation 6, typo in line 271):**
Thank you for pointing this out.
The subscript $q$ refers to the quantized value. In this context, $1_q$ and $\epsilon_q$ each have their own scaling factors ($\Delta_z$ and $\Delta_w$) and bit-widths.
We will correct the typo, and revise the notation to clearly distinguish these elements.
**Weakness 1 (some missing results):**
__1. The memory cost of (BP, LP, fp16):__
Thank you for your feedback. We understand the importance of accurately representing the memory costs associated with different training methods. The extent of memory saving with FF depends on the number of layers being fine-tuned and their positions within the network. When applied to methods such as Full Fine-tuning (FT), LoRA and other Parameter-Efficient Fine-tuning (PEFT) approaches, FF shows significant memory reduction because it eliminates the need to store intermediate activations. This is evident when comparing the memory usage of (FF, FT) vs (BP, FT). In contrast, for Linear Probing (LP), where only the last layers are updated, the memory savings are smaller. Typically, fine-tuning with LP results in lower accuracy compared to FT or LoRA.
We will revise our statements and ensure that the claims are fair and well-supported.
__2. The accuracy of (BP, LP, Quant) and (BP, FT, Quant):__
We use (BP, LP, fp16) as a higher-accuracy baseline to compare with (FF, LP, Quant). With techniques such as Quantization-Aware Scaling (QAS) reported in the literature, we expect (BP, LP, Quant) to be close to (BP, LP, fp16) in terms of accuracy. Similarly, we expect (BP, FT, Quant) to perform comparably to (BP, FT, fp16). Currently, support for Quant formats with BP is limited on most fixed-point engines.
**Weakness 2 (Ablation studies):**
__1. Ablation study for Section 4.1__
We conducted ablation studies in Section 4.2 to evaluate the effectiveness of quantized FF, the impact of bit-width variations, and different perturbation sampling strategies. This section provides a more controlled experimental setting for these analyses.
In Section 4.1, we focused on evaluating few-shot learning across a diverse set of benchmark datasets, tasks, backbones, and modalities. This broader approach aims to demonstrate the general applicability of our method across various scenarios, while the more detailed ablation studies in Section 4.2 address specific aspects of our approach.
__2. The measure of sharpness-aware and kernel-wise normalization:__
In our cross-domain adaptation setup, we observed a consistent, but only a modest increase in accuracy with the combination of sharpness-aware and kernel-wise normalization.
__3. Various techniques and experiment settings:__
Thank you for raising this important point. We would like to address this comment in the common rebuttal section, providing a detailed explanation of how each technique performs. We will include a discussion of each technique and its effectiveness across different experimental settings in the updated manuscript, to guide readers on the optimal use of these enhancements.
**Weakness 3 (QZO-FF for larger models):**
Thank you for the suggestions. We will address this comment in the common rebuttal section.
---
Rebuttal 2:
Comment: Thanks for the authors' response! Some of my concerns haven't been addressed so I keep my score.
**The memory cost of (BP, LP, fp16)**
Will the authors provide the memory cost of (BP, LP, fp16)? How will the author modify their statements?
**Ablation study for Section 4.1**
Why did the author only provide an ablation study in Section 4.2 and not include one in Section 4.1?
**QZO-FF for larger models**
The authors haven't conducted experiments on models larger than 80M so I think the concept of "pre-trained models" in the abstract (line 1) and the introduction (line 20) is overclaimed.
---
Rebuttal Comment 2.1:
Title: Replying to reviewer's comment
Comment: We sincerely appreciate the reviewer’s comment on the fair comparison of memory cost of BP and ZO-FF among various training methods. To address this, we would like to make some clarifications.
__1. Memory cost of BP and ZO-FF__
* For a network with $N$ layers, BP with FT consumes $O(N)$ memory while ZO-FF always uses $O(1)$ memory. As $N$ increases, ZO-FF benefits more in memory savings.
* In the case of LP, where only the last layers are updated, it is expected that the difference in memory usage between BP and ZO-FF will be very small. For example, in our ViT tiny network for vision tasks, the total memory usage and scratch memory usage of (BP, LP, fp16) are $11.81MB$ and $0.45MB$, respectively. These numbers are very close to those of (FF, LP).
* We will provide an extra bar in Figures 3 and 4 for the case of (BP, LP, fp16), and make it clear that the memory comparison is within the same training method. We do not compare the memory usage of (BP, FT) vs (FF, LP), since they update different numbers of layers in the network.
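The $O(N)$ vs. $O(1)$ activation-memory argument above can be made concrete with a toy calculation (the per-layer sizes below are invented for illustration):

```python
# Hypothetical per-layer activation sizes (in MB) for an N-layer network.
acts_mb = [4.0, 2.0, 2.0, 1.0, 0.5]

# BP with full fine-tuning caches every layer's activations for the
# backward pass: O(N) memory.
bp_activation_mem = sum(acts_mb)

# ZO-FF only ever holds the activations of the layer currently being
# computed, the same as inference: O(1) memory (bounded by the largest layer).
zo_activation_mem = max(acts_mb)
```

With LP, only the last layer's activations would be cached by BP anyway, which is why the gap shrinks in that setting.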
__Ablation studies:__
We chose Section 4.2 to study the impact of various factors on ZO-FF. This section has a more controlled experimental setting (ViT tiny backbone and VWW dataset) for comparing BP and ZO-FF. In our ablation studies, we keep all the settings the same and vary only one factor at a time. In Section 4.1, we would like to focus on the few-shot learning benchmark results, where broader experimental settings are used, including various backbone network architectures, datasets, and vision/audio modalities.
__ZO-FF for larger models:__
We apologize for any confusion regarding "pre-trained models". In our work, the pre-trained model refers to models that are pre-deployed on the device, for instance, an object detection model. These models may need additional adaptation or personalization over time.
The primary focus of our work is to enable such model fine-tuning on existing edge devices with fixed-point engines (e.g., NPUs, DSPs, MCUs). This hardware is primarily designed for inference and therefore typically has very limited memory and lacks support for BP. Implementing BP on such hardware requires substantial engineering effort. In this context, ZO-FF directly leverages fixed-point forward calls for gradient estimation, with the same memory cost as inference, facilitating model fine-tuning without hardware adaptation. With all the benefits of ZO-FF, we believe that it is an attractive point along the compute-memory trade-off, and especially suitable for on-device fine-tuning use cases where limited memory is the main stumbling block.
Thank you! | Summary: The authors investigate fixed-point forward gradients for quantized training. They conduct experiments across various deep learning tasks in vision and audio to assess if this method yields competitive models while conserving memory and computational resources.
They introduce algorithm enhancements to reduce memory usage and accuracy gaps compared to backpropagation, using fixed-point precision for forward gradients during training or adaptation.
Their findings demonstrate the feasibility of on-device training with fixed-point forward gradients across diverse model architectures (e.g., CNN, RNN, ViT-based) and parameter sizes (100K to 80M), offering practical solutions for model adaptation on edge devices.
The authors also visualize neural loss landscapes and training trajectories, providing insights into the dynamics of training with forward gradients for efficient on-device model adaptation.
Strengths: 1. They understand quantization and tried not to leave anything in floating point
2. Experimenting with SAM and ZO is nice
3. The paper is well written
Weaknesses: 1. Sadly, there are no experiments on LLMs, on which most fine-tuning is done today
2. Marginal novelty: generally they just added quantization to ZO-FF – is that enough?
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Why do you loop over $w$? Can't you just do it vector-wise?
2. Can you specify $m$ (the number of perturbations) used for each experiment?
3. Can you calculate the memory consumption (MB/GB) and computation complexity (in FLops) compared to QLoRA with BP.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful review of our paper, their interest in the core idea of our work, and valuable suggestions. The comments regarding the motivation, novelty, impact of our work, and detailed comparisons of hardware complexity are well taken, and the manuscript will be revised accordingly. Our responses to the questions are given as follows.
**Question 1 (loop over $w$):**
The loop over $w$ pertains to performing updates on a per-tensor basis. Each individual tensor $w_i$ contains the trainable parameters of one layer of the model. It is processed in a vectorized manner and is associated with its own quantization scaling factor. We will clarify this notation in the algorithm description to ensure a better understanding of the process.
**Question 2 (specify of $m$):**
In our experiments, the number of forward-forward calls performed ($m$) for averaging gradients is set to $3$, unless otherwise specified in the ablation studies. Details on this parameter are provided in the appendix.
**Question 3 (memory consumption and computation complexity):**
We appreciate your recommendations and feedback. We will list the memory cost and FLOPs for all models used in backpropagation (BP) and forward-forward (FF) process in the updated manuscript. Specifically,
We compare BP and FF from two perspectives:
__1. Memory efficiency:__ The extent of memory saving with FF depends on the number of layers being fine-tuned, and their positions within the network. When applied to methods such as Full Fine-tuning, LoRA and other Parameter-Efficient Fine-tuning (PEFT) approaches, FF shows significant memory reduction because it eliminates the need to store intermediate activations.
__2. Computation complexity:__ For a single iteration, BP performs one forward pass and one backward pass, while FF needs two forward passes. The FLOPs of a backward pass are roughly 2x those of the forward pass (e.g., for both Convolutional and Linear layers). In our experiments, we observed a 1.5x speedup in one iteration of training. However, the total computation (or training time) depends on the number of iterations required for training to converge.
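The per-iteration comparison above reduces to simple arithmetic in normalized units (not measured FLOPs; the $m=3$ line extrapolates from the default number of perturbations stated earlier in this rebuttal):

```python
forward = 1.0                 # cost of one forward pass (normalized unit)
backward = 2.0 * forward      # rule of thumb: backward pass ~ 2x forward
bp_iter = forward + backward  # BP: one forward + one backward = 3.0 units
ff_iter = 2 * forward         # FF, single direction: two forwards = 2.0 units
speedup = bp_iter / ff_iter   # 1.5x per iteration, matching the observation

ff_iter_m3 = 2 * 3 * forward  # with m = 3 averaged directions: 6 forward-equivalents
```

Note that per-iteration cost does not settle the end-to-end comparison, which also depends on how many iterations each method needs to converge.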
**Weakness 1 (QZO-FF for LLM models):**
We would like to address this comment in the common rebuttal section.
**Weakness 2 (novelty):**
Most existing neural processors on edge devices are optimized as efficient fixed-point inference engines. We believe continuously adapting pre-trained models to local data on the edge is crucial for effective model deployment.
To enable training on edge devices with constrained memory, we leverage low-bit precision techniques. While previous work [J. Lin, 2022] introduced Quantization-Aware Scaling for BP to mitigate accuracy loss due to quantization, BP is memory-intensive and unsupported by many existing inference engines. In contrast, ZO-FF shows a significant advantage in this context.
To our knowledge, there is no prior research demonstrating the feasibility and impact of quantized ZO-FF, particularly with regard to different bit-widths and their effects on performances. Our work addresses this gap by showing the effectiveness of quantized ZO-FF through extensive experiments across various deep learning benchmarks and modalities. We believe this contribution is of substantial interest to the industry, addressing both practical challenges and opportunities in deploying models on edge devices.
---
Rebuttal Comment 1.1:
Title: Answer to the Authors rebuttal
Comment: I would like to thank the authors for their detailed response. However, I still believe that a fair comparison of memory and computational complexity should include the number of steps required to achieve a certain level of accuracy. Currently, when practitioners have a system with limited memory, they often use activation checkpointing, a technique used to manage memory consumption in large language models (LLMs) during training. Instead of storing all intermediate activations for backpropagation, only a subset of activations is saved at "checkpoints." The rest of the activations are recomputed as needed during the backward pass. This approach reduces memory usage significantly at the cost of increased computation time. If QZO-FF requires three perturbations and converges much more slowly, say five times slower to achieve the same accuracy, it might not be beneficial to use it. Can you add convergence curves for BP and QZO-FF, or the number of iterations required to reach the reported accuracy? If so, I'll consider raising my score.
---
Reply to Comment 1.1.1:
Title: Replying to reviewer's comment
Comment: We sincerely appreciate the reviewer’s comment on the comprehensive comparison of QZO-FF and BP regarding memory, computational complexity, and convergence speed. We agree that these factors are essential for evaluating different gradient calculation methods. To address these aspects, we will provide training curves, along with reporting the accuracy of each method in our ablation studies, and include a time-compute-memory trade-off analysis comparing BP, BP with checkpointing and QZO-FF. Specifically,
__1. Convergence speed and empirical measurements__
Our experiments on the ViT-tiny network with $5.2M$ parameters show that ZO-FF converges approximately $2$x slower than BP, for both LP and VPT-deep training methods. Unfortunately, due to format limitations, we cannot upload any link or graphs on the current response page. However, we will include the training curves in the Appendix to illustrate the convergence of BP and QZO-FF across various settings (e.g., $m=1$, $m=3$ for gradient averaging) and different precisions (fp16, quant). These curves will show the actual convergence behavior and training time required to achieve the reported accuracy.
__2. Compute-Memory Trade-off__
Analyzing the time-memory trade-off for backpropagation (BP) is complex. However, the work of [Griewank et al. 2008] gives a general rule of time-memory trade-off for BP (Rule 21). For a network with $N$ layers, and a time-memory tradeoff hyperparameter $c = O(1)$, there exists a BP algorithm that runs in $O(cN)$ time and consumes memory proportional to $O(N^{1/c})$.
* In the case of $c=1$ (storing everything during the forward path), BP consumes $O(N)$ compute and $O(N)$ memory. Given the FLOPs of a backward pass are roughly $2$x of that of the forward pass, ZO-FF (with $m=3$) consumes $O(2N)$ compute and $O(1)$ memory, so it uses more compute but substantially less memory.
* Gradient checkpointing [Chen et al. 2016] reduces memory cost of BP by recomputing some activations. In their experiments, choosing $c=2$ achieves $O(\sqrt{N})$ memory at $O(2N)$ computation. In comparison, ZO-FF is more compute-efficient at the same memory cost.
We want to emphasize that the memory cost of gradient checkpointing lies between that of ZO-FF and BP, and that ZO-FF is more compute-efficient than gradient checkpointing at the same memory cost along the compute-memory Pareto curve.
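The scaling comparison above can be summarized in a purely illustrative helper, mirroring the asymptotic figures quoted from [Griewank et al. 2008] and [Chen et al. 2016] (constants are kept only to preserve the relative ordering, as in big-O notation; this is not a measurement):

```python
def iteration_cost(N):
    """Illustrative per-iteration compute/memory scaling for an N-layer
    network, in units of one layer's forward FLOPs / one layer's activations."""
    return {
        "BP": {"compute": N, "memory": N},                # c=1: store every activation
        "BP+checkpoint": {"compute": 2 * N,               # c=2: recompute half
                          "memory": round(N ** 0.5)},     # O(sqrt(N)) stored activations
        "ZO-FF": {"compute": 2 * N, "memory": 1},         # two forwards, O(1) activations
    }
```

At equal compute, ZO-FF sits at a strictly lower activation-memory point than checkpointed BP in this simplified model.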
As an example, the memory cost of training a ViT-tiny network using different gradient estimation methods is illustrated as follows (input size 224x224x3, batch size 1, FP16 data type; QZO-FF uses 16-bit weights and 8-bit activations, i.e., 16w8a). For BP with gradient checkpointing, we assume half of the activations are stored during the forward pass.
| **methods** | **weights** (MB) | **activations (peak + stored)** (MB) | **weight gradients** (MB) |
|---------------------------|-----------------------|----------------------------|-----------------------|
| **BP** | ██████████ 10.54 | ▓▓▓▓▓▓▓▓▓▓▓▓▓ 19.56 | ▒▒▒▒▒▒▒▒▒▒ 10.54 |
| **BP_grad_checkpointing** | ██████████ 10.54 | ▓▓▓▓▓▓▓ 9.92 | ▒▒▒▒▒▒▒▒▒▒ 10.54 |
| **ZO_FF** | ██████████ 10.54 | ▓ 0.14 | ▒▒▒▒▒▒▒▒▒▒ 10.54 |
| **QZO_FF** | ██████████ 10.54 | 0.07 | ▒▒▒▒▒ 5.27 |
__Additional comments:__
Beyond memory savings, QZO-FF provides practical benefits in several areas: 1) it enables training on existing edge devices with fixed-point engines (e.g., NPUs, DSPs, MCUs), which typically have very limited or no support for BP, because such hardware is primarily designed for inference. Implementing BP on such hardware requires substantial engineering effort. Additionally, the memory required for training is critical, determining the feasibility of enabling such a feature on-device. QZO-FF directly leverages fixed-point forward calls for gradient estimation, with the same memory cost as inference, facilitating model fine-tuning without hardware adaptation. This is a primary focus of our work. 2) QZO-FF is also suitable for training with non-differentiable objectives (e.g., maximizing accuracy or F1-score), where BP cannot be directly applied.
In summary, we do not expect QZO-FF to outperform BP in convergence speed or to serve as a replacement for BP, and we will include the above analysis to guide readers on the optimal choice among these techniques. With all the benefits of QZO-FF, we believe it is an attractive point along the compute-memory Pareto curve and especially suitable for on-device fine-tuning use cases where limited memory is the main stumbling block.
Thank you! | Summary: The paper proposes a quantization approach for fine-tuning pretrained models on new local data on resource-constrained devices. In particular, the weight perturbations, gradient estimates, and weight updates are quantized to either 8-bit or 16-bit. This quantization approach is combined with Momentum-Guided Sampling, Sharpness-Aware Perturbation, Sparse Update, and Kernel-wise Normalization to enhance fine-tuning performance. The proposed approaches are evaluated on various AI benchmarks. The results of this study indicate that quantized forward gradients are a good candidate for a fine-tuning approach that can be deployed on edge devices.
Strengths: 1- The paper is well-written and well-organized.
2- The quantized approach is evaluated on a variety of tasks that show the generalizability of the new approach.
3- The Sign-m-SPSA-SGD approach is interesting and novel.
Weaknesses: 1- The authors are recommended to discuss the accuracy degradation of quantized forward gradients compared to the backpropagation algorithm. In some cases, the accuracy degradation is high (more than 5%). A comparison of performance versus hardware complexity (FLOPs or another metric) is recommended, as seen in [1].
2- Evaluating the efficacy of quantized forward gradients on fine-tuning LLM models such as LLaMA-3 is recommended.
[1] Carmichael, Zachariah, et al. "Performance-efficiency trade-off of low-precision numerical formats in deep neural networks." Proceedings of the conference for next generation arithmetic 2019. 2019.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why is the random perturbation vector z sampled from a normal distribution with zero mean and unit standard deviation? Is it possible to sample from a log-normal distribution, since activation gradients are shown to be distributed near log-normal [1]?
[1] Chmiel, Brian, et al. "Neural gradients are near-lognormal: improved quantized and sparse training." arXiv preprint arXiv:2006.08173 (2020).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors address the limitations of this study by mentioning the initialization requirements for forward gradients and 16-bit weight quantization.
1- It is suggested to discuss various initialization approaches that might be used instead of a pretrained network.
2- It is suggested to discuss other numerical formats, such as 8-bit floating point or Posit, to solve the 16-bit weight quantization problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful review of our paper, their interest in the core idea of our work, and valuable suggestions. The comments regarding accuracy discussions and comparisons of hardware complexity are well taken, and the manuscript will be revised accordingly. Our responses to the questions and identified weaknesses are given as follows.
**Question 1 (sampling of $z$):**
The choice of sampling $z$ from a normal distribution with zero mean and unit variance is supported by the literature [Baydin et al. [2022], Section 3.2], which proved that the forward gradient is an unbiased estimator of the true gradient, when the scalar components of perturbation are independent, and follow a zero mean, unit variance distribution.
In our case, Sign-m-SPSA is also designed to be compatible with quantization. It constrains the range of gradient values to be the same as perturbation $z$ for static quantization. This maintains consistency in gradient estimation within the quantized space.
In addition to using a normal distribution, we explored using a binomial distribution from a quantization-friendly perspective. Our experiment, as shown in Figure 2(a), indicates that the binomial distribution is also effective. Sampling from other distributions (e.g., log normal distribution) is another interesting direction. We tested this variation on our cross-domain adaptation setup with fp16 precision, and the results were promising. The quantization impact of such distribution needs to be further investigated.
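The two perturbation distributions discussed above (zero-mean unit-variance Gaussian, and the quantization-friendly binomial/Rademacher variant) could be sketched as follows (function name hypothetical; not the paper's code):

```python
import numpy as np

def sample_perturbation(shape, dist="normal", rng=np.random.default_rng(0)):
    """Draw a zero-mean, unit-variance perturbation z for forward-gradient
    estimation. 'binomial' yields +/-1 entries, which map cleanly onto a
    fixed quantization grid."""
    if dist == "normal":
        return rng.standard_normal(shape)
    if dist == "binomial":
        return rng.integers(0, 2, size=shape) * 2.0 - 1.0  # values in {-1, +1}
    raise ValueError(f"unknown distribution: {dist}")
```

Both choices satisfy the independence and unit-variance conditions under which the forward gradient is an unbiased estimator of the true gradient.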
**Weakness 1 (accuracy and hardware complexity):**
We appreciate your recommendations and feedback. We will include a more detailed accuracy analysis, and list the memory cost and FLOPs of all the models used for backpropagation (BP) and forward-forward (FF) in the updated manuscript. Specifically,
__1. Accuracy Discussion:__
Since FF solely utilizes directional derivatives for gradient estimation, it is expected that BP generally outperforms FF in terms of accuracy in most tasks. The accuracy gap between BP and FF can vary based on factors such as the backbone architecture, dataset, and task difficulty. We observed that the accuracy gap tends to increase on more challenging tasks. However, using a stronger backbone such as ViT can help bridge this gap. This indicates that while FF may show more degradation with smaller architectures and low-resolution inputs, performance improvements can be achieved with more advanced models.
__2. Hardware Complexity:__
We compare BP and FF from two perspectives:
* __Memory efficiency__: BP needs memory to store model parameters, all the intermediate activations and gradients, whereas FF avoids storing intermediate activations, the size of which could be considerably large in many models.
* __Computation complexity__: For a single iteration, BP performs one forward pass and one backward pass, while FF needs two forward passes. The FLOPs of a backward pass are roughly 2x those of a forward pass (e.g., for both Convolutional and Linear layers). In our experiments, we observed a 1.5x speedup per training iteration. However, the total computation (or training time) depends on the number of iterations required for training to converge.
**Weakness 2 (QZO-FF for LLM models):**
We would like to address this comment in the common rebuttal section.
**Limitation discussion 1 (initialization):**
Our work primarily focuses on model fine-tuning on edge devices with fixed-point engines, where we assume that the pre-trained network provides a good initialization. To facilitate reproducibility, we use widely available open-source pre-trained networks as backbones. In our cross-domain adaptation experiments, we initialize the decoder layers randomly to ensure a consistent and straightforward experimental setup. We have added an accuracy report with mean and standard deviations across 5 runs to our ablation studies in Section 4.2.
**Limitation discussion 2 (numerical formats):**
Thank you for the valuable suggestions. Based on our extensive investigations, 16-bit weight quantization is crucial for accurately capturing perturbations and accumulating weight changes in FF. However, we recognize the potential benefits of exploring alternative numerical formats such as 8-bit floating point, or ultra-low bit formats, in forward gradient learning.
In future work, we plan to explore these numerical formats to address challenges associated with 16-bit weight quantization. Currently, support for such formats is limited on most fixed-point engines.
We will include additional discussions in the updated manuscript. | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful review of our paper, and their interest in the core idea of our work. We appreciate all the feedback, valuable suggestions and recommendations. The comments regarding notations, technical discussions, experimental clarifications are well taken, and the manuscript will be revised accordingly. Our responses to each reviewer’s questions are submitted separately. Additionally, we would like to address a few comments as follows.
**Suggestion of evaluating QZO-FF on LLM models:**
Thank you for the valuable suggestion. Our paper currently focuses on the feasibility and effectiveness of quantized ZO optimization for fine-tuning smaller models on edge devices with fixed-point engines. We have deployed our method on memory-constrained edge devices and brought training capability to fixed-point engines.
While our current work centers on these smaller models (ConvNets and Transformer models), we recognize the potential application of our approach to larger LLMs such as LLaMA-3. Techniques like LoRA and other Parameter-Efficient Fine-Tuning (PEFT) methods could be combined with our quantized ZO approach for LLMs. Recent literature, such as the MeZO work, indicates that ZO training can be effective across various LLM tasks. However, the impact of quantized ZO with low precision for LLMs remains an open question. We plan to extend our research to evaluate the performance of our quantized ZO approach on LLMs and various benchmark tasks in future work.
**Clarification of various enhancement techniques:**
These enhancement techniques are optional, and often involve trade-offs between memory, computation and accuracy, depending on the hardware memory budget. We will provide a more detailed discussion of each technique and its effectiveness across different experimental settings in the revised manuscript, to guide readers on the optimal use of these enhancements. Specifically,
__1. Momentum-guided sampling:__
This enhancement introduces memory overhead to increase accuracy. Instead of sampling solely from a zero-centered Gaussian distribution, perturbations are computed from a combination of a momentum-centered and a zero-centered Gaussian distribution. Mathematically, $z_1 \sim \mathcal{N}(0, \sqrt{\alpha}\,\mathbb{I}_n)$, $z_2 \sim \mathcal{N}(z_t, \sqrt{1-\alpha}\,\mathbb{I}_n)$, and $z_{t+1} = \beta z_1 + (1-\beta) z_2$. Here, $\beta$ is a smoothing parameter; $\alpha$ and $\beta$ can be adaptively adjusted during training. While $\beta=1$ corresponds to the baseline version without momentum-guided sampling, this approach enhances performance by improving perturbation quality through sampling history.
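One reading of this mixture sampling is sketched below, interpreting $\sqrt{\alpha}$ and $\sqrt{1-\alpha}$ as per-element standard deviations (that interpretation, and the function name, are our assumptions, not the paper's code):

```python
import numpy as np

def momentum_guided_sample(z_t, alpha=0.5, beta=0.5, rng=np.random.default_rng(0)):
    """One step of momentum-guided perturbation sampling.

    Mixes a zero-centered draw z1 with a draw z2 centered on the previous
    perturbation z_t; beta=1 recovers plain zero-centered sampling.
    """
    z1 = rng.normal(0.0, alpha ** 0.5, size=z_t.shape)  # zero-centered component
    z2 = rng.normal(z_t, (1 - alpha) ** 0.5)            # momentum-centered component
    return beta * z1 + (1 - beta) * z2
```

With `beta=1` the momentum term drops out entirely, matching the baseline described above.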
__2. Sharpness-aware perturbation:__
This technique improves the forward gradient direction by performing an additional step of directional gradient ascent, targeting regions where the loss curve is steeper. We observed that incorporating sharpness-aware optimization enhances the overall performance of QZO-FF across various experiments.
__3. Sparse updates:__
We base our sparse update approach on recent work by Chen et al. [2024], which utilizes a zeroth-order sparsity method. We also experimented with a random sparsity scheme, which proved effective as well. Due to the intrinsic properties of ZO methods, reducing the number of parameters updated per iteration generally leads to improved accuracy.
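The random-sparsity variant mentioned above could be sketched as a masked parameter update (all names hypothetical; the method of Chen et al. [2024] selects coordinates differently):

```python
import numpy as np

def sparse_zo_update(w, grad_est, lr=0.01, density=0.1, rng=np.random.default_rng(0)):
    """Apply a ZO gradient estimate to a random subset of parameters.

    'density' is the fraction of entries updated this iteration; the rest
    of w is left untouched, reducing the effective dimensionality of the
    zeroth-order update.
    """
    mask = rng.random(w.shape) < density  # Bernoulli(density) selection mask
    return w - lr * grad_est * mask
```

Lower density means fewer perturbed-then-updated coordinates per step, which is the mechanism behind the accuracy gains noted above.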
__4. Kernel-wise normalization:__
The primary motivation to incorporate static quantization is to ensure compatibility with hardware support. Many existing fixed-point neural processors on edge devices only support static graph quantization, where weights and activations are quantized prior to compilation. Dynamic quantization, which involves recalculating quantization parameters during runtime, is computationally expensive, and often not supported efficiently.
For kernel-wise normalization, obtaining the norm of weights involves a trade-off between computation and accuracy. However, efficient implementations using GEMM and SQRT operations can minimize the overhead on hardware.
**Figures:**
We have updated our pseudo-algorithm to include gradient averaging when $m>1$, and highlighted different precisions with color codes. Please note that the scaling factor is approximated through a multiplication and a right-shift operation on fixed-point processors (Appendix A). An accuracy report with mean and standard deviations has been added to our ablation studies in Section 4.2. Please refer to the attached PDF for the updates.
Pdf: /pdf/6a91160a25d42332f56f9860b18e86f0f0d990e0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Text-DiFuse: An Interactive Multi-Modal Image Fusion Framework based on Text-modulated Diffusion Model | Accept (spotlight) | Summary: A new paradigm of multi-modal image fusion named Text-DiFuse is introduced, based on the diffusion model. The paradigm embeds a mechanism for aggregating feature-level multi-modal image information into the diffusion process of degrading multi-modal images, addressing the optimization gap between "degradation removal" and "multi-modal information fusion". Additionally, a zero-shot model is introduced to modulate the fusion strategy based on user-input target text, enhancing the saliency of the target of interest. The conducted experiments suggest significant improvements in both human visual perception and advanced computer vision tasks.
Strengths: 1) Embedding the mechanism of aggregating feature-level information into multiple diffusion processes to fuse multi-modal information is interesting. It is foreseeable that this diffusion paradigm produces fused images with better fidelity compared to methods based on likelihood-constrained diffusion models.
2) The coupled approach effectively resolves the issue of compound degradation in the process of multi-modal fusion, as evidenced by experimental results demonstrating significant advantages over the sequential approach.
3) The authors emphasize the importance of foreground targets in advanced visual tasks and propose enhancing target saliency through zero-shot assisted re-modulation. This approach diverges from traditional uniform fusion rules, demonstrating effectiveness.
4) This approach shows strong applicability. It demonstrates superior performance in multiple tasks including infrared and visible image fusion, medical image fusion, and polarization image fusion.
Weaknesses: 1) After the diffusion model is effectively trained, the sampling process can follow different step intervals. The information fusion in this method is integrated into the diffusion process, but the article does not seem to specify the sampling interval at which the results are obtained. Also, this article does not discuss the impact of the sampling interval on the fusion performance.
2) The presentation is slightly unclear. For example, from Equation 2 to Equation 6, both the features and the images carry the condition N that represents the degradation. Why does equation 7 no longer include N? Why can it be considered that the degradation has been removed at this point?
3) In Table 2 and Figure 4, some existing image restoration methods are cascaded in front of the fusion method to promote fairness in comparison, such as low-light enhancement (CLIP-LIT), denoising (SDAP), and white balance (AWB) algorithms.
Please explain the choice of the order in which they are connected in series, i.e. why low light enhancement first, then denoising, and finally white balance.
4) Modulating the salience of targets of interest in the fusion process through language is novel. Intuitively, I think the improvement in semantic properties brought about by this modulation is widespread. Currently, the effectiveness of language modulation has only been verified in the semantic segmentation scenario. It is recommended to provide an evaluation in the object detection scenario to further verify its role.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please refer to the weaknesses part.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have analyzed the limitations and potential negative impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Sampling interval and its impact.\
Reply: In our method, image restoration and information integration are mutually coupled. This is reflected in the physical connection, where a fusion control module is embedded within the internal structure of the diffusion model. Once all the networks are trained, we can follow the standard diffusion model testing procedure, which involves performing T steps of continuous sampling. It is worth noting that information fusion needs to be performed at each sampling step. In this case, the only factor affecting the final fusion result is the number of sampling steps. More sampling steps mean better performance, but they also result in significant time consumption. Therefore, setting an appropriate number of sampling steps is a matter worth discussing.\
In tasks where the ground truth is available, the number of sampling steps can be well determined by checking whether the generated results are sufficiently close to the ground truth. However, for the image fusion task, where ground-truth data do not exist, we rely on visual perception and multiple no-reference metrics to make the assessment. Specifically, we set the number of sampling steps to 2, 3, 4, 5, 10, 25, 500, and 1000, with qualitative and quantitative results shown in Figs. r7 and r8. Notably, each metric is normalized along the step dimension for easier presentation. It can be observed that as the number of steps increases, noise is gradually removed and the scene texture becomes increasingly refined. Corresponding to the quantitative results, 25 steps achieve good performance saturation, with subsequent increases in the number of steps resulting in only slight fluctuations in scores. Note that the only exception is AG, as it is affected by noise during the diffusion process. Therefore, in our experimental section, the number of sampling steps is set to 25.
Q2: Removal of degradation symbols $N$ in Eq. (7).\
Reply: There are two conceptual differences that need to be clarified first. In all the equations, the symbol $N$ refers to the degradation from the source images, specifically including improper lighting, color distortion, and random noise. In contrast, the noise in the intermediate results obtained from continuous sampling of the diffusion model arises from the Gaussian noise assumption of the diffusion theory itself. Eqs. (2)-(6) represent the process of encoding, fusion, decoding, and the estimation of mean and variance for degraded multi-modal images. In this process, the objects being processed contain the degradation from source images, so they all include the condition $N$. Differently, Eq. (7) represents the intermediate result obtained after a single complete sampling step. Therefore, even though early sampling steps still contain Gaussian noise following the diffusion assumption (see Fig. r7), it does not need to include the symbol $N$. We will clarify their distinctions in the final version to avoid misunderstandings.
Q3: Concatenate order of image restoration algorithm.\
Reply: In Table 2, we introduce three image restoration algorithms as preprocessing steps of other comparative image fusion methods, including CLIP-LIT, SDAP, and AWB. Among them, CLIP-LIT is a low-light enhancement algorithm, SDAP is a denoising algorithm, and AWB is a white balance algorithm. In the experiment, we follow the processing sequence of low-light enhancement first, followed by denoising, and finally white balance. The choice of this sequence is related to the dependencies among the three types of degradations we are focusing on. Specifically, in low-light images, both scene content and degradation present low-intensity properties, and the signal-to-noise ratio is low. The deep entanglement of noise, color distortion, and useful signals makes degradation removal more challenging. Thus, we first use CLIP-LIT to improve exposure, thereby reducing the difficulty of denoising and color correction. Furthermore, color correction based on white balance requires locating the white light source, and noise interferes with the accuracy of finding the light source. Therefore, we then perform SDAP to remove noise. After addressing exposure and noise, AWB is applied last to achieve color correction.
Q4: Verification of text control on the object detection.\
Reply: Our method supports text control, enabling the enhancement of the salience of objects of interest based on instructions. Following the reviewer's suggestion, we further verify the semantic gain brought by text modulation on the object detection task. Specifically, the MSRS dataset [r1] is used, which includes pairs of infrared and visible images with two types of detection labels: person and car. Therefore, the text instruction is formulated as: "Please highlight the person and car," which guides our method to enhance the representation of these two types of objects in the fused image. Then, we adopt the YOLO-v5 detector to perform object detection on infrared images, visible images, and fused images generated by various image fusion methods. The visual results are presented in Fig. r6, in which more complete cars and people can be detected from our fused images while showing higher class confidence. Furthermore, we provide quantitative detection results in Table r5. It can be seen that the highest average accuracy is obtained from our fused images, demonstrating the benefits of text modulation. Overall, these results indicate that text control indeed provides significant semantic gains, benefiting downstream tasks.\
[r1] PIAFusion: A progressive infrared and visible image fusion network based on illumination aware. Information Fusion, 2022.
---
Rebuttal Comment 1.1:
Comment: The response is clear and my concerns are addressed. In particular, the effectiveness of semantic attribute improvement is verified in the object detection scenario. I think this observation is inspiring. I also see that the authors perform an additional comparison by using an all-in-one image enhancement algorithm InstructIR, still showing this work's effectiveness. Consequently, I plan to keep my original rating and recommend accepting this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback on our paper. The textual modulation enhances generalized semantic attributes, illustrating that abstract text encompasses rich semantic information and can aid in improving machine vision perception. This observation can inspire the design of methods for various high-level visual tasks. In the future, we will explore controllable semantic decision paradigms based on text integration, achieving various interesting functions, such as text-assisted, text-specified, and text-deception decisions. Furthermore, implicit integration of information restoration and fusion is indeed more attractive than explicit concatenation. This is not only because of its advanced performance but also due to its ability to handle multiple tasks with a single set of parameters. We commit that if this paper is accepted, all clarifications provided in the rebuttal will be incorporated into the camera-ready version. | Summary: This work focuses on the topic of multi-modal image fusion. Two innovations enhance the performance of the fusion. One is the clever integration of information fusion into the diffusion process. This coupling way enables the fusion function to resist degradation. The other is the introduction of a text-based fusion remodulation strategy. This changes the limitation of previous fusion methods that could only use fixed mappings, allowing for the dynamic adjustment of the fused image based on specific requirements. This remodulation also enhances semantic attributes, improving the scores of the semantic segmentation task.
Strengths: 1. Integrating information fusion into the diffusion process is novel. Especially, each sampling step triggers an information fusion, which enhances the sufficiency of information fusion. This coupling can ensure the robustness of information fusion, addressing challenges such as low light, noise, and color cast.
2. The introduction of multi-modal large models is interesting, particularly the ability to remodulate fused images using textual commands. This capability could potentially facilitate the flexible deployment of the proposed method across different application requirements. The demonstration of enhanced semantic attributes and improved semantic segmentation performance is good.
3. Overall, the experiments are relatively sufficient. The comparative experiments include both baseline comparisons and pre-enhancement comparisons, which are important for ensuring fairness.
4. The code is provided, which helps in reproducing the performance.
Weaknesses: 1. On page 5, line 174, the source data used for fusion contains degradation, [{Xb,Y}|N]. My question is, in Equations (9) and (10), where do the clean {Xb,Y} used to guide the fusion come from? Is there a multi-modal dataset that contains paired degraded and clean data? The paper seems to lack an explanation for this.
2. The forward process of the diffusion model involves T steps of noise addition, while the reverse process consists of T steps of iterative sampling. Is the Z0 obtained in equation (8) a hypothetical Z0 derived from the diffusion relation at each sampling, or is it the Z0 after completing the full T steps of sampling? This determines the object of the constraints in the loss functions (9) and (10). It would be better to provide a detailed discussion on this.
3. Only after the T steps of sampling can the data without degradation be obtained. So why can Z_{t-1}^b in equation (7) be considered free from degradation N?
4. It's understandable that using textual modulation to control the desired targets of interest can enhance semantic attributes. My question is whether these enhanced semantic attributes can be generalized. In other words, can it also be effective in other high-level visual tasks besides semantic segmentation?
5. Typo: The Zt on the left side of equation (8) seems to have a missing superscript b.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please answer the questions raised in Weaknesses.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, there are discussions about the limitations and potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Clean data for the loss construction.\
Reply: Constructing Eqs. (9) and (10) actually involves very stringent data requirements. Specifically, they require a pair of degraded multi-modal images describing the same scene, along with their corresponding clean versions. Unfortunately, such a dataset is currently not available. To alleviate this challenge, we adopt a two-step strategy: we first pre-train the diffusion model to learn the image restoration capability. In this step, we only need degraded-clean image pairs, without the need for paired multi-modal images that describe the same scene. Once the diffusion model is trained, it can be used to process existing degraded multi-modal image fusion datasets to generate the required clean multi-modal image pairs. At this point, all the data required for constructing Eqs. (9) and (10) has been obtained.
Q2: Source of the constrained fused image in Eq. (8).\
Reply: Our method couples image restoration and information integration by inserting a fusion control module within the diffusion model. In other words, each step of sampling will be accompanied by an information fusion. Theoretically, the final fused image requires T steps of continuous iterative sampling to obtain. However, during training, it is inefficient to wait for T steps of sampling to obtain the result and then construct the loss functions of Eqs. (9) and (10). Therefore, we customize Eq. (8) based on the diffusion relationship. According to it, we can derive the corresponding fake final fused image from the results of any step of sampling and apply the corresponding fusion constraints.
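The relation described in this reply is the standard one for denoising diffusion models: a "fake" clean sample can be derived from the noise prediction at any sampling step, so the fusion losses can be applied without waiting for all T steps. A minimal numpy sketch of this idea (illustrative only; the paper's exact Eq. (8), latent variables, and noise schedule are not reproduced here):

```python
import numpy as np

def predict_z0(z_t, eps_pred, alpha_bar_t):
    # DDPM forward relation: z_t = sqrt(a_bar)*z0 + sqrt(1 - a_bar)*eps.
    # Solving for z0 yields a hypothetical final result at any step t,
    # so fusion constraints can be imposed without running all T steps.
    return (z_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)
```

Given the network's noise prediction at step t, this recovers the clean latent exactly under the forward relation, which is why training-time losses can be built from a single sampling step.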
Q3: Omission of degradation $N$ in Eq. (7).\
Reply: Indeed, diffusion models often require a certain number of sampling steps to progressively remove the noise. However, we must clarify that the degradation symbol $N$ in Eqs. (2)-(6) is not the same as the noise in the diffusion process. Specifically, the symbol $N$ refers to the degradation of the source images, including improper lighting, color distortion, and random noise. These degradations serve as the conditions of the diffusion model: they are contained in the source image and fed into the denoising network. In contrast, the noise in the intermediate results obtained from continuous sampling of the diffusion model arises from the Gaussian noise assumption of diffusion theory itself. Eqs. (2)-(6) represent the process of encoding, fusion, decoding, and the estimation of mean and variance for degraded multi-modal images. In this process, the objects being processed contain the degradation from the source images, so they all include the condition $N$. Differently, Eq. (7) represents the intermediate result obtained after a single complete sampling step. Therefore, even though early sampling steps still contain Gaussian noise following the diffusion assumption (see Fig. r7), it does not need to include the symbol $N$. We will clarify this distinction in the final version to avoid misunderstandings.
Q4: Generalization of semantic attributes.\
Reply: Our method supports the use of textual instructions to re-modulate the information fusion process. Its purpose is to increase the salience of the object of interest, and enhance its presentation quality on the fused image. In our method, no specific downstream task is used to guide this remodulation process, so the gain in semantic attributes is not fixed to any particular task. In other words, beyond the semantic segmentation validation presented in the main text, the semantic attribute gain achieved through textual remodulation can certainly be generalized to other downstream tasks.\
To prove this point, we conduct application experiments on the object detection task. Specifically, we use the MSRS dataset [r1], which includes pairs of infrared and visible images with two types of detection labels: person and car. Therefore, the text instruction is formulated as "Please highlight the person and car," which guides our method to enhance the representation of these two types of objects in the fused image. Then, we adopt the YOLO-v5 detector to perform object detection on infrared images, visible images, and fused images generated by various image fusion methods. The visual results are presented in Fig. r6: more complete cars and people are detected in our fused images, with higher class confidence. Furthermore, we provide quantitative detection results in Table r5. The highest average accuracy is obtained from our fused images, demonstrating the benefits of text modulation. Overall, these results indicate that text control indeed provides significant semantic gains, benefiting downstream tasks.\
[r1] PIAFusion: A progressive infrared and visible image fusion network based on illumination aware. Information Fusion, 2022.
Q5: Typos.\
Reply: Thanks for pointing out these issues. We will carefully correct all typos in the final version and further enhance the presentation of the figures and tables.
---
Rebuttal Comment 1.1:
Title: To author's response
Comment: Thanks for the effort to clarify my questions in the provided rebuttal. Using the two-step strategy to address the limitation of data unavailability is clever, and I'm pleased to see the general semantic attributes brought by textual modulation. Therefore, I'm inclined to accept this paper. In the camera-ready version, please include the provided clarifications about the source of Z0 in loss functions (9) and (10) and the degradation N.
---
Reply to Comment 1.1.1:
Comment: We truly appreciate your efforts in improving our paper. If this paper is accepted, we plan to incorporate the following revisions into the camera-ready version in response to your recommendations:
1. Explain the source of clean data for constructing loss functions: We adopt a two-step strategy to alleviate this challenge of unavailable data. The core idea of the two-step strategy is to relax the high data requirements by pre-training a generative model with limited available data. Then, this pre-trained model allows for the production of data that is not available in reality.
2. Explain the source of $Z_0$: We customize Eq. (8) based on the diffusion relationship, so we can derive the corresponding fake final fused image $Z_0$ from the results of any sampling step and apply the corresponding fusion constraints. We also considered using the result after the full sampling process as $Z_0$ in loss functions (9) and (10). The quantitative results are reported below, demonstrating the advantages of our method compared to the full sampling strategy.
| $Z_0$ | EN | AG | SD | SCD | VIF |
| ---- | ---- | ---- | ---- | ---- | ---- |
| Full Sampling | 5.93 | 1.74 | 23.99 | 1.26 | 0.63 |
| Ours | **7.08** | **3.31** | **47.44** | **1.44** | **0.76** |
3. Explain the degradation $N$: The degradation symbol in Eqs. (2)-(6) is not the same thing as the noise in the diffusion process. The former refers to the degradation from the source images, specifically including improper lighting, color distortion, and random noise. In contrast, the latter comes from the Gaussian noise assumption of diffusion theory itself.
4. Verify textual modulation in object detection: We implement application experiments on the object detection task to verify the generalization of semantic attributes. Experimental results indicate that textual modulation indeed enhances semantic information, thereby benefiting downstream tasks. In the future, we will explore controllable semantic decision paradigms based on text integration, achieving various interesting functions, such as text-assisted, text-specified, and text-deception decisions.
We greatly appreciate your positive feedback on our work. If you have any further questions or concerns, please feel free to contact us. | Summary: This paper addresses two primary challenges in multimodal image fusion: the mixed degradation of modalities and the insufficient salience of target objects. It proposes two methods to tackle these challenges: feature-level fusion diffusion and the re-modulation of fusion rules in target areas using a zero-shot segmentation model. They implement adequate experiments for evaluation, and the results demonstrate this method's advanced performance across various aspects, including visual quality and semantic attributes.
Strengths: + The mixed degradation of modalities and the insufficient salience of target objects are two interesting problems in multimodal image fusion. This paper’s discussion and solution of these two problems may promote the usability of fusion methods in real scenarios.
+ The information fusion at the feature level is integrated into the diffusion process, which effectively realizes the degradation removal.
+ The customized object highlighting strategy based on the zero-shot segmentation model is flexible. In particular, its gain in semantic attributes will increase the usability of the fused image in downstream tasks.
+ This paper conducts lots of comparative experiments and ablation studies on the overall method.
+ The narrative of this paper is comprehensive and clear. For me, it's easy to follow.
Weaknesses: - This paper mentioned that the diffusion model is pre-trained to enable the denoising network to have the degradation removal function. However, details about the construction of the data used to train the diffusion model are missing. They need to describe this process to make the overall approach clearer.
- This paper focuses on multimodal image fusion, as reflected in the title. In the main text, the proposed method is evaluated in two scenarios: infrared and visible image fusion, and medical image fusion. In the supplementary materials, they further provide experiments on polarization image fusion. I am curious whether the applicable scenarios of the proposed method can be further expanded, such as to the typical fusion of near-infrared and visible bands.
- The experiments on polarization image fusion only provide visual results, and it would be better to add a quantitative evaluation.
- I noticed that the proposed method separates the chrominance component and the brightness component, and then performs de-degradation on them separately. An explanation of why this operation is needed should be given. Perhaps an ablation experiment could more intuitively show the effect of this operation.
- There are some minor typos, such as potential misspellings of dataset names in Tables 1 and 2. In addition, there seems to be a lack of underline on AG's second place.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. How were the degradation condition data constructed? Were paired supervised datasets used, or synthetic datasets?
2. Has there been an attempt to evaluate the method on the fusion of near-infrared and visible bands?
3. Could you provide the quantitative results of polarization image fusion?
4. The separation of chrominance and brightness requires more explanation.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Limitations and broader impacts have been included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Dataset for training diffusion model.\
Reply: In our work, acquiring image restoration capability depends on pre-training a conditional diffusion model, which needs paired clean and degraded data. The clean data are used to build the loss function for supervision, while the degraded data act as conditioning inputs for the denoising network. Therefore, we use existing supervised datasets and additionally simulate a portion of the data to meet the requirements of mixed degradation. \
Our method primarily addresses three common types of degradation in the fusion scenario: improper lighting, color distortion, and noise. For improper lighting, we use 2,220 image pairs from the MIT-Adobe FiveK Dataset [r1], covering images with varying exposures and their corresponding ground truth manually adjusted by photography experts. For color distortion, we use 1,031 image pairs from the Rendered WB dataset [r2], including color-biased images under various light sources such as fluorescent, incandescent, and daylight, as well as corresponding reference images manually calibrated under the Adobe standard. For noise, we add Gaussian noise, pulse noise, Poisson noise, Rayleigh noise, and uniform noise to 2,220 clean images from the MIT-Adobe FiveK Dataset and 2,220 clean images from the MSRS dataset to obtain noised images. All these image pairs constitute the complete dataset for training our diffusion model, driving our model’s learning for compound degradation removal.\
[r1] Learning photographic global tonal adjustment with a database of input/output image pairs. CVPR 2011.\
[r2] When color constancy goes wrong: Correcting improperly white-balanced images. CVPR, 2019.\
[r3] PIAFusion: A progressive infrared and visible image fusion network based on illumination aware. Information Fusion, 2022.\
Q2: Expand application scenarios.\
Reply: This is an insightful suggestion. Of course, our proposed method can be further generalized to other multi-modal image fusion scenarios, as its methodology is a general fusion paradigm. To prove this, we conduct comparative experiments in the near-infrared and visible image fusion scenario. The visual results are shown in Fig. r5, where our method effectively integrates texture details from the near-infrared band with those from the visible image, while preserving the natural color attributes of the visible image. Notably, the inherent image restoration capability of our method allows it to produce vivid fused images in underexposed scenes without causing overexposure like MRFS, as seen in the results of the first row. Furthermore, the quantitative results in Table r4 show that our proposed method ranks first in three of the five metrics and second in the other two. Overall, our method can be generalized to the near-infrared and visible image fusion scenario with promising performance.
Q3: Quantitative evaluation of polarization image fusion.\
Reply: We conduct a quantitative assessment of the polarization image fusion, as reported in Table r4. Our method achieves the best scores in four of the five metrics, namely EN, AG, SD, and SCD. These results indicate that our fused image contains the most information, the richest texture, the best contrast, and the most feature transfer from the source images. Overall, the quantitative results validate the advantages of our method in the polarization image fusion scenario, demonstrating its strong multi-modal fusion generalization capability.
Q4: Brightness-chrominance separation.\
Reply: Image fusion requires a high level of color fidelity to the scene. Taking the infrared and visible image fusion as an example, the colors in the fused image are required to be as consistent as possible with those in the visible image. Therefore, by independently purifying and preserving the chrominance components in the visible image, our method can effectively and conveniently achieve color fidelity.\
Next, we discuss why our method does not directly process three-channel images. First, from the perspective of image restoration alone, directly processing color images is entirely feasible. However, our method requires embedding information fusion into the latent layers of the diffusion model used for image restoration. This means that features from the gray infrared image could potentially interfere with the color distribution of features from the visible image. In particular, this interference occurs in the highly nonlinear latent space, where some small changes can be amplified by the decoder to produce large color distortions. In this case, ensuring the expected color fidelity is very difficult. Second, the interference is directly related to the way multi-modal features are fused. In our method, we use a nonlinear neural network called the Fusion Control Module to perform information aggregation, which is guided to retain significant thermal radiation objects while preserving rich background textures. These two goals correspond to the similarity loss functions (see Eqs. (9) and (10)) based on the indicators of pixel intensity and gradient. Under such optimization guidance, it is difficult to avoid disrupting the color distribution in the features from the visible image. To verify this, we adapt our proposed method to directly process three-channel images without separating brightness and chrominance components, and the results are presented in Fig. r1. Clearly, color distortion occurs. Furthermore, we implement a quantitative evaluation in Table r1. The direct processing strategy decreases the color score CIECAM16 and also negatively affects other metrics to varying degrees.
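The brightness-chrominance separation discussed in this reply can be sketched as follows. The paper does not specify its exact colour transform, so a BT.601 YCbCr split is assumed here purely for illustration: fusion touches only the luminance channel while the visible image's chroma is preserved unchanged.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 full-range RGB -> YCbCr (an assumed, common choice)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.2990 * r + 0.5870 * g + 0.1140 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5000 * b + 0.5
    cr =  0.5000 * r - 0.4187 * g - 0.0813 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)

def fuse_brightness_only(visible_rgb, fused_luma):
    """Replace only the brightness channel with the fused result,
    keeping the visible image's chrominance for colour fidelity."""
    ycbcr = rgb_to_ycbcr(visible_rgb)
    ycbcr[..., 0] = fused_luma  # Y is fused; Cb/Cr stay untouched
    return ycbcr
```

Because the chroma planes never pass through the fusion network, the colour distribution of the visible image cannot be disturbed by infrared features, which is the point the reply argues.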
Q5: Typos and underline on $AG$'s second place.\
Reply: Thanks for pointing out these issues. We will carefully correct all typos in the final version and further improve the presentation of the figures and tables.
---
Rebuttal Comment 1.1:
Comment: OK. Your reply solved my doubts.
---
Reply to Comment 1.1.1:
Comment: We truly appreciate your efforts in improving our paper. If this paper is accepted, we plan to incorporate the following revisions into the camera-ready version in response to your recommendations:
1. Supplement data construction details: We use existing supervised datasets (MIT-Adobe FiveK Dataset, Rendered WB dataset) and additionally simulate a portion of the data to meet the requirements of mixed degradation (such as improper lighting, color distortion, and noise). These data constitute the complete dataset for training our diffusion model, driving our model's learning for compound degradation removal.
2. Add additional application scenarios: We extend our proposed model to the near-infrared and visible image fusion task. The experimental results continued to demonstrate the advantages of our method.
3. Supplement quantitative evaluation of polarization image fusion: We supplement the quantitative assessment of the polarization image fusion, showing that our method achieves the best scores in four of the five metrics.
4. Discuss brightness-chrominance separation: We discuss the two reasons why our method does not directly handle three-channel images, including the nonlinear amplification of interference and the impact of fusion loss functions.
We greatly appreciate your positive feedback on our work. If you have any further questions or concerns, please feel free to contact us. | Summary: This paper proposes an interactive framework that can exploit the intrinsic connection between image restoration and multi-modal image fusion.
The authors embed information fusion within the diffusion process and address the "composite degradation challenge", i.e., multi-modal information integration with effective information restoration from degradations like colour casts, noise, and improper lighting. Particularly, first, independent conditional diffusion models are applied to each modality with compound degradation -- the degradation removal priors are embedded into the encoder-decoder network. A fusion control module (FCM) sits in the multi-step diffusion process to manage the integration of multi-modal features and remove degradation during T-step sampling. Next, to interactively enhance focus on objects of interest during diffusion fusion, the authors design a text-controlled fusion re-modulation strategy that incorporates a text prompt and a zero-shot OWL-ViT detector to identify the objects of interest. In other words, this step performs a secondary modulation with the built-in prior to enhance saliency.
Strengths: - It is interesting to see the effect of combining image restoration and multi-modal image fusion in a single framework.
- The proposed method is well-motivated and the authors provide a clear explanation of the method.
- The Text-controlled fusion re-modulation strategy could be useful in many applications.
- The authors provide the code in the supplementary material (although I have only dry run the code and not tested it).
- Extensive experiments are conducted to validate the proposed method.
- The authors provide ablation studies to show the effectiveness of each component of the proposed method.
Weaknesses: For now, I have minor concerns and mostly questions (as listed in the next section).
- The authors should add a brief discussion on the competitors in supplementary material. For example, differences between TarDAL, DeFusion, LRRNet, DDFM, and MRFS.
- Typo in Eq. 2: $\Theta_{t}^{X^{B}}$ should be $\Theta_{t}^{X^{b}}$.
- Improve the caption of Figure 2. I had to read the entire paper to understand the figure (it should be self-explanatory).
- Not much of a weakness, but the authors could improve the clarity of the paper if they added the tensor dimension of each variable in Figure 2.
Technical Quality: 3
Clarity: 2
Questions for Authors: - In the proposed method, input visual image X is broken into brightness and chroma components. I wonder if this step is absolutely necessary -- or can we skip $\eta^{c}_{\theta}$ and directly combine both $X$ and $Y$ as three-channel images in $\mathbb{R}^{H \times W \times 3}$.
- What if I use InstructIR (for image restoration) followed by MaxFusion (for multi-modal fusion) -- how would it compare with the proposed method?
- (InstructIR) https://arxiv.org/pdf/2401.16468 | Github: https://github.com/mv-lab/InstructIR
- (MaxFusion) https://arxiv.org/pdf/2404.09977 | Github: https://github.com/Nithin-GK/MaxFusion
- In the Limitation and Future work section, will a no-training approach be possible? For example, "MaxFusion"-like approach but with the proposed deep integration of image restoration and multi-modal fusion.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors discuss the limitations in supplementary Section A.4 -- particularly, Table S1 shows the number of parameters and runtime. This is highly appreciated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Discussion on the necessity of the brightness-chrominance separation. \
Reply: Unlike the image generation task emphasizing diversity, image fusion demands high color fidelity. For instance, in infrared and visible image fusion, the fused image should closely match the colors of the visible image. To this end, our method independently purifies and preserves the chrominance components of the visible image for color fidelity.\
While a direct processing approach without separating brightness and chrominance is appealing, it faces several challenges in our work. First, from the perspective of image restoration alone, directly processing color images with the diffusion model is entirely feasible. However, our method integrates information fusion into the latent layers of the diffusion model, which can cause gray infrared features to interfere with the visible image’s color distribution. Such interference in the nonlinear latent space can cause the decoder to amplify small changes into significant color distortions. Second, the interference is related to the way multi-modal features are fused. Our method uses a nonlinear neural network, the Fusion Control Module, to aggregate features. This module preserves key thermal objects and background textures, guided by similarity loss functions based on pixel intensity and gradient (see Eqs. (9) and (10)). This optimization can disrupt the color distribution due to direct pixel changes. To verify, we adapt our method to process three-channel images without separating brightness and chrominance, as shown in Fig. r1. This strategy causes noticeable color distortion. We also conduct a quantitative evaluation. Table r1 reveals that the direct processing strategy lowers the color score CIECAM16 and negatively impacts other metrics to varying extents.
Q2: InstructIR plus MaxFusion.\
Reply: MaxFusion is designed for conditional image generation, requiring results to meet conditions like depth, skeleton, segmentation, and edges. Its strength lies in extending single-condition to multi-condition generation through feature fusion. However, MaxFusion is unsuitable for the image fusion task due to lower fidelity. Specifically, image fusion needs the fused image to maintain pixel-level consistency with multiple source images, preserving significant objects and sharp textures. In contrast, MaxFusion focuses on semantic consistency, producing diverse results with different styles. In Fig. r2, we apply MaxFusion to infrared and visible image fusion, using the infrared and visible images as conditions; the results show that it does not meet the goal of enhancing scene representation in image fusion.\
Thus, we conduct comparisons using the reviewer-recommended InstructIR followed by several advanced image fusion methods. First, we input different text prompts into InstructIR to address improper lighting, noise, and color distortion. The restored images are then fused with advanced fusion methods. Results are shown in Fig. r3 and Table r2. Our method, which implicitly integrates image restoration and fusion, shows better performance than these methods, which follow a sequential strategy. In particular, our method can balance thermally salient object retention and degradation removal, while competitors cannot.
Q3: No-training approach.\
Reply: MaxFusion extends single-modal image generation models to multi-modal ones through feature-level fusion. Specifically, MaxFusion proposes a no-training fusion strategy, which uses the variance maps of the intermediate feature maps of the diffusion model to select or weighted sum multi-modal features, assuming that pixels with higher variance represent higher priority for condition control.\
Differently, our method uses a neural network, the fusion control module (FCM), which is trained under the constraints of Eq. (11) to fuse multi-modal features aiming at preserving significant objects and textures. Of course, our method can also be extended to a non-training version by using a statistics-based fusion strategy. In the ablation study, we have evaluated three no-training fusion strategies: maximum, addition, and mean (see Table 5). These methods performed worse than our learnable FCM. Furthermore, we incorporate MaxFusion’s variance-based fusion strategy in our method (see Fig. r4 and Table r3), and it still falls short compared to our FCM. In future research, we will explore more powerful no-training fusion strategies, achieving performance comparable to retraining.
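The variance-based, no-training selection attributed to MaxFusion in this reply can be sketched in a few lines (illustrative only; the original operates on intermediate diffusion features with its own normalization and weighting scheme):

```python
import numpy as np

def variance_select(feat_a, feat_b):
    """No-training fusion: at each spatial location, keep the feature
    vector whose channel-wise variance is larger, treating higher
    variance as a stronger conditioning (saliency) signal."""
    var_a = feat_a.var(axis=0)  # (H, W) variance across channels
    var_b = feat_b.var(axis=0)
    mask = var_a >= var_b       # True where modality A dominates
    return np.where(mask[None, :, :], feat_a, feat_b)
```

A learned module such as the FCM can instead weight features under task-specific losses, which is why the reply finds statistics-based strategies like this one weaker.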
Q4: Discussion on the competitors.\
Reply: Due to limited space, we discuss only a few newer competitors. TarDAL uses a GAN with dual discriminators to preserve objects and textures, and collaborates with object detection for semantic optimization. DeFusion employs a masked autoencoder to decouple unique and common features, achieving complementary feature aggregation. LRRNet introduces low-rank representation model-based networks and combines pixel-level and feature-level losses for multi-modal fusion. MRFS couples image fusion and segmentation at the feature level to improve fused image quality. However, these methods do not fully address composite degradation, resulting in lower robustness. Given the diffusion model’s strong generative capabilities, applying it to image fusion could address composite degradation issues, but the lack of clean references complicates this. DDFM uses source images to guide sampling direction for implicit fusion, but still cannot solve composite degradation due to the inability to retrain the diffusion model. Our method tackles this by first optimizing the diffusion model and then embedding a fusion module within it, achieving integration of image restoration and fusion. Additionally, our method includes a text control interface for further modulation and enhancement of objects. These designs make our method both robust and flexible.
Q5: Figures, tables, and typos.\
Reply: We will carefully correct all typos in the final version and further improve the presentation of the figures and tables.
---
Rebuttal Comment 1.1:
Comment: We thank Reviewer DZyq for the insightful and valuable feedback. We have included new experimental results to demonstrate the necessity of brightness-chrominance separation in our method. Additionally, we have conducted a comparison with InstructIR+Fusion to further highlight the advanced performance of our method. Furthermore, a no-training version of our methods has been explored. We hope we have addressed all of your concerns. If you have any additional questions, please let us know. | Rebuttal 1:
Rebuttal: We sincerely thank each of the reviewers, area chairs, and program chairs for investing their time and effort into our paper. These valuable comments have enriched our understanding of the research problem and will greatly improve the quality of our manuscript. \
According to the reviewers' comments, we have added some validations. Firstly, we compare a three-channel direct processing strategy without brightness-chrominance separation, demonstrating the necessity of handling brightness and chrominance separately for color fidelity in our method. Secondly, we utilize the state-of-the-art image restoration method InstructIR as a preprocessing step of several advanced image fusion methods for further comparison, showing the advantages of our method with implicit integration of restoration and fusion. Thirdly, drawing inspiration from MaxFusion, we showcase the performance of our method with the non-training fusion strategy. Fourthly, we further extend the application scenario of our method to the near-infrared and visible image fusion, and supplement the quantitative results of the polarization image fusion. Fifthly, we validate the generalizability of the semantic gain offered by our proposed textual control in the object detection task. Finally, we investigate the impact of sampling steps on the final fused result and provide the basis for selecting 25 sampling steps in the experiments. All the figures and tables from the above validations are included in the global PDF file. In addition, we have prepared detailed individual responses for each reviewer to address and clarify all the raised issues and concerns.
Pdf: /pdf/b26676cbd3c4bfdb2be0aef61f934dab926ffa2b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
In Pursuit of Causal Label Correlations for Multi-label Image Recognition | Accept (poster) | Summary: This paper proposes a simple yet effective method to address the issue of contextual bias for multi-label image recognition. It utilizes causal intervention theory to pursue causal label correlations and suppress spurious label correlations. It uses k-means to model the confounders, and employs the cross-attention mechanism to achieve the causal intervention. Experimental results demonstrate the efficacy of this approach.
Strengths: The paper is well-written and easy to understand.
This method seems easy to implement.
The approach achieves good results.
The problem is interesting in multi-label recognition tasks.
Weaknesses: The authors should further explain why the k-means algorithm is used to build confounders.
In the paper, the number of cluster centers is only evaluated up to 100; what happens if it continues to increase?
Regarding inference time, how many forward passes does the method require?
In L191, how is P(c) obtained from the data?
Technical Quality: 4
Clarity: 4
Questions for Authors: The authors should provide more detailed explanations and experiments about confounders.
The authors should provide a description of the inference process.
The authors should clarify how to obtain a prior of confounders.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have addressed the limitations of this method. The method is dataset-dependent, and it cannot determine the specific semantics of confounders, resulting in limited interpretability. Besides, it only considers causal intervention, and does not consider causal reasoning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer BNe7 for the positive comments on our work. In the following, we present our responses addressing the raised concerns.
**(Weakness 1)** In this work, we apply K-means clustering to the spatial features extracted from a pre-trained classification network for confounder modeling. This approach is based on our realization that the confounders for recognizing a certain object are often hard to define and enumerate – objects, scenes, and even the texture of the environment are all potential confounders. On the other hand, a pre-trained classification CNN tends to activate the discriminative regions (which could be objects, scenes, or even textures) in an image. Therefore, K-means clustering on CNN spatial features can produce a compact set of prototypes to represent potential confounders such as objects, scenes, and textures.
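To make this step concrete, here is a minimal, self-contained sketch of K-means-based confounder modeling (not the paper's actual code: the CNN feature extractor is replaced by random data, and the feature dimension, number of spatial locations, and cluster count are all made-up placeholders):

```python
import numpy as np

# Assumed setup: spatial features from a pre-trained CNN, flattened so each
# row is one spatial location's D-dimensional feature vector. Random data
# stands in for the real extractor here.
rng = np.random.default_rng(0)
num_locations, feat_dim, num_confounders = 600, 16, 5
spatial_features = rng.normal(size=(num_locations, feat_dim))

# Plain Lloyd's K-means: each resulting center acts as one confounder
# prototype (potentially an object-, texture-, or scene-level concept).
centers = spatial_features[rng.choice(num_locations, num_confounders, replace=False)]
for _ in range(20):
    # Assign every spatial feature to its nearest center.
    dists = np.linalg.norm(spatial_features[:, None, :] - centers[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)
    # Recompute each center as the mean of its assigned features
    # (keeping the old center if a cluster happens to be empty).
    centers = np.stack([
        spatial_features[assignments == k].mean(axis=0)
        if np.any(assignments == k) else centers[k]
        for k in range(num_confounders)
    ])
```

The resulting `centers` matrix plays the role of the compact set of confounder prototypes described above; in practice the paper would run this on real pre-trained features rather than random vectors.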
**(Weakness 2)** Thanks for your suggestion. We conduct additional experiments on the number of clustering centers, as illustrated in Fig. 2 of our attached pdf file. Our experiments show that 80 clusters are sufficient to represent potential confounders in our datasets, and 400 clusters do not bring a significant accuracy gain. We point out that this conclusion remains an empirical observation; we hypothesize that for more complex multi-label image classification datasets (which might be collected in the future), further increasing the number of clusters may be necessary for modeling complex confounders and achieving higher accuracy.
**(Weakness 3)** As the confounders are pre-computed, only one forward pass is required to obtain the final prediction.
**(Weakness 4)** With K-means clustering, each spatial feature will be assigned to a certain cluster $c$. We calculate the number of spatial features for each cluster, divide it by the total number of spatial features, and then obtain $P(c)$ for each cluster.
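In other words, $P(c)$ is a normalized histogram of cluster assignments. A tiny illustrative sketch, with made-up assignments:

```python
import numpy as np

# Hypothetical cluster assignments (one entry per spatial feature).
assignments = np.array([0, 2, 1, 0, 2, 2, 1, 0, 0, 2])
num_clusters = 3

# P(c): fraction of all spatial features assigned to each cluster.
counts = np.bincount(assignments, minlength=num_clusters)
prior = counts / counts.sum()
print(prior)  # -> [0.4 0.2 0.4]
```

By construction the prior sums to one, as required for the expectation over confounders in the causal intervention formula.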
**(Question 1)** Thanks for your suggestion. We plan to add more explanations about confounders (e.g., our response to Weakness 1) and more experiments (e.g., other implementation choices for confounders, such as random vectors) in our revised version.
**(Question 2)** Since the confounders (and the prior $P(c)$) are pre-computed, the inference process of the causal branch (formulated by Eq. 11) only requires one forward pass. As a result, the inference process of our whole pipeline (Eq. 2) also only requires one forward pass. We will further detail the inference process of our method in the revised version.
**(Question 3)** Please refer to our response to Weakness 4.
**(Limitations)** As described in Sec A.1, our current approach still has limitations in interpretability and higher-level causal modeling. We consider these as our future works, and will try our best to solve these limitations.
---
Rebuttal Comment 1.1:
Title: Respond to the authors
Comment: The authors have addressed my concerns, I will raise my score correspondingly.
---
Rebuttal 2:
Comment: We are thankful for your acceptance and constructive feedback.
Title: Response to Reviewer BNe7 | Summary: This paper presents a novel approach to addressing label correlations in multi-label image recognition by using causal intervention. The method involves decoupling features, modeling confounders, and implementing causal interventions to capture useful contextual information while suppressing spurious label correlations. This approach is highly innovative and has significant potential applications.
Strengths: 1. **Innovative Approach:**
The paper introduces a novel method that applies causal intervention to model label correlations in multi-label image recognition. This innovative approach addresses the challenge of spurious label correlations and captures useful contextual information, which is a significant advancement in the field.
2. **Comprehensive Methodology:**
The proposed framework integrates several complementary techniques, including feature decoupling with a Transformer decoder, confounder modeling through clustering, and causal intervention using cross-attention mechanisms. This comprehensive methodology enhances the robustness and accuracy of multi-label image recognition models.
3. **Thorough Experimental Validation:**
The paper conducts extensive experiments across multiple datasets, demonstrating the effectiveness of the proposed method. The results consistently show improvements over existing approaches, particularly in scenarios with contextual biases, underscoring the practical value of the method.
Weaknesses: 1. **Lack of Hyperparameter Analysis:**
The paper does not provide a detailed analysis of the hyperparameters involved in the proposed method, such as the number of clusters for confounders or the parameters of the cross-attention module. A sensitivity analysis of these hyperparameters would be beneficial to understand their impact on model performance and to guide practitioners in tuning the model effectively.
2. **Insufficient Discussion on Method Limitations:**
The paper lacks a thorough discussion on the limitations of the proposed method. It would be valuable to include an analysis of scenarios where the method might not perform well, such as when the selection of confounders is inaccurate or when the causal relationships between labels are weak. Addressing these limitations can provide a more balanced view of the method's applicability and robustness.
3. **Limited Ablation Studies:**
Although the paper includes some ablation studies, the number and depth of these experiments are not comprehensive enough. More detailed ablation studies are needed to analyze the independent contribution of each module (e.g., feature decoupling, confounder modeling, and causal intervention) to the overall performance. This would help in understanding the importance and effectiveness of each component of the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I noticed another paper titled "Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training" that also employs causal inference to address multi-label image classification. The methods in these papers differ in implementation and theoretical basis. Could you further elaborate on the main differences and advantages of your approach compared to this work?
2. Your paper does not provide a detailed analysis of the hyperparameters involved in the proposed method. Could you explain the rationale behind the chosen hyperparameters and their impact on the model's performance?
3. There is a lack of discussion on the limitations of your proposed method. In what scenarios might your method underperform, and how could future work address these limitations?
4. Could you explain why certain confounders were selected for modeling in your approach? How does the choice of confounders impact the effectiveness of causal intervention in your model?
5. How does your method handle cases where the causal relationships between labels are weak or not well-defined? Does this affect the model's accuracy, and if so, how?
6. How does your approach ensure robustness against noise and variability in the data? Are there any specific strategies employed to handle noisy or incomplete labels?
7. Could you provide more details on how the feature decoupling using the Transformer decoder specifically contributes to reducing contextual biases in multi-label image recognition?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The effectiveness of the proposed method relies heavily on accurately modeling the confounders. If the selection of confounders is not precise or representative of the underlying data distribution, the causal intervention may not effectively distinguish between useful contextual information and spurious correlations. This could potentially limit the method's performance in scenarios where confounder selection is challenging.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer wRCW for the detailed feedback on our work. In the following, we present our responses addressing the raised concerns. Should our rebuttal effectively address the concerns, we kindly hope you can raise your score.
**(Weakness 1)** We agree with you that a detailed analysis of the hyperparameters is crucial to validate the robustness of our method. Regarding the number of clusters for confounders, as you suggested, **we have already presented the results in our main paper**. Please refer to Tab. 4 and Sec. 5.3.2 for details. As for the parameters, in the following table we compare our method to the baseline (naive ResNet-50) and Q2L (SOTA method). Our method significantly outperforms the baseline, as well as the SOTA Q2L method, which has a much larger model size. In particular, our causal module only adds marginal parameters, but improves the accuracy (mAP) on COCO-Stuff from 56.8 to 60.6, demonstrating the effectiveness of our causal label correlation modeling.
| Method | Param. | mAP on COCO-Stuff All |
| --- | --- | --- |
| Q2L | 175.3M | 57.2 |
| Baseline | 26.9M | 55.0 |
| Baseline + Decouple | 41.1M | 56.8 |
| Baseline + Decouple + Causal (Ours) | 45.3M | 60.6 |
**(Weakness 2)** We agree with you that a discussion about the limitations is valuable for a more complete understanding of our method. Actually, **we have discussed the limitations of our current confounder modeling and causal intervention in Sec. A.1** (Supplemental Material). We plan to add such discussions to the main body of our paper.
Regarding the confounders, **we have investigated the effects of different confounder modeling methods (see Tab. 6)**. In addition, we also add a comparison between our K-means-based confounders and random confounders, which clearly shows that inaccurate confounders lead to significant performance degradation.
| Method | Exclusive | Co-occur | All |
| --- | --- | --- | --- |
| Random | 22.1 | 66.3 | 56.1 |
| K-means | 29.7 | 69.6 | 60.6 |
**(Weakness 3)** We agree with you that comprehensive ablation study is helpful for understanding the importance and effectiveness of each component of the proposed method. Actually, **we have presented ablation experiments to validate the effectiveness of each component of our method.** Please refer to Tab 3 and Sec. 5.3.1.
**(Question 1)** Thanks for suggesting this reference paper. There are two core differences between our method and the method proposed in that paper. Firstly, the meaning of the nodes in the Structural Causal Model is different: we consider two different target objects as $X$ and $Y$, while the reference paper considers the target object as $X$ and the model prediction as $Y$. Secondly, our high-level considerations in confounder modeling are different: the reference paper considers co-occurring objects as confounders, while we argue that purely modeling confounders with object-level features is insufficient, and that confounders should also include background or texture information. We will add a discussion of this reference paper in our revised version.
**(Question 2)** Please refer to our response to weakness 1.
**(Question 3)** Please refer to our response to weakness 2.
**(Question 4)** **In Sec 4.3 (line 203~217) we discuss our understanding about confounder modeling in context of multi-label image recognition.** We argue that confounders should not only consider the objects with labels defined by the dataset, but also include other types of objects, image background and even image textures. Based on this understanding, we present to model the confounders by clustering the spatial features extracted by a pre-trained classification network, which may characterize confounders with object-level, texture-level and scene-level concepts. In Tab. 6, we investigate the impact of different confounder modeling methods, where our method achieves the best result.
**(Question 5)** Our method aims to suppress spurious label correlations and enhance causal label correlations. Therefore, if the correlation between two labels is spurious, our method will suppress that label correlation for multi-label image recognition. We would like to highlight that this does not mean our method always makes the right prediction; rather, it suppresses spurious label correlations to reduce the probability of erroneous predictions caused by them.
**(Question 6)** How to handle noisy or incomplete labels is an important research topic in machine learning. However, this is not the focus of our paper, and our current method cannot address the partial label issue. Thanks for your suggestion, and we consider extending our method to the partial label setting as our future work.
**(Question 7)** We would like to highlight that pure feature decoupling using the Transformer decoder cannot reduce contextual biases. To reduce contextual biases, we explicitly model causal label correlations based on the decoupled features and confounders, with the guidance of causal intervention in causal theory.
**(Limitation)** On the one hand, confounder modeling is a crucial step in our method, and bad choices may lead to obvious performance degradation. On the other hand, we would like to highlight that how to model confounders for visual recognition tasks remains an open question. In this work, we present our understanding of confounders (see lines 203~217), and then develop a simple but effective modeling approach based on a pre-trained classification network and K-means clustering. In our ablation experiments (Tab. 6), we show the advantages of our confounder modeling approach.
In short, for this open question of confounder modeling, we have presented a rational analysis, as well as a simple approach with experiments validating its effectiveness.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We hope that our rebuttal has effectively clarified your confusion and that the additional experiments we provided have strengthened the validity of our approach. We eagerly await your feedback on whether our response has adequately resolved your concerns, or if further clarification is needed. | Summary: This paper proposes a causal intervention mechanism for multi-label image classification, where causal label correlations are pursued and spurious label correlations are suppressed. To achieve this, the authors frame a pipeline consisting of a branch for decoupling label-specific features and a branch for summarizing causal label correlations. The results from both branches are combined for final predictions on image labels. Comparative experiments and ablation studies demonstrate the effectiveness of the proposed causal intervention mechanism.
Strengths: - The paper is generally well written with clear motivation and objectives.
- Causal intervention is technically novel and well motivated in terms of multi-label image classification.
- Experimental results are impressive, outperforming sub-optimal methods by a considerable margin. Ablation studies are also well designed to showcase the contribution.
Weaknesses: - Line 175: 'Correaltions' -> 'Correlations'.
- Line 237: 'Transformer encoder' -> 'Transformer decoder'.
- $f_{fc}$ in Eq.6 and $f_{fc}$ in Eq.11 should be different if their parameters are not shared.
- In Figure 4, in the causal label correlation branch, the confounder features are added into label-specific features. However, the outputs are not seen to be used in subsequent steps, and it seems that only the label-specific features are utilized for causal intervention. The diagram of this module needs to be improved.
- More experimental evidence should be provided to verify the effectiveness of the confounder modeling. For example: using random vectors to replace cluster centers as confounders. Only the feature visualization and ablation study on clustering center number are unconvincing.
- Although this paper is well motivated, the modeling process, especially Equation 11, is confusing.
Technical Quality: 3
Clarity: 2
Questions for Authors: - How is the operation $f_{merge}$ removed from the second line of Eq.11? Why is the summation over all confounders $c$ also removed? Even if it can be removed, which confounder does $c$ in the last line of the formula refer to? Eq.11 is confusing and needs further clarification. It would be best if the authors added pseudocode to illustrate this process.
- Are the results in Table 1 and Table 2 reported by re-training these models on the relevant datasets? If so, the authors should clarify the experimental details for fair comparison.
- According to Table 8, in terms of the intra-dataset comparisons on MS-COCO, Q2L achieves the same performance in mAP as the proposed method. However, Q2L only requires a Transformer decoder for decoupling label-specific features. Therefore, we question the generalizability of the proposed causal intervention mechanism on the multi-label image classification task, wondering whether it is only effective on specific datasets.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See Weakness and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer NEe5 for the constructive comments and suggestions. In the following, we present our responses addressing the raised concerns. Should our rebuttal effectively address the concerns, we kindly hope you can raise your score.
**(Weakness 1 and 2)** Thanks for pointing these out; we will correct these typos in our revised manuscript.
**(Weakness 3)** To differentiate these two different fully-connected layers, we will use $f_{fc1}$ and $f_{fc2}$ in Eq. 6 and Eq. 11, respectively.
**(Weakness 4)** Thanks for your comments, which remind us that the current diagram for causal label correlation modeling in Fig. 4 lacks a crucial arrow to “feed” the added features into the cross-attention operation. We update our Fig. 4 (see our attached pdf file) to show this.
**(Weakness 5)** Thanks for your kind suggestion, and we agree that random confounders should be the simplest baseline for comparison. We conduct experiments with this random version of confounders: it leads to significant performance degradation (measured by mAP), especially on the “Exclusive” subset of COCO-Stuff. We believe that confounders should be modeled at the semantic level (which cannot be achieved by random vectors), and thus present our K-means-based solution built upon semantic features extracted by a pre-trained classification backbone network.
| Method | Exclusive | Co-occur | All |
| --- | --- | --- | --- |
| Random | 22.1 | 66.3 | 56.1 |
| K-means | 29.7 | 69.6 | 60.6 |
**(Weakness 6)** We agree with you that our modeling process (as formulated in Eq. 11) should be further clarified. Please refer to our responses to Question 1 in the following.
**(Question 1)** Regarding the $f_{merge}$ in the first two rows of Eq. 11, it represents our high-level idea that “we seek a model to combine the information of all label-specific features $x_i$ and one potential confounder feature $c$ to predict the logit of label $Y_j$”. In the third row of Eq. 11, we merge all label-specific features $x_i$ into a feature matrix $X$, and thus $f_{merge}$ can be removed.
Regarding the summation over all confounders, thank you for the reminder: it cannot be removed in Eq. 11. We will update Eq. 11 in our revised paper as follows.
$$
\begin{align}
{Z}\_c &= {X} + c \,, \\\\
\hat{y}\_{causal}^j &= f_{merge}([P(Y_j|do(X_1)), ..., P(Y_j|do(X_N))]) \\\\
&= f_{merge}([\sigma(\sum_{c} f_{y_j}(x_1, c) \cdot P(c)), ..., \sigma(\sum_{c} f_{y_j}(x_N, c) \cdot P(c))]) \\\\
&\approx \sigma(\sum_{c} f_{y_j}({X}, c) \cdot P(c)) \\\\
&= \sigma(\sum_{c} f_{fc2}(f_{cross\\_atten}(y_j, {Z}_c, {Z}_c))\cdot P(c))
\end{align}
$$
where ${Z}\_c$ is the addition-based combination of all label-specific features ${X}$ and a confounder feature $c$,
and $f\_{fc2}$ is a fully-connected layer applied upon the cross-attention feature to obtain the logit.
We provide the following pseudocode to illustrate the implementation process of Eq. 11.
---
Let $X \in \mathbb{R}^{N \times D}$, $C \in \mathbb{R}^{M \times D}$, and $P \in \mathbb{R}^{M}$ denote the label-specific features, confounders, and priors, respectively.
```python
Z = X.unsqueeze(1) + C.unsqueeze(0)  # Z has shape (N, M, D)
yj = X[j, :]
y_causal_j = 0
for c in range(M):
    y_causal_j += fc(cross_atten(yj, Z[:, c, :], Z[:, c, :])) * P[c]
y_causal_j = y_causal_j.sigmoid()
```
---
**(Question 2)** To ensure fair comparisons, we retrain these models using the code released by the authors.
**(Question 3)** Honestly, we present our results on MS-COCO for completeness, rather than to show the advantage of our method. This is because MS-COCO generally satisfies the i.i.d. assumption, while our method aims to improve practical multi-label recognition where the training and test images may not follow the i.i.d. assumption, as the co-occurrence between objects might change at test time. We follow the basic experimental settings in [1], which also aims to overcome contextual bias (but not from a causal intervention perspective), and additionally present our results under a challenging cross-dataset setup. In this sense, our main experimental results are sufficient to validate the core contributions of this work: our method significantly outperforms existing methods under non-i.i.d. settings, and can still achieve competitive results in the i.i.d. setting.
[1] Singh, Krishna Kumar, et al. "Don't judge an object by its context: Learning to overcome contextual bias." CVPR. 2020.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We hope that our rebuttal has effectively clarified your confusion and that the additional experiments we provided have strengthened the validity of our approach. We eagerly await your feedback on whether our response has adequately resolved your concerns, or if further clarification is needed.
---
Rebuttal Comment 1.2:
Title: Response to Author Rebuttal
Comment: Thank you very much for your thorough response to my comments and questions. Most of my doubts are well addressed. However, I still have concerns about the effectiveness of the proposed method, given the fact that the proposed method shares the same label-specific feature learning architecture as Q2L [1], but does not show any performance improvement on MS-COCO with the additional mechanism of causal intervention. In contrast, CCD [2], IDA [3], CMLL [4] and PAT [5] have all demonstrated the effectiveness of causal intervention on this dataset. The author's explanation is not convincing.
[1] Query2Label: A Simple Transformer Way to Multi-Label Classification (arXiv 2021);
[2] Contextual Debiasing for Visual Recognition with Causal Mechanisms (CVPR 2022);
[3] Causality Compensated Attention for Contextual Biased Visual Recognition (ICLR 2023);
[4] Causal multi-label learning for image classification (NN 2023);
[5] Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training (ICML 2024).
---
Rebuttal 2:
Comment: Thank you for your response. **We must emphasize that our approach does not simply add causal intervention to Q2L**. Q2L includes both a Transformer Decoder and a Transformer Encoder, and uses the specially proposed ASL loss [1]. In contrast, without considering the causal intervention branch, we only utilize the Transformer Decoder, with fewer parameters, for feature decoupling, and train our model with a general multi-label classification loss.
Furthermore, **Table 8 aims for performance comparison, rather than ablation study. It does not indicate that our causal intervention method gains no performance improvement on MS-COCO.** We conducted an ablation study on MS-COCO as shown in the table below (CMLL was not compared due to lack of experiments at resolution 448x448 and unavailability of code). It can be seen that our causal intervention module still provides some improvements on the MS-COCO dataset. Compared with Q2L, we achieve similar results with fewer parameters (refer to our rebuttal for Weakness 1 of Reviewer wRCW). Additionally, when compared with other methods specifically designed based on causal theory for the MS-COCO dataset, our method still achieves competitive results.
| Method | mAP |
| --- | --- |
| Res101 | 79.1 |
| Res101 + Decouple | 83.7 |
| Res101 + Decouple + Causal (Ours) | 84.9 |
| Q2L | 84.9 |
| CCD | 84.0 |
| IDA | 84.8 |
| PAT | 85.0 |
We consider pursuing higher accuracy on MS-COCO as our future work, by combining our causal intervention method with more advanced backbone networks, multi-label loss, or pre-training methods.
[1] Asymmetric loss for multi-label classification, 2020.
Title: Response to Reviewer NEe5
---
Rebuttal Comment 2.1:
Title: Response to Author Rebuttal
Comment: Thank you for your quick reply. All my concerns are addressed.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer NEe5
Comment: We sincerely thank you once again for the constructive comments and suggestions. Should our rebuttal effectively address the concerns, we kindly hope you can raise your score, which we believe is vital for the final decision on our work. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful analysis and feedback, which are invaluable for understanding how to improve our paper. We address the questions and concerns raised by each reviewer point-by-point in the respective threads below. We also attach a PDF containing one updated Figure in response to Reviewer **NEe5** and one additional Figure for Reviewer **BNe7**.
In general, reviewers agree that we present a novel and effective pipeline that integrates causal label correlation modeling, which can improve practical multi-label recognition where the training and test images may not follow the i.i.d. assumption, as the co-occurrence between objects might change in testing. Specifically, we appreciate their assessment of our paper writing as **“well written with clear motivation and objectives”** (NEe5) and **“well-written and easy to understand”** (BNe7). The reviewers concur that **“Causal intervention is technically novel and well motivated in terms of multi-label image classification”** (NEe5). They give positive comments on our experiments that **“Experimental results are impressive, outperforming sub-optimal methods by a considerable margin. Ablation studies are also well-designed to showcase the contribution”** (NEe5), **“extensive experiments across multiple datasets, demonstrating the effectiveness of the proposed method. The results consistently show improvements over existing approaches, particularly in scenarios with contextual biases, underscoring the practical value of the method”** (wRCW), and **“This method seems easy to implement. The approach achieves good results. The problem is interesting in multi-label recognition tasks”** (BNe7).
Most of the reviewers expect more in-depth analysis and comparisons of the **confounder modeling** in our pipeline. Here, we would like to state the motivation, approach, validation, and limitation of our confounder modeling in detail.
**Motivation**: On the one hand, confounder modeling is crucial for causal intervention, as evidenced by our new experiments that compare random confounders with our method (see our response to Weakness 5 for reviewer **NEe5**). On the other hand, based on our survey, confounder modeling for visual recognition tasks remains an open question. In this work, we present our understanding of confounder modeling: confounders for recognizing a certain object are often hard to define and enumerate – objects, scenes, and even the texture of the environment are all potential confounders.
**Approach**: We model the confounders by clustering the spatial features extracted with a pre-trained classification network, as the classification network tends to activate object-level, texture-level, or scene-level semantic concepts [1]. By clustering spatial features with K-means, we obtain a compact set of prototypes to represent potential confounders like objects, textures, and scenes. Our approach is significantly different from previous works like VC R-CNN [2], which relies on bounding box annotations (often absent for the image classification task) and only considers pre-defined objects as confounders.
**Validation**: In the main body of our paper, we validate the effectiveness of our confounder modeling approach in Tab. 6, and investigate the impact of the number of clustering centers in Tab. 4. Furthermore, we compare our confounder modeling approach with simple random confounders during rebuttal (response to Weakness 5 for reviewer **NEe5**).
**Limitation**: As described in Sec. A.1 of our Supplemental Material, our confounder modeling approach is dataset-dependent, although our cross-dataset multi-label image classification experiments justify its generalization ability to a large extent. On the other hand, although our modeled confounders often correspond to object-level, texture-level, and scene-level concepts, it is difficult to determine the specific semantics of these concepts, resulting in limited interpretability. We leave a better confounder modeling approach as future work.
[1] Zhou, Bolei, et al. "Learning deep features for discriminative localization." CVPR. 2016.
[2] Wang, Tan, et al. "Visual commonsense R-CNN." CVPR. 2020.
Pdf: /pdf/7a57ede6f4c13afa71aa91cd504fb1c01ddf6afc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Proving Theorems Recursively | Accept (poster) | Summary: This paper designs a novel hierarchical search algorithm (POETRY) for generating formal proofs with large language models step-by-step. In particular, POETRY first searches for proof steps at proof level 0 (these steps typically correspond to subgoals in the proof), and checks the correctness of the level-0 proofs by assuming that all the subgoals can be proved. If and only if the level-0 proofs are correct, POETRY recursively searches for proofs of each of the proposed subgoals. Compared with baseline best-first search methods under the same compute, POETRY significantly improves the pass@1 success rate on both the miniF2F valid and test sets, as well as the PISA test set.
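The recursive procedure described in this summary can be sketched roughly as follows; this is an illustrative reconstruction, not POETRY's actual code, and `propose_sketch` and `verify` are hypothetical stand-ins for the language-model policy and the proof checker:

```python
def prove_recursively(goal, propose_sketch, verify, depth=0, max_depth=8):
    """Search for a proof sketch at the current level, then recurse on its subgoals."""
    if depth > max_depth:
        return None
    # Level-`depth` search: each candidate sketch closes its subgoals with
    # placeholders, so it can be checked before the subgoals are proved.
    for sketch, subgoals in propose_sketch(goal):
        if not verify(goal, sketch):
            continue  # sketch invalid even with all subgoals assumed
        # Only for a verified sketch: recursively search a proof per subgoal.
        subproofs = []
        for sub in subgoals:
            proof = prove_recursively(sub, propose_sketch, verify, depth + 1, max_depth)
            if proof is None:
                break  # a subgoal failed; try the next candidate sketch
            subproofs.append(proof)
        else:
            return (sketch, subproofs)  # all subgoals proved
    return None


# Toy instantiation: an integer goal n reduces to the single subgoal n - 1,
# bottoming out at 0. Both helper names are made up for this example.
def toy_propose(goal):
    if goal == 0:
        yield ("trivial", [])
    else:
        yield ("reduce", [goal - 1])

proof = prove_recursively(2, toy_propose, lambda goal, sketch: True)
print(proof)  # -> ('reduce', [('reduce', [('trivial', [])])])
```

The key property mirrored here is that a sub-level search is entered only after the enclosing sketch has been verified with its subgoals assumed.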
Strengths: - The POETRY algorithm is neat and novel, mimicking how humans write mathematical proofs hierarchically.
- This paper is well written and easy to follow.
- The POETRY algorithm has the potential to be further improved by incorporating premise-selection techniques such as Sledgehammer or Magnushammer.
Weaknesses: - From Table 1, it seems to me that the improvement from the search algorithm is less significant than that from beam search. A drawback of the beam search method is that the algorithm becomes deterministic, meaning that generating more samples per theorem does not improve its performance. Since this paper only shows pass@1 results, it is unclear how the POETRY algorithm scales with more computing resources.
Technical Quality: 4
Clarity: 4
Questions for Authors: - In the example shown in Figure 4, it seems to me that Path 1 is quite similar to the first part of the proof found by POETRY. Can the authors elaborate on why the GPT-f baseline uses a more complex way to prove the first property?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors adequately addressed the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments, and thank you for your acknowledgment of POETRY. We hope our responses and rebuttal materials (RM) address your concerns.
## Weaknesses
### w1. How POETRY scales with more computing resources.
Indeed, a large portion of the improvement comes from using the beam-search decoding method, resulting in deterministic outcomes. However, we believe the performance improvement brought by POETRY is unrelated to the specific decoding method used. Several alternative methods, such as beam sampling or sampling with a larger number of samples per input, can replace beam-search decoding.
Regarding scaling POETRY with more computing resources, it can be extended with larger models, more training data, and expert iteration processes. By arming POETRY with more capable language models, the advantage of recursive theorem proving will be amplified as we tackle more complex problems that require more structural reasoning.
Moreover, POETRY opens up new avenues for parallel computing. The current version of rBFS is single-threaded, adapted from BFS to showcase the performance of recursive theorem proving. In future work, we could explore parallel recursive Monte Carlo tree search, where each sub-level proof sketch can be explored concurrently. Information from different sub-searches can backpropagate to the main branch, dynamically allocating computing resources to more promising sub-searches.
Therefore, we believe POETRY’s advantage will not diminish as computing resources scale up; instead, it will show more prominent advantages compared to step-by-step approaches. We are excited to explore this aspect in our future work.
## Question
### Q.1 Cause of the complex proof in Figure 4(b).
The GPT-f Baseline example in Figure 4(b) uses a standard best-first search algorithm to find the proof, where the choice of nodes to explore is determined by the cumulative log probability of the tactics. Although the GPT-f Baseline manages to produce the first two steps in Path 2, the log probability for the second line `show "⋀n. deg R f < n ⟹ n_mult f n = 0"` is too low compared to other generated tactics. Consequently, the search algorithm keeps exploring other states (like steps in Path 1) with higher log probabilities.
While one might use a separate value function model to re-rank the proof states, it is still challenging to ensure the value function will always prefer the better path. POETRY, on the other hand, takes advantage of the Isabelle system, allowing proofs to be validated in advance and thus reducing the dependency on the value function. | Summary: This paper proposes POETRY, a method for formal theorem proving using language models by training the model to iteratively decompose the problem into sketches, recursively. The authors focus on Isabelle. At each step, POETRY takes a proof state and goal and predicts either a formal sketch (a proof using sorry at each step), or a ground proof step (e.g. 'by ...') that requires no recursion. These intermediate states are visited within best-first search, where the score of a node is given by the log-probability of all predictions made so far to get to that node. Intuitively, POETRY works by recursively generating lower level steps / sketches, until finding a complete proof, getting feedback from the formal environment at each step. To train the LM, the authors introduce a simple method to decompose existing Isabelle proofs from the AFP as if they had been generated by this recursive proof generation process. Experiments on minif2f show improvements on top of GPT-f and a version of Thor without Sledgehammer.
Strengths: The paper is well motivated and tackles a timely topic, using a standard, hard benchmark (minif2f) for methods in this space. The writing is mostly clear (though see some notes below).
POETRY is a simple, sound and novel method to structure the proof generation process. It should be adaptable to other interactive theorem provers with some work. POETRY allows the prover model to get more intermediate feedback from the environment compared to methods that try to produce the whole proof at once. It also uses this feedback in a way that is complementary to proof repair methods (generally the standard approach when we consider using intermediate feedback).
Weaknesses: The choice of baselines (for Table 1) seems a bit convoluted. In particular, I don't really understand why use Thor without sledgehammer. The main point of Thor is to learn when to use hammers in proofs. Removing this makes the model much more similar to the GPT-f baseline.
As for the choice of pass@1, even though POETRY only makes a single prediction at the end, it gets feedback from Isabelle at each node in its tree. So that doesn't seem like a fair comparison either, if POETRY makes many intermediate predictions and calls Isabelle during its search, whereas GPT-f and Thor w/o sledgehammer seem to only produce and test a single prediction. It might be more fair to match the methods based on some other metric, like number of tokens generated, or number of calls to Isabelle (whichever seems to be the most significant bottleneck).
The question of "Can POETRY find longer proof?" is a bit ill-posed as is. It would be possible for a method to find very long proofs that do not *need* to be long, and do better on this analysis without really being able to prove more complex theorems. What I think the authors are trying to show here is that POETRY can solve harder problems, estimating hardness by looking at proof length. For this, you might want to compare success rate based on the length of the ground truth proof: perhaps the baselines perform very poorly on theorems where the human proof is longer, whereas POETRY might have a better success rate. Another option is to show that either POETRY generates proofs of similar length to the ground truth proof (so, when POETRY generates long proofs, you'd estimate that the ground truth proof would also be long), or that it generates proofs of similar length to the baselines in cases where they all manage to prove a theorem. Any of these would help show that this result is not trivial.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Is an intermediate sketch valid (accepted by Isabelle) as long as it's syntactically correct and the last step shows the thesis? Or do you manage to get richer feedback besides that the last step declares to show the thesis?
* For problems that both POETRY and the GPT-f baseline solve, does POETRY tend to generate longer proofs?
* As for the relationship with LEGO-Prover, you mention that in some cases it is impossible to decompose a proof into lemmas, but still possible to decompose it into sketches recursively. Do you have an example?
* What exactly is the search algorithm used in the Thor w/o sledgehammer and GPT-f baselines? It is a one-shot prediction? Or do you use the (also best-first) search method described in the original GPT-f paper?
* What is Thor without sledgehammer? It sounds like a different thing other than Thor.
* I'm confused by what Figure 5 is trying to show. Fundamentally, reorganizing a proof into sketches shouldn't change its inherent complexity (e.g., the atomic proof steps). Is this just comparing the full proof length against the number of steps in the top-level sketch (without considering the deeper sketches recursively)? If you were to consider the steps in the deeper sketches recursively, I'm assuming you would not expect to see a reduction (if you do, where would that come from? do you have an example?)
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We hope our responses and rebuttal material address your concerns.
## Clarifications on the summary
There are a few misunderstandings in the summary that we would like to clarify: within each proof sketch, POETRY operates step-by-step, searching for a complete proof sketch. At each step, POETRY takes a proof state (goal) as input and predicts one proof step, not the entire proof sketch. This step could be a conjecturing step followed by `sorry` (e.g., `have c1: "a=1" sorry`), or a normal proof step (e.g., `by ...`). The search for the current sketch only stops after the sketch is complete and accepted by Isabelle.
## Weaknesses
### w1. Clarification on the baseline
Both `Thor w/o Sledgehammer` and the `GPT-f Baseline` are reproductions of the GPT-f paper in the Isabelle formal system. `Thor w/o Sledgehammer` is directly adopted from the original Thor paper as an ablation setting using only the LM. Since Thor is not open-sourced, we reproduced it as the `GPT-f Baseline`. The methodology in `Thor w/o Sledgehammer` is the same as in our `GPT-f Baseline`, with only the implementation differences detailed in Section 4.1. The use of `Thor w/o Sledgehammer` in Table 1 is therefore not convoluted: it shows the previously published implementation result of the `GPT-f Baseline` for reference.
### w2. Pass@1 metric
POETRY, the `GPT-f Baseline`, and `Thor w/o Sledgehammer` all perform step-by-step searches using the best-first search algorithm. Therefore, the number of interactions with Isabelle and with the language model is the same for POETRY and both baselines, so it is fair to use the pass@1 score for comparison.
### w3. On "Can POETRY find longer proof?"
It is crucial to demonstrate POETRY’s ability to find longer proofs, as it shows its capability to solve complex problems. Previous step-by-step methods often result in significantly shorter proofs than those found by POETRY, diminishing the possibility of solving more complex problems.
We appreciate your insightful suggestions for analyzing POETRY's results. We have further included histograms showing the number of problems solved by the GPT-f Baseline and POETRY, categorized by the length of the ground truth proofs (Figure 1 in the RM). From the figure, it is evident that POETRY has a clear tendency to solve harder problems (those with longer ground truth proofs) and outperforms the GPT-f Baseline across various problem difficulties in the multi-level subset. Therefore, we believe POETRY demonstrates a clear advantage over the GPT-f Baseline and is capable of solving more complex problems.
## Question
### q1. The condition of a sketch being valid
When a sketch is accepted by Isabelle (receiving ONE signal `no goals`), it means the sketch is both syntactically and semantically correct. There is no difference in the signal given back from Isabelle whether a proof sketch is valid or the complete proof is valid. POETRY relies on post-checking to determine if the proof contains `sorry`.
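Since Isabelle returns the same `no goals` signal whether a sketch or a complete proof is valid, the post-check reduces to scanning the assembled proof text for a remaining `sorry` keyword. A minimal illustration (not POETRY's actual code, and ignoring Isabelle comments and string literals for simplicity):

```python
import re

def proof_is_complete(proof_text: str) -> bool:
    """Toy post-check: a proof is complete only if no standalone `sorry`
    keyword remains anywhere in its text."""
    return re.search(r"\bsorry\b", proof_text) is None
```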
### q2. Proof length differences
As shown in our response to w3, POETRY can indeed solve harder problems. Regarding proof lengths, POETRY does not tend to generate longer proofs than the GPT-f Baseline for problems they both solve. Algorithmically, POETRY behaves almost identically to the GPT-f Baseline, only proceeding to deeper levels when a complete proof sketch is found, which is challenging. Statistically, 82.3% of the problems solved by both have the same proof length, and 96.0% have proof length differences smaller than 3, caused by algorithmic randomness.
Occasionally, POETRY generates longer proofs with redundant steps, as shown in Figure 2(a) in the RM. This is due to POETRY’s greedy exploration mechanism, which sometimes explores dummy sketches. These cases are rare (2.4% of solved problems where POETRY’s proof is 3 steps longer than GPT-f Baseline’s). We believe this issue can be addressed by implementing a value function to prioritize informative sketches over redundant ones in future work.
Therefore, combining the w3 response, it is evident that POETRY not only handles simple problems but also excels at solving harder problems and conducting complex structural reasoning.
### q3. Example in LEGO-Prover
Here is an example from the LEGO-Prover GitHub:
```isabelle
theorem aime_1983_p9:
  fixes x::real
  assumes "0<x" "x<pi"
  shows "12 \<le> ((9 * (x^2 * (sin x)^2)) + 4) / (x * sin x)"
proof -
  define y where "y = x * sin x"
  have "12 \<le> (9 * y^2 + 4) / y"
  proof -
    have c0: "y > 0"
      by (simp add: assms(1) assms(2) sin_gt_zero y_def)
    have "(9 * y^2 + 4) \<ge> 12 * y"
      by sos
    then show ?thesis
      using c0 by (simp add: mult_imp_le_div_pos)
  qed
  then show ?thesis
    by (simp add: power_mult_distrib y_def)
qed
```
Conjectures like `have "12 \<le> (9 * y^2 + 4) / y"` or `have c0: "y > 0"` are relatively local. Making these into independent lemmas would require passing many local variables, resulting in redundant proof lines and lemma statements. This example is not cherry-picked; the problem is common across many theorems. POETRY addresses this by building proofs level by level directly within the proof, avoiding redundant lemma statements.
### q4. The search algorithm used in the baselines
`Thor w/o sledgehammer` and `GPT-f Baseline` use the best-first search algorithm described in the original GPT-f paper.
### q5. Explanation with Thor w/o sledgehammer
Please refer to w1.
### q6. Confusion in Figure 5(b)
This figure compares the full proof length (`Original`) against the number of steps in all sketches (`Recursive`), including both top-level and deeper sketches. The Original line counts each complete theorem as a single data point (e.g. proof with length 20), whereas the Recursive line treats each decomposed proof sketch as a single data point (e.g., decomposed sketches with length 9 and 11). Therefore, the total number of data points for these two plots is different. We will provide a more detailed explanation in our final paper.
---
Rebuttal 2:
Title: Waiting for further discussion
Comment: Dear Reviewer 9pow,
We hope our rebuttal sufficiently addressed your concerns. Is there any additional information we can provide that might lead you to increase your rating? We look forward to your feedback.
Many thanks,
Author
---
Rebuttal Comment 2.1:
Comment: I thank the authors for the clarifications on the method, as well as the new analyses. I do think these clarify the results, and have alleviated most of my concerns. I have raised my score assuming the new results will make it to the paper: I think the method contributes a simple but neat idea for the problem.
About q1. (condition of a sketch being valid), I think your answer confirms my understanding, but I'd still like to clarify just because I'm interested (and this would also be helpful to clarify in the paper). Since sorry can prove any goal, my question was whether the semantic validation that Isabelle is capable of doing is essentially just that the sketch ends by concluding the thesis. For instance, this would be a toy example based on the LEGO-Prover example above:
theorem aime_1983_p9:
  fixes x::real
  assumes "0<x" "x<pi"
  shows "12 \<le> ((9 * (x^2 * (sin x)^2)) + 4) / (x * sin x)"
proof -
  have c0: "x = 0" by sorry
  then show ?thesis by sorry
qed
I'm understanding this would be marked by Isabelle as a valid sketch, or is there something else I'm missing?
What I wanted to understand is exactly how much semantic feedback you can get at the sketch level, since sorry can prove anything. Your results show that structuring the proof search process in this way is helpful overall, but I'm just trying to understand whether it's more of a helpful bias to the LLM vs how much extra feedback it actually enables you to extract from the environment.
---
Reply to Comment 2.1.1:
Title: More explanation on `show ?thesis`
Comment: Dear reviewer 9pow
Thank you for your prompt reply and for recognizing the value of our work. We will include all the new results in our paper.
Regarding the example you mentioned, POETRY does validate the sketch, and the usefulness of the skipped conjectures was not tested. We are unable to extract additional semantic information from the environment to assist in this situation.
We acknowledge this problem during the development of POETRY, and to address it, POETRY employs the “last unsolved sorry” strategy during proof search (as detailed in Appendix A.1). This strategy focuses on the final sorry in the proof sketch (i.e., the sorry in `show ?thesis sorry`), allowing for quicker validation of the proposed conjectures by first concentrating on the final incomplete part of the proof.
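The "last unsolved sorry" strategy mentioned here can be illustrated with a small helper. The list-of-step-strings representation is a hypothetical simplification of an Isabelle proof sketch:

```python
def last_unsolved_sorry(sketch):
    """Toy illustration of the 'last unsolved sorry' strategy: among the
    deferred steps in a sketch, return the index of the final one, which
    is expanded first. `sketch` is a list of step strings (a simplified,
    hypothetical representation)."""
    indices = [i for i, step in enumerate(sketch) if step.endswith("sorry")]
    return indices[-1] if indices else None
```

Expanding the final `sorry` first means the concluding `show ?thesis` part is validated early, before effort is spent on the proposed conjectures.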
Another potential approach could involve explicitly disallowing the use of sorry after `show ?thesis` and requiring the model to find a complete proof within the `show ?thesis` block. However, this is acceptable for proofs that can be accomplished with a single level of `show ?thesis` (i.e., `show ?thesis by ...`). For proofs requiring more complex structural reasoning within the `show ?thesis` part, this method would reduce POETRY to a step-by-step approach and fail to leverage the structural reasoning capabilities that POETRY is designed to utilize.
Given the current constraints in extracting semantic information from the environment, beyond the “last sorry” strategy, a promising direction might involve using neural-based value functions as assistants to help distinguish between useful and less useful sketches. We are excited to explore this aspect in our future work.
Have our responses above adequately addressed your concerns? Is there any additional information we can provide to persuade you to improve your rating? | Summary: The authors introduce a method called POETRY (proving theorems recursively) for constructing formal proofs in Isabelle/HOL. POETRY performs best-first search on proof sketches guided by a language model fine-tuned on proof sketches. POETRY outperforms other algorithms guided by language models that prove theorems step-by-step. POETRY also outperforms other methods that integrate automated theorem provers and language models.
Strengths: While the idea of a proof sketch is not novel, the data curation process that enables the construction of proof sketches is. This takes a step towards generating conjectures, which would be crucial to making progress on neural theorem proving.
Weaknesses: 1. It seems to me that the real reason for the success of POETRY is not the algorithm per se, but the data curation to construct proof sketches. In this vein, it would be instructive to show a proof before and after inserting sorry, to illustrate how the dataset is constructed.
2. There should be more context explaining how to compare the Lean results against Isabelle/HOL. These are two different formal systems, with different proof methodologies.
3. More details on success cases and failure cases would help understanding the pros and cons of the approach taken in POETRY. For instance, are there certain kinds of problems that POETRY performs well on, e.g., geometry problems? How does POETRY perform when existentials need to be instantiated? Is it the case that POETRY can prove the same theorems as previous step-by-step approaches and can additionally prove more theorems that are longer, or do the approaches prove different short theorems?
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The distinction between a proof sketch and the decomposition of a theorem into a tree of conjectures needs to be addressed. Is there any difference?
2. In your training procedure, do you fine-tune on any theorems in the miniF2F dataset?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We hope our responses and rebuttal material (RM) address your concerns.
## Clarification on strengths.
There are a few misunderstandings regarding our strengths that we would like to clarify. While we are not the first to use the term `proof sketch`, there are significant differences between the `proof sketch` in POETRY and those in previous works like `Draft, Sketch and Prove` or `LEGO-Prover`:
- In previous work, `proof sketches` do not use the keyword `sorry` to defer the proof of concrete conjectures but instead use Sledgehammer to directly prove these conjectures.
- The “proof sketch” is not used recursively in previous work, with only one sketch per problem. In the appendix, we present examples of proofs that require multiple layers of recursive expansion, which are challenging to resolve using automated tools like Sledgehammer within a single sketch. Our method allows these subgoals to be addressed incrementally, maintaining a consistent level of difficulty throughout the process.
## Weaknesses
### w1. Dataset construction.
Please see Figure 1 in the paper for the process of data construction. Figure 1(a) shows the original proof, and Figure 1(b) depicts the decomposed proof sketches with sorry added. Additionally, Table 1 in the RM shows the final training crops for this problem. The final training data for POETRY and GPT-f Baseline have the same input text and the same number of training examples. The only difference is the additional `sorry` keyword in proof steps that state conjectures or subgoals. Therefore, the model trained with POETRY data does not receive more information compared to the `GPT-f Baseline`. The success of POETRY not only comes from the data curation process but also from the novel recursive BFS and the recursive proving methodology itself. We will include this table in our paper for better illustration.
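The curation step can be illustrated on a toy nested proof. The representation below (nested Python lists, with a `(statement, sub_proof)` pair for each step that opens a deeper level) is hypothetical; the real pipeline parses actual AFP proofs:

```python
def decompose(proof):
    """Toy sketch of the data curation: split a nested proof into flat
    training sketches, deferring each sub-level with `sorry`. A proof is
    a list of steps; a step is either a string (atomic) or a
    (statement, sub_proof) pair that opens a new proof level."""
    sketches = []

    def walk(steps):
        sketch = []
        for step in steps:
            if isinstance(step, tuple):        # step opens a deeper level
                statement, sub = step
                sketch.append(f"{statement} sorry")
                walk(sub)                      # sub-level becomes its own sketch
            else:
                sketch.append(step)
        sketches.append(sketch)

    walk(proof)
    return sketches
```

Each sub-level becomes its own training sketch, while the parent sketch keeps only the statement followed by `sorry`, so the training text matches the baseline's except for the deferral keyword.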
### w2. Context explaining the comparison with Lean result against Isabelle.
The results for Lean and Isabelle are indeed not directly comparable, and we will add context explaining this matter in the paper. Here is a brief explanation:
- We follow previous work like [Li et al.](https://arxiv.org/abs/2404.09939v1) to provide a comprehensive demonstration of the benchmark results in miniF2F.
- Although the results are not directly comparable, the high-level ideas of these approaches share many similarities. The GPT-f Baseline shows much resemblance to PACT, FMSCL, and LeanDojo. By showing these results side by side, we can better understand how different formal systems perform in the benchmark.
### w3. Details on Pros and Cons of POETRY
We have followed your advice and conducted a more thorough analysis of POETRY.
- **Problem types that POETRY excels at.** In the RM, Figure 1 shows the number of problems solved by GPT-f Baseline and POETRY based on the length of the ground truth proofs. POETRY has a clear tendency to solve harder problems with longer ground truth proofs. As a domain-agnostic method, POETRY may not demonstrate a clear advantage in specific mathematical domains. However, Figure 1(c) clearly shows that POETRY has a significant advantage in problems requiring multiple levels of reasoning.
- **POETRY performance on existential instantiation.** In Isabelle, existential instantiation can be performed using the `define`, `let`, or `obtain` tactics. `define` and `let` typically require explicit construction of a term that satisfies the existential condition and do not introduce a new level, making POETRY's behavior similar to the GPT-f Baseline. In contrast, `obtain` allows extracting a term that satisfies a given property without explicit construction, with POETRY using `sorry` to skip the verification of satisfiability. This approach offers more flexibility, enabling the proof to focus on variable usage while deferring validation. Although not directly involving existential instantiation, below is an example of a proof found by POETRY using `obtain`:
```isabelle
lemma terminates_tl_raw:
  assumes "terminates g"
  shows "terminates (tl_raw g)"
proof
  fix st :: "bool \<times> 'a"
  obtain n s where "st = (n, s)"
    by (cases st) blast+
  from assms have "s \<in> terminates_on g"
    by (metis terminatesD)
  thus "st \<in> terminates_on (tl_raw g)"
    unfolding \<open>st = (n, s)\<close>
    apply(induction s arbitrary: n)
    by(case_tac [!] n)(auto intro: terminates_on.intros)
qed
```
Here, the top-level sketch instantiates `n` and `s` and proceeds with proving conjectures, with the actual verification of the condition deferred.
- **POETRY indeed proves more theorems that are longer and harder.** From Figure 1 in the RM, it is clear that POETRY is capable of proving more difficult problems (with longer ground truth lengths) and excels in multi-level subsets.
Figure 2 in the RM provides more cases to better understand POETRY’s pros and cons. As illustrated, though happening sparsely, POETRY’s greedy exploration mechanism might lead to finding proofs with redundant steps or failing to find shallow proofs. However, we believe this issue can be addressed by implementing a value function to prioritize informative sketches over redundant ones in future works.
## Questions
### q1. Relations between proof sketches and the tree of conjectures.
Proof sketches strictly include the tree of conjectures and also contain various other elements. For example, the `obtain` and `show` statements instantiate variables, and the `subgoal` tactic focuses on specific goals (shown in Figure 2(c) in the RM). POETRY inserts `sorry` whenever a proof step increases a proof level, allowing any form of proof sketch as long as it is permitted by the Isabelle language.
### q2. Finetuning on miniF2F.
No, we do not finetune any data with the miniF2F dataset. All the training data we use comes from the AFP Library and Isabelle built-ins.
---
Rebuttal 2:
Title: Waiting for further discussion
Comment: Dear Reviewer BhWC,
We hope our rebuttal sufficiently addressed your concerns. Is there any additional information we can provide that might lead you to increase your rating? We look forward to your feedback.
Many thanks,
Author
---
Rebuttal 3:
Title: Thank you for your response
Comment: Thank you for your response and for clarifying the strengths. Perhaps I worded the section on the strengths poorly, but I understood that the proof sketches both insert a sorry and are applied recursively. I'm merely pointing out that recursive decomposition and lazy evaluation are in general not novel. On the contrary, I think it is clever and novel to take a formal proof dataset, algorithmically insert sorry/admitted at strategic points, and show that this additional signal can be leveraged.
w1. Dataset construction.
Thank you for the reference to Figure 1. To clarify my original question concerning having a before and after sorry, which is partially shown in Figure 1, is precisely what information is contained in the arrow from the upper level to a recursive level. In particular, does it contain the minimal proof context for the entire theorem required to make the subgoal well-typed or just the local subgoal plus a reference to a location in the existing proof? It was not clear to me which it was from the Figure or the paper description.
> Therefore, the model trained with POETRY data does not receive more information compared to the GPT-f Baseline.
I disagree with this statement. Your approach augments the dataset with sorry which precisely indicates when multi-level proofs are required as per your dataset curation process. Put another way, sorry indicates that a hammer will likely fail. This is additional information that other methods do not see and could benefit from! To test this, you could change the dataset curation process to insert a sorry every n proof levels instead of 1, or at random proof levels, to tease apart the effect of dataset augmentation on your approach.
w2. Context explaining the comparison with Lean result against Isabelle.
Thank you for this additional clarification.
w3. Details on Pros and Cons of POETRY
Thank you for the additional work in the RM. With regard to the types of problems that POETRY excels at, I buy that it does indeed prove longer theorems. However, I was wondering how it performs on different kinds of problems, such as algebraic vs. geometric, since algebraic problems are more compute-heavy (longer proofs) and geometric problems require more constructions (i.e., existentials). Thank you for the discussion on existentials, as this is a limitation of this work.
I appreciate the work overall and find the approach with dataset augmentation with sorry is valuable. I am inclined to maintain my score since I still have slight concerns about weakness 1.
---
Rebuttal 4:
Title: Response to Reviewer BhWC
Comment: Dear Reviewer BhWC,
We appreciate your prompt reply and recognition of our work.
## w1. Dataset Construction
Regarding your question, the language model perceives only the local subgoal when entering the next level, without knowing the location of the `sorry`. The rBFS handles all movements, whether going deeper or jumping backward. As the model progresses to the next level (as indicated by the arrow in Figure 1), the rBFS identifies the sorry and extracts the proof state immediately preceding it (i.e., the state of the conjecture). This proof state is then used to prompt the model for the next steps. Therefore, there is no difference between the input in the middle of the sketch and the input that starts a low-level sketch. While adding minimal proof context from the upper level could improve POETRY’s performance (and we believe it would), we did not include this to ensure a fair comparison with the GPT-f baseline. This allows us to clearly assess the impact of recursive proving.
> This is additional information that other methods do not see and could benefit from!
We agree that the model receives more information compared to the GPT-f baseline in this aspect, and we acknowledge the value of the suggested ablation experiment. However, due to the limited time remaining during the rebuttal period, we may not be able to complete this experiment. We will include these ablation results in our final paper.
## w3. Types of problems that POETRY excels
To provide more clarity on the types of problems where POETRY excels, we’ve included the table below, comparing the performance of POETRY and the GPT-f baseline across different mathematical categories in the PISA dataset. The results are categorized based on the directories of lemmas in the AFP library. From the table, we can see that POETRY outperforms the GPT-f baseline in most categories, including geometry and algebra.
| Category | GPT-f Baseline | POETRY |
| -------- | -------- | -------- |
| **Graph Theory** | 38.7% | **41.9%** |
| **Cryptography** | 65.2% | **69.5%** |
| **Logic** | 48.1% | **53.2%** |
| **Linear Algebra** | 39.3% | 39.3% |
| **Set Theory** | 54.5% | **59.1%** |
| **Computation Theory** | 51.7% | 51.7% |
| **Probability and Statistics**| 52.2% | **65.2%** |
| **Differential Equations** | 52.3% | 52.3% |
| **Combinatorics** | 55.0% | **62.5%** |
| **Geometry** | 46.7% | **50.0%** |
| **Abstract Algebra** | 41.5% | **50.9%** |
| **Algebraic Geometry** | 53.1% | **61.2%** |
| **Algorithms and Data Structures** | 59.1% | **63.6%** |
| **Functional Analysis** | **46.7%** | 40.0% |
| **Number Theory** | 35.0% | **45.0%** |
| **Miscellaneous** | **50.5%** | 50.0% |
Have our responses above adequately addressed your concerns? Is there any additional information we can provide to persuade you to improve your rating? | Summary: This paper introduces POETRY, a new method to prove theorems recursively. The key ideas are to use a modified best first search algorithm for the search part, and a *sorry* tactic for assumptions at the current level (to be proven later). The authors provide the intuition that this recursive structure allows POETRY to prove theorems in a top-down fashion similar to humans, getting into the details of proving a conjecture only if it is actually relevant to the best overall proof being explored. The authors conduct experiments with two standard benchmarks, showing notable improvements over baselines and SOTA search-based methods (but not LEGO-Prover etc. which rely on substantially larger general purpose LLMs).
Strengths: The paper is very well structured and clearly written. Intuitions, method details, connection to existing methods, limitations, and take away messages from experiments are all very well articulated.
The idea seems simple but is apparently novel (see 'weaknesses' below, related to this).
The gains over several baselines are notable, of 5% or more (absolute).
I am assuming the authors will publicly release the code of their POETRY system for further research on this topic.
Weaknesses: Not being very familiar with the area, I am surprised none of the existing SOTA methods use a similar recursive, top-down search in theorem proving. I will have to defer to other, more knowledgeable reviewers for assessing novelty of the present work.
I did not fully follow why a *novel* recursive best-first search strategy is needed here. The description in this section (3.2) can probably use some clarification. E.g., why could one not account for the conjecture's level in the utility of the conjecture, and thus implicitly enforce level-by-level proof search? On the same note, could the authors comment on the relationship between their proposed recursive best-first search and a combination of standard breadth-first search (i.e., staying within a level) and best-first search (i.e., preferring to explore the most promising node first)?
Just for completeness, it would have been good to know how well very large LLM based methods, such as LEGO-Prover, do on the considered benchmarks.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see weaknesses section above.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We hope our responses and rebuttal material address your concerns. As you mentioned in the strengths section, we will release all the code, models, and data on the POETRY system to support further research on this topic.
## Weaknesses
### w1. Novelty on the recursive method.
To the best of our knowledge, we are the first to introduce this type of recursive, top-down search in the domain of neural theorem proving. The challenge of step-by-step proof search has long been recognized, but no effective method has been proposed until now. Other reviewers (BhWC, 9pow, p8XS) acknowledge the novelty, simplicity, and effectiveness of POETRY’s design, and believe it will be crucial to advancing neural theorem proving.
### w2. The necessity of the novel recursive best-first search
There are various options to handle recursive proving approaches, and we chose to use recursive best-first search (rBFS) for the following reasons:
- **Alignment.** The rBFS algorithm aligns well with the recursive proving paradigm. As proofs are broken into several proof sketches, it's natural to align each sketch with an individual BFS search.
- **Equal Treatment for Equivalent Matters.** Within a single problem, each sketch we seek to find is fundamentally the same, allowing the same structure to occur in every sketch. Thus, we want equal treatment for every sketch and avoid unnecessary special-case handling at certain levels.
- **Better extensibility.** Equal treatment also enhances the algorithm’s extensibility. Currently, we use BFS as the core algorithm, but it would be easy to integrate new search algorithms like MCTS or other improved search methods into rBFS.
- **Easy parallelization.** Currently, rBFS is executed in a single-threaded manner, but since each sub-search is identical to the others, rBFS can easily run in parallel, leveraging more powerful computation resources.
As you pointed out, one could indeed use vanilla BFS for recursive theorem proving with a dedicated design of node priority scores to enforce implicit level-by-level search. However, while it is straightforward to prioritize by the conjecture’s level at the top level, ensuring the same behavior for conjecture levels inside the proof of a top-level conjecture becomes complex, and the added uncertainty of log-probabilities makes the algorithm intricate and hard to control. Nonetheless, we acknowledge that such an algorithm is possible and may have advantages over rBFS in certain aspects, though this design would lose the beneficial properties of rBFS listed above.
Regarding the combination of standard breadth-first search and best-first search, it’s important to note that the current best-first search operates similarly to the breadth-first search in the GPT-f paper. The priority score for the best-first search is cumulative log-prob, where the score for each node is calculated by accumulating all the log-prob from the root node to the current node. Consequently, nodes at deeper levels tend to have lower scores, and the actual behavior of best-first search resembles breadth-first search. Therefore, we could not come up with a way to combine breadth-first search and best-first search to tackle recursive proving approaches.
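To make the cumulative log-prob scoring concrete, here is a minimal sketch of such a best-first search (the `expand` and `is_goal` interfaces are hypothetical stand-ins, not the actual POETRY or GPT-f implementation). Since each child's priority adds a further negative log-probability to its parent's, deeper nodes score lower and the frontier tends to drain level by level, much like breadth-first search:

```python
import heapq

def best_first_search(root, expand, is_goal, max_nodes=10_000):
    """Best-first proof search with cumulative log-prob priorities.
    `expand(state)` yields (tactic, next_state, logprob) triples;
    `is_goal(state)` tests whether the proof is complete."""
    # heapq is a min-heap, so we push the negated cumulative log-prob.
    frontier = [(0.0, 0, root, [])]  # (-cum_logprob, tiebreak, state, proof)
    tiebreak = 1
    visited = 0
    while frontier and visited < max_nodes:
        neg_score, _, state, proof = heapq.heappop(frontier)
        visited += 1
        if is_goal(state):
            return proof
        for tactic, nxt, logprob in expand(state):
            # Child priority = parent's cumulative log-prob + this step's
            # log-prob, so every extra step can only lower the score.
            heapq.heappush(
                frontier,
                (neg_score - logprob, tiebreak, nxt, proof + [tactic]),
            )
            tiebreak += 1
    return None
```

For example, with a toy state space where every step costs log-prob -0.1, the search finds the shallowest proof first, illustrating the BFS-like behavior.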
### w3. The performance on miniF2F with large language model.
We have included Table 2 in the RM, listing baseline methods that utilize large language models on the miniF2F benchmark. It is important to note that these methods use a completely different approach to prove theorems. Instead of performing searches using BFS algorithms, these approaches leverage the general-purpose capabilities of LLMs to translate natural language proofs into formal code (often termed autoformalization). Therefore, comparing these methods with POETRY is not only unfair in terms of model size and the amount of training data but also incomparable in core methodology. | Rebuttal 1:
Rebuttal: Dear Reviewers and ACs,
Thank you very much for the time and effort you have dedicated to reviewing our paper. We appreciate the thorough suggestions and constructive feedback on our manuscript.
We are also grateful for the positive recognition from the reviewers regarding our motivation (eTZY, 9pow, p8XS), contribution (eTZY, BhWC, 9pow, p8XS), and strong results (eTZY, p8XS), as well as the potential future impact of our work (eTZY, BhWC, 9pow, p8XS). We acknowledge the concerns raised by reviewers BhWC and 9pow, which may stem from some previously incomplete observations in our work. Reviewer 9pow raised concerns about our baseline comparison, which appear to stem from misunderstandings of our methods and baseline. POETRY operates step-by-step, like GPT-f Baseline, within each proof sketch, without incurring extra computation costs compared to the baseline method, ensuring a fair comparison. We will provide further explanations in our revised manuscript.
We have carefully considered all the suggestions provided by the reviewers and included the additional information requested in our one-page rebuttal. We have denoted the attached PDF document as rebuttal material, abbreviated as RM. The key updates are as follows:
- **(BhWC)** Table 1 compares the constructed training corpus for GPT-f Baseline and POETRY, providing a clear view of the differences in training data.
- **(eTZY)** Table 2 illustrates benchmark results using a large language model. These results are not comparable but are presented for demonstration purposes.
- **(BhWC and 9pow)** Figure 1 shows a histogram of the number of problems solved by `GPT-f Baseline` and POETRY based on the length of the ground truth proofs. This analysis reveals that POETRY tends to solve more complex problems and excels at solving problems requiring structural reasoning.
- **(BhWC and 9pow)** Figure 2 provides two failure cases showcasing issues caused by the greedy exploration process, which can lead POETRY to find proofs with redundant steps or fail to find shallow proofs. However, such occurrences are rare.
Additionally, we outline our paper's main contributions, including additional conclusions drawn during the rebuttal:
- We propose POETRY, a novel approach for neural theorem proving. Addressing a long-standing issue in previous step-by-step approaches, we introduce a recursive proving approach to find proofs hierarchically within the proof. **To our knowledge, we are the first to explore the recursive proving paradigm in neural theorem proving.**
- We introduce a novel recursive best-first algorithm that naturally aligns with the idea of recursive theorem proving. Additionally, it has good extensibility and the potential for parallel computation.
- Experiments conducted on the miniF2F and PISA datasets demonstrate significant performance gains with our POETRY approach over state-of-the-art methods. POETRY achieves an average proving success rate improvement of 5.1% on miniF2F.
- A substantial increase in the maximum proof length found by POETRY is observed, from 10 to 26. Through thorough analysis, we confirm that POETRY is capable of solving more complex problems and excels at solving problems requiring structural reasoning.
We thank all the reviewers again for their time and effort in reviewing our paper and are committed to addressing every issue raised.
Pdf: /pdf/a34355654ab26b012ba5789d7b20e4f4b27f739b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neural Synaptic Balance | Reject | Summary: This paper aims to study and explain the phenomenon of neural synaptic balance, where a balanced neuron means that the total norm of its input weights is equal to the total norm of its output weights. Particularly, the authors study the reasons why and when randomly initialized balanced models (so, models whose neurons are balanced) tend to be balanced at the end of training as well. The study takes into account many different components of neural networks (activations, layer kinds, regularisers).
Strengths: The study is very comprehensive, and sheds light on some interesting properties of deep neural networks.
Weaknesses: While it is true that, as the authors state in the conclusion, neural synaptic balance is a theory that is interesting on its own, I would encourage the authors to expand the discussion on possible application domains of this theory. Why is it interesting? What are the advantages that a complete understanding of such phenomena could bring to the table?
Technical Quality: 3
Clarity: 2
Questions for Authors: Backpropagation is not biologically plausible, so does it really make sense to claim that the proposed methods are, if they are then applied to backprop-based models? I would suggest either removing such a discussion, or expanding on it by showing, even empirically on small models, that the results extend to different kinds of neural networks where both neural activities and synapses are updated locally in a bio-plausible way (PC). A third way of addressing this would be to add a discussion of the issue without doing the experiments.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: No concerns here
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer VcLX for the positive review of this work and insightful comments.
While we have focused here on developing the theory of neural synaptic balance, neural synaptic balance has practical applications. It can be viewed as an additional, complementary, method of regularization on par with other methods, such as dropout. It is based on a rigorous theory that connects it to convex optimization. And finally, it may have additional applications in biological or neuromorphic systems, due to the locality of the balancing operations.
The interesting fact about neural balance is that, while balancing a single neuron may ruin the balance of its adjacent neurons, iterated stochastic balancing of all the neurons in a network leads to a unique, stable, configuration of the weights (the globally balanced state).
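As a toy, self-contained illustration of this claim (our own sketch, not the authors' code), consider L2-balancing the hidden neurons of a two-layer ReLU network by purely local rescaling. Because ReLU is positively homogeneous, scaling a neuron's incoming weights by $\lambda$ and its outgoing weights by $1/\lambda$ leaves the network function unchanged. In this shallow case the neurons are independent, so the balanced state is reached exactly; in deeper networks, where balancing one neuron perturbs its neighbors, convergence to the unique global state is what the iterated stochastic balancing result asserts.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer ReLU network: x -> relu(W1 @ x) -> W2 @ h.
W1 = rng.normal(size=(4, 3))   # input weights of the 4 hidden neurons
W2 = rng.normal(size=(2, 4))   # output weights of the 4 hidden neurons

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def balance_neuron(W1, W2, i):
    """L2-balance hidden neuron i in place via a purely local rescaling.
    Choosing lam = sqrt(||w_out|| / ||w_in||) equalizes the two norms
    while, by positive homogeneity of ReLU, preserving the function."""
    w_in = np.linalg.norm(W1[i])
    w_out = np.linalg.norm(W2[:, i])
    lam = np.sqrt(w_out / w_in)
    W1[i] *= lam
    W2[:, i] /= lam

x = rng.normal(size=3)
y0 = forward(W1, W2, x)

# Iterated stochastic balancing: pick neurons at random and balance them,
# with no coordination between neurons.
for _ in range(200):
    balance_neuron(W1, W2, rng.integers(4))

# The input-output function is preserved, and every neuron ends up balanced.
assert np.allclose(forward(W1, W2, x), y0)
```

The asynchronous, coordination-free loop above mirrors the "global order from local order" point: each balancing operation uses only the neuron's own fan-in and fan-out weights.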
--------------
Regarding the biological implausibility of backpropagation, we will add a discussion to the final version. Note that the balancing algorithm presented in our work can be applied to a network after training with any learning rule. In other words, Theorem 5 does not depend on the training algorithm and balancing can be applied to any set of weights, at any time, during or after learning, and with any cost function. [For example, one could train a network with L2 regularization and apply L1 balancing to the weights after the training is complete.]. | Summary: The authors present a theory of neural synaptic balance, defined as the condition in which a total loss achieves the same value for the input weights to a neuron and its output weights. This is different from the well studied E/I balance in neuroscience and machine learning literature. The authors show mathematical derivations of how to balance a neuron without affecting the outcome of the network and show that balancing a network is a convex optimization process.
Strengths: The paper is overall clear and detailed; the mathematical proofs are sound, and the paper is well structured, moving from straightforward claims to less trivial points.
Weaknesses: The paper is about neural synaptic balance, but the authors do not provide convincing motivation for why we should care about such balancing. As they mentioned, adding a simple L2 regularizer will balance the network naturally (in a distribution sense, not necessarily each neuron individually) during training and have other well-known benefits, so the elaborate mathematical derivations on the general balancing process seem redundant. In addition, in the authors' own plots, unbalanced networks sometimes outperform the balanced networks (e.g., Fig. 3E), which just emphasizes the point. One of the mentioned motivations is biological neurons, but they claim that biological neural data about synapses do not exist. However, they could test their hypothesis against the currently available connectomes, e.g., from the Drosophila fly brain. They mention spiking networks, but the notion of input-output homogeneity is unclear in spiking networks. Finally, physical neurons' energy consumption is mentioned without details.
Technical Quality: 2
Clarity: 3
Questions for Authors: Why is the energy consumption of physical neurons lower when they are balanced? Why not just have a regularizer to keep the overall activation low and weights small? Why does each neuron need to be balanced separately?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: The whole framework is specific to BiLU neurons or perhaps to other power-law functions. The relevance to spiking neurons is therefore questionable. It is also questionable as a general principle for machine learning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer QTyq for the positive review of this work and insightful comments.
"Why is the energy consumption of physical neurons lower when they are balanced?" Because the balancing algorithm also decreases the norm of weights.
Why not just have a regularizer to keep the overall activation low and weights small? The balancing algorithm is indeed another way to achieve a balanced state while keeping the overall activation low and the weights small; we have shown that using a regularizer is not the only means to this end.
Why does each neuron need to be balanced separately? It is more elegant and biologically (or neuromorphically) more plausible to be able to achieve a global balanced state through local rules that each neuron can apply independently of all the other neurons in the network, at any point in time, in a completely asynchronous way. In other words, neurons do not need to exchange information between each other in order to achieve a global balanced state. Global order emerges from local order. | Summary: This paper provides a thorough characterization of regularizers which lead to synaptic balance (when the "cost" of input weights to a neuron or pool of neurons is tied to the cost of output weights) in trained neural networks. Their results apply to many different activation functions and architectures.
Strengths: The paper is very well-written and easy to follow. I was able to read everything, including the math, smoothly. The mathematical arguments themselves are crisp and correct, which I really appreciated.
Weaknesses: The paper is strongly lacking in motivation. I never really understood *why* I should care about synaptic balance. Also, it is clear from the numerical experiments that synaptic balance only emerges in networks when it is enforced via a regularizer (except in the case of an infinitely small learning rate), but why is this surprising? It seems obvious that adding a regularizer for some property tends to result in that property. It would be shocking if synaptic balance occurred without some regularization towards the property. Thus, while the "what" and "how" of the paper are nicely addressed, I feel the paper is missing the "why". I believe if the authors could address this from the outset, it would make the paper much stronger, and I would of course be willing to increase my score.
Technical Quality: 4
Clarity: 3
Questions for Authors: -It is claimed throughout the paper that "network balance can be used to assess learning" progress. I do not really understand how. If my total loss $\mathcal{E}$ is the sum of a task loss $E$ and a regularizer $R$, then there is nothing preventing a situation where I get $E = 0$ and $\mathcal{E},R > 0$, meaning that task loss is decoupled from the network balance loss. If the authors could clarify this point, that would be great.
Small typos:
- Line 128: alpha is not rendered in latex
- Figure 4 caption, subplot (D-F) "CFAR10" -> "CIFAR10"
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer cT2m for the positive review of this work and insightful comments.
Synaptic balance does not necessarily emerge in networks trained with a regularizer (unless they are trained very carefully, with very small learning rates, etc). Our work shows that one can obtain synaptic balance without a regularizer, simply by applying the balancing algorithms described in the paper during training or just at the end of training. However,reviewer cT2m is right that we could have provided a clearer motivation. In addition to the theoretical motivations, there are also practical motivations as discussed in the overall Rebuttal. In particular, balancing can be viewed as an alternative way of regularizing networks, in the same way that dropout is viewed as an alternative or complementary way of regularizing networks. This will be made clear in the revised version.
The surprising result in our work is that without any regularization, if each neuron tries to balance its input and output synapses independently (without any coordination with any other neurons) the network reaches a unique, stable, and globally balanced state. Thus a unique global order emerges from local, independent, balancing operations.
By the term “network balance can be used to assess learning” we mean that if a network trained by regularized SGD is in a balanced state and does not move from it, then the gradient must be zero and the learning must have converged. Conversely, if the state is not globally balanced, then learning has not fully converged.
All the minor points are fixed in the revised version.
---
Rebuttal Comment 1.1:
Title: Changed Score to Borderline Accept
Comment: I thank the authors for their reply. I have increased my score to a borderline accept.
I am still confused by this point:
"By the term “network balance can be used to assess learning” we mean that if a network trained by regularized SGD is in a balanced state and does not move from it, then the gradient must be zero and the learning must have converged. Conversely, if the state is not globally balanced, then learning has not fully converged."
Could the authors please be more precise as to which gradient they are talking about? The total gradient (i.e., including the regularizer), or the gradient of the "task" component of the overall loss function?
---
Reply to Comment 1.1.1:
Title: which gradient
Comment: We thank this reviewer for appreciating our reply. By "regularized SGD" we refer to the "total gradient". In any case, we will revise the text to remove any confusion. | Summary: The authors provide a theoretical approach to the analysis of balanced neurons and networks. Their theoretical work includes proof of the convergence of stochastic balancing. In addition, they investigate the effect of different regularizers and learning rates on balance, training loss, and network weights, including practical simulations for two classification problems.
Strengths: The paper tries to reveal the inner structure of neural networks during the training phase. This is a very important but difficult problem; its solution could provide new insights for developing better training algorithms. The work proposed can ultimately be an important step toward more transparent networks as opposed to their current black box character.
Weaknesses: The paper has some weaknesses, most notably how the material is presented and part of the evaluation.
Theorem 5.1, dealing with the convergence of stochastic balancing, is arguably the central piece of the paper. However, its formulation is bulky and should be reduced to a shorter, more manageable size, potentially with the help of lemmata. This becomes apparent when seeing that its proof contains the proof of another proposition.
In Figure 4, the authors say that these panels are not meant for assessing the quality of learning. However, measuring not only the training loss but also the accuracy on a test set will give important insights. How does the classification performance relate to the degree of balancing? Why did the authors not include this analysis? It could give important insights into the relationships between overtraining, generalization capability, balance, and accuracy.
The author should discuss the consequences of their work on network training. They do not discuss the immediate practical consequences or any recommendations they can make based on their results.
Technical Quality: 2
Clarity: 2
Questions for Authors: It would help the paper's clarity if the authors answered their own questions in a brief summary at the end of the paper, as concise as possible:
Why does balance occur? Does it occur only with ReLU neurons? Does it occur only with L2 regularizers? Does it occur only in fully connected feedforward architectures? Does it occur only at the end of training? And what happens if we balance neurons at random in a large network?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors could be more specific about the consequences of their work, including limitations. For example, can they recommend any specific learning rate, network structure, or other features for optimal training?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer TDzF for their positive review of this work and insightful comments.
Regarding Theorem 5.1, the reviewer has mentioned a fair point. In the revised version we will shorten Theorem 5.1, and move Proposition 5.4 and its proof outside of the proof of Theorem 5.1.
-----------
Regarding Figure 4, for a fixed set of weights, synaptic balancing does not change the input-output function of the network, as shown by the theory. Thus, for a fixed set of weights, we do not expect to see any change in performance after applying the balancing algorithm. The new figure attached to our rebuttal does what this reviewer is asking for, which is showing the regularizing effect of balancing throughout learning.
-------
As explained in our general response, we will add text on the application of synaptic balancing to regularization and cite additional work.
------
We will add a brief summary at the end of the revised version to improve the paper's clarity.
-------------------
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks for adding the figure, which should improve the quality of the presentation. I have upgraded this part of my grading. Compared to the other reviewers, I am more easily convinced that this work could lead to a better understanding of how neural networks operate. However, I agree with the other reviewers that this needs to be better motivated. The feeling is that something important is missing. If we only knew what. | Rebuttal 1:
Rebuttal: We thank the reviewers for appreciating our work and for their insightful comments. We have provided a separate response to each reviewer. The primary goal of our paper is to present the theory of synaptic balancing in neural architectures and the main theorem (Theorem 5.1) connects synaptic balancing to convex optimization. The simulations included are meant to corroborate the theory.
Overall, the main criticism is that we should have included additional information regarding the regularization value of synaptic balancing in the motivation section or in the conclusion. This is a fair point. The reason we did not make this point as clear as we should have done is that we focused primarily on the main result (Theorem 5,1) establishing the properties of the balancing algorithm. Although we discuss the applications of the balancing algorithm, we should have given more space to this. While Theorem 5.1 remains the cornerstone of synaptic balancing, in the revised version, we will make space for additional text and a new figure to describe the regularization applications of synaptic balancing. We will free up space primarily by shortening the proof of Theorem 5.1 [as described below], since the complete proof is available anyway in the supplementary material. The new figure is attached to this rebuttal.
We will add a few sentences on regularization in the motivation section, and in the new conclusion we will make very clear that:
1) synaptic balancing is a novel approach to regularization;
2) synaptic balancing is very general in the sense that it can be applied with all usual cost functions, including all L_p cost functions;
3) synaptic balancing can be carried out in full or in a partial manner (due to the convexity property in Theorem 5.1);
4) full or partial synaptic balancing can be applied effectively at any time during the learning process: at the start of learning, at the end of learning, or during learning, by alternating balancing steps with stochastic gradient steps.
5) simulations show that these approaches can improve learning in terms of speed (fewer epochs), accuracy, or generalization ability (see examples in the new figure). Thus, in short, balancing is a novel, effective approach to regularization that can be added to the list of tools available to regularize networks, alongside dropout and other regularization methods.
We hope the reviewers will agree that this addresses their main concern and that synaptic balance is a novel theoretical and practical topic worthy of being presented at the NeurIPS conference.
Pdf: /pdf/40e876f04b3c3dd944bd09d0a8037f35b5f34af7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning | Accept (poster) | Summary: This paper addresses the problem of data selection for finetuning large pre-trained models. The key contributions are:
1. A theoretical analysis of data selection for finetuning that reveals a variance-bias tradeoff in high dimensions.
2. A provable result showing that gradient sketching can efficiently find a low-dimensional subspace that preserves fast-rate generalization.
3. A practical two-stage algorithm called Sketchy Moment Matching (SkMM) that uses gradient sketching to explore the parameter space and moment matching to exploit the low-dimensional structure.
4. Empirical validation on synthetic and real datasets demonstrating the effectiveness of the approach.
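As an illustration of the two-stage idea summarized above, the following sketch (our own simplification, not the paper's algorithm) applies a random Gaussian sketch to gradient features and then greedily matches only the first moment of the sketched gradients; the actual SkMM performs moment matching via a constrained optimization, and all dimensions and interfaces here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: n samples with r-dimensional gradient features.
n, r, m, k = 300, 200, 16, 40   # data size, feature dim, sketch dim, coreset size
G = rng.normal(size=(n, r))

# Stage 1 -- gradient sketching: a random Gaussian projection compresses
# the r-dimensional gradients down to m dimensions.
S = rng.normal(size=(r, m)) / np.sqrt(m)
Gs = G @ S  # (n, m) sketched gradients

# Stage 2 -- moment matching (first moment only, as a toy stand-in for the
# paper's constrained optimization): greedily pick k samples whose sketched
# mean tracks the full-data mean.
target = Gs.mean(axis=0)
selected, running = [], np.zeros(m)
for _ in range(k):
    chosen = set(selected)
    best = min(
        (i for i in range(n) if i not in chosen),
        # Distance between the candidate coreset mean and the full-data mean.
        key=lambda i: np.linalg.norm((running + Gs[i]) / (len(selected) + 1) - target),
    )
    selected.append(best)
    running += Gs[best]

coreset_mean = Gs[selected].mean(axis=0)
```

Note how the sketch reduces the per-sample cost of the selection stage from O(r) to O(m) per distance evaluation, which is the efficiency argument behind sketching before matching.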
Strengths: 1. The paper provides a rigorous generalization analysis for data selection in both low and high-dimensional settings. The proofs are detailed and appear sound.
2. The proposed SkMM method is simple to implement and scalable to large models/datasets. Experiments on both synthetic and real data demonstrate the effectiveness of the approach.
Weaknesses: 1. Some of the theoretical results rely on assumptions (e.g., low intrinsic dimensionality) that may not always hold in practice. More discussion of the implications when these assumptions are violated would be valuable.
2. The method introduces new hyperparameters (e.g., sketching dimension, moment matching strength) without much guidance on how to set them optimally.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Does the approach extend naturally to other finetuning scenarios beyond linear probing (e.g., adapters, full finetuning)?
2. How does the computational cost of SkMM compare to other data selection methods as the dataset/model size increases?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their insightful questions and suggestions. We are glad that they found our theory solid and our method effective. On the questions raised in the review:
1. __Low intrinsic dimensions of fine-tuning__:
Recalling the references [2,72] from the introduction, we highlight that the __low intrinsic dimension of fine-tuning is an extensively observed phenomenon in practice with theoretical rationale__. In particular, [2] demonstrates the surprisingly low intrinsic dimensions of fine-tuning language models in practice; while [72] provides a theoretical justification for the ubiquity of low-rank structures in natural data. One of the main goals of this work is to leverage such low intrinsic dimensions via data selection and enable learning with a sample complexity independent of the high parameter dimension $r$.
As explained in footnote 7 (page 5), such __a low intrinsic dimension is necessary for sample-efficient learning__. Intuitively, if all the $r$ directions in the high-dimensional parameter space are equally important, when the coreset size $n < r$, the learned parameter $\theta_S$ must fail to capture the orthogonal complement of the space spanned by the coreset and lead to $\mathbb{E}[ER(\theta_S)] \gtrsim r-n$. We will further elaborate on these assumptions in the revision.
2. __Choices of hyperparameters__:
The two hyperparameters in SkMM (Algorithm 3.1) are the sketching dimension $m$ and the constant $c_S$ that controls the moment matching strength.
* __Theorem 3.1 provides theoretical guidance for the choice of sketching dimension__: $m = O(\overline{r})$ where $\overline{r}$ is the low intrinsic dimension. In practice, choosing $m$ to be a small constant multiple of $\overline{r}$ (e.g., $m = 2\overline{r}$) is generally sufficient. (Notice that such pessimistic constants in theory compared to practice are ubiquitous in sketching; cf. [30,49,81] in the submission.)
* On the choice of $c_S$, we first __recall from Remark 3.3 that a lower $c_S$ corresponds to stronger moment matching, resulting in harder optimization of (5) but, upon convergence, a better generalization guarantee__. In practice, we start by choosing $c_S$ close to one to ensure the solvability of (5) and then gradually decrease $c_S$ until the optimization of (5) fails or takes too long to converge. Specifically, we fix $c_S = 0.999$ in the synthetic experiments as reported in Sec 4.1 (line 264); while for experiments on real data, we try $c_S = 0.99, 0.9, 0.8, 0.7, 0.6$ in that order and keep the minimum feasible $c_S$.
We will include the above discussion on hyperparameter tuning (especially for $c_S$) in the revision.
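The decreasing-$c_S$ search described above can be sketched as follows (a minimal illustration; `solve_moment_matching` is a hypothetical stand-in for optimizing (5), not the authors' code):

```python
def search_min_cs(solve_moment_matching, schedule=(0.999, 0.99, 0.9, 0.8, 0.7, 0.6)):
    """Return the smallest c_S in the schedule for which the (hypothetical)
    moment-matching solver still converges; None if even the first fails."""
    best = None
    for c_s in schedule:
        if solve_moment_matching(c_s):
            best = c_s  # feasible: keep it and try a stronger constraint
        else:
            break       # infeasible or too slow: stop at the last feasible c_S
    return best

# Toy stand-in solver: pretend optimization of (5) fails below c_S = 0.75.
print(search_min_cs(lambda c_s: c_s >= 0.75))  # -> 0.8
```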
3. __Extension beyond linear probing__:
Thanks for the constructive question. In the revision, we will extend the empirical evaluation beyond linear probing. In particular, we demonstrate the effectiveness of SkMM in data selection for __finetuning the last two layers__ of an ImageNet-pre-trained ResNet18 on two datasets: CIFAR-10 with 10 well-balanced classes and StanfordCars with 196 imbalanced classes. Please refer to General Response 3 for the results and details of the additional experiments.
4. __Computational efficiency of SkMM__:
SkMM is efficient in both memory and computation while scaling well with the model and data sizes. Please refer to General Response 1 for a detailed discussion on the computational efficiency of SkMM.
We are happy to answer any further questions you may have. Thanks again for the helpful feedback.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response on the questions! After reading the other reviews, I would like to keep my current score of 6. | Summary: The authors study the task of data selection. They extend the classical variance reduction to the high dimensional case and provide a variance-bias tradeoff analysis. Based on the theoretical results, they propose sketchy moment matching, which first utilizes gradient sketchy to form a low-dimensional space and then uses moment matching to reduce the variance.
Strengths: The proposal is a reasonable improvement over the baselines which often only consider bias or variance reduction. The theoretical analysis is also a decent contribution of the paper.
Weaknesses: The experiment focuses on linear probing, which already limits the scope of the evaluation. Furthermore, even under this limited scope, the setting does not seem to be challenging. For the synthetic setup, the sample count is 2000 while the rank is 2500, so it seems not to be a very high-dimensional setup (the rank is not so much larger than the sample count). Also, the cluster count seems to be low for both tasks: 8 for synthetic, while the number of classes is 10 for CIFAR-10.
Technical Quality: 2
Clarity: 3
Questions for Authors: For the convenience of the readers, could you list the number of parameters fine-tuned for the CIFAR-10 linear-probing task and the number of samples of CIFAR-10? (I know this is something that can be looked up online, but these seem to be important numbers to show that the experiment setting is high dimensional.)
Also, could you scale up the synthetic experiment, like increasing sample count/rank/number of clusters? Testing on Cifar-100 is also another way to evaluate the performance on a larger number of clusters.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes, the authors discuss the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their helpful questions and suggestions, and we are glad that they found this work well-presented and theoretically sound. Nevertheless, we believe there have been misconceptions regarding some key notions and the focus of this work. Before diving into the specific questions, we want to emphasize the following two main theoretical/algorithmic contributions of this work:
* a generalization bound of data selection for high-dimensional fine-tuning with a low intrinsic dimension that unveils the possibility of learning a fine-tuning model with a sample complexity proportional to the low intrinsic dimension (instead of the high parameter dimension), and
* SkMM, a practical data selection framework inspired by the analysis that finds such coresets efficiently via gradient sketching + moment matching.
Meanwhile, we recall the key notion of "high-dimensional finetuning" as explained in the introduction (footnote 1): the over-parametrized setting where the number of finetuning parameters $r$ is larger than the selected downstream sample size $n$. That is, __“high-dimensional” refers to the relative magnitude $r>n$, instead of the absolute data/model size__.
On the questions raised in the review:
1. __Scaling up linear probing on synthetic data does not change the takeaway information__:
While we agree with the reviewer that more experiments on larger scales would intuitively be preferred, scaling up the synthetic data experiments is not a reasonable choice because (i) our current synthetic data experiments readily lie in the “high-dimensional”/over-parametrized regime with $n < r$ even though the absolute sample/model sizes are not huge; and (ii) scaling up linear probing experiments on synthetic data generally does not change the takeaway information, as shown below.
Consider a set of $N=2000$ samples in a higher dimension $r=4000$ generated from a GMM with $\overline{r}=12$ clusters where ridge regression over the full dataset achieves $L(\theta_{[N]})=1.18e-2$. We show the results in the table below, which are consistent with the synthetic data experiments in the submission. The main takeaways remain unchanged: SkMM achieves among the best empirical risks across different coreset sizes $n$, especially for small $n$; while methods that facilitate variance-bias balance (SkMM and T/R-leverage) tend to outperform the remaining baselines. (Notice that for $n \ge 120$, data selection can bring even lower empirical risk than learning over the full dataset, which coincides with the recent theory and observation in [41] in submission.)
| n | 48 | 64 | 80 | 120 | 400 | 800 | 1600 |
|---|---|---|---|---|---|---|---|
| Herding | 4.04e+03 | 4.04e+03 | 4.04e+03 | 3.87e+03 | 3.74e+03 | 3.67e+03 | 1.76e+03
| Uniform | (1.24 $\pm$ 1.19)e2 | (0.81 $\pm$ 1.18)e2 | (0.76 $\pm$ 1.11)e2 | (0.74 $\pm$ 1.11)e2 | (9.92 $\pm$ 0.18)e-3 | (9.56 $\pm$ 0.03)e-3 | (1.64 $\pm$ 0.39)e-2 |
| K-center | (6.64 $\pm$ 0.22)e-1 | (4.72 $\pm$ 2.79)e-1 | (3.98 $\pm$ 0.81)e-2 | (1.71 $\pm$ 2.62)e-1 | (3.12 $\pm$ 0.67)e-2 | (1.29 $\pm$ 2.29)e-1 | (2.04 $\pm$ 0.62)e-1 |
| Adaptive | (2.19 $\pm$ 1.27)e-2 | (2.78 $\pm$ 3.34)e-2 | (2.51 $\pm$ 2.02)e-2 | (1.88 $\pm$ 1.63)e-2 | (2.39 $\pm$ 4.25)e-2 | (9.52 $\pm$ 0.05)e-3 | (1.71 $\pm$ 2.72)e-1 |
| T-leverage | (1.06 $\pm$ 8.15)e2 | (7.21 $\pm$ 8.25)e1 | (5.44 $\pm$ 7.70)e2 | (5.44 $\pm$ 7.70)e2 | __(8.27 $\pm$ 1.62)e-3__ | __(8.66 $\pm$ 1.79)e-3__ | (1.09 $\pm$ 0.37)e-2 |
| R-leverage | (3.52 $\pm$ 7.63)e1 | (1.92 $\pm$ 1.77)e-2 | (1.19 $\pm$ 2.18)e-2 | __(1.03 $\pm$ 0.05)e-2__ | (9.48 $\pm$ 0.50)e-3 | (9.54 $\pm$ 0.02)e-5 | (1.17 $\pm$ 0.47)e-2 |
| SkMM | __(2.13 $\pm$ 1.28)e-2__ | __(1.65 $\pm$ 1.30)e-2__ | __(1.11 $\pm$ 0.06)e-2__ | __(1.03 $\pm$ 0.03)e-2__ | (9.34 $\pm$ 0.13)e-3 | (9.41 $\pm$ 0.11)e-3 | __(1.08 $\pm$ 0.42)e-2__ |
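For intuition, the kind of synthetic setup described above can be sketched in NumPy at a reduced scale (illustrative only — the dimensions are scaled down and this is not the authors' exact data generator):

```python
import numpy as np

rng = np.random.default_rng(0)
N, r, r_bar = 200, 400, 12  # scaled-down stand-ins for N=2000, r=4000

# GMM features: r_bar cluster centers in r dimensions, so the second moment
# concentrates near a subspace of dimension roughly r_bar (low intrinsic dim),
# while N < r keeps us in the over-parametrized regime.
centers = rng.normal(size=(r_bar, r))
X = centers[rng.integers(r_bar, size=N)] + 0.1 * rng.normal(size=(N, r))

theta_star = rng.normal(size=r)
y = X @ theta_star + 0.01 * rng.normal(size=N)

# Ridge regression over the full dataset: the L(theta_[N]) reference point
# that coreset-based learning is compared against.
lam = 1e-3
theta_full = np.linalg.solve(X.T @ X + lam * np.eye(r), X.T @ y)
print(np.mean((X @ theta_full - y) ** 2))  # small full-data empirical risk
```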
To further investigate the effect of problem size on the data selection performance, instead of scaling the synthetic data experiments, we will provide more extensive empirical evaluations of our method on real data, as elaborated below and in General Response 3.
2. __More comprehensive experiments on real vision tasks__:
Thanks to the constructive suggestion, in the revision, we will extend the empirical evaluation to
* real vision tasks with a larger number of classes (ie, StanfordCars with 16,185 images in 196 imbalanced classes of cars, in addition to CIFAR-10 with 60,000 images in 10 balanced classes) and
* fine-tuning settings beyond linear probing with parameters of much higher dimension (ie, finetuning the last two layers of an ImageNet-pre-trained ResNet18 with $r=2,364,426$ parameters, versus linear probing on CLIP with $r=5,130$ parameters, vide Table 5 in the attached PDF for detailed configurations). We will make sure to include these experimental details in the revision.
Please refer to General Response 3 for the results and more details of the additional experiments.
We are happy to answer any further questions you may have. If our responses above help address your concerns, we would truly appreciate a re-evaluation accordingly.
---
Rebuttal Comment 1.1:
Comment: As the deadline for the discussion phase gets close, we would greatly appreciate it if the reviewer could provide some feedback on our responses, which we believe have addressed all the concerns raised in the review regarding the limited scope of the experiments.
In particular, we kindly re-highlight the additional experiments in General Response 3 and the attached PDF where we provided stronger empirical evidence on (i) real data with more classes and class imbalance, as well as (ii) fine-tuning settings (beyond linear probing) with much higher dimensions (i.e., the highly over-parametrized settings). Meanwhile, we improved the comprehensiveness of our experimental details thanks to the suggestions from the reviewers (vide General Response 3).
We value the opportunity to improve our work based on these constructive suggestions and are always happy to provide further clarifications if needed. Thanks again for your time. | Summary: This paper concerns the data selection problem: given a collection of $N$ embeddings of dimension $r$ for $r\gg N$, the goal is to pick a subset $S$ of points of size $n$ so that one could run any downstream algorithm on $S$ with a regularization term, so that the empirical risk is small even on the entire finetuning set. Assuming the model is $y=\phi(X) \theta_*+z$ where $\phi: \mathbb{R}^d\rightarrow \mathbb{R}^r$ and $z$ is an i.i.d. noise vector with zero mean and bounded variance, then there exists a subspace that one could project onto and decompose the empirical risk as a bias and a variance term. Further, under the assumption that the second moment matrix has low intrinsic dimension, then one could find a good subspace via gradient sketching: draw a JL matrix $\Gamma\in \mathbb{R}^{r\times m}$ for $m\ll r$, then as long as one has $\Gamma^\top \Sigma^{\phi} \Gamma \preceq c_S \cdot \Gamma^\top \Sigma^{\phi}_S \Gamma$, then the error could be decomposed into a bias, variance and a sketching error term. A sketching gradient, moment-matching algorithm is proposed, involves applying sketching to the gradient, form the Jacobian and solve a quadratic relaxation. Experiments are performed on both synthetic datasets and CIFAR10.
Strengths: The main theoretical contribution is that for over-parametrized setting where $r\gg n$, one could provably show the existence of a subspace that one could project onto and perform data selection on that subspace. Moreover, if the second moment in addition has low intrinsic dimension, then one could use standard dimensionality reduction techniques (in $\ell_2$ norm) to sketch the high-dimensional gradient. In the sketchy moment-matching algorithm proposed in the paper, the authors first sketch the gradient then use uniform sampling to construct $S$.
Weaknesses: The core results of this paper are not technically very novel or surprising; the algorithm could be interpreted as a generalization of the leverage score sampling via the JL trick due to Spielman and Srivastava, STOC'08. The analysis largely draws inspiration from the over-parametrization literature, which makes sense as finetuning is essentially training in an over-parametrized setting. Another point that is a bit unsatisfactory is that the sketchy moment-matching algorithm utilizes a quadratic relaxation to solve the program efficiently with projected gradient descent, but all analysis is based upon *not solving the quadratic programs*. The authors should try to provide some theoretical justification of sketchy moment matching, as that's one of the key contributions of this paper.
Technical Quality: 2
Clarity: 3
Questions for Authors: What is the runtime efficiency of your proposed method? It seems the performance is slightly better than ridge leverage score sampling, but ridge leverage score sampling could be implemented in input sparsity time, see the algorithm due to Cohen, Musco and Musco, SODA'17. Their algorithm is based on recursive uniform sampling, so could be implemented efficiently in practice.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the insightful questions and suggestions from the reviewer. However, we believe there have been misunderstandings regarding the focus and contribution of this work. We hope that the following responses will help clarify these confusions.
1. __Our theoretical contributions are explanatory instead of instrumental__:
As discussed in the related works, prior theoretical studies on data selection are conducted either in the low-dimensional linear regression setting or in the asymptotic regime, neither of which aligns with the fine-tuning in practice that manages to learn high-dimensional parameters with much fewer samples. (Although we analyze fine-tuning in the kernel regime following [83], the over-parametrization literature is generally out of the data selection setting.) To fill the gap, we provided
* __a generalization bound of data selection for high-dimensional fine-tuning with low intrinsic dimension__ in the non-asymptotic regime that unveils the possibility of learning a fine-tuning model with a sample complexity proportional to the low intrinsic dimension (instead of the high parameter dimension), and
* __a practical data selection framework inspired by the analysis__, SkMM, that finds such coresets efficiently via gradient sketching + moment matching.
Instead of proposing an instrumental theory, we use the existing theoretical tools from sketching and statistical learning to provide __an explanatory theory for the sample efficiency of data selection for fine-tuning that generally admits low intrinsic dimensions in practice__ and __an efficient method for selecting such data__.
While we agree that gradient sketching shares a similar high-level idea as the JL trick in Spielman and Srivastava, STOC'08 (which is ubiquitous in the sketching literature, cf. [30,81] in submission), leveraging such a fundamental theoretical framework should not compromise the novelty of an explanatory theory.
2. __The quadratic relaxation is a practical realization of SkMM instead of the only solution__:
As elaborated in General Response 2, after gradient sketching, moment matching in the resulting low-dimensional subspace can be realized via various methods, including leverage score sampling and existing polynomial-time heuristics of discrete optimization for experimental design:
* Our main motivation for exploring beyond leverage score sampling is the empirical observation that (truncated/sketched) leverage score sampling can perform poorly, especially in the low data regime (cf. Table 1). This can be explained by the dependence of the moment matching guarantee of leverage score sampling on the matrix coherence (vide General Response 2).
* Discrete optimization for the V-optimality is an alternative to leverage score sampling. However, such optimization is known to be hard: the best available polynomial-time heuristic with a theoretical guarantee ([4] in submission) involves matrix inversion in each optimization step, bringing instability issues and compromising the practical application.
Our proposal of SkMM can be interpreted as a quadratic relaxation of moment matching $\widetilde{\Sigma} \preccurlyeq c_S \widetilde{\Sigma}_S$ by assuming that $\widetilde{\Sigma}, \widetilde{\Sigma}_S$ commute (i.e., are simultaneously diagonalizable). This relaxation improves computational efficiency and avoids numerical instability by eliminating the need for matrix inversion. Although the assumption that the two matrices commute does not hold in general, it is a valuable heuristic for designing efficient algorithms, as demonstrated empirically across various domains. For example, in optimization, this approach is used to simplify certain SDP relaxations and approximate solutions effectively (Lu, Monteiro, 2005). In handling distributional shifts, it is used to design optimization formulations to obtain minimax risks efficiently (Blaker, 2000; Lei et al., 2021). We will further clarify this in the revision.
3. __Efficiency of SkMM__:
Like ridge leverage score approximation via recursive sampling, gradient sketching in SkMM can be computed efficiently in input sparsity time, and the subsequent moment matching stage takes place in the low dimension $m$ (vide General Response 1).
As discussed in General Response 2, while (ridge) leverage score sampling also facilitates moment matching as SkMM and enjoys a slightly better runtime in the moment matching stage (in a lower-order term), with experiments (cf. Table 1), we show that SkMM tends to provide better performance as it is tailored for optimizing moment matching.
We are happy to answer any further questions you may have. If our responses above help address your concerns, we would truly appreciate a re-evaluation accordingly.
References:
* Lu, Z., Monteiro, R. D. C. (2005). "Primal-dual interior-point methods for semidefinite programming using inexact step directions." Mathematical Programming, 103(2), 453-485.
* Blaker, Minimax estimation in linear regression under restrictions, annals of statistics 2000
* Lei et al, Near-optimal linear regression under distribution shift. ICML 2021
---
Rebuttal Comment 1.1:
Comment: I thank authors for the comments. I'll keep my score as I believe the paper will benefit from a significant amount of revisions by incorporating all reviewers' comments.
---
Reply to Comment 1.1.1:
Comment: Many thanks for the timely response. We kindly refer the reviewer to the general responses for the list of important revisions we made, which we believe have accommodated all the suggestions and questions from the reviewers. If the reviewer found any unaddressed concerns, we would greatly appreciate it if they could specify them during the discussion phase. We are always happy to provide further clarifications and improve our work based on the constructive feedback from the reviewers. | Summary: This paper studies the problem of data selection in the over-parametrized fine-tuning regime, i.e. when the number of fine-tuning parameters $r$ is larger than the amount $N$ of available examples. We want to subsample $n\ll N$ examples that form a representative set to train on, and hopefully achieve quality as close as possible to fine-tuning on the whole set.
The idea is to compute the gradients $G\in \mathbb{R}^{N\times r}$ of all examples wrt the fine-tuning params and then select a subsample $S\subseteq [N]$ such that the Gram matrix of the gradients is approximated: $c\cdot \Sigma_S := c \cdot G^\top I_S G \approx G^\top G := \Sigma$. However, this is not possible to achieve since the model is over-parameterized. Fortunately, if the spectral approximation holds on a low-dimensional subspace of the parameter space, this is good enough, so the authors project the gradients on a random low-dimensional space. The proof goes through under the assumption that the singular values of the gradient matrix are well-concentrated on a small enough (<10%) support.
The experimental results include fine-tuning on a synthetic linear task, as well as fine-tuning a vision transformer on CIFAR-10 image classification.
Strengths: - The authors study the data selection for fine-tuning problem from first principles
- The writing is overall good and math looks sound, even though I didn't check details.
- The experimental results look promising since SkMM beats a variety of algorithms including leverage scores.
- The idea of spectral approximation on a subspace of the parameter space is interesting.
Weaknesses: - Important details on the experimental setup are missing or unclear. Specifically, what is the optimization process after the data is subsampled? For the image classification experiments, what is being fine-tuned, is it all the ViT parameters? For how many epochs?
- The algorithm requires computing the gradients of all samples, which can be computationally expensive. Besides, if we are computing all gradients, why can't we just train one epoch on all datapoints? Why is data selection useful in this case?
- The literature review could be expanded, including relevant papers such as BADGE [1], Coreset-based sensitivity sampling [2].
- In the experimental results, the authors should also compare with margin sampling (in addition to entropy sampling), as well as uniform sampling for the image classification task.
- Computing the moment-matching subset in Algorithm 3.1 seems overly complicated, see questions
[1]: Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds
[2]: Data-Efficient Learning via Clustering-Based Sensitivity Sampling: Foundation Models and Beyond
Technical Quality: 2
Clarity: 3
Questions for Authors: In Remark 3.2 the authors write that their goal is to achieve the constraint $\tilde{G}^\top \tilde{G} \preceq c_S \cdot \tilde{G}^\top I_S \tilde{G}$ (1). They subsequently relax this problem and solve the resulting constrained convex optimization problem using projected gradient descent. However, it seems to me that (1) might be equivalent to $U^\top I_S U \succeq c_S^{-1} \cdot I$, where $U\Lambda^{1/2} V^\top$ is the SVD of $\tilde{G}$. Here $U\in \mathbb{R}^{N\times \bar{r}}$ is a tall and thin matrix. This is a spectral sparsification task which could be solved using leverage score sampling on the rows of $U$. Furthermore, this is the same as sampling examples proportional to the squared $\ell_2$ norms of the rows of $U$. Maybe, I'm missing something, so please correct me if I'm wrong.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their constructive questions and suggestions, and we are glad that they found this work interesting and well-presented. On the questions raised in the review:
1. __Cost of gradient computation and SkMM:__
First, we kindly emphasize the __ubiquitous role of gradients in most data selection methods__. For linear probing, the gradients (of the finetuning model instead of the loss) are essentially the last-layer features from the pre-trained model, which can be computed without backward propagation. Such information is necessary for all unsupervised data selection methods discussed in "1.1 Related Works" including leverage score sampling and influence function. Beyond linear probing, gradient computation can bring asymptotically similar costs as *one epoch* of training on the full data set. Notice that this is more efficient compared to methods with *several epochs* of warmup training (eg, [1,15,38,52,56,69,83] in submission, including all methods we compared against in the real-data experiments except uniform sampling and herding). Such cost of gradient computation or warmup training is acceptable because the selected data have their own importance (eg, finetuning similar models with limited memory and computation). Moreover, we refer to General Response 1 that SkMM based on gradient sketching is highly efficient in both memory and computation. As mentioned in Algorithm 3.1, the gradients can be computed and compressed (via sketching) in parallel and on the fly without storage, while all the subsequent steps for moment matching are conducted in the lower dimension $m \ll r$. We will make sure to clarify these points further in the revision.
Second, we highlight that __training for one epoch on the full dataset is generally insufficient for finetuning__. For example, when finetuning CLIP via linear probing on CIFAR-10 (Table 2), training for one full epoch leads to test accuracy of only $91.94 \pm 0.17$ (cf. $92.96 \pm 0.07$ learned with as few as $1000$ selected data via SkMM).
2. __Sketchy moment matching v.s. sketchy leverage score sampling__:
Up to proper scaling $U^\top \mathrm{diag}(\frac{N}{n} s) U \succcurlyeq c_S^{-1} I$ (where $s \in \{0,1\}^N$, $\|s\|_0 = n$), we agree with the reviewer that leverage score sampling based on the sketched gradients $\widetilde{G}$ can also provide good control over $c_S$ with high probability. However, such control comes with a nuanced dependence on matrix coherence and can render the upper bound vacuous in the worst case (vide General Response 2).
This is exactly the major motivation for the comparison with the truncated and ridge leverage score sampling (T/R-leverage) in the synthetic data experiments. In particular, leverage score sampling based on the sketched gradients is effectively equivalent to T-leverage (ie, when the original gradient dimension $r$ is too high for the exact computation of SVD for $G \in \mathbb{R}^{N \times r}$, randomized SVD based on sketching is generally used as an approximation, in which case the leverage scores of $\widetilde{G}$ are exactly T-leverage). In experiments, we also confirmed the indistinguishable behaviors between the two, and therefore only reported one of them. As discussed in Sec 4.1, __while SkMM and T/R-leverage both encourage variance-bias balance__ (by controlling $c_S$) and therefore provide among-the-best risks in Table 1, __SkMM shows better performance on average as it is tailored for optimizing__ $c_S$. Nevertheless, we agree that with very limited computation (ie, even optimizing (5) in low dimension $m$ is infeasible), sketchy (ridge) leverage score sampling would be a more affordable alternative to SkMM.
Motivated by this insightful question, we will extend Sec 3.2 based on the above discussions on the connection between SkMM and leverage score sampling in the revision.
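To make the T-leverage connection concrete, a minimal NumPy sketch (illustrative; random matrices stand in for the gradients) computes leverage scores from the sketched gradients:

```python
import numpy as np

rng = np.random.default_rng(1)
N, r, m = 50, 500, 8

G = rng.normal(size=(N, r))                   # stand-in per-sample gradients
Gamma = rng.normal(size=(r, m)) / np.sqrt(m)  # Gaussian sketching matrix
G_tilde = G @ Gamma                           # sketched gradients, N x m

# Leverage score of sample i = squared norm of the i-th row of U, where
# G_tilde = U S V^T; sampling proportionally to these scores is the
# T-leverage-style baseline discussed above.
U, _, _ = np.linalg.svd(G_tilde, full_matrices=False)
scores = np.sum(U ** 2, axis=1)
print(scores.sum())  # sums to rank(G_tilde) = m here
```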
3. __Improvements on related works, experiment setup, and more comprehensive experiments__:
Many thanks for the constructive suggestions on related works and experiments. We will include discussions on the suggested literature in the next version. In addition, we will provide more experiments on real data with more comprehensive baselines (including margin sampling) in the revision. (Please refer to General Response 3 for the results and details of the additional experiments.) We will also provide more detailed experimental setups in the revision as follows.
* After data selection, we fine-tune the model in two different settings: (a) linear probing over CLIP-pre-trained ViT (vide Tables 1 and 3 in the attached PDF) and (b) non-linear finetuning of the last two layers in an ImageNet-pre-trained ResNet18 (vide Tables 2 and 4). Please refer to Table 5 in the attached PDF for detailed configurations like parameter dimensions.
* The fine-tuning parameters are optimized via Adam with a learning rate $10^{-2}$, for 200 epochs over CIFAR10 (following [29] in submission, Tables 1 and 2) and 50 epochs over StanfordCars (Tables 3 and 4).
We are happy to answer any further questions you may have. If our responses above help address your concerns, we would truly appreciate a re-evaluation accordingly.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their efforts to answer all of my questions. I have increased my score.
---
Reply to Comment 1.1.1:
Comment: We are happy that our responses have addressed your concerns. Many thanks for the reply and all the constructive suggestions.
---
Rebuttal 2:
Comment: As the deadline for the discussion phase approaches, we would greatly appreciate it if the reviewer could provide some feedback on our responses, which we believe have addressed all the questions in the review. As a summary of important updates we made thanks to the valuable suggestions from the reviewer:
* In Response 1 and General Response 1, we provided further clarification on the computational efficiency of SkMM and comparison with other data selection methods.
* In Response 2 and General Response 2, we discussed the connection and difference between SkMM and leverage score sampling in terms of moment matching, where such difference comes from, and why we may want an alternative to leverage score sampling.
* In Response 3, General Response 3, and the attached PDF, we provided stronger empirical evidence on (i) more real vision tasks and (ii) fine-tuning settings (beyond linear probing) with much higher dimensions, with (iii) more comprehensive baselines (including margin sampling) and (iv) more detailed experimental setup.
We value the opportunity to improve our work based on these constructive suggestions and are always happy to provide further clarifications if needed. Thanks again for your time. | Rebuttal 1:
Rebuttal: First, we would like to thank all the reviewers for their time, efforts, and valuable suggestions. In the general response, we address some common questions raised in the reviews and summarize important revisions we made.
1. __Computational efficiency of SkMM__:
SkMM is efficient in both memory and computation. Consider the two stages in SkMM:
* The dimensionality reduction via __gradient sketching__ can be computed __in parallel with input-sparsity time__ and __on the fly without storing the (potentially) high-dimensional gradients__. In particular, let $nnz(G) \le Nr$ be the number of nonzero entries in the (high-dimensional) gradients $G \in \mathbb{R}^{N \times r}$, as discussed in Remark C.1, sketching via a (sub)Gaussian embedding $\Gamma \in \mathbb{R}^{r \times m}$ can be computed in $O(nnz(G) m)$ time, while a sparse embedding (Remark C.1 (b)) can further accelerate the sketching process to $O(nnz(G))$ time. Only the sketched gradients $\widetilde{G} = G \Gamma$ need to be stored, requiring just $O(Nm)$ memory (recall that $m \ll r$ is a small number proportional to the low intrinsic dimension). For linear probing, $G$ is simply the last-layer feature of the pre-trained model. Beyond linear probing, computing the gradients costs no more than one full training epoch, much cheaper than many data selection methods based on a few epochs of warmup training (eg, [1,15,38,52,56,69,83] in submission, including all methods we compared against in the real-data experiments except uniform sampling and herding).
* After gradient sketching, __variance reduction via moment matching happens in the low dimension__ $m$, with a low memory footprint $O(Nm)$, taking $O(m^3)$ for the spectral decomposition and $O(Nm)$ per iteration for optimizing the moment matching objective (5).
We will highlight the computational efficiency of SkMM further in the main text of the revision.
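The streaming computation of the sketch described above can be illustrated as follows (a hypothetical helper; batch sizes and dimensions are arbitrary, and each gradient batch is discarded after being sketched):

```python
import numpy as np

def sketch_gradients(grad_batches, Gamma):
    """Sketch per-sample gradients on the fly: each batch of gradients
    (b x r) is multiplied by Gamma (r x m) and then discarded, so only the
    final N x m sketch is ever stored. Illustrative, not the authors' code."""
    return np.vstack([B @ Gamma for B in grad_batches])

rng = np.random.default_rng(0)
r, m = 1000, 16
Gamma = rng.normal(size=(r, m)) / np.sqrt(m)  # Gaussian embedding

# Simulate streaming gradients: 4 batches of 32 samples each.
batches = [rng.normal(size=(32, r)) for _ in range(4)]
G_tilde = sketch_gradients(batches, Gamma)
print(G_tilde.shape)  # -> (128, 16)
```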
2. __Different methods for moment matching__:
As mentioned in the limitations and future directions, after gradient sketching, __variance reduction in the resulting low-dimensional subspace via moment matching can be realized using various methods__, among which leverage score sampling based on the sketched gradients $\widetilde{G}$ is arguably the most intuitive approach. Alternatively, we proposed an optimization-based method in Algorithm 3.1 that solves a quadratic relaxation (Remark 3.2) as a practical heuristic. In the synthetic data experiments, we show that __SkMM provides a better variance-bias balance and hence better performance than leverage score sampling__ (T-leverage).
To better clarify the connection with leverage score sampling, in the revision, we will add __a brief review of the theoretical guarantee for leverage score sampling and explain why an alternative could be desired__. In particular, Theorem 17 of [81] (in submission) implies that leverage score sampling on $\widetilde{G}$ provides $c_S \le \frac{(1+\epsilon) m}{\tau_S N}$ with probability at least $1-\delta$ for a coreset size $n = O(m \log(m/\delta) / \epsilon^2)$, where $\tau_S \in [0,1]$ is the minimum leverage score of $\widetilde{G}$ over the coreset $S$ ($\tau_S$ appears because samples in $S$ are equally weighted in the data selection setting). __Such dependence on matrix coherence can render the upper bound vacuous in the worst case__. Nevertheless, leverage score sampling based on the sketched gradients can be computed more efficiently than SkMM in $O(Nm^2)$ time and can provide good control over $c_S$ when $\tau_S$ is reasonably large. Overall, as pointed out in Sec 4.1, both SkMM and leverage score sampling facilitate variance-bias balance in data selection, while __SkMM is tailored for optimizing moment matching, providing better empirical performance at a slightly higher cost in the low intrinsic dimension__ $m$.
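For intuition on the quantity $c_S$ itself, the following NumPy sketch (not from the submission; random matrices stand in for the sketched gradients) computes, for a fixed coreset $S$, the smallest $c_S$ satisfying $\widetilde{\Sigma} \preceq c_S \widetilde{\Sigma}_S$ as the top generalized eigenvalue of the pair, assuming $\widetilde{\Sigma}_S$ is positive definite:

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, m = 100, 30, 5
Gt = rng.normal(size=(N, m))              # stand-in sketched gradients
S = rng.choice(N, size=n, replace=False)  # a candidate coreset

Sigma_t = Gt.T @ Gt / N                   # full second moment (m x m)
Sigma_tS = Gt[S].T @ Gt[S] / n            # coreset second moment (m x m)

# Smallest feasible c_S = largest generalized eigenvalue of the pair,
# i.e. the top eigenvalue of Sigma_tS^{-1} Sigma_t.
c_S = np.linalg.eigvals(np.linalg.solve(Sigma_tS, Sigma_t)).real.max()

# Verify: c_S * Sigma_tS - Sigma_t should be positive semidefinite.
gap = np.linalg.eigvalsh(c_S * Sigma_tS - Sigma_t).min()
print(c_S > 0, gap > -1e-8)  # -> True True
```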
3. __Additional experiments__:
Thanks to the reviewers' suggestions, we will extend the empirical evaluation on real vision tasks in the revision from the following two aspects:
* We explore two real vision tasks with different scales and structures: (a) CIFAR-10 with 60,000 images in 10 balanced classes (vide Tables 1 and 2 in the attached PDF) and (b) StanfordCars with 16,185 images in 196 imbalanced classes (vide Tables 3 and 4). (Notice that we expanded the original CLIP linear probing experiments on CIFAR-10 with more comprehensive baselines.)
* We investigate different fine-tuning settings including (a) linear probing over CLIP-pre-trained ViT (vide Tables 1 and 3) and (b) non-linear finetuning of the last two layers in an ImageNet-pre-trained ResNet18 (vide Tables 2 and 4). We summarize the detailed configurations like the parameter counts in Table 5. (Notice that finetuning the last few layers of strong pre-trained models like CLIP can distort the features and hurt the performance significantly, as studied in (Kumar et al., 2022). Therefore, we choose a weaker pre-trained model, ResNet18, for finetuning beyond linear probing.)
For both fine-tuning settings on the two datasets, SkMM demonstrates appealing generalization across different coreset sizes. In particular, SkMM tends to outperform the other baselines by a larger margin on the imbalanced StanfordCars dataset than on the well-balanced CIFAR-10, demonstrating the effectiveness of SkMM in balancing the variance-bias trade-off in data selection, as suggested by the theory and the synthetic data experiments.
References:
* Kumar, Ananya, et al. "Fine-tuning can distort pretrained features and underperform out-of-distribution." arXiv preprint arXiv:2202.10054 (2022).
Pdf: /pdf/90941803f1da8349b0227d0c37a94ea56bb06827.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Causal Discovery from Event Sequences by Local Cause-Effect Attribution | Accept (poster) | Summary: This paper introduces a new causal model in which individual events of the cause variable trigger events of the effect variable with dynamic delays. The authors propose a cause-effect matching approach to learn a fully directed acyclic graph, named the CASCADE algorithm. The algorithm performs a topological search on observational data.
Strengths: This paper presents a comprehensive theory and algorithm, and conducts extensive experiments, particularly with real data, to validate the effectiveness of the proposed method. The analysis and the algorithm are presented in a logical way.
Weaknesses: The proposed method is the direct matching between a cause event and an effect event, which precludes modeling a single event causing multiple other events, as well as multiple events jointly causing a single effect event. This limits the applicability of the algorithm.
Technical Quality: 3
Clarity: 3
Questions for Authors: **1.**
The assumption that ``an individual event ... causes an individual event'' (Line 64) and Equation 2 (Line 88):
Do these indicate that an effect event can only be caused by a single cause event? If so, the later use of $pa(*)$ in the manuscript is confusing. Can an effect event be caused by more than one event?
In the results of real experiments, there is an event that is caused by more than one event, which contradicts the assumption. How are the results obtained?
**2.**
Are timestamps in events erased during actual use?
As far as I can see, the algorithm proposed in the manuscript does not use timestamps.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss in detail the limitations and applicability of their algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer, thank you for your time and valuable feedback.
We want to follow up on the applicability of our causal model and its implications. Mechanisms where multiple events are triggered, e.g. Hawkes processes, can still be modeled in part by CASCADE. To this end, we supplement an experiment with a Hawkes process causal model, where we vary the expected number of effect events following a cause. We provide the results in the table below.
|Events per cause | 0.4 | 0.7 | 1.0 | 1.3 | 1.6 | 1.9 | 2.3 | 2.6 | 2.8 | 3.1 |
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| F1 | 0.69±0.13 | 0.89±0.11 | 0.86±0.10 | 0.80±0.12 | 0.79±0.12 | 0.76±0.09 | 0.75±0.11 | 0.75±0.12 | 0.75±0.11 | 0.74±0.11 |
We observe that CASCADE suffers only a slight decrease in accuracy in the one-to-many (large number of expected events) and many-to-one cases. In those settings, CASCADE models a subset of the true mechanism by matching the cause-effect events with highest likelihood. In general, for any causal model, CASCADE can identify causal edges as long as the fitted matching improves over the noise distribution. Expanding the causal model to many-to-one (A ∧ B => C) or suppression (A => !B) setting lies beyond the scope of this paper, but is nonetheless an interesting direction for future work, especially with regard to the identifiability of the resulting model.
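The one-to-many setting of this experiment can be reproduced with a small generator along the following lines. This is a hedged sketch with our own parameter names (`effects_per_cause`, `delay_rate`, `horizon`), not the paper's exact experimental protocol:

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's method for a Poisson draw using only the stdlib.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate(effects_per_cause, n_causes=200, delay_rate=2.0, horizon=1000.0):
    """One-to-many generator: every cause event spawns a Poisson number of
    effect events at exponentially distributed delays."""
    causes = sorted(random.uniform(0.0, horizon) for _ in range(n_causes))
    effects = []
    for t in causes:
        for _ in range(poisson(effects_per_cause)):
            effects.append(t + random.expovariate(delay_rate))
    return causes, sorted(effects)

causes, effects = simulate(1.6)  # expected ~1.6 effect events per cause
```

Varying `effects_per_cause` over the values in the table sweeps from the sub-one (many-to-one-like) regime to the strongly one-to-many regime.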
With regard to your questions:
1. **Multiple parents**: Each individual event, e.g. “C” occurring at $t_2$, can only be caused by a cause event of a particular kind, e.g. "A" at $t_1$. However, the entire sequence of “C” events can contain events caused by different parents. For example, assume that events of type "A" and "B" trigger an event of type "C". Given two events $(A,t_1)$ and $(B,t_3)$, which cause $(C,t_2)$ and $(C,t_4)$ respectively, the resulting sequence $S_C = \lbrace t_2, t_4 \rbrace$ contains events from multiple parents, so that $pa(C) = \lbrace A, B \rbrace$. We hope this also clarifies the real-world results.
2. **Timestamps**: We use the timestamps when computing the edge cost $L(i \to j)$: we match a cause event at $t_i$ to an effect event at $t_j$ so that the cost of the matched delays $d = t_j - t_i$ is minimized. We will clarify and expand the algorithm description in the updated manuscript.
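The matching idea described in point 2 can be sketched as follows. This is a simplified illustration only: the greedy nearest-earlier-cause strategy and the fixed exponential rate are our assumptions, whereas the paper's edge cost $L(i \to j)$ is an MDL score with fitted parameters:

```python
import math

def match_and_cost(cause_times, effect_times, rate=1.0):
    """Greedily match each effect to the closest earlier unmatched cause and
    score the matched delays under an exponential delay model. A sketch of
    the idea only, not the paper's MDL-based edge cost."""
    used = set()
    delays = []
    for t_j in sorted(effect_times):
        best = None
        for k, t_i in enumerate(cause_times):
            if k in used or t_i > t_j:
                continue
            if best is None or t_i > cause_times[best]:
                best = k
        if best is not None:
            used.add(best)
            delays.append(t_j - cause_times[best])
    # Negative log-likelihood of the matched delays under Exp(rate).
    cost = sum(rate * dl - math.log(rate) for dl in delays)
    return delays, cost

delays, cost = match_and_cost([0.0, 5.0, 10.0], [0.4, 5.3, 10.2])
```

With the toy timestamps above, each effect is matched to the cause immediately preceding it, and the cost is simply the sum of the three short delays.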
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer, thank you for your response! We are glad that our rebuttal answered your questions. If not, please elaborate on your remaining concerns and aid us in addressing them adequately. Thank you very much! | Summary: The article employs the Algorithmic Markov Condition (AMC) alongside Kolmogorov complexity for causal discovery from event sequences. It focuses on a specific scenario in which the sequence of events is divided into source and effect variables. The principal contribution of this study is its innovative application of Pearl's causality model in combination with the AMC, in contrast to the traditional Granger causality approach, enabling the identification of both instantaneous and delayed effects.
Strengths: 1. Originality: The author employs Pearl's model of causality, diverging from traditional Granger causality, to innovatively incorporate instantaneous effects into the analysis of sequential events for causal relationship discovery.
2. Quality: The article is of good quality and is honest about the strengths and limitations of the work.
3. Clarity: The article presents its algorithm with well-defined logic and substantiated proofs.
4. Significance: The article offers an innovative approach to integrating instantaneous effects into the causal discovery of sequential events, proposing a potential method to enhance causal discovery techniques under such conditions. However, it imposes strict limitations on the scenarios involving event sequences.
Weaknesses: 1. Significance: As mentioned in the limitation section by the author, strict assumptions like the direct matching between a cause event and an effect event lead to challenges and possible violations in practical application, and the method lacks flexibility.
2. Section 3.3, which discusses the connection to Hawkes processes, might be better placed in an appendix or in a section dedicated to comparing different methodologies. Its current placement in the theoretical part of the paper is somewhat abrupt, especially since there is no direct focus on these processes in your model.
3. The experimentation section lacks depth. It would be beneficial to evaluate and report on the robustness of your model when its assumptions are challenged during real-world applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer, thank you for your time and valuable feedback. We would like to address your concerns first.
1. **Assumptions**: Mechanisms where multiple events are triggered, e.g. Hawkes processes, can still be modeled in parts by CASCADE. To this end, we supplement an experiment with a Hawkes process causal model, where we vary the expected number of effect events following a cause. We provide the results in the Table below.
|Events per cause | 0.4 | 0.7 | 1.0 | 1.3 | 1.6 | 1.9 | 2.3 | 2.6 | 2.8 | 3.1 |
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| F1 | 0.69±0.13 | 0.89±0.11 | 0.86±0.10 | 0.80±0.12 | 0.79±0.12 | 0.76±0.09 | 0.75±0.11 | 0.75±0.12 | 0.75±0.11 | 0.74±0.11 |
We observe that CASCADE suffers only a slight decrease in accuracy in the one-to-many (large number of expected events) and many-to-one cases. In those settings, CASCADE models a subset of the true mechanism by matching the cause-effect events with highest likelihood. In general, for any causal model, CASCADE can identify causal edges as long as the fitted matching improves over the noise distribution. Expanding the causal model to many-to-one (A ∧ B => C) or suppression (A => !B) setting lies beyond the scope of this paper, but is nonetheless an interesting direction for future work, especially with regard to the identifiability of the resulting model.
3. **Hawkes**: We will update the section on Hawkes processes to more closely show the connection to our causal model and reevaluate its proper placement in the paper.
4. **Experiments**: We evaluate our approach on real world data with unknown generating mechanisms. On labeled datasets (network alarms) CASCADE is far more accurate than the SOTA, whilst on the unlabeled global banks dataset we obtain a qualitatively better result than other methods. In both cases, our model is able to recover a sensible causal graph from the respective data distributions, even though it is unlikely that all of our assumptions hold.
In addition, we conducted an experiment in a controlled setting where we generate data outside of our causal model. Here, we examine CASCADE’s behavior under delay distribution misspecification. We report the results in the table below.
| | Exponential | Poisson | Normal | Uniform |
|-----|-------------|-------------|-------------|-------------|
| F1 | 0.82 ± 0.08 | 0.81 ± 0.09 | 0.76 ± 0.08 | 0.75 ± 0.10 |
| SHD | 13.8 ± 6.3 | 14.2 ± 6.7 | 18.5 ± 6.5 | 19.0 ± 7.8 |
While we observe a decrease in accuracy under misspecification, CASCADE still remains fairly accurate in detecting causal edges in the observed event sequences, as the misspecified model still provides an improved MDL-score over the noise distribution. In general, CASCADE is robust to different delay distributions, causal mechanisms, and noise levels, and is hence performant on the variety of data-generating mechanisms found in the real-world datasets.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for your rebuttal. I will maintain the original score. | Summary: In their work, the authors are concerned with recovering causal relations, where cause and corresponding effects occur in varying temporal distances. The authors leverage information theoretic formulations and properties of the algorithmic Markov condition to recover the causal graph via minimum description length principled. To this end, the authors present the 'CASCADE' algorithm, which recovers the topological ordering of the causal structure and proof identifiability results. The algorithm is evaluated on multiple synthetic data setups to examine the algorithm's performance under different varying noise, event type, and collider settings. Lastly, the algorithm is tested on a banking and daily activity data set to demonstrate robust performance on real-world data.
Strengths: The paper is well-written and introduces the problem setup and formalisms intuitively. The authors consider the challenging problem of modeling causal event sequences. The information-theoretic treatise and causal modeling of the event-generating process via minimum description length encodings are well described and follow common notation from related work. While I am not an expert on the topic of time series event causality, relevant related work seems to be sufficiently discussed and compared to.
The overall intuition on all proofs is well described. To the best of my knowledge, proofs of theorems 1, 3 and 4 seem to be correct. (Please see minor comments on Thm. 2 below). The presented CASCADE algorithm seems to be sound and its robustness is evaluated via multiple real-world and synthetic experiments, varying the noise and number of event types.
Weaknesses: While the authors present strong theoretical identifiability results, these guarantees are tied to a restrictive set of assumptions (faithfulness, sufficiency, low noise) and hold only for a specific type of event process (single excitation, no suppressing effects). While the authors state all assumptions explicitly, the paper could be improved by discussing the possible implications and reasonability of real-world applications.
Proof of Theorem 2 (Sec. A.2; second line of l. 496): As all other terms seem to be taken over from the line above, it is unclear to me where the canceled term on the left side of the inequality is coming from. (Since all terms are positive, I believe the transformation to be still correct.) Furthermore, it is not obvious to me how the equation following l.497 and the noise ratio of $n_{i,j}/n_j$ leads to the desired result. The paper could be improved by providing a more detailed explanation of this step.
The experiments seem to demonstrate consistently better results compared to related algorithms. However, from the experimental description in B.1, it seems that the experiment on the especially challenging identification of colliders --due to unclear parent assignment-- only considers a setting with a single collider. The authors might want to demonstrate algorithm performance for settings where multiple colliers exist, to better examine the algorithm's robustness regarding unclear EM assignments.
Minor:
* It would be helpful to mention the definition of H() in Sec. A.1 as the entropy, which is only mentioned afterward in A.2.
* Typos in the Proof of Thm. 2 (sec. A.2 l.490): "dealys", "ofset"; and the Conclusion (l.340) "discovers" -> "discover".
* In Sec. 4.1 l.201 text and formula disagree on the complexity: "[...] leading to an overall quadratic complexity $O(p^3)$".
Technical Quality: 2
Clarity: 3
Questions for Authors: My questions mainly concern the weaknesses mentioned above. I would kindly like to ask the authors to comment on the following:
1) How realistic are the assumptions made in the paper (e.g., low noise in real-world settings)? How would one test for them to hold true? How robust would the algorithm be in the presence of other event types, such as suppressing events or multi-effect events?
2) Proof Thm. 2: Could the authors provide further details regarding the proof of theorem 2 - in detail, the derivation of the final step?
3) Regarding my comments above, could the authors give further insights on the algorithm's performance with an increased number of colliders?
4) Figures. 4, 6 and 8 seem to feature few colliders. This seems unreasonable to me, especially for the global banking data set, which I assume to be highly interconnected (possibly violating the DAG assumptions). Could the authors comment on this possible bias? Is it a result of the assumptions made, and how could it be reduced?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Limitations with regard to the applicability of the algorithm are discussed. Assumptions required for identifiability of the considered causal models are stated explicitly but might be hard to check in real-world settings. The work might be improved by discussing societal impacts from applying the algorithm under possible assumption violations in real-world settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer, thank you for your time and valuable feedback for the main paper as well as the Appendix. We would like to address your concerns and questions in detail.
1. **Assumptions**: First, we would like to elaborate on our assumptions and their implications.
- *Multiple effect events*: Mechanisms where multiple events are triggered, e.g. Hawkes processes, can still be modeled in parts by CASCADE. To this end, we supplement an experiment with a Hawkes process causal model, where we vary the expected number of effect events following a cause. We provide the results in the Table below.
| Events per cause| 0.4 | 0.7 | 1.0 | 1.3 | 1.6 | 1.9 | 2.3 | 2.6 | 2.8 | 3.1 |
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| F1 | 0.69±0.13 | 0.89±0.11 | 0.86±0.10 | 0.80±0.12 | 0.79±0.12 | 0.76±0.09 | 0.75±0.11 | 0.75±0.12 | 0.75±0.11 | 0.74±0.11 |
We observe that CASCADE suffers only a slight decrease in accuracy in the one-to-many (large number of expected events) and many-to-one cases. In those settings, CASCADE models a subset of the true mechanism by matching the cause-effect events with highest likelihood. In general, for any causal model, CASCADE can identify causal edges as long as the fitted matching improves over the noise distribution. Expanding the causal model to many-to-one (A ∧ B => C) or suppression (A => !B) setting lies beyond the scope of this paper, but is nonetheless an interesting direction for future work, especially with regard to the identifiability of the resulting model.
- *Low noise*: The low noise assumption is required only to identify exclusively instant effects. In Figure 3b) we provide the results of an experiment with both instant and delayed effects where the noise level is gradually increased. There, we observe that CASCADE works well even when the added fraction of noise events is at 90%.
- *Faithfulness, Sufficiency*: These assumptions are standard in causal discovery and allow us to show the fundamental identifiability of this mechanism for event sequences. For future work, it would be interesting to see if and how other MDL based approaches for confounding [1] and faithfulness [2] can be integrated into CASCADE.
[1] Kaltenpoth, David, and Jilles Vreeken. "Causal discovery with hidden confounders using the algorithmic Markov condition." Uncertainty in Artificial Intelligence. PMLR, 2023.
[2] Marx, Alexander, Arthur Gretton, and Joris M. Mooij. "A weaker faithfulness assumption based on triple interactions." Uncertainty in Artificial Intelligence. PMLR, 2021.
2. **Theorem 2**: We decompose the total number of events $n_j$ into those that were caused by variable $i$, i.e. $n_{i,j}$, and the remaining ones as $n_j - n_{i,j}$, which is the term that is canceled, i.e. $n_j = n_j - n_{i,j} + n_{i,j}$.
We show that the entropy relation in (l.491) holds: since $\alpha_{j,j} n_j = n_{i,j}$, we can simply replace $\alpha_{j,j}$ with $n_{i,j}/n_j$. We will clarify these steps in the updated manuscript.
3. **Colliders**: CASCADE is able to find collider structures on real world datasets. We provide the learned DAG of the Network Alarms dataset (Section 6.3) as Figure 1 in the rebuttal PDF. On this real world benchmark, CASCADE discovers many ground-truth collider structures. Additionally, we expanded the experiment from Section 6.2 to include graphs with multiple colliders (see Figure 2 in PDF). This setting too provides no problem for CASCADE, showing that our individual cause-effect matching correctly assigns effects to causes from different parents.
| # Colliders | 5 | 7 | 10 | 15 | 20 |
|-----|-----------|-----------|-----------|-----------|-----------|
| F1 | 0.97±0.01 | 0.95±0.01 | 0.91±0.01 | 0.87±0.02 | 0.82±0.01 |
4. **Global banks dataset**: Figure 8 shows the results of THP, which does not find any colliders at all, whereas CASCADE obtains a graph with causal edges from large to small banks and subgraphs representing geographical regions. We think that the lack of collider structures stems from the fact that most crashes propagate on the same day. Hence, it is sufficient in most cases to have a single causal parent, which is reflected in the graph we obtain in Figure 4. We agree that some of the assumptions are unlikely to hold, especially the DAG assumption, and consider it an interesting future research direction to allow cyclic graphs.
Regarding the detailed list of typos and misc suggestions, we thank you for reviewing our paper at this level of detail! We will adopt all proposed changes in the updated manuscript.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
thank you for answering my questions regarding the made assumptions and Thm. 2, as well as providing additional clarifying results on Hawkes process data and colliders.
I still recommend the acceptance of this paper and will leave my score unchanged.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer, thank you for your response! We are glad that our rebuttal answered your questions. | Summary: The paper introduces a method for identifying causal relationships in event sequences. The authors presents a causal model that handles both instantaneous and delayed effects, contrasting it with existing methods like Granger causality. This algorithm is evaluated on both synthetic and real-world datasets.
Strengths: 1. The theoretical foundation based on the AMC and MDL principle is provided.
2. The proposed CASCADE algorithm is evaluated through extensive experiments.
3. The paper is well-organized, with clear explanations of the proposed model, theoretical underpinnings, and algorithmic steps. The use of illustrative examples and detailed proofs enhances understanding.
Weaknesses: 1. The paper acknowledges assumptions such as the direct matching between cause and effect events and the focus on excitatory effects. However, it could provide more discussion on the impact of these assumptions and potential ways to address them.
2. Scalability and computational complexity: The paper demonstrates the algorithm's performance on datasets with a moderate number of variables and events. An evaluation of its scalability to very large datasets, which are common in real-world applications, is less emphasized. The computational complexity of the algorithm, particularly for large datasets with many event types, is a concern. The quadratic complexity in the number of event types may limit its applicability to very large-scale problems.
3. Parameter sensitivity is not provided: How sensitive is the CASCADE algorithm to the choice of parameters for the delay distribution and cause probability?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How sensitive is the CASCADE algorithm to the choice of parameters for the delay distribution and cause probability?
2. What are the practical limits of the CASCADE algorithm in terms of the number of event types and the size of the datasets?
3. How does the algorithm handle high levels of noise in the data, and are there specific noise thresholds beyond which performance degrades significantly?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discussed the limitation in the Conclusion section. The identifiability of instantaneous effects relies on the strengths of the trigger and noise probabilities, which may be challenging to estimate accurately in practice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer, thank you for your time and valuable feedback. We would like to address your concerns and questions in detail.
1. **Assumptions**: Mechanisms where multiple events are triggered, e.g. Hawkes processes, can still be modeled in parts by CASCADE. To this end, we supplement an experiment with a Hawkes process causal model, where we vary the expected number of effect events following a cause. We provide the results in the Table below.
| Events per cause | 0.4 | 0.7 | 1.0 | 1.3 | 1.6 | 1.9 | 2.3 | 2.6 | 2.8 | 3.1 |
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| F1 | 0.69±0.13 | 0.89±0.11 | 0.86±0.10 | 0.80±0.12 | 0.79±0.12 | 0.76±0.09 | 0.75±0.11 | 0.75±0.12 | 0.75±0.11 | 0.74±0.11 |
We observe that CASCADE suffers only a slight decrease in accuracy in the one-to-many (large number of expected events) and many-to-one cases. In those settings, CASCADE models a subset of the true mechanism by matching the cause-effect events with highest likelihood. In general, for any causal model, CASCADE can identify causal edges as long as the fitted matching improves over the noise distribution.
Expanding the causal model to many-to-one (A ∧ B => C) or suppression (A => !B) setting lies beyond the scope of this paper, but is nonetheless an interesting direction for future work, especially with regard to the identifiability of the resulting model.
2. **Scalability**: Scalability is an important aspect for the application of causal discovery in real-world settings. In our experiments, we run with up to 200 variables (collider experiment, Figure 3 (c)), which takes on average 62 minutes (single threaded), whilst the competitors closest in accuracy take 2.7 hours (CAUSE) and 3.6 hours (THP), respectively. In the experiment with Hawkes processes, we process event sequences with 500k total events in 99 minutes. To obtain further speedup, the edge addition and pruning steps of CASCADE are fully parallelizable, which would allow our method to scale to extremely large problem sizes.
3. **Parameter sensitivity**: CASCADE does not require pre-specified parameters. The cause probability and delay parameters are fitted by minimizing the MDL score. CASCADE is sensitive to the decision which parametric model to adopt. Here, we choose the commonly used class of exponential distributions. To examine CASCADE’s behavior under misspecification, i.e. when the distribution is different, we conduct a further experiment. We report the results in the table below.
| | Exponential | Poisson | Normal | Uniform |
|-----|-------------|-------------|-------------|-------------|
| F1 | 0.82 ± 0.08 | 0.81 ± 0.09 | 0.76 ± 0.08 | 0.75 ± 0.10 |
| SHD | 13.8 ± 6.3 | 14.2 ± 6.7 | 18.5 ± 6.5 | 19.0 ± 7.8 |
While we observe a decrease in accuracy under misspecification that is worse for distributions of a different shape (normal, uniform), CASCADE still remains quite accurate. In fact, CASCADE is able to detect causal edges in the observed event sequences, if the misspecified model still provides an improved MDL-score over the noise distribution.
4. **Noise**: Lastly, we would like to answer your question on the effect of noise on our method.
The low noise assumption is only required to identify exclusively instant effects. For instant and delayed effects, Section 6.2 contains an experiment where the additional noise fraction of events is increased up to 90%. As summarized in Figure 3b), CASCADE copes well with noise and outperforms the SOTA on all noise levels.
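The claim in point 3, that a misspecified delay model can still improve the MDL score over the noise distribution, can be illustrated with a toy calculation. This is our own sketch: the MLE fit, the horizon `T`, and the uniform noise model are illustrative assumptions, not the paper's exact coding scheme:

```python
import math
import random

random.seed(0)

# Delays actually come from a misspecified (uniform) distribution.
delays = [random.uniform(0.0, 2.0) for _ in range(5000)]

# MLE fit of the exponential delay model: rate = 1 / mean(delay).
rate = len(delays) / sum(delays)

# Average codelength in nats under the fitted exponential model vs. a flat
# "noise" model that spreads events uniformly over a horizon T.
T = 10.0
nll_exp = sum(rate * dl - math.log(rate) for dl in delays) / len(delays)
nll_noise = math.log(T)  # uniform density 1/T

# Even though the exponential model is wrong, it still compresses the
# matched delays better than the noise model, so the edge would be kept.
```

The fitted-but-wrong exponential model assigns a clearly shorter average codelength than the noise baseline, which is the mechanism behind the robustness reported in the misspecification table above.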
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for the response. I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer, thank you for your response! We are glad that our rebuttal answered your questions. If not please elaborate on your remaining concerns and aid us in addressing them adequately. Thank you very much! | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed and thoughtful comments. All reviewers appreciate the proposed causal model with its “theoretical foundation based on the AMC and MDL” and “strong theoretical identifiability results”. In particular, the identifiability of instant effects, which Granger-causality-based methods cannot identify, is recognized. Finally, the proposed CASCADE algorithm, with which “extensive experiments [...] to validate the effectiveness of the proposed method” were conducted, was well received across the board.
The reviews ask for clarification regarding CASCADE’s performance under different causal models, data distributions and structures, and noise levels. We would like to respond to each question individually.
- **Causal model**: A commonly raised concern was the performance of CASCADE under a differing causal model, e.g. where one event causes multiple. To this end, we supplement an experiment with data generated by a Hawkes process, where we vary the expected number of effect events following a cause. We provide the results in the Table below.
|Events per cause | 0.4 | 0.7 | 1.0 | 1.3 | 1.6 | 1.9 | 2.3 | 2.6 | 2.8 | 3.1 |
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
|F1 | 0.69±0.13 | 0.89±0.11 | 0.86±0.10 | 0.80±0.12 | 0.79±0.12 | 0.76±0.09 | 0.75±0.11 | 0.75±0.12 | 0.75±0.11 | 0.74±0.11 |
We observe that CASCADE suffers only a slight decrease in accuracy in the one-to-many (large number of expected events) and many-to-one cases. In those settings, CASCADE models a subset of the true mechanism by matching the cause-effect events with highest likelihood. In general, for any causal model, CASCADE can identify causal edges as long as the fitted matching improves over the noise distribution.
- **Distribution misspecification**: In CASCADE, we fit exponential distributions to the matched cause-effect delays. We additionally test our method on generated data where the actual distribution is different and report the results in the table below.
| | Exponential | Poisson | Normal | Uniform |
|-----|-------------|-------------|-------------|-------------|
| F1 | 0.82 ± 0.08 | 0.81 ± 0.09 | 0.76 ± 0.08 | 0.75 ± 0.10 |
While we observe a decrease in accuracy under misspecification, CASCADE still remains accurate in detecting causal edges in the observed event sequences, as the misspecified model still provides an improved MDL-score over the noise distribution.
- **Multiple colliders**: CASCADE is able to find collider structures on real world datasets. We provide the learned DAG of the Network Alarms dataset (Section 6.3) as Figure 1 in the rebuttal PDF. On this real world benchmark, CASCADE discovers many ground-truth collider structures. Additionally, we expanded the experiment from Section 6.2 to include graphs with multiple colliders (Figure 2 in PDF). This setting too provides no problem for CASCADE, showing that our individual cause-effect matching correctly assigns effects to causes from different parents.
| # Colliders | 5 | 7 | 10 | 15 | 20 |
|-----|-----------|-----------|-----------|-----------|-----------|
| F1 | 0.97±0.01 | 0.95±0.01 | 0.91±0.01 | 0.87±0.02 | 0.82±0.01 |
- **Strong noise**: The low noise assumption is only required to identify exclusively instant effects. For instant and delayed effects, Section 6.2 contains an experiment where the additional noise is increased up to 90%. As summarized in Figure 3b) of the paper, CASCADE copes well with noise and outperforms the SOTA on all noise levels.
In general, CASCADE is robust to different delay distributions, causal mechanisms, and noise levels, and hence performs well on the variety of data-generating mechanisms found in the real-world datasets. Expanding the causal model to the many-to-one (A ∧ B => C) or suppression (A => !B) setting lies beyond the scope of this paper, but is nonetheless an interesting direction for future work, especially with regard to the identifiability of the resulting model. We will update the manuscript with the provided experiments, as well as add a discussion of the implications of our assumptions.
Pdf: /pdf/c8697ea5fc881a6f8315a09c51d24f7b687f1bc1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MC-DiT: Contextual Enhancement via Clean-to-Clean Reconstruction for Masked Diffusion Models | Accept (poster) | Summary: The paper introduces MC-DiT, a training paradigm for Diffusion Transformers (DiT) in the field of generative diffusion models for image generation. By utilizing the proposed clean-to-clean mask-reconstruction approach, the model can better leverage contextual information at different noise variances.
Strengths: - The paper provides a perspective on the limitations of noisy-to-noisy masked reconstruction, supported by theoretical insight and empirical analysis.
- The method is overall reasonable.
- The performance seems good.
Weaknesses: - Will the additional two branches of DiT decoders increase the training overhead compared with other baseline methods? How about the training cost of each iteration compared with baselines?
- Comparing with MDT-XL / 2-, the improvements of MC-DiT-XL / 2-G seem to be marginal.
- How is the natural information measured in Fig. 1?
- Will the code be released?
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness part.
Besides, why is the IS of MC-DiT-XL / 2 much higher than other competitors?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Training overhead of extra branches.
With the two additional branches, the training cost of MC-DiT is slightly higher than that of MaskDiT and MDT. As shown in the table below, MC-DiT-XL/2 has more parameters, since the EMA branches introduce an additional 56M parameters; however, these account for only 7.6\% of the total. MC-DiT-XL/2 has higher FLOPs and lower training speed than MaskDiT-XL/2, while MDT-XL/2 has higher FLOPs and lower training speed than MC-DiT-XL/2 due to the difference in mask ratio (50% in MC-DiT vs. 30% in MDT).
In short, the additional overhead of MC-DiT is relatively small (7.6$\%$ more parameters and $8\%$ more FLOPs), while the FID improvement is significant.
|$256\times256$|Params|FLOPs|Memory|Speed|FID|
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
|MDT-XL/2|742M|28G|20G|1.22|6.23|
|MaskDiT-XL/2|730M|24G|18G|3.09|5.69|
|MC-DiT-XL/2|786M|26G|20G|1.41|4.14|
|$512\times512$|Params|FLOPs|Memory|Speed|FID|
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
|MDT-XL/2|742M|64G|28G|0.83|-|
|MaskDiT-XL/2|730M|56G|24G|1.98|10.79|
|MC-DiT-XL/2|786M|60G|27G|1.05|9.30|
### Q2: Improvements of MC-DiT-XL/2-G over MDT-XL/2-G.
In $256\times 256$ image generation, MDT-XL/2-G and MC-DiT-XL/2-G obtain FID scores of 1.79 and 1.78, respectively. However, MDT-XL/2-G is trained for 6500K iterations, while our MC-DiT-XL/2-G requires only 2500K iterations, fewer than 40\% of those of MDT-XL/2-G. For a fair comparison, we retrain MDT-XL/2 and MDT-XL/2-G for a comparable 2500K iterations and report the results in the table below. MC-DiT-XL/2 and MC-DiT-XL/2-G outperform the retrained MDT-XL/2 and MDT-XL/2-G by evident margins. These results further demonstrate that our MC-DiT achieves superior performance with far fewer training iterations.
|Methods|Iterations|FID|sFID|IS|Prec.|Rec.|
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
|MDT-XL / 2|2500K|7.41|4.95|121.22|0.72|0.64|
|MaskDiT-XL / 2|2500K|5.69|10.34|177.99|0.74|0.60|
|SD-DiT-XL / 2 |2400K|7.21|5.17|144.68|0.72|0.61|
|MC-DiT-XL / 2|2500K|4.14|6.96|309.69|0.83|0.62|
|MDT-XL / 2-G|2500K|2.15|4.52|249.27|0.82|0.58|
|MaskDiT-XL / 2-G|2500K|2.28|5.67|276.56|0.80|0.61|
|MC-DiT-XL / 2-G|2500K|1.78|4.87|290.17|0.81|0.62|
### Q3: Measure of Mutual information in Fig. 1.
In Fig. 1, we measure the mutual information between unmasked and masked patches in both vanilla and generated images. Specifically, given a vanilla image, we patchify it via a tokenizer and apply the mask to obtain the masked and unmasked patches. Noise at different scales is added to these patches. For the 'vanilla clean and noisy images' in Fig. 1, the unmasked and masked patches are reshaped into one-dimensional tensors, and the mutual information is computed from these tensors. For the 'generated images' in Fig. 1, the unmasked patches are fed to the corresponding models for denoising, and we use the output patches for the calculation.
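The rebuttal does not specify the exact estimator, but a minimal histogram-based sketch of mutual information between two flattened patch tensors (our assumption about the estimator family, not the authors' code) could look like:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of I(X;Y) in nats between two flattened tensors.
    Illustrative sketch only; the authors' exact estimator is not specified."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y, shape (1, bins)
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
clean = rng.normal(size=4096)                    # "masked" clean patches
unmasked = clean + 0.1 * rng.normal(size=4096)   # lightly noised, highly correlated
noisy = clean + 3.0 * rng.normal(size=4096)      # heavily noised version

# Adding noise lowers the measured dependence, matching the paper's argument
# that noisy-to-noisy reconstruction has less contextual information to exploit.
assert mutual_information(clean, unmasked) > mutual_information(clean, noisy)
```

The qualitative behavior, rather than the exact values, is what Fig. 1 relies on: mutual information decays as the noise scale grows.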
### Q4: Will the code be released?
We have provided the source codes in the supplementary material for review. The overall source codes and pretrained models will be released upon acceptance.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I do not have further concerns.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your careful and insightful feedback, which has greatly improved the paper. We greatly appreciate your insights and are pleased to hear that all of your concerns have been addressed. Many thanks for your comments again. | Summary: This paper observes that reconstructing masked noisy patches from unmasked noisy patches harms contextual information extraction during the training of DiT and then proposes a novel training paradigm named MC-DiT with clean-to-clean mask-reconstruction. Two EMA branches of DiT decoders are designed to avoid model collapse.
Strengths: 1. The manuscript adequately puts forth a number of propositions and commendably supports these with ample evidence and rigorous demonstrations, fostering a robust intellectual foundation for their arguments.
2. The authors' perspective on applying a noisy-to-noisy mask reconstruction approach is convincingly articulated.
Weaknesses: 1. The presentation of generated images for visualization is rather limited in quantity, necessitating an expansion to adequately illustrate the diversity and quality of the results. It is suggested to present generated results with the resolution of $512\times 512$. This paper only provides visual results in Figure 5 with the resolution of $256\times 256$ and it also claims superiority on $512\times 512$ image generation.
2. Lack of experiment details about training time, inference time and memory usage.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Please clarify the reason why two extra EMA branches can address model collapse.
2. It is suggested to provide visual comparisons compared with other SOTA methods.
3. Investigating the impact of classifier-free guidance is recommended since it can improve the performance of many baselines such as ADM, DiT, and MaskDiT.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: They provide accurate limitations in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Visual quality comparison.
In Figures R-1 and R-2(a) in the global rebuttal file, we provide additional $256\times 256$ and $512\times 512$ images generated by our MC-DiT and compare them with the SOTA methods MaskDiT and MDT. Our generated images are more realistic and have more consistent textural structure than those of MaskDiT and MDT. For example, the 'school bus' and 'custard apple' images in Figure R-1(a) exhibit different styles while their details appear very realistic. Moreover, the 'hammer' images generated by MaskDiT and MDT have incomplete structures, whereas our MC-DiT generates images with more complete structures. The same holds for the $512\times 512$ images in Figure R-1(b): various images (e.g., 'dog', 'fox', and 'penguin') exhibit rich details and realistic styles, validating the effectiveness of our MC-DiT.
### Q2: Details about training time, inference time, and memory usage.
The table below details the implementation setup for training. We use two types of GPUs and summarize the training time, inference time, and memory usage.
|Setting|MC-DiT-B/2|MC-DiT-XL/2|MC-DiT-XL/2|
|:-:|:-:|:-:|:-:|
|Resolution|$256\times256$|$256\times256$|$512\times512$|
|Training Time|50h|586h|623h|
|Inference Time (50K images)|12h|8h|15.2h|
|GPUs|2$\times$RTX-3090 GPUs|4$\times$V100 GPUs|4$\times$V100 GPUs|
|Batch Size|256$\times$2|256$\times$4|128$\times$4|
|Memory Usage per GPU|17GB|20GB|27GB|
Besides, we compare the training cost (parameters, FLOPs, memory usage, and training speed) on 4$\times$V100 GPUs in the tables below, where training speed denotes the number of iterations per second. The training speed of MC-DiT is slightly slower than that of other methods due to the two EMA branches. However, the inference speed of MC-DiT is similar to that of MaskDiT, since the two EMA branches are removed during inference. The additional overhead of MC-DiT is relatively small (7.6$\%$ more parameters and $8\%$ more FLOPs), while the FID improvement is significant.
|$256\times256$|Params|FLOPs|Memory|Speed|FID|
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
|MDT-XL/2|742M|28G|20G|1.22|6.23|
|MaskDiT-XL/2|730M|24G|18G|3.09|5.69|
|MC-DiT-XL/2|786M|26G|20G|1.41|4.14|
|$512\times512$|Params|FLOPs|Memory|Speed|FID|
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
|MDT-XL/2|742M|64G|28G|0.83|-|
|MaskDiT-XL/2|730M|56G|24G|1.98|10.79|
|MC-DiT-XL/2|786M|60G|27G|1.05|9.30|
### Q3: Two extra EMA branches for addressing model collapse.
Model collapse occurs when the main branch considers only clean-to-clean mask-reconstruction of masked clean patches and ignores the denoising of unmasked noisy patches. We propose two EMA branches to balance these two tasks for the main branch. The noisy EMA branch realizes the noisy-to-clean mapping for denoising, and the clean EMA branch realizes the clean-to-clean mapping for mask-reconstruction (mask ratio 0\%). The two EMA branches constrain the output of the main branch (minimizing the MSE loss between the outputs of the main branch and the EMA branches) via three hyper-parameters, which balances the denoising task against the clean-to-clean mask-reconstruction task.
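As a rough numpy sketch of this balancing mechanism (all names, the EMA decay value, and the exact placement of the three weights $\lambda_1$, $\lambda_2$, $\lambda_3$ are our illustrative assumptions, not the released code):

```python
import numpy as np

def ema_update(ema_params, main_params, decay=0.999):
    """Exponential moving average of the main-branch parameters:
    the EMA branches track the main branch but receive no gradients."""
    return [decay * e + (1.0 - decay) * p for e, p in zip(ema_params, main_params)]

def total_loss(recon_loss, out_main, out_noisy_ema, out_clean_ema,
               lam1=0.1, lam2=0.1, lam3=0.05):
    """One plausible combination of the mask-reconstruction loss with the
    two EMA-consistency MSE terms; the three lambdas stand in for the
    paper's hyper-parameters (how each term is weighted is our guess)."""
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    return (lam1 * recon_loss
            + lam2 * mse(out_main, out_noisy_ema)   # pull toward noisy-to-clean branch
            + lam3 * mse(out_main, out_clean_ema))  # pull toward clean-to-clean branch
```

The point of the design is that the gradient of `total_loss` reaches only the main branch, while `ema_update` keeps the two auxiliary branches as slowly moving, collapse-resistant targets.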
To verify this, the table below reports the FID score of the main branch when trained with only noisy or only clean patches as input. The FID of the main branch with unmasked noisy patches only is higher than that with masked clean patches only, indicating the model collapse problem. With the noisy and clean branches, the FID score of the main branch declines markedly, validating the effectiveness of the EMA branches.
|Branches|FID|
|:-------:|:-------:|
|Main Branch|22.10|
|w Noisy Branch|19.26|
|w Clean Branch|18.88|
|Main Branch (unmasked noisy patch only)|25.72|
|Main Branch (masked clean patch only)|23.69|
|Main Branch (unmasked noisy patch only) w Noisy Branch|19.84|
|Main Branch (unmasked noisy patch only) w Clean Branch|19.57|
### Impact of classifier-free guidance.
We have reported the $256\times256$ image generation results with classifier-free guidance (CFG) in Table 1 of the manuscript, indicating the superior performance of MC-DiT. Besides, we supplement the $512\times512$ results in the table below. CFG benefits our MC-DiT by a distinct margin (4.14 vs. 1.78 for $256\times 256$ and 9.30 vs. 2.03 for $512\times 512$).
|Methods|FID|sFID|IS|Prec.|Rec.|
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
|DiT-XL / 2|12.03|7.12|105.25|0.75|0.64|
|MaskDiT-XL / 2|10.79|13.41|145.08|0.74|0.56|
|MC-DiT-XL / 2|9.30|6.28|179.58|0.76|0.53|
|DiT-XL / 2-G|3.04|5.02|240.82|0.84|0.54|
|MaskDiT-XL / 2-G|2.50|5.10|256.27|0.83|0.56|
|MC-DiT-XL / 2-G|2.03|4.87|272.19|0.84|0.56|
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the time and effort you have dedicated to reviewing our manuscript. We believe we have addressed all the concerns you raised in your review and expect your feedback sincerely.
Thank you once again for your attention. | Summary: This paper introduces MC-DiT, a novel training paradigm for Diffusion Transformers (DiT) in image generation. It addresses the limitations of current masked-reconstruction strategies, which fail to effectively extract **contextual information** due to noisy-to-noisy reconstruction. MC-DiT employs clean-to-clean reconstruction, allowing for better contextual information utilization during diffusion denoising. The authors also design dual decoder branches to prevent model collapse. Theoretical and empirical analyses validate their approach, and experiments on the ImageNet dataset show that MC-DiT achieves state-of-the-art performance in both unconditional and conditional image generation tasks.
Strengths: 1.The introduction of the MC-DiT paradigm, which utilizes clean-to-clean mask-reconstruction, represents a novel approach that addresses the limitations of existing methods in extracting contextual information.
2.The authors provide a thorough theoretical and empirical analysis, particularly focusing on mutual information, which strengthens the validity of their claims.
3. The proposed MC-DiT achieves superior results in both unconditional and conditional image generation tasks, as demonstrated by the state-of-the-art FID scores on the ImageNet dataset.
Weaknesses: 1. The paper primarily focuses on image generation using the ImageNet dataset. It remains to be seen how well the approach generalizes to other domains or datasets with different characteristics.
2. The authors should clearly elaborate on the differences between MC-DiT and other masked diffusion transformers (such as MaskGiT,SD-DiT, and MaskDiT).
Technical Quality: 3
Clarity: 4
Questions for Authors: The proposed method may still require large computational resources due to the dual-branch decoder design and the clean-to-clean reconstruction process. How to accelerate it?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors acknowledge that the training and inference speeds need improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Generalization to other domains or datasets.
We adopt the ImageNet dataset in the experiments for a fair comparison, since MaskDiT, SD-DiT, and MDT are all evaluated on it. In fact, our MC-DiT can generalize to different domains or datasets for improved image generation, since it can extract contextual information from arbitrary images. The table below compares the performance of MaskDiT and MC-DiT on the CIFAR-10 and CelebA (collected for face recognition) datasets. Due to time constraints, we train both MaskDiT and MC-DiT for 200K iterations. Experimental results show that MC-DiT outperforms MaskDiT on both datasets.
|CIFAR-10|FID|
|:--:|:--:|
|MaskDiT-B / 2|11.52|
|MC-DiT-B / 2|9.28|
|CelebA|FID|
|:--:|:--:|
|MaskDiT-B / 2|7.14|
|MC-DiT-B / 2|5.36|
### Q2: Difference between MC-DiT and other masked DiTs.
In summary, we propose clean-to-clean reconstruction for MC-DiT to achieve enhanced contextual information extraction, while existing methods (MDT, SD-DiT, and MaskDiT) are limited by noisy-to-clean and noisy-to-noisy reconstruction. Specifically, in Section 3.2 of the manuscript, we first demonstrate the limited ability of these previous methods to extract contextual information: because they introduce a noisy-patch reconstruction task into the training of DiT, their contextual information extraction weakens when the noise is large. Based on this finding, our MC-DiT leverages clean patches for the mask-reconstruction task to model contextual information effectively. We insert masked clean patches into the unmasked noisy features for reconstruction, exploiting clean contextual information to denoise unmasked patches at arbitrary noise scales. Experimental results demonstrate that MC-DiT extracts sufficient contextual information and achieves superior performance.
### Q3: Computational resources and acceleration.
We report the resource consumption of MC-DiT, MDT, and MaskDiT in Table R-1(a) in the one-page PDF. Compared with MaskDiT, our MC-DiT slightly increases the parameters and FLOPs by 7.6\% and 8\%, respectively, but yields clear FID gains of 1.5 and 1.4 for $256\times 256$ and $512\times 512$ image generation. MC-DiT could potentially be accelerated by using more layers for the DiT encoder and fewer for the DiT decoder, decreasing the decoder's overhead.
|$256\times256$|Params|FLOPs|Memory|Speed|FID|
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
|MDT-XL/2|742M|28G|20G|1.22|6.23|
|MaskDiT-XL/2|730M|24G|18G|3.09|5.69|
|MC-DiT-XL/2|786M|26G|20G|1.41|4.14|
|$512\times512$|Params|FLOPs|Memory|Speed|FID|
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
|MDT-XL/2|742M|64G|28G|0.83|-|
|MaskDiT-XL/2|730M|56G|24G|1.98|10.79|
|MC-DiT-XL/2|786M|60G|27G|1.05|9.30|
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, and I would like to keep my original rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and for taking the time to review our paper. We are glad that our rebuttal has addressed all of your concerns. Many thanks for your comments again. | Summary: The paper introduces a novel training paradigm for Diffusion Transformers (DiT) in the context of generative diffusion models for image generation. The authors propose MC-DiT, which focuses on enhancing contextual information extraction by reconstructing clean unmasked patches from clean masked patches, as opposed to the traditional noisy-to-noisy reconstruction. The method employs two complementary branches of DiT decoders to balance the use of noisy and clean patches, preventing model collapse.
Strengths: 1. The paper presents a new insight into the use of clean-to-clean reconstruction for learning contextual information in masked diffusion models, which is a significant departure from traditional noisy-to-noisy reconstruction methods.
2. The authors provide a theoretical analysis of mutual information between unmasked and masked patches, demonstrating the limitations of existing methods and the benefits of their proposed approach.
3. The introduction of two complementary DiT decoder branches to prevent model collapse is a thoughtful addition that addresses a common issue in such models.
4. The paper reports state-of-the-art results in terms of FID scores and IS scores, indicating that the proposed MC-DiT is highly competitive with existing methods.
Weaknesses: 1. The proposed MC-DiT model may be more complex than necessary, which could potentially hinder its adoption and implementation in practical applications.
2. The paper acknowledges that the training and inference speed of MC-DiT needs to be improved, which suggests that the current approach may have efficiency issues. The authors should provide specific comparisons to demonstrate that these efficiency sacrifices are worth the performance gains.
3. The paper could benefit from a more detailed comparative analysis with other state-of-the-art methods, including feature visualization, to better understand the advantages of MC-DiT.
4. Can the author explain whether this specific context information is pixel-wise information or semantic information? And their role in the overall framework?
5. Ablation experiments can be further supplemented and improved. For example, the hyperparameters in Tab.5 can be further observed to have an impact. The current scaling still has some ambiguity.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please refer to the Weakness Section.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have mentioned the limitations in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Complexity.
Our MC-DiT has the same main branch and training objective as existing methods such as MaskDiT, MDT, and SD-DiT. The additional complexity of MC-DiT lies in the two extra EMA branches and unmasked tuning.
1) The two extra branches increase the parameters by only 7.6\% and the FLOPs by 8\%, as shown in Figure R-1(a) in the one-page PDF. Thus, the training cost of MC-DiT is similar to that of MaskDiT, MDT, and SD-DiT. Moreover, the two extra EMA branches are dropped during inference, incurring no extra cost, so MC-DiT has the same architecture as DiT at inference time.
2) Unmasked tuning reduces the training-inference discrepancy, as demonstrated by MaskDiT, and is adopted in our MC-DiT. However, it can be removed to reduce complexity at little cost in FID: the table below shows that FID increases by only 0.41 when unmasked tuning is removed.
|Strategy|Iterations|FID|
|:---:|:---:|:---:|
|MaskDiT-XL/2 w unmasked tuning|1300K|12.15|
|MC-DiT-XL/2 w unmasked tuning|1300K|7.92|
|MC-DiT-XL/2 w/o unmasked tuning|1300K|8.33|
### Q2: Training and inference speed.
First of all, we would like to clarify that, in the limitations, we refer to improving the inference speed of diffusion models in general rather than the inference speed of MC-DiT relative to existing DiT methods. In fact, MC-DiT has the same inference speed as MaskDiT, since the two extra branches are removed during inference. Regarding training speed, MC-DiT is slower than MaskDiT due to the two extra EMA branches. However, MC-DiT yields evidently lower FID scores than MaskDiT (by 1.5 for $256\times 256$ and 1.4 for $512\times512$ image generation).
### Q3: More detailed comparative analysis
We further visualize the feature maps extracted by MC-DiT and MaskDiT at different noise scales on CIFAR-10 in Figure R-2(b) in the one-page PDF. A larger noise variance denotes larger-scale noise. Our MC-DiT extracts proper shapes at various noise scales, while the features extracted by MaskDiT are messy at large noise scales. This further supports the motivation and effectiveness of our paper: clean-to-clean mask-reconstruction promotes learning sufficient contextual information.
### Q4: Specific context information.
The contextual information is semantic information that denotes the relationship between the current patches and other patches. For example, the patches of the legs are correlated with the patches of the body in a cat image. In fact, MAE employs mask-reconstruction to extract contextual information from clean images. In this paper, we introduce mask-reconstruction into the denoising process to extract contextual information. Specifically, the masked clean patches contain contextual information about the clean target patches; using them to reconstruct the target clean patches helps the model understand shape and context.
### Q5: Ablation on hyperparameters.
Following MaskDiT, we select 0.01, 0.1, and 1.0 as the initial scaling values of the three hyperparameters and supplement additional values for the ablation study. The tables below evaluate various values of the three hyperparameters; the best FID is still obtained with $\lambda_1=0.1$, $\lambda_2=0.1$, and $\lambda_3=0.05$.
|$\lambda_1$|0|0.01|0.03|0.05|0.07|0.09|0.1|0.3|0.5|0.7|0.9|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|FID|43.23|40.99|39.23|38.44|37.95|36.53|35.20|35.98|36.74|36.91|37.52|38.97|
|$\lambda_2$|0|0.01|0.03|0.05|0.07|0.09|0.1|0.3|0.5|0.7|0.9|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|FID|38.83|36.15|36.02|36.46|35.99|36.07|35.20|35.34|36.18|37.26|35.98|37.54|
|$\lambda_3$|0|0.01|0.03|0.05|0.07|0.09|0.1|0.3|0.5|0.7|0.9|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|FID|37.77|37.25|36.63|35.20|36.07|37.93|35.46|35.88|37.26|36.19|37.40|36.35| | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments. We appreciate the reviewers' recognition of our work, including **excellent motivation (ZkfZ, RKJM, Ngpd, cs4r and gqjf), reasonable method (gqjf), thorough theoretical analysis (cs4r, Ngpd, YSTZ), state-of-the-art results (Ngpd, gqjf, cs4r and ZkfZ), along with clear writing (YSTZ, RKJM)**. The reviewers also raised some issues, including training efficiency (TpTX, cs4r, RKJM and ZkfZ) and unmasked tuning (ZkfZ, YSTZ). According to their comments, we have addressed each concern raised by the reviewers in a point-by-point manner.
Pdf: /pdf/0fbdee5f225e8c79d454dd08f76f904c446b1c90.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this work, the authors reveal the issues of Diffusion transformers of having semantic inconsistency as they fail to learn the contextual information. Based on their theoretical analysis, they proposed a novel training paradigm to fully learn contextual information with clean-to-clean mask reconstruction. The paper is well organised and written.
Strengths: The authors have a comprehensive understanding of issues and the state-of-the-art. In terms of originality and quality, the work is technically sound in general. The analysis and written are clear in general.
Weaknesses: Please see the list of questions for improvement and clarification on some of the aspects.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors gave a thorough analysis on the issues of diffusion transformers in section 3. However, the motivation for the proposed MC-DiT to solve the issues is not very clear.
2. The steps mentioned in Section 3.3 are not so clear and are probably not cohesive with Figure 2. For instance, it is mentioned that 'the unmasked noisy patches x_t^{1} are fed into the DiT encoder for extraction', but it seems those unmasked noisy patches go to the DiT decoder (?). It might be better to put the denotations on Figure 2 as well to guide readers.
3. In Table 1, the results using classifier-free guidance were reported for ImageNet-256x256 generation. However, for ImageNet-512x512 generation, they are not reported. Is there a particular reason behind this?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors mentioned that the training speed and inference speed still need to be improved and a possible future mitigation on the issue.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Motivation for MC-DiT
In Section 3 of the manuscript, we provide sufficient analysis to show that reconstructing masked noisy patches from unmasked noisy patches is insufficient for contextual information extraction. In detail, the information used in noisy-to-noisy patch reconstruction lies only in $\mathcal{I}(x_0^1;x_t^2)$ or $\mathcal{I}(x_t^1;x_t^2)$, both of which are less than $\mathcal{I}(x_0^1;x_0^2)$. Thus, our motivation is to directly model $\mathcal{I}(x_0^1;x_0^2)$ via clean-to-clean patch reconstruction, which reduces the impact of the noise and learns sufficient contextual information. This is realized by inserting masked clean patches into the unmasked noisy features for reconstruction of the unmasked clean patches. The contextual information flows from the masked clean patches to the unmasked clean patches, which corresponds to $\mathcal{I}(x_0^1;x_0^2)$.
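The ordering of these mutual-information terms follows from a standard data-processing argument (our summary of the reasoning, not text quoted from the paper): since noising $x_0^2$ into $x_t^2$ is independent of $x_0^1$, the variables form a Markov chain, and likewise for the noising of $x_0^1$:

```latex
% x_0^1 -> x_0^2 -> x_t^2 is a Markov chain (noise independent of x_0^1),
% and x_t^1 is a noised version of x_0^1, so applying the
% data-processing inequality twice:
\mathcal{I}(x_t^1; x_t^2)
  \;\le\; \mathcal{I}(x_0^1; x_t^2)
  \;\le\; \mathcal{I}(x_0^1; x_0^2)
```

Under this reading, clean-to-clean reconstruction targets the largest term in the chain.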
### Q2: Denotations on Figure 2.
Thank you for the comment; we will put the denotations on Figure 2 in the revised version. Here, we explain below why $x_t^{1}$ is fed into the DiT decoder.
In the main branch, the unmasked noisy patches are fed into the DiT encoder, while all the noisy patches are directly inserted into the EMA DiT decoder to avoid model collapse, as shown in Figure 2 of the manuscript. The reasons are twofold: (1) Efficiency. Applying only the DiT decoder in the EMA branches adds few extra parameters and keeps inference fast, whereas an EMA DiT encoder would slow down the entire EMA branches. (2) Effectiveness. The DiT decoder is trained to reconstruct masked clean image patches in the main branch, so directly feeding image patches to the EMA DiT decoder does not lead to poor denoising results. As shown in the table below, adding an EMA DiT encoder introduces an extra 669M parameters, while the FID score decreases by only 1.35. Thus, to balance parameters and performance, we use only the DiT decoder in the EMA branches.
|Branch|Params|FID|
|:--:|:--:|:--:|
|DiT Decoder|56M|18.88|
|DiT Decoder+DiT Encoder|725M|17.53|
### Q3: Classifier-free guidance for ImageNet-512x512 generation.
Following your suggestion, we provide the results for ImageNet-512x512 generation with classifier-free guidance in Table R-1(b) in the one-page PDF. Compared with MaskDiT, our MC-DiT consistently reduces FID by 0.47, demonstrating its effectiveness.
Strengths: 1. Sufficient theoretical analysis
2. The overall writing of the paper is logically clear
Weaknesses: 1. There are errors in the description of parts of the paper, e.g., x1 in lines 107 and 109 of the introductory section of Masked AutoEncoders is described as masked and unmasked, respectively.
2. Visualization of experimental results is indeed missing, and only quantitative experimental results exist in the body of the paper.
3. Using the training strategy in the paper, although it can improve the results, it is not possible to conclude the size of the contribution of the training strategy to the final experimental results, as parameter tuning is still required in the testing phase.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Why not release the qualitative results as proof of the effectiveness of the strategy?
2. How much does parameter tuning in the final testing phase affect the degree of merit of the final result? How can it be shown that it is the training strategy that is at work and not the parameter tuning?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The methodology proposed in the paper needs to be fine-tuned to the task during the inference phase, otherwise good results may not always occur. Further, it can be found that similar strategies that utilize unknown information in the inference phase for training can have application limitations, which are not conducive to extending and applying the strategy to tasks that do not have access to potentially clear images and information.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Writing error in line 107.
We thank the reviewer for pointing out the writing error. The unmasked patches $x_1$ and masked patches $x_2$ in line 107 are corrected to $x_1=x[m]$ and $x_2=x[1-m]$.
### Q2: Visualization results.
We have provided visualization results of generated $256\times 256$ images in Figure 5 in the supplementary materials, which show that images generated by MC-DiT achieve vivid details and diverse styles. In this rebuttal, we further provide more visualization results for generated $256 \times 256$ and $512\times 512$ images in Figure R-1 in the global rebuttal file. The 'school bus' and 'custard apple' images in Figure R-1(a) exhibit different styles while their details appear very realistic. The same holds for the $512\times 512$ images in Figure R-1(b): various images (e.g., 'dog', 'fox', and 'penguin') exhibit rich details and realistic styles, validating the effectiveness of our MC-DiT.
### Q3: Parameter tuning required for testing.
We would like to emphasize that it is the enhanced contextual information extraction enabled by clean-to-clean patch reconstruction, rather than parameter tuning, that contributes to the superior performance of our MC-DiT. Existing masked diffusion models like MaskDiT and SD-DiT require parameter (unmasked) tuning to decrease the training-inference discrepancy. Different from these models, MC-DiT leverages clean-to-clean patch reconstruction to enhance contextual information extraction for generation, beyond using unmasked tuning to decrease the training-inference discrepancy. Experimental results further demonstrate the effectiveness of the clean-to-clean patch reconstruction in MC-DiT.
**Experimental evaluations.** To validate the effectiveness of the clean-to-clean patch reconstruction, we compare MaskDiT-XL/2 (with unmasked tuning) and MC-DiT-XL/2 with and without unmasked tuning. All the models are trained for 1300K iterations. As reported in the table below, MC-DiT-XL/2 both with and without unmasked tuning outperforms MaskDiT-XL/2 by evident margins of 4.23 and 3.82 in FID, respectively. By contrast, unmasked tuning alone only leads to a reduction of 0.41 in FID. These results demonstrate that the performance gain of our MC-DiT mainly comes from the enhanced contextual information extraction of clean-to-clean patch reconstruction rather than from unmasked tuning.
|Strategy|Iterations|FID|
|:---:|:---:|:---:|
|MaskDiT-XL/2 w unmasked tuning|1300K|12.15|
|MC-DiT-XL/2 w unmasked tuning|1300K|7.92|
|MC-DiT-XL/2 w/o unmasked tuning|1300K|8.33|
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the time and effort you have dedicated to reviewing our manuscript. We believe we have addressed all the concerns raised in your review and sincerely look forward to your feedback. Thank you once again for your attention.
---
Rebuttal 2:
Comment: Thank you for the author's response. I suggest including the visualization results and ablation experiments in the final version of the paper. I have raised my score accordingly.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for your comprehensive evaluation and the time you dedicated to reevaluating our work! We will include the experiments and visualization provided in the rebuttal in the final version of our paper. Many thanks for your suggestions again! | Summary: This paper critiques previous masked-reconstruction strategies in DiT training for their poor contextual information extraction, attributing this to noisy-to-noisy reconstruction. The authors theoretically and empirically validate that this approach limits mutual information between unmasked and masked patches. To address this, they propose a new training paradigm, MC-DiT, which uses clean-to-clean mask-reconstruction combined with diffusion denoising at varying noise levels. To prevent model collapse, they design two complementary DiT decoder branches to balance the reliance on noisy and clean patches. Model collapse would happen in this context due to excessive reliance on clean patches for reconstruction, leading to insufficient utilization of noisy patches and imbalanced training. Extensive experiments on the ImageNet dataset show that MC-DiT achieves state-of-the-art performance in both unconditional and conditional image generation, with faster convergence.
Strengths: - The paper motivates the need for their research in their introduction and is an interesting idea.
- The paper adds to the mathematical discussion surrounding image generation using diffusion using well understood mutual information metric prevalent in other areas of computer vision.
- Presents experimental evaluation, with section on reproducible details and supplementary materials.
Weaknesses: - While reading the article, many questions arise, which affects the reading experience.
- The main weakness of the paper is that on many occasions claims are made that are intuitive but are attributed to an equation / proposition which does not (at least not immediately) show the claim to be true. See the questions for more details.
- Some experiment details are unclear (in questions).
- Table 1 is a bit difficult to read here with the number of methods and it is not obvious how the horizontal lines are drawn, i.e. what makes them different from other quadrants. I think there is enough space for a column or two to add a bit more detail instead of adding them all to the name of the method.
- Figure 3 (a) is used to showcase speed of convergence. However, I think the distinction between convergence and a convergence to lower loss should be made. All 3 lines more or less flatten at the same time, you could actually argue the red and orange line are flattening faster. I agree the blue line is lower, but that does not mean it has converged faster, only converged to a lower loss. This also leads to a second point, a lower loss here does not necessarily mean a more performant model. As you notice in your own experiments, you require fine-tuning to make the output desirable. Therefore, I disagree that the model converges faster on the whole.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Where can I see: Line 38 "Despite superior performance over vanilla DiT, they are deficient in exploiting contextual information by neglecting different noise scales in different steps of diffusion process." Is there a citation which discusses this is important, or this concluded from your Figure 1 and table of results Table 1?
- It is not obvious from equations 5 and 6 that Line 163: "With the growth of $t$, the KL divergence terms in (5) and (6) increase due to larger noise perturbation on $x^1_0$ and $x^2_0$" should be true. This can be understood intuitively, but there is no "decay" term with respect to $t$ in these equations to suggest it. Can this be formalised with respect to the strength of the Gaussian noise $n$? Also, I do realise that due to the non-negativity of KL divergence the two expectation terms subtract from it to make the mutual information smaller, but I do not see how this holds, say, between $t$ and $t+1$.
- Line 216: how are the 2 branches of DiT trained in the EMA fashion here (student-teacher, or do they also collect gradients)?
- Line 203: "$\mathcal{I}(x^1_0; x^2_0)$ is much higher", the **much** part is not clear from Proposition 2.
- Figure 3, is the training loss that is logged for all the models the same? i.e. $\mathcal{L}_{\text{clean}}$? or for your method is it the composite loss?
- Related to Figure 3, when we talk about speed of convergence in terms of iterations it does not say anything about the wall clock time (or FLOPs or Memory) that an iteration takes. In this adapted method, we do x3 forward passes through the DiT decoder, therefore how do the wall clock times (or FLOPs or Memory) compare? From a practical standpoint, this should be clear. Hypothetically, do you also expect the other methods to make up the difference in performance if they were trained for a proportional time longer.
- Figure 3 a and b, why is the plot only shown for different number of total iterations?
- Figure 3b, are these metrics calculated before or after fine-tuning for your method?
- Line 309 in Limitations. Why does the inference speed need to be improved? Is the model inferred differently and requires more steps?
*Minor Typos*
- Line 13: MDT mentioned before it is defined, although this is clear from the citation
- Equation 1: it was not clear that $\mathcal{L}_{\text{asym}}$ was defined as the expectation term; this led to some confusion in the discussion later in Proposition 3.
- Proposition 2, Equation 6 should end with a full stop.
- Equations 8, 9, 10, 11: brackets are not matched; duplicate second closing brackets?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Questions listed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1:Claim in Line 38
We provide both theoretical and empirical evidence in Proposition 2 and Figure 1(a) in the manuscript to support the claim. We consider the mutual information $\mathcal{I}(x_0^1;x_0^2)$ between unmasked patches $x_0^1$ and masked patches $x_0^2$ as the contextual information.
Proposition 2 points out that previous methods only learn $\mathcal{I}(x_0^1;x_t^2)$ and $\mathcal{I}(x_t^1;x_t^2)$, which is insufficient for $\mathcal{I}(x_0^1;x_0^2)$. Figure 1(a) demonstrates that the mutual information learned by previous methods (MaskDiT, DiT and MDT) all declines greatly as the noise scale increases, indicating that these methods struggle to learn contextual information under different noise scales.
### Q2:Equations(5) and (6) and Claim in Line 163.
The details of the KL divergence in Proposition 2 are as follows:
$$
E_{p(x_0^2)}E_{p(x_t^2|x_0^2)}E_{p(x_0^1|x_0^2)}\log \left[\frac{p(x_0^1|x_t^2)}{p(x_0^1|x_0^2)}\right] \\
=E_{p(x_0^2)}E_{p(x_t^2|x_0^2)}E_{p(x_0^1|x_0^2)}\log \left[\frac{p(x_t^2|x_0^1)}{p(x_0^2|x_0^1)}\times \frac{p(x_t^2)}{p(x_0^2)}\right] \\
\approx E_{p(x_0^2)}E_{p(x_t^2|x_0^2)}E_{p(x_0^1|x_0^2)} \log \left[\frac{p(x_0^2|x_0^1)+p(n|x_0^1)}{p(x_0^2|x_0^1)}\times \frac{p(x_0^2)+p(n)}{p(x_0^2)}\right] \\
$$
where $x_t^2=x_0^2+n$ with $n\sim\mathcal{N}(0,t^2I)$. We approximate $p(x_t^2)\approx p(x_0^2)+p(n)$, since $p(x_t^2)$ is a Gaussian distribution with mean $x_0^2$ and variance $t^2$. As $t$ increases, the KL divergence in Proposition 2 increases, and the mutual information $\mathcal{I}(x_0^1;x_t^2)$ and $\mathcal{I}(x_t^1;x_t^2)$ exhibit a larger difference from $\mathcal{I}(x_0^1;x_0^2)$.
### Q3:Training of the 2 branches of DiT
The parameters of the EMA decoders are initialized with those of the DiT decoders and are updated in the EMA fashion according to the parameters of the DiT decoders:
$$\theta_{ema}=\alpha \times \theta_{ema}+(1-\alpha)\times \theta_{dec},$$
where $\alpha$ denotes the weight coefficient. The two EMA decoders are updated only via this EMA rule, without gradient updates.
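For concreteness, the EMA update rule above can be sketched in a few lines; this is a minimal illustration over plain dicts of floats, and the function name and data layout are our own, not from the paper:

```python
def ema_update(ema_params, dec_params, alpha=0.999):
    """Move EMA decoder parameters toward the trained DiT decoder parameters:
    theta_ema = alpha * theta_ema + (1 - alpha) * theta_dec.
    Parameters are represented as plain dicts of floats for illustration."""
    return {k: alpha * ema_params[k] + (1 - alpha) * dec_params[k]
            for k in ema_params}

# Toy usage: called once after each optimizer step on the DiT decoder.
ema = {"w": 1.0}
dec = {"w": 0.0}
ema = ema_update(ema, dec, alpha=0.9)  # "w" moves from 1.0 to 0.9
```

A large $\alpha$ makes the EMA decoders change slowly, which is what allows them to act as stable targets without collecting gradients.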
### Q4:Much part of $\mathcal{I}(x_0^1;x_0^2)$
We demonstrate in Proposition 2 that the difference between $\mathcal{I}(x_0^1;x_0^2)$ and $\mathcal{I}(x_0^1;x_t^2)$ lies in the KL divergence under various noise scales, and we further verify the theoretical results in Figure 1(a). Figure 1(a) shows that, as the noise grows, the difference between $\mathcal{I}(x_0^1;x_0^2)$ (red line) and $\mathcal{I}(x_0^1;x_t^2)$ (gray and yellow lines) becomes larger. As the noise variance increases from 0.0 to 1.0, the mutual information of MaskDiT and MDT declines by about 90% (1.95 vs 0.21), while the mutual information of the vanilla noisy image only declines by about 18% (2.50 vs 2.07). This decline corresponds to the KL divergence in Proposition 2.
### Q5:Training loss in Figure 3
In Figure 3, $L_{clean}$ in Equation (8) is adopted as the training loss for all the models. Figure 3 shows that our MC-DiT converges to a lower value than other methods in terms of $L_{clean}$ and demonstrates the effectiveness of clean-to-clean mask reconstruction.
### Q6:Different numbers of total iterations in Figures 3a and b.
To compare training performance, we train MaskDiT and DiT for 300K iterations in Figures 3a and 3b due to the substantial time and GPU resource overhead, and we directly use the training curve of our already-trained MC-DiT for the evaluation. MC-DiT is trained for 400K iterations for a fair comparison with other methods. In fact, the training curves over the first 300K iterations show that MC-DiT clearly decreases the training loss and FID score in comparison to MaskDiT and DiT, which implies the improvements of our MC-DiT.
### Q7:Metrics in Figure 3b.
The FID reported in Figure 3(b) is calculated after unmasked tuning for MaskDiT and MC-DiT. It is consistent with the results reported in Table 3 in the manuscript.
### Q8:Inference speed in limitations
We intend to improve the inference speed of the diffusion model, which requires multiple steps for image generation, since MC-DiT is based on the diffusion model and shares the same architecture as MaskDiT. In fact, MC-DiT does not infer differently or require more steps, and it yields the same time consumption in the inference stage, since the EMA branches and the noisy target are removed during testing.
### Q9:Evaluation of speed of convergence.
We follow Figure 4 in SD-DiT to evaluate the speed of convergence per iteration in Figure 3. Besides, as for the wall-clock cost, we report the FLOPs and memory used by MC-DiT in each iteration in Table R-1(a) in the one-page PDF. MC-DiT performs 3 forward passes, so the FLOPs and memory are calculated as the sum over the 3 forward passes. The training speed of MC-DiT is slower than MaskDiT, leading to a longer convergence time. However, MC-DiT converges to a lower loss and achieves a superior FID score compared to MaskDiT. Meanwhile, thanks to the lightweight DiT decoder, MC-DiT incurs only small extra FLOPs and memory in the training stage.
The loss curve of MaskDiT may appear not to decrease because of its small decrease magnitude after 100K iterations. Moreover, the FID score of previous methods does not decline faster than that of MC-DiT, according to the decrease magnitude in Figure 2(b) of the manuscript.
### Q10:Speed of convergence shown in Figure 3(a).
Although convergence to a lower loss does not necessarily imply faster convergence in terms of iterations, the primary focus of our analysis is the overall effectiveness of the model. That the blue line achieves a lower loss, despite a similar iteration count for flattening, highlights the model's efficiency in reaching a more optimal solution.
Besides, the loss reported in Figure 3(a) denotes the MSE loss $\mathcal{L}_{clean}$. Thus, a lower MSE loss means the generated clean patches are more similar to the ground truth, indicating a more performant model. Moreover, we report the results with and without unmasked tuning in the table; our MC-DiT outperforms MaskDiT even without unmasked tuning.
---
Rebuttal 2:
Comment: Thank you for the clarifications and my questions have been sufficiently answered. I have updated my recommendation for the paper now.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for acknowledging the improvements made to the paper and reconsidering the score! We greatly appreciate your thoughtful evaluation and the time you took to reassess our work. | null | null |
Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving | Accept (poster) | Summary: The paper introduces **LeapAD**, an interesting paradigm for autonomous driving inspired by human cognitive processes, addressing the limitations of prevailing data-driven methods in complex scenarios. LeapAD incorporates a dual-process decision-making module consisting of an Analytic Process (System-II) for logical reasoning and experience accumulation, and a Heuristic Process (System-I) for quick, empirical decision-making based on the learned knowledge from System-II. By emulating human attention to focus on critical objects, LeapAD simplifies environmental interpretation and mitigates decision-making complexities. The system is tested in the CARLA simulator, demonstrating superior performance over camera-only methods with less labeled data. The Heuristic Process shows continuous improvement through a reflection mechanism and a growing memory bank, indicating the effectiveness of the dual-process approach.
Strengths: The paper presents several notable strengths across dimensions of originality, quality, clarity, and significance:
### Originality
1. **Dual-Process Decision-Making**: The combination of an Analytic Process (System-II) and a Heuristic Process (System-I) emulates human cognitive functions, offering a biologically inspired framework for autonomous driving.
### Quality
I think the quality is good.
1. **Continuous Learning**: The reflection mechanism and growing memory bank enable continuous learning and improvement, showcasing the adaptability of the proposed system.
### Clarity
The paper is well-written, with clear and concise explanations of complex concepts. The dual-process framework and its components are described in detail, making the methodology accessible to a broad audience.
### Significance
**Advancing the Field**: By introducing a dual-process decision-making framework, the paper opens avenues for research in autonomous driving and artificial intelligence, potentially influencing future developments in the field.
Weaknesses: While the paper presents some interesting contributions, there are areas where improvements could be made:
### Methodological Concerns
While I appreciate the design of the Analytic Process and the Heuristic Process, does the paper clearly distinguish between the two? My understanding is that the Analytic Process uses LLMs, while the Heuristic Process uses a lightweight language model. Why can the latter be called the Heuristic Process? It would be better to clearly state why these can be called the Heuristic Process and the Analytic Process.
### Experimental Limitations
1. **Quantitative Metrics**:
The paper's experimental results are primarily based on the CARLA simulator, lacking real-world experiments. CARLA scenarios are still too simple. It would be better to report results that can comprehensively evaluate the performance of LeapAD, such as using the real-world dataset nuScenes.
### Clarity and Presentation
1. **Technical Details**:
This paper is based on Qwen VLM. It is not clear whether the performance improvement is due to this Qwen VLM or the two-system design. It would be better to include more ablation studies to explore the influence of VLMs, such as LLaVa.
By addressing these weaknesses, the authors can provide a more thorough and robust evaluation of LeapAD.
Technical Quality: 3
Clarity: 3
Questions for Authors: There are areas where improvements could be made:
### Methodological Concerns:
How does the paper distinguish between the Analytic Process and the Heuristic Process? How are these processes defined, and why is the Heuristic Process called such if it uses a lightweight language model?
### Experimental Limitations:
Can results be reported to evaluate the performance of LeapAD using a real-world dataset like nuScenes?
### Clarity and Presentation:
It is not clear whether the performance improvement is due to Qwen VLM or the two-system design. Can you report ablation studies to explore the influence of VLMs, for example, also try LLaVa?
By addressing these questions, the authors can provide a more thorough and robust evaluation of LeapAD.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the limitations are discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thank you for your constructive comments. We provide discussions and explanations about your concerns as follows.
**Q1:** The paper should clearly distinguish between the Analytic Process and Heuristic Process? How are these processes defined and why is the Heuristic Process called such if it uses a lightweight language model?
**A1**: As explained in the introduction, our approach is inspired by the dual-process theory of human intelligence. The distinction between the Analytical Process and Heuristic Process lies primarily in their roles within our system. The Analytical Process is rational, slow, and excels in logical reasoning and creativity across various domains, while the Heuristic Process is quicker, empirical, and domain-specific. Although we use a lightweight language model, it performs a function in our dual-process decision module that aligns with the role of the Heuristic Process.
**Q2:** The paper's experimental results are primarily based on the CARLA simulator, lacking real-world experiments. CARLA scenarios are still too simple. It would be better to report results that can comprehensively evaluate the performance of LeapAD, such as using the real-world dataset nuScenes.
**A2:** It is important to note that our primary focus is on exploring continuous learning in **closed-loop autonomous driving** based on a dual-process approach. Closed-loop autonomous driving involves interactions between agents and the environment, while publicly available datasets like nuScenes only provide open-loop evaluations. The open-loop evaluations show certain inherent limitations: (1) the current decisions do not influence subsequent navigation, making it impossible to assess cumulative errors; (2) there is no interaction between agents, leading to a lack of dynamic behavior; and (3) there is no global closed-loop evaluation metric. We believe that closed-loop experiments in real-world environments are primarily constrained by the availability of high-fidelity simulators. Currently, there is no well-established high-fidelity simulator, but developing such a simulator is an area we are actively working on.
**Q3:** This paper is based on Qwen VLM. It is not clear whether the performance improvement is due to this Qwen VLM or the two-system design. It would be better to include more ablation studies to explore the influence of VLMs, such as LLaVa.
**A3:** Our scene understanding module (Qwen-VL) and decision-making module (dual-process design) are relatively independent components. Qwen-VL provides scene descriptions, which are fed into our dual-process decision-making module for driving reasoning and decision-making. Directly applying Qwen-VL for the decision-making process does not work well; we experimentally found that its output cannot align well with the data format required for decision reasoning.
Empirically, accurate scene understanding positively correlates with the effectiveness of subsequent decision-making. However, since our focus is on exploring the dual-process approach for autonomous driving rather than scene understanding per se, we have not extensively investigated the network architecture of the VLM. We selected Qwen-VL as the scene understanding module due to its demonstrated strengths in visual understanding and grounding on various public benchmarks, aligning well with our requirements.
As suggested by the reviewer, we also added ablation studies to explore the influence of Qwen-VL and LLaVA. This includes scene understanding evaluation results on the Rank2Tell (real) and CARLA (simulated) datasets, as well as the closed-loop performance, detailed below. We provide Grounded scores, including precision, recall, and F1 score, to assess the models' grounding performance, as well as Chat scores, including a language score (ROUGE) and a GPT score (GPT-4-turbo), to evaluate the models' reasoning and question-answering capabilities. For the closed-loop experiments, we test performance on the first eight routes of the Town05 Short benchmark. From these results, it is evident that Qwen-VL exhibits superior grounding abilities and a better DS score in the closed-loop setting.
(1) Evaluations on the collected CARLA data.
| VLMs| Precision | Recall | F1 Score | ROUGE | GPT Score |
| --- | --- | --- | --- | --- | --- |
| LLaVA-1.5-7B | 34.95 | 32.00 | 33.41 | 78.04 | 58.09 |
| Qwen-VL-7B | 51.41 | 47.14 | 49.18 | 83.24 | 63.01 |
(2) Evaluations on the Rank2Tell test dataset.
| VLMs | Precision | Recall | F1 Score | ROUGE | GPT Score |
| --- | --- | --- | --- | --- | --- |
|LLaVA-1.5-7B| 28.49 | 25.22 | 26.75 | 69.26 | 65.20 |
| Qwen-VL-7B | 46.70 | 37.37 | 41.52 | 70.23 | 66.59 |
(3) Closed-loop experiments on Town05 Short benchmark
| VLMs | DS | RC | IS |
| --- | --- | --- | --- |
| LLaVA-1.5-7B | 78.87 | 86.12 | 92.75 |
| Qwen-VL-7B | 88.25 | 100 | 88.25 |
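As a sanity check on the Grounded scores above, the F1 column is simply the harmonic mean of the precision and recall columns. A small illustrative helper (our own, not code from the paper) reproduces the reported values:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (both given in percent)."""
    return 2 * precision * recall / (precision + recall)

# Qwen-VL-7B on the collected CARLA data: precision 51.41, recall 47.14
print(round(f1_score(51.41, 47.14), 2))  # 49.18, matching the table
# LLaVA-1.5-7B on the same data: precision 34.95, recall 32.00
print(round(f1_score(34.95, 32.00), 2))  # 33.41, matching the table
```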
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. My concerns have been resolved.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer kf7r:
We sincerely thank you for your valuable feedback and for acknowledging our efforts.
Your time and thorough review of our work are greatly appreciated. | Summary: This paper presents LeapAD, a dual-process closed-loop autonomous driving system.
LeapAD first uses a VLM to analyze the scene by selecting and locating critical objects in the scene, and then it uses a dual-process learning approach to learn driving behaviors.
The dual-process learning system contains an Analytical Process and a Heuristic Process. The Analytical Process is strong but expensive to run. It is used to summarize the driving experience into the Memory Bank. The Heuristic Process is more lightweight and is used to generate controls to control the vehicle. The Heuristic Process is trained with data in the Memory Bank.
The Analytical Process can also reflect from collision events in previous simulation runs. It will analyze the cause of the collisions and save the knowledge in the Memory Bank.
The authors evaluated the LeapAD method in closed-loop simulation with the CARLA simulator. They used the Qwen models as the VLMs and GPT-4 for the Analytical Process.
The evaluation result shows that LeapAD surpasses the performance of the other camera-only models on the CARLA Town05 benchmark.
Strengths: * The dual-process idea is neat and thought-provoking. It equips the autonomous driving system with the ability to learn from past experiences.
* The method achieves stronger performance than state-of-the-art methods in CARLA closed-loop simulation.
* This paper is well-written and provides sufficient details for reproducing their approach.
Weaknesses: * The performance improvement is not very significant compared to the baseline.
Technical Quality: 4
Clarity: 4
Questions for Authors: N/A
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thanks a lot for your acknowledgement, and we appreciate the time and effort you dedicated to enhancing the quality and clarity of our manuscript.
**Q:** The performance improvement is not very significant compared to the baseline.
**A:** Thanks for your feedback. As you mentioned in summary, LeapAD proposes a new paradigm for autonomous driving that addresses the limitations of current data-driven methods in complex scenarios. We focus on verifying the superiority of this dual-process system. At the same time, we also mentioned in the limitations section that we are actively researching several areas, such as integrating time inputs, enabling VLM to participate in reflection processes, and developing real-world simulations to further improve the performance of our system.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your response. I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer zRJR:
We sincerely thank you for your valuable feedback and for recognizing our work.
Your time and thorough review of our work are greatly appreciated. | Summary: This paper introduces a paradigm to design an annotation-efficient end-to-end autonomous driving system that harnesses the power and generalizability of open-source LLM models. It proves that critical frame/instance selection are critical to a decision-making module training. This method is evaluated by closed-loop testing in CARLA and achieves the SOTA performance among camera-based methods.
Strengths: 1. The core idea is straightforward.
2. Achieves the SOTA result.
3. Conducts adequate ablation studies to support its claims.
Weaknesses: 1. No quantitative benchmark on its VLM module on simulation and the real world. Only some samples are listed in the paper.
2. The paper only presents an overall benchmark on the system but no failure case analysis.
3. The result relies on the foundation model performance and the paper does not show a way to fill the gap between the simulation and the real world, which limits its impact.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Why decouple into 2 separated modules, scene understanding, and decision making?
2. The scene understanding section mentions that the motion direction is one of the outputs. However, since the input sensor data is single-frame based, how does the model know the motion direction?
3. It is not clear how the interaction with GPT4 completes in the reflection mechanism. It would be better to provide more details.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thank you for your constructive comments. We will discuss and explain your concerns as follows.
**Q1:** No quantitative benchmark on its VLM module on simulation and the real world. Only some samples are listed in the paper.
**A1**: Thank you for your valuable suggestions. We have added evaluation results of the VLMs on the Rank2Tell (real) and CARLA (simulated) datasets. We evaluated the performance of Qwen-VL-7B on these two datasets, and also evaluated LLaVA-1.5-7B for comparison. We provide Grounded scores, including precision, recall, and F1 score, to assess the models' grounding performance, as well as Chat scores, including a language score (ROUGE) and a GPT score (GPT-4-turbo), to evaluate the models' reasoning and question-answering capabilities. Our experiments show that Qwen-VL-7B demonstrates stronger grounding abilities.
(1) Evaluations on the collected CARLA data.
| VLMs| Precision | Recall | F1 Score | ROUGE | GPT Score |
| --- | --- | --- | --- | --- | --- |
| LLaVA-1.5-7B | 34.95 | 32.00 | 33.41 | 78.04 | 58.09 |
| Qwen-VL-7B | 51.41 | 47.14 | 49.18 | 83.24 | 63.01 |
(2) Evaluations on the Rank2Tell test dataset.
| VLMs | Precision | Recall | F1 Score | ROUGE | GPT Score |
| --- | --- | --- | --- | --- | --- |
|LLaVA-1.5-7B| 28.49 | 25.22 | 26.75 | 69.26 | 65.20 |
| Qwen-VL-7B | 46.70 | 37.37 | 41.52 | 70.23 | 66.59 |
**Q2:** The paper only presents an overall benchmark on the system but no failure case analysis.
**A2:** Thanks for your advice. We have included two typical failure cases in the uploaded PDF.
(1) “Run a red light”, as shown in Figure 1 in the uploaded PDF. In this scenario, the system lacks temporal information regarding the yellow light's remaining duration, making it difficult to determine whether to accelerate through or stop. When the light is yellow, the system cautiously issues a “DC” command, causing the vehicle to cross the stop line slowly. When the light turns red, CARLA interprets this as running a red light, even though a “STOP” command was issued at that moment.
(2) “Collision”, as shown in Figure 2 in the uploaded PDF. In this case, the VLM did not detect the car at the left rear edge of the frame due to the camera's limited field of view. Furthermore, in the CARLA setting, other vehicles do not proactively yield to the ego vehicle, which can lead to collisions caused by other vehicles.
**Q3:** The result relies on the foundation model performance and the paper does not show a way to fill the gap between the simulation and the real world, which limits its impact.
**A3:** We use the VLM to observe and interpret the driving environment and provide scene descriptions. The LLM then performs driving reasoning and makes decisions based on these descriptions. Consequently, the domain gap between simulated and real-world scenarios primarily affects the scene descriptions generated by the VLM. Notably, the VLM demonstrates strong generalization capabilities. The data used for fine-tuning includes both simulated and real-world data, which enables our VLM to generate accurate scene descriptions in both contexts. This is illustrated by the cases shown in Figures 6 and 7 of the paper, and further supported by the quantitative experiments (A1) we have included.
While there is an inherent gap between simulation experiments and real-world closed-loop scenarios, we believe the main limitation is the current lack of a high-fidelity simulator in the industry, as highlighted in the limitations.
**Q4:** Why decouple into two separate modules, scene understanding and decision making?
**A4:** We have divided the system into separate modules for the following reasons:
(1) The modular design enables easy replacement and upgrading of individual components, particularly the scene understanding module and the Analytic Process.
(2) We adopt the scene understanding module to generate scene descriptions that effectively encode the environment. The dual-process decision-making module generates the driving reasoning and decision. These historical scene descriptions and reasoning can be conveniently encoded into a vector database (memory bank) for rapid retrieval of similar scenes and to guide the Heuristic Process in making accurate decisions through a few-shot approach.
(3) As noted in the limitations section of our paper, the VLM’s inability to participate in the reflection mechanism hinders further system improvements. Addressing this limitation will be a key focus for future development.
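As a toy sketch of the memory-bank mechanism in point (2) — not our actual implementation; the bag-of-words `embed` is only a stand-in for a learned sentence encoder, and all names and example scenes below are hypothetical:

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic bag-of-words embedding. A real system would use a
    learned sentence encoder; this is only a stand-in for illustration."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

class MemoryBank:
    """Stores (scene description, reasoning) pairs and retrieves the most
    similar past scenes by cosine similarity, for few-shot prompting."""
    def __init__(self):
        self.keys, self.records = [], []

    def add(self, description: str, reasoning: str):
        self.keys.append(embed(description))
        self.records.append((description, reasoning))

    def retrieve(self, description: str, k: int = 2):
        sims = np.stack(self.keys) @ embed(description)  # cosine similarities
        top = np.argsort(sims)[::-1][:k]
        return [self.records[i] for i in top]

bank = MemoryBank()
bank.add("a cyclist crosses from the right at an intersection",
         "slow down and prepare to stop")
bank.add("clear straight road no nearby vehicles",
         "keep lane and maintain speed")
# A new scene retrieves the most similar stored experience as a few-shot hint.
examples = bank.retrieve("a pedestrian crosses at an intersection", k=1)
```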
**Q5:** Since the input sensor data is single-frame based, how does the model know the motion direction?
**A5:** Indeed, accurately assessing vehicle motion based on a single image alone is challenging. Interestingly, we have found that, due to the remarkable generalization abilities of large vision language models, it is possible to infer the motion direction of vehicles simply based on the vehicle’s heading. As noted in the limitations section of our paper, the current version relies solely on single-frame input and lacks temporal information. Incorporating temporal cues to more accurately assess the motion of surrounding objects will be a focus for future work.
**Q6**: It is not clear how the interaction with GPT-4 proceeds in the reflection mechanism. It would be better to provide more details.
**A6**: In the appendix (Figure 16), we provide a detailed example of the reflection mechanism and explain it thoroughly in Section F. Specifically, when a traffic incident occurs during the Heuristic Process, the reflection mechanism is triggered. During reflection, information from historical frames is fed into the Analytic Process to identify and correct any potentially erroneous reasoning decisions. These corrections are then added to the memory bank to further enhance the accuracy of the Heuristic Process. For more detailed information, please refer to the relevant sections in the appendix.
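A minimal control-flow sketch of this loop, with a stand-in callable in place of the GPT-4-powered Analytic Process (all names and records below are hypothetical):

```python
from typing import Callable, List

def reflection_loop(history: List[dict],
                    analytic_process: Callable[[List[dict]], dict],
                    memory_bank: list) -> dict:
    """Triggered after a traffic incident: replay the recent frames through
    the slower Analytic Process to locate the faulty reasoning, then store
    the correction so the Heuristic Process can retrieve it few-shot."""
    correction = analytic_process(history)  # identify the erroneous decision
    memory_bank.append(correction)          # reuse it in similar future scenes
    return correction

memory: list = []
frames = [{"scene": "yellow light ahead", "decision": "DC"},
          {"scene": "red light, past stop line", "decision": "STOP"}]
# Stand-in analytic process: corrects the first frame's decision.
fix = reflection_loop(
    frames,
    lambda h: {"scene": h[0]["scene"], "decision": "STOP",
               "reason": "insufficient time to clear the intersection"},
    memory)
```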
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer eSfn:
We sincerely thank you for your valuable feedback and for acknowledging our efforts.
We appreciate the time and effort you spent reviewing our work. | Summary: The paper "LeapAD" introduces a new approach to autonomous driving that addresses key challenges in adaptability and interpretability. It draws inspiration from human cognition to enhance decision-making processes in complex environments.
The system incorporates two complementary processes:
- Analytic Process: Provides thorough analysis and reasoning, accumulating driving experience through logical reasoning.
- Heuristic Process: Employs swift, empirical processing and learns from the Analytic Process through supervised fine-tuning. This dual-process setup enhances adaptability and performance.
Closed loop testing in the CARLA simulator demonstrates that LeapAD outperforms methods relying solely on camera input. The Heuristic Process can inherit knowledge from an Analytic Process powered by GPT-4, leading to continuous performance improvements as the memory bank expands.
Strengths: 1. The paper is generally well-written and easy to follow. Good motivation for the model design in the introduction.
2. I like the problem setup: how can we design AV systems that continually learn from their mistakes.
3. The experimental results seem to support the authors' claims.
Weaknesses: I overall liked the idea of a closed-loop autonomous driving approach that could emulate the critical attention mechanisms required for smooth driving in safety-critical scenarios. The notion of heuristic and analytical processes for executing actions in robotics seems a novel approach.
However, my primary concern lies in the setup of data and models for generating scene descriptions as text to identify critical objects. Operating within the text domain, which requires subsequent interpretation and tokenization by the analytical and heuristic modules, seems less efficient than using a direct vectorized representation. For instance, representing an object with parameters such as {v = 0.2 m/s, s = 3m, class = Car} is likely more efficient and robust than the text output "The car is 3 m away from the ego vehicle and is moving at 0.2 m/s." This textual method could lead to inefficiencies, especially in scenarios with multiple dynamic actors.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The authors should detail the data generation process for complex driving scenarios like intersections, lane changes, and overtaking. Based on my understanding, the current model primarily focuses on simpler scenarios involving a single-lane and limited interaction with other actors.
- I recommend evaluations in more dynamic settings such as intersections and scenarios involving lane changes and overtaking, where multiple actors interact and cooperate for safety.
- A comparison with traditional vectorized or feature-based planning systems, such as Wayformer or Precog, would be beneficial. These systems process scenes as images and convert data into vectors instead of text, which might offer insights into efficiency and performance.
- I see DriveLM also has good performance in the case of VLM-based driving. Is there any reason why that has not been put as a baseline, considering the similarity in dataset generation processes?
I look forward to seeing how these suggestions might be incorporated to further enhance the robustness and applicability of the proposed approach in more complex driving scenarios.
Ref:
- http://arxiv.org/abs/2207.05844
- https://openaccess.thecvf.com/content_ICCV_2019/papers/Rhinehart_PRECOG_PREdiction_Conditioned_on_Goals_in_Visual_Multi-Agent_Settings_ICCV_2019_paper.pdf
- https://arxiv.org/abs/2312.14150
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thank you for your constructive comments. We provide discussions and explanations about your concerns as follows.
**Q1:** My primary concern lies in the setup of data and models for generating scene descriptions into text to identify critical objects. Operating within the text domain, which requires subsequent interpretation and tokenization by the analytical and heuristic modules, seems less efficient than using a direct vectorized representation.
**A1**: Indeed, adopting vectorized representations can significantly compress data, thereby accelerating inference speed. However, by logically integrating target attributes (such as orientation, distance, and category) into coherent natural language statements, we not only clarify the meaning of these physical quantities but also better align with the data distribution of pre-trained large language models. This enables us to leverage the embedded world knowledge in these models to enhance their understanding of autonomous driving environments while maintaining their generalization capabilities.
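To make the contrast concrete, here is a minimal sketch of the rendering step we describe, turning a vectorized object record into a natural-language statement (the attribute names and template are hypothetical, not our actual prompt format):

```python
# Toy rendering of a vectorized record like {v, s, cls} into the
# natural-language form an LLM has seen during pre-training.
def describe(obj: dict) -> str:
    return (f"The {obj['cls']} is {obj['s']} m away from the ego vehicle "
            f"and is moving at {obj['v']} m/s.")

sentence = describe({"v": 0.2, "s": 3, "cls": "Car"})
assert sentence == ("The Car is 3 m away from the ego vehicle "
                    "and is moving at 0.2 m/s.")
```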
It is also worth noting that, as shown in Table 1 of our paper, compared to traditional vectorized or feature-based methods such as VAD, InterFuser, and TransFuser, the amount of labeled data we use is typically one to two orders of magnitude smaller. Furthermore, our dual-process decision-making process requires no human intervention and is capable of continuous improvement, clearly demonstrating the effectiveness of knowledge-driven methods.
**Q2**: Evaluations in more dynamic settings such as intersections and scenarios involving lane changes and overtaking, where multiple actors interact and cooperate for safety.
**A2**: In fact, the closed-loop benchmark in CARLA Town05 involves numerous dynamic settings, such as intersections, traffic lights, STOP signs, and sudden appearances of pedestrians or cyclists from the side. Our method has achieved strong performance across these diverse scenarios, demonstrating its adaptability to dynamic environments. Additionally, it is important to emphasize that our approach can handle complex situations such as intersections and some corner cases. As shown in Figures 13 and 14 in the appendix, our method is able to handle intersections with multiple actors (vehicles and traffic lights) interacting (Figure 13), and react appropriately to unexpected events, such as a cyclist suddenly appearing from the side (Figure 14) by making timely decisions like slowing down.
Moreover, the project page linked in our paper's abstract includes several demos that further illustrate our method's adaptability in dynamic environments. For example, the first video shows our method slowing down and coming to a stop when a cyclist suddenly appears at an intersection between the 5-15 second and 35-40 second marks. The second video, between the 25-45 second marks, showcases our method navigating a complex intersection with multiple participants, and the third video demonstrates our approach stopping briefly at an intersection with a STOP sign.
**Q3:** A comparison with traditional vectorized or feature-based planning systems, such as Wayformer or Precog, would be beneficial. These systems process scenes as images and convert data into vectors instead of text, which might offer insights into efficiency and performance.
**A3:** It is important to note that our primary focus is on exploring continuous learning in closed-loop autonomous driving based on a dual-process approach, using image inputs and a knowledge-driven method. Unlike our approach, Wayformer does not use image inputs but rather relies on ground truth sparse abstract state descriptions of the world, while Precog requires LiDAR input. Both of these methods are evaluated in an open-loop setting, which has certain inherent limitations: (1) the current decisions do not influence subsequent navigation, making it impossible to assess cumulative errors; (2) there is no interaction between agents, leading to a lack of dynamic behavior; and (3) there is no global closed-loop evaluation metric. In fact, we have already compared our method with traditional vectorized or feature-based systems. As shown in Table 1, methods like InterFuser, TransFuser, and VAD use neural networks to represent scenes as implicit vectors. Our method outperforms most of these approaches while only requiring one to two orders of magnitude less labeled data, and it also has the capability for continuous learning. This further demonstrates the effectiveness of leveraging knowledge from large models.
**Q4:** I see DriveLM also has good performance in the case of VLM based driving. Is there any reason why that has not been put as a baseline, considering the similarity in dataset generation processes.
**A4:** Our LeapAD features a dual-process approach for closed-loop autonomous driving, whereas DriveLM only reports open-loop performance without providing closed-loop metrics. Additionally, DriveLM has not released the CARLA dataset, the fine-tuned network weights on CARLA, or the inference code on CARLA, making it difficult to conduct closed-loop experiments using DriveLM. Furthermore, DriveLM is based on the BLIP-2 architecture, while Qwen-VL, which we use, supports higher resolutions and has demonstrated better grounding and vision-language understanding capabilities across multiple benchmarks, even with a similar parameter count. Finally, the output from the original DriveLM is quite verbose. For our dual-process decision method, the scene understanding-related data in DriveLM is more important. Therefore, we refined DriveLM's data and combined it with Rank2Tell and our collected CARLA data to fine-tune Qwen-VL as our scene understanding module.
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal response!!
Comment: I would like to keep my score.
This is where I disagree. In the case of, say, 5 vehicles and 5 pedestrians, the text generation part will be noisy and inaccurate. In this work, the model gets good results because, in most of the test cases, you only have traffic signs that are not cluttered and 1-2 actors. So, describing the situation in text is very easy compared to dense urban driving. Textual context learning there will do poorly compared to vectorized methods.
The authors have also not responded to my question 1, where I requested details on the data generation process for complex driving scenarios like intersections, lane changes, and overtaking. This would have cleared my doubts about whether this method will work in urban driving.
---
Rebuttal 2:
Title: Thank you for your response!!
Comment: Dear Reviewer:
Thank you for your response.
Regarding the text generation for complex scenarios you mentioned:
- First, we also recognize the complexity of driving scenarios. Therefore, as emphasized in our paper, we focus on **describing only the key objects** that may influence driving decisions, reducing complexity and enhancing the efficiency of subsequent decision-making. In practice, we only need to consider those traffic participants who might interact with the ego car, such as nearby vehicles or pedestrians who may cross the road. For example, in the case of, say, 5 vehicles and 5 pedestrians, only one or two of each are actually important for driving decision-making.
- Secondly, we provide **textual descriptions with grounding boxes** for each key target to further identify and differentiate these traffic participants; please refer to Section 4.1 and the case study for more details.
- Finally, as shown in our experiments—specifically in Figure 8 of our paper—our method effectively grounds these critical traffic participants, **even in complex scenarios with many people (>5)**.
Regarding the data generation process in question 1, we provided a detailed explanation in Section 4.1 of the paper, as well as in Appendices B and C. In fact, the data generation process is applicable across various complex scenarios. For scene understanding, we primarily describe the semantic attributes, spatial motion properties, and behavioral reasoning of key targets. Please refer to Figure 9 for more details. For more prompt details for the decision module, please refer to Figures 10 and 11. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you very much for taking the time to review this manuscript and for helping to improve our work. I greatly appreciate all your comments and suggestions. Please find my detailed responses below.
As suggested by Reviewer eSfn (Q2), we have included visualizations of the failure cases in the uploaded document. Additionally, due to the page limitation, please refer to A2 for detailed failure case analyses.
Pdf: /pdf/eb9f56e36b8807c2912d4b9bdcfc4da8f8cb2482.pdf | NeurIPS_2024_submissions_huggingface | 2,024 |
MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts | Accept (poster) | Summary: The paper introduces MomentumSMoE, a novel integration of heavy-ball momentum into Sparse Mixture of Experts (SMoE) to enhance stability and robustness. It establishes a connection between SMoE and gradient descent on multi-objective optimization problems.
The paper demonstrates theoretical and empirical improvements of MomentumSMoE over standard SMoE across various tasks. The method is universally applicable to many SMoE models, including V-MoE and GLaM, with minimal additional computational cost.
Strengths: To the best of my knowledge, attempting to accelerate the fixed point iteration in SMoE is an original idea.
It seems like there is comprehensive empirical evidence for the method, but I am not an expert on metrics for the SMoE, and will have to rely on other reviews to be confident in this strength.
The paper is fairly clear, with well-organized sections and figures.
Weaknesses: My largest negative for this paper is the largely unfounded connection between the SMoE and gradient descent. If the authors had made a connection to accelerating fixed-point iterations in general, I would want to accept this paper. Essentially, the authors are assuming that $\nabla_x f$ has strictly real eigenvalues when they should just work with the true, potentially complex, eigenvalues, ex., using tools as in Azizian et. al. For example, when performing this analysis, various other acceleration schemes are often better, like negative momentum (Gidel et. al.) or complex momentum (Lorraine et. al.). I would be curious to see some empirical investigation (or theoretical) of what the eigenvalues of $\nabla_x f$ are – ex., as in Figure 7 of https://arxiv.org/pdf/2102.08431 -- to validate any theoretical claims about what acceleration schemes should be used.
But, of course, the spectrum is only known in small-scale problems, leading to the second weakness, which is that some of the methods – ex., RobustSMoE – seem to rely on knowing the spectrum to set various parameters, which we won’t have access to in real settings.
The theoretical results are also largely just reproductions of known theoretical results for momentum once you assume that the update from the SMoE is a gradient. This makes them not much of a contribution from my point of view other than leveraging existing tools. I think these results could be easily substituted for analogous techniques from Azizian.
Azizian, Waïss, et al. "Accelerating smooth games by manipulating spectral shapes." International Conference on Artificial Intelligence and Statistics. PMLR, 2020.
Lorraine, Jonathan P., et al. "Complex momentum for optimization in games." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
Gidel, Gauthier, et al. "Negative momentum for improved game dynamics." The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.
Technical Quality: 3
Clarity: 3
Questions for Authors: How can the assumptions about the fixed point operator's spectrum and the Jacobian's conservativeness be validated or relaxed in practical scenarios?
Are there more general acceleration tools than momentum you might want to use for this problem?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The limitations are discussed briefly, but a delineated section elaborating on all the limitations would be valuable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Unfounded connection between the SMoE and gradient descent. Connection to accelerating fixed-point iterations. The authors should work with complex eigenvalues [Azizian et. al.] using negative (Gidel et. al.) or complex momentum (Lorraine et. al.). Empirical (or theoretical) investigation and an analysis of the eigenvalues of $\nabla_x f$ is needed.**
**Answer:** We respectfully disagree with the reviewer’s comment that the connection between SMoE and gradient descent (GD) is largely unfounded. In Section 2.3, from line 128 to line 144, we discuss the empirical evidence (Figs. 2, 3) for the connection between SMoE and GD.
GD updates can be considered as fixed-point iteration: at optimality where the gradient is 0, GD finds a fixed-point solution. Thus, techniques in accelerating fixed-point iterations can be used to find the equilibrium point of SMoE. Following the reviewer's suggestion, we have conducted additional experiments that incorporate negative momentum (Gidel, 2019) and complex momentum (Lorraine, 2022) into SMoE using our momentum-based design framework. Table 2 in the attached PDF compares the PPL of NegativeMomentumSMoE and ComplexMomentumSMoE with MomentumSMoE, AdamSMoE, and SMoE on the clean/attacked WikiText-103. Our AdamSMoE achieves the best PPL. ComplexMomentumSMoE obtains slightly better PPL than MomentumSMoE (positive momentum coefficient), while both of these methods significantly outperform NegativeMomentumSMoE and the SMoE baseline.
Furthermore, when analyzing the eigenvalues of $\nabla_x f$, where $f$ is the output of an SMoE layer in the trained SMoE above, we observe that the mean absolute real and imaginary parts of the eigenvalues are 4.06 and 0.34, averaged over all SMoE layers in the model, respectively. (Gidel, 2019) and (Lorraine, 2022) suggest that positive momentum converges when the eigenvalues are real, while negative momentum has better convergence when the eigenvalues have large imaginary parts. Also, the convergence of the complex momentum method is robust to the wider range of complex eigenvalues, including purely real and imaginary ones. Since the imaginary part of the eigenvalues of $\nabla_x f$ is small compared to their real part (0.34 vs. 4.06), NegativeMomentumSMoE should not work well, and MomentumSMoE with positive momentum should be a better design choice. Since ComplexMomentumSMoE can handle a wider range of complex eigenvalues, it helps improve the performance of MomentumSMoE. However, due to the small imaginary part of eigenvalues of $\nabla_x f$, the improvement is small. These explain our results in Table 2 in the attached PDF and we also provide a plot of the spectrum in Figure 1.
We would like to thank the reviewer for your suggestion on negative and complex momentum. The significant improvement of MomentumSMoE, AdamSMoE, and ComplexMomentumSMoE over the SMoE baseline further verifies the power and promise of the momentum-based framework for designing SMoE. Moreover, given our framework, analyzing the eigenvalues of $\nabla_x f$ offers a principled way to select the right momentum method for SMoE design.
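As a self-contained toy illustration of why momentum accelerates such fixed-point iterations — using a fixed linear map with a purely real, positive spectrum as a stand-in for a learned SMoE layer, not our actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.standard_normal((20, 20)))[0]  # random orthogonal basis
m, L = 0.1, 4.0                                     # smallest / largest eigenvalue
A = Q @ np.diag(np.linspace(m, L, 20)) @ Q.T        # SPD: purely real spectrum

def steps_to_converge(gamma, mu, tol=1e-8, max_steps=10000):
    """Heavy-ball iteration x_{k+1} = x_k - gamma*(A @ x_k) + mu*(x_k - x_prev).
    Setting mu = 0 recovers the plain, momentum-free update; the layer update
    -A @ x behaves like the gradient of the quadratic (1/2) x^T A x."""
    x_prev = x = np.ones(20)
    for k in range(max_steps):
        x_next = x - gamma * (A @ x) + mu * (x - x_prev)
        if np.linalg.norm(x_next) < tol:            # fixed point is x* = 0
            return k + 1
        x_prev, x = x, x_next
    return max_steps

plain = steps_to_converge(gamma=2 / (m + L), mu=0.0)  # optimal plain step size
heavy = steps_to_converge(
    gamma=4 / (np.sqrt(L) + np.sqrt(m)) ** 2,         # classical heavy-ball tuning
    mu=((np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))) ** 2)
assert heavy < plain  # momentum reaches the fixed point in fewer iterations
```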
**Q2. Methods like RobustSMoE seem to rely on knowing the spectrum to set various parameters, which we won’t have access in real settings.**
**Validating/Relaxing assumptions about the fixed point operator's spectrum and the Jacobian's conservativeness**
**Answer:** The motivation behind the design of complex momentum is to work over a large range of eigenvalues, as it is common that, in practice, we do not know the spectrum [Lorraine, 2022]. Similarly, for MomentumSMoE and AdamSMoE, we need to tune the hyperparameters, i.e., $\mu$, $\gamma$, as well as $\beta$ in AdamSMoE, in order to achieve convergence.
In contrast, the Robust Momentum Method in [Cyrus, 2018] that we use to develop Robust MomentumSMoE **does not require the spectrum information**, and the convergence of the algorithm **does not depend on the spectrum values**. This Robust Momentum Method can be considered as a Lur’e feedback control system, and the hyperparameters $\gamma$, $\mu$, and $\alpha$ (see Eqn. (18), our manuscript) are designed to push the stability boundary into the negative real axis. This allows an additional margin for the stability conditions of the system to hold.
**Q3. The novelty of theoretical results?**
**Answer:** As mentioned in line 156 in our manuscript, we are inspired by the techniques in [Qian, 1999] and adapt these techniques to prove the convergence and stability results of Momentum SMoE (Proposition 1 and Corollary 1). The techniques we use are not new, and other techniques, such as those in [Azizian, 2020] or [Wilson, 2021], can be leveraged to prove similar results. Our objective is to provide theoretical guarantees on the convergence and stability of MomentumSMoE to prove that MomentumSMoE is more stable than the SMoE baseline, which justifies the empirical advantages of MomentumSMoE over SMoE shown in our manuscript.
**Q4. More general acceleration tools than momentum?**
**Answer:** Momentum methods, such as Nesterov’s accelerated gradient method, are simple approximations of the proximal point method [Ahn, 2022]. For future work, we can employ the proximal point method to develop better SMoE models. In our manuscript, we also explore the sharpness-aware minimization (SAM) [Roulet, 2017] approach to enhance the robustness of SMoE (lines 805 - 821 in Appendix E.4).
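For reference, the two updates being related here, in their standard generic forms (with proximal step size $\lambda$, learning rate $\gamma$, and momentum coefficient $\mu$; notation not tied to the manuscript):

```latex
% Proximal point step (implicit; momentum methods arise as simple
% approximations of this update, per [Ahn, 2022]):
x_{k+1} \;=\; \operatorname*{arg\,min}_{x}\Big\{\, f(x) \;+\; \tfrac{1}{2\lambda}\,\lVert x - x_k \rVert^2 \Big\}

% Heavy-ball momentum step (explicit):
x_{k+1} \;=\; x_k \;-\; \gamma\,\nabla f(x_k) \;+\; \mu\,(x_k - x_{k-1})
```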
**References**
O'Donoghue, B., et al. Adaptive restart for accelerated gradient schemes. 2015.
Lucas, J., et al. Aggregated momentum. 2018.
Cyrus, S., et al. A robust accelerated optimization algorithm for strongly convex functions. 2018.
Qian, N. On the momentum term in gradient descent learning algorithms. 1999.
Wilson, A. C., et al. A Lyapunov analysis of accelerated methods in optimization. 2021.
Ahn, K., et al. Understanding Nesterov's acceleration via proximal point method. 2022.
Roulet, V., et al. Sharpness, restart and acceleration. 2017.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I appreciate the author's response and results, which ameliorate my main concerns. As such, I am revising the rating to be higher. Here are some elaborations on the response:
Figure 1 in the response really strengthens the paper, and it should be referenced in the Section “Empirical Evidence for the Gradient Descent Analogy of (S)MoE.” However, I think it's quite important that the writing clearly emphasizes that the MoE is not doing gradient descent on a single objective function since you observed complex eigenvalues. Instead, the MoE is “close to doing gradient descent on an objective function”, in a spectral sense. Further, all subsequent theories should be noted as a simplified heuristic justification for your method, as the theory requires assuming a conservative, real-valued vector field. I believe the theory could be generalized to work with complex eigenvalues, and I do not think it is required from this work to generalize the theory. Still, the gap between theory and practice should be clearly documented for readers.
I believe the comparison with other fixed-point acceleration schemes also enhances the paper.
Also, regarding robust momentum, introducing an extra hyperparameter -- even randomly -- often leads to some improvement from tuning the new parameter. What we need to do to show this is useful is some exploration showing that doing your search in this new 3-parameter space is better than optimizing the 2-parameter space – ex., finding better values when doing a grid search of N queries in 3 dimensions instead of 2. Or, you could compare to other ways to introduce a third hyperparameter, and show you are better. However, I don’t think this is essential for publication, as the robust momentum is not the key contribution. Moreover, you note that various momentum flavors can be projected to this setting.
It seems infeasible for NeurIPS, but it would improve the paper to rephrase the momentum theory to work with spectrums that are bound to be “near” the real axis in some sense. I think it's likely you could reuse theory from papers on fixed-point acceleration.
For Figure 1, shouldn’t the spectrum be symmetric? If I have an eigenvalue at “x + iy”, then I must have an eigenvalue at “x - iy”, since the eigenvalues of a real matrix come in conjugate pairs. Since your plot is not symmetric, what is being shown? Are you only plotting the positive roots?
Also, this does not affect my review, but I think is an interesting avenue for future exploration. For your Figure 1 with the spectrum exploration, it is interesting that the spectrum is almost all positive and closely clustered to the real axis. Why is there one eigenvalue with a negative real part? A better understanding of the shape of spectrums you encounter allows you to select optimal acceleration schemes. Does this spectral shape hold for various MoE models? Does the spectral shape change during training?
Also, there is a single eigenspace with a negative real part—this seems notable. Does this eigenspace correspond to anything interpretable? In what eigenspaces do the parameters accumulate as the iterations progress? Answering these may lead to insights, allowing you to design better fixed-point acceleration schemes.
---
Rebuttal 2:
Title: Thanks for your endorsement!
Comment: We would like to thank the reviewer for your further feedback, and we appreciate your endorsement.
We will include Figure 1 in the Section "Empirical Evidence for the Gradient Descent Analogy of (S)MoE" in our revision. Following your suggestion, we will revise our manuscript to clarify that MoE is “close to doing gradient descent on an objective function in a spectral sense” and update our stability analysis of MomentumSMoE (Section 3 in our manuscript) accordingly. We will also discuss the extension from our momentum-based framework for SMoE to accelerating fixed-point iterations and include the experiments with the ComplexMomentumSMoE and NegativeMomentumSMoE in our revision.
Furthermore, we agree with the reviewer that introducing a hyperparameter into the model usually improves its performance and that a thorough comparison will help further validate our experimental results. As you mentioned, robust momentum is not our main contribution, so we leave this for future work. However, we would like to point out that when tuning the hyperparameter in RobustMomentumSMoE, we are tuning the model on clean ImageNet-1K data, where there is a slight improvement, while on perturbed datasets such as ImageNet-A/R/C there is a significant increase in accuracy. These results lead us to believe that the model's enhanced performance is not entirely due to the introduction of an extra parameter, but rather the design of robust momentum itself.
In addition, further investigation of acceleration methods that consider different spectrums is an interesting direction that we will continue to explore. We believe that our momentum-based framework can be extended to these methods, as well as their theories, and will benefit from them.
For Figure 1 in our 1-page attached PDF, the spectrum is not symmetric due to the presence of purely real eigenvalues. We also take the average over the layers in the SMoE, causing the spectrum to skew upwards in our plot.
Once again, we appreciate your suggestions and feedback which help enhance the quality of our paper and introduce exciting new directions for further development of our MomentumSMoE.
---
Rebuttal Comment 2.1:
Comment: "For Figure 1 in our 1-page attached PDF, the spectrum is not symmetric due to the presence of purely real eigenvalues. We also take the average over the layers in the SMoE, causing the spectrum to skew upwards in our plot."
In your plot, there are complex eigenvalues -- i.e., eigenvalues with a non-zero imaginary part -- which do not have their conjugate plotted. Why is this occurring? It would not be caused by real eigenvalues, as every complex eigenvalue will still have its conjugate. I also don't understand what you mean by taking the average over layers. You are computing the eigenvalues of a real nxn matrix, which always satisfies the property of having complex eigenvalues occur in conjugate pairs.
---
Reply to Comment 2.1.1:
Title: Clarification on Figure 1 in Our Attached PDF
Comment: We are sorry for this misunderstanding. Please let us clarify Figure 1 here. By taking the average over the layers, we mean that in a medium SMoE model, there are 6 layers and 16 experts. Each of these will correspond to 1 real nxn matrix. We find the eigenvalues of each of these matrices, which are either real numbers or conjugate pairs. We find the modulus and phase of each eigenvalue and average over the 6 layers and 16 experts. Positive real eigenvalues in the spectrum result in phases of $0$ while negative real eigenvalues in the spectrum result in phases of $\pi$. Consequently, averaging these $0$ and $\pi$ phases with the phases of complex eigenvalues causes the plot to be skewed upwards.
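A minimal numerical illustration of this skew, with hypothetical eigenvalues: `np.angle` assigns a conjugate pair the phases $\pm\theta$, but a negative real eigenvalue the phase $+\pi$ (never $-\pi$), so averaged phases are biased upward.

```python
import numpy as np

# A conjugate pair plus one negative real eigenvalue, as stand-ins for a
# spectrum like those averaged in Figure 1 (hypothetical values).
eigs = np.array([2 + 1j, 2 - 1j, -0.5 + 0j])

phases = np.angle(eigs)   # [+0.4636, -0.4636, +3.1416]
moduli = np.abs(eigs)     # [ 2.2361,  2.2361,  0.5   ]

# The pair's phases cancel, but the negative real eigenvalue contributes
# +pi, so the mean phase is strictly positive: the plot skews upward.
assert np.isclose(phases[0], -phases[1])
assert np.isclose(phases[2], np.pi)
assert phases.mean() > 0
```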
We have tried our best to reproduce Figure 7 in [Lorraine, 2022] in the context of SMoE. We would appreciate your suggestions to better make this figure.
**Reference**
Lorraine, Jonathan P., David Acuna, Paul Vicol, and David Duvenaud. "Complex momentum for optimization in games." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
---
Rebuttal 3:
Comment: The goal is to visualize the eigenvalues of $\nabla_x f$, and perhaps view how it changes as "optimization progresses". I think this corresponds to taking the eigenvalues of the matrix, which is an average of the matrices for each expert, and the "optimization step" is like the layer. So the "averaging" would be done for the matrices for the experts themselves and not the eigenvalues if I understand correctly. Further, plotting the eigenvalues' union would be a more typical visualization strategy than averaging if there are multiple distinct matrices, as getting rid of the conjugate structure is confusing to me. I think a union over the different layers may make sense, or as separate plots.
You might also find it easier to visualize as a heatmap than a scatterplot. If you wanted to see a more granular structure, you could play with different ways to plot it, such as a different figure/color for each layer/expert. For instance, Figure 9 in [Lorraine, 2022] shows the spectrum at the start and the end of training.
---
Rebuttal Comment 3.1:
Title: Reply to Reviewer GFpG
Comment: Thanks for your prompt response. We appreciate your detailed and valuable suggestions to better visualize the eigenvalues of $\nabla_{x}f$. If you allow us to include an anonymous link that includes additional figures in our reply, we will be very happy to make the figures you suggested and present them in this discussion. We will also include those figures in our revision.
---
Rebuttal 4:
Comment: On the point of averaging the matrices of each expert, we agree that this would be a better way to visualize the eigenvalues of $\nabla_{x}f$ than averaging the eigenvalues. However, as only 2 experts are chosen at each layer for each data point (i.e., our model implements top-2 SMoE), and further, a convex combination of their matrices is applied, where the coefficients of this convex combination are the corresponding affinity (gate) scores, there are many possible combinations. How would you best like to visualize them? Would it make sense to you to take the 2 experts chosen most frequently during training/inference time and plot the spectrum of their convex combination?
For example, at layer $i$, $i=1,...,6$, we have 16 expert matrices $A_{i_1}, A_{i_2}, ..., A_{i_{16}}$, and the top 2 most frequently chosen matrices are $A_{i_j}$ and $A_{i_k}$. Then, for $\lambda \in [0,1]$ decided by the affinity (gate) scores, we plan to compute the convex combination $A_i = \lambda A_{i_j} + (1-\lambda) A_{i_k}$ and plot the spectrum of $A_i$ in each layer.
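This proposed plot can be sketched numerically; the expert matrices and the gate-score weight below are random/illustrative stand-ins, not the trained model's values:

```python
import numpy as np

# Sketch of the proposed visualization: spectrum of the convex combination
# of the two most frequently chosen expert matrices at one layer.
rng = np.random.default_rng(1)
A_j = rng.standard_normal((8, 8))     # stand-in for expert matrix A_{i_j}
A_k = rng.standard_normal((8, 8))     # stand-in for expert matrix A_{i_k}
lam = 0.7                             # stand-in gate-score weight in [0, 1]

A_i = lam * A_j + (1 - lam) * A_k     # convex combination at layer i
spectrum = np.linalg.eigvals(A_i)
moduli, phases = np.abs(spectrum), np.angle(spectrum)
```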
We would be willing to provide you with the eigenvalues and a snippet of Python code that can be easily run in a Jupyter notebook for visualization.
---
Rebuttal 5:
Comment: The goal is to visualize the eigenvalues of $\nabla_x f$ (or, similarly, the eigenvalues of the Jacobian of the fixed point operator) at each optimization step, so you should visualize exactly that. Note that you are abusing notation with $\nabla_x f$ as it seems to assume a potential function, so, really, you are looking at the eigenvalues of the Jacobian of the expert's aggregated update. This is uniquely defined, and there should not be ambiguity in how you establish these eigenvalues.
As such, you should use the exact weighting of the expert's matrices, such that it equals $\nabla_x f$ (at some specific optimization step). I believe this is weighting them according to the gating scores. Still, the key point is figuring out how to visualize exactly the spectrum of $\nabla_x f$. You need to derive exactly what form the Jacobian $\nabla_x f$ -- which will involve gating values and expert matrices -- then visualize exactly that, and this will tell you exactly how to weigh the different matrices uniquely. You should not just choose arbitrary averages of the matrices or arbitrary averages of eigenvalues of different terms.
Note that the eigenvalues will likely be different at each step of your fixed-point operator. It is more common to visualize their union, to visualize them separately at varying points in optimization, or to visualize only those at the end of optimization, which dictate asymptotic convergence rates.
This specific spectrum is useful, as it dictates the spectrum of the Jacobian of the fixed point operator, which bounds convergence rates to the fixed point and, relatedly, determines the optimal acceleration schemes (like momentum and its variants). See Theorem 1 from Negative Momentum (https://arxiv.org/pdf/1807.04740), which just re-states the classic result of Prop. 4.4.1 from Bertsekas 1999.
D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, 1999.
---
Rebuttal Comment 5.1:
Comment: Thanks for your detailed explanation and informative reply. We follow the reviewer's suggestion and derive the Jacobian $\nabla_x f$ as follows.
In Eqn. (9) in our manuscript, the SMoE is formulated as
$$
x_{t+1} = x_t - \gamma\sum_{i=1}^K \text{softmax}(\text{TopK}(g_{i}(x_t))) [-u_i(x_t)] = x_t - \gamma f(x_t),
$$
where, again, $f(x) = \nabla_{x}F = \sum_{i=1}^K \text{softmax}(\text{TopK}(g_{i}(x))) [-u_i(x)]$ plays the role of a fixed point operator and $F$ is the objective function defined in Eqn. (3) in our manuscript. The Jacobian $\nabla_x f$ is then given by:
$$
\nabla_x f = \sum_{i=1}^K \nabla_x \, \{\text{softmax}(\text{TopK}(g_{i}(x))) [-u_i(x)]\}.
$$
For each data point, we differentiate $f$ using automatic differentiation in PyTorch. We then compute the eigenvalues of the Jacobian at layers 1, 3, and 6 in the SMoE model, corresponding to their respective steps in our optimization algorithm. These eigenvalues are listed in the code snippet below. You can run our code to visualize those eigenvalues in a scatterplot. As can be seen from the resulting figures, the eigenvalue phases cluster around $0$, $\pi$, and $-\pi$, indicating that while they are not exactly real, the eigenvalues are close to real. It is also interesting that in the last step (layer 6), there seem to be slightly fewer nearly purely imaginary eigenvalues.
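The Jacobian computation described above can be sketched with PyTorch's functional autodiff; the gated expert aggregate `f` below is a toy stand-in (random matrices and gating weights), not the actual SMoE layer:

```python
import torch

# Toy sketch of differentiating a gated expert aggregate with autodiff.
# W (expert matrices) and G (gating weights) are random stand-ins.
torch.manual_seed(0)
d, E = 4, 3
W = [torch.randn(d, d) for _ in range(E)]
G = torch.randn(E, d)

def f(x):
    gate = torch.softmax(G @ x, dim=0)                # gate scores
    return sum(gate[i] * (W[i] @ x) for i in range(E))

x0 = torch.randn(d)
J = torch.autograd.functional.jacobian(f, x0)         # (d, d) Jacobian
eigvals = torch.linalg.eigvals(J)                     # complex spectrum
```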
---
Reply to Comment 5.1.1:
Title: Code Snippet Part 1/4
Comment:
~~~python
import torch
import matplotlib.pyplot as plt
# layer 1
eig0_base = torch.tensor([ -28.21+0.00j, -18.35+0.21j, -18.35-0.21j, -14.52+0.68j, -14.52-0.68j, -7.76+1.04j, -7.76-1.04j, -15.45+0.00j, -14.99+0.12j, -14.99-0.12j, -14.32+0.00j,
-12.24+0.64j, -12.24-0.64j, -4.99+1.07j, -4.99-1.07j, -12.49+0.36j, -12.49-0.36j, -9.83+0.76j, -9.83-0.76j, 1.91+1.01j, 1.91-1.01j, -10.79+0.61j,
-10.79-0.61j, -7.73+0.85j, -7.73-0.85j, -11.27+0.42j, -11.27-0.42j, -11.91+0.10j, -11.91-0.10j, -10.23+0.53j, -10.23-0.53j, -6.54+0.85j, -6.54-0.85j,
-4.04+0.96j, -4.04-0.96j, 7.57+0.00j, 6.55+0.47j, 6.55-0.47j, 0.15+0.98j, 0.15-0.98j, -7.79+0.71j, -7.79-0.71j, 4.91+0.65j, 4.91-0.65j,
-9.64+0.14j, -9.64-0.14j, -3.78+0.85j, -3.78-0.85j, 2.30+0.79j, 2.30-0.79j, -1.28+0.88j, -1.28-0.88j, 0.89+0.82j, 0.89-0.82j, -10.34+0.16j,
-10.34-0.16j, -7.92+0.60j, -7.92-0.60j, -6.47+0.67j, -6.47-0.67j, -10.16+0.00j, -7.87+0.54j, -7.87-0.54j, -8.77+0.29j, -8.77-0.29j, -2.87+0.80j,
-2.87-0.80j, -0.06+0.79j, -0.06-0.79j, 3.06+0.70j, 3.06-0.70j, 5.00+0.55j, 5.00-0.55j, 5.16+0.40j, 5.16-0.40j, 6.78+0.13j, 6.78-0.13j,
6.83+0.00j, 5.82+0.31j, 5.82-0.31j, 6.09+0.19j, 6.09-0.19j, -8.94+0.00j, 2.90+0.67j, 2.90-0.67j, -4.00+0.74j, -4.00-0.74j, -2.03+0.74j,
-2.03-0.74j, 0.21+0.73j, 0.21-0.73j, -5.03+0.65j, -5.03-0.65j, -8.39+0.00j, 1.27+0.66j, 1.27-0.66j, 2.54+0.62j, 2.54-0.62j, 4.75+0.43j,
4.75-0.43j, 3.28+0.54j, 3.28-0.54j, 5.78+0.16j, 5.78-0.16j, 5.42+0.24j, 5.42-0.24j, -0.75+0.69j, -0.75-0.69j, -4.10+0.61j, -4.10-0.61j,
-6.44+0.47j, -6.44-0.47j, -7.08+0.35j, -7.08-0.35j, -7.75+0.18j, -7.75-0.18j, -7.46+0.27j, -7.46-0.27j, 5.36+0.00j, 4.62+0.25j, 4.62-0.25j,
-2.09+0.61j, -2.09-0.61j, -5.08+0.48j, -5.08-0.48j, -5.27+0.45j, -5.27-0.45j, 4.99+0.00j, -3.61+0.55j, -3.61-0.55j, -6.72+0.24j, -6.72-0.24j,
-6.60+0.22j, -6.60-0.22j, -7.18+0.01j, -7.18-0.01j, -7.04+0.06j, -7.04-0.06j, -1.82+0.59j, -1.82-0.59j, -0.18+0.59j, -0.18-0.59j, 4.10+0.28j,
4.10-0.28j, 3.61+0.35j, 3.61-0.35j, 1.72+0.50j, 1.72-0.50j, 4.05+0.13j, 4.05-0.13j, 2.38+0.45j, 2.38-0.45j, 2.71+0.39j, 2.71-0.39j,
-3.14+0.51j, -3.14-0.51j, -0.91+0.56j, -0.91-0.56j, -1.11+0.54j, -1.11-0.54j, 0.36+0.51j, 0.36-0.51j, -2.56+0.51j, -2.56-0.51j, -6.10+0.00j,
-4.94+0.29j, -4.94-0.29j, -4.73+0.30j, -4.73-0.30j, 2.97+0.33j, 2.97-0.33j, 4.08+0.07j, 4.08-0.07j, 3.95+0.13j, 3.95-0.13j, 0.57+0.47j,
0.57-0.47j, 1.69+0.41j, 1.69-0.41j, 3.44+0.21j, 3.44-0.21j, 2.96+0.20j, 2.96-0.20j, -0.15+0.45j, -0.15-0.45j, -0.59+0.46j, -0.59-0.46j,
-2.92+0.42j, -2.92-0.42j, -5.39+0.23j, -5.39-0.23j, -4.43+0.33j, -4.43-0.33j, -3.61+0.35j, -3.61-0.35j, -5.59+0.09j, -5.59-0.09j, -3.97+0.29j,
-3.97-0.29j, -4.96+0.17j, -4.96-0.17j, -5.13+0.12j, -5.13-0.12j, -5.21+0.03j, -5.21-0.03j, 0.88+0.39j, 0.88-0.39j, 2.95+0.14j, 2.95-0.14j,
2.02+0.29j, 2.02-0.29j, -2.25+0.38j, -2.25-0.38j, -1.58+0.39j, -1.58-0.39j, -2.39+0.33j, -2.39-0.33j, -0.05+0.38j, -0.05-0.38j, 0.32+0.37j,
0.32-0.37j, 1.51+0.31j, 1.51-0.31j, 1.26+0.29j, 1.26-0.29j, 3.21+0.01j, 3.21-0.01j, 2.26+0.19j, 2.26-0.19j, 2.75+0.06j, 2.75-0.06j,
-3.49+0.21j, -3.49-0.21j, -3.91+0.14j, -3.91-0.14j, -4.12+0.00j, -4.04+0.04j, -4.04-0.04j, -1.50+0.33j, -1.50-0.33j, -0.80+0.35j, -0.80-0.35j,
2.50+0.09j, 2.50-0.09j, 1.70+0.21j, 1.70-0.21j, 0.99+0.26j, 0.99-0.26j, -3.46+0.13j, -3.46-0.13j, 0.17+0.26j, 0.17-0.26j, -0.02+0.27j,
-0.02-0.27j, -2.45+0.25j, -2.45-0.25j, -0.99+0.27j, -0.99-0.27j, -1.47+0.28j, -1.47-0.28j, -2.07+0.23j, -2.07-0.23j, -2.83+0.16j, -2.83-0.16j,
-3.11+0.11j, -3.11-0.11j, 1.85+0.15j, 1.85-0.15j, 2.20+0.00j, 2.16+0.03j, 2.16-0.03j, -3.17+0.00j, -2.25+0.18j, -2.25-0.18j, 1.36+0.16j,
1.36-0.16j, 1.79+0.07j, 1.79-0.07j, -2.97+0.00j, -2.66+0.06j, -2.66-0.06j, -1.69+0.19j, -1.69-0.19j, -0.91+0.23j, -0.91-0.23j, -1.31+0.20j,
-1.31-0.20j, -2.42+0.03j, -2.42-0.03j, -2.05+0.12j, -2.05-0.12j, -0.37+0.22j, -0.37-0.22j, 0.68+0.20j, 0.68-0.20j, -0.68+0.20j, -0.68-0.20j,
1.81+0.00j, 0.05+0.19j, 0.05-0.19j, 0.55+0.16j, 0.55-0.16j, 1.37+0.00j, 1.20+0.08j, 1.20-0.08j, 0.90+0.12j, 0.90-0.12j, -1.09+0.15j,
-1.09-0.15j, -1.81+0.10j, -1.81-0.10j, -1.81+0.05j, -1.81-0.05j, -0.28+0.16j, -0.28-0.16j, 1.07+0.04j, 1.07-0.04j, 0.99+0.05j, 0.99-0.05j,
0.80+0.09j, 0.80-0.09j, -1.22+0.10j, -1.22-0.10j, -0.53+0.13j, -0.53-0.13j, -1.53+0.05j, -1.53-0.05j, -1.45+0.03j, -1.45-0.03j, -0.62+0.11j,
-0.62-0.11j, -0.11+0.11j, -0.11-0.11j, 0.16+0.09j, 0.16-0.09j, 0.52+0.03j, 0.52-0.03j, 0.04+0.07j, 0.04-0.07j, 0.52+0.00j, 0.22+0.04j,
0.22-0.04j, -0.93+0.05j, -0.93-0.05j, -0.42+0.05j, -0.42-0.05j, 0.19+0.00j, -0.02+0.00j, -1.12+0.00j, -0.94+0.00j, -0.65+0.00j, -0.68+0.00j])
~~~
---
Rebuttal 6:
Comment: I have run the code you provided, and those visualizations look good. Perhaps a heatmap would look reasonable in the plots, but that is a personal preference. This is, of course, not required for this paper, but for future analysis, might I suggest looking at the eigenvalues of the fixed point operator (as in Figure 4, https://arxiv.org/pdf/1807.04740), which combines information about your optimization scheme with the spectrum of $\nabla_x f$. For example, for gradient descent, which uses an update of $x = x - f$, the fixed point operator is $g(x) = x - f$, and you would want to look at $\text{eigs}(\nabla_x g) = \text{eigs}(I - \nabla_x f)$. The eigenvalues of the fixed point operator directly tell you convergence rates (in idealized scenarios), and looking at how your optimizer warps the spectrum may provide insight into the best acceleration schemes. You might want to look at the union of the eigenvalues over multiple different "layers" and optimization setups / different problems for the experts. It could also be worth looking at what eigenspaces your updates are actually accumulating in, as this is non-convex (so most theoretical results will say it doesn't converge). Overall, I am happy with your response, and I think this is a fun topic to ponder.
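This fixed-point-operator spectrum can be sketched numerically; the Jacobian `J` below is a random stand-in for $\nabla_x f$, used only for illustration:

```python
import numpy as np

# For an update x <- x - gamma * f(x), the fixed point operator is
# g(x) = x - gamma * f(x), whose Jacobian is I - gamma * J with J = grad_x f.
rng = np.random.default_rng(2)
d, gamma = 8, 0.1
J = rng.standard_normal((d, d))          # stand-in for the Jacobian of f
fp_eigs = np.linalg.eigvals(np.eye(d) - gamma * J)

# In idealized linear settings, all |eigenvalue| < 1 implies the fixed-point
# iteration converges, with rate governed by the spectral radius.
spectral_radius = np.max(np.abs(fp_eigs))
```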
```python
# Plotting code for the eigenvalue data above. Assumes matplotlib is
# imported as plt, and that abs0/phase0, abs2/phase2, abs5/phase5 hold the
# moduli and phases of the eigenvalues at layers 1, 3, and 6.
fig, ax = plt.subplots(1, 3, figsize=(16, 4), sharey=True)
fontsize = 15
bins = 51  # Number of bins for the heatmap. Better visualization if this number is odd imo
# Heatmap for the first set of data
h0 = ax[0].hist2d(abs0, phase0, bins=bins, cmap='Blues')
ax[0].set_ylim([-3.2, 3.2])
ax[0].set_yticks(ticks=[-3.14, -1.57, 0, 1.57, 3.14])
ax[0].set_yticklabels(labels=[-3.14, -1.57, 0, 1.57, 3.14], fontsize=fontsize-2)
ax[0].grid()
ax[0].set_title("Layer 1", fontsize=fontsize)
ax[0].tick_params(axis='both', which='major', labelsize=fontsize-2)
fig.colorbar(h0[3], ax=ax[0])
# Heatmap for the second set of data
h2 = ax[1].hist2d(abs2, phase2, bins=bins, cmap='Oranges')
ax[1].grid()
ax[1].set_title("Layer 3", fontsize=fontsize)
ax[1].tick_params(axis='both', which='major', labelsize=fontsize-2)
fig.colorbar(h2[3], ax=ax[1])
# Heatmap for the third set of data
h5 = ax[2].hist2d(abs5, phase5, bins=bins, cmap='Greens')
ax[2].grid()
ax[2].set_title("Layer 6", fontsize=fontsize)
ax[2].tick_params(axis='both', which='major', labelsize=fontsize-2)
fig.colorbar(h5[3], ax=ax[2])
plt.tight_layout()
plt.show()
```
---
Rebuttal Comment 6.1:
Title: Thanks for your endorsement!
Comment: We thank the reviewer for your constructive responses and interesting suggestions on the spectrum analysis of the fixed point operator. We are grateful for the time taken to provide good insights into further improvements to our paper. We will continue to explore these new perspectives in our future work. We appreciate your endorsement and the fruitful discussions. | Summary: This paper proposes a variant of sparse mixture of experts, MomentumSMoE, by incorporating momentum into the traditional sparse mixture of experts framework. The authors provide both theoretical proofs and empirical evidence demonstrating that MomentumSMoE offers greater stability and robustness compared to the standard sparse mixture of experts. Experiments on language modeling and object recognition tasks are conducted to verify the effectiveness of the proposal.
Strengths: 1. The idea of integrating momentum into sparse mixture of experts is interesting.
2. Both the theoretical proof and extensive empirical results are provided to demonstrate that the proposed MomentumSMoE is more stable and robust than SMoE; the experimental results are appealing.
3. The code is provided.
Weaknesses: The pseudocode may be provided to better illustrate the implementation of the proposal.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why do the MomentumV-MoE and Robust MomentumV-MoE have only marginal gains on clean IN-1K data? Is there any in-depth analysis available on this?
2. In the ImageNet-1K Object Recognition experiment, why was the popular top-5 accuracy metric not used, as it was in the Soft Mixture of Experts experiment?
3. As stated in the weaknesses, the authors could provide pseudocode to better clarify their proposal.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Why do MomentumV-MoE and Robust MomentumV-MoE have only marginal gains on clean IN-1K data? Is there any in-depth analysis available on this?**
**Answer:** V-MoE's result reported in Table 2 in our manuscript is among the state-of-the-art results on clean IN-1K data for models with around 45M effective parameters and no pretraining. Further improving this result is challenging, and the nearly 0.43\% improvement of our MomentumV-MoE over the V-MoE baseline is already nontrivial. Notice that both our MomentumV-MoE and Robust MomentumV-MoE significantly improve over the V-MoE baseline on robustness benchmarks, such as IN-R, IN-A, and IN-C, as shown in Table 2.
**Q2. In the ImageNet-1K Object Recognition experiment, why was the popular top-5 accuracy metric not used, as it was in the Soft Mixture of Experts experiment?**
**Answer:** Thanks for your comment. We provide the top-5 accuracy for ImageNet-1K object recognition task in Table 1 in the attached PDF.
**Q3. As stated in the weaknesses, the authors could provide pseudocode to better clarify their proposal.**
**Answer:** Thanks for your suggestion. We include the pseudocode for our MomentumSMoE, AdamSMoE, and Robust MomentumSMoE below.
~~~
Hyperparameters: mu, gamma
def MomentumSMoE(x, momentum):
    momentum = - SMoE(x) + mu * momentum
    x = x + gamma * momentum
    return x
~~~
~~~
Hyperparameters: mu, beta, gamma, k, eps = 1e-8
def AdamSMoE(x, gradient, squared_gradient):
    gradient = - (1 - mu) * SMoE(x) + mu * gradient
    squared_gradient = beta * squared_gradient + (1 - beta) * SMoE(x) ** 2
    x = x + gamma / (torch.sqrt(squared_gradient) + eps) * gradient - k * x
    return x
~~~
~~~
Hyperparameters: p, L, m
def RobustMomentumSMoE(x, momentum):
    k = L / m
    gamma = k * ((1 - p) ** 2) * (1 + p) / L
    mu = k * p ** 3 / (k - 1)
    alpha = p ** 3 / ((k - 1) * ((1 - p) ** 2) * (1 + p))
    y = x + alpha * gamma * momentum
    momentum = - SMoE(y) + mu * momentum
    x = x + gamma * momentum
    return x
~~~
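The MomentumSMoE pseudocode can be exercised end-to-end with a toy stand-in for the SMoE layer; `W`, `mu`, and `gamma` below are illustrative values, not the paper's:

```python
import numpy as np

# Runnable toy version of MomentumSMoE: a linear map stands in for SMoE(x),
# and the momentum state p is accumulated across six "layers".
rng = np.random.default_rng(3)
W = 0.1 * rng.standard_normal((4, 4))   # stand-in for the SMoE layer
mu, gamma = 0.7, 0.5                    # illustrative hyperparameters

def smoe(x):
    return W @ x

def momentum_smoe(x, p):
    p = -smoe(x) + mu * p               # momentum update
    x = x + gamma * p                   # layer output
    return x, p

x, p = rng.standard_normal(4), np.zeros(4)
for _ in range(6):                      # one step per layer
    x, p = momentum_smoe(x, p)
```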
---
Rebuttal 2:
Title: Any Questions from Reviewer puL2 on Our Rebuttal?
Comment: We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.
We would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.
We would be happy to do any follow-up discussion or address any additional comments. | Summary: The paper introduces a novel approach to enhancing the robustness and stability of Sparse Mixture of Experts (SMoE) models. Inspired by the analogy of gradient descent and SMoE, the authors develop a family of models by incorporating momentum into the training process. The key idea is that training SMoE is a multi-objective optimization problem where the momentum-based gradient descent method is more stable and robust than the vanilla one. They propose AdamSMoE and Robust MomentumSMoE, which demonstrate improved performance across a variety of tasks, including language modeling and object recognition.
Strengths: (1) The integration of momentum into SMoE is a non-trivial innovation that addresses instability and inefficiency issues in existing models.
(2) The paper provides convincing empirical evidence showing the effectiveness of MomentumSMoE across multiple benchmarks.
(3) The proposed method's compatibility with other momentum-based optimizers, like Adam, suggests it can be broadly applied to various SMoE architectures.
Weaknesses: (1) Formulating SMoE as a multi-objective optimization problem is doubtful to me. Every expert network is continually changing during model training, which makes each objective nonstatic; this violates the basic assumption of multi-objective optimization that the objectives should be clear and stable.
(2) It is unconvincing to use ||f(x)|| as the key metric to measure the efficacy of SMoE or MoE. This confuses me a lot. Please explain why the output norm represents the goodness/badness of the model.
(3) There are some grammar issues. Please use `` instead of " in the paper (line 665).
(4) There is no sufficient discussion of computation overhead. Training efficiency is a critical issue for current foundation model training. Does computation significantly increase by applying momentum over the SMoE? Keeping an additional copy weight (p in Fig 1) would take additional memory and may decrease the throughput.
I'd like to hear a more insightful discussion regarding all the points above from the authors.
Technical Quality: 2
Clarity: 3
Questions for Authors: (1) Please explain more of line 140 ("Thus, it is expected that these two terms learn to reduce ...").
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Formulating SMoE as a multi-objective optimization problem is doubtful to me.**
**Answer:** We believe there is a misunderstanding of our formulation of SMoE as a multi-objective optimization problem. Please allow us to clear this misunderstanding by clarifying the role of expert networks in our multi-objective optimization (MOO) framework for Sparse Mixture of Experts (SMoE). We rewrite the minimization problem in Eqn. (3) as follows:
$$
\min_{x\in D, \theta \in \Theta} F(x, \theta) := \sum_{i=1}^E c_i F_i(x, \theta^{i}),
$$
where $\theta^{i}$ are the parameters of expert network $i$, $\theta = \{\theta^{1},\dots,\theta^{E}\}$, and $\Theta$ is the parameter space. Also, $D$ is the feasible region, and $c_i \in \mathbb{R}$, $i=1,\dots,E$, are weights representing the importance of each objective function. In order to solve this optimization problem, we employ the alternating minimization approach as follows:
*Step 1 - fix $\theta$ and minimize $F$ w.r.t. $x$*:
$$
x_{t+1} = x_{t} - \gamma \sum_{i=1}^{E}\alpha_{i}^*\nabla_xF_i(x_t,\theta^{i}_{t})
$$
$$
= x_{t} - \gamma \sum_{i=1}^{E}\alpha_{i}^* f_{i}(x_{t}, \theta_{t}^i),
$$
where $\alpha^*=(\alpha_1^*,\dots,\alpha_E^*)$ satisfy the Pareto-stationary condition in Definition 1 in our manuscript, $\gamma$ is the step size, and $f_{i}=\nabla_xF_i$.
*Step 2 - fix $x$ and minimize $F$ with respect to $\theta$*
$$
\theta_{t+1}^{i} = \arg\min_{\theta^{i} \in \Theta^{i}} c_i F_i(x_{t+1}, \theta^{i}), \quad i=1,\dots,E,
$$
where $\theta^{i}$ lies in the parameter space $\Theta^{i}$.
In our formulation of SMoE as a multi-objective optimization problem, we regard $-u_i(x_t)$, where $u_i$ is the $i^{\text{th}}$ expert network, as the gradient $\nabla_{x}F_i(x_t,\theta^{i}_{t})$ in Step 1, and the score $\text{softmax}(g_i(x_t))$ from the router $g(x_t)$ is learned to approximate $\alpha_i^*$. Step 1 in the alternating minimization algorithm above corresponds to the forward pass of an SMoE, and Step 2 corresponds to an update step of the model's parameters (i.e., similar to an M-step in an EM algorithm). Note that this parameter update step is implicitly performed at each training iteration via optimizing the objective loss at the final layer of the model using backpropagation and gradient descent. Results in Fig. 2 and the discussion from line 137 to line 144 in our manuscript provide supporting evidence for this implicit parameter update.
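Step 1 can be illustrated on a toy problem where each per-expert objective is a quadratic $F_i(x)=\frac{1}{2}\|x-c_i\|^2$ and fixed weights stand in for the learned scores $\alpha_i^*$ (all values below are illustrative):

```python
import numpy as np

# Toy numeric sketch of Step 1 (theta fixed, x updated): a weighted gradient
# step over E per-expert quadratics F_i(x) = 0.5 * ||x - c_i||^2.
E, d, gamma = 3, 4, 0.5
rng = np.random.default_rng(4)
centers = rng.standard_normal((E, d))       # c_i, one per expert
alpha = np.array([0.5, 0.3, 0.2])           # stand-in weights (sum to 1)

def step(x):
    grads = x - centers                     # grad_x F_i = x - c_i (row i)
    return x - gamma * (alpha[:, None] * grads).sum(axis=0)

x = rng.standard_normal(d)
for _ in range(50):
    x = step(x)

# with these quadratics, the iterates converge to the alpha-weighted mean
target = (alpha[:, None] * centers).sum(axis=0)
```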
In Step 1, the gradient $\nabla_{x}F_i(x_t,\theta_{t}^{i})$ is a function of $\theta_{t}^{i}$ and is expected to change during model training, which matches the observation that each expert network is continually changing during training. Also, the objective function of our MOO problem is $F(x, \theta)$, for which we want to find $x^*$ and $\theta^*$ that minimize $F$. This objective function is static, clear, and stable.
**Q2. Why $||f(x)||$ as the key metrics to measure the efficacy of (S)MoE?**
**Please explain more of line 140.**
**Answer:** In Fig. 2 and 3 in our manuscript, $||f(x)||$ is not used as a key metric to measure the efficacy of SMoE/MoE. Instead, the results in Fig. 2 and 3 are to justify the connection between SMoE/MoE and gradient descent. In particular, from Eqn. (8) and (9), the output of (S)MoE corresponds to $-f(x_t)$, where $f(x_t) = \nabla_xF(x_t)$ is the gradient of the objective function $F$ with respect to $x$ at $x_t$. If (S)MoE indeed performs (stochastic) gradient descent in its forward pass, then the norm of the (S)MoE output must decrease as $t$ (i.e., the layer index) increases, since the gradient norm $\|f(x_t)\|$ decreases when gradient descent updates are applied. Fig. 3 confirms this by showing that the norm of the (S)MoE output decreases over layers in a 6-layer (S)MoE model trained on the WikiText-103 language modeling task. The increase in the norm at the last layer might be due to overshooting, a common phenomenon that can occur when using gradient descent.
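The claim that gradient norms shrink along gradient-descent iterates can be checked on a stand-alone convex quadratic (illustrative only, not the trained model):

```python
import numpy as np

# Gradient descent on F(x) = 0.5 * x^T Q x with a safe step size:
# the gradient norm ||f(x_t)|| = ||Q x_t|| decreases at every step.
rng = np.random.default_rng(5)
M = rng.standard_normal((6, 6))
Q = M @ M.T + np.eye(6)                      # symmetric positive definite
gamma = 0.9 / np.linalg.eigvalsh(Q).max()    # step size below 1/L

x = rng.standard_normal(6)
norms = []
for _ in range(10):
    g = Q @ x                                # gradient of F at x
    norms.append(np.linalg.norm(g))
    x = x - gamma * g
# norms decreases monotonically for this step size
```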
Additionally, from line 116 to line 119 in our manuscript, we hypothesize that the scores $\text{softmax}(g_{i}(x_t))$ for MoE and $\text{softmax}(\text{TopK}(g_{i}(x_t)))$ for SMoE from the router $g(x_t)$ are learned to approximate $\alpha_i^*$. If this is indeed true, then in order to satisfy the Pareto-stationary condition in Definition 1 in our manuscript, the scores should be learned to minimize the norm $\|\sum_{i=1}^E \alpha_i \tilde{f_i}\|$ (see lines 93-99 in our manuscript), which is equivalent to the norm of the (S)MoE output. Fig. 2 verifies this expectation by showing that each MoE and SMoE layer learns to reduce its output norm during training, suggesting that the scores $\text{softmax}(g_i(x_t))$ and $\text{softmax}(\text{TopK}(g_{i}(x_t)))$ are learned to approximate $\alpha_i^{*}$. This explains line 140.
**Q3. Grammar issues:** We have addressed the grammar issues in our revision.
**Q4. Insufficient discussion of computation overhead.**
**Answer:** The reviewer might have overlooked Appendix E.5 of our manuscript. Table 11 in E.5 reports the training/inference runtime, training/inference memory usage, and the number of parameters of our MomentumSMoE/AdamSMoE vs. the SMoE baseline. Our MomentumSMoE/AdamSMoE have the same training/inference memory usage and number of parameters as SMoE. Furthermore, our momentum models are on par with the SMoE baseline in terms of training/inference runtime. Compared to the SMoE baseline, our MomentumSMoE (see Fig. 1) only needs an additional momentum state $p$, which is accumulated from layer to layer. Thus, MomentumSMoE only uses one additional tensor, shared between layers, to store this momentum state $p$. Similarly, AdamSMoE only needs two additional tensors shared between layers to store $p$ and $m$. The memory overhead for these additional tensors is minimal, which explains why the memory increase and throughput decrease of our MomentumSMoE/AdamSMoE compared to the SMoE baseline are negligible.
---
Rebuttal 2:
Title: Any Questions from Reviewer oJG3 on Our Rebuttal?
Comment: We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.
We would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.
We would be happy to do any follow-up discussion or address any additional comments.
---
Rebuttal Comment 2.1:
Title: After Rebuttal
Comment: Thanks for the detailed response from the authors. Most of my concerns are well addressed. I'd like to increase the score to 5.
---
Reply to Comment 2.1.1:
Title: Thanks for your endorsement!
Comment: Thanks for your response, and we appreciate your endorsement. | Summary: This paper addresses the instability problem of training SMoE models. By establishing a relationship between SMoE and multi-objective optimization, the authors integrate momentum into SMoE and propose MomentumSMoE. Experimental results show that MomentumSMoE is more stable than SMoE during training.
Strengths: 1. The paper tackles a critical issue in the training of SMoE models.
2. The proposed method is generalizable and can be applied to various SMoE models such as V-MoE and GLaM.
3. Experimental results demonstrate that this method is more stable than SMoE during the training process.
Weaknesses: 1. This method has little effect on models with few layers.
2. The largest models for evaluation only have 388M parameters, which are much smaller than mainstream MoE LLMs.
3. From a theoretical standpoint, developing a framework to explain the enhanced robustness of MomentumSMoE would be interesting.
Technical Quality: 3
Clarity: 3
Questions for Authors: please refer to weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. This method has little effect on models with few layers.**
**Answer:** A momentum-based approach like MomentumSMoE needs more layers to show its advantages, just like heavy-ball momentum or Adam needs a couple of iterations to start showing its faster convergence compared to gradient descent. Also, models with few layers are not common in practice due to their worse performance compared to models with more layers. For example, as can be seen in Table 7 in our manuscript, the SMoE-small baseline has much higher (i.e., worse) validation and test perplexity (PPL) than the SMoE-medium and SMoE-large baselines: 84.26/84.81 PPL vs. 33.76/35.55 and 29.31/30.33 PPL, respectively. The poor PPL of the SMoE-small baseline makes its results on the WikiText-103 language modeling task much less informative.
**Q2. The largest models for evaluation only have 388M parameters, which are much smaller than mainstream MoE LLMs.**
**Answer:** Thanks for your comment. During the rebuttal, we have tried our best to conduct an additional experiment on a Sparse Mixture of Experts (SMoE) backbone with more than 1B parameters. However, due to a shortage of computing resources and the short rebuttal period, our training runs have not finished yet. We will include these results of our momentum-based SMoE on a larger SMoE backbone in the revised manuscript and, if possible, during the discussion period.
**Q3. From a theoretical standpoint, developing a framework to explain the enhanced robustness of MomentumSMoE would be interesting.**
**Answer:** Thanks for your suggestion. We agree with the reviewer that developing a framework to explain the enhanced robustness of MomentumSMoE would be interesting. The approach to theoretically proving the robustness of MomentumSMoE is to consider a minimizer $x^*$ of the objective function $F(x)$ in Eqn. (3) in our manuscript, which is found by heavy-ball updates starting at the initial point $x_0$. Then, we prove that by tuning the momentum $\mu$ and step size $\gamma$, the heavy-ball iterations will find a point $x_k$ such that $\|x_k - x^*\| \le c\rho^{k}$ for a constant $c$, $\rho \in [0, 1)$, and $k \ge 1$. This result implies that the output of MomentumSMoE is close to $x^{*}$ as long as the input data is close to $x_0$, thus verifying the robustness of MomentumSMoE. This proof can be extended from the proof of Proposition 1 and Corollary 1 in our manuscript. We will include the detailed proposition and its proof for the robustness of MomentumSMoE in our revision.
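A geometric bound of the form $\|x_k - x^*\| \le c\rho^{k}$ can be observed numerically for heavy-ball iterations on a strongly convex quadratic; the Polyak tuning of $\gamma$ and $\mu$ below is illustrative, not the paper's exact setting:

```python
import numpy as np

# Heavy-ball iterations on F(x) = 0.5 * x^T Q x, whose minimizer is x* = 0.
L_, m_ = 10.0, 1.0
Q = np.diag(np.linspace(m_, L_, 5))   # Hessian with eigenvalues in [m, L]
x_star = np.zeros(5)

# classic Polyak tuning for step size and momentum (illustrative choice)
gamma = 4.0 / (np.sqrt(L_) + np.sqrt(m_)) ** 2
mu = ((np.sqrt(L_) - np.sqrt(m_)) / (np.sqrt(L_) + np.sqrt(m_))) ** 2

x, x_prev = np.ones(5), np.ones(5)
dists = []
for _ in range(60):
    x, x_prev = x - gamma * (Q @ x) + mu * (x - x_prev), x
    dists.append(np.linalg.norm(x - x_star))
# the distance to the minimizer shrinks roughly geometrically in k
```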
---
Rebuttal 2:
Title: Any Questions from Reviewer qEGD on Our Rebuttal?
Comment: We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.
We would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.
We would be happy to do any follow-up discussion or address any additional comments. | Rebuttal 1:
Rebuttal: ## Global Rebuttal
Dear AC and reviewers,
Thanks for your thoughtful reviews and valuable comments, which have helped us improve the paper significantly. We are encouraged by the endorsements that: 1) The idea of integrating momentum into Sparse Mixture of Experts (SMoE) is original, interesting, and a non-trivial innovation that tackles a critical issue in the training of SMoE models (all reviewers); 2) The paper provides convincing and comprehensive empirical evidence showing the effectiveness of MomentumSMoE across multiple benchmarks (all reviewers); 3) The proposed method is compatible with other momentum-based optimizers, like Adam (Reviewer oJG3), and can be applied to various SMoE models such as V-MoE and GLaM (Reviewer qEGD). We have included the additional experimental results requested by the reviewers in the 1-page attached PDF.
One of the concerns shared by Reviewer oJG3 and GFpG is that the formulation of SMoE as a multi-objective optimization problem is not clear and the connection between SMoE and gradient descent needs justification. We address this concern here.
**Clarifying the formulation of SMoE as a multi-objective optimization (MOO) problem**
Let us start by rewriting the minimization problem in Eqn. (3) as follows:
$$
\min_{x\in D, \theta \in \Theta} F(x, \theta) := \sum_{i=1}^E c_i F_i(x, \theta^{i}).
$$
where $\theta^{i}$ denotes the parameters of expert network $i$, $\theta = \{\theta^{1},\dots,\theta^{E}\}$, and $\Theta$ is the parameter space. Also, $D$ is the feasible region, and $c_i \in \mathbb{R}$, $i=1,\dots,E$, are weights representing the importance of each objective function. To solve this optimization problem, we employ the alternating minimization approach as follows:
*Step 1 - fix $\theta$ and minimize $F$ w.r.t. $x$*:
$$
x_{t+1} = x_{t} - \gamma \sum_{i=1}^{E}\alpha_{i}^*\nabla_xF_i(x_t,\theta^{i}_{t})
$$
$$
= x_{t} - \gamma \sum_{i=1}^{E}\alpha_{i}^* f_{i}(x_{t}, \theta_{t}^i),
$$
where $\alpha^*=(\alpha_1^*,\dots,\alpha_E^*)$ satisfy the Pareto-stationary condition in Definition 1 in our manuscript, $\gamma$ is the step size, and $f_{i}=\nabla_xF_i$.
*Step 2 - fix $x$ and minimize $F$ with respect to $\theta$*
$$
\theta_{t+1}^{i} = \min_{\theta^{i} \in \Theta^{i}} c_i F_i(x_{t+1}, \theta^{i}), i=1,\dots,E.
$$
where $\theta^{i}$ lies in the parameter space $\Theta^{i}$.
In our formulation of SMoE as a multi-objective optimization problem, we regard $-u_i(x_t)$, where $u_i$ is the $i^{\text{th}}$ expert network, as the gradient $\nabla_{x}F_i(x_t,\theta^{i}_{t})$ in Step 1, and the score $\text{softmax}(g(x_t))_i$ from the router $g(x_t)$ is learned to approximate $\alpha_i^*$. Step 1 in the alternating minimization algorithm above corresponds to the forward pass of an SMoE, and Step 2 corresponds to an update step of the model's parameters (i.e., similar to an M-step in an EM algorithm). Note that this parameter update step is implicitly performed at each training iteration via optimizing the objective loss at the final layer of the model using backpropagation and gradient descent. Results in Fig. 2 and the discussion from lines 137 to 144 in our manuscript provide supporting evidence for this implicit parameter update.
As can be seen in Step 1, the gradient $\nabla_{x}F_i(x_t,\theta_{t}^{i})$ is a function of $\theta_{t}^{i}$ and is supposed to change during the model training, which matches the observation that the expert networks are continually changing during training. Also, the objective function of our MOO problem is $F(x, \theta)$, for which we want to find $x^*$ and $\theta^*$ that minimize $F$. This objective function is static, clear, and stable.
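To make this view of Step 1 concrete, here is a minimal sketch (entirely our own illustration; the quadratic experts and all names are illustrative assumptions, not the authors' implementation) of an SMoE forward pass as one weighted gradient step:

```python
import numpy as np

def smoe_forward(x, experts, router_logits, gamma=1.0):
    """One SMoE layer viewed as Step 1: x - gamma * sum_i alpha_i * grad_x F_i."""
    alpha = np.exp(router_logits) / np.exp(router_logits).sum()  # approximates alpha_i^*
    # each expert output u_i(x) plays the role of -grad_x F_i(x, theta^i)
    grad = -sum(a * u(x) for a, u in zip(alpha, experts))
    return x - gamma * grad

# toy experts with F_i(x) = 0.5 ||x - c_i||^2, so u_i(x) = -(x - c_i) = c_i - x
centers = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
experts = [lambda x, c=c: c - x for c in centers]
out = smoe_forward(np.zeros(2), experts, router_logits=np.array([0.0, 0.0]))
# with gamma = 1, the output is the alpha-weighted mean of the centers
```

With uniform router scores and unit step size, the layer output lands at the weighted mean of the experts' minimizers, i.e., one exact minimization step of the weighted objective.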
**Empirical evidence to justify the connection between SMoE and gradient descent**
In Section 2.3 of our manuscript, from line 128 to line 144, we discuss the empirical evidence (see Figures 2 and 3 in our manuscript) for the connection between SMoE and gradient descent. In particular, from Eqns. (8) and (9), the output of (S)MoE corresponds to $-f(x_t)$, where $f(x_t) = \nabla_xF(x_t)$ is the gradient of the objective function $F$ with respect to $x$ at $x_t$. If (S)MoE indeed performs (stochastic) gradient descent in its forward pass, then the norm of the (S)MoE output must decrease when $t$ (i.e., the layer index) increases, since the gradient norm $\|f(x_t)\|$ decreases as gradient descent updates are applied. Fig. 3 confirms this by showing that the norm of the (S)MoE output decreases over layers in a 6-layer (S)MoE model trained on the WikiText-103 language modeling task. At the last layer, the norm increase might be due to overshooting, a common phenomenon that can occur when using gradient descent.
Additionally, from line 116 to line 119 in our manuscript, we hypothesize that the scores $\text{softmax}(g_i(x_t))$ for MoE and $\text{softmax}(\text{TopK}(g_{i}(x_t)))$ for SMoE from the router $g(x_t)$ are learned to approximate $\alpha_i^*$. If this is indeed true, then in order to satisfy the Pareto-stationary condition in Definition 1 in our manuscript, the scores should be learned to minimize the norm $\|\sum_{i=1}^E \alpha_i \tilde{f_i}\|$ (see lines 93-99 in our manuscript), which is equivalent to the norm of the (S)MoE output. Fig. 2 verifies this expectation by showing that each MoE and SMoE layer learns to reduce its output norm during training, suggesting that the scores $\text{softmax}(g_{i}(x_t))$ and $\text{softmax}(\text{TopK}(g_{i}(x_t)))$ are learned to approximate $\alpha_i^{*}$.
-----
We hope that our rebuttal has addressed your concerns about our work. We are glad to answer any further questions you have on our submission, and we would appreciate your further feedback at your earliest convenience.
Pdf: /pdf/9534f756b3e37ad200814a5f0ce86cc142400fe0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Parallelizing Linear Transformers with the Delta Rule over Sequence Length | Accept (poster) | Summary: This paper proposes the Delta Rule method to construct the state updates for Linear Attention. Furthermore, the paper introduces a chunk-wise training approach, allowing the computational cost of training to grow subquadratically with the text length. Experimentally, the paper validates the effectiveness of the model architecture using three synthetic benchmarks: MQAR, MAD, and RegBench. Additionally, the paper uses Common Sense Reasoning and Retrieval tasks in LLM pre-training to verify the model's performance in real-world tasks. The model has been validated at scales ranging from 340M to 1.3B parameters. Furthermore, this paper explores the possibility of combining the Delta Rule with Sliding Window Attention and Global Attention, demonstrating the positive impact of the hybrid architecture on model performance.
Strengths: 1. Solid work. The paper provides a good derivation, offering a more general method for state updates in Linear Models.
2. The experiments are comprehensive and effectively demonstrate the validity of the model architecture.
Weaknesses: 1. Have you conducted experiments on long context? For example, measuring extrapolation and scenarios akin to "looking for a needle in a haystack"? As a linear model, I would like you to further discuss its capability to generalize to long context.
2. The algorithmic speed of Delta Net increases linearly, but it seems to be slower than GLA. Can you analyze the factors contributing to this?
3. Could you further explain the insights of the Delta Net updates? I understand there are algorithmic differences compared to GLA operators, but what unique benefits do they bring? Is there any theoretical analysis?
Technical Quality: 3
Clarity: 3
Questions for Authors: I would like to discuss the following questions with you:
Do you think linear models can fundamentally bridge the gap with transformers in memory-based tasks?
Is there an inherent conflict between the ability to handle long context and the performance of memory-based tasks?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review!
## W1 Long context experiments
Thanks for your suggestion. Models at 1B scale are currently not powerful enough to provide meaningful results on needle-in-the-haystack style long-range benchmarks. We are currently training models at a larger scale (3B parameters), and will run the needle-in-the-haystack tests with these models once they have finished training. (Below we give the results of the 3B models, which have been trained on 167B tokens so far, on the other benchmarks. We find that DeltaNet performs comparably to Transformers at this scale.)
| Model | # Tokens | wikitext PPL | arc-c | arc-e | boolq | hellaswag | openbookqa | piqa | sciq | winogrande | average |
|------------|--------|-------|-------|-------|-----------|------------|-------|-------|------------|-----------|--------------|
| 3B-Transformer++ | 167B | 15.39 | 29.78 | 62.96 | 63.06 | 46.15 | 27.8 | 72.96 | 91.1 | 60.62 | 56.80 |
| 3B-DeltaNet| 167B | 15.34 | 29.78 | 63.8 | 65.08 | 46.57 | 26.2 | 74.32 | 88.5 | 58.09 | 56.54 |
## W2: Why is DeltaNet slower than GLA?
Both DeltaNet and GLA scale linearly with respect to sequence length; however, the recurrent transition matrices in GLA are diagonal matrices, making them amenable to tiling techniques (commonly used to accelerate matrix multiplications on GPUs) because the hidden states are independent of each other. The recurrent transition matrices in DeltaNet are more expressive, modeling state-to-state dependencies and thus requiring "marginalizing" over the entire head dimension, which makes them less amenable to tiling, as discussed in the limitations section (lines 301-306).
Additionally, the computation of the WY representation in DeltaNet is more expensive than computing the cumulative product of decays in GLA. From Eq. 7-8, we can see that the DeltaNet chunkwise form involves more matrix multiplications than vanilla linear attention (and also GLA).
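The structural difference can be seen in a small sketch (our own illustration, not the authors' kernel code; conventions for which side the transition multiplies on are simplified): a GLA-style diagonal transition is an elementwise decay, so state entries evolve independently, while a DeltaNet-style generalized-Householder transition couples the whole head dimension through a matrix product:

```python
import numpy as np

d, T = 4, 6
rng = np.random.default_rng(0)

# GLA-style: diagonal transitions -> elementwise decay, states stay independent
decays = rng.uniform(0.5, 1.0, size=(T, d))
# DeltaNet-style: (I - beta_t k_t k_t^T) transitions -> full matmul per step
ks = rng.standard_normal((T, d))
ks /= np.linalg.norm(ks, axis=1, keepdims=True)
betas = rng.uniform(0.0, 1.0, size=T)

S_diag = np.ones((d, d))
S_dense = np.ones((d, d))
for t in range(T):
    S_diag = decays[t][:, None] * S_diag  # O(d^2): rows never interact
    M = np.eye(d) - betas[t] * np.outer(ks[t], ks[t])
    S_dense = S_dense @ M                 # O(d^3): mixes the head dimension
```

Because the diagonal case reduces to an independent product of decays per coordinate, it parallelizes trivially with tiling, whereas the dense case must propagate information across the full head dimension at every step.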
## W3 Intuitive explanation and theoretical justification of delta update rule
We found that [1] motivates the delta rule intuitively from the perspective of key-value associative memory: the key intuition of the delta rule is to subtract the value associated with the current input key, namely the old value, from the memory, and write a new value that is a linear combination of the old value and the current input value into the memory. This encourages the formation of key-value associations, making retrieval easier.
Regarding theoretical justification, in lines 322-323, we reference several theoretical papers that reveal the superiority of the delta rule in memory capacity in a mathematical way.
[1] Linear Transformers Are Secretly Fast Weight Programmers https://arxiv.org/pdf/2102.11174
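A minimal sketch of that intuition (our own illustration, assuming unit-norm keys; not the paper's implementation): read the old value bound to the key, blend it with the incoming value, and rewrite the association:

```python
import numpy as np

def delta_update(S, k, v, beta):
    """Delta-rule memory update: S <- S - beta * (S k - v) k^T  (for ||k|| = 1)."""
    v_old = S @ k                          # value currently associated with key k
    v_new = beta * v + (1 - beta) * v_old  # blend old and incoming values
    return S - np.outer(v_old, k) + np.outer(v_new, k)

k = np.array([1.0, 0.0, 0.0])
S = delta_update(np.zeros((2, 3)), k, np.array([3.0, -1.0]), beta=1.0)
S = delta_update(S, k, np.array([5.0, 2.0]), beta=1.0)
# with beta = 1, the new value fully overwrites the old association for key k
```

Unlike the vanilla linear-attention update, which simply accumulates $v k^\top$ and lets values for the same key pile up, this rule first erases the stale association, which is what makes retrieval of the latest value easy.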
## Q1 Do you think linear models can fundamentally bridge the gap with transformers in memory-based tasks?
We don't think linear models can fundamentally bridge the gap with Transformers in memory-based (or recall-intensive) tasks. Several theoretical papers reveal such limitations [1, 2].
Still, we believe the use of the delta rule will push the Pareto frontier of the recall-memory tradeoff curve, a concept that was advocated in [2].
It is possible and promising to replace the subquadratic encoder in YOCO [3] with DeltaNet to fully bridge the gap with Transformers in memory tasks.
[1] RNNs are not Transformers (Yet): The Key Bottleneck on In-context Retrieval. https://arxiv.org/abs/2402.18510
[2] Simple linear attention language models balance the recall-throughput tradeoff https://arxiv.org/abs/2402.18668
[3] You Only Cache Once: Decoder-Decoder Architectures for Language Models https://arxiv.org/abs/2405.05254
## Q2: Is there an inherent conflict between the ability to handle long context and the performance of memory-based tasks?
This is a good question, and one for which the community doesn't yet have a good answer (in our opinion). Our sense is that memory-based tasks will always require some attention-like mechanism which "retrieves" portions of the context. It is possible that subquadratic-but-superlinear attention mechanisms (e.g., those based on clustering) may enable efficient long-context modeling while still enabling good performance on memory-based tasks.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, the authors addressed my concerns, and I have raised my score. | Summary: This paper introduces a novel algorithm for the efficient training of DeltaNet Linear Transformers. DeltaNet enhances contextual associative recall using a delta rule-like update but was previously limited by inefficient parallelization in its training algorithm. The work described in this paper presents a hardware-efficient algorithm that leverages the memory-efficient WY representation for computing products of Householder matrices, enabling the scaling of DeltaNet similar to other linear Transformer models. The authors trained a 1.3B parameter model on 100B tokens and found that it outperforms strong linear-time baselines such as Mamba and GLA in terms of perplexity and zero-shot performance on downstream tasks.
Strengths: - The paper introduces a novel hardware-efficient algorithm for training DeltaNet Linear Transformers, leveraging the WY representation of Householder matrices, which effectively addresses the parallelization limitations of previous algorithms.
- Through large-scale experiments, the authors demonstrate that DeltaNet significantly outperforms existing models like Mamba and GLA in terms of perplexity and zero-shot performance on downstream tasks.
- The new algorithm enables the scaling of DeltaNet to larger datasets and parameter sizes, which is crucial for large language models.
Weaknesses: The algorithms presented in this paper are satisfactory in terms of efficiency and performance.
Technical Quality: 4
Clarity: 3
Questions for Authors: I have no questions for this paper.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review!
We are adding some additional results in case they are of interest.
First, we have preliminary results with 3B models trained for 167B tokens:
| Model | # Tokens | wikitext PPL | arc-c | arc-e | boolq | hellaswag | openbookqa | piqa | sciq | winogrande | average |
|------------|--------|-------|-------|-------|-----------|------------|-------|-------|------------|-----------|--------------|
| 3B-Transformer++ | 167B | 15.39 | 29.78 | 62.96 | 63.06 | 46.15 | 27.8 | 72.96 | 91.1 | 60.62 | 56.80 |
| 3B-DeltaNet| 167B | 15.34 | 29.78 | 63.8 | 65.08 | 46.57 | 26.2 | 74.32 | 88.5 | 58.09 | 56.54 |
We are aiming to train these models for longer (300B-1T tokens) and will report these results in the next iteration of the paper.
We have also run experiments on the recently-released [MAD benchmark](https://arxiv.org/abs/2403.17844), a suite of synthetic tasks designed to test the capabilities of language models beyond perplexity. Here are the results:
| Model | Compress | Fuzzy Recall | In-Context Recall | Memorize | Noisy Recall | Selective Copy | Average |
|---------------|----------|--------------|-------------------|----------|--------------|----------------|---------|
| Transformer | 51.6 | 29.8 | 94.1 | 85.2 | 86.8 | 99.6 | 74.5 |
| Hyena | 45.2 | 7.9 | 81.7 | 89.5 | 78.8 | 93.1 | 66.0 |
| Multihead Hyena | 44.8 | 14.4 | 99.0 | 89.4 | 98.6 | 93.0 | 73.2 |
| Mamba | 52.7 | 6.7 | 90.4 | 89.5 | 90.1 | 86.3 | 69.3 |
| GLA | 38.8 | 6.9 | 80.8 | 63.3 | 81.6 | 88.6 | 60.0 |
| DeltaNet | 42.2 | 35.7 | 100 | 52.8 | 100 | 100 | 71.8 |
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. Your responses solve my concern to some extent. I maintain my score. | Summary: This paper proposes a hardware-efficient algorithm for training linear transformers with a delta update (DeltaNet; SMS21). This architecture has an attention formulation that prevents the direct application of chunk-wise parallel algorithms for computing its output. To address this issue, the authors introduce a re-parameterization of DeltaNet as a matrix-valued RNN whose recurrence is given by a generalized Householder transformation. This enables the use of WY representation which is memory efficient and eliminates the need to materialize the hidden state matrices. Experiments on synthetic benchmarks and language modeling tasks shows competitive performance compared to strong baselines (Mamba, GLA) and faster speed than the original Deltanet implementation.
Strengths: - The paper is well motivated and situated with respect to prior work. It provides sufficient background for linear transformers, demonstrates great scholarship in crediting prior work, and has a clear exposition of the proposed idea. In addition, it presents an informative overview that compares the formulations of recent linear transformers that highlights their differences.
- Proposes an efficient algorithm for training linear transformers with the delta update which is a competitive variant. The re-parameterization is non-obvious and leverages WY representation for Householder matrices in a novel way. Previously, this architecture could not be easily scaled to larger models and datasets with a recurrent formulation. In addition, it introduces two competitive hybrid methods based on DeltaNet that leverage local and global full attention.
- Demonstrates the effectiveness of the proposed approach on two synthetic benchmarks and eleven language modeling and understanding tasks compared to strong baselines such as Mamba and GLA. The results are consistent, have a good coverage, and are important for the researchers working on efficient transformers.
- The experiments are thorough and have convincing settings, namely all the variants are trained from scratch with the same configurations, there are ablations to justify the design choices, and the experimental reporting is very detailed.
Weaknesses: - W1. In terms of scale, the model explores two different architectures of increasing size up to 1.3B parameters. Even though this size is considerable, it is still relatively small compared to the LLMs that are widely used such as Llama, Mistral (7B+ size). There is always the question of whether the quality is maintained with further model increase.
- W2. The improved results compared to Mamba and GLA make use of additional architectural components: convolution and local/global attention, without them the results are comparable to the other models.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Q1: What is the effect of chunk size in the chunk-wise parallel algorithm for DeltaNet? Varying the chunk size $C$ and showing its effect in efficiency would be interesting to explore.
- Q2: The chunk-level hidden states $S_{[t]}$'s are discarded to save memory. From Eq. 7, it seems that their computation depends on the previous hidden states $S_{[t-1]}$'s. Are these kept in memory for the re-computation in the backward pass?
- Q3: GLA with convolution performs worse than w/o convolution with the larger model size. Do you expect this to be the case for DeltaNet as well? It would be good to add this result if possible.
Minor:
- In Table 2, is the L1/L2 norm referring to the normalization of queries and keys? Please specify.
- In Eq.1, why is this equation showing the state $S_{[t+1]}$ instead of $S_{[t]}$? The latter is used in Eq. 2. Same for Eq. 7.
- l172: stable -> stabilize
- l213-214: we -> we follow
- l321: vallina -> vanilla
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes, they have.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review.
## W1: larger scale experiments
As noted by the reviewer, it is difficult to conduct experiments at 1B+ scale. Nonetheless, we are currently running some larger-scale experiments at the 3B parameter scale. Here are some preliminary results:
| Model | # Tokens | wikitext PPL | arc-c | arc-e | boolq | hellaswag | openbookqa | piqa | sciq | winogrande | average |
|------------|--------|-------|-------|-------|-----------|------------|-------|-------|------------|-----------|--------------|
| 3B-Transformer++ | 167B | 15.39 | 29.78 | 62.96 | 63.06 | 46.15 | 27.8 | 72.96 | 91.1 | 60.62 | 56.80 |
| 3B-DeltaNet| 167B | 15.34 | 29.78 | 63.8 | 65.08 | 46.57 | 26.2 | 74.32 | 88.5 | 58.09 | 56.54 |
As we can see, the DeltaNet results are comparable to the Transformer++ results at this scale. We will make sure to include these results once they have finished training (we are aiming for 300B-1T tokens, depending on the availability of compute). We also plan to open-source the pretrained models so that researchers can study these models in more detail.
We have also run experiments on the recently-released [MAD benchmark](https://arxiv.org/abs/2403.17844), a suite of synthetic tasks designed to test the capabilities of language models beyond perplexity. Here are the results:
| Model | Compress | Fuzzy Recall | In-Context Recall | Memorize | Noisy Recall | Selective Copy | Average |
|---------------|----------|--------------|-------------------|----------|--------------|----------------|---------|
| Transformer | 51.6 | 29.8 | 94.1 | 85.2 | 86.8 | 99.6 | 74.5 |
| Hyena | 45.2 | 7.9 | 81.7 | 89.5 | 78.8 | 93.1 | 66.0 |
| Multihead Hyena | 44.8 | 14.4 | 99.0 | 89.4 | 98.6 | 93.0 | 73.2 |
| Mamba | 52.7 | 6.7 | 90.4 | 89.5 | 90.1 | 86.3 | 69.3 |
| GLA | 38.8 | 6.9 | 80.8 | 63.3 | 81.6 | 88.6 | 60.0 |
| DeltaNet | 42.2 | 35.7 | 100 | 52.8 | 100 | 100 | 71.8 |
## W2: The improved results compared to Mamba and GLA make use of additional architectural components: convolution and local/global attention, without them the results are comparable to the other models.
Thanks for the comment! Note that Mamba uses conv layers by default, while GLA does not. This is why we trained our own GLA+conv baselines. Our main experiments compare DeltaNet+conv against Mamba+conv and GLA+conv, i.e., we give all models the chance to use convolution layers for a fair and meaningful comparison. In this setting, DeltaNet outperforms both Mamba and GLA. Moreover, these depthwise separable "short convolution" layers are cheap both in terms of the number of parameters and compute, and hence we think that this convolution primitive is a practical approach to modeling local interactions.
## Q1 Effect of Chunk size C
The computational complexity of the WY representation is $O((N/C) \cdot C^3) = O(NC^2)$. If $C$ is too large, the WY representation computation will be very expensive (recall that the WY computation is fully recurrent).
If the chunk size is too small and we adopt the materialization version of FlashLinearAttention, we need to write more hidden states to HBM, resulting in a high I/O burden and thereby slowing down the actual running speed. If we adopt the non-materialization version of FlashLinearAttention, it will lack sequence-level parallelism, which is not desirable in large-scale training. Please refer to Section 3.3 of the GLA paper for more discussion.
To provide some intuition, we measured the running time (forward and backward pass) on a single A100 GPU while varying the chunk size \( C \):
| Chunk Size \( C \) | Forward + Backward Time (ms) |
|--------------------|------------------------------|
| 16 | 4.8738 |
| 32 | 3.6616 |
| 64 | 5.2382 |
| 128 | 16.0602 |
The experimental settings were as follows: sequence length \( \text{seq\_len} = 4096 \), batch size \( B = 2 \), head dimension \( \text{d\_head} = 128 \), model dimension \( \text{d\_model} = 2048 \), and number of heads \( \text{num\_head} = 16 \). As we can see, a moderate chunk size of 32 performs best. When the chunk size is less than 32, the I/O cost surpasses the WY representation cost, and vice versa.
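The tradeoff can be captured by a toy cost model (entirely our own illustration; the I/O constant `io_weight` is an arbitrary value chosen only so the toy picture matches the measurements above, not a measured quantity): per-sequence WY compute grows like $NC^2$ while per-sequence state I/O shrinks like $N/C$:

```python
def toy_cost(C, n_tokens=4096, io_weight=2 * 32**3):
    """Illustrative only: WY compute is O((N/C) * C^3) = O(N C^2);
    one chunk-level hidden state is written per chunk, so I/O is O(N / C)."""
    wy_compute = n_tokens * C ** 2
    state_io = io_weight * n_tokens / C
    return wy_compute + state_io

best = min([16, 32, 64, 128], key=toy_cost)  # minimized at a moderate chunk size
```

Under this toy model, the total cost is minimized at an intermediate chunk size, consistent with the measured optimum at $C = 32$.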
## Q2: Chunk state
Yes, this is correct! Concretely, we adopt the "non-materialization" version of FlashLinearAttention for the chunkwise DeltaNet implementation. In this version, the hidden states of each chunk are materialized on HBM in the forward pass, then discarded to save memory. During the backward pass, the hidden states of each chunk are recomputed.
## Q3: DeltaNet w/o shortconv in large scale
Thank you for the suggestion. We agree that this will be an interesting experiment to run. We are currently using all our resources for the 3B experiments, but we will run this ablation once the 3B experiments are done.
## Minor
Thanks for identifying the typos/errors! We will fix these.
---
Rebuttal 2:
Title: Response to authors
Comment: Thank you for answering my questions and providing additional results!
- The experiments with 3B model provide evidence that performance is comparable to that of a Transformer as model size increases to 3B. Given the compute constraints, my concern has been addressed adequately.
- Regarding the convolutional component, I am not challenging the fairness of the comparison. The potential issue is that DeltaNet without convolution is in several tasks worse than GLA without convolution. Any insight on why that is? For completeness, I'd suggest adding some discussion about it to emphasize that it's essential to the performance of DeltaNet and reporting the results of DeltaNet w/o convolution for the 1.3B model, which are currently missing.
- I found the results about the effect of chunk size insightful; it'd be good to include these results in the final version.
My questions and concerns have been answered adequately except the second bullet point above. Hence, I decided to keep my original scores.
---
Rebuttal 3:
Comment: Thanks for your feedback!
## The potential issue is that DeltaNet without convolution is in several tasks worse than GLA without convolution. Any insight on why that is?
Local information plays a critical role in NLP tasks. As highlighted in [3], incorporating more local features can significantly enhance the performance of linear attention. Many recent studies also demonstrate that additional local attention mechanisms greatly improve linear attention.
Short convolutions are one method to enhance local information, while the gating mechanism in GLA imposes a strong inductive bias towards local context. DeltaNet’s update rule, on its own, doesn’t prioritize local information, so without short convolutions, it may underperform in NLP tasks. However, when short convolutions are applied, DeltaNet’s ability to leverage local context is improved. Since GLA already incorporates gating mechanisms to enhance local information, short convolutions may not provide additional benefits.
Additionally, it’s important to note that both the delta rule and the vanilla linear attention update rules can be seen as "context-based addressing" mechanisms. [1] emphasizes the importance of "location-based addressing," and [2] also highlights that a drawback of linear attention is its "lack of precise local token shifts and comparisons." The gating mechanism in GLA addresses this by emphasizing nearby tokens, while short convolutions can be considered another form of location-based addressing.
- [1] https://arxiv.org/abs/1410.5401 Neural Turing Machines
- [2] https://arxiv.org/abs/2402.18668 Simple linear attention language models balance the recall-throughput tradeoff
- [3] https://arxiv.org/abs/2312.11135 Linear Attention via Orthogonal Memory
## For completeness, I'd suggest adding some discussion about it to emphasize that it's essential
Thank you for your suggestion! We will include this discussion in the next iteration of the paper draft to emphasize its importance. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Equivariant Machine Learning on Graphs with Nonlinear Spectral Filters | Accept (poster) | Summary: This paper addresses the issue of inadequate modeling of graph equivariance in existing spectral GNNs due to nonlinear operations. The authors investigate the concept of domain translation in graph space as functional translations, drawing from the convolutional operations defined on images. Based on a series of in-depth analyses, they propose nonlinear spectral filters (NLSFs) that are fully equivariant to graph functional shifts and demonstrate universal approximation capabilities.
Strengths: 1. The research problem is highly valuable, and the ideas presented are novel.
2. The theoretical analysis is rigorous, thoroughly supporting the arguments and solutions proposed in the paper. It reflects the authors' deep understanding and insight in the field.
3. Notable experimental improvements in classification tasks.
Weaknesses: 1. The paper is missing important spectral GNN models such as [1,2,3,4].
2. There is a lack of discussion on equivariant GNNs, such as [5,6,7,8,9,10,11,12]. While the focus is on spectral GNNs, it is also essential to discuss works in the spatial GNN domain, especially given that the paper’s title starts with "Equivariant Machine Learning on Graphs." Positioning your work within the broader GNN field can further elucidate the significance and unique advantages of your contributions.
3. The experiments are somewhat lacking in comprehensiveness. While you have demonstrated the effectiveness of your proposed model in common node and graph classification tasks, your core contribution is enhancing equivariant representation learning in spectral GNNs. I suggest adding experiments that specifically show how the improved performance is due to enhanced equivariant learning, such as graph isomorphism tests.
[1] How powerful are spectral graph neural networks
[2] Bernnet: Learning arbitrary graph spectral filters via bernstein approximation
[3] Specformer: Spectral graph neural networks meet transformers
[4] Graph neural networks with learnable and optimal polynomial bases
[5] Universal Invariant and Equivariant Graph Neural Networks
[6] E(n) Equivariant Graph Neural Networks
[7] On the Generalization of Equivariant Graph Neural Networks
[8] Expressive Power of Invariant and Equivariant Graph Neural Networks
[9] Approximately Equivariant Graph Networks
[10] Equivariant Polynomials for Graph Neural Networks
[11] Graph Neural Networks for Learning Equivariant Representations of Neural Networks
[12] Sign and Basis Invariant Networks for Spectral Graph Representation Learning
Technical Quality: 3
Clarity: 3
Questions for Authors: see weaknesses
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: yes, the authors discuss the limitations in Section 6
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful and constructive review.
>**Additional Comparison with Spectral GNNs:**
Thank you for the comment.
Following the reviewer's request, we included JacobiConv, BernNet, Specformer, and OptBasisGNN in our node classification experiment.
Moreover, motivated by other comments about heterophilic graphs, we extended NLSFs to include an additional filter capturing the band that contains the remaining non-leading eigenvectors, which we call rem-NLSFs. For Index NLSFs, we incorporate $\mathbf{I} - \sum_{j=1}^J \mathbf{P}_j$, and for Value NLSFs, we add $\mathbf{I} - \sum_{j=1}^K g_j(\mathbf{\Delta})$. This alleviates the loss of information due to projecting onto the low-frequency bands. All of our proofs extend trivially to rem-NLSFs. If the reviewer would like to see the new proofs, which are almost identical to the old ones, we can send an anonymous PDF file to the AC according to the regulations of NeurIPS.
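To illustrate the construction for Index NLSFs, here is a toy sketch on a path graph (our own example; all variable names are ours): the per-eigenvector band projectors plus the remainder filter partition the identity, so no spectral information is discarded:

```python
import numpy as np

n, J = 8, 2
# Laplacian of an unweighted path graph as a toy example
A = np.diag(np.ones(n - 1), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)  # eigenvalues in ascending order, orthonormal U

# one projector per leading (low-frequency) eigenvector
P = [np.outer(U[:, j], U[:, j]) for j in range(J)]
P_rem = np.eye(n) - sum(P)  # remainder filter: captures all the other bands
```

The projectors are idempotent and, together with the remainder $\mathbf{I} - \sum_j \mathbf{P}_j$, sum exactly to the identity, which is the sense in which the remainder filter recovers the information lost by projecting onto the leading bands alone.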
Table 2 of the global response PDF presents the node classification accuracy for these methods and rem-NLSFs. We see that rem-NLSFs outperform the other models on the Cora, Citeseer, and Chameleon datasets. For the Pubmed and Actor datasets, rem-NLSFs achieve the second-best results. Adding the orthogonal component alleviates the loss of information due to projecting to the low-frequency bands and, therefore, further improves the empirical results. The new experiments will be included in the camera-ready version if accepted.
>**Additional Discussion on Equivariant GNNs:**
Thank you for this feedback.
While Section 2 already shortly discusses the background of Equivariant GNNs, we add an extended review in the appendix to include more works:
In recent developments on equivariant GNN model design, Satorras et al. (2021) introduced a GNN model that is equivariant to rotations, translations, reflections, and permutations, and Huang et al. (2023) developed an approach for designing GNNs that respect approximate automorphism symmetries. Additionally, Lim et al. (2023) presented SignNet and BasisNet, which process eigenvectors invariantly to sign flips and changes of eigenspace basis. Different from these methods, Kofinas et al. (2024) proposed representing neural networks as computational graphs of parameters. In a more theoretically oriented work, Keriven et al. (2019) extended the Stone-Weierstrass theorem for the algebra of real-valued functions to the equivariant case, and Karczewski et al. (2024) established a generalization bound for E(n)-Equivariant GNNs. Azizian et al. (2021) explored the expressive power of message-passing GNNs, linear GNNs, and folklore GNNs, for both their invariant and equivariant versions. Additionally, Puny et al. (2023) introduced an alternative expressive power hierarchy to evaluate GNNs based on their ability to approximate equivariant graph polynomials.
We will also include a more detailed comparison to methods that consider graph automorphisms as symmetries, like Huang et al. (2023). Let us explain the main idea of the comparison. The goal is to illustrate that functional translations are more stable than hard symmetries of the graph, namely, graph automorphisms (isomorphisms from the graph to itself). Automorphisms are another analog to translations on general graphs, which competes with our notion of functional shifts. Consider as an example the standard 2D grid on the torus. The automorphisms of the grid include all translations. However, this notion of hard symmetry is very sensitive to graph perturbations. If one adds or deletes even one edge in the graph, the automorphism group becomes much smaller and no longer contains the translations. On the other hand, we claim that functional shifts are not so sensitive. To visualize this, we conducted a toy experiment illustrating that graph functional shifts are stable to graph perturbations. Specifically, we show that standard translations can be approximated by a graph functional shift on a perturbed grid (adding Gaussian noise to the edge weights), optimizing the coefficients of the functional shift so the operator is as close as possible to the classical shift. See Figure 2 in the PDF. We then train the NLSFs on a perturbed grid to do MNIST classification, following the setup in [1,2,3]. We compare the NLSF on the clean grid to the NLSF on the perturbed grid. Table 3 presents the performance of the NLSF, which is almost unaffected by the perturbation, demonstrating that NLSFs on the perturbed grid can roughly implement CNN-like architectures (translation-equivariant functions).
[1] Defferrard et al. (2016). Convolutional neural networks on graphs with fast localized spectral filtering.
[2] Monti et al. (2017). Geometric deep learning on graphs and manifolds using mixture model CNNs.
[3] Levie et al. (2018). Cayleynets: Graph convolutional neural networks with complex rational spectral filters.
>**Experiments Highlighting the Benefits of Equivariant Learning:**
Thanks for this comment.
It is important to note that the connection between graph isomorphism testing and GNNs is primarily theoretical. To the best of our knowledge, spectral GNNs are not commonly used for direct isomorphism testing in practical applications. We would appreciate it if the reviewer could point to any specific work they are referring to for further clarification.
Following the reviewer's suggestion, as explained above, we added a new toy example demonstrating how a standard translation can be estimated by a graph functional shift on a perturbed grid, which shows the robustness of functional shift symmetries. We then train an NLSF MNIST classifier, which illustrates that NLSFs can roughly implement CNN-like architectures, also on a perturbed grid.
Thank you again for your valuable feedback. We will include these examples in the camera-ready version if accepted.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed rebuttal. As all of my concerns have been addressed, I have raised my score.
---
Reply to Comment 1.1.1:
Comment: We warmly thank the reviewer for their comments and for acknowledging our rebuttal. | Summary: This paper proposes a spectral GNN called non-linear spectral filters (NLSF), which aims to enhance GNNs with nonlinear functions. Since general GNNs with nonlinear functions do not commute with unitary operators, this paper defines Graph Functional Shifts, which is a set of unitary matrices commuting with a normal graph shift operator (GSO). It then formulates two functions for spectral index and filter bank, respectively, and concatenates these two functions as graph attention. In the experiment section, NLSF is compared with GAT, SAGE, and other spectral-like GNNs. In the node-classification task, att-Node-level NLSF shows outstanding performance among these models. In the graph classification task, att-Graph-level NLSF achieves comparable results with these models, and att-Pooling-NLSF performs better than other models in the graph classification task.
Strengths: 1. NLSFs have a solid mathematical foundation and proof, especially on Universal Approximation and graph expressivity.
2. The experimental results validate the effectiveness of the theory.
Weaknesses: 1. The method proposed in this paper requires the use of eigenvalues, hence it necessitates eigen decomposition of the GSO. The time complexity of eigendecomposition is relatively high, especially for very large graphs.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comment.
Please note that in Section 4.1, we discussed the complexity of our method. We elaborated on the efficiency of the Lanczos algorithm, which is well-known for its computational efficiency. For estimating the leading $J$ eigenvectors, the Lanczos algorithm takes $O(JE)$ operations per iteration and converges very fast, where $E$ is the number of edges. Hence, the Lanczos algorithm is as efficient as a message-passing neural network that operates on features of dimension $J$.
Section 4.1 already discusses the efficiency of the Lanczos algorithm, but we will extend the text and make this clearer. We will clarify that the Lanczos algorithm (for sparse graphs) can be seen as a message-passing algorithm, as it only uses the nonzero entries of the matrix, which are interpreted as edges in our context of graph machine learning.
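As a hedged sketch of this point (not the paper's implementation), SciPy's `eigsh` exposes such a Lanczos-type iteration via ARPACK; each step costs one sparse matrix-vector product, i.e., one pass over the edges. The ring graph and the sizes below are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Hypothetical sparse graph: a ring on n nodes (illustration only).
n, J = 200, 8
rows = np.arange(n)
A = sp.coo_matrix((np.ones(n), (rows, (rows + 1) % n)), shape=(n, n))
A = (A + A.T).tocsr()
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A  # combinatorial Laplacian

# eigsh wraps a Lanczos-type iteration (ARPACK): each step multiplies L by a
# vector, touching only the nonzero entries -- O(E) work per iteration, which
# is the message-passing analogy made above. A shift sigma slightly below zero
# avoids factorizing the singular Laplacian while targeting the smallest
# (leading, low-frequency) eigenpairs.
w, U = eigsh(L.tocsc(), k=J, sigma=-1e-3)

print(w.shape, U.shape)  # (8,) (200, 8)
```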
Additionally, we reported the runtime analysis in Appendix C, where the average running time per epoch (ms) / average total running time (s) of our method is 18.22/4.5 for Cora, 18.51/4.4 for Citeseer, 20.23/6.1 for Pubmed, 28.51/17.1 for Chameleon, 25.56/5.1 for Squirrel, and 17.09/4.6 for Actor. In comparison, for example, the runtime for ChebNetII is 20.53/5.9 for Cora, 20.61/5.7 for Citeseer, 33.57/12.9 for Pubmed, 39.03/17.3 for Chameleon, 37.29/38.04 for Squirrel, and 40.05/9.1 for Actor. Our method is efficient and comparable with competing baselines.
We appreciate the reviewer's feedback and will make sure that these points are more explicitly detailed in the camera-ready versions if accepted. | Summary: The authors introduce spectral GNNs that are equivariant to functional symmetries. Specifically, they introduce node-level, graph-level and pooling non-linear spectral filters and show that these are able to outperform standard convolutional GNNs on (semi-supervised) node classification and graph classification tasks.
Strengths: - The experimental results are compelling.
- To the best of my understanding, the theory is sound
- The idea being proposed is novel and worth being investigated
- The paper is clearly written, even though it doesn't seem to be very accessible to readers unfamiliar with graphs signals processing
Weaknesses: - While the authors did a good job in trying to introduce all the relevant concepts, the paper is quite dense with mathematical details and notions that will likely be unfamiliar to many GNN researchers and may therefore hinder the accessibility of the manuscript.
Technical Quality: 4
Clarity: 3
Questions for Authors: - While it's very intuitive to understand what is meant by "shift" in the context of images and CNNs, this doesn't come across very clear in the paper in the context of graphs: what is the rationale behind the decision to "model the group of translations on graphs as the group of all unitary operators on signals that commute with the graph shift operator"? If more space in the paper can be used to make the underlying concepts more accessible (perhaps moving some of the material on the theoretical properties of the NLSFs to the appendix) I think the paper would greatly gain in accessibility, potentially increasing its impact beyond the graph signal processing community.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The limitations are clearly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and encouraging comments.
>**Enhancing Accessibility of Theoretical Contributions:**
Thank you for your comment.
We recognize the importance of making our manuscript accessible to a broader audience. To improve accessibility, we will provide more details on the mathematical concepts in the appendix. For instance, we will state and explain Schur’s lemma, extend the discussion on random geometric graphs, and elaborate on the background of the cut norm and Hoeffding's inequality.
>**Clarifying the Concept of Functional Translation on Graphs for Enhanced Accessibility:**
We will extend the text in Section 3.1 to further clarify the link between graph functional shifts and standard domain translations. We will start by showing that on the standard grid (on the 2D torus) with the central difference GSO, the standard Fourier basis is the eigenbasis of the Laplacian. We will then show that any domain shift on the grid, when applied to signals, is equivalent to modulation in the frequency domain. Hence, domain shifts commute with the Laplacian (both of them are diagonal matrices in the frequency domain). Functional shifts are defined by adopting this property as a definition. We call *any* unitary operator that commutes with the Laplacian a graph functional shift. For the grid, these are exactly the operators that multiply each frequency by a complex number of unit modulus.
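This definition is easy to check numerically. Below is a small sketch (the random weighted graph is a hypothetical example, not from the paper): any operator that is diagonal in the Laplacian's eigenbasis with unit-modulus diagonal entries is unitary and commutes with the Laplacian.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical GSO: Laplacian of a random weighted graph (illustration only).
W = rng.random((8, 8)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

# A graph functional shift: diagonal in the frequency domain, multiplying
# each frequency by an arbitrary complex number of unit modulus.
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=8))
S = U @ np.diag(phases) @ U.conj().T

print(np.allclose(S @ S.conj().T, np.eye(8)))  # True: S is unitary
print(np.allclose(S @ L, L @ S))               # True: S commutes with L
```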
Fig. 1 (in the global response PDF) presents a comparison of a classical and a functional translation of a Gaussian signal. The classical translation $f(t-T)$ is a specific case of functional translation, which is equivalent to modulation in the frequency domain $e^{-i\omega T}\hat{f}(\omega)$. Our example of a graph functional shift is an operator that modulates low frequencies at one rate and high frequencies at another rate. This is interpreted as shifting the low-frequency content and the high-frequency content of the signal at different speeds, illustrating that functional symmetries are richer than domain symmetries.
In addition, we add another experiment showing that functional translations are much more stable than hard symmetries of the graph, namely, graph automorphisms (isomorphisms from the graph to itself). Automorphisms are another analog to translations on general graphs, which competes with our notion of functional shifts. Consider again the standard 2D grid on the torus. The automorphisms of the grid include all translations. However, this notion of hard symmetry is very sensitive to graph perturbations. If one adds or deletes even one edge in the graph, the automorphism group becomes much smaller and no longer contains the translations. On the other hand, we propose a toy experiment that illustrates that graph functional shifts are stable to graph perturbations. Specifically, we show that standard translations can be approximated by a graph functional shift on a perturbed grid (adding Gaussian noise to the edge weights), optimizing the coefficients of the functional shift so the operator is as close as possible to the classical shift. See Figure 2 in the PDF. We then train the NLSFs on a perturbed grid to do MNIST classification, following the setup in [1,2,3]. We compare the NLSF on the clean grid to the NLSF on the perturbed grid. Table 3 shows that the performance of the NLSF is almost unaffected by the perturbation, which demonstrates that NLSFs on the perturbed grid can roughly implement CNN-like architectures (translation-equivariant functions).
These examples will be incorporated into the camera-ready version if accepted, including the discussion about automorphism symmetries vs functional shifts.
[1] Defferrard et al. (2016). Convolutional neural networks on graphs with fast localized spectral filtering.
[2] Monti et al. (2017). Geometric deep learning on graphs and manifolds using mixture model CNNs.
[3] Levie et al. (2018). Cayleynets: Graph convolutional neural networks with complex rational spectral filters.
---
Rebuttal Comment 1.1:
Comment: I appreciate the clarifications and support the proposed updates in the camera ready version of the paper.
---
Reply to Comment 1.1.1:
Comment: We warmly thank the reviewer for their comments and for acknowledging our rebuttal. | Summary: The authors propose nonlinear spectral filters (NLSFs) that achieve full equivariance to graph functional shifts, demonstrating that these filters have universal approximation properties. These NLSFs are designed based on transferable spectral domain, potentially improving GNN performance in node and graph classification tasks across diverse graph structures.
Strengths: 1- The paper is well-written and self-contained, offering clear, didactic insights. The experiments provide valuable conclusions that future practitioners will find useful. However, a synthesis of the information could further enhance readability and understanding for the reader.
2- The use of the nonlinear spectral filters for graphs to achieve full equivariance to graph functional shifts may be a promising avenue to explore.
Weaknesses: Despite these merits, I have the following concerns about the paper.
1- While the paper presents a compelling method with potential applications in graph analysis, one significant limitation is its scalability, particularly concerning large-scale graphs. The reliance on specific spectral properties, such as the leading eigenvectors, may not only limit the method's capacity to capture diverse graph dynamics but also result in computational inefficiencies when applied to extensive graph datasets.
2- The datasets used in the paper predominantly consist of mid-sized, homophilic graphs, which may not fully represent the diverse range of real-world applications, particularly in contexts involving heterophilic graphs.
3- The efficiency of the proposed models in terms of computation and resource utilization is not adequately discussed.
Technical Quality: 3
Clarity: 3
Questions for Authors: (i) Does your theory adapt differently when applied to heterophilous graphs compared to homophilous graphs, and if so, how are these differences addressed within your methodology?
(ii) Given that your Nonlinear Spectral Filters (NLSFs) are motivated by respecting graph functional shift symmetries, similar to Euclidean CNNs, do you have any claims or observations regarding how NLSFs fit within or potentially extend the Weisfeiler-Lehman hierarchy of expressivity? Additionally, could you elaborate on how the expressivity of NLSFs, as informed by metrics from the Euclidean vector space, compares to traditional graph neural network models?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Also, the efficiency of the proposed models in terms of computation and resource utilization is not adequately addressed. For practical applications, especially in resource-constrained environments, understanding the computational overhead is essential.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and questions.
>**Complexity, Efficiency, and Runtime Analysis:**
In Section 4.1, we discussed our method's complexity and efficiency, highlighting the efficiency of the Lanczos algorithm. This algorithm estimates the leading $J$ eigenvectors in $O(JE)$ operations per iteration and converges quickly, making it as efficient as a message-passing neural network with $J$-dimensional features.
We will extend Section 4.1 to clarify that the Lanczos algorithm can be viewed as a message-passing algorithm for sparse graphs, using only the matrix's nonzero entries, interpreted as edges in graph machine learning. In addition, we reported the runtime analysis in Appendix C, where the average running time per epoch (ms) / average total running time (s) of our method is 18.22/4.5 for Cora, 18.51/4.4 for Citeseer, 20.23/6.1 for Pubmed, 28.51/17.1 for Chameleon, 25.56/5.1 for Squirrel, and 17.09/4.6 for Actor. In comparison, for example, the runtime for ChebNetII is 20.53/5.9 for Cora, 20.61/5.7 for Citeseer, 33.57/12.9 for Pubmed, 39.03/17.3 for Chameleon, 37.29/38.04 for Squirrel, and 40.05/9.1 for Actor. Our method is efficient and comparable with competing baselines. Furthermore, we will include the space overhead (MB) of our method: 79 for Cora, 163 for Citeseer, 2054 for Pubmed, 85 for Chameleon, 263 for Squirrel, and 319 for Actor. In comparison, the space overhead for ChebNetII is 67 for Cora, 159 for Citeseer, 1821 for Pubmed, 79 for Chameleon, 259 for Squirrel, and 316 for Actor.
We also explain below how we addressed the limitation of computing only the leading eigenvectors. We slightly extended the NLSF method (the extension is called rem-NLSF), which now captures all of the frequency content of the graph. See the details below.
>**Theoretical Analysis on Heterophilic Graphs and Large-Scale Datasets:**
Thank you for your comment.
We would like to clarify that in our first submission, we used three heterophilic graphs (Chameleon, Squirrel, and Actor) among the six datasets for the node classification task. We will emphasize this point more clearly in our revised version.
Motivated by the reviewer's question, we extend our approach to include an additional filter in both Index and Value NLSFs. For Index NLSFs, we incorporate the $(J+1)$-th filter as $\mathbf{I} - \sum_ {j=1}^J \mathbf{P}_ j$, and for Value NLSFs, we add the $(K+1)$-th filter $\mathbf{I} - \sum_ {j=1}^K g_ j(\mathbf{\Delta})$. This alleviates the loss of information due to projecting to the low-frequency bands. Now, the full spectral range of the signal is captured by the NLSF.
We denote these NLSFs with orthogonal complement as **rem-NLSFs**. This setup is particularly relevant for heterophilous graphs, which require high-frequency components to accurately represent the labels. Table 2 presents the classification performance, and we see that rem-NLSFs improve performance for heterophilic graphs. We will motivate this construction in our paper from the perspective of heterogeneous components of the target.
The theory, and all of our proofs, trivially extend to rem-NLSF. If the reviewer would like to see the new version of the proofs, which are almost identical to the old proofs, we would be more than happy to send an anonymous PDF file to the AC according to NeurIPS regulations.
In addition, following the reviewer's questions, we conduct additional tests on five large heterophilic graphs: Penn94, Pokec, Genius, Twitch-Gamers, and Wiki datasets from Lim et al. (2021). The experimental setup is in line with previous work by Chien et al. (2021), Lim et al. (2021), and He et al. (2022). We use the same setup for our NLSFs as reported in Appendix B. Table 1 (in the global response PDF) presents the classification accuracy. We see that rem-NLSFs outperform the competing methods on the Penn94, Pokec, Genius, and Wiki datasets. For the Twitch-Gamers dataset, rem-NLSFs yield the second-best results. Our additional experiments show that our method could indeed scale to handle large-scale graphs effectively.
Thank you again for the valuable comment.
>**Expressivity:**
We note that graph-level NLSFs are bounded by the expressive power of MPNNs with random positional encodings. For example, in [1], the authors showed that random features improve GNN expressivity, distinguishing certain structures that deterministic GNNs cannot. The Lanczos algorithm for computing the leading $J$ eigenvectors can be viewed as a message-passing algorithm with a randomly initialized $J$-channel signal.
In Section 4.3, we discuss the expressivity in the context of a single fixed graph with variable signals. This setting is unrelated to the traditional graph isomorphism test hierarchy of expressivity.
At the graph level, our example in Appendix A.2.4 can be used to show that graph-level NLSFs (without synthesis and pooling) are not more expressive than spectral GNNs. We also have an approach for showing that spectral GNNs are more expressive than graph-level NLSFs (without synthesis and pooling), but it requires very high-dimensional features and unstable filters. If the reviewer would like to see the analysis, we can write a comment in OpenReview or send an anonymous PDF to the AC according to the NeurIPS regulations.
Regarding pooling-NLSFs, we do not have an answer to whether standard spectral GNNs are more powerful than pooling-NLSFs with unlimited resources. A more practically useful question is comparing their expressivity within the same parameter budget. This is a theoretically challenging question. However, in practice, NLSFs typically perform better than standard spectral GNNs with equal budgets, or equivalent architecture. This comparison is also valid for graph-level NLSF (without synthesis and pooling).
We will add a discussion on these points in the camera-ready version if accepted.
[1] Sato et al. (2021). Random features strengthen graph neural networks.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their extensive rebuttal which has answered some of my concerns. After reading the rebuttal and other reviewers' comments, I decided to keep my current rating.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their comments and for acknowledging our rebuttal. We would be happy to address any remaining concerns if the reviewer has any. | Rebuttal 1:
Rebuttal: # General Response to All the Reviewers
We thank the reviewers for their valuable input and criticism. We highlight the main revisions to the paper below.
>**Enhancing NLSFs with Orthogonal Complements:**
Our original method projected the signal's information to the leading (low) frequencies. This projection can lose important information, especially in the context of heterophilic graphs, which require high frequencies to represent the target/label. To mitigate this limitation, we slightly extended NLSFs to include an additional high-pass filter, denoting the new method **rem-NLSFs**. For Index NLSFs we add $\mathbf{I} - \sum_ {j=1}^J \mathbf{P}_ j$, and for Value NLSFs we add $\mathbf{I} - \sum_ {j=1}^K g_ j(\mathbf{\Delta})$. Now, the full spectral range of the signal is captured by the NLSF.
Table 2 presents classification performance on benchmark graphs, where rem-NLSFs improve performance. We will motivate this construction in our paper from the perspective of heterophilic graphs.
The theory, and all of our proofs, trivially extend to rem-NLSF. If the reviewers would like to see the new version of the proofs, which are almost identical to the old proofs, we would be happy to send an anonymous PDF file to the AC according to the NeurIPS regulations.
>**Illustrating Functional Translations:**
Some reviewers asked for an additional discussion and illustrations focused on helping the reader to better grasp the new notions of symmetry.
We will start by showing that on the standard grid (on the 2D torus) with the central difference GSO, the standard Fourier basis is the eigenbasis of the Laplacian. We will then show that any domain shift on the grid, when applied to signals, is equivalent to modulation in the frequency domain. Hence, domain shifts commute with the Laplacian (both of them are diagonal matrices in the frequency domain). Functional shifts are defined by adopting this property as a definition. We call *any* unitary operator that commutes with the Laplacian a graph functional shift. For the grid, these are exactly the operators that multiply each frequency by a complex number of unit modulus.
Fig. 1 (in the global response PDF) presents a comparison of a classical and a functional translation of a Gaussian signal. While a classical translation shifts the whole signal uniformly, functional translations can shift different frequency bands at different speeds. This illustrates that functional symmetries are richer than domain symmetries.
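To make the "different speeds" picture concrete, here is a 1D NumPy sketch of such a functional translation (the signal length, band cutoff, and the two shift amounts are illustrative assumptions, not taken from the PDF):

```python
import numpy as np

N = 256
n = np.arange(N)
x = np.exp(-0.5 * ((n - N // 2) / 8.0) ** 2)  # Gaussian bump centered at 128

# Shift the low-frequency band by d_low nodes and all other frequencies by
# d_high nodes: a functional translation moving bands at different speeds.
X = np.fft.fft(x)
k = np.fft.fftfreq(N) * N                     # signed integer frequencies
d_low, d_high, cutoff = 40, 120, 10
d = np.where(np.abs(k) <= cutoff, d_low, d_high)
y = np.fft.ifft(X * np.exp(-2j * np.pi * k * d / N)).real

# y is no longer a single translated bump: its low- and high-frequency
# content have moved to different positions.
print(np.argmax(x), np.argmax(y))  # the low-frequency bulk lands near 128 + 40
```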
In addition, we add another experiment showing that functional translations are much more stable than hard symmetries of the graph, namely, graph automorphisms (isomorphisms from the graph to itself). Automorphisms are another analog to translations on general graphs, which competes with our notion of functional shifts. The relevant experiments are in Fig. 2 and Table 3 in the PDF. See the response to Reviewer 6hk5 for more details.
These examples will be incorporated into the camera-ready version if accepted.
>**Expressivity in the Context of WL:**
We will add a discussion about the relation between our notion of expressivity and the traditional WL hierarchy. See the rebuttal of Reviewer fGo4 for more details.
>**Efficiency of Eigendecomposition:**
Some reviewers were concerned about the efficiency of eigendecomposition. We would like to point out that computing the leading eigenvectors of sparse graphs is as efficient as message-passing networks. In Section 4.1, we discussed our method's complexity and efficiency. The Lanczos algorithm estimates the leading $J$ eigenvectors in $O(JE)$ operations per iteration and converges quickly, making it as efficient as a message-passing neural network with $J$-dimensional features.
>**Scalability to Large-Scale Datasets:**
To demonstrate the scalability of our method, we conduct additional tests on five large heterophilic graphs: Penn94, Pokec, Genius, Twitch-Gamers, and Wiki datasets from Lim et al. (2021). The experimental setup is in line with previous work by Chien et al. (2021), Lim et al. (2021), and He et al. (2022). We use the same setup for our NLSFs as reported in Appendix B. Table 1 (in the global response PDF) presents the classification accuracy. We see that rem-NLSFs outperform the competing methods on the Penn94, Pokec, Genius, and Wiki datasets. For the Twitch-Gamers dataset, rem-NLSFs yield the second-best results. Our additional experiments show that our method could indeed scale to handle large-scale graphs effectively. We will include these experiments in the camera-ready version if accepted.
>**Additional Comparison with Spectral GNNs:**
In response to the feedback about missing important spectral GNNs, we included JacobiConv, BernNet, Specformer, and OptBasisGNN in our node classification experiment following the reviewer's request.
Table 2 of the global response PDF presents the node classification accuracy for these methods and rem-NLSFs. We see that rem-NLSFs outperform the other models on the Cora, Citeseer, and Chameleon datasets. For the Pubmed and Actor datasets, rem-NLSFs achieve the second-best results. Adding the orthogonal component alleviates the loss of information due to projecting to the low-frequency bands and, therefore, further improves the empirical results. The new experiments will be included in the camera-ready version if accepted.
Pdf: /pdf/a683a944da22925cf7e3b3cc94eea521e8dbf616.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper tackles the task of Network design for graph neural networks. The suggested approach is based on spectral properties of graphs. So far in the literature spectral methods were limited in assuming that the graph domain is fixed. To address this, a relaxed version of symmetry is proposed based on band-limited projections. In addition, a nonlinear spectral filter design is suggested, suggesting node-level, graph-level, and pooling operations. The method is evaluated on several graph learning tasks, demonstrating improvement in generalization over existing spectral methods.
Strengths: The paper makes a valuable contribution to the literature on Graph Neural Networks (GNNs), particularly by addressing the challenge of transferability in spectral methods, which is highlighted as a significant issue.
claims are supported by theoretical analysis.
The paper is self-contained, providing both background information and a short overview on spectral graph learning.
Weaknesses: Writing Quality: Some sections of the manuscript could benefit from revision. For instance, reordering the paragraphs in the introduction could improve readability. Specifically, mentioning what was missing from previous works earlier rather than at the end would help.
More examples can be found in the method section: i) The discussion on the problem with activation functions is missing some details, e.g., what exactly is rho? ii) The paper states that "It is important to note that functional shifts, in general, are not induced from node permutations. Instead, functional shifts are related to the notion of functional maps...". This sentence is too vague. Consider adding more details to make it clearer.
No qualitative results are provided. Is it possible to visualize learned features as in the illustration in figure 2? Is it possible to design a toy experiment showcasing the suggested notion of relaxed symmetry, for which the suggested network design generalizes adequately?
Technical Quality: 3
Clarity: 2
Questions for Authors: No question other than the weaknesses stated above.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback.
>**Enhancing Writing Quality for Improved Readability and Clarity:**
Thank you for your valuable feedback.
In response to the reviewer’s suggestions, we will make the following revisions:
- Introduction Section: We will reorder the paragraphs in the introduction. Specifically, we will highlight the research gap earlier in the section.
- Method Section:
- We will emphasize that $\rho$ is any non-linear activation function (such as ReLU, Sigmoid, etc.). We will include the following explanation of why non-linear activation functions break the functional symmetry. We offer a construction for complex-valued signals, which is easy to extend to the real case. Consider a 1D grid with $100$ nodes and the central difference Laplacian. Here, the Laplacian eigenvectors are the standard Fourier modes. In this case, one can see that a graph functional shift is any operator that is diagonal in the frequency domain and multiplies each frequency by a complex number of unit norm. Consider the nonlinearity that takes the real part of the signal and then applies ReLU, which we denote in short by ReLU. Consider a graph signal $x=(e^{i\pi 10n/100} + e^{i\pi 20n/100})_{n=0}^{99}$. We consider a graph functional shift $S$ that shifts frequencies 10 and 20 by a distance of 5 and leaves every other frequency unshifted. Namely, frequency 10 is multiplied by $e^{-i\pi 50/100}=-i$, frequency 20 by $e^{-i\pi 100/100}=-1$, and every other frequency is multiplied by 1. Consider also the classical shift $D$ that translates the whole signal by $5$ uniformly. Since $x$ consists only of the frequencies 10 and 20, it is easy to see that $Sx=Dx$. Hence, $ReLU(Sx)=ReLU(Dx)=D(ReLU(x))$. Conversely, if we first apply $ReLU$ and only then shift, note that $ReLU(x)$ consists of many frequencies in addition to 10 and 20. For example, by the nonnegativity of ReLU, $ReLU(x)$ has a nonzero DC component (zeroth frequency). Now, $S(ReLU(x))$ only translates the 10 and 20 frequencies, so we have $S(ReLU(x))\neq D(ReLU(x))=ReLU(S(x))$.
We will add this example to an appendix.
- We will revise the sentence, "Instead, functional shifts are related to the notion of functional maps..." into "Instead, functional shifts are general unitary operators that are not, in general, permutation matrices. The value of the functionally translated signal at a given node can be a *mixture* of the content of the original signal at many different nodes. For example, a functional shift can be a combination of shifts of different frequencies at different speeds."
- We will also explain in detail why spectral filters commute with graph functional shifts. This is easily seen from the fact that both spectral filters and functional shifts are diagonal operators in the graph Fourier domain.
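This commutation is straightforward to check numerically. The following sketch (our own illustration on a ring graph, with an arbitrary filter $h(\lambda)=e^{-\lambda}$ and sign flips as one simple family of unitary operators that are diagonal in the eigenbasis) confirms it:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20
# Laplacian of a ring graph with N nodes
A = np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0)
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)

H = U @ np.diag(np.exp(-lam)) @ U.T               # spectral filter h(L) = exp(-L)
S = U @ np.diag(rng.choice([-1.0, 1.0], N)) @ U.T  # a real functional shift: unitary and
                                                   # diagonal in the Laplacian eigenbasis
# Both operators are diagonal in the same eigenbasis, so they commute
assert np.allclose(H @ S, S @ H)
assert np.allclose(S @ S.T, np.eye(N))             # S is indeed unitary (orthogonal)
```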
>**Visualizing Learned Features and Demonstrating Relaxed Symmetry:**
Visualizing the learned filter directly is not feasible due to its high-dimensional nature: the filter is a general MLP, which makes it challenging to visualize in a meaningful way (as is typically the case in deep learning). However, to illustrate the concepts of relaxed symmetry and functional translation, we will add a new toy example. Similarly to the above construction, one example of a functional translation on the grid that is not a classical translation is an operator that translates different frequency bands at different speeds. In the example, we compare the translation of a Gaussian signal on a 2D grid by a classical uniform translation and by a functional translation that moves different frequencies at different speeds. See the experiments in the rebuttal PDF.
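Such a band-wise translation can be sketched in a few lines (our own illustration, not the rebuttal PDF experiment itself; the grid size, Gaussian width, and frequency cutoff are arbitrary choices):

```python
import numpy as np

N = 64
yy, xx = np.mgrid[0:N, 0:N]
# A Gaussian bump on the periodic 2D grid
g = np.exp(-(((xx - N // 2) ** 2 + (yy - N // 2) ** 2) / (2 * 2.0 ** 2)))

kx = np.fft.fftfreq(N)[None, :]                # frequencies in cycles per sample
ky = np.fft.fftfreq(N)[:, None]
low = (np.abs(kx) < 0.1) & (np.abs(ky) < 0.1)  # an arbitrary low-frequency band

def band_translate(img, shift_low, shift_high):
    """Translate the low band by shift_low and the rest by shift_high (in pixels).
    The operator is diagonal in the Fourier domain, hence a functional shift,
    but it is a classical translation only when the two shifts coincide."""
    F = np.fft.fft2(img)
    phase = lambda s: np.exp(-2j * np.pi * (ky * s[0] + kx * s[1]))
    return np.fft.ifft2(np.where(low, F * phase(shift_low), F * phase(shift_high))).real

uniform = band_translate(g, (5, 5), (5, 5))    # equal speeds: the classical translation
mixed = band_translate(g, (5, 5), (12, 12))    # different speeds: functional, not classical
assert np.allclose(uniform, np.roll(g, (5, 5), axis=(0, 1)))
assert not np.allclose(mixed, uniform)
```

When the two speeds agree, the operator reduces exactly to `np.roll`; when they differ, the low- and high-frequency content of the bump visibly separates.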
In addition, we add another experiment showing that functional translations are much more stable than hard symmetries of the graph, namely, graph automorphisms (isomorphisms from the graph to itself). Automorphisms are another analogue of translations on general graphs, which competes with our notion of functional shifts. Consider again the standard 2D grid on the torus. The automorphisms of the grid include all translations. However, this notion of hard symmetry is very sensitive to graph perturbations. If one adds or deletes even one edge in the graph, the automorphism group becomes much smaller and no longer contains the translations. On the other hand, we propose a toy experiment that illustrates that graph functional shifts are stable to graph perturbations. Specifically, we show that standard translations can be approximated by a graph functional shift on a perturbed grid (adding Gaussian noise to the edge weights), optimizing the coefficients of the functional shift so that the operator is as close as possible to the classical shift. See Figure 2 in the PDF. We then train the NLSFs on a perturbed grid to do MNIST classification, following the setup in [1,2,3]. We compare the NLSF on the clean grid to the NLSF on the perturbed grid. Table 3 shows that the performance of the NLSF is almost unaffected by the perturbation, which demonstrates that NLSFs on the perturbed grid can roughly implement CNN-like architectures (translation-equivariant functions).
These examples will be incorporated into the camera-ready version if accepted, including the discussion about automorphism symmetries vs functional shifts.
[1] Defferrard et al. (2016). Convolutional neural networks on graphs with fast localized spectral filtering.
[2] Monti et al. (2017). Geometric deep learning on graphs and manifolds using mixture model CNNs.
[3] Levie et al. (2018). Cayleynets: Graph convolutional neural networks with complex rational spectral filters.
---
Rebuttal Comment 1.1:
Title: reply to authors
Comment: I appreciate the author’s rebuttal and believe that their suggestions would greatly benefit the paper. I have no further concerns.
---
Reply to Comment 1.1.1:
Comment: We warmly thank the reviewer for their comments and for acknowledging our rebuttal. | null | null | null | null | null | null |
ChronoEpilogi: Scalable Time Series Selection with Multiple Solutions | Accept (poster) | Summary: The authors consider the problem of feature selection when forecasting multivariate time series. They propose a novel algorithm called ChronoEpilogi based on identifying a Markov boundary of the time series variables. They experimentally and theoretically validate the findings.
Strengths: 1. A significant problem to tackle,
2. Good formalization of the problem,
3. The paper is generally well-written.
Weaknesses: 1. Not all limitations are discussed: for instance, the model assumes that the selected set of variables remains fixed over time. When we deploy time series models, it is important that the method work with different train and test time segments. However, since the set of variables is selected once, it might not, in general, be applicable to other train/test splits and thus might cause issues in real-world use.
2. In my view, the experiments could be more comprehensive. It would be beneficial to consider other forecasting models, baselines, and datasets to provide a more robust evaluation of the model's performance.
3. Prior work discussion is incomplete: for instance, signature paths and feature selection should be discussed and compared to the method. Some specific examples include
- Cross-correlation analysis,
- Signature transforms https://arxiv.org/abs/1603.03788
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The abstract states, “Identifying these subsets leads to gaining insights, domain intuition, and a better understanding of the data-generating mechanism.” Is this claim supported by the experiments or otherwise in the main text?
2. I have the same question about “identifying all such minimal-size, optimally predictive subsets is necessary for knowledge discovery and important to avoid misleading a practitioner.” I guess feature selection helps with interpretability, but I think the main text does not discuss it in detail.
3. Please comment on weakness #1, which I have listed above.
## Additional comments:
I am really surprised that the primary area is interpretability and explainability, even though the paper almost does not discuss these aspects of the solution.
What is V in tvs?
“under Composition, Interchangeability, and other broad, non-parametric assumptions” – could you list all the assumptions here?
L102 : conditional independence is not written well
L205 RVS is not properly capitalized
Eq 2: what does the dot mean? Is it a misprint?
Table 2:
What do you mean by size?
No selection yields worse results. I wonder, what happens if you use a stronger base model for forecasting, e.g., LSTM?
Re: Statistical significance:
The paper does not report statistical significance in Table 2.
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: In weaknesses, I’ve already commented that some crucial assumptions have not been discussed. These assumptions are critical to the framework, in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you very much for all the constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. We hope this response addresses your concerns effectively!
**Q1**
We argue that finding one MB primarily serves to build a surrogate model with improved performance. On the other hand, finding multiple MBs is essential for domain discovery and helps practitioners understand the underlying data generation mechanism used to train a model. This is about Data Explainability rather than Model Explainability and our experiments with real datasets (see Figure 1 and Table 2) provide evidence that it is an issue of practical concern. To the best of our knowledge, our submission is the first work that reports multiple MBs on popular datasets used for MTS forecasting problems.
**Q2**
The main text contains a quantitative evaluation of explanations using Markov boundaries (MBs). We show that the ChronoEpilogi algorithm reduces the number of time series to 5% of the original size, with improved performance compared to no selection (by 0.02 to 0.08 R2) (see Table 2 in the paper). Multiple signatures (a.k.a. explanations) provide a complete set of explanations. The MBs contain the direct causes of the outcome and facilitate the interpretation of the data generation mechanism. A qualitative and subjective evaluation of the benefits for interpretation to a human user would require a user study and is out of the scope of this paper.
We highlight in the paper's conclusions potential uses of the multiple solutions computed by ChronoEpilogi, e.g., constructing an ensemble model trained on several or all MBs as a means to create forecasters more robust to noise, faulty sensors, or other systematic errors.
**W1**
We agree that data stationarity (with a unique distribution over the entire dataset) is a limitation of our current work, as in most causal discovery algorithms (https://doi.org/10.1613/jair.1.13428). There exist indeed many scenarios where such an assumption is broken, including datasets with trends, datasets where train and test sets are both stationary but do not come from the same distribution, or datasets with change points. This is an open problem in the causal discovery literature that deserves further research, first to identify realistic evolution patterns of Markov Boundaries, in contrast to the strong hypotheses of a few previous works. Additionally, all experiments follow the Forward Chaining Cross-Validation protocol. We will add these clarifications to the Camera-Ready version of the paper.
**W2**
Regarding competitors, algorithms for multiple MBs discovery assume iid data and do not provide open source code for MTS that we could include in our experiments. Their implementation is challenging as time series require specific precautions in terms of estimators since data is not iid. Moreover, the only algorithms with theoretical guarantees for multiple MB discovery are TIE* and KIAMB which require an exponential runtime and could not run for the high-dimensional TS datasets of our experimental evaluation. See also our global rebuttal answer.
**W3**
We thank you for pointing out the Signature transforms work. However, we cannot see how we could leverage Signature Transforms (a feature extraction method) to perform feature selection with multiple solutions in multivariate time series. More suggestions and pointers would be valuable and could allow us to add more discussion and material in the new paper version. We would be very willing to understand how this ML technique can be relevant to the discovery of multiple equivalent solutions (subsets of features, i.e., multiple MBs) for time series forecasting.
**AC1**
ChronoEpilogi is the first algorithm for Time-series Variable Selection with Multiple Solutions that opens new perspectives in Data Explainability compared to Model Explainability, which is heavily studied in the xAI literature. We argue that existing feature importance methods like SHAP are challenged by the presence of multiple MBs in the datasets. This is a new claim that we are planning to support with additional experiments that will appear in the camera-ready version. It is known that Shapley values currently suffer from the inclusion of unrealistic data instances when features are correlated, even for linear models. We are currently investigating how SHAP explanations of regression models might misrepresent the role of variables belonging to some but not all Markov boundaries of the modeled target. We claim that the SHAP importance score of each equivalent set of variables is distributed among equivalent variables, hence leading to underestimation of an equivalent set's importance when its variables are considered individually. This attribution is unstable: on different data splits, different variables among an equivalence set obtain high importance. See global review. To the best of our knowledge, these results have not been reported in the existing xAI literature.
**AC7**
For each MTS, we obtain the size of the solutions found by GroupLasso and ChronoEpilogi (all solutions produced by ChronoEpilogi are the same size) by computing their mean and standard deviation.
**AC8**
We evaluated ChronoEpilogi and its competitors with experiments conducted over three real TS datasets. For each one, we test 10 different targets. We are currently experimenting with additional real datasets fitting multiple Markov Boundaries (MBs) that allow us to reliably compute a critical difference diagram of the statistically significant ranking of GroupLasso and ChronoEpilogi performance using the post-hoc Nemenyi test. In the new TS datasets, results are similar to the ones shown so far and strengthen our claims.
The typos have been corrected.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response!
The response clarifies my concern. I think this is a very interesting aspect of your work about data explainability. In my opinion, viewing this work from this perspective would make it a really valuable contribution.
Currently, my concern is that adding such a massive claim would make the paper significantly different from the version under review. Could you briefly list all the changes you plan to incorporate?
---
Rebuttal 2:
Title: Additional Contributions
Comment: We thank you for your positive feedback on our answer during the rebuttal.
We will definitely further detail domain discovery in the light of Data Explainability in the paper introduction, add new claims regarding the limitations of Feature Importance methods in the presence of redundant features, and include in the main paper and the appendix the experiments that support them.
We should stress that this is a fourth contribution of our work that could not have been made without the previous three:
(a) ChronoEpilogi algorithm variants
(b) Theoretical guarantees of the algorithm
(c) quantitative evaluation of ChronoEpilogi results with synthetic and real datasets.
More precisely, we will make the following new claims, based on the SHAP value experiments we have included in the rebuttal:
Claim 7: The importance score of a variable with equivalences decreases by half in our experiments as the number of equivalent variables grows.
Claim 8: The top |MB(T)| predictors in explanations are significantly more unstable to data changes in the presence of multiple Markov Boundaries compared to distributions with only one MB(T) (p-value 4e-13).
The above claims can be validated from the one PDF page we added in our global answer. The experimentation protocol (model libraries, SHAP explainer implementation, tuning, instability metric) we used will also be detailed.
We believe that, within the additional page of the camera-ready version, we will be able to include the main results of our new experiments in the paper besides the appendix.
---
Rebuttal Comment 2.1:
Comment: Thank you for responding! Your clarifications helped me much better understand the contribution. I will raise my score to 5.
---
Reply to Comment 2.1.1:
Comment: Dear reviewer, thank you very much for the time you have spent on our paper and for the constructive feedback. We believe this work will be valuable for researchers in our domain. | Summary: The authors propose a scalable algorithm called ChronoEpilogi that aims to select multiple subsets (Markov Boundaries) of time series (TS) features in order to better understand the underlying data generation process and to provide better explanations of downstream forecasting tasks. Through extensive experiments, the authors show that time series forecasting models perform better when fed with these subsets of TS features (individually) than when fed with all TS features.
Strengths: - Originality: Although the problem addressed in this paper is not new and the proposed solution is based on the combination of existing methods/models, the originality lies in the fact that, unlike previous work, the proposed algorithms provide a compact representation of mutually equivalent variables for multivariate time series (MTS). In addition, the redundant and irreplaceable variables in MTS can be relatively easy to identify.
- Quality: The document is well structured and the assertions are fairly well supported.
- Clarity: Overall, the document is well written and pleasant to read. However, the reviewer suggests improving definition 2. What does V stand for?
- Significance: The reviewer believes the proposed algorithm can be used as an alternative solution to provide explainability in diverse and sensitive fields such as medicine and autonomous vehicles. Indeed, in these fields, when it comes to forecasting tasks, identifying the time series features that influence the decision is as important as model accuracy.
Weaknesses: - The experiment is not conducted with multivariate time series that have a high rate of missing values. This is a crucial aspect that the study should have taken into account, as missing values are inherent in time series and may affect causal inference;
- The authors simply identify subsets of time series variables without providing concrete explanations that could have strengthened their claim. For example, the authors should have taken a few subsets (Markov Boundaries) in any task and shown how they are actually relevant to the target;
- The reviewer understands the usefulness of greedy heuristics to speed up the algorithm. However, this heuristic has the disadvantage of providing suboptimal results. Although the authors demonstrate its effectiveness, it would be interesting in future work to test it on additional datasets covering different domains.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why did you choose ARDL over other models like RNNs which are also autoregressive models and are certainly more efficient (Equation 2)?
- Is the model redesigned at each iteration of the Forward phase, or does it remain unchanged? The reviewer asks this question because the size of the inputs $\textbf{S}$ may vary at each iteration (see Algorithm 3 line 8)?
- What can explain the outperformance of SVR over TFT and DeepAR with the Solar Dataset?
- How do the authors expect their algorithm to perform when faced with sparse multivariate time series, given that missing value can affect causal inference?
- The reviewer does not understand the relevance of reporting the standard deviation in columns MB size and Number of MB (Table 1). Could you please elaborate on this?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The authors have identified the limitations of their work and plan to address them in their future work;
- On line 176 it should be forward instead of backward;
- The reviewer suggests improving definition 2, which is the core definition of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you very much for all the constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. Hereafter, we provide answers to all your questions and comments pointed out in the weaknesses and limitations sections of your review.
**Question 1)**
In a preliminary phase of experimentation, we indeed planned to include LSTM models to tackle nonlinear tasks. Our observation was that applying the forward-backward phases to find an MB(T) in nonlinear synthetic datasets (https://doi.org/10.1613/jair.1.13428) had poor causal recovery when using LSTMs compared to linear methods. We explain this by the variability of DNN training, which has higher fluctuations in R2 than the increase in R2 expected from adding a correct MB(T) variable. Obtaining sufficient stability might require more data, finer human analysis of model training routines, and extensive bootstrapping to remove fluctuations. Therefore, we did not include LSTM in ChronoEpilogi.
**Question 2)**
At every iteration, we build a new model with the variables selected at that step. It is important to note that ChronoEpilogi avoids the creation of a new model for every possible variable set by examining the correlation of the model residuals with the variable set selected at the previous step.
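The flavor of such a residual-driven forward step can be sketched as follows (our own simplified illustration with lag-1 linear models and a crude correlation threshold, not the actual ChronoEpilogi implementation, which relies on proper statistical tests and handles equivalences):

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 500, 8
X = rng.normal(size=(T, p))                      # p candidate time series
# The target depends (at lag 1) only on series 0 and 3, plus noise
y = 1.5 * np.roll(X[:, 0], 1) - 2.0 * np.roll(X[:, 3], 1) + 0.1 * rng.normal(size=T)
Xlag, y = np.roll(X, 1, axis=0)[1:], y[1:]       # lag-1 design; drop the wrap-around row

selected, residual = [], y.copy()
for _ in range(p):
    # Score every unselected candidate by its correlation with the current residuals
    scores = {j: abs(np.corrcoef(residual, Xlag[:, j])[0, 1])
              for j in range(p) if j not in selected}
    best = max(scores, key=scores.get)
    if scores[best] < 0.15:                      # crude stopping rule standing in for a proper test
        break
    selected.append(best)
    # A new model is fit only once a candidate is accepted, not per candidate
    A = Xlag[:, selected]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ coef
```

For this toy data the selected set contains the two true parents (series 0 and 3) before the stopping rule triggers; the point of the sketch is that candidates are screened cheaply against residuals, so model refitting happens once per accepted variable rather than once per candidate.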
**Question 3)**
The discrepancy is likely to be influenced by cross-validation, where the training set in the earliest split is 50% smaller than the one of the last split. SVR results are stable across folds, while TFT models have more unstable results. We plan to add, in the appendix, new results for this experiment with a different cross-validation strategy to confirm our hypothesis.
**Question 4)**
ChronoEpilogi requires two main operators: (a) a modeler, and (b) an estimator of pairwise correlation between the residuals and a Time Series variable. One could impute missing values before constructing a model or computing correlation (https://folia.unifr.ch/unifr/documents/309429). Alternatively, one could employ models that inherently fit models in the presence of missing values (https://doi.org/10.1038/s41598-018-24271-9). The absolute performance of such a version of our algorithm, as well as its relative performance against similar types of algorithms on datasets with missing values, is an open research question for causal discovery methods (https://doi.org/10.1613/jair.1.13428). In this paper, we propose the first algorithm for Time Series Selection with Multiple Solutions, so that we can extend it in the future to handle datasets with missing values.
**Question 5)**
In Table 1, we observe that ChronoEpilogi discovers (on average) smaller solutions (with smaller variance) than GL. Standard deviation and mean are computed across the 270 synthetic MTS, where for each MTS ChronoEpilogi and GL are applied once. For each MTS, we obtain the size of the solutions found by GroupLasso and ChronoEpilogi (all solutions produced by ChronoEpilogi are the same size) by computing their mean and standard deviation.
The standard deviation of MB size is relevant to our experiment, as it allows us to understand that GroupLasso computes feature sets with varying lengths compared to those of ChronoEpilogi. GL tends to include too many variables generating overfitting in linear models.
We also observe that GL tends to dismiss some irreplaceable variables that should have a low impact on predictive performance.
Evaluating the standard deviation of solution size is also relevant to compare FBE and FE (approximate version). We observe that FE tends to create more solutions than FBE, with higher fluctuations. As reported in Table 2 (real datasets), both methods can provide parsimonious solutions. To better appreciate the quality of the approximation offered by FE we will include in the appendix also the characteristics of the solutions computed by FBE in real datasets.
**Weakness 1)**
We consider this an important issue, but it is orthogonal to our current submission, as is also the case in recent empirical studies of causal discovery methods (https://doi.org/10.1613/jair.1.13428).
**Weakness 2)**
We started investigating selected subsets in real TS datasets as Traffic (see Figure 1). This study is interesting to reveal the reasons why multiple replaceable TS variables may exist, as for instance the spatial proximity of traffic sensors included in the dataset. However, such root cause analysis could not be completed without the involvement of experts from different domains, a task that is not possible with the public TS datasets used in our experimental evaluation. We are planning in the future to conduct a deeper analysis of proprietary datasets in the context of an ongoing research project. As noted in the paper conclusions, the discovery of multiple MBs aims to facilitate a better understanding (via appropriate visualization tools) of the underlying data generation mechanism, as well as the construction of ensemble models trained on several or all MBs as a means to create forecasters more robust to noise, faulty sensors, or other systematic errors.
**Weakness 3)**
Current MTS datasets commonly used for forecasting do suffer from lack of domain diversity (https://arxiv.org/pdf/2310.06119) (https://arxiv.org/html/2403.20150v1). We are considering datasets from other domains (such as maintenance and water flow forecasting), but those are proprietary and outside of the scope of this paper due to each dataset's specific challenges.
We are currently experimenting with two more additional datasets: METR-LA and PEMS-BAY (speed readings in road networks) that we found to exhibit multiple MBs. The new results will be included in Table 2 and the appendix of the camera-ready version.
**Limitation 2)**
Fixed
**Limitation 3)**
We have improved definitions 2.1 and 2.3, which we include in the Camera-Ready version of the paper.
---
Rebuttal Comment 1.1:
Comment: Dear authors, thank you for your detailed answers. My concerns were clarified. I keep the score unchanged. | Summary: The authors presents **ChronoEpilogi**, an algorithm for multiple feature selection in multivariate time-series (TS) forecasting. This approach aims to identify all minimal-size subsets of TS variables (Markov Boundaries) that optimally predict a given target variable's future. The key contributions are:
1. **Theory Development**: Introduces the problem of multiple time-series variable selection (MTVS) and the concepts of informational equivalence and interchangeability in TS data.
2. **Algorithm Design**: Proposes ChronoEpilogi, which identifies all Markov Boundaries under broad, non-parametric assumptions.
3. **Experiments and Results**: Demonstrates ChronoEpilogi's scalability to hundreds of TS variables and its effectiveness in reducing the number of variables while maintaining or improving forecasting performance.
Strengths: Some of the key strengths of the paper are:
1. The paper proposes a novel theoretical foundation for multiple feature selection in TS data including concepts like informational equivalence and interchangeability. Combining these concepts the authors have been able to propose an empirical method to detect Markov Boundaries
2. Furthermore, the proposed algorithm ChronoEpilogi is shown to handle large datasets effectively, making it suitable for real-world applications with numerous TS variables. This scalability is crucial to real world usage of the algorithm
3. Another key contribution for the paper is that ChronoEpilogi aims to identify all minimal-size subsets, offering multiple valid forecasting models and insights.
Weaknesses: While being a very interesting paper, there are some avenues for improvement:
1. While the authors discuss the scalability, the algorithm's computational complexity seems to be high for very large datasets, potentially limiting its practicality.
2. Some of the assumptions that the algorithm relies on, such as Compositionality and Interchangeability, may not hold in all real-world scenarios, potentially affecting its generalizability. The authors should consider discussing these limitations in their paper and how well the assumptions hold in practice
3. The authors have provided thorough experimentation. However, to justify the practicality of the algorithm, it would be interesting to report additional validation on more diverse and complex real-world datasets
Technical Quality: 3
Clarity: 3
Questions for Authors: It would be great if the authors can discuss about the computational complexity and the limitations stemming from their assumptions
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weakness above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you very much for all the constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. Hereafter, we provide answers to all your questions and comments pointed out in the weaknesses and limitations sections of your review.
**Question 1) It would be great if the authors can discuss about the computational complexity and the limitations stemming from their assumptions.**
**Weakness 1) While the authors discuss the scalability, the algorithm's computational complexity seems to be high for very large datasets, potentially limiting its practicality.**
ChronoEpilogi complexity essentially depends on the number and the complexity of tests run by each phase of the algorithm. Note that a model is constructed only at the end of each iteration step and not for each candidate subset of TS variables. The exact version of ChronoEpilogi, termed FBE, is indeed expensive (see experiments with synthetic data in Figure 2) but allows us to prove soundness and completeness theorems. For this reason, we have also proposed a greedy approximated version termed FE that avoids running the backward phase and computes the equivalence directly in the forward phase, thus pruning new model generations for all the selected TS variables discovered in the forward phase. As we can see in Table 1, in synthetic datasets fitting multiple MBs, FE runs in 30% of FBE time on average, with only a decrease of causal f1 metric of 0.05 while in real TS datasets, FE is able to find the first MB(T) up to 2 orders of magnitude faster than GroupLasso. Such results suggest the practicality of the proposed algorithm.
**Weakness 2) Some of the assumptions that the algorithm relies on such as Compositionality and Interchangeability may not hold in all real-world scenarios, potentially affecting its generalizability. The authors should consider discussing the limitations in their papers and how well the assumptions hold in practice**
Please note that without any assumption regarding the joint probability distribution, causal variable selection is NP-hard even for linear regression problems (https://www.tandfonline.com/doi/abs/10.1080/00949658208810560). Rather than simply dropping the strong Faithfulness assumption, we have considered weaker assumptions like Compositionality and Interchangeability, allowing us to prove ChronoEpilogi's soundness and completeness. Several practical and useful data distributions satisfy Compositionality. To that extent, in Example 2.1, we show that any deterministic transformation satisfies Compositionality. In Example 2.2, we show that joint Gaussian distributions also satisfy it. We will also include in the appendix examples of distributions that do not respect Compositionality, starting with the typical example of a non-faithful XOR operation between Bernoulli variables with p=0.5. ChronoEpilogi (and its variants) is an effective tool that allows us to experimentally verify the Compositionality and Interchangeability properties in real TS datasets. To the best of our knowledge, our submission is the first work that reports multiple MBs on popular datasets used for MTS forecasting problems related to these two weak assumptions of our algorithm.
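The XOR counterexample mentioned above is easy to reproduce by simulation (our own illustrative sketch): with $X, Y \sim \mathrm{Bernoulli}(0.5)$ and $T = X \oplus Y$, the target is marginally uncorrelated with each parent, so purely pairwise screening sees no signal, yet conditioning on one parent makes the relationship with the other deterministic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
X = rng.integers(0, 2, n)
Y = rng.integers(0, 2, n)
T = X ^ Y  # T = X XOR Y

# Marginally, T carries (almost) no linear signal about X or Y alone ...
assert abs(np.corrcoef(T, X)[0, 1]) < 0.02
assert abs(np.corrcoef(T, Y)[0, 1]) < 0.02
# ... but conditioning on Y reveals a deterministic relationship with X:
assert np.corrcoef(T[Y == 0], X[Y == 0])[0, 1] > 0.99   # T = X     when Y = 0
assert np.corrcoef(T[Y == 1], X[Y == 1])[0, 1] < -0.99  # T = 1 - X when Y = 1
```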
**Weakness 3.The authors have provided thorough experimentations. However, to justify the practicality of the algorithm, it would be interesting to report additional validation on more diverse and complex real-world datasets**
We are currently extending our experimental study to two new TS datasets: METR-LA and PEMS-BAY (speed readings in road networks) that we found to exhibit multiple MBs (https://openreview.net/forum?id=SJiHXGWAZ). Current MTS datasets commonly used for forecasting suffer from a lack of domain diversity (https://arxiv.org/pdf/2310.06119) (https://arxiv.org/html/2403.20150v1). Please note that the challenge of our experimental evaluation is not to measure forecasting performance with different models but to quantitatively evaluate ChronoEpilogi's multiple solutions against our main competitor, namely GroupLasso. The newly produced results also confirm ChronoEpilogi's strong performance, and they will be included in Table 2 and the appendix of our paper.
---
Rebuttal Comment 1.1:
Title: Acknowledging author response
Comment: Thanks for the detailed response | Summary: This paper handles the problem of selecting all the minimal-size subsets of multivariate time series variables such that the past leads to an optimal predictive model for the forecast of a given target variable, which is essentially a time series feature selection problem. Past algorithms have worked to select a single such subset. The proposed algorithm is relatively efficient, in that it does not take as much longer than finding a single subset as one would think, but leading to more insight and better "Markov blankets."
Strengths: 1. The paper handles an important problem in a clever way and is explained quite clearly.
2. The experimental results are convincing and actually include running time, which is often omitted.
3. The theoretical results look correct, although admittedly I did not comb through the proofs in great detail.
Weaknesses: 1. The proposed algorithm was only compared against GroupLasso and not against any other among the related work mentioned in the paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Line 176 should say "forward" rather than "backward."
2. I suspect that algorithm 3 step 9 and algorithm 4 step 3 should have $\geq$ in place of $\leq$.
3. In line 260, how is TSS defined?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you very much for all the constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. Hereafter, we provide answers to all your questions and comments pointed out in the weaknesses and limitations sections of your review.
**Question 1) Line 176 should say "forward" rather than "backward."**
Thank you for finding this typo, we have fixed it in the paper.
**Question 2) I suspect that algorithm 3 step 9 and algorithm 4 step 3 should have ≥ in place of ≤.**
This is indeed another typo, since we stop when we fail to reject the hypothesis stating that the models are different. The smaller the p-value, the more different the tested models. We have corrected it.
**Question 3) In line 260, how is TSS defined?**
TSS stands for Time Series variable Selection, and we will repeat the definition of the acronym at its first appearance in the introduction.
**Weakness 1) The proposed algorithm was only compared against GroupLasso and not against any other among the related work mentioned in the paper.**
This is due to the fact that competitor solutions for multiple-MB discovery assume iid data and hence do not provide open-source code for MTS data; their implementation is therefore challenging and a research work in itself. We should note that time series require specific precautions in terms of estimators, since the data are not iid. Moreover, the only algorithms with theoretical guarantees for multiple-MB discovery are TIE* and KIAMB, which require exponential runtime (see related work) and could not run on the high-dimensional TS datasets of our experimental evaluation. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. Hereafter, we provide a summary of the main modifications and additions we will bring to the final version of the paper.
**1) Extension of the empirical datasets and consideration of other forecasting methods and baselines.**
In terms of additional datasets, we are currently extending our experimental study to two new TS datasets (speed readings from road networks), namely METR-LA and PEMS-BAY (https://arxiv.org/abs/1707.01926), which we found to exhibit multiple MBs. With 5 datasets in total and 10 targets modeled for each, we can compute critical difference diagrams of the statistically significant ranking of GroupLasso, ChronoEpilogi, and NoSelection performance. The newly produced results also confirm ChronoEpilogi's strong performance and will be included in Table 2 of the camera-ready version of the paper. Note that current MTS datasets commonly used for forecasting suffer from a lack of domain diversity (https://arxiv.org/pdf/2310.06119) (https://arxiv.org/html/2403.20150v1).
Regarding additional forecasting models, we could include Nhits and DecoderMLP available in the dedicated library we are using (https://pytorch-forecasting.readthedocs.io/en/stable/models.html). Please note that the challenge of the experimental evaluation of our work is not to measure forecasting performance with different models but the quantitative evaluation of ChronoEpilogi's multiple solutions against our main competitor namely GroupLasso.
Unfortunately, comparison with additional baselines is not possible as competitor solutions for multiple MBs discovery assume iid data and how to adapt the methods for MTS data would require a paper in itself.
We will also provide FBE results on real datasets where possible.
**2) Further evaluation of ChronoEpilogi relevance to xAi.**
ChronoEpilogi is the first algorithm for Time Series Selection with Multiple Solutions and opens new perspectives in Data Explainability, as opposed to Model Explainability, which has been heavily studied in the xAI literature. We argue that existing feature importance methods like SHAP are challenged by the presence of multiple MBs in TS datasets. This is a new claim that we plan to support in the camera-ready version of the paper and which we substantiate in the submitted PDF file. It is known that Shapley values currently suffer from the inclusion of unrealistic data instances when features are correlated, even for linear models (https://www.sciencedirect.com/science/article/pii/S0004370221000539). We are currently investigating how SHAP explanations of regression models might misrepresent the role of variables belonging to some but not all Markov boundaries of the modeled target. We claim that the SHAP importance score of each equivalent set of variables is distributed among the equivalent variables, hence leading to underestimation of an equivalent set's importance when its variables are considered individually. This attribution is unstable: on different data splits, different variables from an equivalence set obtain high importance. The importance of the top variable diminishes as the number of equivalent variables grows, as it shares more and more importance with the remaining variables. We report the results of these experiments on the page attached to the global rebuttal. To the best of our knowledge, these results have not been reported in the existing xAI literature.
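To make the dilution effect concrete, here is a small illustrative sketch (a toy cooperative game, not our actual SHAP experiments): we compute exact Shapley values for a game in which any one of n fully interchangeable variables suffices to predict the target. By symmetry, each variable then receives only 1/n of the equivalence set's total importance, matching the underestimation we describe above.

```python
from itertools import permutations

def shapley_values(n_players, v):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings of the players."""
    totals = [0.0] * n_players
    perms = list(permutations(range(n_players)))
    for order in perms:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            totals[p] += v(frozenset(coalition)) - before
    return [t / len(perms) for t in totals]

# Toy game: the target is fully predictable as soon as any one of the
# (fully redundant, interchangeable) variables is present.
redundant_v = lambda S: 1.0 if S else 0.0

for n in (1, 2, 4):
    print(n, shapley_values(n, redundant_v))
# each of n interchangeable variables receives exactly 1/n of the
# equivalence set's importance
```

Considering any single variable in isolation therefore underestimates the importance of the whole equivalent set, and which variable appears "important" can flip between data splits.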
Pdf: /pdf/d116197297f37dff4c7c723fa46c680120db7724.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper considers the problem of finding all minimal subsets of variables for optimal prediction of time series data, coining the term "Markov Boundaries" for those minimal subsets constituting Markov Blankets for the target time series variables in question.
The paper then proposes novel algorithms for this problem, FBE and FE, proves the soundness and completeness of FBE (FE is a faster approximate algorithm), and empirically evaluates their performance.
The experiments are conducted using both synthetic data (with ground-truth causal structure) and real-world data, and compare the performance of the proposed algorithms against baselines of Group Lasso (GL) and no variable selection, with respect to metrics including predictive accuracy, accuracy of causal structure learning (for synthetic data), computation time, and solution size.
The results presented validate a number of claims about the proposed methods, the notable ones being that they are more accurate at uncovering the ground-truth causal structure on synthetic data and that FE is roughly comparable to GL on real-world data sets in terms of accuracy and computation time, sometimes significantly outperforming it in terms of solution size.
The problem formulation is apparently novel and interesting, and the proposed methods are also novel and theoretically sound (and complete). The empirical results show that they are at least competitive with the standard baselines.
This work would add some valuable knowledge and insights to the community with interest in causal modeling and interpretable learning in time series data.
Strengths: The problem formulation is novel and interesting and well motivated practically.
The proposed solution is novel and sound and complete.
The empirical evaluation is reasonable.
Weaknesses: The performance of the proposed methods against the baseline of Group Lasso on real world data sets is not exactly compelling.
More clarity on the relative advantage of the proposed method(s) would be valuable.
The optimal algorithm, FBE, is not evaluated on real world data sets, which I assume is due to computational complexity. It would be beneficial to know if any evaluation (even if partial) could be performed on FBE on the real world data.
Technical Quality: 3
Clarity: 3
Questions for Authors: One wonders if there are ways to use Group Lasso to obtain multiple solutions of the type obtained by the proposed methods, for example, by performing multiple randomized runs with perturbation and aggregating the outputs. A comparison with such heuristics would be of interest.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors do mention that the claims are limited to the scope of their empirical evaluation which could be enhanced.
It would be additionally desirable to address the question of quantifying the statistical confidence of the outputs of the algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you very much for all the constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. Hereafter, we provide answers to all your questions and comments pointed out in the weaknesses and limitations sections of your review.
**Question 1) One wonders if there are ways to use Group Lasso to obtain multiple solutions of the type obtained by the proposed methods, for example, by performing multiple randomized runs with perturbation and aggregating the outputs. A comparison with such heuristics would be of interest.**
One could extend Lasso-type algorithms based on global optimization to identify multiple solutions. An example of preliminary work in this direction for cross-sectional i.i.d. data is at https://arxiv.org/abs/1710.04995. However, how to extend this or similar types of algorithms to the Group Lasso and time-series data is still an open problem. In addition, proving soundness and completeness for algorithms based on Lasso is also challenging.
**Limitation 1) The authors do mention that the claims are limited to the scope of their empirical evaluation which could be enhanced. It would be additionally desirable to address the question of quantifying the statistical confidence of the outputs of the algorithms.**
In the current version of the paper, we evaluate ChronoEpilogi and its competitors with experiments conducted over three real TS datasets. For each one, we test 10 different targets. This number of datasets does not allow us to apply a meaningful statistical test to the algorithms’ predictive performance. We are currently experimenting with additional real TS datasets that fit multiple Markov Boundaries (MBs), which will allow us to compute a critical difference diagram of the statistically significant ranking of GroupLasso and ChronoEpilogi performance using the post-hoc Nemenyi test.
Moreover, to assess the stability of Feature Importance methods like SHAP in the presence of replaceable variables, we are currently experimenting with synthetic datasets of varying redundancy degrees using the Wilcoxon signed rank tests. In the new datasets, results are similar to the ones shown so far and strengthen our claims.
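As an illustration of the planned statistical comparison (with purely synthetic numbers, not our actual results; method names and error magnitudes are assumptions for the sketch), the Friedman test over per-target errors and the average ranks that feed a critical difference diagram can be computed as follows:

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# hypothetical per-target forecasting errors (lower is better) for the
# three compared approaches across 50 dataset/target pairs
chrono     = rng.normal(0.90, 0.05, 50)
grouplasso = rng.normal(1.00, 0.05, 50)
noselect   = rng.normal(1.10, 0.05, 50)

# Friedman test: are the methods' error distributions distinguishable?
stat, p = friedmanchisquare(chrono, grouplasso, noselect)

# per-target ranks (1 = best), whose averages are the quantities plotted
# in a critical difference diagram; the post-hoc Nemenyi test then
# decides which average-rank gaps are significant
errors = np.vstack([chrono, grouplasso, noselect]).T
ranks = errors.argsort().argsort() + 1
print(p, ranks.mean(axis=0))
```

With clearly separated error levels as above, the Friedman p-value is small and the average ranks order the methods, which is exactly the summary the critical difference diagram visualizes.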
---
Rebuttal Comment 1.1:
Title: Response regarding additional evaluation
Comment: Thank you for elaborating on your on-going efforts on additional empirical evaluation. It would be good (and strengthen the paper) to include the results of those (regarding both Group Lasso and SHAP) in an updated version of the paper. | null | null | null | null | null | null |
RobIR: Robust Inverse Rendering for High-Illumination Scenes | Accept (poster) | Summary: This paper addresses inverse rendering in high-illumination scenes with strong shadows where past methods bake shadows and highlights into estimation results. This paper proposes to use ACES tone mapping and makes it scene-dependent for inverse rendering in high-illumination scenes. This paper also proposes to directly estimate the visibility of each spherical Gaussian of direct illumination instead of a visibility field, which enables an accurate representation of shadows at the edge. The experimental results on the synthetic and real-world datasets show that the proposed method can estimate accurate albedos, surface roughness, and illumination without artifacts in the high-illumination scenes.
Strengths: + This paper proposes a novel regularized visibility estimation that enables an accurate representation of shadows at the edge.
+ Experimental results show that the proposed method successfully estimates BRDF and illumination while existing methods suffer from artifacts. They also indicate the effectiveness of ACES tone mapping compared with log tone mapping methods.
Weaknesses: - It is unclear why the ACES tone mapping, of which usage is the key contribution, enables the robust inverse rendering of high-illumination scenes with strong shadows.
- The proposed method loses the albedo of the detailed texture due to smoothness loss in Eq. 10. For example, Bear in Fig. 7 and truck in Fig. 4.
Technical Quality: 3
Clarity: 3
Questions for Authors: - A more detailed explanation of the effects of ACES tone mapping is expected. Why is it more suitable for high-illumination scenes than other tone mapping methods such as sRGB and log tone mapping? How does it affect the loss and the optimization?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad and appreciate that the reviewer recognizes the novel ideas of RobIR and the proposed method successfully estimates BRDF and illumination. Since many of the questions have already been answered in the common response, our additional response to the reviewer’s comments is below:
**Q1: Why ACES is more suitable than other tone mapping like sRGB and sigmoid.**
As previously mentioned in the common response, high-illumination scenes often require HDR tone mapping to produce the final image, mapping values from 0 to A into the range of $[0, +\infty]$. However, sRGB is not designed for processing HDR input. ACES allows PBR output colors to be in a larger value range, which better distinguishes very dark shadow regions and aids in the decoupling of object materials.
Additionally, ACES has significant advantages over sigmoid curves. Although both map an infinite range to $[0,1]$, ACES is a more commonly used HDR tone mapping curve in graphics, consistent with many rendering engines, and provides better contrast control while preserving more detail in highlights and shadows.
**Q2: Lost details in albedo.**
As previously mentioned in the common response, although the smooth loss does cause some loss of detail, the rendering precision of NeuS is a bigger factor in this loss. In the submitted rebuttal PDF, we included the results of NeuS rendering, which clearly show some blurring. This blurring affects other BRDF components supervised by NeuS, exacerbating the over-smoothing issue. Therefore, we are very eager to use methods with higher rendering and geometric quality, such as 2DGS, in future work.
---
Rebuttal 2:
Title: Fix typo errors in Q1
Comment: **Q1: Why ACES is more suitable than other tone mapping like sRGB and sigmoid.**
As previously mentioned, high-illumination scenes often require HDR tone mapping to produce the final image, mapping values from $[0, +\infty]$ into the range of $[0, 1]$. However, sRGB is not designed for processing HDR input. ACES allows PBR output colors to be in a larger value range, which better distinguishes very dark shadow regions and aids in the decoupling of object materials.
Additionally, ACES has significant advantages over sigmoid curves. Although both map an infinite range to $[0,1]$, ACES is a more commonly used HDR tone mapping curve in graphics, consistent with many rendering engines, and provides better contrast control while preserving more detail in highlights and shadows.
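To make the difference concrete, here is a small numerical sketch comparing Narkowicz's widely used ACES filmic approximation with the standard sRGB transfer function (an illustration of the general behavior only; our scene-specific curve additionally learns a gamma parameter, which is not modeled here):

```python
def aces_tonemap(x):
    # Narkowicz's ACES filmic approximation: maps [0, +inf) into [0, 1]
    y = (x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14)
    return min(max(y, 0.0), 1.0)

def srgb_oetf(x):
    # Standard sRGB transfer function: defined only on [0, 1], so HDR
    # radiance must be clipped first, crushing everything above 1
    x = min(max(x, 0.0), 1.0)
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

for radiance in (0.25, 1.0, 2.0, 4.0):
    print(radiance, round(aces_tonemap(radiance), 3), round(srgb_oetf(radiance), 3))
# sRGB saturates for all radiance >= 1, while ACES still separates
# the HDR levels 1.0, 2.0, and 4.0
```

This is why, under PBR outputs that exceed 1 in high-illumination scenes, an ACES-style curve preserves usable gradients in highlights where an sRGB-style curve clips them.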
---
Rebuttal Comment 2.1:
Title: Additional questions
Comment: I appreciate your addressing the questions. After reading the rebuttal, I have additional questions about tone mapping to clarify the contributions of this paper.
1. Does the proposed method assume an unknown method of tone mapping for the input images? If so,
* Is this assumption common for applications of inverse rendering?
* Is the proposed scene-dependent tone mapping not needed when the tone mapping method is known and can be used in optimization?
2. Would you like to claim that the existing inverse rendering method misses the mismatch of tone mapping between image processing (image formation) and optimization, resulting in failure in high-illumination scenes?
---
Rebuttal 3:
Comment: Thank you very much for the constructive feedback. We hope that our response below will address your concerns.
**Q1: Unknown tone mapping assumption.**
Our proposed method only takes a collection of images with camera poses as input, and the ground-truth tone mapping of these images is unknown. All the methods we compare against in our paper use datasets with unknown ground-truth tone mapping. We introduce scene-specific ACES tone mapping to approximate various tone-mapping scenarios; for example, HDR scenes are typically optimized toward vanilla ACES tone mapping. Therefore, if the tone-mapping method were known, theoretically we wouldn’t need our proposed scene-specific ACES. However, this is nearly impossible because different rendering engines employ different tone mappings, and tone mapping itself involves numerous parameters. Additionally, variations in camera parameters in real-world scenes make it even more difficult to determine the ground-truth tone mapping.
**Q2: The problem of the existing inverse rendering method.**
The existing inverse rendering methods, such as NeRO, use sRGB tone mapping. However, in high-illumination scenes, we typically need tone mapping that can handle HDR inputs to produce the final image. We are the **first** to apply ACES tone mapping in inverse rendering, which fundamentally enables our method to handle inverse rendering in high-illumination scenes. Additionally, the introduction of scene-specific ACES tone mapping allows us to approximate other tone mappings (e.g., sRGB), making our method compatible with non-high-illumination scenes as well.
Building on more accurate tone mapping, we achieve more precise visibility modeling through Regularized Visibility Estimation (RVE) and NeuS Octree, which effectively removes shadows and disentangles other parts of the BRDF estimation more accurately.
Therefore, the failure (especially shadow baking) of existing inverse rendering methods in high-illumination scenes stems from 1) the mismatch of tone mapping between image processing and optimization, as well as 2) inaccurate visibility estimation.
---
Rebuttal Comment 3.1:
Comment: Thank you for your response. I acknowledge the novel idea of scene-specific tone mapping to handle both HDR and non-HDR scenes as well as the effectiveness of the visibility estimation. I raise my rating from Borderline reject to Accept.
---
Reply to Comment 3.1.1:
Comment: Thank you for your recognition and adjustment of the score. We are glad that our rebuttal effectively addressed your concerns. We also appreciate your recognition of our methods. Thank you again for your constructive feedback and guidance throughout the review process. | Summary: This paper proposes a method for the inverse rendering of high-illumination and highly reflective scenes. There are two training phases: in the first phase, it trains NeuS to obtain geometry and computes visibility via octrees; in the second phase, it decomposes lighting into SGs and materials via MLPs.
Strengths: Direct and indirect lighting and visibilities are represented by SGs, which is compact.
From the results, shadows and specularities are decomposed well.
Tone mapping is used, as NeRF in the Dark, which improves results for high-illumination scenes.
Weaknesses: Figure 2 gives comparisons with and without smooth loss, however, the one w/o smooth loss is better, while the loss may over-smooth the details.
Figure 11 shows results where the estimated lighting contains the object's original colors.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the relighting work? Do you use original indirect lighting?
Why is visibility divided into two stages? Why not directly use the results calculated by the NeuS octree as the ground truth to supervise the SGs? Instead, why is the NeuS octree result used as the ground truth to supervise the MLP, and then the MLP's distribution used to supervise the SG results? What are the differences?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review as well as the suggestions for improvement. Our response to the reviewer’s comments is below:
**Q1: How does the relighting work.**
We imported the reconstructed albedo and roughness maps into Blender and performed relighting using Blender's scripting. Fig. 10 aims to demonstrate that we can extract high-quality PBR materials that can be directly applied in creative tools like Blender.
**Q2: Why use an MLP instead of directly using the results from the NeuS Octree.**
Good question. This is because querying Octree tracing is extremely time-consuming. By using an MLP, we can cache the results of the NeuS Octree and speed up inference.
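A minimal sketch of this caching idea, under loose assumptions: the expensive octree visibility query is evaluated once on sampled points, then distilled into a cheap function approximator for fast inference. Here the octree is replaced by a synthetic stand-in, and a random-feature ridge regressor stands in for the visibility MLP; all names and constants are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SG = 16  # number of direct-illumination spherical Gaussians (assumption)

def octree_visibility(x):
    """Stand-in for slow NeuS-octree ray tracing: a smooth synthetic
    per-SG visibility function of 3D position, for illustration only."""
    freq = np.arange(1, N_SG + 1)
    return 1.0 / (1.0 + np.exp(-x.sum(-1, keepdims=True) * 0.5 * freq))

# one-off expensive pass: query the "octree" on sampled surface points ...
X = rng.random((2000, 3))
Y = octree_visibility(X)

# ... then distill the results into a cheap approximator (random Fourier
# features + ridge regression, standing in for the visibility MLP)
W = rng.normal(scale=2.0, size=(3, 256))
phi = lambda x: np.concatenate([np.sin(x @ W), np.cos(x @ W)], axis=-1)
A = phi(X)
coef = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ Y)

# fast inference path used during BRDF optimization
Xtest = rng.random((500, 3))
err = np.abs(phi(Xtest) @ coef - octree_visibility(Xtest)).mean()
print(err)
```

Once fitted, every visibility lookup is a feature map and a matrix product instead of a tree traversal, which is the speed-up the MLP cache provides.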
**Q3: Over-smooth details in Fig. 2.**
This is a finding we want to highlight: smoothing helps correct geometric errors. Although the smooth loss results in some loss of detail, it effectively fixes geometric breakages caused by reflections or shadows in the normal map. Addressing these issues will aid in the decoupling of shadows and indirect illumination.
**Q4: Environment map consists of the object color.**
The estimation of environment map is still not robust in current IR frameworks. However, we want to emphasize that, as shown in Figure 5, our method has significantly improved the accuracy of environment map estimation compared to previous approaches. Additionally, we are one of the few methods that quantitatively compare environment map metrics (see Table 1). We will focus on addressing the estimation of environment maps in cases like those shown in Figure 11 in our future research. | Summary: This paper introduces RobIR, an inverse rendering approach that can better tackle “high-illumination” scenes. RobIR first leverages the existing neural field model (NeuS) to represent 3D geometry information including normal, visibility, and indirect illumination. It then utilizes these geometry priors to decompose rendering attributes of the scene through the approximated rendering equation with spherical Gaussian. This work introduces an optimizable ACES tone-mapping and regularized visibility estimation model to better handle HDR color and shadow occlusions, respectively. Their experiment demonstrates some impressive results on shadow disentanglement.
Strengths: 1. This work has some further thoughts on color tone mapping for the typical multi-view inverse rendering. The proposed optimizable ACES tone mapping looks very effective in improving inverse rendering results.
2. The proposed visibility representation (RVE) also plays an important role in the final results. RVE with its neural net accumulates and denoises Monte Carlo samples to achieve more accurate and stable visibility evaluation. Their efforts to refine the visibility should be appreciated.
3. I am glad the authors clearly point out they use the original NeRF rendering of Hotdog and Lego, instead of the NeRFactor’s.
Weaknesses: 1. This paper still follows a commonly used multi-stage inverse rendering strategy with neural fields. The geometry representation is based on NeuS; the rendering formulation (SG rendering, visibility, and indirect illumination) is mainly based on InvRender; The proposed optimizable ACES tone and REV are more like incremental improvements over InvRender. The key rendering formulation and optimization remain the same as the prior works. Therefore, the novelty of this work is moderate.
2. The description of RVE in Sec. 3.4 is not very clear. It seems that MLP $Q(x, \tau)$ directly outputs N visibility ratios, thus $\eta(x)$ in Eq. 12 should also be an N-dim vector instead of a scalar value.
3. The proposed method is quite time-consuming. Training time even without NeuS is around 5 hours.
4. The proposed regularization terms may hurt the high-frequency details in real-world scenes (Fig. 7).
5. This method is limited to dielectric materials, without the consideration of metallic and glossy objects, as already pointed out by the authors.
6. The paper should include some inverse rendering methods with differentiable path tracing, as these methods can explicitly handle visibility, for example, NvDiffRecMC, Mitsuba, etc.
7. Minor errors:
* L275 accuurate -> accurate
* Figure 8: Hotdog label is wrong.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the differences between the final optimized ACES tone mapping and sRGB gamma tone mapping? How does this optimized tone mapping vary from scene to scene? It would be better if tone-mapping curves were included in the paper.
2. I tested this ACES tone mapping mentioned in the paper, it seems that the proposed learnable ACES tonemapping (an S-curve) cannot well approximate the existing concave tone-mapping curves that are potentially used for rendering Blender objects (e.g., sRGB curve, Filmic curve, AgX curve, etc.). Given this limitation, how does the paper address the color mismatch in the low-illumination color space?
3. The paper does not show the metrics for rendering (NVS) with decomposed attributes. I’d like to see the rendering quality metrics for those NeRF scenes.
4. It would be better if the author could show some video examples of moving shadows while rotating envmaps in the final release.
5. Why does the learnable parameter $\gamma$ have an exponent 0.2?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are adequately discussed in the last paragraph of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review as well as the suggestions for improvement. We will revise the typo errors in the paper based on these insightful suggestions. Our response to the reviewer’s comments is below:
**Q1: Comparison with inverse rendering methods with differentiable path tracing.**
Great suggestion. We will add citations to these papers in our paper. We have shown a comparison with `NvDiffRecMC` in the rebuttal PDF. As can be seen, NvDiffRecMC still cannot remove baked shadows and indirect illumination from PBR materials, demonstrating the robustness of our method.
**Q2: Show some video examples of moving shadows while rotating envmaps.**
Of course, no problem. Unfortunately, we are unable to submit a video during the rebuttal period. However, we can provide other examples to demonstrate that our results are indeed strong. In Fig. 9, we offer several examples that illustrate the robustness of our method in shadow removal, which can partially demonstrate our capability to achieve the task you mentioned.
**Q3: This method is limited to dielectric materials.**
We are very grateful to the reviewer for pointing out this issue and noticing that we discussed it in the Limitations section. We want to emphasize that our approach is fundamentally a general method that can be integrated with other NeuS-based inverse rendering methods for glossy objects, such as NeRO. In the NeRO paper, there is also an issue with shadow baking in PBR materials for glossy objects. We believe that our approach can help NeRO achieve higher quality PBR material decomposition for metallic and glossy objects.
**Q4: Novel view synthesis on NeRF synthetic scenes.**
From our perspective, we believe that novel view synthesis (NVS) is not the primary focus of inverse rendering. Instead, we are more concerned with relighting and the decoupled BRDF components and PBR materials. Using the rendering equation to constrain the process can negatively impact NVS performance, resulting in rendering metrics that are less competitive than methods that directly output color. Additionally, NVS performance is highly sensitive to the base method used; for example, approaches based on NeuS are likely to perform worse in NVS compared to methods based on TensoRF or 3D-GS, which can introduce a degree of unfairness. Despite this, we still provide NVS results (on hotdog, lego, ficus, and mic) to demonstrate that the fidelity of our reconstruction is acceptable.
| | PSNR | SSIM | LPIPS |
| --------- | ----- | ------ | ------ |
| Ours | 30.11 | 0.9466 | 0.0528 |
| NVDiffrec | 29.81 | 0.9732 | 0.0345 |
| InvRender | 27.64 | 0.9056 | 0.0982 |
| TensoIR | 32.47 | 0.9637 | 0.0377 |
We want to emphasize once again that, although our NVS quality may not be the highest, the image supervision is sufficient to decouple high-quality and smooth PBR materials. These can be effectively used for tasks that are of greater interest in inverse rendering, such as relighting and shadow removal.
**Q5: Why does the learnable parameter $\gamma$ have an exponent 0.2.**
This is a very detailed question. It's an engineering trick to allow ACES to stretch across a wider range.
**Q6: The use of $\eta (x)$ in Eq. 12 is inaccurate.**
Thank you very much for pointing out this issue. Your understanding is completely correct. In Eq. 12, the supervision value for $Q(x, \tau)$ should be an N-dimensional vector, representing the visibility of N direct SGs at coordinate **x**. We will replace $\eta (x)$ with a different mathematical symbol in the paper.
Finally, we would like to thank the reviewer once again. Many of the reviewer's comments were very accurate, which truly made us very happy.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed responses. I am generally satisfied with these responses.
I'll retain or possibly increase my score later.
I still think NVS rendering quality is also an important factor that should be considered. I encourage you to include these metrics in the supplement.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the valuable comments and are keen to follow up on the provided suggestion to include NVS metrics in the supplement.
Additionally, we want to emphasize that the NVS results are highly dependent on the base method. We tried applying our method to 3D-GS, and the NVS results are shown in the table below.
| | PSNR | SSIM | LPIPS |
| ------- | --------- | ---------- | ---------- |
| Ficus | 32.35 | 0.9773 | 0.0191 |
| Hotdog | 35.66 | 0.9806 | 0.0297 |
| Lego | 34.51 | 0.9766 | 0.0208 |
| Mic | 34.01 | 0.9871 | 0.0129 |
| Average | **34.13** | **0.9804** | **0.0206** |
| Ours | 30.11 | 0.9466 | 0.0528 |
Although the NVS results are significantly higher than ours, the decomposition of each component is completely chaotic, with shadows baked into the albedo, the environment map being meaningless, and so on. This issue arises because the geometric quality of 3D-GS is far inferior to that of NeuS.
In the future, we will attempt to base our method on models like 2D-GS, which possess both good geometric and rendering quality. We believe that strong geometric quality is the foundation for the correct decomposition in inverse rendering, facilitating accurate BRDF estimation and yielding better PBR materials and environment maps for relighting and shadow removal. Improved rendering quality will also enhance NVS quality and further improve the rendering quality of relighting. | Summary: This paper introduces RobIR, an inverse rendering approach designed to handle strong or directional illumination scenes with strong shadows and specular reflections.
The proposed method aims to decouple environment lighting and object materials, with the goal of producing high-quality albedo without baked shadows.
Building on top of prior inverse rendering methods such as InvRender, RobIR introduces two components that further boost the reconstruction quality: (1) ACES tone mapping with an optimizable gamma parameter to better capture the image formation process; (2) regularization for visibility estimation.
RobIR demonstrates better performance over existing methods in quantitative and qualitative evaluations.
Strengths: - The model design choices are valid and sensible.
- The experiments and ablation study are thorough.
- The paper is well written and easy to follow.
Weaknesses: [W1] The benefit of ACES tone-mapping is a bit surprising. Despite a more accurate formulation for the image formation process, the task of inverse rendering is inherently still an ill-posed problem. It’s a surprising conclusion that a tone-mapping formulation can robustly and significantly benefit shadow removal.
With many other regularization terms entangled, it’s a bit hard to evaluate the correctness of this specific component.
I’d hope to know more details in the following aspects:
[W1.1] Missing visualization of the tone-mapping curve. Despite an important contribution, the estimation results of the tone-mapping curve are missing. What does the default ACES tonemapping look like, and what does the final tonemapping look like with the optimizable gamma?
In the revised version, the curves should be qualitatively visualized and included in the main paper. From the existing experiments (such as the PSNR metrics), the audience cannot intuitively understand how well the tonemapping is estimated.
[W1.2] Missing evaluation of the tone-mapping curve.
As many datasets do not have GT tonemapping, it’s unclear how accurately the tone-mapping approximates the GT tonemapping.
One way to evaluate is to add an additional tone adjustment to the input dataset. Assume that with the original dataset images $\{ I \}$, the method reconstructs a tonemapping curve $f$. Given a new tone adjustment function, e.g. $g(x) = x^\kappa$, the adjusted dataset images become $\{g(I)\}$. Re-running the method yields a newly reconstructed tonemapping curve $f_\kappa$. The consistency between $g \circ f$ and $f_\kappa$ can indicate how well the model approximates the additionally introduced tone adjustment function. $\kappa$ can be set to values like 0.5 or 2.
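This proposed consistency check can be sketched as follows; the placeholder curve `f` below (a simple Reinhard-style operator) is purely illustrative and not the method's actual reconstruction:

```python
# Sketch of the proposed tone-mapping consistency check.
# In the ideal case, re-running the method on g-adjusted images should
# recover f_kappa ≈ g∘f, so the gap below should be near zero.

def f(x):
    # placeholder "reconstructed" tone curve (Reinhard-style operator)
    return x / (1.0 + x)

def g(x, kappa):
    # additional tone adjustment applied to the dataset images
    return x ** kappa

def consistency_gap(f, f_kappa, kappa, xs):
    # maximum deviation between g∘f and the newly reconstructed curve
    return max(abs(g(f(x), kappa) - f_kappa(x)) for x in xs)

xs = [i / 100.0 for i in range(1, 401)]    # radiance samples in (0, 4]
ideal_f_kappa = lambda x: g(f(x), 0.5)     # perfect re-reconstruction
print(consistency_gap(f, ideal_f_kappa, 0.5, xs))  # → 0.0
```

A real evaluation would substitute the curves actually reconstructed by the method before and after the adjustment; a small gap indicates the optimizable tone mapping tracked the injected $g$.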
[W1.3] The evaluation metric on the Albedo is flawed. As albedo estimation/optimization often involve an unknown scale, PSNR alone is not a proper evaluation for Albedo. Check out [1] for more analysis and more appropriate scale-invariant metrics.
[W1.4] Most of the results are from synthetic datasets, where the GT tonemapping could potentially be close to ACES. For real-world results in Fig.7, the albedo estimation looks strongly regularized and over-smoothed.
[W1.5] Are the radiance values (before tonemapping) and indirect illumination in HDR? If so, what is the activation function?
[W2] The proposed method involves a complicated training pipeline (two stages, each with its own loss scheduling), and the novelty is relatively limited.
**References**
[1] Grosse et al., Ground truth dataset and baseline evaluations for intrinsic image algorithms, ICCV 2009.
Technical Quality: 2
Clarity: 3
Questions for Authors: This paper addresses the shadow baking issue in inverse rendering by proposing two straightforward but effective techniques: optimizable tone-mapping and visibility regularization.
However, I am not fully convinced that optimizable tone-mapping significantly benefits shadow removal, and without further clarification it’s challenging to evaluate the technical correctness of this component. In the rebuttal, please prioritize addressing Weaknesses 1.1-1.4.
**[Post rebuttal update]**
The rebuttal and related discussion address my concerns. I update my rating to Weak Accept.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The paper discusses limitations at the end of the paper. It would be beneficial to include failure cases in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that the reviewer recognizes the experiments and results of RobIR as thorough, and we appreciate the feedback. Our responses to the reviewer’s comments are below:
**Q1: Visualization of the tone-mapping curve.**
We highly appreciate this suggestion, which can improve the readability of the article. We have included visualizations with different tone mappings and different gamma settings in the submitted rebuttal PDF.
**Q2: Missing evaluation of the tone-mapping curve.**
This is also a very good suggestion. We thank the reviewer for the in-depth understanding and for proposing a method to evaluate real-world scenarios without GT tone mapping. Here, we provide tone mapping evaluations for two of our rendered scenes: truck and chessboard.
- Truck: We rendered the scene in Blender and used Blender's built-in Filmic tone mapping to handle the high dynamic range. The final optimized $\gamma$ for the `truck` scene is 1.0, meaning the optimized tone mapping is the vanilla ACES. Considering that the Filmic curve is modified from ACES and their distributions are generally consistent with only some differences in detail, we believe that the optimization of tone mapping for this scene is accurate.
- Chessboard: We also rendered this scene in Blender, but used sRGB to process the BRDF output. The final optimized $\gamma$ value is 0.42. In the submitted PDF, we compared the differences between the ACES curve with $\gamma=0.42$ and the sRGB curve. It can be seen that $\gamma^{-0.2}$ stretches the ACES curve very well, making it closer to the distribution of the sRGB curve. Combined with more accurate visibility modeling (NeuS Octree and Regularized Visibility Estimation), we can remove the baked shadows and reflections from the PBR material in the chessboard scene (See Fig. 6 and Fig. 8).
**Q3: Albedo quantitative comparison.**
We greatly appreciate the reviewer's attention to the hue differences in albedo in inverse rendering, as directly using PSNR might lead to unfair comparisons. We carefully studied the "Ground Truth Dataset and Baseline Evaluations for Intrinsic Image Algorithms" for evaluating and decomposing albedo and found that their decomposition formula is $I(x) = S(x)R(x) + C(x)$, where $S(x)$, $R(x)$, and $C(x)$ denote illumination, albedo, and specular components, respectively, which is not compatible with our method. Nonetheless, we are grateful to the reviewer for suggesting a more reasonable evaluation of albedo.
Regarding the evaluation of albedo, we believe that qualitative assessment is more important. Therefore, in Fig. 4, we provide extensive comparisons to demonstrate that our method does not bake shadows into the albedo. The removal of shadows cannot be solved merely by adjusting the hue (scale). Quantitative experiments only serve as a supplement to albedo evaluation. Although every method has issues with hue affecting PSNR to some extent, our method remains the closest and does not bake shadows. Furthermore, we also provide SSIM and LPIPS metrics, which are less sensitive to hue, and these metrics consistently show that our method achieves higher quality albedo, in agreement with Fig. 4.
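For concreteness, a scale-invariant comparison of the kind the reviewer points to could look like the following minimal sketch (toy values; not the evaluation code used in the paper):

```python
# Solve for the scalar s minimizing ||s*pred - gt||^2 in closed form
# before measuring error, so a global hue/brightness scale on the
# predicted albedo is not penalized.

def scale_invariant_mse(pred, gt):
    num = sum(p * q for p, q in zip(pred, gt))
    den = sum(p * p for p in pred)
    s = num / den if den > 0 else 1.0   # least-squares optimal scale
    return sum((s * p - q) ** 2 for p, q in zip(pred, gt)) / len(gt)

gt   = [0.2, 0.4, 0.6, 0.8]
pred = [0.1, 0.2, 0.3, 0.4]  # correct albedo up to a global scale of 2
print(scale_invariant_mse(pred, gt))  # ≈ 0: the scale is factored out
```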
**Q4: Albedo estimation in real-world scenes looks over-smoothed.**
In Fig. 7, the `Man` scene is actually successful in removing complex lighting and shadows, resulting in a smooth albedo, as the material of the sculpture is uniform. However, in the `Bear` scene, there is indeed an issue of over-smoothing. As mentioned in the common response, the smooth loss does have some impact, but it is more attributable to the rendering precision of NeuS. We are very much looking forward to considering structures that balance both rendering quality and geometric quality, such as 2DGS, in our future work.
**Q5: About the radiance values and indirect illumination.**
Your understanding is accurate. The radiance values (before tone mapping) and indirect illumination are in HDR, which are then mapped to $[0,1]$ using scene-specific ACES tone mapping. The activation functions for albedo and roughness are **sigmoid**, while SG-based direct light is modeled as `nn.Parameter`. The indirect light is supervised by NeuS (the NeuS supervision values in $[0, 1]$ are mapped back to the HDR range in the BRDF estimation stage using the inverse ACES mapping in Eq. 7).
**Q6: The statement on the role of scene-specific ACES tone mapping.**
In Figure 8, we conducted a rigorous ablation study on the effect of ACES. Introducing scene-specific tone mapping indeed plays a crucial role in shadow removal.
Finally, we would like to thank the reviewer once again. Many of these points were things we had not noticed before, and these suggestions will significantly improve the readability of the paper. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable comments. We are glad and appreciate that the reviewers recognize that our proposed regularized visibility estimation and ACES tone mapping are novel, and our experiments are thorough and impressive. We will further polish our paper and release our codes.
We would first like to clarify the contributions of RobIR and provide the common response for the explanation of details absence and ACES tone mapping. Following that, we will address the specific questions posed by each reviewer.
**1. Contribution:**
We are very grateful to reviewer `GRUR` for pointing out that we used the original NeRF rendering of Hotdog and Lego instead of NeRFactor’s. This actually highlights the fundamental contribution of our work. Currently, most neural field-based inverse rendering methods are applied to scenes with low light intensity (as shown in Fig. 13), which avoids the challenge of decoupling shadows, indirect illumination, and PBR materials. This reflects that the components decomposed by current inverse rendering methods are often messy and fail to decouple shadows from PBR materials in high illumination scenes (see Fig. 4).
Through this paper, we aim to draw the community's attention to this basic issue in current IR frameworks, and we propose our solution by explicitly introducing ACES tone mapping and more accurate visibility modeling.
**2. The absence of the texture details in Fig. 4 and Fig. 7.**
Reviewers `ijRb` and `GRUR` believe that our regularization terms may lead to a loss of high-frequency details. The loss of some details is indeed an issue brought about by smooth loss. Additionally, we believe that the rendering accuracy of NeuS is a more significant reason for the loss of details. However, we want to emphasize that **the quality of geometry** is crucial for correctly understanding the scene and decoupling materials and shadows. Only through NeuS's high-quality geometric reconstruction and smooth loss correction of geometric errors can we correctly decouple shadows from the object's PBR materials, ensuring that shadows or indirect illumination are not baked into the PBR materials (**No shadow residual** in Fig. 7 bear and Fig. 4 truck). We look forward to incorporating work that excels in both geometric reconstruction and rendering quality, such as 2DGS, in future endeavors to enhance the details of PBR materials.
**3. Why ACES can make a difference in BRDF estimation.**
We are very grateful that almost every reviewer raised this issue; it is a very profound question. We believe that the introduction of scene-specific ACES tone mapping has two benefits for BRDF estimation.
- First, as reviewer `ySN8` mentioned, incorporating ACES tone mapping undoubtedly provides a more accurate formulation for the image formation process, especially for high-illumination scenes (since rendering in such scenes often requires HDR tone mapping).
- Second, and what we consider a more direct reason in neural field-based inverse rendering, is that it offers a broader value range for the calculation of PBR color (Lines 159-163). In the submitted rebuttal PDF, we provided a comparison **between ACES and sRGB tone mapping**. It can be observed that the input for sRGB is within the range of $[0, 1]$, whereas the input for ACES extends over $[0, +\infty)$. This allows the color obtained through PBR to have greater contrast when aided by ACES tone mapping, thereby helping to decouple the shadows in extremely dark areas from the PBR material.
However, not all scenes with shadows or high illumination undergo HDR tone mapping, and there are differences between various HDR tone mapping methods (as reviewer `GRUR` mentioned, such as Filmic and AgX curves). Therefore, we introduced a $\gamma$ correction (see Eq. 8) to stretch the ACES curve, allowing it to adapt to different light intensities and to approximate various tone mapping curves. We also included a comparison of curves under different $\gamma$ values with other tone mapping methods in the submitted rebuttal PDF.
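As a rough illustration of this idea, the widely used Narkowicz rational-fit approximation of ACES can be combined with a $\gamma$ exponent. This is only a hedged sketch: the exact curve used and the placement of $\gamma$ in our Eq. 8 may differ.

```python
# Gamma-adjustable ACES sketch (assumptions: Narkowicz approximation,
# gamma applied as a power on the tone-mapped value).

def aces(x):
    # Narkowicz (2015) ACES filmic approximation; input x in [0, +inf)
    y = (x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14)
    return max(0.0, min(1.0, y))

def aces_gamma(x, gamma):
    # scene-specific stretch of the ACES curve; gamma = 1 is vanilla ACES
    return aces(x) ** gamma

# gamma < 1 lifts the curve toward an sRGB-like shape.
for x in (0.18, 1.0, 4.0):
    print(f"{x:4.2f}  aces={aces(x):.4f}  stretched={aces_gamma(x, 0.42):.4f}")
```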
Overall, we believe that the introduction of scene-specific ACES tone mapping undoubtedly provides a foundation for more precise BRDF estimation. Its inclusion is essential for the decoupling of shadows, and when combined with our proposed Regularized Visibility Estimation (RVE), it enables more robust shadow removal. As shown in the Fig. 8 ablation study, neither ACES nor RVE alone can achieve reliable shadow removal, but together they can truly make a difference.
Pdf: /pdf/1110dba0fe4918f441479fd92366e603e139f65f.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping | Accept (poster) | Summary: This paper introduces NeuroBOLT, a transformer-based model. NeuroBOLT utilizes multi-dimensional representation learning across temporal, spatial, and spectral domains to translate raw EEG data into comprehensive fMRI activity signals across the entire brain. Experimental results showcase NeuroBOLT's ability to effectively reconstruct resting-state fMRI signals across primary sensory, high-level cognitive areas, and deep subcortical regions.
Strengths: 1. The paper tackles one of the most challenging and competitive topics in neuroscience.
2. The motivation behind the paper is quite clear, and the experimental section is logically sound.
3. The figures and tables in the article are clear and well-organized, making it highly readable.
Weaknesses: 1. The method abbreviation and the title are not closely related. It is unclear where 'BOLT' comes from in the title, and even after reading the abstract, it remains confusing.
2. In fact, there has been a lot of work on fMRI-EEG in recent years, especially in 2023 and 2024, but the author's related work lacks a significant amount of relevant literature.
3. In the abstract and introduction, the author's description of the method is inconsistent with the organization in the methodology section, resulting in a need for improved readability.
4. The writing of the article needs to be further standardized. For example, 'FMRI data' is sometimes written with a capital 'F' and other times as 'fMRI data'.
5. Although the layout and presentation of the tables are aesthetically pleasing, the font size is too small, making them difficult to read even when enlarged.
6. The paper does not provide code or data to support the reproducibility of results.
7. This paper lacks details on the parameter selection for the baseline methods. Although the authors state, 'The baseline models are from [44] and [16], where we choose the models with the best downstream classification task performance,' the datasets and tasks in references [16] and [44] are not entirely consistent with those in this paper. Therefore, the authors should specify the exact process of parameter selection.
8. The equations and symbols in the article are not very standardized. The authors should provide notation to help readers understand.
9. The authors should conduct statistical tests to validate the significance of their methods.
10. For readers in the NeurIPS community, the theoretical contribution of this paper appears to be weak.
11. I don't quite understand what the author means by the third point of contribution: 'Successful resting-state fMRI reconstruction To our knowledge, this is the first study to successfully reconstruct the resting-state fMRI signal from raw EEG data, with only 26 electrodes.' What is the significance of 26 electrodes?
12. Equation 2 does not appear to be a complete equation.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the twelve weaknesses above.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed in the section of Discussion and Conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable suggestions, which truly help us enhance the quality and readability of our paper from multiple aspects. Responses to your concerns are presented as follows:
> **Consistency and presentation details (W3-5, W8, W12):**
* **W3**: We restructured the model overview in the intro and abstract, and included summarizing sentences to ensure a coherent and clear presentation of our method throughout the manuscript.
* **W4**: We have carefully reviewed the manuscript and ensured that we use a capital "F" only when "fMRI" appears at the beginning of a sentence, following standard conventions.
* **W5**: We have now moved the MSE metric to the appendix to create more space for displaying the mean and standard deviation of R values. We will include an interactive table that allows readers to easily zoom in/out on our project page.
* **W8**: We have now reviewed all equations and symbols throughout the manuscript to ensure they are standardized. In the revised version of the manuscript, we will include a notation section (table) in the appendix that defines all symbols and terms used in the equations to help readers better understand the mathematical content and ensure consistency throughout the manuscript.
* **W12**: The expression in question is intended to define the set of patch embeddings (i.e., to define the notation), rather than to present an equation.
> **Meaning of the model abbreviation (W1):**
“NeuroBOLT” stands for “**Neuro**-to-**BOL**D-**T**ranslation.” We have added clarification of this abbreviation both in the abstract and introduction.
> **Related work (W2):**
We have incorporated additional relevant literature and discussed their relevance to our work in the Introduction section of our revised manuscript, including the works from Calhas et al. [1-2] and Liu et al. [3-4].
> **Data and code availability (W6):**
We will release all datasets, code, and model weights upon acceptance. This will provide the necessary resources for the community to validate, further explore, and build upon our models.
> **Training details of baselines (W7):**
We ensured that all training hyperparameters (like batch size) were consistent with those used for our model across all the baselines. Our baseline models consist of (1) SOTA EEG encoders mentioned in [5-6] and (2) EEG-to-fMRI frameworks [7-8], as detailed in Appendix B.2. We strictly adhered to the model settings provided in their original papers when training. We have clarified this and added a table in our revised manuscript that specifies these parameter settings to ensure the results' replicability.
> **Statistical tests (W9):**
We have now added statistical tests for all comparisons with other baseline models (please see **General Response**).
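As one concrete form such a comparison can take (the specific test and all numbers below are illustrative only, not the values reported in the paper), a paired sign-flip permutation test on per-subject R values checks whether an improvement is consistent across subjects:

```python
# Exact two-sided sign-flip permutation test on paired differences
# (model R minus baseline R per subject); feasible for small n.
import itertools

def signflip_pvalue(diffs):
    observed = abs(sum(diffs))
    hits = total = 0
    for signs in itertools.product((1, -1), repeat=len(diffs)):
        total += 1
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed - 1e-12:
            hits += 1
    return hits / total

ours     = [0.58, 0.55, 0.61, 0.50, 0.57, 0.59, 0.54, 0.60]  # illustrative
baseline = [0.47, 0.44, 0.52, 0.41, 0.48, 0.50, 0.45, 0.49]  # illustrative
diffs = [a - b for a, b in zip(ours, baseline)]
print(signflip_pvalue(diffs))  # 2/256 ≈ 0.0078 when every diff is positive
```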
> **Theoretical contribution (W10):**
Our study aims to address a key limitation in previous methods for learning the effective EEG representation useful for EEG-fMRI synthesis. We not only incorporate the SOTA EEG encoding framework but also emphasize the importance of more comprehensive frequency representations, which are often neglected in previous methods [1-4,6-8]. We propose a multi-level spectral representation learning approach that captures and aggregates a range of rapid to smooth fluctuations with adaptive windows. As demonstrated in Table 2 (ablation study), NeuroBOLT benefits from this design by effectively capturing dynamic representations and improving synthesis accuracy and fidelity. Furthermore, additional experiments on an auditory task dataset (please see the **General Response**) validate the **robustness and applicability of our learned representations across neural data from different sites and scanners.**
We believe our approach not only facilitates the translation between two neuroimaging modalities but also highlights the need for multi-scale spectral features for more effective EEG encoding in future research within this community. We will include more detailed theoretical derivations and explanations of spectral representations in the appendix to strengthen the theoretical foundation of our work.
> **Significance of 26 channels (W11):**
For EEG, having fewer than 32 electrodes (26 in our case) is typically considered a small channel set, which presents challenges in accurately estimating neural activity in cortical and subcortical regions using traditional methods like EEG source localization, which often requires >64 channels [9]. However, our experiments show that even with fewer electrodes, we were able to reconstruct the fMRI signals across multiple brain regions in both intra- and inter-subject prediction scenarios. This finding suggests that our method is practical and effective, potentially broadening its applicability to real-world scenarios, such as clinical applications, where electrode count may be limited.
We genuinely appreciate the suggestions, and believe our paper will be improved with your feedback. Please let us know if you have additional questions!
[1] Calhas, David, et al. "Eeg to fmri synthesis for medical decision support: A case study on schizophrenia diagnosis."
[2] Calhas, David, et al. "Eeg to fmri synthesis benefits from attentional graphs of electrode relationships."
[3] Liu, Xueqing, et al. "Latent neural source recovery via transcoding of simultaneous EEG-fMRI."
[4] Liu, Xueqing, et al. "A convolutional neural network for transcoding simultaneously acquired EEG-fMRI data."
[5] Jiang, Wei-Bang, et al. "Large brain model for learning generic representations with tremendous EEG data in BCI."
[6] Yang, Chaoqi, et al. "Biot: Biosignal transformer for cross-data learning in the wild."
[7] Li, Yamin, et al. "Leveraging sinusoidal representation networks to predict fMRI signals from EEG."
[8] Kovalev, Alexander, et al. "fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships."
[9] Michel, Christoph M., et al. "EEG source imaging."
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response.
1. Some specific modifications by the author were not observed, such as W3, W5, W6.
2. Regarding W2, see [1-6] below. Aside from papers titled "EEG-to-fMRI synthesis," many papers on "simultaneous EEG/fMRI" also align with the direction of this work. Among the articles cited and compared by the authors, only 19 and 24 are related to EEG-fMRI translation, while most focus on EEG encoding. The insufficient comparative experiments also stem from a lack of thorough coverage of related work.
3. Regarding W7, the author stated, "We strictly adhered to the model settings provided in their original papers when training." However, the datasets and evaluation metrics used by the author may differ from those in the original paper, which may not be entirely fair.
4. Regarding question 10, the author mentioned "validate the robustness and applicability," but based on the appendix results, it appears that the author did not conduct an in-depth study of robustness, such as the method's robustness under different fMRI and EEG noise levels. Additionally, "We will include more detailed theoretical derivations and explanations of spectral representations in the appendix to strengthen the theoretical foundation of our work." These specific details should also be clarified.
[1] Bricman P, Borst J. EEG2fMRI: Cross-Modal Synthesis for Functional Neuroimaging[J]. 2021.
[2] Trujillo-Barreto N J, Daunizeau J, Laufs H, et al. EEG–fMRI Information Fusion: Biophysics and Data Analysis[M]//EEG-fMRI: Physiological Basis, Technique, and Applications. Cham: Springer International Publishing, 2023: 695-726.
[3] Liu X, Tu T, Sajda P. Inferring latent neural sources via deep transcoding of simultaneously acquired EEG and fMRI[J]. arXiv preprint arXiv:2212.02226, 2022.
[4] Wei H, Jafarian A, Zeidman P, et al. Bayesian fusion and multimodal DCM for EEG and fMRI[J]. Neuroimage, 2020, 211: 116595.
[5] Calhas D. EEG-to-fMRI Neuroimaging Cross Modal Synthesis in Python[J]. 2023.
[6] Tu T, Paisley J, Haufe S, et al. A state-space model for inferring effective connectivity of latent neural dynamics from simultaneous EEG/fMRI[J]. Advances in Neural Information Processing Systems, 2019, 32.
.........
---
Rebuttal 2:
Title: Thank you for the comments - Part 1
Comment: Thank you very much for your additional comments. Please note that this year, **authors are not permitted to upload the revised full manuscript or include links to external pages during this rebuttal period.** We were only allowed to upload a one-page PDF (please kindly refer to our [Global Response](https://openreview.net/forum?id=y6qhVtFG77&noteId=VLcizb9WHq)). Therefore, we are unable to provide the revised manuscript here (in response to your **Points 1 and 4.2**), but will upload it when permitted. Responses to your other new concerns are below:
>**Point 2:**
We appreciate the reviewers' thorough examination of related work. Our rationale for including particular baselines, and further discussion of your reference list ([1-6]), is provided below:
**Ref. \[2\]** is an excellent review article on simultaneous EEG-fMRI, and we will add it to the other simultaneous EEG-fMRI articles that we have cited in our original manuscript (including Ritter et al., 2006; Laufs et al., 2003, 2006; Chang et al. 2013; de Munck et al., 2009). With regard to the DCM model in **Ref. \[4\]**, we had not selected it as a baseline since it requires specifying stimulus onsets along with the neuroimaging data. Since no stimuli are presented during resting state, this model cannot operate on resting-state data without major modifications. In discussing their future directions, the authors of **Ref. \[4\]** state: “Finally, the expansion of the current task-based analysis to the corresponding resting-state methodology, where an equivalent canonical microcircuit formulation for cross spectral data features, will be needed.” **Ref. \[6\]** aims to infer the effective (directed) connectivity of a brain network based on the complementary information provided by EEG and fMRI. This is interesting work too, but does not appear to provide a framework for inferring fMRI time courses from EEG.
As we mentioned in our first rebuttal, we indeed plan to cite **Ref. \[3\]** (which is the preprint version of *\[3\]* in the previous response) and **Ref. \[5\]** (which uses the same model as *\[1,2\]* in the previous response). For **Ref. \[3\]**, we were unable to find the code, and some experimental settings are not specified in their paper, which would make a direct comparison potentially unfair and unreliable, so we did not include it as a baseline. For **Ref. \[5\]**, the model only accepts 64-channel EEG as input (according to the authors’ GitHub repo and PyPi sites). Since we are using EEG data from 32-channel caps, we did not include this model as a baseline here, but look forward to doing so in future studies with 64-channel data. **Ref \[1\]** explored three conventional architectures: FCNN, CNN, and Transformer. However, the paper did not provide code/model parameters and offered limited information about the training process. In our baselines, we have included models that are similar to or more advanced versions of these architectures (see Appendix B.2 in the original manuscript: CNNs *\[19, 23, 24, 32\]* and Transformers *\[16, 44, 36, 32\]*).
| Baselines | If included | Rationale |
| :---- | :---- | :---- |
| [1] | ✗ | - Source code is not available; - Model parameters are not specified; - We already included baselines with similar or advanced versions of these architectures |
| [3] | ✗ | - Source code is not available; - Model parameters are not specified |
| [4] | ✗ | Not applicable without major modifications |
| [5] | ✗ | Only supports 64-channel inputs |
| [7] | ✓ | Provided in the manuscript. |
| [8] | ✓ | Provided in the manuscript. |
We hope that our response helps to clarify the rationale behind the baseline models selected for this submission. While the simultaneous EEG-fMRI field is indeed large, the area of EEG-to-fMRI synthesis is currently a niche but rapidly emerging subfield. We will also include the referenced papers in our revised manuscript, which will appear on this forum when we are permitted to submit.
### **Please see our responses to _<Point 3, Point 4 and references>_ in our [next comment block](https://openreview.net/forum?id=y6qhVtFG77&noteId=wkIJndrOr2)**
---
Rebuttal 3:
Title: Thank you for the comments - Part 2
Comment: ### **Here are the responses to <Point 3, Point 4 and references>** (For responses to **Point 1 and Point 2**, please see the **[comment block above](https://openreview.net/forum?id=y6qhVtFG77&noteId=ssbQ088SCz)**)
>**Point 3:**
If we understand correctly, the reviewer suggests running a grid search for each baseline model to find the parameters that perform optimally on the current dataset and with the current evaluation metrics. In our study, we had adopted the parameter settings recommended by the authors. When evaluating baselines, it is common practice to use the parameters suggested by the original authors, due to the computational burden associated with a comprehensive parameter search across the search space for each application. Moreover, we would like to clarify that we **used the same evaluation metrics as in \[7,8\], i.e., correlation coefficient (i.e., R values).** For the other EEG encoding models, the tasks in the original papers are different (mostly classification tasks like detection of seizures, event type classification, etc.), so metrics like accuracy, AUC, Cohen’s Kappa, and weighted F1 were used in these papers.
To further address your concern regarding differences in dataset, we conducted additional experiments on the ***same dataset*** as in refs. \[7,8\] (eyes open/close task fMRI), predicting the ***same brain regions***, using the ***same settings and objective*** (i.e., intra-subject prediction). The results (R ± S.D) are shown below, with the best values in bold. **We find that our model still outperforms \[7,8\].**
| Model | Pallidum | Caudate | Putamen | Accumbens | Average |
| :---- | :---- | :---- | :---- | :---- | :---- |
| Ours | **0.57 ± 0.18** | **0.58 ± 0.11** | **0.57 ± 0.16** | **0.60 ± 0.11** | **0.58 ± 0.02** |
| \[7\] | 0.43 ± 0.15 | 0.49 ± 0.12 | 0.51 ± 0.14 | 0.43 ± 0.13 | 0.47 ± 0.04 |
| \[8\] | 0.37 ± 0.04 | 0.47 ± 0.14 | 0.48 ± 0.16 | 0.42 ± 0.05 | 0.44 ± 0.05 |
>**Point 4:**
* **4.1)**: We apologize for the confusion, but our intention in saying *"validate the robustness and applicability of our learned representations across neural data from different sites and scanners"* was to convey that our model shows promising applicability to another dataset, even if it is collected under a different condition (task) and on a different scanner (see the **[Global response](https://openreview.net/forum?id=y6qhVtFG77&noteId=VLcizb9WHq)** and **Part 1** in the one-page PDF). We now notice that the word "robustness" could cause confusion, so we will omit this word in our revised manuscript. Evaluating the influence of noise requires simulating different noise levels, which is somewhat outside the current scope of this paper, but it is a very interesting future direction.
* **4.2)**: We had planned to provide further details on the building blocks of our model, including specific details of the multi-scale spectral representation learning module. Multiscale temporal representations and Short-Time-Fourier-Transform (STFT) are proving to be highly effective in representing neural data \[9,10\]. While the components of this module have well-established theoretical foundations in signal processing, our key contribution lies in integrating these components to work effectively together, resulting in a novel method and application (shown through ablation studies in *Section 4.4 Table 2*). Our planned additions include the following:
1. Formally describe (i.e. provide mathematical notations) how we utilized STFT to convert EEG time sequences into time-frequency features.
2. Discuss how the dimensions of frequency features depend on the time-window length with equations illustrating this relationship, and emphasize the rationale for selecting specific window lengths.
3. Introduce the trade-off between temporal and frequency resolution, further clarifying the reasoning behind using an approach that considers multiple window lengths.
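For concreteness, the multi-window STFT featurization outlined in points 1–3 could be sketched as follows. The sampling rate and window lengths here are illustrative assumptions, not the paper's actual settings:

```python
# Sketch of multi-window STFT features for a single EEG channel.
# fs and window_lengths_sec are illustrative assumptions.
import numpy as np
from scipy.signal import stft

fs = 250                              # assumed EEG sampling rate (Hz)
window_lengths_sec = [0.5, 1.0, 2.0]  # hypothetical multi-scale windows

def multiscale_stft_features(x, fs, window_lengths_sec):
    """Return one |STFT| magnitude map per window length.

    Longer windows give finer frequency resolution (more frequency
    bins: nperseg // 2 + 1) but coarser temporal resolution (fewer
    frames) -- the trade-off motivating the multi-window approach.
    """
    features = []
    for w in window_lengths_sec:
        nperseg = int(w * fs)
        f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
        features.append(np.abs(Z))    # shape: (nperseg // 2 + 1, n_frames)
    return features

x = np.random.randn(10 * fs)          # 10 s of synthetic EEG
feats = multiscale_stft_features(x, fs, window_lengths_sec)
```

The frequency dimension of each map depends directly on the window length, while the number of time frames shrinks as windows grow, which is the resolution trade-off described in point 3.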
Thank you once again for your valuable comments. We hope our responses fully address your concerns.
**References:**
[1-6]: see refs. [1-6] in the Reviewer’s previous comment
[7] Li et al. “Leveraging sinusoidal representation networks to predict fMRI signals from EEG”
[8] Kovalev et al. “fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships.” arXiv preprint.
[9] Van De Ville et al. “When makes you unique: Temporality of the human brain fingerprint."
[10] Samiee et al. “Epileptic seizure classification of EEG time-series using rational discrete short-time Fourier transform” | Summary: The manuscript proposes an EEG-to-fMRI synthesis model. The framework implements a transformer architecture and uses a multi-channel feature combination expanded across the temporal axis. To evaluate the proposed model, EEG and fMRI data from 22 participants were recorded while they were in the resting state with eyes closed.
Strengths: The manuscript addresses an interesting problem and can open opportunities for multimodal neuroimaging analysis. Overall, this line of investigation is little explored, therefore, the present manuscript is novel and of interest to the community.
The present manuscript is quite complete, (1) the model and rationale behind are sound; (2) a dataset is collected which allows a faithful evaluation of the proposed translation (from EEG to fMRI); (3) it's rather easy to read and follow the manuscript, (4) the reported results are promising.
Weaknesses: The biggest weakness is that the framework and the dataset are only addressing the resting state. While this is an important baseline to investigate, it would have been great to explore the fidelity of the proposed framework when participants are presented with some stimuli.
It is unclear whether the source code and dataset will be released publicly.
The stability of the results is not fully guaranteed given that no statistical analyses are performed.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is "In-scan" in Table 1? This is not explained in the manuscript.
It is unclear to me why all methods are not evaluated for both inter-subject and in-scan. For example, isn't it possible to evaluate BIOT [44] for inter-subject data?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The manuscript sufficiently discusses its current limitations. I think the limitations section should tap into the particular scenario that the model has been evaluated on (resting state) and whether the results will be generalisable to other scenarios remains to be shown.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review and encouraging feedback on our manuscript! We are delighted that the reviewer found our work novel and of interest to the community. We are very encouraged by the reviewers’ evaluation on this work opening up opportunities for multimodal neuroimaging analysis. We greatly appreciate the positive comments on the soundness of our model, the clarity of our manuscript, and the promise of our results. We have addressed their specific questions below and included additional details in our global response.
> **Evaluation on task fMRI:**
Our motivation for focusing on resting-state fMRI data stems from the rich information that can be gained from naturally evolving brain dynamics. We sought to address the question of reconstructing resting-state fMRI signals since the spontaneous nature of the signals could present challenges beyond reconstructing block- or event-related fMRI task data.
But we strongly agree with the reviewer that it is important to investigate the fidelity of our proposed model on fMRI data that contains a task. In this rebuttal, we now include an auditory task EEG-fMRI dataset for an in-depth evaluation, please kindly refer to the **global response part 1** for details.
> **Data and code availability:**
We will release all datasets on OSF, code on GitHub, and model weights on HuggingFace upon acceptance. This will provide the necessary resources for the community to validate, further explore, and build upon our models.
> **Statistical analyses:**
Please kindly refer to the global rebuttal response, where we provide additional results from our statistical analyses.
> **Question 1:**
“In-scan” stands for within-scan (i.e., intra-scan) prediction, where part of the scan is used for training and another part is used for testing. We clarified this terminology in the revised version.
>**Question 2:**
All methods can be applied to both inter-subject and in-scan predictions. We would like to clarify that we evaluated BIOT for both inter-subject and in-scan (intra-subject) frameworks (as shown in Table 1 in our original manuscript). For the in-scan evaluation, we originally selected the state-of-the-art EEG encoders (LaBraM [1] and BIOT [2]) as representative baselines for comparison. However, in our analysis for this rebuttal, we have also evaluated the intra-subject prediction performance on the two EEG-to-fMRI baselines (Li et al. [3] and BERIA [4]) to provide a more comprehensive analysis. Please see the comparison below, which shows the means and S.D. of correlation between prediction and ground truth (the full MSE table will be included in the revised manuscript). The significance of the paired t-test between our model and other baselines is indicated as follows: *: _p_<0.05; **: _p_<0.01; ***: _p_<0.001.
| Model | Cu | He | MF | PA | Pu | Th | GLB |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Ours | **0.59±0.17** | **0.57±0.18** | **0.50±0.17** | **0.56±0.14** | **0.44±0.18** | **0.48±0.21** | **0.59±0.16** |
| LaBraM\[1\] | 0.54±0.18 | 0.52±0.20\*\* | 0.49±0.15 | 0.49±0.18\* | 0.41±0.18 | 0.45±0.18 | 0.49±0.17\*\* |
| BIOT\[2\] | 0.53±0.22 | 0.52±0.21\* | 0.49±0.16\*\* | 0.46±0.11\*\*\* | 0.41±0.21 | 0.41±0.23 | 0.49±0.13\*\*\* |
| Li et al.\[3\] | 0.46±0.22\*\* | 0.51±0.21 | 0.38±0.17\*\*\* | 0.46±0.20\* | 0.32±0.18\*\* | 0.40±0.19\* | 0.58±0.17 |
| BERIA\[4\] | 0.36±0.24\*\*\* | 0.40±0.24\*\*\* | 0.29±0.23\*\*\* | 0.32±0.22\*\*\* | 0.23±0.19\*\*\* | 0.24±0.10\*\*\* | 0.46±0.24\*\* |
> Cu: Cuneus; He: Heschl’s gyrus; MF: Middle Frontal Gyrus Anterior; PA: Precuneus Anterior; Pu: Putamen; Th: Thalamus; GLB: Global Signal
Again, we genuinely appreciate the reviewer’s encouraging feedback and constructive suggestions! Please feel free to let us know if you have any further questions or comments.
[1] Jiang, Wei-Bang, et al. "Large brain model for learning generic representations with tremendous EEG data in BCI." arXiv preprint arXiv:2405.18765 (2024).
[2] Yang, Chaoqi, et al. "Biot: Biosignal transformer for cross-data learning in the wild." NeurIPS 36 (2024).
[3] Li, Yamin, et al. "Leveraging sinusoidal representation networks to predict fMRI signals from EEG." Medical Imaging 2024: Image Processing. Vol. 12926. SPIE, 2024.
[4] Kovalev, Alexander, et al. "fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships." arXiv preprint arXiv:2211.02024 (2022).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for responding to my questions. I do not have any further questions.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you once again for your valuable and encouraging feedback! We sincerely appreciate the time and effort you put into the review process. | Summary: In this work, the authors present a deep learning architecture for inferring functional magnetic resonance imaging (fMRI) signal from electroencephalography (EEG) data. The proposed model, named NeuroBOLT, utilizes transformer backbones, and spatial, temporal, and frequency-based features from the EEG are utilized for reconstruction. The authors demonstrate the performance of their architecture on a small (N=22) proprietary data set of simultaneously measured EEG-fMRI.
Strengths: fMRI reconstruction from simultaneous EEG is a fascinating topic, and a difficult problem to tackle. The approach taken by the authors in this work is novel for the task at hand, i.e. using a multi-scale spectral feature embedding. Although the decision to use multi-scale spectral embeddings is not new in MRI analysis, as far as I could find the approach has not been utilized for this particular problem and the authors address novel problems for their application to simultaneous EEG-fMRI data in a deep learning architecture. At best this paper is a novel methodological tweak applied with state of the art architectures to see improvements over other deep learning baselines.
The breadth of the experiments attempted by the authors is promising; however, see my discussion below for more of a discussion of the limitations of the experiments performed.
The authors also perform an ablation study to explore how the inclusion of Multi-Scale Spectral features improves model performance, thus demonstrating the benefit of combining the multi-scale spectral features with the spatiotemporal. This is well appreciated.
Weaknesses: The major weaknesses of this work come down to weaknesses in the empirical evaluation. I am afraid that in its current state, the evaluation does not lead to a convincing demonstration of this method for fMRI reconstruction, and the claims in the introduction about novelty coming from the application to multiple brain regions and resting-state fMRI seem somewhat overemphasized. Currently this brings it to a full reject as the paper is otherwise sound, but the limitations in the evaluation are significant enough to bring it well below the threshold, and I believe they cannot be easily addressed in the rebuttal.
First, I will highlight the lack of reported standard deviations or error bars in any of the results. No standard deviations are provided in tables 1 or 2, or in any of the figures providing results. In the checklist, the authors state "Error bars are not reported at the current stage because it would be too computationally expensive to compute over all the brain regions and for all participants, also there is limited space in the paper to put all the statistics. But we could always add this information if reviewers think it’s important to know."
While I appreciate the authors' acknowledgement of this exclusion, I do think error bars are absolutely necessary to demonstrate the efficacy of the proposed method. The demonstrated improvements are often quite small (e.g. improvement from 0.540 to 0.588 in table 1), and it is not clear whether the purported improvements can be explained away by model noise. I could not find any information about controlling model initialization or seeds as well to ensure that random initializations played a less significant role between experiments even with the same architecture on different regions. I absolutely think error bars are necessary for this work, and the reasoning provided by the authors is not mitigated elsewhere or behind a more significant barrier than training and evaluation time. Additionally, the authors could have mentioned this omission in the limitation section of their main paper since I had to go to the checklist to be sure the authors were aware of the issue.
Second, in the abstract the authors highlight the ability to "generalize to other brain areas" and "other conditions (such as resting state)". Unless I am missing something, I cannot find any experiments by the authors that address these particular gaps. The authors do provide inter- and intra-subject predictions which is interesting; however, their model is still only trained on individual ROIs, and they don't include any experiments demonstrating transfer learning between models trained on other regions, and they do not include any experiments studying other tasks BEYOND resting-state fMRI. Thus, the paper falls into the same limitation as past works which were only focused on task, just in the other direction. This work would have been much more compelling if they could demonstrate a model which trained well both on task and rest related data, or even better, which could reconstruct task-related data despite only being trained on resting-state fMRI. The acknowledgement of the limitations in the literature is thus misleading as the proposed method still suffers from these same limitations. The choice to only demonstrate the results for several ROIs highlights this limitation - it would be okay if the authors did not seem to imply elsewhere that their model gets around the single-ROI training approach from past methods.
Clearly, the N in this study is quite small. This is to be expected as simultaneous EEG-fMRI is still quite rare as a sequence to collect; however, the authors seem to gloss over all of the myriad issues which will come with training their data over such a small data set. I am not penalizing this work for the small N in and of itself, but I cannot find any mention of common obstacles such as overfitting, bias towards particular kinds of reconstruction errors, and other limitations that would inevitably arise. I am extremely surprised I could find no mention of pretraining anywhere in this work, which I almost imagine would be hugely necessary for these kinds of studies with very small data sets. Again, it's not necessarily a limitation in and of itself to not do these things, but with how the paper currently reads, these seem to be touted as benefits of the new approach which are not backed up by evidence.
Technical Quality: 3
Clarity: 3
Questions for Authors: How well does the model trained on other ROIs transfer to reconstruction of completely different ROIs?
How might scanner model and particular parameters of the resting state sequence affect reconstruction?
Why do you not compare anywhere with Source Localization? Source Localization is only mentioned once offhand, and its limitations and efficacy as a reconstruction technique are not gone into in detail. I am surprised it was not included as a baseline method in fact.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors do provide some discussion of the limitations of their work; however, as I have noted above, there are some limitations which are omitted from the main body of text which at least should have been acknowledged in this section.
The authors state they have IRB approval in section 4.1 of their paper. I see no reason for additional review.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate your excellent suggestions! We address specific concerns below. Please see the PDF in the general response for additional details and figures.
> **Evaluation on task fMRI**
Our motivation for focusing on resting-state fMRI stems from the rich information that can be gained from naturally evolving brain dynamics. We sought to address the question of reconstructing resting-state fMRI signals since the spontaneous nature of the signals could present challenges beyond reconstructing block- or event-related fMRI task data.
But we strongly agree that it would be more compelling if our proposed model is evaluated on task data, in addition to rest. In this rebuttal, we now include an auditory task EEG-fMRI dataset for additional evaluation, please kindly refer to the global response part 1 for details.
> **Lack of error bars/standard deviations**
We have now revised our work to include error bars in figures and standard deviations in tables, and statistical significance. Please see the global response and the PDF attachment for details.
> **Model initialization and seeds**
To ensure consistency during model training, we set a fixed seed. This clarification has been added to our revised manuscript. While this is certainly an important robustness test, we plan to focus on this point in a future study due to the large scope of the current analyses. We will raise this important point in the Discussion of the revised manuscript.
> **Single-ROI training**
We now realize that the wording "generalize to other brain areas" can imply that a single trained model can predict fMRI signals from different brain areas. In fact, we had meant to convey that our modeling framework can be trained to reconstruct the fMRI signal from an arbitrary brain region (but, indeed, ROI-specific models are trained). We have now clarified the wording in an effort to avoid this misunderstanding.
> **Small N**
We acknowledge the small sample size and agree that it is important to make this point clear to readers. We have carefully reviewed the entire manuscript and made sure this limitation is discussed more clearly in the main text.
Regarding pretraining, we would like to clarify that we do leverage the pre-trained weights from the EEG foundation model (LaBraM [1], trained on 2500 hours of various types of EEG data from around 20 datasets) for our “Spatiotemporal Representation Learning module.” In the original manuscript, we had briefly discussed pretraining in the last paragraph of the introduction section and in section B.1 of the appendix. We have now added clarifications across the manuscript (specifically in the Methods and in Section 4, Experiments: Implementation Detail), describing the pre-training procedure and its importance.
Moreover, the additional results on the task-based fMRI collected at a different site demonstrate that training our model on a relatively small sample of resting-state data shows promising potential to generalize to different task conditions and hardware.
[1] Jiang, Wei-Bang, et al. "Large brain model for learning generic representations with tremendous EEG data in BCI." arXiv preprint arXiv:2405.18765 (2024).
> **Question 1**
Please also see our response to “Single-ROI training” above. The current framework takes multi-channel EEG as input and predicts the fMRI ROI signal from the region on which the model is trained. We would therefore expect that a model trained for one ROI would tend to reconstruct other fMRI ROIs if they share correlated temporal fluctuations. In future work, we will extend the model to predict multiple fMRI ROI signals at once. We are revising the main text accordingly.
> **Question 2**
While our new task analysis presents an initial probe into this question, showing promising performance on task data acquired with different hardware/parameters, answering this question directly would require future work that systematically investigates altering scan parameters (while keeping the task condition, and ideally the subject, consistent) and would involve collecting more data. We believe this is a useful avenue for future work.
> **Question 3**
This is a great point! Source localization is indeed a valuable method. However, we did not include source localization for the following reasons:
**(1) Limited number of EEG electrodes:** Our EEG data consist of 32 electrodes. After excluding the ECG, EOG, and EMG channels from the model input, this is reduced to 26 channels. However, larger numbers of channels (e.g., >64) may be needed for sufficiently accurate source localization [1-2].
**(2) Hemodynamic filter between source electrical signal and fMRI signal:** Even if we can map the EEG into source space, the output signals would be neural electrical signals. Our present study focuses on predicting fMRI hemodynamic signals, which are blurred/delayed relative to electrical signals. Although fMRI signals can be estimated by convolving neural signals with the hemodynamic response function (HRF), the shape and peak of the HRF vary across different brain regions and individuals, which would pose challenges for aligning between fMRI and source-reconstructed EEG signals.
However, we do think that source localization is an excellent future direction.
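The hemodynamic blurring point above can be illustrated by convolving a neural signal with a canonical double-gamma HRF. The parameters below are textbook defaults, not values from this work, and the "neural" signal is a synthetic stand-in:

```python
# Illustration of hemodynamic blurring: convolve a neural (electrical)
# signal proxy with a canonical double-gamma HRF. All parameters here
# are textbook defaults, not the paper's settings.
import numpy as np
from scipy.stats import gamma

dt = 0.1                      # sampling interval in seconds (assumed)
t = np.arange(0, 30, dt)      # 30 s of HRF support
# Canonical double-gamma HRF: peak near 5 s, undershoot near 15 s
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()              # normalize to unit area

rng = np.random.default_rng(0)
neural = (rng.random(3000) < 0.02).astype(float)   # sparse activity proxy (300 s)
bold = np.convolve(neural, hrf)[: len(neural)]     # delayed, smoothed hemodynamic proxy
```

The ~5 s peak delay and regional/individual variability in the HRF shape are exactly what makes direct alignment between source-reconstructed EEG and fMRI signals difficult.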
[1] Michel, Christoph M., and Denis Brunet. "EEG source imaging: a practical review of the analysis steps." Frontiers in neurology 10 (2019): 325.
[2] Michel, Christoph M., et al. "EEG source imaging." Clinical neurophysiology 115.10 (2004): 2195-2222.
We genuinely appreciate your commitment to ensuring the rigor of our paper and the constructive suggestions. These insights have significantly improved this work. We hope our rebuttal addresses your concerns and please let us know if you have any additional questions/comments!
---
Rebuttal 2:
Title: Score Adjustment
Comment: The rebuttal provided by the authors addresses much of the concerns raised here. The requested error bars somewhat lower the impact of the model compared to baselines, as there is high variance and many baselines perform fairly comparably; however, there is demonstrative improvement. I have raised my score to a borderline accept.
---
Rebuttal Comment 2.1:
Title: Thank you for increasing your score
Comment: Thank you so much for your prompt response and raising the score. We are happy that our additional experiments have improved this work, and sincerely appreciate your feedback, which helps to improve the quality of our paper.
We acknowledge that the error bars are relatively large across all methods. While variability in the performance across individuals presents a challenge, we find it promising that our method achieves consistently higher mean performance than the baselines for all (or all but one) of the ROIs in the intra/inter-scan comparisons (Figs. R1.2, R2.1, R2.2 in the one-page PDF), though indeed not all of these differences are statistically significant. By contrast, considering only the baseline methods, these appear to vary across ROIs in terms of which model achieves the highest mean performance. Additionally, we believe this is the first study to incorporate and adapt these state-of-the-art EEG Encoding baselines in this specific application of EEG-to-fMRI translation.
We believe that your comment is highly valuable, and will include a discussion of this limitation in our revised manuscript. In addition, we are planning further in-depth exploration of the factors that may drive performance variability across individuals in our future work, as this is indeed an important issue.
Thank you once again for your valuable comments! | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and their insightful, constructive suggestions. We are excited that all reviewers find the topic of our paper important and fascinating. Reviewers found our study to be novel and well-motivated with promising results, and also noted that our manuscript is well-organized (Reviewer bwvZ) and highly readable (Reviewers rcEv, bwvZ).
We have addressed below two main concerns that are raised by the reviewers: absence of (1) evaluation on task-related fMRI, and of tests regarding generalizability of our model from resting-state to task-related data; and (2) statistical measures to demonstrate the model's efficacy and feasibility. In addition to this global response, we include point-by-point responses addressing the comments of each reviewer in the separate response sections below.
> #### **1. Generalization to task-related fMRI**
One concern that the reviewers shared involved the lack of task fMRI data in our experiments. Accordingly, we have now run experiments using **a simultaneous EEG-fMRI dataset collected during auditory tasks** (see details below **\*\*Auditory Task**). We conducted additional experiments using this dataset: **(1) zero-shot generalization,** where we pre-trained our model on resting-state fMRI data and evaluated the performance on task fMRI data; **(2) intra-subject prediction and inter-subject prediction,** where we trained and evaluated models using only task fMRI data; **(3) fine-tuning,** where models trained on resting-state data were fine-tuned with task fMRI data, and **(4) joint-training,** where we jointly trained (using both resting-state and auditory task fMRI data) and evaluated the model on the respective held-out test sets of both datasets. For these experiments, we used the same parameter settings for the model training and optimization as in the main paper. Specifically, in these experiments we addressed:
* **Feasibility of our model framework on task fMRI (new experiments 2\)**: In the PDF attachment, Fig R1.1 shows the **intra-subject** prediction performance distribution, with average correlation \= 0.51±0.18 across 7 ROIs and 16 scans. Fig R1.2 displays the **inter-subject**, unseen whole-scan prediction (trained on 3+6 scans, validated on 1+2 scans, tested on 2+2 scans for Fast+Sparse auditory tasks) performance compared with other baselines. Our model exhibits superior performance across 6 ROIs.
* **Generalizability of our model trained with resting-state data (new experiments 1, 3 and 4).** The performance of our pretrained model on *zero-shot* whole-scan task fMRI reconstruction (see in Table R1, the first row of data) achieved performances comparable to that of our original resting-state data, with even better performance in several regions compared with the model that was trained only on task fMRI. With further *fine-tuning* on the task dataset, the performance improved significantly. Moreover, carrying out *joint training* using both resting-state and auditory task fMRI datasets resulted in the best performance across 4 ROIs in task fMRI prediction. Our results also suggest that joint training is not necessarily facilitating the prediction on resting-state fMRI (beyond training on resting-state data alone), which might be due to the smaller sample size of task data and richer variability of brain dynamics in the resting state.
The results from the above experiments demonstrate initial evidence that our model is indeed able to generalize across resting-state and task fMRI data, as well as across different sites and hardware settings.
> #### **2. Statistical measures**
Another concern raised by the reviewers is the lack of statistical measures to demonstrate the model's efficacy and feasibility. To address this, we added bar plots (with error bars showing S.D.) and indicated the statistical significance for resting-state intra- and inter-subject predictions. For statistical testing, the correlation values are firstly Fisher-z transformed and a paired t-test was employed to test the significance of improvement of our model compared with other baselines (\*: _p_-value\<0.05; \*\*: _p_-value\<0.01; \*\*\*: _p_-value\<0.001). Our task fMRI results shown in the PDF attachment also include the S.D. and error bars, and the statistical significance of the baseline comparisons.
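The testing procedure above can be sketched in a few lines, with synthetic per-subject correlation values standing in for the real ones:

```python
# Sketch of the significance test described above: Fisher z-transform
# the per-subject correlations, then run a paired t-test between our
# model and a baseline. Values here are synthetic placeholders.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
# Synthetic correlations for 22 subjects: "ours" vs. one baseline
r_ours = np.clip(rng.normal(0.55, 0.15, size=22), -0.99, 0.99)
r_base = np.clip(r_ours - rng.normal(0.05, 0.03, size=22), -0.99, 0.99)

# Fisher z-transform, then paired t-test on the transformed values
z_ours, z_base = np.arctanh(r_ours), np.arctanh(r_base)
t_stat, p_value = ttest_rel(z_ours, z_base)

stars = "***" if p_value < 1e-3 else "**" if p_value < 1e-2 else "*" if p_value < 0.05 else "n.s."
print(f"t = {t_stat:.2f}, p = {p_value:.4g} ({stars})")
```

The Fisher z-transform (`arctanh`) is used because raw correlation values are bounded and skewed, whereas the transformed values are approximately normal, which makes the paired t-test appropriate.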
> **3. Data and code availability.**
We will release all datasets on OSF, code on GitHub, and model weights on HuggingFace upon acceptance. This will provide the necessary resources for the community to validate, further explore, and build upon our models.
Finally, we would like to express our deep appreciation for the reviewers' comments and their recognition of the promising potential of our work to inspire future advancements in multi-modal neuroimaging. All the additional results (experiments on task fMRI, R values, and MSE values with S.D.) will be properly incorporated into our revised manuscript, with detailed information about task-fMRI data collection/preprocessing and implementation details of the additional experiments.
> \*\* **Auditory Task details.** Specifically, binaural tones were delivered with randomized inter-stimulus intervals (ISI), and there are two versions of the task that differed only in the timing of tone delivery: (1) **a fast ISI version** (6 scans, 500 TR/scan, ISI mean(SD) \= 5.6(0.7) sec, TR \= 2.1 sec), (2) **sparse ISI version** (10 scans, 693 TR/scan, ISI mean(SD) \= 40.1(14.6) sec, TR \= 2.1 sec). Subjects were asked to keep their eyes closed the entire time and to make a right-handed button press as soon as possible upon hearing a tone. This dataset was collected at a different site, different MR scanner (3T Siemens Prisma scanner) and using a slightly different EEG cap (32 channels but with partially different electrode settings). Detailed information about data collection and preprocessing will be included in the revised version of the manuscript.
Pdf: /pdf/c6f6711ee1176fbfeb0380deff6cceda9630a663.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration | Accept (poster) | Summary: The authors aim to develop a knowledge distillation method that addresses the challenges posed by heterogeneous device prototypes in federated learning. By capturing the knowledge transfer among device prototypes, the proposed TAKFL tries to preserve each device's unique contribution and prevent knowledge dilution during the learning procedure. The method incorporates a self-regularization technique to address issues arising from noisy and unsupervised ensemble distillation. Evaluation on several CV and NLP tasks demonstrates the method's accuracy and scalability.
Strengths: The authors focus on knowledge distillation in heterogeneous settings, which is meaningful to real-world tasks. The overall presentation is clear and easy to understand. The proposed TAKFL shows some primary theoretical analysis of the learning efficiency. The evaluation provides a quantitative analysis to compare model accuracy and scalability with previous methods.
Weaknesses: While the paper presents a clear presentation and evaluation, there are a few aspects to be strengthened.
1. The concept of "task arithmetic" in the context of federated learning is vague. It would be beneficial to explain how this concept enhances the design of the federated learning process.
2. While the knowledge distillation process is described, it appears to be largely based on existing methodologies. It would be interesting to explore any novel design elements introduced in TAKFL. Additionally, what is the theoretical convergence order of the proposed method?
3. Although TAKFL demonstrates higher model accuracy in performance comparisons, the baselines used, such as FedAvg and FedDF, seem outdated. Incorporating more recent baselines could provide a more rigorous evaluation of TAKFL's effectiveness.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the concerns and questions mentioned in the weakness part.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your invaluable review and pointing out that our proposed method shows **some primary theoretical analysis of the learning efficiency**.
### **Response to 1**
To clarify, we use the concept of task arithmetic to address limitations of previous approaches and enhance knowledge transfer across heterogeneous device prototypes in FL. We extend the notion of “task vector” from centralized learning to FL by considering the averaged locally updated parameters as the “pre-trained” model parameters and the distilled model parameters as the “fine-tuned” parameters (refer to Figure 2 in paper). Task vectors encapsulate the distinct contributions of each prototype's ensembles to the student model, representing transferred knowledge (Section 5.2).
Our task arithmetic approach enhances the design of knowledge transfer across diverse heterogeneous device prototypes in FL by:
**(1) Effective Distillation of Unique Knowledge:** Task arithmetic facilitates the distillation of unique knowledge from each prototype’s ensembles. Prior works neglect individual strengths and average logits, causing dilution and information loss (Sections 4, 5.1, Remark 1).
**(2) Customized Knowledge Integration:** Our adaptive task arithmetic operation (Eq 8) allows the student model to customize knowledge integration based on its capacity and the helpfulness of other prototypes, enhancing performance. Prior works use a one-size-fits-all approach, ignoring the student model’s capacity and prototypes’ helpfulness (Sections 4, 5.2, Remark 2).
We have further verified the enhancements of our task arithmetic approach through both theoretical analysis and extensive experiments (Sections 6, 7, D.1.1). To the best of our knowledge, our work is the first of its kind to introduce the notion of “task vector” in FL to enhance knowledge transfer across diverse heterogeneous device prototypes.
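As a rough illustration of the construction described above, the following minimal sketch shows a task vector (distilled "fine-tuned" parameters minus averaged "pre-trained" parameters) and an adaptive merge in the spirit of Eq. 8. All names here (`task_vector`, `adaptive_merge`, `merge_coeffs`) are our own illustrative choices, not the paper's actual API:

```python
# Hedged sketch of the FL task-vector idea: parameters are plain dicts of arrays.
import numpy as np

def task_vector(pretrained, finetuned):
    """Task vector = distilled ("fine-tuned") params minus averaged ("pre-trained") params."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def adaptive_merge(student_params, task_vectors, merge_coeffs):
    """Adaptive task arithmetic (cf. Eq. 8): student + sum_i lambda_i * tau_i,
    with merging coefficients kept inside the effective range (0, 1]."""
    merged = {k: v.copy() for k, v in student_params.items()}
    for tau, lam in zip(task_vectors, merge_coeffs):
        lam = float(np.clip(lam, 1e-6, 1.0))  # stay within (0, 1]
        for k in merged:
            merged[k] = merged[k] + lam * tau[k]
    return merged
```

For example, with a zero-initialized student and a single all-ones task vector, a coefficient of 0.5 moves every parameter halfway toward the distilled model.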
### **Response to 2**
We highlight the key distinctions and novelties of our knowledge distillation methodology compared to existing methods:
**(1) Diversity in Device Capabilities:** Existing methods often overlook device capability diversity and apply logit averaging, causing dilution and interference. TAKFL addresses this by treating knowledge transfer from each prototype as separate tasks, distilling them independently to ensure unique contributions are captured without interference.
**(2) Customized Knowledge Integration:** Existing methods use a single integrated distillation target for all student models, failing to provide customized knowledge integration. TAKFL employs task vectors in FL as a unique representation of transferred knowledge from each prototype. This enables the student model to customize knowledge integration based on each prototype's knowledge quality and its own capacity via adaptive task arithmetic (Eq 8), enhancing performance.
**(3) Handling Noisy and Unsupervised Distillation:** Existing methods often neglect challenges associated with noisy and unsupervised ensemble distillation, which may cause the student to forget its own knowledge and drift into erroneous directions. TAKFL incorporates a novel KD-based self-regularization technique to mitigate these issues.
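To make point (3) more concrete, one plausible form of a KD-based self-regularization objective is sketched below: the student is distilled toward the (possibly noisy) ensemble target while a second KL term anchors it to its own pre-distillation predictions. The weighting `beta` and the exact anchor term are our assumptions for illustration, not necessarily the paper's formulation:

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    """KL divergence between two probability vectors."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def self_regularized_kd_loss(student_logits, ensemble_logits, pre_distill_logits, beta=0.5):
    """Distillation toward the ensemble target, plus a KL anchor to the student's
    own pre-distillation predictions so it does not forget its knowledge."""
    distill = kl(softmax(ensemble_logits), softmax(student_logits))
    anchor = kl(softmax(pre_distill_logits), softmax(student_logits))
    return distill + beta * anchor
```

If the student drifts away from its pre-distillation predictions, the anchor term grows, penalizing forgetting even when the distillation term is small.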
Regarding the theoretical convergence, TAKFL consists of two processes in each round: (1) per-prototype FL and (2) server-side knowledge distillation and integration via task arithmetic. The convergence order of per-prototype FL is the same as FedAvg [1], with various per-prototype FL optimizer options, and the server-side KD is similar to centralized neural network training [2], as the KL loss has the same functional form as cross-entropy. Thus, the convergence of the complete task-arithmetic model merging is simply the sum of these two procedures. We will include this in a Remark in the final revision of our paper.
We have also obtained the convergence plot for (CIFAR-10, Dir(0.3)) in Table 1 of the paper, included in Figure 2 of the PDF rebuttal. TAKFL shows similar convergence behavior to the baselines while achieving better performance. We will include this plot in the final version of the paper.
### **Response to 3**
We appreciate the reviewer highlighting TAKFL's higher performance compared to the baselines. The existing body of work on global device-heterogeneous FL consists of two distinct approaches and problem formulations:
**(1) Partial Model Training Methods:** These aim to train a single global model while accommodating heterogeneous sub-models at the devices according to their compute resources, with several SOTA baselines.
**(2) Knowledge Distillation Methods:** These address a more practical scenario where device prototypes with heterogeneous model architectures participate in FL to enhance their global model performance through knowledge distillation. The current SOTA baselines are FedDF and FedET, which we have comprehensively compared in our work. These methods mainly focus on settings where device prototypes have similar capabilities.
Our work follows (2) and introduces TAKFL to address limitations of existing methods in the *underexplored* diverse device heterogeneous settings. We believe our work can significantly contribute to the FL literature and establish a new SOTA benchmark. For a detailed discussion on related works, we kindly refer the reviewer to Appendix A, where we reference recent surveys from 2023 and 2024.
To further evaluate TAKFL's effectiveness, we experimented using FedOpt [3], a more advanced FL optimizer for per-prototype FL, and included the results in Table 3 of the PDF rebuttal. As shown, TAKFL achieves SOTA performance results independent of the specific FL optimizer used. We will include this experiment in the final version of our paper.
### References
[1] Haddadpour, et al. "Local sgd with periodic averaging: Tighter analysis and adaptive synchronization." NeurIPS’19
[2] Aggarwal. “Neural networks and deep learning”. Springer (2018)
[3] Reddi et al. "Adaptive Federated Optimization." ICLR’21
---
Rebuttal Comment 1.1:
Title: Kind Reminder: End of Author-Reviewer Discussion Period is Tomorrow
Comment: Dear Reviewer ME5v,
We sincerely appreciate your extensive efforts in reviewing and commenting on our work. We hope that our rebuttal has comprehensively addressed your comments and concerns.
As the author-reviewer discussion period is approaching its end tomorrow, August 13th, we kindly ask you to reach out if you have any further questions or points of discussion. Your feedback is highly valued, and we are eager to address any remaining concerns you may have.
Thank you very much for your attention to this matter. | Summary: The paper focuses on the problem that traditional federated learning methods fail to effectively handle scenarios where devices have widely varying capabilities. It improves existing Knowledge Distillation (KD) methods, which are inadequate in these heterogeneous environments. Experimental results show the validity of the proposed method.
Strengths: 1. TAKFL treats knowledge transfer from each device prototype’s ensembles as separate tasks and distills them independently.
2. The paper is well written, with comprehensive experiments.
Weaknesses: 1. Assumption on the Weight Disentanglement Property: Theorem 2 relies on the assumption of the weight disentanglement property (lines 679-681 in the appendix), which is too strong. In practice, achieving weight disentanglement is challenging. Studies [1,2] demonstrate that there are interferences among task vectors, making disentanglement difficult. [3] achieves disentanglement using the Neural Tangent Kernel (NTK) and shows that without disentanglement the performance drops. Consequently, asserting the disentanglement property is problematic, thereby limiting the theoretical impact.
2. Similar to weight conflicts when merging, averaged logits also have conflicts, yet the paper only considers the vanilla average logit as the KD loss. However, there are studies that resolve this issue; for example, [4,5] have proposed better ensemble KD loss designs, and more exist in the literature. Therefore, incorporating these methods for comparison is important.
3. The computation cost of the method is high, as the number of distillation processes is O(M^2), which grows quadratically with the number of prototypes.
4. The changes compared to [6] are minor. Initially, I believed this method could be simple and efficient. However, upon reviewing the weight disentanglement property, I have concerns about its practical validity, which limits the novelty of the approach.
5. A minor issue is that the method only applies to data with prototype labels, which may limit its impact.
[1] TIES-Merging: Resolving Interference When Merging Models
[2] Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
[3] Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models
[4] Adaptive Multi-Teacher Multi-level Knowledge Distillation
[5] Agree to Disagree: Adaptive Ensemble Knowledge Distillation in Gradient Space
[6] Ensemble distillation for robust model fusion in federated learning.
Technical Quality: 3
Clarity: 3
Questions for Authors: please refer to Weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: please refer to Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your invaluable review and positive remarks on the strengths of our paper in **improving existing KD methods that are inadequate in diverse device heterogeneous environments** and **comprehensive experiments**.
### **Response to 1**
Studies [1,2] primarily address parameter interference and redundancy, proposing methods to mitigate them. These studies focus on parameter interference, not weight disentanglement; the two notions are not equivalent.
Study [3] (NeurIPS’23), our cited reference (reference [37]), provides theoretical insights into task arithmetic, introducing weight disentanglement as a necessary condition. They show that “task arithmetic in non-linear models cannot be explained by NTK,” and weight disentanglement is actually the sole requirement for task arithmetic “where distinct directions in weight space correspond to changes of the network in disjoint regions of the input space ... This allows a model to perform task arithmetic by independently manipulating these weight directions.” They also demonstrate that weight disentanglement is an emergent property of pre-training and propose fine-tuning models in their tangent space to enhance this property. Please note that the quotes are the original text from [3]. Therefore, asserting weight disentanglement property is not problematic per [3].
In our theoretical framework, we have cited [3] for weight disentanglement, as it is the established theoretical framework for studying task arithmetic. Our theoretical framework makes the following contributions: (1) It presents the first theoretical framework using the concept of capacity dimension to understand the effectiveness of KD in device-heterogeneous FL. (2) Our theoretical analysis of TAKFL is based on the theory in [3]. (3) It presents an argument consistent with novel numerical experimental findings, showing that TAKFL performs KD more effectively, without information loss.
### **Response to 2**
We used vanilla average logits KD in our methodology for knowledge distillation from individual prototypes because it is the most representative and canonical form of KD. This avoids any bias towards the advantages or disadvantages of specific KD implementations, making our comparisons more transparent and informative.
Regarding [4,5], these methods are designed for KD in a centralized learning setting, where teachers are fully trained and the distillation dataset aligns with the actual learning objective, including a cross-entropy supervision loss. However, in FL, the ensembles are noisy, and the distillation dataset differs from the private dataset and lacks a supervised cross-entropy loss (Section 5.1).
With these considerations, we have compared TAKFL by directly implementing [4,5] in FL as well as incorporating them into TAKFL instead of vanilla KD. The results are presented in Table 2 of the PDF rebuttal. As seen, using [4,5] directly in FL performed poorly, while incorporating [5] into TAKFL achieved better performance in low data heterogeneity compared to TAKFL+Vanilla KD. TAKFL is independent of the specific KD method used. We will incorporate these comparisons into the final version of our paper.
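To illustrate the dilution issue discussed above with a minimal numerical sketch (our own illustration, not the paper's implementation): when two prototypes' logits disagree, their average can collapse to an uninformative target, so vanilla average-logits KD reports zero loss against a uniform student even though each prototype individually disagrees with it.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL divergence between two probability vectors."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def vanilla_ensemble_kd_loss(student_logits, teacher_logits_list, T=1.0):
    """Vanilla average-logits KD: a single target from the averaged teacher logits."""
    avg = np.mean(teacher_logits_list, axis=0)
    return kl(softmax(avg, T), softmax(student_logits, T))

def per_prototype_kd_losses(student_logits, teacher_logits_list, T=1.0):
    """Per-prototype view: distill each prototype's ensemble as a separate task."""
    return [kl(softmax(t, T), softmax(student_logits, T)) for t in teacher_logits_list]
```

With teachers emitting logits [2, 0] and [0, 2] and a uniform student [0, 0], the averaged target is uniform, so the vanilla loss is zero, while each per-prototype loss is strictly positive.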
### **Response to 3**
We have presented the full algorithm details of TAKFL in Appendix B. All the distillation processes are performed in parallel and asynchronously (lines 9-10 in Algorithm 1). Therefore, the overall computation cost of the entire server-side distillation process is O(M), as device prototypes do not perform KD one at a time but combine information asynchronously into a single task arithmetic operation (Eq. 8). Additionally, in FL, the server typically has substantial resources to manage the orchestration of many devices, making server-side computation cost not a concern.
### **Response to 4**
We highlight the key distinctions and novelties of TAKFL compared to [6]:
(1) [6] overlooks the diversity in device capabilities and applies uniform logit averaging, causing dilution and information loss. TAKFL addresses this by treating knowledge transfer from each prototype's ensembles as separate tasks, distilling them independently to ensure unique contributions are effectively captured.
(2) [6] uses average logits as the distillation target for all student models, failing to provide customized knowledge integration. TAKFL employs a novel task arithmetic approach, allowing each student model to customize knowledge integration based on each prototype's quality and its own capacity, enhancing performance.
(3) [6] neglects challenges associated with noisy and unsupervised ensemble distillation. TAKFL incorporates a novel KD-based self-regularization technique to mitigate these issues.
Our theoretical and experimental findings highlight the deficiencies of [6] in diverse device heterogeneous FL (Remark 1, Proposition 1, Section 7.2, Appendix D.1.1). Our extensive experiments show that TAKFL not only outperforms existing methods but also offers a novel, simple, and efficient solution to a significant practical challenge. In regards to the weight disentanglement property, kindly please see our response to #1 above.
### **Response to 5**
It seems that the reviewer is concerned about the impact of TAKFL. We have comprehensively evaluated TAKFL for CV and NLP tasks, including various datasets with differing architectures and data heterogeneity levels. We also introduced a new scalability evaluation showing TAKFL’s adaptability across devices ranging from XXS to XXL, underscoring its broad applicability and consistent performance improvements over existing methods. We believe that the impact of our method extends beyond specific datasets and prototype sizes, providing a versatile solution for diverse device heterogeneous federated learning applications. If this does not address the reviewer’s comment, could the reviewer please clarify and elaborate? The term “data with prototype label” is unclear to us.
---
Rebuttal 2:
Comment: **Question 1:**
Thank you for your response. I still insist that using disentanglement theory as proof is not fair because the model cannot achieve perfect disentanglement. In equation 6 of [3], they define disentanglement error to measure the weight disentanglement of a model. In Fig. 3 of [3], they found that non-linear models are not weight-disentangled enough, but linearized models are more weight-disentangled. Thus, they enhance task arithmetic performance via linearization, fine-tuning in the tangent space to their pre-trained initialization, which is the same as training a kernel predictor with kernel kNTK (as shown on pages 6-7 in [3]).
Normally, due to the difficulty in achieving weight disentanglement, there is interference within different task vectors (which could be introduced by non-perfect disjoint regions of the input space). As a result, recent studies focus on resolving the interference when combining different task vectors [1,2,7].
Therefore, assuming perfect weight disentanglement is unfair when comparing it with logit distillation. I feel that one way to consider a more concrete theory might be by showing that the degree of influence on the parameter space is smaller than in the logit space.
**Question 2:**
Thank you for your efforts in conducting extra experiments. In Table 2 of the rebuttal PDF, should the second row be CIFAR-100?
May I know how the new knowledge distillation losses were added? Were they used to replace the KD loss in FedDF? Because it is a little surprising that they perform much worse than the original FedDF.
**Question 3:**
It is a little tricky to say that parallelization can reduce the computational cost, since the amount of resource usage still grows in O(M^2).
If you can resolve these remaining issues, I will increase my score.
[1] TIES-Merging: Resolving Interference When Merging Models
[2] Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
[3] Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models
[7] AdaMerging: Adaptive Model Merging for Multi-Task Learning
---
Rebuttal 3:
Title: Response to Q1
Comment: We appreciate the reviewer’s detailed feedback and engagement during the discussion period.
### **Response to Q1**
We would like to clarify our position on the use of weight disentanglement (WD) in our work.
**Response to Weight Disentanglement Concerns**
Task Arithmetic (TA) was first introduced in reference [51] of our paper and has demonstrated practical effectiveness. [3] provides the theoretical foundation for TA, identifying WD as the only necessary condition to perform TA effectively. Importantly, [3] demonstrates that WD is an emergent property during pre-training, absent in randomly initialized models (Section 6.2 of [3]). Fig 3 of [3] shows that WD remains sufficiently strong within the merging coefficients range (0, 1] across tasks, even without linearization. Linearization primarily enhances WD outside this range, suggesting that models are weight-disentangled enough in practice to enable task arithmetic and achieve strong performance within the effective range.
This effective range is crucial, as both the original TA paper [51] and [2,3] set task vector coefficients within (0, 1], ensuring that WD is maintained. Additionally, [7] uses `torch.clamp` to constrain task vector coefficients within this range, further validating the effectiveness of WD in this context. Our work, following [51,2,3,7], also ensures that the merging coefficients remain within this range, supporting robust WD and effective TA (Eq. 8, Section 5.2, Appendix F.3).
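For reference, the weight disentanglement condition of [3] can be paraphrased in compact form (our notation; the argument ordering and the restriction of the coefficients to the effective range are illustrative of the discussion above):

```latex
% Weight disentanglement (paraphrasing Eq. 4 of [3]): adding all task vectors
% changes the network on each task's support D_t only through that task's own vector.
f\!\left(x;\ \theta_0 + \sum_{t=1}^{T} \alpha_t \tau_t\right)
  \;=\; f\!\left(x;\ \theta_0 + \alpha_t \tau_t\right),
  \qquad \forall\, x \in \mathcal{D}_t,\ \ t = 1, \dots, T,
  \quad \alpha_t \in (0, 1].
```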
**Response to Studies [1,2,7]**
Studies [1,2] primarily address parameter interference and redundancy rather than WD. [1] discusses interference between task vectors due to redundant parameters and sign disagreement, proposing TIES-MERGING to mitigate this. [2] identifies redundant delta parameters in SFT LMs and introduces DARE to enhance model merging. [7] suggests that merging coefficients play a pivotal role in the average accuracy of the final MTL model and proposes AdaMerging to optimize these coefficients.
We couldn’t find any argument in studies [1,2,7] stating that achieving WD is difficult, causing interference within different task vectors. It does not appear to us that there is any formal correspondence between parameter interference and redundancy as these are considered in [1,2,7] and the notion of WD as defined in Eq. 4 of [3]. We would appreciate it if the reviewer could please clarify and elaborate, which would help us provide a more comprehensive response regarding [1,2,7]. Moreover, if the reviewer suggests that [7] handles the difficulty better through adaptive TA, it is important to note that our approach also employs adaptive TA, ensuring that we address this challenge effectively.
**Conclusion**
In conclusion, while perfect WD across all merging coefficient ranges is challenging to achieve, the strong WD within the effective range (0, 1] is sufficient for TA to perform well. Our work, following [51,2,3,7], ensures that the merging coefficients remain within this range, maintaining the robustness of WD and enabling effective TA as it is evident from the improvements in our extensive experiments across both CV and NLP tasks.
Finally, we believe that this concern is largely orthogonal to the specific theoretical results we claim. Regardless, we believe that this assumption is pedagogically useful, providing clarity and focus in our theoretical results without overcomplicating the analysis. The key takeaway is the "Delta" of information retention/loss between KD and TAKFL, which remains consistent regardless of the baseline error rate. Even if WD does not hold generically, our theoretical results are still informative, as they indicate behavior in certain idealized, limiting circumstances, in particular in cases with significant degrees of freedom of capacity. This is common practice in studying federated learning. Introducing additional complexity to account for WD variations would make the results more opaque without significantly altering the conclusions.
We appreciate the reviewer's continued engagement in the discussion and are happy to address any further questions or concerns.
---
Rebuttal 4:
Title: Response to Q2 and Q3
Comment: We appreciate the reviewer’s detailed feedback and engagement during the discussion period.
### **Response to Q2**
Yes, the second row in Table 2 in the rebuttal PDF file refers to CIFAR-100. We apologize for any confusion, as we were working diligently to prepare the results. To address the reviewer's question, we directly replaced the KD loss in FedDF with AMTML-KD [4] and AE [5] in Table 2, resulting in the baselines labeled AMTML-KD [4] and AE [5]. Additionally, we replaced the vanilla KD loss in our TAKFL framework with [4,5], resulting in baselines TAKFL+[4] and TAKFL+[5].
The reason why [4,5] are performing worse in FL could be that these methods are designed for KD in a centralized learning setting, where teachers or ensembles are fully trained and robust, and the distillation dataset aligns with the actual learning objective, including a cross-entropy supervision loss. However, in FL, the ensembles are noisy, and the distillation dataset differs from the private dataset, lacking a supervision cross-entropy loss. This discrepancy likely contributes to the reduced effectiveness of these methods in the device heterogeneous FL setting.
### **Response to Q3**
We appreciate the reviewer's comment and would like to clarify the distinctions between overall computation time, computational load, and resource usage.
* **Computation Time:** TAKFL's computation time is O(1) (constant) due to parallelization, as all distillation processes occur simultaneously.
* **Computation Load:** The overall computational load scales as O(M) (linear) since the distillation tasks are performed independently for each prototype in parallel and merged into a single task arithmetic operation (Eq. 8).
* **Resource Usage (Memory):** The resource usage scales as O(M^2) (quadratic) because of the need to store and process multiple task vectors concurrently.
While it's true that resource usage grows with O(M^2), it’s important to note that the entire KD process occurs on the server side. In FL, servers typically possess substantial resources. Given these resources, the resource usage incurred by TAKFL is generally less of a concern in practical applications. Moreover, in real-world scenarios, the number of device prototypes (M) tends to be limited, often encompassing categories such as IoT devices, mobile devices, and workstations. Therefore, while the theoretical resource usage grows with O(M^2), the practical impact is mitigated by the server's capabilities and the limited number of prototypes involved.
We appreciate the reviewer's continued engagement in the discussion and are happy to address any further questions or concerns.
---
Rebuttal 5:
Comment: Thank you for your response.
Q1: First, I agree with the author's conclusion that "while perfect WD across all merging coefficient ranges is challenging to achieve, strong WD within the effective range (0, 1] is possible." Then, as stated in [3], the WD refers to different tasks (classes) as shown in the problem statement in Section 2, Property 3, and Fig. 6. In Property 1 (Task Arithmetic) of [3], it is noted that this occurs with non-intersecting task supports $\mathcal{D}_{t'} \cap \mathcal{D}_t = \emptyset$.
However, in the problem setting (lines 131-135), this paper focuses on various network architectures, and Table 13 of the appendix shows that heterogeneity occurs within each device setting. As a result, $\mathcal{D}_{t'} \cap \mathcal{D}_t \neq \emptyset$, so this assumption might be too strong.
Q2: the answer is clear to me.
Q3: the answer is clear to me.
As a result, the inadequate theory and the application of task arithmetic to FedDF limit the novelty of this paper.
---
Rebuttal Comment 5.1:
Comment: Although the novelty is limited, I will raise my score from 4 to 5 based on the authors' experiments. If there are any additional points I may have overlooked, I'm open to discussing them.
---
Rebuttal 6:
Title: Response to Q1
Comment: We would like to extend our sincere gratitude for the reviewer's considerable effort in reviewing our work and continued engagement in the discussion. We also appreciate the reviewer raising their score to 5.
### **Response to Q1**
To answer Q1, we would like to note that the goal of device prototypes $i$ and $j$ participating in FL and doing server-side knowledge transfer with each other is benefiting and learning from their underlying datasets $\mathcal{D^i}$ and $\mathcal{D^j}$ (as noted in problem formulation Eq. 1), where indeed $\mathcal{D^i} \cap \mathcal{D^j} = \emptyset$ and $\bigcup_{i=1}^{M} \mathcal{D^i} = \mathcal{D}$ (lines 618-619, 679-680).
In all of our experimentation, the data partitions of prototypes, i.e. $\mathcal{D^i}$'s, are all indeed disjoint with no sample overlap. Therefore, $\mathcal{D^i} \cap \mathcal{D^j} = \emptyset$ does hold in our work and theoretical framework.
We hope that this answers the reviewer's question.
Once again, we appreciate the reviewer's continued engagement in the discussion and are happy to address any further questions or concerns in the remaining time until the end of the discussion period tomorrow, August 13th.
---
Rebuttal 7:
Comment: Your response is not true because even though you don't have overlapping samples, the classes themselves overlap.
As mentioned in the problem statement in [3]: Consider $T$ tasks, with every task $t$ consisting of a triplet $(\mathcal{D}_t, \mu_t, f_t^*)$, where $f_t^*: \mathcal{D}_t \to \mathcal{Y}$ is a target function (e.g., labels).
Additionally, as illustrated in Fig. 6 in [3], the eigenfunction localization clearly demonstrates disentanglement in the case of $ x \in RESISC45 $ and $ x \in cars $. The former refers to Remote Sensing Image Scene Classification, while the latter pertains to car classification. The labeling functions $f_t^*$ are significantly different.
However, in your data splitting, if we denote the data distribution of class $ c $ as $ D_c$, for any prototype group $ i $, there are samples belonging to the class $ x^i_c \in D_c $. Consequently, the intersection among different prototypes is not a null set $\mathcal{D}_{t'} \cap \mathcal{D}_t \neq \emptyset$.
As a result, the Theory is inadequate.
---
Rebuttal 8:
Comment: We appreciate the reviewer’s attention to the technical details of weight disentanglement (WD). Formally, the reviewer is correct that the WD theory in [3] assumes exclusively partitioned class labels across device prototypes. In our experiments, there may be overlap in class labels, meaning our setup does not fully satisfy the assumption as formally presented in [3].
However, we contend that this overlap does not undermine the theory’s applicability. When class labels overlap, the task vectors associated with training data from different prototypes can include vectors in the tangent space of both prototypes’ loss surfaces. This overlap creates an i.i.d. situation, which can actually make knowledge distillation (KD) easier, with the Vanilla Average Logits method likely experiencing minimal information loss.
For instance, the task vectors corresponding to training on device prototype 2's data may now contain vectors in the tangent space of the loss surface of device prototype 1's architecture, fit to data exclusively belonging to prototype 2's original training samples. Additionally, vectors could now exist in the tangent space of the loss surface of device prototype 1's architecture, fit to prototype 1's data, due to the overlapping classes. This overlap creates an i.i.d. condition, simplifying KD. In this case, Vanilla Average Logits should also suffer minimal information loss.
Relaxing the WD assumption in this context would simply add additional notation and unnecessary complexity to an already lengthy formal property statement without providing additional clarity. However, we are open to including a remark summarizing this point for completeness.
While we appreciate the rigorous technical discussion, we respectfully disagree that this makes the theory inadequate. As mentioned earlier, this assumption is pedagogically useful, offering clarity and focus without overcomplicating the analysis. The key takeaway remains the "Delta" of information retention/loss between KD and TAKFL, which stays consistent regardless of the baseline error rate. Even if WD does not hold universally, our theoretical results still provide valuable insights into behavior under idealized conditions, which is a common practice in Federated Learning research. Introducing additional complexity to account for WD variations would make the results more opaque without significantly altering the conclusions.
Finally, we believe that theoretical details should enhance the pedagogical and explanatory value of the theory. Simplifying assumptions, while not always factually accurate, are essential for clarity and understanding. For example, convergence theory often assumes continuously differentiable functions, despite the widespread use of ReLUs and max-pooling in modern architectures. Such theory remains valuable in providing insights into an algorithm’s properties and relative advantages, and we believe the same applies to our theoretical framework.
Once again, we appreciate the reviewer's continued engagement in the discussion and are happy to address any further questions or concerns in the remaining time until the end of the discussion period tomorrow, August 13th.
Title: Response to Reviewer F7vu
---
Rebuttal Comment 8.1:
Comment: 1. As stated in lines 679-681 of the appendix, the proof relies heavily on the task arithmetic property; thus the statement "Relaxing the WD assumption in this context would simply add additional notation and unnecessary complexity to an already lengthy formal property statement without providing additional clarity." is wrong.
2. I do not see why, in "When class labels overlap, the task vectors associated with training data from different prototypes can include vectors in the tangent space of both prototypes’ loss surfaces," they are in the tangent space. More extreme: if there is only one class, what happens to the distillation from the other prototypes to prototype $j$? Without concrete proof or citation, it is not proper to draw such a conclusion.
3. Combining my questions 1 and 2, this statement is not proper: “As mentioned earlier, this assumption is pedagogically useful, offering clarity and focus without overcomplicating the analysis. The key takeaway remains the "Delta" of information retention/loss between KD and TAKFL, which stays consistent regardless of the baseline error rate. Even if WD does not hold universally, our theoretical results still provide valuable insights into behavior under idealized conditions, which is a common practice in Federated Learning research.”
---
Rebuttal 9:
Comment: We sincerely appreciate the reviewer's detailed, constructive feedback to enhance our theoretical results. To provide a comprehensive response to the reviewer's point, we present the following cases:
### Case 1 (No Overlap)
The prototype $i$ and $j$ datasets are disjoint, i.e. $\mathcal{D}^i \cap \mathcal{D}^j = \emptyset$, and $\bigcup_{i=1}^{M} \mathcal{D^i} = \mathcal{D}$. Then, our current proposition holds correct with no changes.
### Case 2 (Full Overlap: Identical Class Labels)
The prototype $i$ and $j$ datasets fully overlap, i.e. $\mathcal{D}^i \subseteq \mathcal{D}^j$, meaning that the two prototypes have the same classes $\mathcal{D}_c$, $\forall c \in \mathcal{D}$. The resulting changes to Propositions 1 and 2 in our theory are detailed in the following.
Consider that in this case, the logits corresponding to $\mathcal{W}^{1,1}$ and $\mathcal{W}^{1,2}$ are the same. As such, $\mathcal{W}^{1,2} \in \mathcal{Q}^1$, and the propositions trivially become:
**Proposition 1** *(Information Loss in VED).* Consider the VED procedure. Consider two device prototypes with a device capacity and solution dimension of $Q^1, Q^2$ and $W^1, W^2$, respectively, and with associated eigenbases $\mathcal{Q}^i, \mathcal{W}^i$. Let the solution set of VED with prototype $i$ as student be $\hat{\Theta}^i_{VED}$ with $\dim(\hat{\Theta}^i_{VED}) = W^{v_i}$ with eigenbasis $\mathcal{W}^{v_i}$. In addition, denote $W^{s,t},\, s,t \in \{1,2\}$ the dimension of the solution set for the student model trained on the data from the teacher device's ensembles. We assume that self-distillation is executed appropriately, e.g., $W^{1,1} = W^1$ and $W^{2,2} = W^2$.
1. **Case 1:** Assume that $Q^1 = Q^2$ and $W^1 = W^2 = W^{1,2} = W^{2,1}$. Then it holds that, in expectation,
$$
\dim\left(\hat{\Theta}^1_{VED} \cap \left[\mathcal{Q}^1 \setminus \mathcal{W}^1 \right]\right) = 0
$$
This corresponds to the expected capacity of prototype 1 that is taken up for fitting logits that do not fit the data corresponding to prototype 1.
2. **Case 2:** Assume that $Q^1 > Q^2$ and $W^1 = W^{1,2} > W^2$. Then the same quantity as for Case 1 holds. Moreover,
$$
\dim\left(\hat{\Theta}_{VED} \cap \left[\mathcal{Q}^1 \setminus (\mathcal{W}^1 \cup \mathcal{W}^{1,2}) \right]\right) = 0
$$
This corresponds to the capacity of client 1 that has been allocated but fits, in the model of prototype 1, neither the data of prototype 1 nor the data of prototype 2.
Please note that Proposition 2, i.e., the guarantees for TAKFL, remains unchanged.
**...Response continued in the next comment...**
---
Rebuttal 10:
Comment: **...Continued Response...**
### Case 3 (Partial Overlap)
The prototype $i$ and $j$ datasets partially overlap, i.e. $\mathcal{D}^i \cap \mathcal{D}^j \neq \emptyset$, meaning that the two prototypes overlap on at least one of the classes $\mathcal{D}_c$. The resulting changes to Propositions 1 and 2 in our theory are detailed in the following.
In this case, $\mathcal{W}^{1,2} = \mathcal{W}^{1,2}_1 \cup \mathcal{W}_2^{1,2}$, that is, the class labels across data prototypes partially overlap (with $W^{1,2}_1$ the overlapping dimensionality). The Proposition for VED changes to:
**Proposition 1** *(Information Loss in VED).* Consider the VED procedure. Consider two device prototypes with a device capacity and solution dimension of $Q^1, Q^2$ and $W^1, W^2$, respectively, and with associated eigenbases $\mathcal{Q}^i, \mathcal{W}^i$. Let the solution set of VED with prototype $i$ as student be $\hat{\Theta}^i_{VED}$ with $\dim(\hat{\Theta}^i_{VED}) = W^{v_i}$ with eigenbasis $\mathcal{W}^{v_i}$. In addition, denote $W^{s,t},\, s,t \in \{1,2\}$ the dimension of the solution set for the student model trained on the data from the teacher device's ensembles. We assume that self-distillation is executed appropriately, e.g., $W^{1,1} = W^1$ and $W^{2,2} = W^2$.
1. **Case 1:** Assume that $Q^1 = Q^2$ and $W^1 = W^2 = W^{1,2} = W^{2,1}$. Then it holds that, in expectation,
$$
\dim\left(\hat{\Theta}^1_{VED} \cap \left[\mathcal{Q}^1 \setminus \mathcal{W}^1 \right]\right) = O\left( \frac{(Q^1 - W^1)(W^1)!(Q^1 - W_2^{1,2})!}{Q^1!(W^1)!(Q^1 - W^1)! + Q^1! W_2^{1,2}!(Q^1 - W_2^{1,2})!} \right)
$$
This corresponds to the expected capacity of prototype 1 that is taken up for fitting logits that do not fit the data corresponding to prototype 1.
2. **Case 2:** Assume that $Q^1 > Q^2$ and $W^1 = W^{1,2} > W^2$. Then the same quantity as for Case 1 holds. Moreover,
$$
\dim\left(\hat{\Theta}_{VED} \cap \left[\mathcal{Q}^1 \setminus (\mathcal{W}^1 \cup \mathcal{W}^{1,2}) \right]\right) = O\left(\frac{(Q^1 - W^1)(W^1!)(W_2^{1,2} - W^2)!}{Q^1! W^1!(Q^1 - W^1)! + Q^1! W^2!(W_2^{1,2} - W^2)!}\right)
$$
This corresponds to the capacity of client 1 that has been allocated but fits, in the model of prototype 1, neither the data of prototype 1 nor the data of prototype 2.
Please note that the overlapping logits $W^{1,2}_1$ do not appear in the result. It is exactly the original Proposition 1 from the paper, just with $W_2^{1,2}$ in place of $W^{1,2}$.
Please note that Proposition 2, i.e., the guarantees for TAKFL, remains unchanged. In this case $W^{1,2}$ does not even appear in the original result, so no modification is needed: since the theoretical results consider the complement of $\mathcal{Q}^1$ and all overlapping logits lie in $\mathcal{Q}^1$, they do not affect the results.
### **Conclusion Remark**
We will supplement these cases to our theoretical results for completeness in the final version of the paper. We hope that this response comprehensively addresses the reviewer's remaining concerns.
Once again, we appreciate the reviewer's continued engagement in the discussion and are happy to address any further questions or concerns in the few hours remaining before the discussion period ends.
---
Rebuttal Comment 10.1:
Comment: The problem is that when class labels overlap, $\mathcal{D}_{t'} \cap \mathcal{D}_t \neq \emptyset$, the task arithmetic property does not hold. As stated in lines 679-681 of the appendix, the proof relies heavily on the task arithmetic property; thus "Please note that Proposition 2, that is the guarantees for TAKFL, remains the same." is not true....
---
Rebuttal 11:
Comment: We sincerely appreciate the reviewer's response and great effort for the continued engagement in the discussion.
Yes, when the labels overlap (cases 2 and 3) the task arithmetic property does not hold, but this is not a problem. For cases 2 and 3 the statement is trivial and requires no proof, since we already know the logits lie in $\mathcal{Q}^1$; consequently, the task arithmetic property is not needed for these cases.
Please note that Proposition 1 deals with the information loss of vanilla KD (or average logits ensemble distillation) in device heterogeneous FL, while Proposition 2 states the improvement of TAKFL beyond vanilla KD.
Regarding the improvement in knowledge transfer (TAKFL's improvement) in Proposition 2: under the label-overlapping case, the statement becomes trivial. The theorem specifically concerns whether KD yields parameters that fall within or outside of the space $\mathcal{Q}^1$. When the logits are identical, they are already within that space, so there is nothing further to address. As a result, all propositions, which consider the dimensions of the model that do not fit prototype 1's data (i.e., $\mathcal{Q}^1$), remain unaffected by these overlapping logits. This confirms our earlier statement that the label-overlapping case creates an i.i.d. situation, which can actually make knowledge transfer easier, with vanilla KD experiencing minimal information loss. Therefore, TAKFL's guarantee (the improvement in knowledge transfer) in Proposition 2 becomes trivial.
Once again, we sincerely appreciate the reviewer's great effort and continued engagement in the discussion, and the feedback provided. This feedback helped us provide complete theoretical statements for the different cases (disjoint and overlapping labels), which we will note in the final version of the paper. | Summary: The paper presents a novel framework called TAKFL, which addresses the challenge of transferring knowledge in federated learning across heterogeneous devices, ranging from small IoT devices to large workstations. TAKFL uniquely handles knowledge distillation by treating the transfer from each device prototype as a separate task, allowing for tailored integration of knowledge to optimize performance. The approach is validated theoretically and through extensive experiments, demonstrating superior results over existing methods across both computer vision and natural language processing tasks.
Strengths: (1) Practical Problem and Innovative Approach
The authors address a significant, real-world challenge in federated learning—knowledge transfer across heterogeneous devices. Their novel framework, TAKFL, innovatively treats each device's knowledge transfer as a separate task, allowing for customized integration. This tailored approach is both practical and theoretically sound, making it a substantial contribution to the field.
(2) Clarity and Organization
The paper is well-structured, facilitating easy understanding of complex concepts and methods. The clear presentation enhances the accessibility of the content, making it easier for readers to grasp the significance of the proposed solution and its impact on the field.
(3) Strong Experimental and Theoretical Support
The authors back their claims with extensive experimental results across multiple tasks and datasets, demonstrating the effectiveness of TAKFL in diverse scenarios. Moreover, the inclusion of theoretical analysis adds depth to the validation, reinforcing the reliability and scalability of their approach. This combination of empirical and theoretical evidence strongly supports the paper's contributions and conclusions.
Weaknesses: (1) [main concern] Strong Dependence on Hyperparameters
The TAKFL framework introduced in the article significantly relies on the setting of hyperparameters, especially during the Task Arithmetic Knowledge Integration process, where the weights for different task vectors are set as hyperparameters and adjusted on a validation set. This might limit the method's generalizability across different real-world applications. To enhance the practicality and robustness of the method, it is recommended that the authors explore more automated hyperparameter optimization strategies to reduce the need for manual tuning and improve the adaptability of the model.
(2) [main concern] Strong Assumptions in the Selection of Public Datasets
The experimental design involves the use of public datasets, such as CIFAR-100 and ImageNet-100, for knowledge distillation. This choice seems to be based on two key assumptions: that the public datasets must exhibit high diversity and that the training data distribution can be approximately considered a subset of the public dataset. These assumptions may not always hold in practical applications, so it is advisable for the authors to thoroughly investigate the actual impact of these choices on model performance in future work. Additional experiments could validate the effectiveness of these assumptions, and considerations of these potential limitations should be explicitly stated in the manuscript.
(3) [minor concern] Quantification of Data Heterogeneity and Hyperparameter Selection
The authors utilize a Dirichlet distribution to quantify Data Heterogeneity, setting $Dir(\alpha)$ at 0.3 and 0.1 to simulate varying degrees of data heterogeneity. However, there is insufficient explanation for the choice of these specific values. To enhance the transparency and reproducibility of the research, it is recommended that the authors provide a detailed rationale behind these parameter choices, based on logic and references to previous studies. Moreover, to give readers a more intuitive understanding of the data distribution differences under different $\alpha$ settings, descriptive statistics or visualizations, such as the distribution of samples across categories, would be helpful.
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N.A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your invaluable review and for pointing out that our work makes a **substantial contribution to the field**. We also appreciate the reviewer's positive remarks on the **practicality, innovation, and clarity of our TAKFL framework**, as well as the **strong experimental and theoretical support** for our approach.
### **Response to 1**
We appreciate the reviewer’s feedback regarding the dependence on hyperparameters. We have designed a simple and efficient automated heuristic method to tune the merging coefficients, detailed in Appendix F.3. Based on our extensive experiments, we observed that larger prototypes benefit more from increasingly skewed merging coefficients, while smaller prototypes perform best with uniformly increasing coefficients. This pattern is intuitive as larger prototypes gain less from smaller ones, especially in high data heterogeneity cases.
Our automated heuristic method randomly instantiates merging coefficients following this observed pattern, generating 30 candidate coefficients that include both uniformly increasing and various degrees of skewed coefficients (detailed implementation is presented in Listing 1 of Appendix F.3). The optimal merging coefficient is determined using maximum performance on a small held-out validation set. As detailed in Appendix F.3, we observed similar performance using this automated heuristic method compared to manual tuning (lines 1143-1145). Furthermore, we used this automated heuristic method for our scalability evaluation experiment in Section 7.3, Figure 3 of our paper. This approach reduces the need for manual tuning, enhancing the adaptability and robustness of the TAKFL framework across different real-world applications.
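To make the heuristic concrete, here is a hedged sketch of such a candidate-generation procedure — our illustrative reconstruction, not the actual Listing 1 of Appendix F.3. The idea is to draw random weight vectors of varying skew (sorting makes them non-decreasing with prototype size, matching the observed pattern that larger prototypes gain less from smaller ones) and pick the best on a held-out validation score:

```python
import numpy as np

def candidate_coefficients(num_prototypes, num_candidates=30, rng=None):
    """Generate candidate merging-coefficient vectors, ranging from roughly
    uniformly increasing to heavily skewed (illustrative sketch only)."""
    rng = rng or np.random.default_rng(0)
    candidates = []
    for _ in range(num_candidates):
        # A random exponent controls how skewed the vector is; sorting
        # makes coefficients non-decreasing with prototype size.
        w = np.sort(rng.random(num_prototypes) ** rng.uniform(1.0, 4.0))
        candidates.append(w / w.sum())  # normalize to sum to 1
    return candidates

def select_coefficients(candidates, val_score):
    """Pick the candidate maximizing a held-out validation score callable."""
    return max(candidates, key=val_score)
```

In practice `val_score` would evaluate the merged student model on the small held-out validation set; the function names here are hypothetical.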
### **Response to 2**
We appreciate the reviewer’s feedback regarding the impact of public datasets. Our public dataset selection mainly follows [1] for a fair and consistent comparison with existing works. To address the reviewer’s concern, we have explored the influence of the public dataset on TAKFL's performance and existing KD-based methods when the public dataset used for server-side distillation is less similar to the private dataset, as detailed in Appendix E.2.
In this experiment, we measured the similarity between different public datasets and the private dataset using CLIP [2]. We fixed the private dataset to CIFAR-10 and used less similar public datasets such as Celeb-A. We observed that the performance of existing methods drastically deteriorates as the similarity between the public and private datasets decreases. In contrast, TAKFL exhibits robustness, suffering much less performance degradation and still achieving SOTA average performance under the same conditions. For instance, as presented in Table 10, when using Celeb-A as the public dataset, which is significantly different from CIFAR-10, TAKFL still achieved more than 3% higher average performance across all prototypes than the baselines. This is in contrast to other methods like FedET, which substantially underperformed vanilla FedAvg under the same condition.
This demonstrates TAKFL's practical utility in real-world scenarios where the server typically lacks knowledge of the private datasets needed to select a closely aligned public dataset for distillation. Additionally, we limited the size of the public dataset to 60,000 samples or fewer across all experiments throughout the paper.
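The dataset-similarity measurement can be sketched as a cosine similarity between mean feature embeddings; the exact recipe below (mean-pooling per-image CLIP features) is our assumption of how such a comparison is typically done, not necessarily the paper's implementation:

```python
import numpy as np

def dataset_similarity(feats_a, feats_b):
    """Cosine similarity between the mean feature vectors of two datasets.
    feats_a, feats_b: (N, d) arrays of per-image embeddings (e.g. CLIP)."""
    ma, mb = feats_a.mean(axis=0), feats_b.mean(axis=0)
    return float(ma @ mb / (np.linalg.norm(ma) * np.linalg.norm(mb)))
```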
We will further highlight these findings in the final version of our paper to emphasize TAKFL's robustness and practical applicability.
### **Response to 3**
We appreciate the reviewer’s feedback and would like to further clarify our data heterogeneity setting. We utilized Dirichlet distribution to quantify data heterogeneity, as it is a commonly used method in the FL literature to simulate varying degrees of data heterogeneity [1,3]. The level of data heterogeneity is controlled by $\alpha$; a lower $\alpha$ indicates higher data heterogeneity and presents a more challenging setup.
The chosen values of $\alpha = 0.3$ for low data heterogeneity and $\alpha = 0.1$ for high data heterogeneity are standard in the FL literature and follow practical experimental design suggestions from [4] (cited in our paper in reference [36]). To give readers a more intuitive understanding of the data distribution differences under different $\alpha$ values, we have plotted visualizations showing how clients' data distributions look for CIFAR-10 using these $\alpha$ values. These visualizations are included in Figure 3 of the PDF rebuttal file. We will include this Figure in the final version of the paper.
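The Dirichlet partitioning described here follows the standard recipe in the FL literature: for each class, sample client proportions from $Dir(\alpha)$ and split that class's sample indices accordingly, so lower $\alpha$ yields more heterogeneous splits. A minimal sketch (function name is ours):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, rng=None):
    """Partition sample indices across clients with per-class Dirichlet
    proportions; lower alpha -> more heterogeneous splits."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Proportions of class c assigned to each client.
        p = rng.dirichlet(alpha * np.ones(num_clients))
        splits = np.split(idx, (np.cumsum(p)[:-1] * len(idx)).astype(int))
        for i, s in enumerate(splits):
            client_indices[i].extend(s.tolist())
    return client_indices
```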
### References
[1] Lin et al. "Ensemble distillation for robust model fusion in federated learning." NeurIPS’20
[2] Radford et al. "Learning transferable visual models from natural language supervision." ICML’21
[3] Li et al. "Federated learning on non-iid data silos: An experimental study." IEEE ICDE (2022).
[4] Morafah et al. "A Practical Recipe for Federated Learning Under Statistical Heterogeneity Experimental Design." IEEE TAI (2023).
---
Rebuttal Comment 1.1:
Title: Kind Reminder: End of Author-Reviewer Discussion Period is Tomorrow
Comment: Dear Reviewer RvJG,
We sincerely appreciate your extensive efforts in reviewing and commenting on our work. We hope that our rebuttal has comprehensively addressed your comments and concerns.
As the author-reviewer discussion period approaches its end tomorrow, August 13th, we kindly ask you to reach out if you have any further questions or points of discussion. Your feedback is highly valued, and we are eager to address any remaining concerns you may have.
Thank you very much for your attention to this matter. | Summary: This paper introduced a KD-based framework (TAKFL) to address the dilution and diversity issues in heterogeneous FL knowledge transfer learning. The TAKFL distills knowledge from prototypes of varying sizes and incorporates a self-regularization to mitigate noise simultaneously, then integrates these separately distilled knowledge by task arithmetic. Empirical evaluations across various CV and NLP datasets demonstrate the framework's effectiveness.
Strengths: 1. The paper is well-organized and easy to follow.
2. The paper novelty introduced a theoretical model to illustrate the efficacy of knowledge distillation in heterogeneous FL.
3. The paper proposed a new framework, TAKFL, for considering varying sizes of prototypes with different contributed information, and experiments on different CV and NLP datasets show its effectiveness.
Weaknesses: 1. Some baselines are lacking. For example, FedProto [1], which also employs prototypes within device heterogeneous FL, should be included for a more comprehensive comparison.
2. It seems that the proposed method incurs higher time and storage costs, as it requires the independent learning of multiple student models compared to the vanilla methods. The paper should provide an efficiency analysis that compares the proposed method with existing baselines, highlighting both time and storage metrics.
3. It would be better to provide a visualization study for a better understanding of the effectiveness of transfer learning from different prototypes.
[1] Tan, Yue, et al. "Fedproto: Federated prototype learning across heterogeneous clients." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weakness
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your invaluable review and for pointing out the **novelty of our framework and theoretical model** and the **effectiveness of our framework on both CV and NLP tasks**.
### **Response to 1**
We appreciate the reviewer's suggestion and would like to highlight a few differences about FedProto and our work, TAKFL:
**(1) Problem Formulation:** FedProto's problem formulation aims to optimize local models in FL (Eq. 2 in [1]), whereas our problem formulation follows [2], where we aim to optimize device prototypes’ global models (Eq. 1 in our paper).
**(2) Feature Exchange Requirements:** FedProto requires the exchange of local and global average per-label features between devices and the server, necessitating that local architectures produce the same feature matrix dimensions. However, in more general cases where architectures are from different families, this does not hold. Our experimental setup includes the heterogeneous-family case too.
**(3) Evaluation Metrics:** FedProto uses average local test accuracy as the evaluation metric, while our evaluation metric is device prototypes’ global model test accuracies, following [2].
Given these differences, a direct comparison with FedProto is not entirely feasible or fair. However, considering these factors, we have adopted FedProto’s official implementation code and compared it within our homogeneous-family case experimental setting in Table 1 of our paper. The results, presented in Table 1 of the PDF rebuttal, show that TAKFL consistently outperforms FedProto for both CIFAR-10 and CIFAR-100.
We will incorporate these comparisons into the final version of the paper to provide a more comprehensive evaluation.
### **Response to 2**
To address the reviewer’s comment, we would like to mention that the full algorithm details of TAKFL have been presented in Appendix B. All the distillation processes are being performed in parallel and asynchronously (lines 9-10 in Algorithm [1]), and once done, the distillation task vectors are merged.
Therefore, the computation time remains O(1), and the overall computation cost of the entire server-side distillation process is O(M), as device prototypes do not perform KD one at a time but combine information asynchronously into a singular task arithmetic operation (Eq. 8). Since there are M prototypes, the total computation is O(M).
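For concreteness, the merging step (Eq. 8) can be sketched as follows — an illustrative reconstruction with hypothetical names, using plain NumPy arrays in place of model parameter tensors (in practice these would be PyTorch state dicts):

```python
import numpy as np

def merge_task_vectors(student, distilled, coeffs):
    """Task-arithmetic integration (sketch): add the weighted sum of
    per-prototype task vectors (distilled params minus student params)
    back onto the student parameters."""
    merged = {}
    for name, base in student.items():
        delta = sum(c * (d[name] - base) for c, d in zip(coeffs, distilled))
        merged[name] = base + delta
    return merged
```

Here `coeffs` are the merging coefficients, one per teacher prototype, tuned on a held-out validation set.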
We have further conducted an efficiency analysis on a system with two RTX 3090 GPUs, comparing our method with prior works for a one-epoch knowledge distillation process, reporting time (seconds) and CPU/GPU memory (MB) for the CIFAR-10 experimental setting of Table 1 in our paper. The results are as follows:
| Baseline | Time (s) | CPU Memory (MB) | GPU Memory (MB) |
| ----------- | ------- | ---------- | --------- |
| FedDF | 55.24 | 1051.89 | 104.90 |
| FedET | 58.81 | 1113.12 | 107.65 |
| TAKFL | 71.55 | 1080.60 | 266.57 |
As we can see, our method achieves CPU memory usage similar to the baselines, with a modest increase in GPU memory and time. The additional ~16 seconds can be attributed to our use of the high-level parallelism interface, `concurrent.futures`, for executing callables asynchronously. Furthermore, in FL, servers are typically equipped with substantial computational resources, including multiple GPUs and extensive memory, to manage the orchestration of many clients; such servers can employ highly efficient parallel asynchronous implementations to avoid this overhead. Given that the entire distillation process occurs at the server, the computation and storage demands are less of a concern since the server has ample resources.
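The asynchronous dispatch mentioned above can be sketched with `concurrent.futures`; `distill_from_prototype` below is a hypothetical placeholder for one prototype's distillation run:

```python
from concurrent.futures import ThreadPoolExecutor

def distill_from_prototype(proto_id):
    # Placeholder: in the real pipeline this would run one prototype's
    # ensemble distillation and return its task vector.
    return {"proto": proto_id, "task_vector": [0.1 * proto_id]}

def run_distillations(num_prototypes):
    """Launch all M per-prototype distillations concurrently and
    collect their results (in submission order) for merging."""
    with ThreadPoolExecutor(max_workers=max(1, num_prototypes)) as pool:
        return list(pool.map(distill_from_prototype, range(num_prototypes)))
```

`ThreadPoolExecutor.map` preserves submission order, so the returned task vectors line up with the prototype indices for the subsequent merge.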
We will incorporate these results and the efficiency analysis into the final version of the paper.
### **Response to 3**
We appreciate the reviewer's suggestion. To better understand the effectiveness of knowledge transfer in TAKFL, we conducted a visualization study using two device prototypes, XS and XL, on the CIFAR-10 dataset. The prototype configurations follow those used in our scalability evaluation in Section 7.3 (more details are available in Appendix D.4). We first pre-trained the two prototypes' global models using FedAvg for 40 communication rounds and then performed knowledge transfer using the TAKFL process for 10 distillation epochs. We have plotted the t-SNE visualizations of both prototypes for FedAvg and TAKFL, included in Figure 1 of the PDF rebuttal file.
As seen in the t-SNE plots, TAKFL demonstrates a substantial enhancement in feature representation for both prototypes compared to FedAvg. The plots show better separation between classes for both XS and XL prototypes. The class clusters are more distinct, suggesting that TAKFL, by effectively transferring knowledge between the two prototypes, yielded a clearer representation of the features, even in the smaller XS prototype. Therefore, the t-SNE visualizations confirm that TAKFL is indeed effective in knowledge transfer, further corroborating its superiority and effectiveness.
We will enrich this visualization study by considering knowledge transfer between different size prototypes and incorporate this study into the final version of our paper.
### References
[1] Tan, Yue, et al. "Fedproto: Federated prototype learning across heterogeneous clients." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022.
[2] Lin et al. "Ensemble distillation for robust model fusion in federated learning." NeurIPS’20
---
Rebuttal Comment 1.1:
Title: Kind Reminder: End of Author-Reviewer Discussion Period is Tomorrow
Comment: Dear Reviewer V7t1,
We sincerely appreciate your extensive efforts in reviewing and commenting on our work. We hope that our rebuttal has comprehensively addressed your comments and concerns.
As the author-reviewer discussion period approaches its end tomorrow, August 13th, we kindly ask you to reach out if you have any further questions or points of discussion. Your feedback is highly valued, and we are eager to address any remaining concerns you may have.
Thank you very much for your attention to this matter.
---
Rebuttal 2:
Comment: We appreciate the reviewer’s response and would like to clarify the fundamental distinctions between our methodology and FedProto, particularly in terms of problem formulation and evaluation metrics.
**Problem Formulation:**
While FedProto does address model heterogeneity, its primary focus is on optimizing heterogeneous local models in FL (Eq. 2 in [1]). In contrast, our method follows [2], where the objective is to optimize the global models of heterogeneous device prototypes (Eq. 1 in our paper). Although both methods tackle device heterogeneity, the specific goals and contexts of optimization differ, leading to very different approaches and outcomes.
**Evaluation Metrics:**
FedProto evaluates performance using average local test accuracy, while our evaluation metric is the global model test accuracy of device prototypes, as outlined in [2].
Furthermore, we would like to note that our experimental setup differs significantly from the original setting in FedProto's paper. As a result, these fundamental differences in both problem formulation and evaluation metrics make direct comparisons between FedProto and our method challenging, unfair, and misleading.
However, despite these fundamental differences, we have adopted FedProto's official implementation and conducted experiments within our homogeneous-family experimental setting (Table 1 of our paper), presenting the results in Table 1 of the rebuttal PDF file. The differences in results, such as FedProto performing worse than FedAvg, could be due to these fundamentally differing contexts and setups.
In conclusion, while we agree with the reviewer that FedProto is a model-heterogeneous FL method, it is important to note that FedProto is a "personalized FL" model-heterogeneous federated learning method, per its problem formulation in Eq. 2 of [1]. Our work, in contrast, is a "global FL" KD-based device-heterogeneous federated learning method, per our problem formulation in Eq. 1 of our paper, following [2]. This distinction is crucial in understanding the differences in performance outcomes.
We sincerely appreciate the reviewer's constructive feedback and suggestions. If there are any further concerns or questions, we would be grateful if the reviewer could kindly elaborate so that we can provide additional clarity.
We appreciate the reviewer's continued engagement and are happy to address any further questions or concerns as the discussion period concludes in the next few hours.
### References
[1] Tan, Yue, et al. "Fedproto: Federated prototype learning across heterogeneous clients." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022.
[2] Lin et al. "Ensemble distillation for robust model fusion in federated learning." NeurIPS’20 | Rebuttal 1:
Rebuttal: We sincerely appreciate all the reviewers' efforts in reviewing and commenting on our work. We are particularly grateful for the positive feedback highlighting the following aspects:
* **The novel theoretical model and framework illustrating the efficacy of knowledge distillation in heterogeneous federated learning (Reviewer V7t1, Reviewer RvJG, Reviewer ME5v).**
* **Effective handling of challenges in transferring knowledge across heterogeneous devices in federated learning and improving existing knowledge distillation methods (Reviewer RvJG, Reviewer F7vu).**
* **Strong experimental validation on CV and NLP tasks with various datasets, demonstrating superior results over existing methods (Reviewer V7t1, Reviewer RvJG, Reviewer F7vu).**
* **Addressing significant real-world challenges in federated learning with meaningful theoretical and practical contributions (Reviewer V7t1, Reviewer RvJG, Reviewer ME5v).**
* **Clear and well-organized presentation, making the paper easy to follow and understand (Reviewer V7t1, Reviewer RvJG, Reviewer F7vu, Reviewer ME5v).**
In summary, we have introduced a novel framework, TAKFL, to address the fundamental limitations of existing knowledge distillation (KD) methods in the *underexplored* diverse device heterogeneous federated learning environments, ranging from small IoT devices to large workstations. Our framework effectively distills the unique knowledge of each device prototype's ensembles by treating them as separate tasks and adaptively integrates them into the student model via a novel task arithmetic approach. Unlike existing methods, TAKFL avoids dilution and information loss, providing customized knowledge integration for each student model based on the knowledge quality and helpfulness of each prototype's ensembles and its intrinsic capacity. Furthermore, in response to the issues associated with noisy and unsupervised ensemble distillation, we have incorporated a novel KD-based self-regularization loss. Our work makes the following key contributions:
* **Framework:** Introducing a new framework to address the fundamental limitations of existing methods in diverse device heterogeneous FL called TAKFL.
* **Theoretical Contribution:** Presenting the first theoretical framework using the concept of capacity dimension to understand the effectiveness of KD in device heterogeneous FL.
* **Experimental Validation:** Conducting comprehensive experimental evaluations for both CV and NLP tasks with various datasets, architectures, and levels of data heterogeneity, demonstrating the effectiveness of our method and achieving state-of-the-art performance by significantly outperforming existing methods. Additionally, we include a new scalability experiment encompassing devices from XXS to XXL to further demonstrate the efficacy of our method.
Our extensive experiments show that TAKFL not only outperforms existing methods but also offers a novel, simple, and efficient solution to a significant practical challenge. Furthermore, to the best of our knowledge, we are the first to introduce the notion of "task vector" in FL to enhance knowledge transfer across diverse heterogeneous device prototypes. We have derived new consistent theoretical and experimental findings detailed in Appendix D.1.1. We believe our work makes a significant contribution to the FL field (as pointed out by Reviewer RvJG), addresses an important practical challenge (as highlighted by Reviewer ME5v), and establishes a new benchmark. We believe that the impact of our method extends beyond specific datasets and prototype sizes, providing a versatile solution for diverse device heterogeneous federated learning applications.
We appreciate the reviewers' constructive comments and have strived to comprehensively address them. To this end, we have provided at least one additional experiment, table, and/or figure for each reviewer:
* In response to Reviewer V7t1's comments #1 and #3, we have provided Table 1 and Figure 1 in the PDF rebuttal.
* In response to Reviewer RvJG's comment #3, we have provided Figure 3 in the PDF rebuttal.
* In response to Reviewer F7vu's comment #2, we have provided Table 2 in the PDF rebuttal.
* In response to Reviewer ME5v's comments #2 and #3, we have provided Figure 2 and Table 3 in the PDF rebuttal.
The PDF rebuttal file containing the associated figures and tables is uploaded with this message.
Once again, we sincerely appreciate the reviewers' time and effort in reviewing our work. If our response comprehensively addresses the reviewers' comments, we kindly ask the reviewers to consider raising their scores. We are happy to address any remaining comments or concerns during the author-reviewer discussion period.
Thank you!
Pdf: /pdf/7d912acbbc3a5946687a21986e241f503418a923.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HEPrune: Fast Private Training of Deep Neural Networks With Encrypted Data Pruning | Accept (poster) | Summary: This paper proposes a data pruning algorithm for the training of Homomorphic Encryption (HE)-based neural networks. The authors introduce an HE-friendly importance score and client-aided masking to prune samples in the dataset. The authors further propose ciphertext-wise pruning to merge ciphertexts with empty slots, thereby reducing computational costs during training. Finally, the paper presents empirical studies to validate the effectiveness of the proposed data pruning method.
Strengths: The main advantages can be listed as follows:
1. The paper provides a new data pruning method for data encrypted by an HE scheme. The authors propose the HEL2N score, which substitutes the $\ell_2$-norm in the EL2N score with the $\ell_1$-norm, and the client selects important samples based on the score computed by the server.
2. The paper proposes ciphertext-wise pruning, enabling the server to merge ciphertexts with empty slots with the assistance of the client.
3. The paper conducts experiments on five datasets and compares the proposed method with the HETAL method to demonstrate its effectiveness.
Weaknesses: Despite many strengths, there are some issues with the paper as follows:
1. The submission requires further revisions for clarity and consistency. At line 43 “methds” should be “methods”; At line 74 and Figure 1 “CIFAR10” should be “CIFAR-10” as in Section 4; At line 326 “Table2” should be “Table 2”; At line 340 “Figure 4(4)” should be “Figure 4(a)”; Figure 1 and 4 should have sub-captions denoting which subgraph is (a) or (b).
2. The computation costs associated with data pruning raise concerns. As given by Eqn. (1), computing the HEL2N score involves multiple gradient computations for each sample, which is computationally intensive. Moreover, ciphertext-wise pruning seems to require a large number of rotations, which is also a very slow HE operation. If the data pruning process is time-consuming, it may negate the benefits, making it more efficient to train directly without pruning.
3. The novelty of the paper is questionable. The HEL2N score primarily modifies the $\ell_2$-norm in the EL2N score to an $\ell_1$-norm, and directly computing the square of the EL2N score seems to be faster than the HEL2N score. The trick of masking is also commonplace in the HE literature. Moreover, one advantage of HE-based methods is that they do not require any communication between server and client. The requirement for client-server communication in client-aided masking and ciphertext-wise pruning could diminish the significance of the proposed method.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Why not directly compute the square of the EL2N score, which would avoid the need for computing the square root or the maximum value and is a simpler process?
2. Is the running time for HE-based data pruning included in the total running time reported in the Tables in Section 4.2? If so, what proportion of the total running time does the HE-based data pruning constitute?
3. How does the proposed method compare to a baseline that directly randomly samples a subset of ciphertext at each epoch, instead of computing importance score and merging ciphertext? I think this baseline is simpler and more efficient.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer ADwT for his/her thorough reading of the manuscript and constructive comments.
Q1 Clarity and consistency
We thank the reviewer for the thorough reading. We will fix the typos in the future version.
Q2 The overhead of the proposed methods
The results in Section 4.2 include the running time for HE-based data pruning. The proposed data pruning mainly contains (1) computing the HEFS score and (2) performing ciphertext-wise pruning. We show the proportion of total runtime taken by the proposed methods in Table 2 (rebuttal PDF).
We first benchmark the runtime of basic HE operations in our environment: addition: 8 ms, ciphertext-plaintext multiplication (cpMult): 17 ms, rotation: 116 ms, ciphertext-ciphertext multiplication (ccMult): 146 ms, max: 1.37 s, and bootstrapping: 35.89 s.
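As a rough sanity check, the per-ciphertext HEFS cost quoted in the next paragraph (1.72~1.84 s) is consistent with these per-operation latencies under the assumption of one subtraction, one max evaluation, and 3-4 rotations per ciphertext. The back-of-envelope model below is our own illustration (the name `hefs_cost_ms` is ours), not the paper's actual accounting:

```python
# per-operation latencies (ms) as benchmarked in the rebuttal's environment
COST_MS = {"add": 8, "cpMult": 17, "rot": 116,
           "ccMult": 146, "max": 1370, "bootstrap": 35890}

def hefs_cost_ms(num_rotations):
    # one subtraction (costed like an addition), one max evaluation,
    # plus the rotations used for the slot-wise summation
    return COST_MS["add"] + COST_MS["max"] + num_rotations * COST_MS["rot"]
```

With 3 rotations this gives 1726 ms and with 4 rotations 1842 ms, matching the quoted 1.72~1.84 s range.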
**HEFS.** The proposed HEFS does not rely on the gradients. Instead, HEFS can be computed from the error vector as shown in Equation (2). To compute HEFS, we only need subtraction, max, and the rotation operation. Computing HEFS on a single ciphertext takes 1.72~1.84 s.
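To make this concrete, the following plaintext simulation sketches the slot-wise pipeline: the error vector is formed by subtraction, its absolute value by max(e, -e), and the per-sample $\ell_1$ norms by a rotate-and-sum over each sample's slot block. This is our illustrative reconstruction (numpy arrays stand in for CKKS ciphertexts; `hefs_scores` and the packing layout are assumptions, not the paper's code):

```python
import numpy as np

def rotate(ct, k):
    # stands in for a CKKS slot rotation by k positions
    return np.roll(ct, -k)

def hefs_scores(pred_ct, label_ct, num_classes):
    """HEFS per sample: l1 norm of the error vector p - y, using only
    subtraction, max, and rotations. Assumes each sample occupies a
    contiguous block of `num_classes` slots, with num_classes a power of 2."""
    e = pred_ct - label_ct
    abs_e = np.maximum(e, -e)      # |e| via max(e, -e): no square root needed
    acc = abs_e.copy()
    k = 1
    while k < num_classes:         # log2(num_classes) rotations in total
        acc = acc + rotate(acc, k)
        k *= 2
    # the first slot of each sample block now holds that sample's l1 norm
    return acc[::num_classes]
```

For example, two 4-class samples packed into one 8-slot "ciphertext" yield one score per sample, computable entirely from subtraction, max, and rotations.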
**Ciphertext-wise Pruning.**
While rotation is not fast compared with addition and cpMult, it is still faster than ccMult in the CKKS scheme. Moreover, rotation barely consumes the noise budget [1]. We remark that the non-linear SoftMax function is computed via approximation, which also relies on ccMult. The intensive evaluation of ccMult in the forward pass and backpropagation leads to frequent bootstrapping, which is the most expensive operation in practice.
In the worst case, ciphertext-wise pruning needs $O(p(1-p)N)$ rotations, where $N$ is the number of total data samples and $p$ is the pruning ratio. Take the CIFAR-10 dataset as an example. We choose a batch size of 128, and the 43750 training data samples are encrypted in 342 ciphertext blocks. When the pruning ratio is 99%, ciphertext-wise pruning needs at most 430 rotations to combine the 342 sparse ciphertext blocks into 4 dense ones, which takes 0.04 h. In contrast, evaluating one epoch without pruning takes more than 20 h.
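The rotation count can be illustrated with a toy packing simulation. The sketch below is our simplified cost model, not the paper's algorithm: it greedily packs surviving samples into dense blocks and charges one masked rotation per (source block, destination block) move:

```python
import math

def merge_plan(survivors, batch):
    """Greedy first-fit packing of surviving samples into dense ciphertext
    blocks. `survivors[i]` is the number of samples left in sparse block i;
    `batch` is the number of sample slots per ciphertext. Charges one masked
    rotation per run of samples moved into a destination block."""
    total = sum(survivors)
    dense = math.ceil(total / batch) if total else 0
    rotations = 0
    fill = 0                    # occupied slots in the current destination
    for s in survivors:
        while s > 0:
            if fill == batch:   # destination full: start a new dense block
                fill = 0
            take = min(s, batch - fill)
            rotations += 1      # one rotation aligns this run of samples
            fill += take
            s -= take
    return dense, rotations
```

Under this model, 342 sparse blocks that each keep a single sample of a 128-slot batch merge into 3 dense blocks with 342 rotations, the same order of magnitude as the worst-case figure above.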
We remark that the main bottleneck of private training remains to be the evaluation of complex non-linear functions like the SoftMax function and the bootstrapping. We believe that optimizing these operations could further accelerate private training. This is orthogonal to the optimizations discussed in this paper.
Q3 The effectiveness of the proposed HEFS.
In Figure 2 (rebuttal PDF), we compare HEFS with (1) the square of the EL2N score and (2) randomly dropping ciphertexts. Both alternatives yield an inferior efficiency/utility trade-off compared with the proposed HEFS.
**The square of EL2N.**
HEFS generally achieves better accuracy than the squared EL2N. HEFS outperforms the squared EL2N by a noticeable margin when the pruning ratio is high, i.e., when the remaining data fraction is below 0.2. This is because the squared EL2N score inflates the importance of samples that are predicted incorrectly. These samples can be noisy samples [2]. Emphasizing these samples too much makes the selected data less informative, which leads to inferior accuracy. As the pruning ratio goes down, the proportion of such noisy samples decreases. Accordingly, the performance of the squared EL2N becomes closer to that of HEFS. In Table 1 (rebuttal PDF), we show that computing the score is not the bottleneck and the savings from using the squared EL2N are marginal.
**Random pruning.** While the random pruning strategy is simple, it ignores the importance of data samples. This leads to significant deterioration of accuracy, especially when the remaining fraction is low.
Q4 The novelty and the significance.
To the best of our knowledge, we are the first to explore encrypted data pruning to accelerate private training. Performing data pruning in the encrypted state is non-trivial. Directly adapting plaintext pruning methods is not viable. To start with, the importance scores used in plaintext often involve complex nonlinear functions that are expensive in HE. Moreover, sorting samples is considered free in plaintext, but sorting in the encrypted state is expensive. Most importantly, sample-wise pruning as done in plaintext leads to sparse ciphertexts and fails to effectively accelerate private training. We propose a series of HE-aware optimizations including HEFS, client-aided masking, and ciphertext-wise pruning. Our methods boost the efficiency of private training significantly without sacrificing accuracy.
Moreover, we would like to remark that EL2N has been treated as a static data pruning method. Prior works have either noted that it remains unclear whether EL2N reduces total training time [3] or assumed that EL2N cannot be used to accelerate training [4]. In this work, we innovatively adapt it to the dynamic data pruning setting and show its effectiveness.
Q5 The non-interactivity of HE
Private training with the proposed encrypted data pruning remains largely non-interactive compared with the MPC-based methods. This is because all HE computation is performed solely by the server. The client is only required to be online for less than 0.1% of the total training time, to handle the early stop signal and pruning mask. As shown in prior works like HETAL, these communications are both secure and necessary.
To sum up, we show that the squared EL2N and random pruning are less effective than the proposed HEFS. Additionally, the proposed HEFS and ciphertext-wise pruning are efficient at runtime. We sincerely hope our responses have addressed the reviewer's concerns.
Reference
[1] Bossuat, Jean-Philippe, et al. Efficient bootstrapping for approximate homomorphic encryption with non-sparse keys.
[2] Paul, Mansheej, et al. Deep learning on a data diet: Finding important examples early in training.
[3] Truong, Thao Nguyen, et al. KAKURENBO: adaptively hiding samples in deep neural network training.
[4] Qin, Ziheng, et al. Infobatch: Lossless training speed up by unbiased dynamic data pruning.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. Considering the authors' new clarification and overall contribution of the paper, I am willing to improve my score.
---
Rebuttal 2:
Comment: We sincerely appreciate the reviewer's thoughtful feedback and are grateful for the increased score. Thank you for your consideration and support. | Summary: This paper focuses on the scenario where the client encrypts the model and dataset with homomorphic encryption and outsources them to the server for training. It accelerates the training process through dynamic data pruning. This paper makes the following three contributions:
First, this paper is the first to use dynamic data pruning to accelerate model training in homomorphic encryption (HE) scenarios. Second, because using the plaintext data pruning method in the HE scenario incurs significant overhead, this paper proposes an HE-friendly method for evaluating the importance of data samples. Lastly, because of the high cost of sorting in HE, this paper proposes that the client undertake this part of the computation. Additionally, since a single SIMD ciphertext can contain multiple data samples, pruning may not reduce the number of ciphertexts, even though the samples within each ciphertext become more sparse. To address this issue, the paper proposes to combine several sparse ciphertexts to reduce HE computation.
Strengths: 1) This paper is the first to apply dynamic data pruning to accelerate model training in HE scenarios.
2) It introduces a HE-friendly important score to make data pruning more efficient.
3) This paper uses ciphertext-wise pruning to reduce the number of ciphertexts while keeping detailed information.
Weaknesses: 1) The HE-friendly score needs more explanation. In this paper, the score is introduced directly without any theoretical proof of its effectiveness.
2) The work is a bit incremental. Applying data pruning in the HE scenario doesn't seem very challenging, and there is no significant difference between data pruning in plaintext and ciphertext.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Is there any theoretical proof of the effectiveness of the proposed HE-friendly score?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer ADwT for his/her thorough reading of the manuscript and constructive comments.
Q1 The effectiveness of HEFS
The proposed HE-friendly score relies on the observation that the importance of a data sample can be quantified by its gradients [1]. We denote the input vector as $x\in\mathbb{R}^d$ where $d$ is the input dimension and the one-hot label as $y\in \{0,1\}^K$ where $K$ is the number of classes. We denote the logit outputs of the model as $f(x;w)$ and the prediction vector as $p(x;w)=\sigma(f(x;w))$, where $\sigma(\cdot)$ is the SoftMax function. We denote the cross-entropy loss as $\ell(p,y)=-\sum_{i=0}^{K-1}y^{(i)}\log p^{(i)}$.
We denote a minibatch of $B$ samples as $S=\{(x_i,y_i)\}_{i=0}^{B-1}$.
We denote a single sample's gradient with respect to the weights at time $t$ as $g_t(x,y)=\nabla_{w_t}\ell(p,y)$. The change of a sample's contribution to training at time $t$ can be quantified by the time derivative of the loss on the sample, i.e., $\Delta_t((x,y), S)= -\frac{d \ell(p,y)}{dt}$. Accordingly, we can evaluate the importance of a data sample $(x_k,y_k) \in S$ by investigating how removing $(x_k,y_k)$ from the minibatch changes the loss of the other samples. The impact of removing the sample $(x_k,y_k)$ is bounded by its gradients. More formally, for $\forall (x_i,y_i)\in S$ where $i\neq k$, we have:
$\| \Delta_t((x_i, y_i), S) - \Delta_t((x_i, y_i), S_{\neg k}) \|_2 \leq c \| g_t(x_k, y_k) \|_2 \cdots (1)$
We provide a proof of the correctness of Equation (1) as follows. By the chain rule, we have $\Delta_t((x_i, y_i), S) = -\frac{d \ell(p,y_i)}{dt} = -\frac{d \ell(p,y_i)}{dw_t}\frac{dw_t}{dt}$. When the weights are updated via SGD, we have $\frac{dw_t}{dt}=-\eta\sum_{(x_j,y_j)\in S}g_t(x_j,y_j)$ where $\eta$ is the learning rate. Accordingly, $\Delta_t((x_i, y_i), S) - \Delta_t((x_i, y_i), S_{\neg k}) = \eta\frac{d \ell(p,y_i)}{dw_t}\cdot\sum_{(x_j,y_j)\in S}g_t(x_j,y_j) - \eta\frac{d \ell(p,y_i)}{dw_t}\cdot\sum_{(x_j,y_j)\in S_{\neg k}}g_t(x_j,y_j) = \eta\frac{d \ell(p,y_i)}{dw_t}g_t(x_k, y_k)$. By the Cauchy-Schwarz inequality, we have $\| \Delta_t((x_i, y_i), S) - \Delta_t((x_i, y_i), S_{\neg k}) \|_2 \leq \eta \| \frac{d \ell(p,y_i)}{dw_t} \|_2 \| g_t(x_k, y_k) \|_2$. Since $\eta$ and $\frac{d \ell(p,y_i)}{dw_t}$ are independent of $k$, we can set $c = \eta \| \frac{d \ell(p,y_i)}{dw_t} \|_2$. Thus, we have proved the correctness of Equation (1).
The above bound can be simplified. Specifically, by the chain rule, we have $g_t(x,y)=\frac{d \ell(p,y)}{d f}\frac{d f}{d w_t}$. Given the number of classes $K$, the bound can be written in the logit-wise form as $\sum_{i=0}^{K-1}\| \nabla_{f^{(i)}}\ell(p,y)^T\nabla_{w_t}f^{(i)}(x;w_t) \|_2$.
Under the cross-entropy loss, the derivative with respect to the $i$-th logit is $p(x;w_t)^{(i)}-y^{(i)}$. It was observed that logit gradients $\{\nabla_{w_t}f^{(i)}(x;w_t)\}_{i=0}^{K-1}$ are generally orthogonal among classes [2,3]. Thus, the bound can be simplified as $\| p(x;w_t)-y \|_2$, which is the EL2N score.
The proposed HEFS further simplifies the computation to the $\ell_1$-norm of the error vector. We observe that the $\ell_1$-norm of the error vector has the same qualitative behavior as the $\ell_2$-norm. If a sample's prediction vector is largely similar to the label vector, it will have both a small EL2N score and a small HEFS score. This indicates that the sample is easier to learn and less informative for the training process. Our experiments also support our analysis. HEFS can effectively select informative data samples to accelerate training without compromising accuracy. Additionally, while computing the square root in EL2N is expensive in HE, HEFS can be computed in HE much more efficiently.
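The chain above can be checked numerically: for softmax with cross-entropy, the logit gradient is exactly the error vector $p - y$, and EL2N and HEFS are its $\ell_2$ and $\ell_1$ norms. A small numpy sketch (our illustration, not the paper's code):

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())   # shift for numerical stability
    return z / z.sum()

def cross_entropy(logits, y):
    return -np.sum(y * np.log(softmax(logits)))

def numerical_logit_grad(logits, y, eps=1e-6):
    # central finite differences of the loss w.r.t. each logit
    g = np.zeros_like(logits)
    for i in range(len(logits)):
        d = np.zeros_like(logits)
        d[i] = eps
        g[i] = (cross_entropy(logits + d, y) - cross_entropy(logits - d, y)) / (2 * eps)
    return g

logits = np.array([2.0, -1.0, 0.5])
y = np.array([1.0, 0.0, 0.0])
error = softmax(logits) - y          # analytic logit gradient: p - y
el2n = np.linalg.norm(error, 2)      # EL2N score
hefs = np.linalg.norm(error, 1)      # HEFS: l1 norm, HE-friendly (no sqrt)
```

The finite-difference gradient matches `error` to high precision, confirming that both scores can be read off the prediction and label vectors alone.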
Q2 Why encrypted data pruning is challenging
While the idea of leveraging data pruning to accelerate the private training seems straightforward, we note that encrypted data pruning differs from data pruning in the plaintext in several key aspects.
Computing metrics such as the cross-entropy loss and the $\ell_2$-norm is considered free in plaintext. However, evaluating the complex non-linear functions in HE is non-trivial. Evaluating a single square root function on a ciphertext can take up to 2 minutes. This implies that simple metrics like EL2N could negate the benefits of data pruning. We highlight that HEFS relies solely on the prediction vector and the label vector, which are directly available from the private training process. Evaluating the HEFS score only requires subtraction, max, and the rotation operation, which takes only 1.72~1.84 s for one ciphertext.
Another key difference is that removing the less important data samples directly reduces the computational overhead of training in plaintext. However, merely excluding less important data samples does not effectively reduce the overhead of private training. This is because the runtime of private training is determined by the number of ciphertexts, and one ciphertext can contain multiple data samples. Performing sample-wise data pruning leads to a large number of sparse ciphertexts. We introduce ciphertext-wise data pruning to effectively reduce the number of ciphertexts and boost the efficiency of private training.
The proposed data pruning methods are lightweight, taking as little as 0.4%~2.5% of total runtime, while effectively accelerating private training. We believe this work enables more practical private training in the real world. We thank the reviewer for the suggestions and will incorporate the detailed analysis of HEFS in a future version for better clarity.
Reference
[1] Paul, Mansheej, et al. Deep learning on a data diet: Finding important examples early in training.
[2] Fort, Stanislav, et al. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel.
[3] Fort, Stanislav, et al. Emergent properties of the local geometry of neural loss landscapes. | Summary: 1. The paper introduces a Homomorphic Encryption (HE)-based confidential training framework that enhances training efficiency through encrypted data pruning.
2. The paper proposes HE-Friendly Score (HEFS), an enhancement over the existing EL2N score, to efficiently assess the importance of encrypted data samples.
3. Due to the high complexity of sorting scores and calculating the pruning mask on the server, the paper introduces a method that generates data pruning masks with the assistance of the client, enabling the server to perform pruning.
4. The paper proposes a method for pruning at the ciphertext level to reduce sparsity in the encrypted data, thereby accelerating the training process.
5. The performance of HEFS, CAM, and CWP is evaluated on diverse datasets such as MNIST, CIFAR-10, Face Mask Detection, DermaMNIST, and SNIPS. The results are compared with the previous method, HETAL (ICML2023), to demonstrate improvements in training speed and accuracy.
6. The experimental results indicate that the proposed methods can accelerate confidential training by up to 16 times with minimal loss in accuracy.
Strengths: 1. This paper tackles the novel problem of accelerating confidential training through encrypted data pruning, a topic that appears to have not been previously explored in existing research.
2. The methodologies and experimental procedures are clearly explained, ensuring the reproducibility of the results by providing their code (although I have not tested the code yet).
3. Considering that HE training is a very challenging subject due to the computational complexity of operations on homomorphically encrypted data, it is noteworthy that the authors have implemented detailed techniques such as pruning within the homomorphic encryption framework for the first time. However, it is crucial that the design carefully considers both security and performance limitations.
Weaknesses: 1. There are concerns regarding the privacy threat setting in this paper. The focus is solely on the importance of the client's data privacy, without explicitly considering the server's model privacy. In other words, the server's model is assumed to be publicly available information. This assumption is reflected in the final step, where the client recovers the weights of the trained model and sends them back to the server.
2. If the server's model is publicly available, it would be more efficient for the client to process the data in plaintext after receiving the pretrained model. If the scenario involves a massive pretrained model, such as a large language model (LLM), which individual clients cannot train, then training such an LLM in an encrypted state would require at least 1000 times more computation on the server due to the difference in computational overhead between homomorphic encryption and plaintext operations. This level of computation may be unmanageable, and the final decryption of the model by the client would also be infeasible due to the enormous size (100 trillion parameters) of the model.
3. If the primary concern is the client's privacy, it would be significantly more efficient to have the client train on a pretrained model in plaintext, use federated learning, or adopt other methods rather than struggling with encrypted training on the server, as proposed in this paper. The bottom line is that if the server's model privacy is not considered in the security threat model, it is questionable whether this approach is practical or appropriate.
4. The HE.cmp operation does not yield a precise 1 or 0; rather, it produces a fractional value when the two compared values are similar. During the sorting process in Algorithm 1, swapping based on HE.cmp might introduce noise if the score differences are not substantial. This could affect performance. To avoid this, HE.cmp would need a high-degree approximation. The paper does not provide sufficient information on HE.cmp, and additional explanation would be beneficial.
5. Due to the overhead of homomorphic encryption operations, involving the client in the training process because of the complexity of sorting diminishes the method's utility. While the paper does not consider the server's model privacy, allowing the client to access intermediate values during training does not pose an additional security risk. However, in scenarios where the server's model privacy is a concern, this method could enable the client to gain critical information about the model. Increasing the client's role in plaintext processing during training could significantly reduce the server's burden and enhance overall performance. In extreme cases, the client could potentially handle the entire training process in plaintext. The paper needs to explain why the client should only assist with sorting.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is it true that the pruning mask is also encrypted? The distinction between encrypted and non-encrypted elements in the algorithm is not entirely clear. While an overline appears to indicate encrypted values, there are instances where this is not consistently applied.
In scenarios where the server does not send scores to the client, the mask would presumably remain encrypted. If this is the case, how does the server determine which parts to prune, and how does it use rotation to merge ciphertexts and create pruned ciphertexts? Is it assumed that the client must decrypt the data and create the mask in plaintext? If so, the paper should explicitly state this assumption.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: They addressed it adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1 Clarification on the threat model
We would like to clarify some misunderstandings about the privacy threats this paper focuses on. The proposed method protects both the data privacy and the model privacy. In our threat model, both training data and the model weights belong to the client. The client does not have adequate resource or knowledge for training and outsources the training task to the server. Neither the training data nor the model weights are revealed to the server at any time. All the intermediate features and gradients are also encrypted and protected by HE. This threat model is commonly used in private training works [1].
Weakness 1. Since the model belongs to the client, it is reasonable that the client can decrypt the model. Additionally, we note that the client does not need to send the decrypted model to the server. Since the model is already well-trained, the client can keep it for local use. Alternatively, if the client requires the server to provide private inference service on the trained model, it is possible to store the encrypted final model on the server side.
Weakness 2. As mentioned above, the model does not belong to the server and is not publicly available.
Weakness 3. As mentioned above, the goal and privacy threats are different from those in federated learning. In our threat model, the client outsources the training task to the server because the client does not have enough resources or knowledge for training. In the meantime, the gradients are encrypted and not revealed to the server. Federated learning typically needs local training on the client side and does not directly protect the gradients.
Q2 The security of the pruning mask
(1) Unencrypted mask. In response to weakness 5, we explain why client-aided masking is secure and reasonable. As explained above, both the data and the model belong to the client. Therefore, revealing the mask to the client does not pose a security risk to data privacy or model privacy. In our client-aided masking algorithm, the importance score is revealed to the client. The client performs a quick selection algorithm to find the score threshold and generate a pruning mask. This happens in the unencrypted state on the client side.
During the client-aided masking and ciphertext-wise pruning, the plaintext pruning mask is revealed to the server to enhance the efficiency of data pruning on the server side. The server does learn the sparsity information of the training dataset implied by the pruning mask. However, since all the training data and model weights are strictly encrypted on the server side, the privacy of the client's data and model is still protected. Similar to prior works [1], we allow revealing necessary meta information about training, such as the size of the dataset or ciphertexts, the early stop signal, and the pruning mask. Sharing this information does not pose a direct security risk to the client's data privacy or model privacy.
(2) Encrypted mask. We explain how the server could remove unimportant samples using an encrypted mask without client-aided masking. Performing data pruning without the client's aid, via an encrypted mask, is possible. The server can first compute the importance score in the encrypted state and generate the encrypted pruning mask as shown in Algorithm 1. With the encrypted pruning mask, the server can empty the slots of unimportant samples via homomorphic multiplication and generate sparse ciphertexts. However, since the mask is encrypted, the server cannot learn which samples should be pruned. The server cannot merge the sparse ciphertexts via rotation either. This is because the sparsity of each ciphertext remains unknown to the server, and the server cannot determine how to rotate each ciphertext to create pruned ciphertexts. Therefore, while data pruning with an encrypted mask is possible, it incurs prohibitive overhead during homomorphic sorting in practice and is barely useful.
Our experiments are indeed based on the client aided masking and the unencrypted pruning mask. We will state this more explicitly for better clarity.
Q3 The approximated comparison in CKKS
The comparison function in the CKKS scheme is approximate rather than exact. We approximate the sign function to compute the comparison function and the max function in Equation (3). Both the max and the comparison function can be constructed from the sign function and have similar complexity [2]. Specifically, the max function can be defined as $max(a,b) \approx \frac{(a+b)+(a-b)\,sign(a-b)}{2}$, where $sign(x)$ returns 1 if $x>0$, 0 if $x==0$ and $-1$ if $x<0$. We use a composition of polynomials to approximate the sign function, i.e., $sign(x) \approx g(f(x))$, where $f(x) = 8.83133072x - 46.45750399x^3 + 83.02822347x^5 - 44.99284778x^7$ and $g(x)= 3.94881885x - 12.91030110x^3 + 28.08653622x^5 - 35.59691490x^7 + 26.51593709x^9 - 11.41848894x^{11} + 2.62558444x^{13} - 0.24917230x^{15}$. The approximation error of the max function, $e = \left| \max(a,b) - \frac{(a+b) + (a-b)\,\text{sign}(a-b)}{2} \right|$ with the approximated sign, is bounded by $2^{-8}$ [2]. While increasing the number of compositions and the degree of each polynomial can lead to an even smaller bound, we find that the above approximation is precise enough for computing the HEFS score. Prior works [1] find that polynomials of even lower degree are sufficient for private training.
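The composite approximation can be sketched in plain Python. One assumption on our part: with the coefficients as quoted, applying the degree-7 polynomial first (i.e., computing $g(f(x))$) is the order that numerically reproduces $\mathrm{sign}(x)$ on $[-1,1]$ away from zero, so that is the order used below:

```python
# minimax polynomials quoted in the rebuttal (degree 7 and degree 15)
def f(x):
    return 8.83133072*x - 46.45750399*x**3 + 83.02822347*x**5 - 44.99284778*x**7

def g(x):
    return (3.94881885*x - 12.91030110*x**3 + 28.08653622*x**5
            - 35.59691490*x**7 + 26.51593709*x**9 - 11.41848894*x**11
            + 2.62558444*x**13 - 0.24917230*x**15)

def approx_sign(x):
    # composite polynomial approximation of sign(x) on [-1, 1];
    # the lower-degree polynomial is applied first (our assumption)
    return g(f(x))

def approx_max(a, b):
    # max(a, b) = ((a + b) + (a - b) * sign(a - b)) / 2, with sign replaced
    # by its polynomial approximation; |a - b| must be scaled into [-1, 1]
    d = a - b
    return ((a + b) + d * approx_sign(d)) / 2
```

Since both polynomials are odd, the approximation is symmetric about zero; inputs very close to zero need further compositions to sharpen the transition.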
We hope our responses address the reviewer's concerns. We will add the details about the max and comparison function to Appendix in future version for better clarity.
[1] Lee, Seewoo, et al. HETAL: efficient privacy-preserving transfer learning with homomorphic encryption.
[2] Lee, Eunsang, et al. Minimax approximation of sign function by composite polynomial for homomorphic comparison. | Summary: This paper presents a method for pruning data in a utility-preserving way under homomorphic encryption, evaluating the method to demonstrate that the savings from training on pruned data outweighs the costs of encrypted data pruning computations. The methods for determining how relevant data items are to improving training performance are similar in spirit to those in the active/few-shot learning literature, but the paper does not explicitly draw this parallel.
Although the paper offers a concrete threat model related to "private training" (where model training is outsourced to a third party), several aspects of the threat model seem not to achieve the stated goal of limiting the outsourcing party/vendor's ability to learn useful facts about the data. As the current state-of-the-art is to bind third-party infrastructure providers contractually, I'd like to see a map from how the approach in this paper, in a threat model that is weak, could be strengthened to a threat model that would obviate these purely contractual limitations. For example, there are standard constructions to move from honest-but-curious models to models where the third party is more adversarial (although some reliance must always be assumed in the case of outsourced computation). More difficult are the problems of separating data flows from inferences about the content of training data through various sorts of indirect leakage (training time, ciphertext size, dependence of the presence/absence of the allowed "early stop" signal, etc). Although this problem is difficult in general, I suspect there are ways to organize the threat model here so that it can make stronger claims around solving them. At a minimum the threat model should declare indirect data flow out of scope while recognizing that it can leak data items to the outsourcing partner.
A few structural aspects of the paper confuse an otherwise solid presentation:
* A core assumption in the setup of the model in which the protocol is used is that the outsourcing partner will compare the sorted data items (under encryption) to a threshold importance score determined by the utility loss of pruning, but how this threshold is set/computed is left open (an experiment uses an ex vacuo value of 0.9 for this parameter, but why this value was chosen is not explained even in this concrete context!).
* Although it is clear that the goal is only to outsource training, it is not clear that an organization without the infrastructure capability for training will have infrastructure capability for things like serving - this should be declared out of scope or unpacked/discussed a bit.
* Approximations made to simplify computation under encryption, such as the replacement of $\ell_2$ with $\ell_1$ norm at 204-205, are not directly evaluated or justified.
* In general, the grouping of samples into batch ciphertexts is assumed but not explained - the paper should explain its necessity and the benefits it provides vs. the simple solution of putting each sample in its own ciphertext.
Although the evaluation is valuable in supporting the core claims of the paper, there are some structural issues there as well: although the experiments characterize the tradeoff between utility and pruning ratio, this tradeoff is very different for the two example datasets. How general ought a tradeoff curve for this be? How data dependent might it be? Relatedly, might pruning affect performance for different classes differentially, especially in situations where class balance is poor? Much of the "fairness" literature focuses on ways that aggregate analysis breaks down when distinct classes might or ought to be treated differently by the model. Does this affect the analysis? Could it in some cases?
Last, I observe that 4MB of communication overhead for a tiny database (CIFAR-10, 43,750 samples) is manageable but there is no discussion of scaling here. Are the proposed applications small like this? If not, at what point is the overhead too much? Does scale cause this to break down? I recognize here that the paper inherits the inherent inefficiency of FHE constructions.
Strengths: * The research question is well motivated and the solution is a useful tool for making private outsourced training a more realistic option. I am not well versed enough in the FHE training literature to evaluate the novelty of this specific approach, but the core idea is sound.
* Evaluations justify the theoretical claims nicely, even where improvements are available.
* The overall argumentation is strong, even if some details are never defined or explained as noted in the summary.
Weaknesses: * The summary notes several places where definitions can be sharpened or details can be re-ordered to improve the presentation.
* There are a handful of places where some copyediting would improve the presentation, although the high-level argument structure is in general strong.
* The key question of how the pruning ratio is determined must be explained, since it is an input to most of the provided private training algorithms and also a key determinant of the model in which the protocol is meant to be used.
Technical Quality: 3
Clarity: 3
Questions for Authors: * How does the pruning ratio get determined in practice? Is it necessary to do many private training runs and measure utility on the resultant models? How does this cost compare to the cost of non-private training?
* Is the scale problem being swept under the rug at 237-240 an issue for larger data sets, or is the idea that this should only apply to data sets at the scale demonstrated? How does communication overhead scale as the training runs become larger?
* Can the issue of indirect data leakage be managed somehow, or must it simply be assumed away in the security model? It's a hard problem, so solving it may well be out of scope here, but also past computation-on-encrypted-data methods have failed catastrophically due to indirect leakage problems so it's necessary to say something about it. What can or should the paper say?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As noted in the summary, there are places where limitations could be more clearly expressed, for example with regard to indirect data leakage, with regard to the determination of the critical pruning ratio parameter, and also with regard to aggregated analysis and generality of the pruning/utility tradeoff.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 98ig for his/her careful reading of the manuscript and constructive comments.
Q1 Deciding the pruning ratio.
In practice, the pruning ratio should be determined by the client during the client-aided masking process. As shown in Figure 1 (in rebuttal pdf), a moderate pruning ratio around 0.5 achieves accuracy close to training with the full dataset across different datasets. Therefore, like the plaintext data pruning works, one straightforward strategy can be choosing a fixed pruning ratio [1]. Recent works show that one can also choose a set of pruning ratios for different epochs [2]. Moreover, the server can store checkpoints after each data pruning. Since the client can observe the test accuracy, the client can also roll back to a certain checkpoint if needed. One should avoid running private training multiple times, because private training is orders of magnitude more expensive than non-private training.
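To make the fixed-ratio strategy concrete, below is a plaintext sketch of the client-side mask computation (the function and variable names are illustrative, not our implementation; in the protocol, the client computes such a mask from importance scores during client-aided masking):

```python
import numpy as np

def client_pruning_mask(scores, pruning_ratio):
    """Boolean mask keeping the (1 - pruning_ratio) fraction of samples
    with the highest importance scores (illustrative plaintext sketch)."""
    n = len(scores)
    n_keep = n - int(round(pruning_ratio * n))
    keep = np.argsort(scores)[::-1][:n_keep]  # most important samples first
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    return mask

scores = np.array([0.9, 0.1, 0.5, 0.7, 0.3, 0.2])
mask = client_pruning_mask(scores, pruning_ratio=0.5)
# Keeps the three highest-scoring samples: indices 0, 3, and 2.
```

The pruning ratio itself remains an empirically chosen input (e.g., 0.5), as discussed above; this sketch only illustrates how a chosen ratio translates into a keep/prune decision.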
Q2 The communication scalability.
The proposed data pruning methods are scalable to large datasets. The communication complexity of the proposed method is linear with respect to the number of total data samples and the frequency of the pruning. For ImageNet, performing one encrypted data pruning over 1,281,167 training samples would only incur around 40MB of communication overhead. Note that data pruning is performed every $\Delta_{\tau}$ epochs, so the overall communication is a few multiples of 40MB.
The major challenge of scaling to larger datasets is the inefficiency of the HE constructions. As far as we know, the latest HE-based private training works still focus on small datasets like CIFAR-10 and MNIST [3]. Securely training on ImageNet can still take years even with GPU acceleration and more efficient cryptographic tools [4].
Q3 Threat model and indirect data leakage.
The threat model can be strengthened via additional integrity techniques like zero-knowledge proofs (ZKP). HE alone cannot ensure the integrity of the computation on the server side and we can introduce ZKP to verify the correct execution on the server side. Making HE verifiable via ZKP is an active research topic that is currently being explored [5]. In addition to ZKP, other integrity techniques such as Message Authentication Codes and Trusted Execution Environments can also be utilized to defend against malicious adversaries.
Protecting the indirect information, such as the training time and the size of the dataset and ciphertext, is indeed out of scope for this work. In the meantime, we believe the security risk of such indirect information leakage is limited, since all data and model weights are strictly encrypted during the whole training phase. In this work, we focus on protecting client’s data privacy and model privacy in the semi-honest setting, like prior works on private training [3]. Such a setting is reasonable because as a service provider, the server is motivated to adhere to the protocols to guarantee the quality of service. We will make more explicit declarations for better clarity.
Q4 Generality issue
We show the pruning/utility tradeoff on more datasets in Figure 1 (rebuttal pdf). The accuracy does differ for different datasets. We notice that extremely high pruning ratios like 0.99 only work for the CIFAR-10 dataset and do not generalize well. On the other hand, moderate pruning ratios demonstrate a reasonable pruning/utility tradeoff across all datasets. When we keep more than 10% of the data, the accuracy is only 0.05%~0.31% lower than training with the full dataset. When we keep around 50% of the data, the accuracy can be even higher.
Q5 Explanations on details
We thank the reviewer’s thorough reading and constructive advice on the copyediting.
(1) The client's capability. The client does not have adequate resources or knowledge for training but has basic computational resources for at least encryption and decryption. The encryption and decryption algorithms are lightweight in HE, taking only 86ms and 49ms for one ciphertext, respectively. Accordingly, it is reasonable to assume that the client can compute the pruning mask, which takes as little as 15ms for the CIFAR-10 dataset. Similarly, the client should be capable of computing the early stop signal [3].
(2) The effectiveness of the HEFS score. A sample's importance can be quantified by the $\ell_2$-norm of its gradients, which can be approximated by the $\ell_2$-norm of the error vector [1]. The more similar a sample's prediction vector is to the label vector, the smaller its EL2N score. This means the sample is easier to learn and less informative for the training process. From this perspective, HEFS has the same properties as EL2N. Additionally, HEFS can be computed in HE more efficiently.
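A plaintext sketch of this ranking property follows (the $\ell_1$-based score below is our own illustrative stand-in for an HE-friendly score, not the exact HEFS formula):

```python
import numpy as np

def el2n_score(probs, onehot):
    # EL2N [1]: l2-norm of the error vector (prediction minus one-hot label)
    return np.linalg.norm(probs - onehot)

def l1_error_score(probs, onehot):
    # HE-friendly stand-in using the l1-norm, which avoids the square root
    # that is expensive to evaluate under HE (illustrative only)
    return np.abs(probs - onehot).sum()

label = np.array([1.0, 0.0, 0.0])
easy = np.array([0.90, 0.05, 0.05])   # prediction close to the label
hard = np.array([0.40, 0.35, 0.25])   # prediction far from the label
# Both scores rank the easy (less informative) sample below the hard one.
```

Both scores assign lower importance to samples whose predictions already match their labels, which is exactly the property exploited for pruning.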
(3) Batching data samples in ciphertext. Single Instruction-Multiple Data (SIMD) is a commonly used technique in HE. With SIMD, a single ciphertext has many slots, i.e., 32,768 slots in our setting. On the other hand, a single data sample occupies only 768 slots in the transfer learning setting, 784 slots for the MNIST dataset, and 3,027 slots for the CIFAR-10 dataset. Packing multiple samples in one ciphertext is a commonly used strategy to reduce the number of ciphertexts as well as the number of HE operations [3,6].
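The benefit of packing can be sketched with simple slot arithmetic (slot counts as in our setting; the helper function and sample counts are illustrative):

```python
import math

SLOTS = 32_768  # slots per ciphertext in our setting

def num_ciphertexts(n_samples, slots_per_sample):
    """Ciphertexts needed when as many whole samples as fit are packed
    into each ciphertext, vs. one ciphertext per sample."""
    per_ct = SLOTS // slots_per_sample
    return math.ceil(n_samples / per_ct)

# A CIFAR-10 sample occupies 3,027 slots -> 10 samples fit per ciphertext,
# so 50,000 samples need 5,000 ciphertexts instead of 50,000.
packed = num_ciphertexts(50_000, 3_027)
naive = 50_000  # one-sample-per-ciphertext baseline
```

Since the number of HE operations scales with the number of ciphertexts, packing directly reduces both computation and communication by roughly the per-ciphertext sample count.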
We will incorporate the reviewer's suggestions into the future versions for better clarity.
[1] Paul, Mansheej, et al. Deep learning on a data diet: Finding important examples early in training.
[2] Truong, Thao Nguyen, et al. KAKURENBO: adaptively hiding samples in deep neural network training.
[3] Lee, Seewoo, et al. HETAL: efficient privacy-preserving transfer learning with homomorphic encryption.
[4] Jawalkar, et al. Orca: FSS-based Secure Training and Inference with GPUs.
[5] Viand, Alexander, et al. Verifiable fully homomorphic encryption.
[6] Crockett, Eric. A low-depth homomorphic circuit for logistic regression model training.
---
Rebuttal Comment 1.1:
Comment: I have read the response. While it addresses some of my concerns, I still struggle with some fundamental issues such as how "the client [should determine] the pruning ratio" when "The client does not have adequate resource or knowledge for training".
While it is true that the threat model can be strengthened by moving to verified outsourced computation techniques, that is a general problem with outsourced computation/FHE and wasn't my point, which was more that there's a lot of "side channel" information available to the outsourcing provider. What are the risks of this? Cryptographers have demonstrated that risks to data confidentiality can be enormous when even small channels exist. Verified computation can eliminate this by allowing the client to confirm that only desired computations are performed (during reported activities, but what about after or on the side?). If I give you ciphertext and you give me a ZKP that you did a certain computation using that ciphertext, that tells me zero information about what _else_ you did with that ciphertext. This is the problem. It's a very hard problem, and I don't mean that you have to solve it, but rather that you should acknowledge the limitation and declare it out of scope.
---
Rebuttal 2:
Comment: We appreciate the reviewer's thorough reading of our response. We first explain the choice of pruning ratio in more detail. Similar to data pruning studies in plaintext, the pruning ratio is set to empirical values such as 0.5 or 0.3. Manually setting the pruning ratio does not require client-side training or expert knowledge. As shown in our previous responses and also data pruning studies in plaintext [1] (Section 3.3), a moderate pruning ratio can achieve close generalization performance compared to the full dataset. The client can choose an initial pruning ratio around 0.5, and adjust it according to the size of the training dataset, the quality of the data, and the budget for the outsourced training service. Moreover, the client can also adjust the pruning ratio during training according to the test accuracy. If you still have concerns about the pruning ratio, please let us know, and we will be happy to respond. We will clarify the choice of pruning ratio in our revision.
We thank the reviewer for pointing out the side-channel information available in the threat model. We will discuss the potential side-channel risks and explicitly declare that securing against them is out of scope for this work in the next version of the manuscript.
[1] Truong, Thao Nguyen, et al. KAKURENBO: adaptively hiding samples in deep neural network training. | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive and insightful feedback.
We are glad that all reviewers unanimously agree that our encrypted data pruning methods are of great significance for improving private training. We appreciate the reviewers recognizing the novelty of our work as the first framework that enables data pruning in private training (FEVE, LFsM). Reviewers find our work and evaluation effective, and our presentation clear (98ig, FEVE, LFsM). The reviewers also find that our work successfully boosts the efficiency of private training without compromising either accuracy or privacy. This paper makes private training more practical in real-world applications.
Performing data pruning in the encrypted state is non-trivial. Directly adapting plaintext pruning methods is not viable. To start with, the importance score in plaintext often involves complex nonlinear functions that are prohibitive in HE. Moreover, sorting samples is considered free in plaintext, but sorting in the encrypted state is expensive. Most importantly, the sample-wise pruning in plaintext leads to sparse ciphertexts and fails to effectively accelerate private training. We propose a series of HE-aware optimizations including HEFS, client-aided masking, and ciphertext-wise pruning. Our methods boost the efficiency of private training significantly without sacrificing accuracy.
We thank the reviewers for their interest in many aspects of our work.
**(1) Generality and scalability:** We have shown that the proposed encrypted data pruning methods are highly general and scalable for different datasets and pruning ratios.
**(2) Lightweight design:** We present a more detailed runtime breakdown to show the proposed methods are extremely efficient at runtime, taking as little as 0.4% to 2.5% of the total runtime of private training.
**(3) Analysis on Privacy and Efficiency:** We present an in-depth analysis of the privacy and efficiency of the proposed methods. We guarantee data privacy and model privacy simultaneously via HE throughout training and address security issues related to the pruning mask. We demonstrate the mechanism of using the proposed HEFS score to identify the most informative samples in the encrypted state. Meanwhile, we leverage HE-friendly scores, efficient sorting, and ciphertext-wise pruning techniques to construct lightweight yet effective strategies that significantly enhance the efficiency of private training.
Pdf: /pdf/0b51f39c9e38aa416ef1fda82b9bc5b9b5876c4a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning diffusion at lightspeed | Accept (oral) | Summary: This paper considers learning diffusion dynamics from observational data of populations over time, identified as learning the energy functional in Equation 3. Past research has confronted this inverse problem via complex bilevel optimization, limited to potential energies. This paper proposes an alternative model JKOnet* that can work with potential, internal, and interaction energies, efficiently minimizes a quadratic loss instead of a complex bilevel optimization, has much lower computational complexity, and outperforms baselines in simulations. A variant for linearly parameterized functionals has a closed-form solution. The paper's new method reconsiders the JKO scheme using first-order optimality conditions, decomposing the problem into first computing optimal transport plans between adjacent populations and then optimizing a loss for fixed plans.
Strengths: - Inferring diffusion dynamics from observational data is a difficult and significant problem for which this paper appears to provide a solid contribution. The paper substantially improves upon JKOnet in terms of multiple directions: better performance (Figure 3), simpler optimization objective (Equation 11), better scalability and efficiency (e.g. Table 1, Section 4.2), and improved generality (Table 1, Section 4.3). These dimensions are analyzed in experiments across a range of different energy functionals, where the gains are shown in log-scale displaying orders of magnitude improvement. The paper makes a convincing argument for using JKOnet* over JKOnet.
- The methodology appears quite strong, well-motivated, and original, with solid intuition given by the authors throughout the paper.
Weaknesses: Minor weaknesses:
- While the results are strong, occasionally the language feels too imprecise. For example, "runs at lightspeed" seems inaccurate compared to "runs very efficiently". The authors also mention that they rely upon weeks-old advancements in optimization in the abstract which seems unneeded.
- The paper is generally very well-written except for the introduction which could use editing. It introduces a lot of terminology and details from past research. Similarly, Figure 1 is referenced multiple times including in the introduction but it was hard to understand until after reading Section 3.
- The construction of the optimal transport plans does not seem to be included in the computational complexity comparisons. While this is computed once for JKOnet*, it is additional expense over JKOnet.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. What is JKOnet_l in Table 1?
2. In Section 4.2, the authors conclude that JKOnet* is well-suited for high-dimensional tasks. Does this include computing the optimal transport maps?
3. The discussion in Figure 3 in the text focuses primarily on the speed improvement, yet the performance gains are also quite large, including seemingly between JKOnet* and JKOnet*_l. Can the authors comment on why the linear parameterization was useful in their experiments?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Limitations are adequately addressed in Section 5
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Weakness 1
We appreciate the suggestions to improve the exposition and agree with the reviewer's comment. Specifically, we now removed "phenomenal", "runs at lightspeed" and "few weeks old advancements" from the abstract and introduction, rephrasing to be more factual in the exposition and focusing on the contributions. Thanks!
### Weakness 2
We agree that the introduction blends with the related works rather abruptly. We thus prepared a revised version in which we smoothed it, gradually adding information for the reader less familiar with the literature.
For instance, we will add the paragraph "However, the JKO scheme entails an optimization problem in the probability space. Thus, the problem of finding the energy functional that minimizes a prediction error (w.r.t. observational data) takes the form of a computationally-challenging infinite-dimensional bilevel optimization problem, whereby the upper-level problem is the minimization of the prediction error and the lower-level problem is the JKO scheme." to better introduce the reader to what the bilevel optimization problem is about.
We also removed the focus from Figure 1 in the introduction.
### Weakness 3
We thank the reviewer for pointing out this difference, which in fact is a strength of our method which we did not highlight appropriately and we hope we now did in the Remark 3.6 we prepared for the next revision of the paper. In a nutshell, it is true that $\mathrm{JKOnet^*}$ requires the construction of the optimal transport couplings beforehand. However, $\mathrm{JKOnet}$ constructs a new optimal transport plan at each iteration depending on the current estimate of the potential, whereas $\mathrm{JKOnet^*}$ needs to do so only once, at the beginning. We prepared a revision of the paper in which we added the time required to compute the optimal transport couplings in Section 4.1 ($0.03 \pm 0.01$s), and we provided an analysis of the comparison between Sinkhorn and plain linear programming for our scope in the newly added Figure 10 in Appendix C.2, which we now expanded (we report Figure 10 in the PDF attached to this rebuttal, and we will include the extended version of the Appendix C.2 for the next revision of the paper). We also better argued why the scaling to high-dimensions remains unaffected by the computation of the optimal transport couplings: the dimensionality affects only the construction of the cost matrix in the linear program, and otherwise the computational complexity of the linear program only relates to the number of particles. When the number of particles grows, one can apply the same batching applied in other methods to pre-compute the couplings. We add details regarding these practical considerations in the revised Appendix C.2. Finally, following the comments from reviewer vNRb, we introduced a real-world case study on learning and predicting molecular processes (see Figure 5 and the comparison table in the attached document). 
This way, we hope to strengthen our contributions not only by achieving state-of-the-art results on a real-world application but also by doing so in under a minute of training (including the computation of the optimal transport couplings), compared to the hours required by the other methods.
### Question 1
Thanks for pointing out the notation confusion. We defined $\mathrm{JKOnet^*_l}$ only later on, so we now introduce it in the caption to clarify that it refers to the linear parametrization.
### Question 2
Thanks for pointing this out. We agree that this is an important point and we will discuss it in the revised version of the paper. In particular, the dimensionality affects only the construction of the cost matrix in the linear program associated with the optimal transport problem, and otherwise the complexity is only related to the number of particles. Specifically, the time required to construct the cost matrix scales linearly with the dimension, and in practice it is minimal and dwarfed by the actual solution of the linear program. When the number of particles grows, one can apply the same batching that is applied in other methods to pre-compute the couplings. We add details regarding these practical considerations in the revision of Appendix C.2. Please see also point 3 above.
### Question 3
Thanks for observing the performance gains and pointing out this aspect. We prepared a revision of Section 4.1 in which we added a paragraph highlighting also the performance gains and discussing linear vs non-linear. In particular, the linear approximation has optimality guarantees, as long as the features are sufficiently rich. In high dimensions, however, the choice of feature is challenging and, thus, we recommend resorting to non-linear parametrizations.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed response and additional experiments in the rebuttal, and continue to recommend paper acceptance. | Summary: The authors study diffusion processes from the perspective of Wasserstein gradient flows. Based on the recent fixed-point characterisation for Wasserstein proximal operator methods, they introduce Jordan-Kinderlehrer-Otto (JKO) type methods for learning potential and interaction energies that govern the diffusion process. Such methods are assuming that a sample of the population distribution at each time step is at hand (not necessearily obtained by tracking individual particles) implying important applications across various fields. While theoretical novelties are present (w.r.t. paper [26] that lies in the foundation of this work), the main contribution is the overall methodology for learning diffusion processes.
Strengths: The paper is, besides minor issues reported below, excellently written - very clear, precise, and intuitive, with well-balanced technical details between the main text and the appendix. Existing ideas are neatly combined to obtain significant improvements of the JKO-type methods, and extensive empirical evaluation is presented. The proofs seem correct and well-written.
Weaknesses: While I do not find important weaknesses, I feel that the following small issues can be addressed to further improve readability:
1. When addressing content presented in the appendix it would be good to refer to the section, e.g. see Figure 6 in Appendix A.
2. It would be good to say what $\rho_t$ is in Example 2.1
3. While Table 3.1 reports per-epoch complexity for all the methods, it would be important to note that JKOnet$^*$ has additional computational complexity for solving $T$ OT problems of size $N$ in $d$ dimensions. A detailed remark on the initial computational complexity, depending on the algorithm used, should be reported.
4. In Section 4 it would be helpful to introduce the problems, that is to better explain the task of each experiment and the role of functionals ($V(x)$ ?!) appearing in Appendix F. Maybe giving an example on Styblinski-Tang functional appearing in Figures 2, 3 and 4, and then referring to other ones by their names and/or reference equations.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. In the implementation of the method, a priori computed optimal transport plans are obtained by solving entropy-regularised OT via Sinkhorn-type algorithms or some other methods?
2. What do you think about the applications and/or limitations of the JKOnet$^*$ for the setting of long-trajectories to infer the behaviour in equilibrium, e.g. detection of meta-stable states of Langevin dynamics?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations are addressed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Weakness 1
We thank the reviewer for the suggestion. We prepared a revision of the paper in which we added a reference to the appendix when referencing to content related to the appendix.
### Weakness 2
Good catch, thank you! We prepared a revision of the paper in which we added the definition of $\rho$ below the Fokker-Planck equation.
### Weakness 3
We thank the reviewer for pointing out this difference, which in fact we believe to be a strength of our method which we did not highlight appropriately. To this end, we included a dedicated remark, Remark 3.6, for the next revision of the paper. In a nutshell, it is true that $\mathrm{JKOnet^*}$ requires the construction of the optimal transport couplings beforehand. However, $\mathrm{JKOnet}$ constructs a new optimal transport plan at each iteration depending on the current estimate of the potential, whereas $\mathrm{JKOnet^*}$ needs to do so only once, at the beginning. We prepared a revision of the paper in which we added the time required to compute the optimal transport couplings in Section 4.1 ($0.03 \pm 0.01$s), and we provided an analysis of the comparison between Sinkhorn and plain linear programming for our scope in the newly added Figure 10 in Appendix C.2, which we now expanded (we report Figure 10 in the PDF attached to this rebuttal, and we will include the extended version of the Appendix C.2 for the next revision of the paper). We also better argued why the scaling to high dimensions remains unaffected by the computation of the optimal transport couplings: the dimensionality affects only the construction of the cost matrix in the linear program, and otherwise the computational complexity of the linear program only relates to the number of particles. When the number of particles grows, one can apply the same batching applied in other methods to pre-compute the couplings. We add details regarding these practical considerations in the revised Appendix C.2. Finally, following the comments from reviewer vNRb, we introduced a real-world case study on learning and predicting molecular processes (see Figure 5 and the comparison table in the attached document). 
This way, we hope to strengthen our contributions not only by achieving state-of-the-art results on a real-world application but also by doing so in under a minute of training (including the computation of the optimal transport couplings), compared to the hours required by the other methods.
### Weakness 4
Good point, thank you for the comment. We have a dedicated section in Appendix B, which we now expanded to discuss the prediction schemes as well, and we have added an introduction to the problems in the experimental section. In particular, we added the data-generation equation, in which the role of the functionals $V(x)$ is apparent: $x_{t+1} = x_t - \tau \nabla V(x_t)$. We also added the name and equation of each functional in the figures where it was not listed (e.g., Figure 2).
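For concreteness, a minimal sketch of this data-generation scheme with the Styblinski-Tang potential follows (the step size, population size, and initialization are illustrative choices, not our exact experimental setup):

```python
import numpy as np

def styblinski_tang_grad(x):
    # Gradient of V(x) = 0.5 * sum_i (x_i^4 - 16 x_i^2 + 5 x_i),
    # applied coordinate-wise
    return 0.5 * (4 * x**3 - 32 * x + 5)

def evolve(particles, grad_V, tau, steps):
    """Explicit discretization x_{t+1} = x_t - tau * grad V(x_t),
    applied independently to every particle of the population."""
    trajectory = [particles]
    for _ in range(steps):
        particles = particles - tau * grad_V(particles)
        trajectory.append(particles)
    return trajectory

rng = np.random.default_rng(0)
pop0 = rng.uniform(-2.0, 2.0, size=(1000, 2))  # initial population in 2-D
traj = evolve(pop0, styblinski_tang_grad, tau=0.01, steps=50)
# traj[t] is the observed population snapshot at time t; the learning
# task is to recover V from such snapshots alone.
```

Each snapshot plays the role of an observed population; no particle identities are needed by the learning method.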
### Question 1
Thanks for the question. In our implementation, we solved the optimal transport problems via plain linear programming (using the POT python library). We prepared an updated version of Appendix C.2, to be included in the revision of the paper, that contains an analysis of the impact of different solution algorithms on the final outcome in terms of computational time during pre-processing and Wasserstein error (see Figure 10 in the attached PDF). In particular, we conclude that, as long as the couplings are close to the correct one, the algorithm used to compute them does not impact the performance of $\mathrm{JKOnet^*}$. Since small regularizers slow down the Sinkhorn algorithm, we opted to directly solve the linear program (without regularization). In general, the solver choice can be considered an additional knob that researchers and practitioners can tune when deploying $\mathrm{JKOnet^*}$. Please see also point 3 above.
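A minimal sketch of this pre-processing step follows (we use SciPy's assignment solver here in place of the POT library's LP solver; for equal-size uniform empirical measures the two solve the same linear program, whose optimum is a permutation coupling):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_coupling(x, y):
    """Exact OT coupling between two equal-size uniform empirical measures.
    With uniform marginals the linear program reduces to an assignment
    problem, solved here with the Hungarian algorithm; only the cost
    matrix construction depends on the dimension d."""
    n = len(x)
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared Euclidean
    rows, cols = linear_sum_assignment(C)
    P = np.zeros((n, n))
    P[rows, cols] = 1.0 / n  # permutation coupling with uniform weights
    return P

x = np.random.default_rng(0).normal(size=(50, 2))
y = x + 0.1  # pure translation of the population
P = ot_coupling(x, y)
# For a pure translation under the squared-Euclidean cost, the optimal
# coupling matches every point to its own shifted copy.
```

This computation is performed once per pair of consecutive snapshots before training, which is why the coupling algorithm can be swapped freely without touching the rest of the pipeline.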
### Question 2
Once an energy functional is learned through $\mathrm{JKOnet^*}$, the equilibrium state of the system can be inferred in two ways. A simple approach consists of running sufficiently many iterations of the JKO scheme until an equilibrium is reached. Alternatively, the equilibrium state is well-known to be the minimum of the energy functional. Thus, the equilibrium state can be inferred by computing the probability distribution which minimizes the energy functional, using tools from optimization in the probability space.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of the rebuttal
Comment: I thank the authors for their rebuttal. I remain confident of the quality of their paper, suggest the acceptance and keep my score. | Summary: This paper introduces JKOnet*, a new method for learning diffusion processes from data. It uses first-order optimality conditions of the JKO scheme instead of complex bilevel optimization. JKOnet* can recover potential, interaction, and internal energy components of diffusion processes. The authors provide theoretical analysis and experiments showing JKOnet* outperforms baselines in accuracy, speed, and ability to handle high-dimensional data. They also derive a closed-form solution for linearly parameterized functionals. JKOnet* offers improved computational efficiency and representational capacity compared to existing approaches for modeling diffusion dynamics from population data.
Strengths: - Develops JKOnet*, a method using first-order optimality conditions of the JKO scheme to learn diffusion processes, avoiding bilevel optimization and improving computational efficiency.
- Provides theoretical analysis and proofs for JKOnet*, including a closed-form solution for linearly parameterized functionals, backed by comprehensive experiments across various test functions.
- Demonstrates improved performance in terms of Wasserstein error and computation time compared to existing methods like JKOnet, especially in high-dimensional settings.
- Enables recovery of potential, interaction, and internal energy components of diffusion processes, expanding the model's applicability to more complex systems and improving interpretability.
Weaknesses: - The experimental evaluation is limited to synthetic datasets. Real-world data applications would strengthen the practical relevance of the method.
- While the paper discusses limitations, it does not thoroughly explore potential failure cases or boundary conditions where JKOnet* might underperform.
- The paper does not provide a comprehensive comparison with other recent approaches in learning diffusion processes beyond JKOnet, which could provide broader context for the method's improvements.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The authors demonstrate JKOnet*'s performance on synthetic datasets. How well does the method perform on real-world diffusion processes? Additional evaluations on empirical data would help understand the method's practical applicability.
- The paper focuses comparison mainly with JKOnet. How does JKOnet* compare to other recent approaches in learning diffusion processes?
- In Section 3.4, the authors discuss different parameterizations. How sensitive is JKOnet* to the choice of neural network architecture for the non-linear parameterization case?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author discusses limitations in section 5
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Weakness 1
We thank the reviewer for suggesting a way to strengthen the presentation of our contributions.
We deployed our method to learn the diffusion dynamics of embryoid body single-cell RNA sequencing (scRNA-seq) data [1], a popular benchmark in the literature, and compared our results with nine other recent methods in the literature. We discuss the application and the results in the newly added Section 4.4, which will appear in future versions of the paper, and we report the related figure and table in the PDF attached to this response. We briefly summarize our experimental setting and results below.
Experimental setting (briefly): We follow the same data pre-processing as in [2,3]; in particular, we use the same processed artifacts of the embryoid data provided in their work, which contain the first 100 components of the principal component analysis (PCA) of the data, and, following [2,3], we focus on the first five. We train on $60\%$ of the data at each time step and test $\mathrm{JKOnet^*}$'s ability to predict the evolution of the left-out data.
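A generic sketch of this kind of preprocessing (PCA via SVD plus a 60/40 split, NumPy only; the synthetic matrix and variable names are placeholders, not the actual scRNA-seq pipeline of [2,3]):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))  # placeholder for the processed data matrix

# PCA via SVD of the centered data; keep the first k components.
k = 5
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:k].T

# 60/40 train/test split of the samples.
perm = rng.permutation(len(X_pca))
n_train = int(0.6 * len(X_pca))
train, test = X_pca[perm[:n_train]], X_pca[perm[n_train:]]
```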
Results: $\mathrm{JKOnet^*}$ outperforms all existing methods in the literature (see the results table in the attached PDF). Importantly, our training takes less than a minute (including the computation of optimal transport plans), whereas all other existing methods take hours to train.
We visualize the 2 principal components and the interpolations obtained with our method in Figure 5. Note that, for this application, we used a time-varying potential energy, of which we plot the level curves.
### Weakness 2
We believe our approach might underperform in the following two cases.
First, while diffusion processes include many real-world phenomena, in some real-world applications it might be unknown whether the particles are undergoing diffusion. If they are not, $\mathrm{JKOnet^*}$ might underperform. For instance, in the absence of noise and interaction energy, if the vector field is not the gradient of some potential energy function (e.g., it includes a solenoidal component $\nabla \times \psi$ for some function $\psi: \mathbb{R}^d \to \mathbb{R}^d$), we cannot expect $\mathrm{JKOnet^*}$ (or any other method learning a potential) to infer a reasonable potential. We prepared a dedicated section that we will include in the revision of the paper to discuss this failure mode.
Second, when learning the potential energy, the interaction energy, and the noise level simultaneously, there might be observability issues that prevent distinguishing the different components of the energy functional (e.g., a discrete-time population-level effect might be explained both by a potential energy and by a noise level).
While we did not experience this issue in our experiments in Section 4.3, we are not aware of rigorous guarantees that ensure observability. In a dedicated section in Appendix G, we provide a small-scale analytical example illustrating this issue. As $\mathrm{JKOnet^*}$ is the only method capable of simultaneously learning all three energy components, we believe it can serve as the baseline to investigate this observability issue.
### Weakness 3
We compared our method with others in our real-world application, discussed in 1) above and in Section 4.4 in the revised version of the paper.
### Question 1, 2
Please refer to our answer to Weakness 1) above.
### Question 3
In our experiments, we considered only vanilla parametrizations: two-layer MLPs with 64 neurons in each layer, to compare directly with other works in the literature. What is certainly required is a network expressive enough to approximate the energy functional of interest, so standard rationales apply for the choice of activation functions (we use softplus), the dimensions of the networks, etc. One limitation of our work is the data domain (we do not explore, e.g., images). We plan to do so in future work, and in that case more care will be needed to determine the most suitable architecture. Given that the same architecture and hyperparameters worked well across all the experiments (including the real-world experiment), we are confident that the learning algorithm itself is not particularly sensitive to the network architecture, which of course needs to be chosen so that it can represent the energy in the application of choice. We also believe that exciting future work can be done on understanding how different architectures capture potential, internal, and interaction energies more efficiently: can, e.g., transformers learn an interaction energy more efficiently than a vanilla MLP?
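For illustration, a minimal NumPy sketch of such a parameterization (two hidden layers of 64 softplus units producing a scalar energy; the helper names `init_mlp` and `energy` are ours, and the actual implementation may differ):

```python
import numpy as np

def softplus(z):
    # Numerically stable softplus: log(1 + exp(z)).
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0)

def init_mlp(d_in, width=64, seed=0):
    """Initialize a two-hidden-layer MLP mapping R^d_in -> R."""
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(0, 1 / np.sqrt(d_in), (d_in, width)),
        "b1": np.zeros(width),
        "W2": rng.normal(0, 1 / np.sqrt(width), (width, width)),
        "b2": np.zeros(width),
        "W3": rng.normal(0, 1 / np.sqrt(width), (width, 1)),
        "b3": np.zeros(1),
    }

def energy(params, x):
    """Scalar energy E(x) for a batch of points x of shape (batch, d_in)."""
    h = softplus(x @ params["W1"] + params["b1"])
    h = softplus(h @ params["W2"] + params["b2"])
    return (h @ params["W3"] + params["b3"]).squeeze(-1)
```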
[1] "Visualizing structure and transitions in high-dimensional biological data" by Kevin R Moon, David van Dijk, Zheng Wang, Scott Gigante, Daniel B Burkhardt, William S Chen, Kristina Yim, Antonia van den Elzen, Matthew J Hirn, Ronald R Coifman, et al. (2019)
[2] "Improving and generalizing flow-based generative models with minibatch optimal transport" by Alexander Tong, Nikolay Malkin, Guillaume Huguet, Yanlei Zhang, Jarrid Rector-Brooks, Kilian Fatras, Guy Wolf, and Yoshua Bengio. (2023)
[3] "Deep momentum multi-marginal Schrödinger bridge" by Tianrong Chen, Guan-Horng Liu, Molei Tao, and Evangelos A Theodorou. (2023)
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. These addressed my questions. I have raised my score. | Summary: This paper studies the problem of learning a diffusion process from samples. It proposes a new scheme based on learning the "causes mismatch" of the process, rather than the "effects mismatch" as in previous works. The new method is significantly more efficient than the schemes from prior works, and works well in practice.
Strengths: The paper is well-written, and the scheme proposed seems to work well in practice on the examples it was tested on. The loss is intuitive, and resembles the score-matching loss from diffusion models, but is the analogous version for arbitrary diffusion processes. Overall, this seems like a paper that people at NeurIPS would be interested in.
Weaknesses: I am not familiar enough with the literature, but it seems surprising to me that this scheme has never been proposed before. In particular, the loss is exactly the score-matching in the case of diffusion models, and there are works [1], [2] that have proposed a similar loss for arbitrary diffusion processes.
[1]: https://arxiv.org/abs/2208.09392
[2]: https://arxiv.org/abs/2209.05442
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Can you provide a more thorough comparison with prior literature, especially the works I have linked above?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We believe the problem of score-matching in diffusion models to be fundamentally different from the one in our paper.
In score-matching, one tries to "reverse" the time of a known diffusion process, e.g., to recover the uncorrupted state of a corrupted image.
In our setting, instead, we use observational population data to learn the energy functional underlying an unknown diffusion process. Our goal is not to "reverse" the time of a given diffusion process to reconstruct its initial condition but rather to learn an unknown diffusion process to perform forward-in-time predictions.
From a technical perspective, our methodology heavily relies on optimal transport theory and tools from optimization in the probability space. Indeed, our loss can be interpreted as the "error" in satisfying a first-order optimality condition in the Wasserstein space.
To the best of our knowledge, this approach has not previously appeared in the literature.
For instance, the loss function in reference [1] suggested by the reviewer is constructed using tools from stochastic differential equations: their loss function minimizes the errors in estimating the term $\nabla_x\log q_t(x)$, which appears when "reversing" the time of a (known) stochastic differential equation. For this reason, their loss cannot be reconciled with ours (in which the stochastic differential equation is unknown).
[1] "Soft diffusion: Score matching for general corruptions" by Daras, G., Delbracio, M., Talebi, H., Dimakis, A. G., \& Milanfar, P. (2022). | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and constructive feedback. Our main changes can be summarized as follows:
First, we applied our methodology to real data in single-cell diffusion dynamics and compared our results with nine existing methods, as requested by reviewer vNRb. In short, our model, $\mathrm{JKOnet^*}$, outperforms all existing methods both in terms of solution accuracy and training time. Remarkably, our training takes less than a minute, in contrast to the hours required by all other methods in the literature.
Second, we detailed the computation of optimal transport plans/maps, as requested by reviewers fc3q and vFxe. In a nutshell, $\mathrm{JKOnet^*}$ requires computing optimal transport plans only once, before the training. Conversely, $\mathrm{JKOnet}$ demands re-computing optimal transport plans at each training step. Additionally, we clarified that our numerical experiments rely on the linear programming formulation of optimal transport and included an ablation study to compare the linear programming formulation and Sinkhorn algorithm.
Third, we included a discussion of the failure modes, as requested by vNRb. In short, we envision $\mathrm{JKOnet^*}$ to underperform when the underlying process is not a diffusion process and when observability issues arise (which make the different energy components indistinguishable).
We included a discussion and examples to illustrate these two corner case phenomena (which, however, we did not experience in our experiments).
For a detailed response to each reviewer's question and concern, please refer to the responses below.
We believe that these changes both strengthen our contribution and improve the presentation of our results. We thank again the reviewers for the comments that helped us do so!
Pdf: /pdf/5a5863bd4d1f5957b8b6376b777f80ce6a5a64a6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Random Function Descent | Accept (poster) | Summary: The authors derive a novel gradient descent step schedule from a Bayesian point of view, establishing a connection between Bayesian optimization and classical optimization. The theory gives support to some commonly chosen step schedules and is validated on MNIST dataset.
Strengths: 1. The paper is well written, with clearly explained and carefully chosen notations. The figures are very pretty. It's a pleasure to read.
2. The paper has a good motivation. Worse-case theory, in general, can mislead people. Average-case studies are desired. The disparity between Bayesian optimization and classical optimization is quite obvious, and one can imagine there can be many optimization algorithms with mixed characteristics of both genres. The direction the paper explored is promising.
3. The research is very detailed and solid. The authors give sound proofs to their theorems and organise the results in a clear manner. The experiments are very extensive and well displayed.
Weaknesses: 1. The so-called "average case study" is not fully justified. The expectation of $J(w_n)$ is not in general equal to the expectation of $J(\theta)$ with $\theta$ fixed and then replaced by $w_n$. This is because $w_n$ is by itself a random variable. More concretely, suppose that $J$ is sampled randomly from $\mathcal{N}(\mu, C)$ with $\mu$ being a constant, say $\mu_0$. Then the expectation of $J(w_0)$ would be $\mu_0$ but the expectation of $J(w_n)$ for $n$ large would be much smaller than $\mu_0$. The method in this paper can only be thought of as average case study in the initial stage of optimization. The authors mention "forgetful" but I believe the problem is more serious than it looks. The authors also mention "risk-affine", but I don't necessarily agree with it. The claim "Since RFD is defined as the minimizer of an average instead of an upper bound – making it more risk affine" feels weak, because I don't think it's well justified yet that RFD is the minimizer of an average.
2. Incomplete story and lack of depth. Overall, there are lots of results but none of them are highlighted enough to be a gem. On the theory side, it's not clear whether there is any nontrivial key technical contribution in the proofs. It's not obvious that the derivation of the step schedule from a Bayesian viewpoint involves more than straightforward calculation. It needs more to stand as a strong theoretical paper. Furthermore, it would be better if there was a clear table presenting a convergence rate comparison of this new method and classical ones. On the empirical side, only MNIST is not enough, although the authors did a lot of experiments on MNIST. So as a new methodology paper, we need stronger empirical evidence. It's understood that the authors are studying a very hard problem, but excuses cannot serve as strengths of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: In the introduction, the authors mention that classic BO is limited to relatively small dimensions. Does RFD improve upon that?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes. Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, we are happy you found the paper a pleasure to read!
### Question (Complexity of RFD)
> The authors mention that classic BO is limited to relatively small dimensions.
Does RFD improve upon that?
While classical Bayesian optimization has computational complexity $O(n^3 d^3)$
where $n$ is the number of steps and $d$ the number of dimensions, RFD under the
isotropy assumption has the same computational complexity as gradient descent
(i.e. $O(nd)$) as per Theorem 4.2. So yes, it does.
### 1. Random input to J
We are aware of this and footnote 2 placed at Definition 2.2 of RFD points
to a discussion of this problem in appendix D.1.1. We have purposefully
relegated this discussion to the appendix because it appears to be confusing to
non-probabilists and this lie-to-children retains the intuition of this
approach. We agree that we only formalized the minimization of the average in
the initial step but would argue that this is still heuristically true later on.
Here is a sketch for why this is true:
We have formally proven the following statement for a different paper:
For any $n$, let $W_n$ be measurable with respect to
$\mathbf{f}(W_{0:n})=(\mathbf{f}(W_0),\dots, \mathbf{f}(W_{n-1}))$
(where $\mathbf{f} = (\mathbf{J},\nabla\mathbf{J})$ in our case),
then we can treat random inputs as deterministic in the following sense
$$
\mathbb{E}[\mathbf{f}(W_n)\mid \mathbf{f}(W_{0:n})]
= \Bigl(
w_{0:n+1}
\mapsto \mathbb{E}[\mathbf{f}(w_n) \mid \mathbf{f}(w_{0:n})]
\Bigr)(W_{0:n+1})
$$
where $w_{0:n} = (w_0,\dots, w_{n-1})$. The takeaway from this is that, if our
method were not forgetful, it would be valid to treat the inputs
as deterministic.
Now how do we treat forgetful methods? To build some intuition for this,
consider $W_0$ to be a random variable independent of $\mathbf{J}$ (note that
this is not allowed in the statement above because
$W_0$ has to be measurable w.r.t $\mathbf{f}(W_{0:n}) = \emptyset$). The
information $(\mathbf{J}(W_0), \nabla\mathbf{J}(W_0))$ is then almost useless to
us for forming a conditional expectation about $\mathbf{J}(w)$ because we do
not know where $W_0$ is (at least if the distribution of $W_0$ is sufficiently
spread out). To avoid a similar effect for RFD with proper consideration of
random variables, we would have to add at least $W_n$ to the condition of RFD
(we do not want to add the other $W_k$ because then we would essentially give
away all the previous gradients assuming a deterministic optimizer).
I.e. RFD would look like this
$$
W_{n+1} = \arg\min_w \mathbb{E}[
\mathbf{J}(w)\mid \mathbf{J}(W_n), \nabla\mathbf{J}(W_n), W_n
].
$$
But if we condition on $W_n$ and $W_0$ is deterministic, then the
conditional expectation will extract older gradient information from the current
position $W_n$ and therefore we do not have a truly forgetful method. The only
way to prohibit this attempt of the conditional expectation is to let $W_0$
be completely spread out (i.e. uniform on $\mathbb{R}^d$ which is not quite well
defined). But since the first optimization step only selects $W_1 - W_0$ as a
function of $\mathbf{J}(W_0), \nabla\mathbf{J}(W_0)$ and its distribution is
independent of $W_0$ due to the stationarity of $\mathbf{J}$, $W_1$ is still
distributed uniformly on $\mathbb{R}^d$ in the sense that the transition kernel
from $W_0$ to $W_1$ we defined above has the Lebesgue measure as an invariant
distribution. I.e. the distribution of $W_1$ would be independent of
$\mathbf{J}$.
So the only way we can prevent $W_n$ from being a proxy for the
previous gradients results in $W_n$ being independent of $\mathbf{J}$.
And this means that our heuristic definition of RFD in appendix D.1.1 captures
the forgetfulness we intend much better than any other way we can think of to
formulate it. It does capture the fact that the expectation of $\mathbf{J}(W_n)$
is lower than the expectation of $\mathbf{J}(W_0)$ because it conditions on
$\mathbf{J}(W_{n-1})$ which is already lower.
We also want to note that we respect the formal definition of D.1.1 in the
proofs of all our results.
### 2. Story
Containing numerous results can hardly be seen as a weakness. To make the story punchier we will use the keywords “viability” and “advantages” in the following to better group our results. Please also have a look at the general rebuttal.
To show the viability
- We prove that RFD reduces to gradient descent with very specific step sizes in Theorem 4.2 (this establishes computability and a scalable complexity; cf. Complexity of RFD above). Boiled down, this theorem rests on two deep insights: 1. the extreme sparsity of the covariance matrix of a single gradient, and 2. the splitting of the optimization over direction and step size. Extensions which generalize this result (appendix E, Theorem 6.2) could have been turned into one large theorem, but this would distract from the main ideas.
- We introduce a completely novel way of estimating the covariance in high dimension necessary to compute RFD step sizes.
To show the advantages of this approach
- we establish scale invariance (Theorem 2.3), respecting the random input (!)
- we show how the step size schedule of RFD explains existing step size heuristics like warmup at the beginning, which is something a convexity framework could never deliver because the convexity assumption is false at the beginning and only true asymptotically. Without our insight of Theorem 4.2 that step sizes can be analytically obtained, this step size analysis could not have taken place and the analysis also contains novel insights on how to understand the results of Theorem 4.2 asymptotically.
We believe that NeurIPS papers should contribute novel background understanding with practical benefits. That is why we included a standard non-trivial empirical example. We are currently conducting an extensive independent empirical study, which we believe would have exceeded the scope of an already very dense paper that pushes many results to the appendix.
---
Rebuttal Comment 1.1:
Title: Well written rebuttal
Comment: I thank the authors for their courteous, detailed, and informative rebuttal. I see that forgetfulness is somewhat necessary. Still, I'm not 100% convinced that forgetfulness is fully able to capture the complicated reality of optimization. But optimization is itself a very harsh subject to study theoretically. I'm fully aware not all of us have the capabilities to do extensive empirical studies, but ideally speaking, a paper on novel methodologies is best supported by extensive empirical results so that we can have an objective view of the true potential of the new methodology. My experience in machine learning tells me that theories haven't been very good at predicting new architectures and new methodologies, because the full complexity of reality is just hard to capture in a finite set of mathematical assumptions. Therefore, I shall stick to my previous judgement. | Summary: The current paper studies random function descent, draws a connection between RFD and SGD, and derives an adaptive step size scheduler. More specifically, the authors study minimizing a stochastic first-order Taylor approximation of random functions, which takes a form similar to gradient descent when the random function is a Gaussian process. This connection also hints at a step size scheduler for the standard GD method. The authors then explore this step size scheduling scheme and study its asymptotic performance, which helps explain some recent step size scheduling tricks such as gradient clipping and warmup. Finally, the authors propose a practical way to evaluate the statistics required by the newly found step size scheduler from current ML mini-batch losses. The authors show simulation results on MNIST data to exemplify the effectiveness of the derived step size scheduler.
Strengths: This paper is very well-written, theoretically sound, and the findings seem new and pretty insightful, thus I feel it makes good contribution to research on optimizer learning rate scheduling.
The main topic is random function descent, and the fact that minimizing the stochastic first-order Taylor approximation of a random function results in a gradient-type method (when a Gaussian random function is considered) is surprising and impressive.
The writing is well-organized, with all terms being properly defined and all theorems (Theorem 2.3, 4.2, 5.2, 6.2) well-formulated and capture core ideas. Theorems that are more representative is presented in main text for better digestion with more complete/general versions listed in Appendix. Theorems and Definitions are followed by simple and efficient explanations (i.e., discussion after Definition 2.1, Definition 2.2, around Theorem 2.3, and many others). Plots and tables are provided and are clean and easy to interpret.
The math is clean, sound, and rigorous, with very complete proofs (i.e., D.1.1 and D.1.2). Extensions are well-explored (Section E) and more general cases are discussed (Section E.3 for example). From first derivation of step size (Theorem 4.2), to its asymptotic version A-RFD (Definition 5.2) and its stochastic version S-RFD (Theorem 6.2), all are interesting and important findings.
Practicality of the proposed method has been considered. Though the proposed step size scheduler looks complicated, the authors figure out ways to evaluate necessary statistics required to put the step size scheduler into use (Section 6), and the effectiveness of proposed method applied to current ML tasks is also exemplified by examples (Section 7).
The research topic is valuable. Learning rate scheduling has been an open research area for a long time in optimization field. Currently in machine learning/deep learning research, a great deal of pressure comes from comparing with baseline methods which involves arduous hyperparameter tuning, among which learning rate is often the core. Thus studying learning rate scheduling is of great importance and this paper provides a novel connection between RFD and GD (with also extended comparison to Adam in Section E.1) which is very encouraging. Moreover, classical convergence result for optimization algorithms are mainly with worst case bound, RFD is instead for average case performance, the authors try hard and derive partial result for convergence (Corollary 5.3), and we expect there would be more study of difference between worst case performance and average case performance.
Weaknesses: Though I appreciate the presentation quality, theoretical soundness, and novelty of the work, the main drawbacks of the current paper boil down to three parts: lacking comparison with prior work, potential concerns with the practicality and effectiveness of the proposed method, and the (relatively) strict assumptions of the theory.
1. The current paper doesn't include a literature review section; though it draws connections to prior work dispersedly, no systematic review has been attempted. I currently make my evaluation of the novelty of the work based on my own (possibly poor) understanding. I feel adding a related work section is desirable, so that a fairer evaluation of the value of the current work can be made.
2. Still about prior work, but for baseline method comparison. The simulation results (mainly Figure 3) only compare the proposed method with SGD/Adam with tuned fixed learning rates. More recent work such as D-Adaptation [1], which also studies tuning-free learning rate scheduling for SGD/Adam (not from an RFD perspective but from a more classical optimization angle), hasn't been mentioned or compared against. Moreover, the experiments in the current paper seem much simpler and less thorough than the setting considered in D-Adaptation.
3. With respect to practicality, though the authors provide empirical ways to evaluate the covariance in mini-batch training, the recipe still looks a bit complex, i.e., one must first evaluate $C$ and $C'$ from the observations. This is unlike current adaptive gradient methods such as Adam/AdamW, or even D-Adaptation, which only depend on statistics involving current/past gradient/function values. Moreover, since RFD measures average-case performance, it is more risk-affine and tends to predict larger learning rates, which may be harmful for convergence in some cases.
4. Despite feeling that minimizing a stochastic Taylor approximation of a random function is interesting and worth exploring, the derived GD-type algorithm is for Gaussian random functions (Theorem 4.2); though the authors mention this assumption was also used in [2], it would be desirable to demonstrate further to what extent one should expect this assumption to be close to real settings.
[1] Learning-Rate-Free Learning by D-Adaptation (Aaron Defazio and Konstantin Mishchenko).
[2] Yann N Dauphin et al. “Identifying and Attacking the Saddle Point Problem in High Dimensional Non-Convex Optimization”.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Could the author please add a literature review section to discuss related works (and probably comparisons with current work)?
2. The derived explicit RFD is for Gaussian random functions, and I see that there is some relaxation in Section E.3. Could the authors please elaborate on to what extent this RFD method is close to the real problem settings confronted in machine learning?
3. I feel the part that discusses the connection to Adam (Section E.1) could be partly moved to the main text, since Adam and its variants are pretty dominant in current ML (especially DL) training. Moreover, do the authors think RFD with component-wise estimated variance can match Adam's performance with tuned learning rates in DL training?
4. In line 300, it says "on CIFAR-100, the step sizes given by RFD were too large". I don't see these experiment results, could the authors please add this part of result for completeness (even if the result is not ideal).
5. How do the authors expect the performance of the proposed method to compare with D-Adaptation, and will they coincide in certain settings? It seems D-Adaptation is applicable to more general (larger model/more recent dataset) cases, while RFD is more limited since it involves more steps for variance estimation and its risk-affine property might be harmful.
(potential) writing issues:
1. In line 244, it seems "Since all $Z_b$ have the same underlying of cost $J$" should be "Since all $Z_b$ have the same underlying cost $J$"?
2. In line 271, there seems missing a comma between loss $J$ and stochastic errors $\epsilon_i$
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations have been discussed in Section 8.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for taking the time to review our paper so thoroughly, even
taking the time to read the appendix. We are glad you found it insightful.
### Related work/Literature review
In previous drafts the paragraphs on related work in the introduction were more
extensive but were cut because proofreaders argued that the relation was
unclear and did not contribute to an understanding of the main work.
A good compromise might be an extensive discussion at the beginning of the
appendix that we are going to add. We personally feel that Bayesian Optimization is closely related, but
as you can see in the review by 1by3 this is not uncontroversial
either. So the reason that there is not an extensive discussion of related work
is that our approach is fairly novel and unique.
Nevertheless, we would be happy to expand a little on the relation to Bayesian
optimization. The method most similar to RFD is perhaps expected improvement:
$$
x_{n+1}
= \arg\max_{x} \mathbb{E}\Bigl[
\max \bigl\{ 0, \min_{i=0,\dots,n}\mathbf{J}(x_i)-\mathbf{J}(x) \bigr\}
\mid
\mathbf{J}(x_0), \dots, \mathbf{J}(x_n)
\Bigr]
$$
If the rectifier and the running minimizer were not there, this would look similar to a non-greedy version of RFD (without gradients). There have also
been papers which utilize gradients [33,53,42,37]. But none utilize the
entire sparsity one gets by conditioning only on one point. Consequently, none
of the previous work in BO was applied to truly high dimensional optimization
and there has never been an analysis of the resulting step size before. Neither
has there been any consideration for estimating covariance models in high
dimension. At least we are not aware of any prior work.
### Gaussian assumption
While we provided a relaxation in E.3 we believe that the Gaussian assumption is
fairly accurate (cf. Figure 4) and does not necessarily need to be attacked (in
contrast to the isotropy assumption). But only time will tell and we wanted to
point out with E.3 that the Gaussian assumption could be circumvented if
necessary.
### Connection to Adaptive step sizes (Adam) E.1
As noted in E.1, there is a small disconnect between the entry-wise scaling
resulting from geometric anisotropies and the scaling suggested by Adam/RMSProp
(which apply a square root). Further research is needed to find out where this
disconnect is coming from, so we held off from putting it into the main body.
Nevertheless we agree that it is a very interesting connection.
When it comes to competitiveness, we believe it is still necessary to
discover the equivalent of momentum in the framework of random function
optimization. That is still missing even with component-wise step sizes.
### D-adaptation
The classical optimization angle has been to assume convexity (which is also the
assumption D-adaptation makes prominently). With these classical origins, the theory behind D-adaptation seems to be relatively unrelated to the theory
behind RFD, so a direct comparison is difficult. We could draw empirical
comparisons, but the empirical part of this paper was only intended to be a
proof of concept, with an extensive empirical paper currently in progress.
D-adaptation is also not as accessible as a pre-implemented pytorch optimizer
and since it also only aims to match the performance of the tuned optimizers the
benefits of a comparison seem limited to us.
While D-adaptation seems to be more advanced at the moment (with more extensive
benchmarks and more reliable performance), it is built on the untenable
framework of convex optimization. It can only ever explain asymptotic
performance hoping that the optimizer eventually enters a locally convex area.
Convex optimization theory will never be able to explain initial behavior where
convexity is simply not true, i.e. it will never be able to explain things
like warmup. This shows the strength of our approach for the long term. Our
approach simply doesn't have the maturity of the conventional convexity
assumption yet.
Nevertheless, it addresses the same issue and should therefore be discussed in
the Literature review. Thank you for pointing this out to us.
### Practicality
In the future it might be possible to estimate covariances in an online
fashion. Then it would not be necessary to spend time fitting the covariance at the
beginning. But the time it takes to fit the covariance is already relatively
small (i.e. only one epoch's worth of samples).
### Cifar-100 experiment
There is not too much to learn from the experiment on Cifar-100. The step sizes
are so large that no convergence happens at all, which means that the problem
is unlikely to be an instability of the RFD algorithm and more likely that the
distributional assumption is simply violated.
The reason is apparently not the dataset itself but rather the
ResNet architecture - a different model exhibited good convergence in a later
experiment. We believe that it will be necessary to attack the distributional
isotropy assumption in order to get reliable performance in applications. E.g.
in Section F.1 we show how isotropy breaks down on linear models and point
towards the generalization to linear isotropy. We plan to fully investigate
empirical performance in a future paper to get a better idea of what
generalizations are actually necessary. We will add a preview to the
appendix.
### Writing issues
Thank you for catching these! They are now fixed.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply which addresses most of my questions. Since my evaluation is already pretty positive, I'll stick to it. | Summary: Many machine learning models have parameters that are optimized by some form of gradient descent. Given parameters $\omega$ in a space $\Omega$ and a loss function $\textbf{J}: \Omega \to \mathbb{R}$, typical gradient descent proceeds by picking a starting point $\omega_0$ and iteratively taking steps in the direction of steepest descent
$$
\omega_{n+1} = \omega_n - h \nabla J(\omega_n) = \omega_n - \eta \frac{\nabla J(\omega_n)}{||\nabla J(\omega_n)||}
$$
where $h$ is the learning rate and $\eta = h ||\nabla J(\omega_n)||$ is the step size. The learning rate/step size is an exogenous, pre-determined user hyperparameter.
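As a toy sketch (my own illustration, not from the paper) of the two parameterizations on a simple quadratic:

```python
def grad(w):
    # toy quadratic J(w) = 0.5 * (w - 3)**2, so grad J(w) = w - 3
    return w - 3.0

def step_learning_rate(w, h):
    # classic update with learning rate h: w - h * grad J(w)
    return w - h * grad(w)

def step_normalized(w, eta):
    # normalized update: move a fixed distance eta in the descent direction
    g = grad(w)
    return w - eta * g / abs(g) if g != 0.0 else w

w = 0.0
for _ in range(200):
    w = step_learning_rate(w, 0.1)
# w is now very close to the minimizer 3.0
```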
This paper proposes a method to automatically determine the step size parameter. (I may have misunderstood and I welcome any correction by the authors.) This method, called Random Function Descent (RFD), takes a point $\omega$ and computes the function value and gradient $J(\omega)$, $\nabla J(\omega)$, which are then used to fit a Gaussian process model. The GP model has a constant prior mean and a stationary, isotropic kernel. By fitting one data point and its gradient vector, the constant prior mean is updated to a still mostly constant surface, however with a single local deformation at $\omega$, resulting in a peak in the uphill direction from $\omega$ and a trough on the directly opposite downhill side. The RFD method jumps straight to the bottom of the trough; mathematically,
$$
\omega_{n+1} = \text{arg min}_{\omega'} \mathbb{E}[J(\omega') | J(\omega), \nabla J(\omega) ]
$$
where the expectation is the posterior mean of the GP having been fit to the one data point. As the GP kernel is isotropic, there is no prior bias in any direction and the direction of the trough is exactly the direction of the gradient, consistent with normal gradient descent.
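To check my understanding, a one-dimensional toy sketch (my own code, assuming a unit-length-scale squared-exponential kernel and zero prior mean; not the paper's implementation) of the posterior mean after conditioning on one value and gradient, and the jump to its minimizer:

```python
import math

def posterior_mean(x, x0, j0, g0, mu=0.0):
    # GP prior: mean mu, squared-exponential kernel k(x, y) = exp(-(x-y)^2/2).
    # Conditioned on J(x0) = j0 and J'(x0) = g0; at x0 the value and gradient
    # observations are uncorrelated with unit variance, giving this closed form.
    r = x - x0
    k = math.exp(-r * r / 2.0)
    return mu + k * ((j0 - mu) + r * g0)

# one RFD-style step: jump straight to the minimizer of the posterior mean
x0, j0, g0 = 0.0, 1.0, 2.0          # observed value 1 and positive slope 2
grid = [i / 1000.0 for i in range(-4000, 4001)]
x1 = min(grid, key=lambda x: posterior_mean(x, x0, j0, g0))
# x1 lies downhill of x0 (in the negative direction, since the slope is positive)
```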
The paper considers many of the technical and theoretical hurdles and provides solutions in each case. Finally experiments with MNIST are provided.
I somewhat struggled with the paper and have set my confidence score to low accordingly.
Strengths: - tuning the baselines in the numerical experiments
Weaknesses: Unfortunately for me, I struggled to understand much of the paper. I believe this could partially be due to writing style; I have tried to keep my technical and writing comments separate.
I apologize if my understanding is incorrect, and look forward to the authors response to correct any such errors.
- all the parameter updates use Euclidean distance in parameter space. In contrast, natural gradient descent makes parameter updates that have equal distance in output distribution space. In practice, I believe an approximation is implemented by using inverse squared gradients for each parameter, similar to ADAM/RMSprop which use root mean squared gradients. Obviously,
- the numerical experiments seem a little lacking; RFD doesn't appear to show a significant improvement in Figures 3, 6, 7. MNIST and FashionMNIST are very small and perhaps too easy; any optimizer will "max out" any model pretty quickly, I assume.
The below points are my personal subjective comments on the writing.
- I am a little reluctant to agree that this paper has much to do with Bayesian optimisation as suggested by the abstract and introduction. RFD fits a GP model to a single data point and only uses the posterior mean, it is the same as kernel ridge regression.
- I felt the terminology of "stochastic Taylor Expansion" was rather unhelpful and somewhat counterproductive. In my mind, zeroth/first/second order Taylor expansion refer to constant/linear/quadratic local polynomial approximations to a function, however the given function approximations are non-linear (lemma 4.12) this description unfortunately rather mis-directed my thoughts.
- L69: as above, "it naturally incorporates covariance based trust" assumes a lot of context that has not been introduced in the paper at this point, upon first reading I was rather lost, upon second reading it makes sense but felt out of place.
- (there are many topics and details covered the main paper, would it be possible to focus on a few big ideas?)
- Table 1, Figure 2, what is the scale "s", I assume the length scale in the covariance $C()$ function? This appears not to be introduced in the paper.
- L62, should the final term of the equation be $\frac{L}{2}||\omega - \Theta||^2$?
Technical Quality: 3
Clarity: 2
Questions for Authors: - is it possible to extend RFD to use individual rescaling for each parameter whilst keeping the isotropy assumption? Sacrificing the isotropy assumption would require fitting a GP model to $1 + d$ values, which has $O(d^3)$ complexity and hence would be impossible for network models. Preserving isotropy avoids this issue.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: - the assumption of isotropy in parameter space
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We understand that it is
a difficult paper to read due to its unconventional approach, which takes some
time to get used to. We took your thoughts into account and will improve the paper (see the general rebuttal); the paper is essentially a theory paper that should nevertheless be accessible to and convince practitioners.
## Question (Isotropy and rescaling individual parameters)
The anisotropy you have in mind is probably a "geometric anisotropy" which we
cover in its entirety in Appendix E.1.
This geometric anisotropy still assumes that the covariance is of the form
$$
Cov(\mathbf{J}(w_1), \mathbf{J}(w_2))
= C\Bigl(\frac{\langle w_1 - w_2, \Sigma (w_1-w_2)\rangle}2\Bigr)
= C\Bigl(\frac{\\|w_1 - w_2\\|_\Sigma^2}2\Bigr)
$$
and results in an update proportional to $\Sigma^{-1}\nabla\mathbf{J}(w)$.
If you want to rescale individual parameters only, then you implicitly assume
the geometric anisotropy $\Sigma$ to be diagonal. The $O(d^3)$ cost on
the other hand comes from the inversion of a dense matrix $\Sigma$.
But note that the covariance is still stationary, i.e. of the form $f(w_1 -
w_2)$ for some $f$. We believe that the necessary generalization beyond isotropy
needs to remove this stationarity because it is already violated for a
simple linear model (cf. appendix F.1). We suggest a generalization to linear
isotropy (cf. appendix F) instead. So when we speak of removing the isotropy
assumption in the conclusion we are generally talking about generalizations in
this direction beyond geometric anisotropies.
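As an illustrative sketch (ours, not part of the paper's implementation) of why a diagonal $\Sigma$ reduces the update direction $\Sigma^{-1}\nabla\mathbf{J}(w)$ to a per-coordinate rescaling at $O(d)$ cost:

```python
def aniso_direction(grad, sigma_diag):
    # update direction Sigma^{-1} grad for a *diagonal* geometric anisotropy:
    # just a per-coordinate rescaling, O(d), instead of inverting a dense
    # d x d matrix Sigma at O(d^3) cost
    return [g / s for g, s in zip(grad, sigma_diag)]
```

With all entries of `sigma_diag` equal to one, this reduces to the plain (isotropic) gradient direction.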
## Addressing points in weaknesses
### Euclidean distance and natural gradient descent
The isotropy assumption on the covariance implies
$$
Cov(\mathbf{J}(w_1), \mathbf{J}(w_2))
=C\Bigl(\frac{\\|w_1-w_2\\|^2}2\Bigr)
$$
for some covariance kernel $C$, where we took the norm to be the Euclidean norm.
What RFD would look like if this norm were replaced by a distributional distance is
a question for future research. Perhaps the RFD updates would then look like natural
gradient descent with a specific step size. But we have not investigated this
avenue so far. We are also unfamiliar with the relation of natural gradient
### Numerical experiments
The aim of the project was never to outperform existing *tuned* optimizers. Given
that RFD is simply a form of gradient descent this would have been a tall order.
The aim was to understand the link between optimization theory and optimization
practice in ML and (ideally) to perform the *tuning* statistically (i.e.
removing the need for expensive step size tuning). But we did not only find a
tuned step size, we found an entire step size schedule! And SGD with a tuned
step size schedule can be more performant than SGD with a constant step size;
this is why RFD slightly outperforms SGD in the MNIST case study. But we would
never expect it to outperform SGD by much, given that it is still gradient
descent after all.
If you compared RFD to the *untuned* optimizer the advantage is much more visible
and in the end the performance advantage of RFD lies in the fact that tuning
is much less expensive.
### RFD relation to Bayesian optimization
Given that
$$
\alpha(\theta)
= \mathbb{E}[\mathbf{J}(\theta) \mid \mathbf{J}(w_n), \nabla\mathbf{J}(w_n)]
$$
constitutes an acquisition function, i.e. $w_{n+1} =
\arg\min_{\theta}\alpha(\theta)$, of which there are many in Bayesian
optimization (e.g. expected improvement, probability of improvement, etc.), we
believe that RFD could reasonably be interpreted as BO with a specific
acquisition function. If you define BO more generally as the research
field interested in the "optimization of random functions", then our approach
is part of this field anyway.
### Terminology "stochastic Taylor Expansion"
We are sorry this terminology caused confusion. We intended to capture our
original intuition with this terminology. The Taylor approximation *uses* the
derivatives at one point up to order $n$ to approximate
a function. Similarly, the "stochastic Taylor approximation" also *uses* the first $n$
derivatives at a single point to define a function approximation. But the
conditional expectation is the "best" approximation, so in this sense it is
better than the Taylor approximation.
We will add a few lines to ward off against this confusion but believe that the
paper would lose insight if we cut this description completely. It's valuable that you pointed us towards the need of more explanations.
### "naturally incorporates covariance based trust"
Since we introduced $L$-smoothness based trust in L66 and following, we hoped
that the mean-reversion in conjunction with Figure 1 would sufficiently explain
our notion of "covariance-based trust". Which ideas following this note helped
to make more sense of it during the second reading? We would be happy to
mention them earlier.
### Focus
We have addressed this point in the joint rebuttal. Thank you for pointing this out!
### Table 1, Figure 2
Yes, "s" is the length scale. We link the equations defining the covariance
models in the table heading. We unfortunately did not have enough space to
define the covariance models in the main body. We will devote more space to
this in the final version.
### L62 correction
Yes, this was indeed wrong, it should be $\frac{L}2\\|w-\theta\\|^2$. Thank you! This is now fixed.
Thank you for taking the time to read our answers, we hope
they were helpful.
---
Rebuttal 2:
Title: Thank you for the considerate response
Comment: Thank you for accommodating my rough understanding.
Looking at other reviews, I have raised my score. | Summary: ### Summary
The paper "Random Function Descent" explores the limitations of classical worst-case optimization theory in explaining the success of optimization in machine learning and selecting appropriate step sizes. It establishes a connection between Bayesian Optimization and classical optimization through a "stochastic Taylor approximation," rediscovering gradient descent. This rediscovery introduces a new step size schedule called Random Function Descent (RFD), which is scale-invariant. The analysis provides a theoretical foundation for common step size heuristics such as gradient clipping and gradual learning rate warmup. The paper also proposes a statistical procedure for estimating the RFD step size schedule and validates this theory with a case study on the MNIST dataset.
In the introduction, the paper emphasizes the importance of cost function minimization in machine learning, typically performed using gradient-based methods that require step sizes chosen by established heuristics. The paper aims to enhance the theoretical understanding of these heuristics and proposes RFD as a new algorithm based on this deeper insight. The authors highlight that classical optimization theory, which relies on \(L\)-smoothness, provides conservative learning rates unsuitable for average cases, necessitating the reliance on step size heuristics in machine learning.
The authors bridge the gap between Bayesian Optimization (BO) and gradient-based methods by introducing a stochastic Taylor approximation based on a forgetful BO posterior. This results in the RFD optimization method, which combines the properties of gradient descent with scale invariance and a complete step size schedule derived from BO. The contributions include proving the scale invariance of RFD, discussing common distributional assumptions in BO, establishing the connection between RFD and gradient descent, and investigating the step size schedule suggested by RFD.
The paper further develops a non-parametric variance estimation method robust to covariance kernel choices and extends RFD to mini-batch losses. The case study on the MNIST dataset demonstrates the practical application and effectiveness of the proposed RFD algorithm compared to traditional methods like Adam and stochastic gradient descent (SGD). The discussion includes limitations and potential extensions of the proposed method, emphasizing the need for new mathematical theory to address the risk-affine nature of RFD and its larger step sizes.
Strengths: ### Strengths
1. **Innovative Approach**: The paper introduces a novel connection between Bayesian Optimization and gradient descent through the stochastic Taylor approximation, leading to the development of Random Function Descent (RFD). This approach provides a new perspective on step size selection and optimization in machine learning.
2. **Theoretical Foundation**: The analysis of RFD step sizes offers a solid theoretical foundation for commonly used heuristics such as gradient clipping and learning rate warmup. This bridges the gap between empirical practices and theoretical understanding.
3. **Scale Invariance**: RFD's scale invariance is a significant advantage, making it robust to different scales of input parameters and cost functions. This property is stronger than the affine invariance offered by the Newton method.
4. **Practical Validation**: The statistical procedure for estimating the RFD step size schedule and its validation on the MNIST dataset demonstrate the practical applicability and effectiveness of the proposed method. The case study shows that RFD can outperform traditional optimization methods like Adam and SGD.
5. **Comprehensive Analysis**: The paper provides a thorough investigation of the step size schedule suggested by RFD, including explicit formulas, asymptotic behavior, and explanations for gradient clipping and learning rate warmup. This comprehensive analysis enhances the understanding of RFD's behavior and potential benefits.
Weaknesses: ### Weaknesses
1. **Complexity and Accessibility**: The theoretical development and mathematical derivations in the paper are complex, which might limit the accessibility and understanding for practitioners who are not well-versed in advanced optimization theory and Bayesian methods.
2. **Assumptions and Simplifications**: The paper relies on certain assumptions, such as isotropic Gaussian random functions, which might not hold in all practical scenarios. The need for these assumptions could limit the generalizability of the proposed method.
3. **Risk-Affine Nature**: RFD's risk-affine nature, resulting in comparatively larger step sizes, might lead to instability in certain cases. The paper acknowledges this limitation and suggests that further work is needed to address this issue and develop new mathematical theories for convergence guarantees.
4. **Empirical Validation Scope**: While the MNIST case study is a valuable demonstration, the empirical validation is limited to a single dataset and a specific neural network architecture. Additional experiments on diverse datasets and models would strengthen the evidence for RFD's effectiveness.
5. **Variance Estimation Procedure**: The non-parametric variance estimation method, while robust, involves a bootstrapping procedure that could be computationally intensive. This might pose challenges for large-scale applications and require further optimization for practical use.
Technical Quality: 4
Clarity: 3
Questions for Authors: My main issue with this paper is understanding its final message. It feels more like a collection of relevant results rather than a cohesive argument, and I would appreciate a comment on this. If you provided two paragraphs explaining what you proved, why it is important, and what you aim to prove in the future, I still could not grasp the overall vision of the paper.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work, we are glad you find it
innovative!
## Question (final message and vision)
Most optimisers in ML have their roots in convex function optimisation, yet convexity is far from being satisfied in reality. Modifications and tricks work well, but not much can be explained by the convex roots of the optimisers. We suggest a new point of view, replacing convex functions by random functions with a very natural statistical way of optimising.
The main achievement of this paper is to demonstrate the **viability** (abstract theory gives something reasonable that can be implemented) and
**advantages** of replacing the "convex function" framework
with the "random function" framework. Theorem 4.2 is the main result
establishing **viability** assuming a given covariance model. Section 6
is concerned with obtaining this covariance model and viability is
demonstrated with a practical example in the MNIST case study (Section 7).
We showcase the **advantages** of this approach with scale invariance
(Theorem 2.3) and an explanation for step size heuristics such
as warmup (cf. section 5.2). This explanation of the initial stage of
optimization could never be delivered by the convex framework, because the
convexity assumption is not fulfilled initially so it can at best explain
asymptotic behavior.
We envision the following improvements to RFD in the future:
1. The *reliability* of RFD can be improved by generalizing
the distributional assumptions to cover more real world scenarios. In particular
we are interested in the generalization to linear isotropy because
we suspect that regularization such as weight and batch normalization [43, 23]
are used to patch violations of isotropy (cf. Section F).
2. The *performance* of RFD can also be improved. Since RFD
is forgetful while momentum methods retain some information, it is likely
fruitful to relax the full
forgetfulness. Furthermore, we suspect that adaptive learning rates [e.g. 12,
27], such as those used by Adam, can be incorporated with geometric anisotropies
(cf. Sec. E.1). Performance could also be further improved by estimating
the covariance (locally) online instead of globally at the start. Finally, the
implementation itself can be made more performant.
In the main rebuttal we outline how we will add these clarifications to the paper with very little changes.
## Addressing criticism
### 1. Complexity and Accessibility, 2. Assumptions and Simplifications
Writing this paper was a big challenge - we spent a lot (!) of time
trying to bridge the two points you address. Our approach is mathematically
complex and hard to access because it requires merging a number of ideas from
different domains. That is why we used rather strong assumptions and
simplifications in the main body and addressed extensions only in the appendix
(cf. Section E, e.g. a generalization to geometric anisotropies in E.1). We are
also working on a number of different generalizations we did not want to be part
of this article yet, which, as you quite rightly say, is already complex. The
current version is the best compromise we could come up with.
While we cannot reduce the Maths, we can encourage you again to check out the
general rebuttal on how we use your valuable feedback to make the article even more accessible.
### 3. Risk-Affine Nature
True! And we already suggest a way to modify the risk aversion: Confidence intervals
(cf. Section E.2).
### 4. Empirical Validation Scope
We totally agree: at the end of the day, every new algorithm must be validated on
practical examples. When writing this article we already started working with
empirical researchers on a systematic empirical study. As you mentioned above,
our article is already complicated so we decided to focus on one standard
non-trivial example for this article to support the theory. We believe the choice was right when
accompanied by the upcoming article. The catch so far is that sometimes RFD works
perfectly, beating Adam with tuned parameters, sometimes it does not. This is
not very surprising as the algorithm relies on structural assumptions which are
not always satisfied (cf. Section F.1). In general Batch Normalization seems to
be quite important for RFD (with the isotropy assumption) to work.
### 5. Cost of Variance Estimation Procedure
In our MNIST case study the cost of the bootstrapping procedure is already
completely overshadowed by the sampling cost (i.e. evaluating the loss and
gradient for various batch sizes and parameters).
Let us demonstrate this irrelevance by some back of the envelope calculations:
If we had sampled MNIST purely with batch size 30, then one epoch would be 2000
samples and the size of the generalized linear regression matrix $X$ is
therefore roughly $2000\times 2$ (in reality we have fewer samples because we
also sample at larger batch sizes). More generally we have a regression matrix
of size $n \times 2$. The generalized linear regression then calculates
$$
(X^T C^{-1} X)^{-1}X^T C^{-1}Y
$$
where $C$ is a diagonal matrix and $X^T C^{-1}X$ a $2\times 2$ matrix. The cost
is therefore of the order $O(n)$. The bootstrapping procedure typically finished
in 10-20 iterations. So overall we have perhaps 20 times a problem of
computational complexity $O(n)$ where $n$ (in our case less than $2000$) is
generally vastly smaller than the parameter dimension (in our case $\sim 2$
million). Evaluating a single gradient is probably more expensive.
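For concreteness, a minimal pure-Python sketch (illustrative only, not our actual implementation) of that generalized least squares computation for a diagonal $C$ and a two-column regression matrix, which is visibly $O(n)$:

```python
def gls_diagonal(X, Y, c):
    # beta = (X^T C^-1 X)^-1 X^T C^-1 Y for diagonal C = diag(c) and a
    # two-column X: accumulate the 2x2 matrix X^T C^-1 X and the 2-vector
    # X^T C^-1 Y in a single O(n) pass, then invert the 2x2 matrix in
    # closed form
    a = b = d = u = v = 0.0
    for (x1, x2), y, ci in zip(X, Y, c):
        w = 1.0 / ci
        a += w * x1 * x1
        b += w * x1 * x2
        d += w * x2 * x2
        u += w * x1 * y
        v += w * x2 * y
    det = a * d - b * b
    return ((d * u - b * v) / det, (a * v - b * u) / det)
```

On an exactly linear toy problem (intercept 2, slope 3) with unit variances, the estimator recovers the coefficients exactly.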
Overall we highly doubt this will ever have significant computational overhead. In fact we would
actually expect the number of samples $n$ needed to remain relatively constant
independent of the parameter dimension (because we are simply estimating a
covariance with random variables here). So if we had to guess, we would guess
that the bootstrapping cost is less and less relevant the larger the model.
Thank you again for your constructive criticism! | Rebuttal 1:
Rebuttal: We warmly thank all our reviewers for their interest, time, insight and constructive criticism.
Writing the paper was a challenge as it requires familiarity with different
mathematical concepts but should be accessible for practitioners at the same
time. Our approach was to simplify the main text as much as possible and provide generalizations in the appendix. And since it is impossible to fit all
backgrounds equally we see different opinions about the writing style in the
reviews. We believe this is normal for a paper of this kind. Introducing new
theory is often harder than presenting improvements to a classical technique,
especially if it should be accessible to empirical researchers as well.
Taking into account your feedback we decided to better present the focus of the
paper. In the present version the main message seems to get a bit lost in the
details. We cannot move even more concepts into the appendix, but we can better focus
the reader’s attention. With very little change to the
paper, this can be done by slightly changing the abstract, outline, conclusion, and theorem titles.
## A list of minor changes to improve the story line in the final version
Motivated by reviewers XTWd's suggestion to summarize the overall vision in a
short paragraph, we will add the following paragraph to the outline (L45):
> The main goal of this paper is to demonstrate the **viability** and
**advantages** of replacing the classical "convex function" framework
with a "random function" framework. Theorem 4.2 is the main theoretical result
establishing **viability** (computability and scalable complexity) for a given covariance model. Section 6
is concerned with practical estimation of the covariance model and viability is
demonstrated with a practical example in the MNIST case study (Section 7).
The **advantages** of this approach are scale invariance (Advantage 2.3) and an explicit step size schedule, which does not require expensive tuning, and explains existing ML heuristics such as warmup (cf. Section 5.2). This explanation of the initial stage of
optimization could never be delivered by the convex framework, because the
convexity assumption is not fulfilled initially so it can at best explain
asymptotic behavior.
To sharpen the focus throughout the text we plan to rename
- "Theorem 2.3 (Scale invariance)" into "Advantage 2.3 (Scale invariance)"
- "Theorem 5.2" into "Proposition 5.2"
- "Theorem 6.2" into "Extension 6.2" grouping it together with the extensions
of appendix E which are all modifications of Theorem 4.2
Theorem 4.2 is then left as the only remaining theorem in the main body, which
should focus the reader's attention on it as the key result enabling this
approach. To better explain its significance we will add the following remark
explaining its scalable complexity
> ### Remark (Scalable Complexity)
> While Bayesian optimization with gradients typically has computational complexity $O(n^3
d^3)$, where $n$ is the number of steps and $d$ the number of dimensions, RFD
under the isotropy assumption has the same computational complexity as
gradient descent (i.e. $O(nd)$).
We will cut L308-312 from section 8, to focus it on the limitations and work
these extensions into an outlook contained in the conclusion, which will be rewritten to be
> ### Conclusion
> In this paper we have demonstrated the **viability** (computability and scalable complexity) and **advantages** (scale
invariance, explainable step size schedule which does not require expensive tuning) of replacing the classical "convex
function" framework with the "random function" framework. Along the way we
bridged the gap between Bayesian optimization (not scalable so far) and classical optimization
methods (scalable). This theoretical framework not only sheds light on existing step size heuristics, but can also
be used to develop future heuristics.
>
> We envision the following improvements to RFD in the future:
>
> 1. The *reliability* of RFD can be improved by generalizing
the distributional assumptions to cover more real world scenarios. In particular
we are interested in the generalization to linear isotropy because
we suspect that regularization such as weight and batch normalization [43, 23]
are used to patch violations of isotropy (cf. Section F).
> 2. The *performance* of RFD can also be improved. Since RFD
is forgetful while momentum methods retain some information, it is likely
fruitful to relax the full
forgetfulness. Furthermore, we suspect that adaptive learning rates [e.g. 12,
27], such as those used by Adam, can be incorporated with geometric anisotropies
(cf. Sec. E.1). Performance could also be further improved by estimating
the covariance (locally) online instead of globally at the start. Finally, the
implementation itself can be made more performant.
Finally, we change the abstract into
> ### Abstract
> Classical worst-case optimization theory neither explains the success of
optimization in machine learning, nor does it help with step size selection.
In this paper we demonstrate the viability and advantages of replacing the
classical "convex function" framework with a "random function" framework.
With complexity $O(n^3d^3)$, where $n$ is the number of steps and $d$ the number of dimensions, Bayesian optimization with gradients
has not been viable in high dimensions so far. By bridging the gap between
Bayesian optimization (i.e. random function optimization theory) and classical optimization using a
‘stochastic Taylor approximation’ to rediscover gradient descent (with $O(nd)$ complexity) we establish viability. This
rediscovery yields a step size schedule we call Random Function Descent (RFD).
The advantage of this random function framework is that RFD is scale invariant and that it provides a theoretical foundation for common step size
heuristics such as gradient clipping and gradual learning rate warmup.
We hope these modifications further improve the accessibility of our
paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Use of Anchoring for Training Vision Models | Accept (spotlight) | Summary: This paper identifies a major problem with anchored training, that the performance of anchored training does not increase with increasing reference set size, and proposes a simple regularization approach to overcome this problem. This approach is evaluated on OOD generalization, calibration and anomaly rejection, and task adaptation, and various facets of anchored training are analyzed.
Strengths: The paper makes the interesting finding that the performance of anchored training does not increase with increasing reference set size, and that this problem is not alleviated by more sophisticated inference strategies. The paper also proposes a simple reference-masking regularization technique to help alleviate this problem. The experiments show the effectiveness of the proposed approach, and there is also analysis of how the method interacts with data augmentation and noisy labels. An ablation study of the $\alpha$ parameter is also performed. Training recipes are also provided, making the paper easier to reproduce.
Weaknesses: One weakness is that the reference set selection strategy and reference set sizes are not explained for the experiments.
The impact/novelty is a bit limited because of the lack of comparisons to non-anchored training works.
Minor points: in the tables, decreases in performance could be colored in a color other than pink. Figure 1 could be improved with error bars. One highlight was missed in Table 3. The abbreviation LP is not defined.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there any explanation for why the accuracy stays relatively constant (Figure 1) regardless of reference set size?
2. Does training for more epochs help alleviate the reference set size problem?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of this work are discussed by the authors at the end of the paper. Negative societal impact is probably not a concern for this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback. We hope our responses address the questions you have raised.
**1. Accuracy Remains Constant Regardless of Reference Set Size**
We would like to clarify that this is precisely the problem with the original anchored training protocol, which we solve in this paper. As shown in Fig. 1 of the paper, the original anchoring protocol does not fully leverage the diversity of reference-residual pairs with increasing reference set size and maintains a relatively constant accuracy regardless of the reference set size. We hypothesize that this is because, as the size of the reference set increases, the number of reference-residual pairs grows combinatorially. For example, when the reference set is the entire dataset D, there are |D| choose 2 pairs, making it impractical to explore all pairs within a fixed number of training iterations. This results in insufficient sampling of reference-residual pairs, increasing the risk that anchored training may overlook the reference and make predictions based solely on the residuals, leading to non-generalizable shortcuts. This is problematic, as a sample should not be identifiable without knowing the reference.
**2. Alleviating the Reference Set Size Problem**
One possible way of alleviating this problem is by reducing the reference set size. However, this reduces the diversity of the reference-residual pairs exposed during training and can lead to a poor solution. While the issue of diversity can be combated with large reference set sizes, increasing the number of epochs alone does not solve the problem as there exists a combinatorially large number of reference-residual pairs which cannot be practically explored, and the model will still be vulnerable to shortcuts. Moreover, modifying the number of training epochs results in non-trivial modifications in the training hyper-parameters (e.g., learning rate schedules) and can lead to poorly convergent models if the hyper-parameters are chosen incorrectly. Hence, we propose a reference masking regularizer for anchored training, that helps mitigate shortcut decision rules while also being computationally efficient.
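Schematically, anchored training with the proposed reference masking could look like the following sketch (the function name, the zeroing of the reference channel, and the batch layout are illustrative assumptions, not the authors' implementation; masked samples would additionally be trained toward a maximum-entropy prediction):

```python
import numpy as np

rng = np.random.default_rng(0)

def anchored_batch(x, reference_set, alpha=0.2):
    """Build anchored inputs [reference, residual] for a batch x of shape (B, D).

    With probability alpha per sample, the reference is masked (zeroed);
    during training those samples would be pushed toward a high-entropy
    prediction so the model cannot rely on the residual alone.
    """
    idx = rng.integers(0, len(reference_set), size=len(x))
    ref = reference_set[idx].astype(float)   # fancy indexing returns a copy
    residual = x - ref                       # residual w.r.t. the sampled reference
    masked = rng.random(len(x)) < alpha
    ref[masked] = 0.0                        # reference masking
    return np.concatenate([ref, residual], axis=-1), masked
```

For unmasked samples, reference plus residual reconstructs the input exactly, which is what lets any reference be paired with any sample.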
**3. Reference Set Selection Strategy**
For all experiments in Section 4, we utilize the entire training dataset as the reference set and train both the original and the proposed anchored models. During inference, we randomly select a single reference from the reference set and perform evaluation on the different test datasets. We will better clarify this in the final version of the paper.
**4. Impact/Novelty**
Anchoring is a framework that is agnostic to any training strategy, domain or application and can be wrapped along with other strategies (data augmentation, loss functions, regularizers, ensembling) that help improve model performance. While this is attractive, our paper identifies the shortcomings of the existing protocol and deals with a fundamental problem of how to train and make predictions with anchored models in practice. We develop a novel reference masking protocol for training anchored models that can significantly improve overall model generalization. We demonstrate significant quantitative performance improvements over the standard and original anchored training protocols across different datasets, tasks and architectures. Particularly, we find that our proposed algorithm leads to a wider and flatter optimum corresponding to superior solutions (Fig. 4), can be used on top of augmentation strategies (Fig. 5a), and better handles training label noise (Fig. 5b). We systematically establish the empirical efficacy of our approach on OOD generalization and model safety tasks ranging from calibration and anomaly rejection to task adaptation and domain generalization. We expect to foster interesting research directions with anchoring and even impact applications in different domains (e.g., text, graphs etc.) where generalization and model safety are paramount concerns.
**5. Formatting Issues and Typos**
We will make the changes to the tables and figures in the final version of the paper.
---
Rebuttal 2:
Title: Request to check our response
Comment: We thank the reviewer for taking the time to review our paper and providing useful feedback. As the discussion phase is ending soon, we would greatly appreciate it if the reviewer could check our response and let us know if there are any additional questions.
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed response addressing my questions about the accuracy, increasing the number of epochs, and impact and novelty, which have clarified and provided additional insight on the paper, as well as those of the other reviewers. Therefore, I would like to increase my score. I have one follow-up question on the accuracy remaining constant regardless of reference set size in vanilla anchored training: in the paper, it is mentioned that when $|\mathcal{R}| \le 50$, the model will likely see all combinations of reference and sample, but in Figure 1, we see either no increase in accuracy between $|\mathcal{R}| = 5$ and $|\mathcal{R}| = 50$ (CIFAR-10) or only a slight increase (CIFAR-100). Is there an explanation for this? One might expect that the accuracy will increase until $|\mathcal{R}|$ increases to the point where not all combinations can be seen by the model.
---
Reply to Comment 2.1.1:
Title: Thanks and a clarification
Comment: We thank the reviewer for checking our rebuttal and considering improving the score.
In anchoring, the quality of the converged optima depends upon the diversity of the reference-residual pairs (induce a rich family of functions) exposed during training. However, we note that the reference set size is only a surrogate for diversity, and more importantly, we do not use any sophisticated strategy for reference set selection (random sampling). As a result, even when all combinations are exposed, it is not guaranteed that the diversity of functions at |R| = 50 is significantly higher than that of |R| = 5. If there is a potentially better way of picking reference sets that are guaranteed to lead to diverse functions, we can expect stronger performance gains at |R| = 50. However, it is not clear how to design such a reference selection protocol. Instead, we recommend the use of very large reference sets (entire training data or even training data along with its augmented versions). That is where the problem of under-exposure of all combinations kicks in, thus motivating our regularizer. | Summary: In this paper, the authors propose a new strategy to train anchoring-based models, significantly improving performance, training efficiency, and model generalization compared to previous approaches. The key to the method is the added masking strategy that allows the model to better profit from anchoring-based training. The authors demonstrate that modifications only in inference (using several samples or searching for the best references) or the number of used references do not improve model performance, while the application of the masking procedure significantly improves it, as shown on various image classification datasets, specifically CIFAR-10, CIFAR-100, and ImageNet, using different architectures (both CNN and attention-based). The experiments demonstrate the effectiveness of the proposed method and the significant benefit of using it for improved generalization.
Strengths: * The paper is clearly written and easy to follow. The idea is intuitive and easy to grasp. The related work section provides an adequate discussion of existing approaches to anchoring-based training. The analysis narrative, with the presented drawbacks of existing methods, is very clear and easy to understand.
* The idea of masking the reference input argument is very clear and logical. The intuition for why the problem could occur seems correct: 1) the number of reference-residual combinations grows combinatorially, and therefore 2) the model could learn to ignore the reference argument. This is further clearly supported by the experiments.
* The authors provided an extensive evaluation of their approach, spanning different datasets and architectures, which provides a solid grounding to support the proposed method.
Weaknesses: * It seems that the evaluation could benefit from an additional comparison with other existing state-of-the-art OOD/uncertainty methods to better represent the quality of the results (not just in comparison with former anchoring-based approaches, but overall).
* From the perspective of the experimental evaluation, I would be curious to see evidence that the behavior demonstrated in the paper would hold in other domains, such as texts, graphs, more complicated vision tasks (e.g. segmentation), not limiting to image classification task.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Why do the authors focus on vision models when the method seems to be very generic and applicable to other domains as well?
* One of the claims the authors make is that the proposed masking procedure helps with the problem of the model ignoring reference input. They support this claim with, for example, Figure 2—an experiment showing that without this masking, we do not observe improvements in terms of performance, which is only a proxy for the claim. Is it possible to measure the sensitivity of the model with regard to reference inputs (for example, by adding noise to it and measuring the change in the outputs)?
* As far as I understand from the method description, the final method in Section 4 uses only one reference image for inference. How does the performance change with an increased number of references? The lack of improvements in performance (e.g., as in the right plot in Figure 2) seems strange to me since we would observe the opposite behavior in all existing ensembling approaches (e.g., [1, 2, 3, 4]). How would one explain such behavior? Additionally, it would be good to see some comparisons with these methods or at least include them in the discussion.
[1] Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. "Simple and scalable predictive uncertainty estimation using deep ensembles." NeurIPS 2017
[2] Wen, Yeming, Dustin Tran, and Jimmy Ba. "Batchensemble: an alternative approach to efficient ensemble and lifelong learning." ICLR 2020
[3] Durasov, Nikita, et al. "Masksembles for uncertainty estimation." CVPR 2021
[4] Laurent, Olivier, et al. "Packed-ensembles for efficient uncertainty estimation." ICLR 2023
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback. We hope our responses address your questions.
**1. Generic Applicability of Anchoring**
Thank you for this question. We concur with you that anchoring is a protocol for training deep neural networks for use with any domain for any application (e.g., texts, graphs or vision tasks such as segmentation). In this paper, our goal was to identify the shortcomings of the original anchored training [1] and develop algorithms to improve the same, and vision was chosen only as a domain of convenience. Though we have performed initial experiments on using anchored training for other data domains (e.g., graphs, text), we did not include them to restrict the scope of the current submission. We find that our proposed anchoring approach produces performance improvements even in non-vision tasks. In the final version of the paper, we will include this discussion as part of the concluding remarks. In summary, anchoring is a domain-agnostic, architecture-agnostic, and task-agnostic training strategy.
**2. Clarifying ‘Model Ignoring Reference Input’**
It must be noted that the anchoring principle as demonstrated in [1] implicitly explores a large family of functions during training due to the lack of shift invariance of the underlying neural tangent kernel when the input is translated by a reference. We believe that this strategy produces a local optimum similar in spirit to _stochastic weight averaging_ [2], which averages multiple solutions along the trajectory of gradient descent. Unique to anchoring, the quality of the converged optimum depends upon the diversity of the reference-residual pairs (which induce a rich family of functions) exposed during training.
However, we observe that the original anchored training even with large reference sets does not fully leverage the reference-residual diversity and converges to a poor local optimum (Fig. 4b). We attribute this to the anchored model relying on shortcuts to make predictions. Please note that the usage of the term shortcut (ignoring the reference) in the context of anchoring is different from convention. Shortcuts manifest during anchored training when the model can predict well only with certain arbitrary references but on average converges to a poor optimum. Basically, the functions induced by ignoring the reference are entirely different from the ones obtained without ignoring them, making the model eventually converge to a sub-optimal (implicitly averaged) local optimum. We will better clarify this in the final version of the paper.
**3. Clarifying the Anchoring Inference Mechanism**
We would like to emphasize that anchored training (i) enforces prediction consistency of a sample with any reference; (ii) produces a single model; and (iii) converges to a local optimum in a manner akin to stochastic weight averaging [2]. Moreover, when the diversity of the reference-residual pairs is well leveraged during training, it allows the model to converge to a wider optimum, improving model generalization (Fig. 4 in the main paper). Therefore, the 'quality' of the optimum governs performance during inference and, as a result, the inference strategy (e.g., choosing K random references for inference) does not alter the (mean) model performance. It must be noted that anchoring must not be viewed under the lens of model ensembles that train multiple models where each member explores different yet possibly diverse local optima. While [1] measures discrepancies in predictions of a sample with different references as a notion of epistemic uncertainty, we find that the mean performance does not change. Fig. 2 in the main manuscript compares different inference protocols (1 random anchor, K anchors and transduction) and finds no significant differences in accuracy.
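For concreteness, inference with K random references can be sketched as below (a toy illustration; `predict_fn`, the concatenated input layout, and the use of the standard deviation as the epistemic-uncertainty signal are assumptions, not the paper's code):

```python
import numpy as np

def k_anchor_inference(predict_fn, x, references):
    """Average an anchored model's prediction for x over K references.

    predict_fn consumes a single concatenated [reference, residual] vector;
    the spread of predictions across references is the kind of
    epistemic-uncertainty signal described in Thiagarajan et al. [1].
    """
    preds = np.stack([predict_fn(np.concatenate([r, x - r])) for r in references])
    return preds.mean(axis=0), preds.std(axis=0)
```

Because a well-trained anchored model is consistent across references, the mean prediction barely changes with K, matching the observation above.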
**4. Comparison with Existing OOD/Uncertainty Estimation Methods**
Thank you for this very important question. While a systematic evaluation of OOD detection with state-of-the-art methods is most essential, it is beyond the scope of this paper. Our paper aims to establish anchoring as a useful training protocol and demonstrate its efficacy across a spectrum of tasks and model architectures. Although [1] provided evidence of the efficacy of epistemic uncertainties from anchoring for OOD detection, we will be conducting a large-scale study as part of our immediate future work.
[1] Thiagarajan et al. Single model uncertainty estimation via stochastic data centering, Neurips 2022
[2] Izmailov et al. "Averaging weights leads to wider optima and better generalization”, UAI 2018
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal and for addressing the questions and concerns raised. I appreciate the additional insights provided regarding the generic applicability of anchoring, the clarification on the model's interaction with reference inputs, and the explanation of the inference mechanism. Your responses have clarified several key aspects of the paper, particularly the distinction between your approach and traditional ensembling methods.
While the paper focuses on anchoring within the vision domain, I understand the rationale for this choice and acknowledge the potential for broader applicability in other domains. The planned inclusion of discussions on non-vision tasks in the final version will be a valuable addition.
Given the solid contributions of your work and the thorough responses provided, I am maintaining my original rating.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: We appreciate the reviewer for going over our response and recommending acceptance. | Summary: This paper presents a thorough discussion on the use of anchoring for training vision models. In particular, the paper tackles 1) the problem of reference diversity when training with anchoring to explain how superior generalization can be achieved 2) addresses the problem of spurious correlations learnt between the residual and 3) how different inference-time strategies can enable greater out-of-support generalization. Overall, this comprehensive study of anchoring provides useful guidelines for how anchoring should be applied to extract maximum performance. The paper empirically confirms this via the proposed anchoring scheme outperforming prior work noticeably.
Strengths: 1) Clarity: The paper is very clearly written and easy to follow. Readers unfamiliar with the literature like myself are able to understand what anchoring is, how it can be useful for (out of support) generalization and how current methods fail to apply anchoring in the most effective way.
2) Thoroughness of Evaluation: The paper conducts thorough ablations on several components of the anchoring pipeline. Reference diversity, reference masking, inference procedure etc. More
Weaknesses: No obvious weaknesses.
Technical Quality: 4
Clarity: 4
Questions for Authors: Have the authors compared the out of support generalization of anchoring procedures to other methods for domain generalization (which tackles a similar problem)? Considering datasets and baselines from *In Search of Lost Domain Generalization* (https://arxiv.org/abs/2007.01434) can further broaden the impact of this paper.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive comments and feedback. We hope our response addresses your concern.
**Domain Generalization Benchmarks**
Thank you for this question. We would like to highlight that we performed experiments on DomainNet which is one of the benchmarks from DomainBed [1] (Line 293 of the main paper). Following the setup in [1], we trained on one of the domains (source) and evaluated the model on the remaining domains (target). While it is common in the domain generalization literature to train models end to end on the source dataset, we instead trained a linear probe on top of an anchored feature extractor pre-trained on ImageNet. This was motivated by the need to investigate the impact of anchored training in producing better generalizable feature extractor backbones. In particular, we trained a linear probe with ERM [1] using the ‘real’ and ‘sketch’ (source) splits respectively from DomainNet. We then evaluated performance on the other (target) domains and observed performance improvements over the non-anchored variant.
With respect to the domain generalization specific baselines and training strategies (e.g., DRO, IRM) used in [1], we would like to emphasize that our proposed anchored training protocol can be simply used as a wrapper on all such methods and we expect it to improve overall performance similar in spirit to our analysis on augmentation methods in Section 3.2 (Fig 5a of the main paper). We plan to perform an extensive analysis on domain generalization as a part of our future work.
[1] Gulrajani, Ishaan, and David Lopez-Paz. "In Search of Lost Domain Generalization." ICLR 2021
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and continue to recommend acceptance for this work. | Summary: The authors analyze the effect of anchored training through a series of small experiments and find that, contrary to claims in prior works, increasing the size of the reference set is not beneficial and that this shortcoming cannot be mitigated through existing inference strategies. The authors provide a simple yet efficient fix by randomly masking out the reference during training, and forcing the model to make high entropy predictions in those cases. This solution does not incur any training overhead, and the authors demonstrate in extensive experiments that the fix is applicable to different models and datasets, yields improvements for OOD performance over various distribution shifts, and improves calibration and anomaly resilience.
Strengths: 1. The paper is very well written and structured and is overall easy to follow. The initial experiments highlight the studied problem well.
2. The authors showcase an important limitation to existing anchoring techniques that was unknown to the community.
3. The proposed solution is simple and is demonstrated to consistently improve performance across models and datasets.
4. The experiment section is extensive and covers both OOD performance as well as safety-relevant metrics. The results convincingly demonstrate the effectiveness of the proposed method.
Weaknesses: The paper is very well written, I don't see any major weaknesses that would prevent an accept.
Minor weakness: The optimal $\alpha$ is determined when using the entire dataset as a reference set. However, as is clear from the motivation, risk of spurious shortcuts is larger with a smaller reference set. Wouldn't this imply that the optimal $\alpha$ would be larger for smaller reference sets? How should this value be chosen in practice and for datasets larger than ImageNet-1k?
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Tab. 2 Do you have any insights why the improvements on ImageNet-S and ImageNet-R are drastically different for SWIN transformers and ViT?
2. (minor) The formatting of paragraph headers in the introduction is weird and inconsistently using underline.
3. (minor) Erroneous comma in L.165
4. (minor) It is hard to visually assess from Fig. 4 whether the optima are significantly different.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations were sufficiently addressed, especially the empirical nature of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback. Here are our responses to your questions. We plan to incorporate some of these clarifying comments to the manuscript as well.
**1. Choice of $\alpha$**
We want to clarify that at low reference set sizes, there is a high likelihood of exposing the model to all possible combinations of samples and references, and hence the risk of learning shortcuts is minimal. In this case, overemphasizing the reference masking probability (i.e., increasing $\alpha$) can significantly inhibit this exposure. Consequently, this leads to underfitting as the model is tasked with learning solely from the residuals which is undesirable in practice (Blue curve for reference set size $\leq$ 50 in Fig.1 of the paper). Reducing $\alpha$ can combat this behavior, as evidenced by the original anchored training (special case of reference masking with $\alpha = 0$, red curves for reference set size $\leq$ 50 in Fig.1).
Now, with larger reference sets (e.g., datasets in the scale of ImageNet 1K), the number of reference-residual pairs grows combinatorially, making it impractical to expose the model to all diverse pairs in a fixed number of training iterations. In such a scenario, reducing $\alpha$ can increase the risk of learning shortcuts and lead to suboptimal performance. Increasing $\alpha$ on the other hand can in fact aid training as it systematically avoids these shortcuts and improves generalization. In summary, the optimal $\alpha$ value depends both on the reference set size and the convergence behavior of model training.
**2. Performance Improvements on ImageNet-R/S with VITb and SWINv2B**
Anchored training with ImageNet-1K involves exposing the model to a significantly large and diverse set of reference-residual combinations. Our results show that, in order to model such a large joint distribution and leverage the diversity of the reference set, higher-capacity networks (VITb & SWINv2B) are required alongside the proposed masking regularizer. We hypothesize that this behavior helps such networks better handle challenging, far out-of-distribution datasets such as ImageNet-R/S. For instance, we observe an average of 1.5% and 1.7% improvements in ImageNet-R and ImageNet-S accuracies, respectively, when using such architectures over lower-capacity models.
**3. Formatting**
We will make changes to the paragraph header, correct underline inconsistencies in the introduction and improve Fig. 4 in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their explanation and clarification. I believe a succinct discussion of alpha in different scenarios like the one provided here would be useful to include in the paper.
I have no further questions and continue to recommend acceptance.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: We sincerely appreciate the reviewer for going over our response and championing our paper! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation | Accept (poster) | Summary: This paper presents a method to estimate 3D human keypoints from a sequence of monocular 2D keypoint observations. It builds upon an existing sequence-to-sequence architecture (MixSTE), with a different output parameterization exploiting a kinematic skeleton prior, and different training losses. Lengths of the skeleton bones are predicted for the whole sequence to ensure consistency across frames (and maybe also left/right symmetry of the skeleton), and five 3D pose hypotheses with associated scores are predicted for each frame, parameterized as a list of 3D relative orientations for each bone with respect to its parent in the kinematic tree.
The authors develop theoretical arguments regarding the benefits of enforcing such structural priors in the predictions, and illustrate with a toy example the benefit of having multiple predictions in the case of ambiguous multimodal output. They validate their approach on the Human3.6M and MPI-INF-3DHP datasets.
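The parameterization described above (per-sequence bone lengths plus per-bone relative rotations along the kinematic tree) can be sketched as forward kinematics, which enforces constant bone lengths by construction (a hedged illustration with assumed conventions such as a +z rest direction for every bone, not the paper's code):

```python
import numpy as np

def forward_kinematics(bone_lengths, rotations, parents):
    """Recover 3D joint positions from bone lengths and per-bone relative
    rotations along a kinematic tree; bone lengths hold exactly by
    construction. parents[j] is the parent of joint j (joint 0 is the
    root), and each bone's rest direction is taken to be +z.
    """
    n = len(parents)
    pos = np.zeros((n, 3))
    frames = [np.eye(3) for _ in range(n)]
    rest_dir = np.array([0.0, 0.0, 1.0])
    for j in range(1, n):                     # joints in topological order
        p = parents[j]
        frames[j] = frames[p] @ rotations[j]  # compose relative rotation
        pos[j] = pos[p] + bone_lengths[j] * (frames[j] @ rest_dir)
    return pos
```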
Strengths: The motivation for exploiting bone lengths constraints is well expressed, with a clear and detailed discussion provided in Section 4. The discussion of experimental and ablation results is insightful and shows – in a setting dependent on an oracle – benefits of the proposed approach.
Weaknesses: The idea of enforcing body priors (constant bone length here) is not novel and has actually been heavily exploited in a whole line of work relying on more advanced parametric models such as SMPL [100]. This line of work deserves consideration in the paper, as it encompasses approaches suitable for 2D-to-3D sequence lifting, such as [101].
The authors present a pose space consisting of 3D coordinates of joints linked by rigid segments. Based on this definition, a natural pose parameterization would consist of the 3D direction of each segment, yet the authors chose to overparameterize poses by using relative 3D bone orientations instead. I understand that such a choice can have practical benefits in terms of biomechanical constraints and additional supervision signal when ground truth data is available, but it should be properly motivated, discussed, and ablated in the paper.
The authors describe two ways of aggregating results (L247) but do not state which one they use for MPI-INF-3DHP, and they only report oracle results on Human3.6M and for the ablations.
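For context, "oracle" aggregation in the multi-hypothesis literature typically means selecting, per frame, the hypothesis closest to ground truth; it upper-bounds achievable accuracy but is not deployable at test time. A sketch under assumed shapes and naming:

```python
import numpy as np

def oracle_select(hypotheses, gt):
    """Per-frame oracle aggregation: among H hypotheses of shape
    (H, T, J, 3), pick for each frame the one with the lowest MPJPE
    against ground truth gt of shape (T, J, 3). Requires GT, so it
    only bounds the achievable accuracy.
    """
    # per-hypothesis, per-frame mean joint position error -> (H, T)
    err = np.linalg.norm(hypotheses - gt[None], axis=-1).mean(axis=-1)
    best = err.argmin(axis=0)                 # best hypothesis index per frame
    return hypotheses[best, np.arange(gt.shape[0])]
```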
In my understanding, pose hypotheses are selected independently for each frame and there are no temporal terms in the training objectives or aggregation method. Since the proposed approach deals with temporal sequences, it would be worth evaluating the temporal consistency of the predictions, both qualitatively through video examples and quantitatively, e.g., using joint acceleration metrics. Having multiple hypotheses for each frame brings combinatorial questions worth discussing in my opinion.
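One common quantitative proxy for the temporal consistency requested here is the acceleration error: the second finite difference of the joint trajectories, compared between prediction and ground truth. A minimal sketch (shapes and units are assumptions):

```python
import numpy as np

def accel_error(pred, gt):
    """Mean per-joint acceleration error between predicted and
    ground-truth joint trajectories of shape (T, J, 3), via the second
    finite difference along time, as in the standard 'accel error'
    metric of the human-pose literature.
    """
    a_pred = pred[2:] - 2 * pred[1:-1] + pred[:-2]
    a_gt = gt[2:] - 2 * gt[1:-1] + gt[:-2]
    return np.linalg.norm(a_pred - a_gt, axis=-1).mean()
```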
References:
- [100] Loper at al., “SMPL: A Skinned Multi-Person Linear Model”, at SIGGRAPH Asia 2015.
- [101] Baradel et al., “PoseBERT: A Generic Transformer Module for Temporal 3D Human Modeling”, in TPAMI 2022.
Technical Quality: 2
Clarity: 3
Questions for Authors: See the weaknesses section for a list of suggestions.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We answer their remarks below, following the same order.
1. **Citation of SMPL-based methods:**
- _We will properly cite these important works._ We agree with the reviewer that SMPL-based methods share the same constant-bone-length assumption that we present, and that it represents an extensive line of work which is worth mentioning. We thank the reviewer for pointing this out to us.
- We would like to highlight however a few differences between these works and ours:
- _HMR is a field solving a more complex task:_ “Human body pose and shape estimation” (a.k.a. “human body reconstruction”, “human mesh reconstruction”, “HMR”, …) is a separate field, different from our “3D human pose lifting”: the objective of the former is to predict whole 3D body meshes from images, as opposed to 3D joint positions from 2D keypoints for the latter. This means that these works tackle a different task, which is more challenging.
- _HMR is often frame-based:_ Because their task is more challenging, SMPL methods are more computation-heavy and often restricted to single-frame predictions, which are then smoothed across frames using optimization-based post-processing. A good example of this methodology can be seen in [A].
- _SMPL methods deliver worse MPJPE:_ Of course, predicting the human mesh across time includes the skeleton pose, which means that some of these methods do evaluate MPJPE. Note however that, while [A] was SOTA on Human3.6M the last time we checked, its performance (MPJPE=44.8 mm) lags behind 3D human pose lifting methods (cf. Tables 2 and 8 in our paper).
- _Our novelty lies in multiple hypotheses with constraints:_ Furthermore, please note that our work does not just propose to restrict predicted poses to have constant bone lengths; it proves that this is not sufficient to optimize both pose fitting (i.e., joint position error) and consistency (i.e., avoiding bone stretching). We prove that multiple hypotheses are needed to accommodate both objectives and propose a practical implementation, showcased in our experimental results.
2. **Concerns regarding rotations parametrization:**
- _Learning 3D directions instead of full rotations yielded poorer results:_ We understand the reviewer’s concern. It is true that, when compared to full 3D rotation matrices, predicting simple direction vectors (as done in Anatomy3D [4] for example) is a simpler way of parametrizing poses where joints are connected by segments. As advised by the reviewer, we hence performed a new ablation study, where we compare our parametrization to the one used in Anatomy3D, i.e., where the rotations module predicts normalized vectors representing the direction of each segment. As shown in Table 1 of the pdf attached to our global answer, the latter leads to poorer oracle MPJPE of 39.6 mm when compared to our rotations parametrization (39.1 mm) in a multi-hypothesis setting.
- _Our choice was guided by relevant previous works:_ To clarify our choice, we opted for our rotations representation building on results from [43], where the benefits of 6D representations of SO(3) matrices are studied. Please note that [43] includes good results on an inverse kinematics problem with "stick" human poses similar to ours.
3. **Concern regarding results with aggregated poses:**
- _Values for MPI-INF-3DHP are reported in Table 4:_ The penultimate row corresponds to a weighted averaged pose, while the last row corresponds to the oracle pose.
- _Values for Human3.6M are reported in Table 8 in the appendix:_ Metrics of the aggregated pose can be found at the penultimate row of Table 8. We understand that this can be missed since it is in the appendix.
- _We will clarify this in our revision._ We apologize if the results with aggregated poses are not stated with enough clarity in our manuscript.
- _MPSCE performance is reported above:_ As explained to reviewer 5BLY in Q3, both MPJPE and MPSCE performance are hurt when using the averaged pose. While the latter is expected according to Proposition 4.2 in the paper, the MPJPE degradation is explained by a new theoretical result proved in the answer to reviewer 5BLY.
4. **Concern regarding temporal consistency of predictions:**
- _The training objective includes a temporal term in the Human 3.6M and MPI-INF-3DHP experiments._ We apologize if this was not very clear in our manuscript. It is mentioned in Appendix D.3 lines 602-609, where we explain that TCloss and velocity loss terms were added to our original objective in order to have a fair comparison with existing art. We chose not to add this in the main manuscript because of space limitations and because they are relatively standard practice. We agree that it may lead to confusion and will hence add a longer note on this in our revision to the main paper.
- _We provide a gif demonstrating temporal consistency:_ Although we understand the reviewer's concern, we found no important issues regarding this point. We believe this is explained by the aforementioned loss terms and the fact that the model predicts sequences instead of single frames. The reviewer can find a gif in the pdf attached to the main response (NeurIPS rules forbid us from providing a link to a video). Note that Adobe Reader is needed to visualize it.
[A] Goel, Shubham, et al. "Humans in 4D: Reconstructing and tracking humans with transformers." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[4] Chen, Tianlang, et al. "Anatomy-aware 3d human pose estimation with bone-based pose decomposition." IEEE Transactions on Circuits and Systems for Video Technology 32.1 (2021): 198-209.
[43] Zhou, Yi, et al. "On the continuity of rotation representations in neural networks." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer. I acknowledge having read the rebuttal (but I don't have Acrobat Reader unfortunately).
For reference, and although this is not directly related to your argument, the continuity theory proposed in [43] is disputed in:
Brégier, "Deep Regression on Manifolds: A 3D Rotation Case Study", in 3DV 2021. | Summary: This paper proposes a MCL-based framework for multi-hypothesis 3D human pose estimation. This framework predicts skeletal parameters so that the predicted 3D poses in a sequence are constrained to one smooth manifold. To prove the superiority of such a framework, the paper presents detailed theoretical analysis on the drawback of unconstrained single-hypothesis HPE and why MPJPE alone is not enough for pose evaluation. The experiments show the proposed framework is capable of keeping the consistency of predicted poses and achieving state-of-the-art MPJPE in the meantime.
Strengths: * Simple and reasonable manifold representation. The proposed framework keeps the predicted human pose on the target manifold by representing the human pose with bone lengths and orientations, and the 3D pose is a direct inference from forward kinematics. The manifold is represented by the kinematics itself.
* Inspiring theoretical analysis of basic problems in 3D HPE. The paper arrives at some theoretical conclusions (lines 178-183), along with detailed proofs. They can provide some refreshing ideas on the innate drawbacks of traditional loss functions and the MPJPE metric.
* Good performance under both MPJPE and consistency measures, as validated in Table 2 and 3.
Weaknesses: * Theoretical analysis on the advantage of multi-hypothesis methods over single-hypothesis ones could be added. Specifically, why a **constrained multi-hypothesis** method performs better than an **unconstrained single-hypothesis** method in MPJPE? Though this is already validated by the experiments, I personally believe it would make the paper more solid if the authors could make this analysis.
Minor problem:
* In Fig. 4 (C) and (D), it is not quite clear how the estimations (crosses and triangles) correspond with the inputs (black dots). There might be some unexpected shifts, as the projections of the predictions do not strictly align with the inputs (as they do in B).
Technical Quality: 3
Clarity: 4
Questions for Authors: What is the quality of the score for each hypothesis? If the multiple hypotheses are fused to one (e.g. by taking the one with the largest confidence or taking the weighted average), then how will the MPJPE, MPSCE, and MPSSE change?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We answer their remarks below, following the same order.
1. **New theoretical analysis on the advantage of multi-hypothesis methods over single-hypothesis:**
We agree with the reviewer and provide the proposed theoretical result hereafter.
- Let $\mathcal{X}=\mathbb{R}^{2 \times J}$ denote the space of input 2D poses and $\mathcal{P}=\mathbb{R}^{3 \times J}$ the space of 3D poses. Following Rupprecht et al. 2017 [31] and Letzelter et al. 2024 [A], we define the “oracle risk” for a K-hypothesis model $f_{\text{WTA}} = (f_{\text{WTA}}^1, \dots, f_{\text{WTA}}^K)$ as:
$$ \mathcal{R}^K (f_{\text{WTA}}) \triangleq \int_\mathcal{X}
\sum_{k=1}^{K} \int_{\mathcal{V}^k(f_\text{WTA}(\mathrm{x}))} \\|f_{\text{WTA}}^k(\mathrm{x}) - \mathrm{p} \\|^2_2 \rho(\mathrm{x}, \mathrm{p}) \mathrm{d} \mathrm{p} \mathrm{d} \mathrm{x}, $$
where $\mathcal{V}^k(g)$ denotes the $k^{th}$ cell of the Voronoi tessellation of the output space $\mathcal{P}$ defined by generators $g=(g^1, \dots, g^K) \in \mathcal{P}^K$:
$$\mathcal{V}^k(g) \triangleq \Big\\{ \mathrm{p} \in \mathcal{P} \;\Big|\; \\| g^k - \mathrm{p} \\|^2_2 < \\| g^r - \mathrm{p} \\|^2_2, \forall r \neq k \Big\\}.$$
The risk above captures the notion of the oracle pose, since it partitions the space of ground-truth poses $\mathcal{P}$ into regions where some hypothesis is the closest, and uses only that hypothesis to compute the risk in that region.
_Now we can state our new proposition:_
- A $K$-hypothesis model $f_{\text{WTA}}^*=(f_{\text{WTA}}^{1*}, \dots, f_{\text{WTA}}^{K*} )$ minimizing $\mathcal{R}^K$ has always a risk lower or equal to a single-hypothesis model $f_\text{MSE}^*$ minimizing $\mathcal{R}^1$:
$$\mathcal{R}^K (f_{\text{WTA}}^*) \leq \mathcal{R}^1(f_\text{MSE}^*) = \min_f \mathbb{E}_{\mathrm{x}, \mathrm{p}}[\\| \mathrm{p} - f(\mathrm{x})\\|^2_2], \quad \text{where } f_\text{MSE}^*(\mathrm{x}) = \mathbb{E} [ \mathrm{p} \| \mathrm{x} ].$$
_The proof relies on the following steps:_
- First, we assume that $f_{\text{WTA}}$ is expressive enough, so that minimizing the risk $\mathcal{R}^K$ comes down to minimizing
$$\mathcal{R}\_{\mathrm{x}}^K (f\_{\text{WTA}}) \triangleq \sum_{k=1}^{K} \int_{\mathcal{V}^k(f_\text{WTA}(\mathrm{x}))} \\|f_{\text{WTA}}^k(\mathrm{x}) - \mathrm{p} \\|^2_2 \rho(\mathrm{p} | \mathrm{x}) \mathrm{d} \mathrm{p} ,$$
for each $\mathrm{x} \in \mathcal{X}$.
- Following [A] (Section 2.2), we decouple the cell generators from the risk arguments:
$$\mathcal{K}(g, z) \triangleq \sum_{k=1}^{K} \int_{\mathcal{V}^k(g)} \\|z^k - \mathrm{p} \\|^2_2 \rho(\mathrm{p} | \mathrm{x}) \mathrm{d} \mathrm{p},$$
for any generators $g=(g^1, \dots, g^K) \in \mathcal{P}^K$ and arguments $z=(z^1, \dots, z^K) \in \mathcal{P}^K$.
Note that $\mathcal{R}_{\mathrm{x}}^K(f) = \mathcal{K}(f(\mathrm{x}), f(\mathrm{x}))$.
- Next, according to Proposition 3.1 of [B], if $f_{\text{WTA}}^*$ minimizes the input-dependent risk $\mathcal{R}\_{\mathrm{x}}^K (f\_{\text{WTA}})$, then $(f_{\text{WTA}}^*(\mathrm{x}), f_{\text{WTA}}^*(\mathrm{x}))$ has to minimize $\mathcal{K}$:
$$\mathcal{K}(f_{\text{WTA}}^*(\mathrm{x}), f_{\text{WTA}}^*(\mathrm{x})) \leq \mathcal{K}(g, z), \qquad \forall g, z \in \mathcal{P}^K \times \mathcal{P}^K.$$
- Finally, let's choose $g$ such that $g^k=f_{\text{WTA}}^{k*}(\mathrm{x})$ and $z$ such that $z^k = f_\text{MSE}^*(\mathrm{x})$ for all $1 \leq k \leq K$. Then
$$\mathcal{R}\_{\mathrm{x}}^K (f\_{\text{WTA}}^{*}) \leq
\sum_{k=1}^{K} \int_{\mathcal{V}^k(f_\text{WTA}^*(\mathrm{x}))} \\|f_\text{MSE}^*(\mathrm{x}) - \mathrm{p} \\|^2_2 \rho(\mathrm{p} | \mathrm{x}) \mathrm{d} \mathrm{p} = \mathcal{R}^1_\mathrm{x}(f_\text{MSE}^*) ,$$
where the last equality comes from the fact that $\mathcal{V}^k(f_\text{WTA}^*(\mathrm{x}))$ defines a partition of $\mathcal{P}$.
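As a sanity check of this proposition, the sketch below (illustrative code, not from the paper) compares the oracle risk of a two-hypothesis model against the risk of the MSE-optimal single predictor (the conditional mean) on a toy bimodal target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conditional distribution: given any x, the target p is -1 or +1
# with equal probability (two modes), plus small Gaussian noise.
n = 100_000
modes = rng.choice([-1.0, 1.0], size=n)
p = modes + 0.05 * rng.standard_normal(n)

# Single-hypothesis MSE-optimal predictor: the conditional mean (here 0).
risk_mse = np.mean((p - 0.0) ** 2)

# Two-hypothesis model placing one hypothesis per mode; the oracle risk
# scores each sample only against its closest hypothesis (its Voronoi cell).
hyps = np.array([-1.0, 1.0])
risk_oracle = np.mean(np.min((p[:, None] - hyps[None, :]) ** 2, axis=1))

print(risk_oracle, risk_mse)   # oracle risk is far smaller (≈0.0025 vs ≈1.0)
```

This matches the proposition: the K-hypothesis oracle risk lower-bounds the single-hypothesis MSE risk, with a large gap when the conditional distribution is multimodal.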
2. **Fig.4 misalignments:**
Indeed, it seems that some misalignment was introduced during editing and should be corrected in our revision.
3. **Metrics with hypotheses fused to one:**
- _MPJPE degrades:_ When we compute the weighted average (cf. penultimate row in tables 3 and 8 in the paper), we see that MPJPE performance is hurt (42.1 mm in H3.6M instead of 39.1). This might indeed indicate that scores could be better estimated, but it could also just be a consequence of the new proposition proved above in point 1.
- _Expected from our new theoretical result:_ Indeed, according to [17, Equation 8], the weighted average is an estimate of the conditional expectation $\mathbb{E}[\mathrm{p} \| \mathrm{x}]$, i.e., the best single-hypothesis model $f\_\text{MSE}^*$. Hence, according to the proposition proven above, it should underperform the multi-hypothesis model in the limit.
- _MPSSE and MPSCE also degrade as expected:_ Concerning pose consistency, we computed an MPSSE of 0.4 mm and an MPSCE of 0.8 mm, which are again worse than for the oracle pose. This is of course expected, since Proposition 4.2 in the paper proves that single-hypothesis models (which the aggregation approximates) are bound to lie outside the pose manifold.
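The MPJPE degradation of the averaged pose has a simple geometric intuition: averaging hypotheses that sit on distinct modes can land far from every mode. A toy sketch (the `mpjpe` helper is hypothetical, not the paper's code):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error between two (J, 3) poses."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Two plausible hypotheses sitting on distinct modes (toy 1-joint pose).
gt = np.array([[1.0, 0.0, 0.0]])
hyps = np.stack([gt, -gt])                     # (K=2, J=1, 3)
scores = np.array([0.5, 0.5])

weighted_avg = np.tensordot(scores, hyps, axes=1)   # collapses to the origin
avg_err = mpjpe(weighted_avg, gt)                   # off both modes
oracle_err = min(mpjpe(h, gt) for h in hyps)        # best hypothesis is exact
print(avg_err, oracle_err)                          # → 1.0 0.0
```

The averaged pose here is also off the "unit bone length" manifold (its bone has length 0), illustrating why both MPJPE and consistency metrics can degrade under aggregation.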
[A] Letzelter, V., Perera, D., Rommel, C., Fontaine, M., Essid, S., Richard, G., & Perez, P. Winner-takes-all learners are geometry-aware conditional density estimators. In Forty-first International Conference on Machine Learning.
[B] Du, Q., Faber, V., & Gunzburger, M. (1999). Centroidal Voronoi tessellations: Applications and algorithms. SIAM review, 41(4), 637-676.
[17] Letzelter, V., Fontaine, M., Chen, M., Pérez, P., Essid, S., & Richard, G. (2023). Resilient Multiple Choice Learning: A learned scoring scheme with application to audio scene analysis. Advances in neural information processing systems, 36.
[31] Rupprecht, Christian, et al. "Learning in an uncertain world: Representing ambiguity through multiple hypotheses." Proceedings of the IEEE international conference on computer vision. 2017.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for addressing my concerns. The theory they prove in the rebuttal is a valuable addition to the contribution of this paper. I have also read the comments from other reviewers and agree that some additional experiments could make this paper more solid. However, I shall vote for acceptance because of the theoretical contributions. If the proofs are guaranteed correct (I only checked the proof sketches, due to limited time and expertise), then the conclusions can be very valuable for the community. Thus, I will keep my rating. | Summary: This paper presents a new method to estimate 3D human pose from 2D observations (lifting). To ensure body symmetry and temporal consistency, the authors disentangle the human skeleton into two parts: temporally consistent bone scales and temporally variable bone rotations. The authors use formal analysis to prove that minimizing the MSE loss cannot guarantee manifold consistency. The quantitative and qualitative results on the Human3.6M and MPI-INF-3DHP datasets show the superiority of the proposed method.
Strengths: 1. The evaluation results in this paper are quite impressive, especially under the newly proposed consistency metric. Figure 1 clearly shows the superiority of the proposed method.
2. The authors try to prove the theoretical optimal of the proposed method, which is worth encouraging.
Weaknesses: I am not an expert in manifold theory, therefore my questions only relate to human pose estimation.
1. How to constrain the rotation space during training?
2. The pose lifting method is quite similar to Anatomy3D (bone length + rotations). Can I view this paper as a multi-hypothesis extension of Anatomy3D? Why?
3. Previous paper "POSE-NDF: MODELING HUMAN POSE MANIFOLDS WITH NEURAL DISTANCE FIELDS" is similar to this paper in concepts. SMPL naturally guarantees bone length symmetry, and the learnable parameters (rotations and shape parameters) are similar to this paper in its functionality. It would be better to cite it.
4. Suppose that, there is a virtual dataset, all 2D human joints are rendered (projected) from strictly symmetric 3D joints, then, could learning the lifting function on this virtual dataset using MSE loss guarantee the results all lie on manifold?
5. (An optional question) The ground truth 3D joints of Human3.6M datasets come from the marker tracking on body surface, which naturally could not guarantee skeleton length consistency. Why learning symmetric bones yields better results (both Anatomy3D and the proposed methods)?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The citation style is weird; it does not follow the NeurIPS format. Please correct it.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and provide hereafter our response to their concern, in the same order.
1. **How to constrain the rotation space during training?**
- _Our method can be adapted to incorporate angle constraints._ This is possible, for example, if one chooses to use rotation representations where angles appear explicitly (axis-angle, Euler angles, …). This is not straightforward when using the 6D representations that we chose in this work, though, where angles are implicit. We chose these representations because there is a bijection between them and rotation matrices, which presents optimization advantages and allows us to avoid training instabilities (cf. [43] On the continuity of rotation representations in neural networks. CVPR 2019.)
- _A possible solution:_ If we chose to predict directions instead of rotations (cf. answer to reviewer fpqy Q2), then angles could easily be made explicit in our representation (using spherical coordinates, for instance), and we could constrain them to stay within a certain interval by using simple sigmoid activations at the end of our rotations network. This idea could be a nice future extension of our work.
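For context, the 6D representation studied in [43] maps any 6-vector whose two 3D halves are linearly independent to a valid rotation matrix via Gram-Schmidt orthonormalization, which is what makes it continuity-friendly for learning. A minimal numpy sketch (illustrative, not the paper's code):

```python
import numpy as np

def rotmat_from_6d(x):
    """Map a 6D vector to a rotation matrix via Gram-Schmidt
    (in the spirit of Zhou et al. 2019, "On the continuity of
    rotation representations in neural networks").

    Assumes the two 3D halves of x are linearly independent.
    """
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)                 # first orthonormal column
    b2 = a2 - np.dot(b1, a2) * b1                # remove the b1 component
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)                        # right-handed third column
    return np.stack([b1, b2, b3], axis=1)

R = rotmat_from_6d(np.array([1.0, 0.2, -0.3, 0.1, 1.0, 0.4]))
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
# → True True: R is always a valid element of SO(3)
```

Because the output is a valid rotation by construction, no orthogonality penalty or projection step is needed during training.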
2. **Is ManiPose a multi-hypothesis extension of Anatomy3D?**
- _Yes, in a sense:_ It is true that, similar to Anatomy3D, ManiPose disentangles limbs length and orientation in order to constrain predicted poses to lie in an estimated manifold. In this sense, we can indeed see it as an extension to the multi-hypothesis setting.
- _But with new theoretical results and a very different message:_ Please note however that, unlike Anatomy3D, we provide theoretical proofs and empirical evidence that both constraints and multiple hypotheses are needed if one wants to optimize joint position error and pose consistency together. This is quite a different message than the one in Anatomy3D paper.
- _And a different way of constraining poses:_ Also note that we use a different representation of limbs orientations than Anatomy3D to constrain our predicted poses (cf. answer to reviewer fpqy Q2). New ablation results in Table 1 of the pdf attached to our main response show that our representation yields better results in the multi-hypothesis setting.
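To make concrete why predicting bone lengths plus orientations keeps poses on the manifold, here is a simplified forward-kinematics sketch (a plain kinematic chain with unit direction vectors; the actual parametrization discussed above uses rotation matrices, so this is only an illustration of the principle): time-invariant bone lengths make skeleton consistency hold by construction.

```python
import numpy as np

def forward_kinematics(bone_lengths, directions, parents):
    """Recover 3D joints from time-invariant bone lengths and per-frame
    unit bone directions (a simplified chain, not the paper's exact model).

    bone_lengths: (J,)      one length per non-root joint, shared across frames
    directions:   (T, J, 3) unit vectors giving each bone's orientation
    parents:      (J,)      parent index per joint (-1 for the root)
    """
    T, J, _ = directions.shape
    joints = np.zeros((T, J, 3))
    for j in range(J):
        if parents[j] >= 0:
            joints[:, j] = joints[:, parents[j]] + bone_lengths[j] * directions[:, j]
    return joints

rng = np.random.default_rng(0)
parents = np.array([-1, 0, 1, 2])            # a 4-joint kinematic chain
lengths = np.array([0.0, 0.5, 0.4, 0.3])
dirs = rng.standard_normal((8, 4, 3))
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
joints = forward_kinematics(lengths, dirs, parents)

# Bone lengths are constant across all frames by construction.
bones = np.linalg.norm(joints[:, 1:] - joints[:, parents[1:]], axis=-1)
print(np.allclose(bones, lengths[1:]))        # → True
```

Direct joint-position regression offers no such guarantee, which is what the paper's consistency metrics (MPSCE/MPSSE) are designed to expose.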
3. **Concern with SMPL-methods and Pose-NDF citation:**
- _We will eagerly cite Pose-NDF._ We know this work and hold it in high regard. We agree with the reviewer that it is related to our work, as it implicitly learns a plausible sub-manifold of $SO(3)^J$ for poses.
- _Please note however that Pose-NDF is part of the “body pose and shape estimation” field, which is related but different from 3D human pose lifting._ The objective of most SMPL-based methods is to predict whole 3D body meshes from images instead of 3D joint positions based on 2D keypoints. The task is more challenging, which means that algorithms are heavier and more reliant on optimization-based post-processing. For instance, Pose-NDF estimates 3D meshes from a single image by initializing pose angles and using gradient descent to project them into the learned pose manifold, which is different from doing a simple forward pass with ManiPose.
4. **Question regarding synthetic data and MSE loss:**
- We can guarantee that if multiple hypotheses _and_ constraints are not used together, then the predictions of the lifting function learned using just the MSE loss on the synthetic data proposed by the reviewer _will not_ lie on the manifold (unless the distributions of 3D poses conditioned on 2D poses are Dirac measures, i.e., unless there is always a single possible 3D pose projecting onto each 2D pose).
- This is proved in Proposition 4.2 and shown (with simplified 2-joint and 3-joint articulated objects) in experiments of sections 4.2 and C.2.
5. **Optional question regarding MoCap:**
- The Human3.6M dataset, like many other datasets, does not contain raw MoCap measurements, but rather post-processed estimations of 3D joint positions obtained through heavy optimization. One of the constraints enforced during such post-processing steps is precisely that limb lengths do not vary over time for a given subject. We verify this for ground-truth poses of the Human3.6M dataset in Figure 5 of our paper (cf. legend).
6. **Citation style**
- We apologize for the citation style and will correct it in our revision to comply with the NeurIPS format.
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal has clarified my concerns carefully. I also notice that the authors added some experiments according to the comments of other reviewers, which makes the evaluation stronger. If the area chairs can guarantee the correctness of the mathematical derivations, I think it would be a good choice to accept this paper. | Summary: This paper proposes ManiPose, a manifold-constrained multi-hypothesis model for 3D human pose lifting. The authors provide theoretical and experimental evidence that joint position regression leads to inconsistent skeleton lengths. They propose to predict a globally consistent pose scale and individual joint rotations per frame (rather than joint positions) to constrain the predictions to the pose manifold. Empirical results demonstrate that the proposed ManiPose framework improves pose consistency.
Strengths: * The paper provides valuable theoretical analysis to support their arguments and provides intuitive toy examples to illustrate the ambiguity in pose lifting.
* The paper conducts extensive experiments on H36M and MPI-INF-3DHP datasets.
Weaknesses: * The paper uses a multi-head design to predict multiple hypotheses. This design loses the flexibility of sampling different numbers of hypotheses and limits the maximum number of hypotheses to a small number. This often results in limited hypothesis diversity. In the experimental section, the authors do not provide numerical or visual measurements of hypothesis diversity.
* According to the comparison in Table 4, the manifold constraint proposed in this paper sacrifices MPJPE to improve pose consistency, serving as a trade-off between accuracy and consistency. Although consistency is improved, the method lags behind traditional position regression or manifold regularization in accuracy, and does not bring an essential improvement (improving both accuracy and consistency) compared with these two methods.
* Missing comparison with two recent multi-hypothesis methods. [1] GFPose: Learning 3D Human Pose Prior with Gradient Fields. [2] DiffPose: Toward More Reliable 3D Pose Estimation.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please review the Weaknesses Section. If the author can address or respond to the above issues well in the rebuttal stage, I will consider increasing my score.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As the authors discussed in the Limitations Section, they used forward kinematics to obtain joint positions, which can lead to error accumulation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and answer here their concerns in the same order.
1. **Diversity and fixed number of hypotheses concern:**
- _SOTA oracle MPJPE is evidence of good diversity:_ It is true that ManiPose produces a fixed number of poses per forward pass. While methods based on generative models (GFPose, DiffPose, etc.) typically control the number of samples drawn at test-time, we argue that in predictive tasks, selecting the right number of samples is important to capture the diversity of the plausible targets, i.e., the modes of the conditional distribution. It was indeed proven in prior works that the winner-takes-all (WTA) training scheme, on which ManiPose is based, can achieve an optimal quantization of the target distribution [A] given sufficient data. This diversity in the predicted poses can be measured through Oracle MPJPE performance, for which ManiPose achieves state-of-the-art results.
- _Few hypotheses with scores are more informative:_ Moreover, note that generative methods assign equal weight/likelihood to all sampled poses for a given input. Since they require sampling a large number of poses (~200) to achieve competitive results, it becomes difficult to practically use such a uniform and high-cardinality output in a real scenario. ManiPose, on the other hand, provides more information to the user by predicting just a few relevant consistent poses with their corresponding scores/likelihoods, which is easier to process.
- _Similar coverage of the 3D pose distribution to DiffPose:_ In an attempt to quantify the diversity of poses predicted by ManiPose by means other than oracle MPJPE, we have computed the coverage (cf. [B]) of generated poses over the ground-truth test distribution. For computation cost reasons (it grows quadratically with sample size), we limited our analysis to 5 actions from subject S11 of Human3.6M. We compare ManiPose to _[2] DiffPose: Toward More Reliable 3D Pose Estimation_, using 5 hypotheses for both, and observe similar diversity on average (cf. Figure 1 in the pdf attached to the main response).
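For reference, a simplified version of the coverage metric of [B] (Naeem et al. 2020) can be sketched as follows (illustrative code; the rebuttal's exact evaluation protocol may differ):

```python
import numpy as np

def coverage(real, fake, k=3):
    """Simplified coverage in the spirit of Naeem et al. 2020: the fraction
    of real samples whose k-NN ball (radius = distance to the k-th nearest
    *real* neighbour) contains at least one generated sample.

    real: (N, D), fake: (M, D)
    """
    d_rr = np.linalg.norm(real[:, None] - real[None, :], axis=-1)
    radii = np.sort(d_rr, axis=1)[:, k]       # column 0 is self-distance 0
    d_rf = np.linalg.norm(real[:, None] - fake[None, :], axis=-1)
    return float((d_rf.min(axis=1) < radii).mean())

rng = np.random.default_rng(0)
real = rng.standard_normal((200, 3))
good_fake = real + 0.01 * rng.standard_normal((200, 3))   # covers the data
bad_fake = real + 10.0                                    # misses it entirely
print(coverage(real, good_fake), coverage(real, bad_fake))  # high (≈1.0) vs 0.0
```

The quadratic cost mentioned in the rebuttal is visible here: both pairwise-distance matrices grow with the product of sample sizes.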
2. **Concern with Table 4 and trade-off between consistency and accuracy:**
- _With a single hypothesis, one has to choose between minimizing MPJPE or pose consistency:_ We agree with the reviewer's reading of Table 4 and would like to highlight that these are precisely the main messages of our work: 1) in a single-hypothesis setting, constraints will necessarily trade MPJPE for consistency (2nd row), and 2) multiple hypotheses (1st row) are needed to reconcile both MPJPE and consistency metrics.
- _Table 4 is hence empirical evidence supporting our Proposition 4.2 and Corollary B.1:_ This result is precisely what makes our approach principled and novel, since there are previous works proposing to constrain predicted poses to estimated manifolds (e.g., Anatomy3D) or proposing to use multiple unconstrained hypotheses (e.g., DiffPose, GFPose, …), but never both together. Please let us know if we misunderstood the reviewer's point.
3. **Concern with missing comparison with two recent multi-hypothesis methods:**
- _We will add GFPose and DiffPose to our experimental results._ We thank the reviewer for pointing us to these excellent recent works. We have used their official code and checkpoints to generate 3D poses over the test split of Human3.6M and computed their pose consistency metrics.
- _GFPose delivers worse pose consistency and joint position error in a comparable setting:_ Indeed, we measured an MPSCE of 16.5 mm and an MPSSE of 13.1 mm, considerably worse than ManiPose (MPSCE=0.5 and MPSSE=0.3). In terms of MPJPE, the paper reports an impressive 35.6 mm, but this score makes use of 200 sampled hypotheses, which is 2 orders of magnitude more than ours. In a fairer setting, when using a similar number of hypotheses for both methods, GFPose stands at 45.1 mm of MPJPE with 10 hypotheses, which is considerably worse than ManiPose (39.1 mm with 5 hypotheses).
- _DiffPose delivers worse pose consistency and competitive joint position error when using their checkpoint and code:_ We computed an MPSCE = 6.1 mm and MPSSE = 5.2 mm for them, which is again inferior to ManiPose. In terms of MPJPE, the authors also report an impressive number of 36.9 mm in their paper for their video model. However, using their official code and checkpoint, and setting the number of hypotheses to 5 as mentioned in the paper, we could only obtain 39.3 mm, which is competitive with ManiPose. Note that there are two other issues on their GitHub page (which we can’t link here) mentioning precisely that the code and checkpoint do not allow one to reproduce their reported results.
[A] Letzelter, V., Perera, D., Rommel, C., Fontaine, M., Essid, S., Richard, G., & Perez, P. Winner-takes-all learners are geometry-aware conditional density estimators. In Forty-first International Conference on Machine Learning.
[B] Naeem, M. F., Oh, S. J., Uh, Y., Choi, Y., & Yoo, J. (2020, November). Reliable fidelity and diversity metrics for generative models. In International Conference on Machine Learning (pp. 7176-7185). PMLR.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. The response addresses some of my concerns. However, I disagree with the authors' statement that "The diversity in the predicted poses can be measured through Oracle MPJPE performance." MPJPE and diversity reflect two different aspects of the generated results: the accuracy of the best-matching pose and the ability to generate other plausible poses. If we only focus on MPJPE, it aligns more closely with the goal of single-hypothesis pose estimation rather than multi-hypothesis pose estimation.
---
Reply to Comment 1.1.1:
Title: Answer to comments of reviewers tj82 and B5U1
Comment: We thank the reviewers for their prompt answers. Reviewer B5U1 is correct in interpreting Figure 6 and in saying that our answer is connected to [A] "Winner-takes-all learners are geometry-aware conditional density estimators". But we understand reviewer tj82's concern. The reason why we say that **oracle MPJPE** is one way of assessing diversity is that it can be written as
$$\frac{1}{N} \sum_{i} \min_k \ell(f_k(x_i), \mathrm{p}\_i),$$
which **is, for large $N$, an approximation of the quantization error**, also known as distortion:
$$\int_{\mathcal{X} \times \mathcal{P}} \min_k \ell(f_k(x), \mathrm{p}) \rho(x, \mathrm{p}) \mathrm{d} x \mathrm{d} \mathrm{p},$$
where $\ell$ is the average joint-wise $L2$ distance in our case. The latter is traditionally used to measure the efficiency of an estimator in summarizing a distribution with few representatives, and is commonly used to study the K-means estimator, for example [C].
So the fact that we achieve better oracle MPJPE than methods requiring a large number of hypotheses (GFPose, D3DP, Wehrbein et al., …) shows that **ManiPose has better quantization properties** than the latter, i.e., it is more efficient in summarizing the diversity of the conditional distribution with fewer representatives/hypotheses.
More practically:
1. the extreme case of a model with no diversity at all (e.g., predicting $K$ times the same pose) would lead to an oracle MPJPE = vanilla MPJPE of its single-hypothesis version. This is not what we obtain in our ablation study of Table 4 (1st vs last rows).
2. In the opposite extreme case, a naive way to obtain $K$ very diverse hypotheses would be to use a regular grid of the pose space $\mathcal{P}=\mathbb{R}^{3 \times J}$. The latter learns nothing and is uninformative, but could still achieve better oracle MPJPE if given an unrealistically large number of hypotheses (cf [A] equation 16).
3. As shown in [A], the winner-takes-all learning scheme, used in ManiPose, allows it to sit between these extreme cases by learning an adaptive “grid” made of a few hypotheses, capturing the geometry of the underlying conditional distribution. Results on its quantization optimality can be found in section 5.2 of [A] for example.
Of course, oracle MPJPE is not the only way of assessing diversity, which is why **we have provided in our rebuttal additional results** measuring the coverage of ManiPose, which corresponds to the ratio of ground-truth poses whose neighborhood contains at least one generated pose. It is a common metric used in the literature on generative models to analyze diversity.
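The two extreme cases listed above can be checked in a few lines (an illustrative 1D toy, not the authors' code): identical hypotheses give an oracle MPJPE equal to the single-hypothesis error, while hypotheses quantizing the modes drive it toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
# Bimodal ground-truth "poses" (1 joint, 1D for clarity): modes at -1 and +1.
gt = rng.choice([-1.0, 1.0], size=10_000)

def oracle_mpjpe(hypotheses, gt):
    # Min-over-hypotheses error per sample, then averaged (oracle MPJPE).
    errs = np.abs(gt[:, None] - hypotheses[None, :])
    return float(errs.min(axis=1).mean())

no_diversity = np.array([0.0, 0.0])    # K identical hypotheses (no diversity)
adaptive = np.array([-1.0, 1.0])       # hypotheses quantizing both modes
print(oracle_mpjpe(no_diversity, gt))  # → 1.0, equal to the single-hypothesis error
print(oracle_mpjpe(adaptive, gt))      # → 0.0, diversity captures both modes
```

A low oracle MPJPE with few hypotheses thus indicates that the hypotheses are placed where the conditional distribution actually has mass, which is the quantization argument made above.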
[C] Pages, Gilles, and Jacques Printems. "Optimal quadratic quantization for numerics: the Gaussian case." (2003). | Rebuttal 1:
Rebuttal: We thank the reviewers for their work. We provide answers to all their concerns individually, referring sometimes to the pdf attached to this general answer.
We would like to highlight that our rebuttal includes:
- a new theoretical result, together with its proof sketch,
- a new ablation study related to our rotations representation,
- the evaluation of two new baseline methods,
- and new evaluations of our method in terms of diversity and consistency of aggregated poses.
Pdf: /pdf/003042c7b705530165d3145d817e9df664327baa.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Measuring Mutual Policy Divergence for Multi-Agent Sequential Exploration | Accept (poster) | Summary: The authors study MARL in heterogeneous settings, where agents are not allowed to share their parameters, and make use of the sequential updating scheme under the CTDE schema. They propose a method which exploits the preceding information to improve exploration and heterogeneity sequentially. This method is equipped with a mutual policy divergence maximization framework, which utilizes the discrepancies between episodes to enhance exploration and between agents to heterogenize agents. Interestingly, the authors propose the conditional Cauchy-Schwarz divergence to provide entropy-guided exploration incentives.
Strengths: - The problem of exploration in settings with heterogeneous agents is important in MARL and not well-explored in literature.
- The paper is the first to study the effectiveness of policy divergence maximization in the sequential updating schema, upon which important related work has been built.
- The paper proposed the conditional Cauchy-Schwarz (CS) divergence as an alternative to the popular KL-divergence in MARL. Such an alternative may be interesting to the broader RL community. Interestingly, unlike KL-divergence, which can explode for small values of the denominator, CS divergence has a provably good lower bound ($-\log(n)$) dependent only on the number of finite actions.
- The proposed method displays good performance, in comparison to strong SOTA methods (including MAPPO, HAPPO), on benchmarks with heterogeneous agents.
- The proposed framework is simple and easy-to-implement.
- The paper is generally well-written and easy-to-follow.
Weaknesses: - The improvement over the baselines (standard KL-divergence, entropy term, no incentive) does not seem to be quite consistent in the ablation study, due to (a) high variance in the results of the no incentive, and (b) very close improvement over the KL divergence baseline in terms of best episodic reward in 2 out of 3 tasks.
- Since the CS divergence is new in MARL and RL, a table containing the running times of the evaluated algorithms is missing. How costly is the CS divergence?
Technical Quality: 2
Clarity: 3
Questions for Authors: - The authors mention: "To the best of our knowledge, there is no exploration method that can adapt to both heterogeneous scenarios with sequential updating and homogeneous scenarios with simultaneous updating". But can the proposed method adapt to homogeneous scenarios with simultaneous updating? No experiments in such settings have been provided. Could the proposed intrinsic rewards be used to improve exploration in MARL settings with homogeneous agents?
- Why do the authors use $\lambda$ and $1 - \lambda$ for weighting the intrinsic rewards, instead of arbitrary weights (not in a convex combination)? How important is it to the performance?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors provide limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments. We address your concerns as follows:
## (1) The improvement over baselines
We have re-evaluated our method against the baselines using an aggregate statistical test. We have quantified the interquartile mean (IQM) across tasks of our method and baselines. Please refer to the author rebuttal and the attached PDF for more details.
In the PDF of the author rebuttal above, Figure 2 (a) and (b) show the performance comparison with other exploration strategies. We can observe that in terms of IQM reward across multiple tasks, the proposed method consistently outperforms other methods in MA-Mujoco, with small overlap. Besides, the CS-divergence shows more stability and has smaller confidence intervals in MA-Mujoco compared to other methods. In the Bi-Dexhands environment, it shows a significant improvement gap in the last 10m steps and achieves the highest best episodic reward.
## (2) The running time comparison
We have compared the running time of our methods and baselines. Please refer to the author rebuttal and the attached PDF for more details.
From Table 1 in the attached PDF in the author rebuttal above, we observe that compared to sequential MARL baselines, MADPO only introduces a negligible extra time cost.
## (3) Adapt to homogeneous tasks
Homogeneous scenarios can be regarded as a special case of heterogeneous scenarios. Thus, yes, our method can be adapted to homogeneous scenarios. Existing methods ensure homogeneous agents by enabling parameter sharing. Under such a setting, the inter-agent divergence maximization is unnecessary, since agents should receive a homogeneous intrinsic reward. Consequently, by setting $\lambda$ to $0$, the Mutual PDM can be adapted to homogeneous tasks with simultaneous updating.
Due to the page limit of the attached PDF, we present here the following table, which shows the aggregate IQM best episodic rewards comparison across $10$ tasks and $5$ random seeds in MA-Mujoco, between different exploration strategies when adapted to MAPPO. Here, we disable parameter sharing. The results show that as an exploration incentive in MAPPO, the Mutual PDM achieves better performance than the entropy, indicating its effectiveness when adapted to homogeneous agents with simultaneous updating.
| Steps| MAPPO (entropy)| MAPPO (Mutual PDM with $\lambda =0$)|
| :---- | :-----------: |:--------:|
| 2e6| $\pmb{1567.87(\pm138.48)}$| $1431.23(\pm89.18)$|
| 4e6| $2139.04(\pm248.98)$| $\pmb{2494.59(\pm201.06)}$|
| 6e6| $2594.42(\pm356.38)$| $\pmb{2954.39(\pm389.27)}$|
| 8e6| $3032.69(\pm387.55)$| $\pmb{3228.39(\pm231.90)}$|
| 1e7| $3577.72(\pm145.59)$| $\pmb{4155.31(\pm283.54)}$|
## (4) Why $\lambda$ and $1-\lambda$ for weighting
We use $\lambda$ and $1-\lambda$ here to balance the two components of mutual policy divergence maximization for the following reasons.
First, since the two types of divergences share the same range of values, a convex combination keeps the intrinsic reward on that same scale without extra rescaling. In different tasks with different requirements for heterogeneity and exploration ability, adjusting the relative weight of the two components is necessary.
Even in tasks that demand both high heterogeneity and strong exploration ability, we can tune the additional parameter $\sigma$ to adjust the scale of the mutual divergence. Therefore, jointly tuning $\lambda$ and $\sigma$ can achieve the desired parameter combinations and is sufficient for MADPO to adapt to different tasks. Compared to arbitrary weights, this parameterization facilitates tuning, since we can use $\sigma$ to control the scale of the mutual divergence and $\lambda$ to weight the two parts.
Additionally, using other weighting methods, such as a non-linear weighted mean or boosting learning, may introduce extra parameters and computational costs. Due to the limited time for rebuttal, we will investigate the performance of different weighting methods in the future.
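As a small illustration of the convex-combination argument (a sketch with hypothetical names, not the paper's implementation; which term $\lambda$ weights is our reading of the rebuttal's $\lambda=0$ homogeneous configuration):

```python
def mutual_pdm_reward(d_inter, d_intra, lam):
    """Convex combination of the two divergence incentives.

    d_inter: inter-agent divergence (heterogeneity incentive)
    d_intra: intra-agent divergence (exploration incentive)
    lam:     weight in [0, 1]; lam = 0 is assumed here to drop the
             inter-agent term, matching the homogeneous setting.
    """
    assert 0.0 <= lam <= 1.0
    return lam * d_inter + (1.0 - lam) * d_intra

# Because the weights sum to 1, the combined reward stays in the same
# value range as the individual divergences -- no extra rescaling needed.
r = mutual_pdm_reward(d_inter=0.4, d_intra=0.8, lam=0.25)
```

With arbitrary (non-convex) weights, the intrinsic reward's scale would drift with the weights themselves, entangling the roles of the weighting and scaling parameters.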
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for addressing my concerns and questions. I increased my rating score.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: Thank you for your response. We are delighted to hear that our rebuttal addressed your concerns well. | Summary: The paper proposes a novel training objective that encourages the policies to diverge from each other and from the previous policy in heterogeneous multi-agent tasks, based on the recently proposed sequential policy update scheme. It utilizes the CS divergence to compute the "distance" between policies for tractable and stable optimization compared to the KL divergence. The evaluation is done in the high-dimensional multi-agent MuJoCo and Bi-DexHands environments, outperforming existing state-of-the-art sequential algorithms.
Strengths: - The paper is well written and easy to understand; Fig. 1 is very informative.
- The problem of exploration under agent heterogeneity is an important problem in multi-agent learning
- The proposed method is sound and is backed by theory
Weaknesses: - From the evaluation it is hard to judge whether the proposed method actually performs better than the baselines; this is a deal breaker. I suggest the authors also incorporate aggregate quantities from https://agarwl.github.io/rliable/
I'm willing to increase the score if the authors show that the improvement is statistically significant
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is it possible to have a "cyclic" problem where the 1st, 3rd, 5th, ... (and also 2nd, 4th, 6th, ...) policies have the same behavior despite optimizing the proposed training objective?
- Can the authors explain why CS is chosen over Jensen–Shannon divergence (JSD)?
- Is there a guideline for tuning the coefficients for the intrinsic rewards?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Minor comments
- line 36, wrong citation format
- line 193 and line 217, Ep. 5 should be Eq. 4
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments. We address your concerns as follows:
## (1) The aggregate evaluation metrics
Thanks for your constructive suggestion. We have re-evaluated our method by using this powerful toolbox ***rliable***. Please refer to the author rebuttal and the attached PDF for more information. Since the environments used in this study do not have a round end score, we choose the aggregate interquartile mean (IQM) across tasks as the evaluation metric. The results in Figure 1 of the PDF demonstrate that, in terms of aggregate IQM, MADPO achieves better performance and has smaller confidence intervals in most tasks. We believe these results are statistically significant.
## (2) A cyclic similarity problem in learned policies
It is an interesting question, and we believe the probability of such a situation occurring is extremely low. In MADPO, agents are encouraged to learn policies different from the preceding ones, yet they are not guided to behave like any particular other agent. We admit that there is a chance that, as you mentioned, one agent could autonomously come to imitate the agent before its predecessor through the divergence incentive. However, the probability is extremely low because (a) there is no direct guidance for imitation, (b) agents are also encouraged to diversify relative to their own previous policies (the intra-agent divergence maximization), and (c) the CS divergence incentive brings further stochasticity to agents by implicitly maximizing the policy entropy.
## (3) Why choose CS divergence over JS divergence
JS divergence fixes the problem that KL divergence may explode in MARL scenarios, since it has constant lower and upper bounds. However, the JS divergence is not the best choice either. The JS divergence between the current policy and a fixed policy is defined as follows,
$$\begin{aligned}
D_{JS}(\pi||\overline{\pi}) &= \frac{1}{2}D_{KL}(\pi||\frac{\pi+\overline{\pi}}{2}) + \frac{1}{2}D_{KL}(\overline{\pi}||\frac{\pi+\overline{\pi}}{2}) \\\\
&=\frac{1}{2}[\mathcal{H}(\pi,\frac{\pi+\overline{\pi}}{2})-\mathcal{H}(\pi) + \mathcal{H}(\overline{\pi},\frac{\pi+\overline{\pi}}{2})-\mathcal{H}(\overline{\pi})].
\end{aligned}$$
It still has the same drawback as KL divergence, i.e., maximizing the divergence implicitly minimizes the policy entropy $\mathcal{H}(\pi)$, which is harmful to exploration.
On the contrary, as indicated in Proposition 1, our method implicitly maximizes the policy entropy when maximizing the divergence, which brings more stochasticity to policies and benefits exploration. Hence, our method is able to provide agents with entropy-guided divergence incentives for exploration.
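The standard KL-based definition of the JS divergence and its cross-entropy expansion can be checked numerically; the following sketch (illustrative only, with toy discrete policies) verifies the identity $D_{KL}(p\|m) = \mathcal{H}(p,m) - \mathcal{H}(p)$ on which the expansion rests:

```python
import numpy as np

def kl(p, q):
    """KL divergence between discrete distributions with full support."""
    return float(np.sum(p * np.log(p / q)))

def cross_entropy(p, q):
    return float(-np.sum(p * np.log(q)))

def entropy(p):
    return cross_entropy(p, p)

pi = np.array([0.7, 0.2, 0.1])       # current policy over 3 actions
pi_bar = np.array([0.2, 0.5, 0.3])   # fixed (previous) policy
m = (pi + pi_bar) / 2                # mixture distribution

# Definition: D_JS = 1/2 KL(pi || m) + 1/2 KL(pi_bar || m)
d_js = 0.5 * kl(pi, m) + 0.5 * kl(pi_bar, m)

# Cross-entropy expansion: 1/2 [H(pi, m) - H(pi) + H(pi_bar, m) - H(pi_bar)]
expansion = 0.5 * (cross_entropy(pi, m) - entropy(pi)
                   + cross_entropy(pi_bar, m) - entropy(pi_bar))

assert abs(d_js - expansion) < 1e-12
# JS divergence (natural log) is bounded: 0 <= D_JS <= log(2)
assert 0.0 <= d_js <= np.log(2)
```

The $-\mathcal{H}(\pi)$ term in the expansion is what makes naively maximizing the JS divergence push the current policy toward low entropy.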
## (4) Guideline for tuning parameters
We have evaluated MADPO under different parameter settings by using the aggregate IQM test of ***rliable***, and thanks again for recommending this method. Please refer to the author rebuttal and the attached PDF above for the experimental results. The results show that our MADPO achieves better performance than HAPPO in a reasonable range of $\sigma$.
Based on the empirical results, here we can give a guideline for tuning $\sigma$ and $\lambda$: (a) set $\sigma$ as commonly used values, such as 1e2 or 5e2; (b) set $\lambda$ based on the task type, but not more than 0.5.
## (5) Minor suggestions
Thanks for pointing out the wrong reference and citation formats, and we have corrected them accordingly.
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for the detailed answers and also for incorporating an improved evaluation metric. I do appreciate the effort the authors took to improve the paper. I'm willing to increase my score from 6 to 7.
One additional question: Can the authors give a quantitative or qualitative analysis on how the behaviors of the policies differ? As we discussed, it is possible that the policies alternate between a few behaviors. If this is the case, future work could use this insight to further improve the algorithm.
---
Rebuttal 2:
Title: Thanks for the response
Comment: Thank you for your response. We are delighted to hear that our rebuttal addressed your concerns well.
To address the issue you mentioned, even though it is unlikely to occur, we would need to quantify the behaviors of each policy or measure the divergence among more than two policies [1]. However, quantifying policy behaviors is challenging due to the difficulty of designing a good behavior representation. Recent works share insights on behavior representation, such as trajectory distributions [2] and state-action pairs [3]. We will investigate efficient representation approaches and choose a more powerful divergence [1] for our work in the future.
[1] Lu Mingfei, et al., "Measuring generalized divergence for multiple distributions with application to deep clustering", Pattern Recognition, 2024.
[2] Dhruva Tirumala, et al., "Behavior Priors for Efficient Reinforcement Learning", JMLR, 2022.
[3] Huang Zhiyu, et al., "Efficient Deep Reinforcement Learning With Imitative Expert Priors for Autonomous Driving", IEEE TNNLS, 2023. | Summary: This paper is situated in the problem setting of heterogeneous cooperative agents, under the sequential update framework. The paper introduces the novel MADPO algorithm, in which agents maximize the Cauchy Schwarz divergence between agents and between episodes of data gathered by the same agent, to improve exploration. Empirical validation is performed on the Multi-Agent Mujoco and Bi-DexHands benchmark suites, demonstrating that the MADPO outperforms baselines.
Strengths: Overall, the paper is clear, succinct, and the main idea is clear and easy to understand. The format, and figures are good, with all expected components included. The idea of maximizing the inter/intra agent divergences is intuitively appealing. Further, the authors address the pitfalls of naively maximizing intra-agent divergences by adopting the Cauchy Schwarz divergence. It's especially nice that maximizing the CS divergence implies maximizing the policy entropy as well. Experiments are done on a large number of tasks, with comparisons against expected baselines and parameter sensitivity analyses all present.
Weaknesses: 1. The motivation of the paper is not altogether clear to me. The paper seems to suggest that exploration is more challenging in the sequential update setting, necessitating devoted algorithms. Why would this be the case?
2. In many of the presented domains, the improvement of MADPO over the next best method is not very large. Sometimes, confidence intervals of MADPO overlap those of the next best method. Can the authors provide statistical significance tests for the main results in Figures 2 and 3, comparing MADPO to the next best method?
3. Some minor suggestions:
- Please check your paper carefully for typos, as there are quite a few:
- Line 89: "connecting link dimension curse"? Not sure what this is
- No period after Figure 4
- Trust interval -> confidence interval
- Lacking 'and' at line 174
- Line 204: conditoned -> conditioned
- Line 216: extra "of"
- Please be sure to state the number of trials in the main text. It is mentioned in the Neurips checklist, but I could not find it in the main text
- Please make the colors of the methods the same for both domains (i.e. pick 1 color for MADPO and be consistent with it)
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How sensitive is the algorithm to the scale of the divergence rewards? Have you done a study on this?
2. On the intra-policy divergence: is the policy updated every episode? If not, then wouldn't the intra-policy divergence reward often be 0?
3. Line 180 states that it would be challenging to define an inter-agent divergence in the simultaneous update scheme. Why not consider the divergence between $\pi^i_k$ and $\pi^j_{k-1}$? But this does not seem any more challenging to compute, and can be computed under CTDE assumptions.
4. Would it be possible to implement this exploration scheme in the CTDE setting? If so, it would be interesting to see how well the method performs.
5. Proposition 2 states that the CS divergence has a lower bound. Does it also have an upper bound?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to express our gratitude for your careful review of our work, as well as for your positive comments and insightful suggestions. We address your concerns as follows:
## (1) Motivation of our work
The sequential updating scheme offers a novel solution to heterogeneous MARL, enabling agents to access information from preceding ones.
Our focus is on designing a sequential exploration strategy within this framework. Simply applying simultaneous exploration approaches, such as entropy or intra-agent divergence, to sequential methods may not fully leverage the available sequential information.
Besides, in heterogeneous tasks, exploration should not only aim for novel state and policy discovery but also enhance heterogeneity. Existing sequential methods often disable parameter sharing to accommodate heterogeneity, which is a passive approach lacking active guidance.
Therefore, in this work, we propose maximizing mutual policy divergence to actively enhance exploration and heterogeneity.
## (2) Statistical significance tests
We believe that results presented with confidence intervals are relatively convincing in terms of statistical significance. To further evaluate the overall performance of MADPO, we utilize ***rliable***, a powerful statistical testing tool recommended by Reviewer 3yBc. We choose the aggregate interquartile mean (IQM) across multiple tasks as the metric for the sample efficiency test. Please refer to the author rebuttal and the attached PDF for more information.
Figure (1) in the attached PDF shows that MADPO outperforms other baselines in terms of sample efficiency with a significant improvement gap in most tasks, and achieves the highest best episodic rewards in all tasks consistently.
## (3) Minor suggestions
Thank you for pointing out the typos in our paper; we have corrected them accordingly. We have also adjusted the figures to highlight our method in red and included all experimental details in the main text.
Regarding the statements in Line 89, we would like to clarify that in MATRPO [1], proposed by Li and He, a communication method using *connecting links* is proposed for sharing information among agents. However, as the number of agents increases, the additional cost of these connecting links cannot be ignored.
## (4) Sensitivity to the scale of divergence reward
In CS divergence, the kernel width $\sigma$ controls the scale of divergence. We conducted parameter sensitivity experiments, as presented in the original manuscript, which demonstrate that MADPO is somewhat sensitive to $\sigma$. Additionally, we performed an IQM test to further investigate this sensitivity in the rebuttal. Please refer to the author rebuttal and the attached PDF for more information.
We can observe from Figure 2 (c) that, even though MADPO is slightly sensitive to $\sigma$, it outperforms HAPPO in a reasonable range of $\sigma$ across several tasks. Note that we show here the aggregate results of the parameter sensitivity experiments, and we can tune $\sigma$ in each task individually for further improvement.
## (5) Policy updating in intra-agent divergence maximization
The policy is updated every episode. The intra-agent policy divergence measures the difference between one agent's current policy and its policy in the last episode. Thus, except for the first episode, as long as the agent's policy keeps improving, the intra-agent policy divergence will not be zero, and the policy is updated according to the divergence.
## (6) Concerns about CTDE
First, we would like to clarify that our MADPO is based on CTDE: it has a global V network and multiple agent policy networks, and thus follows the CTDE setting. The centralized V network generates a value function for training the agent networks, while decentralized agents interact with the environment individually, as indicated in Algorithm 1 of the original manuscript.
Computing the divergence between $\pi_k^i$ and $\pi_{k-1}^j$ is not practically challenging indeed. However, diversifying agents in simultaneous updating methods, such as MAPPO, requires the non-parameter sharing setting, under which MAPPO will lose the monotonic improvement guarantee, even though it may bring better performance [2]. On the contrary, in sequential methods, such as HAPPO and our MADPO, the objective naturally takes the preceding information into account, and maintains the monotonic improvement guarantee in heterogeneous tasks.
Therefore, we made the statement that adapting the inter-agent divergence maximization to simultaneous updating methods is theoretically challenging. Additionally, we are also aware that adapting the proposed model to value-based methods, such as QMIX, is feasible, since it does not have the monotonic improvement issue.
## (7) Upper bound of CS divergence
The CS divergence has an upper bound. According to Proposition 1 in the original manuscript, the upper bound of the CS divergence $D_{CS} (\pi||\overline{\pi})$ is the sum of two 2-order Rényi entropies: $$D_{CS} (\pi||\overline{\pi}) \leq \frac{1}{2}\mathcal{H}_2(\overline{\pi})+\frac{1}{2}\mathcal{H}_2(\pi).$$ Furthermore, given a finite action set $\pmb{A}=\\{a_1,...,a_n\\}$, the entropy $\mathcal{H}_2(\pi)$ attains its upper bound $\log(n)$ when $\pi$ is a uniform distribution. Thus, the CS divergence is upper bounded by $\log(n)$, where $n$ is the number of actions.
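For intuition, here is a small numerical sketch (illustrative only; it uses the standard discrete CS divergence, not the paper's conditional variant) checking nonnegativity and, for this pair of overlapping-support policies, the chain of bounds quoted above:

```python
import numpy as np

def renyi2_entropy(p):
    """2-order Renyi entropy: H2(p) = -log sum_i p_i^2."""
    return -np.log(np.sum(p ** 2))

def cs_divergence(p, q):
    """Cauchy-Schwarz divergence between discrete distributions:
    D_CS = -log( <p, q> / (||p|| ||q||) ), >= 0 by the Cauchy-Schwarz inequality."""
    return -np.log(np.dot(p, q) / np.sqrt(np.dot(p, p) * np.dot(q, q)))

pi = np.array([0.7, 0.2, 0.1])      # current policy over n = 3 actions
pi_bar = np.array([0.2, 0.5, 0.3])  # previous/fixed policy

d_cs = cs_divergence(pi, pi_bar)
bound = 0.5 * renyi2_entropy(pi_bar) + 0.5 * renyi2_entropy(pi)

assert d_cs >= 0.0                         # Cauchy-Schwarz inequality
assert abs(cs_divergence(pi, pi)) < 1e-12  # zero for identical policies
# For this example: D_CS <= 1/2 H2(pi_bar) + 1/2 H2(pi) <= log(n)
assert d_cs <= bound <= np.log(len(pi))
```

The second bound, $\frac{1}{2}\mathcal{H}_2(\overline{\pi})+\frac{1}{2}\mathcal{H}_2(\pi) \leq \log(n)$, holds for any pair of policies, since each Rényi entropy is maximized at $\log(n)$ by the uniform distribution.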
[1] Li, Hepeng, and Haibo He., "Multiagent trust region policy optimization.", IEEE TNNLS 2023.
[2] Kuba, J. G., et al., "Trust Region Policy Optimisation in Multi-Agent Reinforcement Learning.", ICLR 2022.
---
Rebuttal Comment 1.1:
Comment: I have read the common response and the rebuttal to my specific review. Thanks for going over my review in detail. I am satisfied with the rebuttal to most of my points, except for point 6 (see below). The rebuttal underlines my belief that this is a good paper which should be accepted. However, my score already reflects this belief, so I will not change it.
Recent work by [Sun et al. (AAMAS 23)](https://arxiv.org/pdf/2202.00082) shows that even decentralized IPPO maintains the monotonic improvement guarantees by maintaining an approximate trust region, so I disagree that MAPPO w/o PS would lose that guarantee. It would be interesting if the authors could show that their proposed exploration technique works for the simultaneous decision-making approaches as well. However, this is an auxiliary point that is not directly relevant to the acceptance of this paper.
---
Rebuttal 2:
Title: Thanks for the response
Comment: Thank you for your response. We are delighted to hear that our rebuttal addressed your concerns well.
We agree that Sun et al. [1] did great work and offered a monotonic improvement guarantee for decentralized agents in MAPPO and IPPO. They ensured trust region optimization by bounding the independent ratio. However, we would like to highlight that decentralized agents are not the same as heterogeneous agents. In [1], when bounding the ratio, the authors noted the necessity of parameter sharing (please see the statement above Eq. 16 in [1]). Thus, when parameter sharing is switched off, it remains unclear whether the guarantee proposed in [1] holds.
[1] Sun Mingfei, et al., "Trust Region Bounds for Decentralized PPO Under Non-stationarity", AAMAS, 2023. | Summary: This paper introduces a novel multi-agent reinforcement learning (MARL) method called Multi-Agent Divergence Policy Optimization (MADPO), which enhances exploration and heterogeneity through a mutual policy divergence maximization framework. MADPO leverages a sequential updating scheme and quantifies discrepancies between episodes and agents, termed intra-agent divergence and inter-agent divergence, respectively. To address the instability and lack of directionality in traditional divergence measurements, the paper proposes using conditional Cauchy-Schwarz divergence to provide entropy-guided exploration incentives. Experiments demonstrate that the proposed method outperforms state-of-the-art sequential updating approaches in two challenging multi-agent tasks with various heterogeneous scenarios.
Strengths: 1. **Innovation**: The paper introduces MADPO, a novel MARL method that enhances agent exploration and heterogeneity through mutual policy divergence maximization.
2. **Theoretical Foundation**: The use of conditional Cauchy-Schwarz divergence to address instability and directionality in traditional divergence measurements is a contribution.
3. **Experimental Validation**: The experiments conducted on two challenging multi-agent tasks with different heterogeneous scenarios convincingly demonstrate the effectiveness and superiority of MADPO in enhancing exploration and heterogeneity.
Weaknesses: 1. The paper lacks analysis and comparison with relevant literature on sequential decision-making, such as:
- Liu J, Zhong Y, Hu S, et al. Maximum Entropy Heterogeneous-Agent Reinforcement Learning[C]//The Twelfth International Conference on Learning Representations. (This paper extends SAC to heterogeneous sequential decision-making scenarios, and the relationship between this work and the current paper remains unclear.)
2. It is unclear whether the intrinsic reward method proposed in this paper can ensure that the resulting trained policies are consistent with the original policies.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you provide a detailed comparison between your proposed MADPO method and the approach presented in "Maximum Entropy Heterogeneous-Agent Reinforcement Learning" by Liu et al.? Specifically, how does MADPO improve upon or differ from this method in terms of handling heterogeneous sequential decision-making scenarios?
2. I may have missed some details, but could you clarify whether the intrinsic reward method in MADPO ensures that the trained policies remain consistent with those optimized solely based on the original rewards?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations of their work and discussed potential negative societal impacts in accordance with the guidelines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive suggestions. We address your concerns as follows:
## (1) Comparison with HASAC
Liu et al. proposed Heterogeneous Agent SAC (HASAC) by extending maximum entropy RL to heterogeneous MARL [1]. However, we would like to clarify that our MADPO is an on-policy MARL method, while HASAC is off-policy. Thus, HASAC naturally enjoys higher sample efficiency than on-policy sequential updating methods, such as HAPPO, HATRPO and A2PO. However, as an off-policy method, HASAC incurs significant time costs.
In terms of handling heterogeneous tasks, HASAC shares a similar idea with HAPPO, decomposing the joint Q function (the joint advantage function in HAPPO) into individual ones conditioned on other agents' policies. Nevertheless, HASAC lacks an active optimization objective to guide agents toward greater heterogeneity and diversity. In terms of enhancing exploration, HASAC adopts the strategy of SAC by incorporating a policy entropy term into the reward. In contrast, MADPO actively maximizes the stable mutual divergence of policies while implicitly maximizing the policy entropy, providing a stronger incentive for exploration and heterogeneity, as presented in Section 4 of the original manuscript.
To comprehensively compare MADPO with other sequential updating approaches, we conducted overall performance and wall time experiments for HASAC and our method. Please refer to the author rebuttal and the attached PDF for more information. We can observe from Table 1 and Figure 4 in the attached PDF that, although HASAC achieves a performance improvement, its huge time costs cannot be ignored.
Additionally, we recognize that the proposed mutual policy divergence maximization can be adapted to off-policy methods as well, which is our future research direction.
## (2) Consistency between policies trained on MADPO and the original reward
As an intrinsic reward, mutual PDM in MADPO aims to apply incentives when extrinsic rewards do not lead to policy improvements. Therefore, in MADPO, when the episode reward is increasing, the original reward part is dominant, resulting in policies consistent with those trained on extrinsic rewards alone. On the other hand, when the reward struggles to improve, indicating that agents should explore the environment, the intrinsic reward provides guidance that encourages agents to escape from suboptimal policies. In such cases, the generated policy may deviate from the original policy. To tackle this issue, we use the parameter $\sigma$ to control the scale of the intrinsic reward, ensuring consistency between policies trained on the original and proposed rewards.
We have conducted an experimental comparison with the *no incentive* setting. We choose the interquartile mean (IQM) across tasks as the metric. Please refer to Figure 2 of the attached PDF in the author rebuttal above. The results indicate that mutual PDM in MADPO leads to significant policy improvements over the original reward. We have also conducted parameter sensitivity experiments for $\sigma$, revealing that mutual PDM with a reasonable choice of $\sigma$ will not cause performance degradation.
[1] Liu, Jiarong, et al., "Maximum Entropy Heterogeneous-Agent Reinforcement Learning", ICLR 2024. | Rebuttal 1:
Rebuttal: We thank all reviewers for their encouraging comments and constructive feedback. We are glad to note that the reviewers recognized our work as innovative, appealing and easy-to-follow *[sHA3, jcAK, 3yBc, gpQ4]*, theoretically nice and interesting to RL community *[sHA3, jcAK, 3yBc, gpQ4]*, well-organized and well-written *[jcAK, 3yBc, gpQ4]*, and experimentally effective *[sHA3, gpQ4]*.
We report here the additional comparison to address several concerns regarding the experimental results.
We re-evaluate our method with baselines by using ***rliable*** [1], a powerful statistical testing toolkit suggested by Reviewer 3yBc. Since the environments we used in this work do not have a round end score, we choose the aggregate interquartile mean (IQM) sample efficiency test of ***rliable*** for evaluation.
First, we compare the IQM rewards of our method across multiple tasks against other MARL baselines *[jcAK, 3yBc]*, against other exploration incentives *[gpQ4]*, and under different parameter settings *[jcAK, 3yBc]*. Then, we compare our method with the SOTA off-policy sequential method HASAC, in terms of overall performance and running time *[sHA3]*. Lastly, we present the running time results of MADPO and other baselines *[gpQ4]*. All the additional results are included in the attached PDF.
## Aggregate IQM sample efficiency test
The interquartile mean (IQM) computes the mean score of the middle 50% of runs, discarding the bottom and top 25%. Here, we evaluate the performance across multiple tasks, and the total number of runs is $n \times m$, where $n$ is the number of trials per task ($n=5$ in this paper) and $m$ is the number of tasks. The IQM is more robust than the mean and has less bias than the median. The lines in the figures represent the IQM, while the shaded areas indicate the confidence intervals.
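As a concrete illustration (a simplified sketch, not the rliable implementation, and assuming the pooled run count is divisible by four), the IQM over the per-run scores can be computed as:

```python
def iqm(scores):
    """Interquartile mean: discard the bottom and top 25% of runs,
    then average the remaining middle 50%."""
    s = sorted(scores)
    k = len(s) // 4               # runs trimmed from each tail
    middle = s[k:len(s) - k]
    return sum(middle) / len(middle)

# n x m pooled runs, e.g. n = 2 trials over m = 4 tasks:
print(iqm([1, 2, 3, 4, 5, 6, 7, 8]))  # -> 4.5 (mean of 3, 4, 5, 6)
```

Unlike the plain mean, a single outlier run (e.g. one diverged seed) cannot drag the aggregate, which is why the IQM is preferred for small numbers of trials.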
## Aggregate IQM comparison with MARL baselines *[jcAK, 3yBc]*
Figure 1 shows the IQM rewards comparison against baselines across $10$ tasks in Bi-Dexhands and $9$ tasks in MA-Mujoco. The $10$ tasks in Bi-Dexhands and the $9$ tasks in MA-Mujoco include all the tasks used in the original manuscript. The $3$ tasks of MA-Mujoco Ant include *Ant-v2-2x4*, *Ant-v2-4x2*, and *Ant-v2-8x1*. The $3$ tasks of MA-Mujoco Halfcheetah include *Halfcheetah-v2-2x3*, *Halfcheetah-v2-3x2*, and *Halfcheetah-v2-6x1*. The $3$ tasks of MA-Mujoco Walker2d include *Walker2d-v2-2x3*, *Walker2d-v2-3x2*, and *Walker2d-v2-6x1*.
We observe that the proposed MADPO consistently outperforms state-of-the-art MARL methods in terms of best episodic reward across multiple tasks. The results also show that MADPO has higher sample efficiency than other methods and achieves an improvement gap in most tasks.
## Aggregate IQM comparison with other exploration incentives *[gpQ4]*
Figure 2 (a) and (b) show the IQM rewards comparison against other exploration incentives across $10$ tasks in Bi-Dexhands and $10$ tasks in MA-Mujoco. We observe that in MA-Mujoco, the CS-divergence outperforms the other incentives with a small overlap and a narrow confidence interval, indicating better stability than KL-divergence. On the $10$ Bi-Dexhands tasks, Figure 2 (b) shows that MADPO is the only method that keeps improving, with a significant gap, after 10M steps.
## Aggregate IQM comparison under different parameter settings *[jcAK, 3yBc]*
Figure 2 (c) and (d) indicate the parameter sensitivity of MADPO across $10$ tasks in Bi-Dexhands and $10$ tasks in MA-Mujoco. We observe that MADPO is somewhat sensitive to the parameter $\sigma$; however, in terms of IQM, it still outperforms HAPPO over a reasonable range of $\sigma$. Note that $\sigma$ can be tuned individually for each task for further performance improvement. Figure 2 (d) indicates that MADPO is also somewhat sensitive to the parameter $\lambda$, yet it consistently performs better than HAPPO. In conclusion, we suggest that frequently used values are a good starting point when tuning $\sigma$, while tuning $\lambda$ should take the characteristics of the task into account, and $\lambda$ should not exceed 0.5 in most cases.
## The performance and the running time comparison against on-policy baselines and HASAC *[sHA3, gpQ4]*
Here we compare MADPO with the off-policy baseline HASAC on five random seeds over two tasks of Bi-Dexhands and two tasks of MA-Mujoco (due to the limited time), as indicated in Figure 3. We observe that the off-policy HASAC shows a performance improvement over on-policy methods. However, as Table 1 shows, the running time of HASAC is much longer than that of on-policy methods, reflecting a trade-off between time cost and performance. Table 1 also indicates that, compared to the baselines, MADPO introduces only a negligible extra time cost.
[1] Agarwal, Rishabh, et al., "Deep reinforcement learning at the edge of the statistical precipice.", NeurIPS 2021.
Pdf: /pdf/9317d4cb91b92b9cf3aaa8ea940fbc8c43256ed8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
RCDN: Towards Robust Camera-Insensitivity Collaborative Perception via Dynamic Feature-based 3D Neural Modeling | Accept (poster) | Summary: In this paper, the authors propose an essential problem: how to overcome the issues caused by failed camera perspectives while stabilizing high collaborative performance with low calibration cost? The authors present a robust camera-insensitivity collaborative perception method with a novel dynamic feature-based 3D neural modeling mechanism to address the issue. Moreover, to verify the effectiveness of the model, the authors also provide a new large-scale dataset, OPV2V-N, for this field. The experimental results showcase the model’s robustness on the proposed dataset.
Strengths:
1. The paper presents an interesting viewpoint, namely recovering noisy camera perceptual information from other agents’ views by modeling a collaborative neural rendering field representation, in which the model is divided into two stages: a time-invariant static background and a time-varying dynamic foreground.
2. The paper develops a new dataset to fill the gap of the lack of a comprehensive collaborative perception dataset that accounts for different camera noise scenarios.
3. The paper is well-organized and interesting to read.
Weaknesses: 1. From my perspective, the paper lacks theoretical analysis of the proposed method. Moreover, the authors fail to introduce the motivation of each sub-module in the presented model. For example, can the authors showcase the motivation for using NeRF for the static and dynamic fields? Are there any dominant advantages of NeRF, compared to other 3D reconstruction methods, in this method?
2. It is necessary to give a more rigorous mathematical analysis of the equations in this paper. Furthermore, the authors are required to introduce the details of each network, including the training parameters, learning rate, and weight values in eq. 12.
Technical Quality: 2
Clarity: 2
Questions for Authors: see weakness part
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The current work focuses on addressing the camera-insensitivity problem in collaborative perception. It is evident that accurate reconstruction can compensate for the negative impact of noisy camera features on collaborative perception.
Flag For Ethics Review: ['Ethics review needed: Safety and security']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions.
>"W.1: Can the authors showcase the motivation of using Nerf for the static and dynamic fields, are there any dominant advantages of nerf, compared to other 3d reconstruction methods in this method?"
**A.1** The proposed geometry BEV feature avoids the issue of per-scene “network memorization” inherent in NeRF by employing generic feature representations. The decoupling of dynamic and static neural rendering adapts to real-world collaborative autonomous scenarios. We selected NeRF for its photorealistic rendering capabilities. Research such as Mip-NeRF 360 (CVPR 2022) and Zip-NeRF (ICCV 2023) demonstrates that other reconstruction methods, such as Structure from Motion (SfM) and Multi-View Stereo (MVS), are less effective at novel view synthesis than NeRF-based methods. Meanwhile, to better adapt NeRF to the collaborative perception setting, we optimize the collaborative neural field by decoupling it into static and dynamic parts, which improves the segmentation performance of dynamic vehicles by 12.27\% compared to the non-decoupled variant.
>"W.2: It is necessary to give more rigorous mathematic analysis of equations in this paper. The authors are required to introduce the details of each networks, including the training parameters, learning rate, weight values in eq. 12."
**A.2** Thank you for pointing this out. The static and dynamic neural components of our model follow the architecture of Instant-NGP (SIGGRAPH 2022). The architecture for the collaborative perception component is detailed on L566-583 of the supplementary material. The initial learning rate is 5e-4 with an exponential learning rate decay strategy. The weight values in eq. 12 are set to 1.0, 1.0, 0.1, and 1.0, respectively. Further details about the loss function are provided on L632-636 of the supplementary material. We will also include these parameters in the main text for improved clarity.
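A minimal sketch of the optimization setup described above; the exponential decay rate, the per-step form of the schedule, and which loss term receives the 0.1 weight are all assumptions for illustration, not the paper's exact configuration:

```python
def exponential_lr(step: int, total_steps: int,
                   lr_init: float = 5e-4, decay: float = 0.1) -> float:
    # learning rate decays smoothly from lr_init toward lr_init * decay
    # (decay factor is an assumed value for illustration)
    return lr_init * decay ** (step / total_steps)

def total_loss(l_a: float, l_b: float, l_c: float, l_d: float) -> float:
    # eq. 12 weights as reported in the rebuttal: 1.0, 1.0, 0.1, 1.0
    # (the order of the four placeholder terms is assumed)
    weights = (1.0, 1.0, 0.1, 1.0)
    return sum(w * l for w, l in zip(weights, (l_a, l_b, l_c, l_d)))
```

With these defaults the learning rate starts at exactly 5e-4 and ends one order of magnitude lower at the final step.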
>"L.1: The current work focuses on addressing the camera-insensitivity problem in collaborative perception. It is evident that accurate reconstruction can compensate for the negative impact of noisy camera features on collaborative perception."
**A.1** Thanks for your deep understanding of our proposed RCDN. To the best of our knowledge, we are the first to introduce NeRF into the collaborative perception field to handle robust settings. Our proposed RCDN has shown significant improvements in our experiments, particularly in scenarios with multi-source noise.
---
Rebuttal Comment 1.1:
Comment: The authors have processed all my concerns. Thanks!
---
Reply to Comment 1.1.1:
Title: Thanks for your comment !
Comment: Thank you for your thorough review and constructive feedback. We sincerely appreciate your valuable reviews and are glad to know that our rebuttal and new experiments have addressed most of your concerns. | Summary: The paper introduces a new problem: how to overcome the issues caused by failed camera perspectives while stabilizing high collaborative performance with low calibration cost? Therefore, RCDN, a robust camera-insensitivity collaborative perception method with a novel dynamic feature-based 3D neural modeling mechanism, is introduced. To validate the new method, the authors also provide a new dataset: OPV2V-N. RCDN serves as the baseline here. The ablation study shows a significant improvement for 5 models (F-cooper, Att-Fuse, Disco-Net, V2VNet, CoBEVT) over their baselines without RCDN.
Strengths: The paper builds on three pillars: single perception, collaborative perception, and neural rendering. The base idea is novel to the best of my knowledge. The problem formulation is clear, well-grounded, and easy to follow. The system architecture is strong. The authors also focus on the differentiation between static and dynamic scenarios, especially for the neural fields, both based on the BEV volume feature space. This differentiation is very important and is not often discussed in detail. The ablation study, especially Table 5.1, shows very accurately an increase in performance for different tasks: static (lanes, free space) and dynamic perception. The experimental part introduces a new dataset, which is necessary for the investigation.
Weaknesses: The overall system architecture sounds good. However, there are some open points for me; the impact of sections 4.3 and 4.4, i.e., the neural fields part, seems open in terms of clarification. Example: what is the difference between s_fw and s_bw in equation (7)?
The experimental section is a bit too short; I feel it's not finished yet. However, there is limited space. The overall approach is not usable for real-time applications.
Technical Quality: 2
Clarity: 3
Questions for Authors: What is the difference between s_fw and s_bw in equation (7)?
How many message exchange tasks could be used overall (Figure 2)? Will the baseline code be published in combination with the dataset? When?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The most relevant limitation is the missing real-time applicability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. We have carefully addressed all the questions raised. Please find our responses below.
>"W.1: What is the difference between s_fw and s_bw in equation (7)?"
**A.1** Apologies for any confusion regarding the terms s\_fw and s\_bw. The term s\_fw stands for forward scene flow, while s\_bw refers to backward scene flow. Specifically, the forward scene flow (s\_fw) estimates the flow from time t to t+1, whereas the backward scene flow (s\_bw) estimates the flow from time t to t-1. By leveraging both s\_fw and s\_bw, we can achieve a more consistent representation of scene movements.
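The forward/backward flows can be checked for cycle consistency. This small numeric sketch (illustrative names and residual check, not the paper's formulation) warps a point from t to t+1 with s_fw, warps it back with the backward flow sampled at the new location, and measures the residual:

```python
def cycle_residual(x_t, s_fw, backward_flow):
    """Warp a 3D point forward with s_fw (t -> t+1), then back with the
    backward flow sampled at the warped location; a small residual
    indicates temporally consistent scene motion."""
    x_next = [a + b for a, b in zip(x_t, s_fw)]
    x_back = [a + b for a, b in zip(x_next, backward_flow(x_next))]
    return sum((a - b) ** 2 for a, b in zip(x_back, x_t)) ** 0.5

# A backward flow that exactly inverts the forward flow gives residual 0.
res = cycle_residual([0.0, 0.0, 0.0], [1.0, 2.0, 0.5],
                     lambda p: [-1.0, -2.0, -0.5])
print(res)  # -> 0.0
```

In practice such a residual would be minimized as a soft penalty rather than driven exactly to zero, since the two flows are predicted independently.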
>"W.2: The experimental section is a bit too short. I feel it's not finished yet. However, there is limited space."
**A.2** Thanks for your understanding. Due to the limited space available, we aimed to present the key experimental results and analyses as concisely as possible. However, we greatly appreciate your suggestion and agree that providing additional details could enhance the clarity and comprehensiveness of our work. In response to your comment, we will make the following improvements to the experimental section in the final version: i) include a more detailed description of the experimental setup and procedures ii) add additional experimental results and figures (such as detection results) to better support our conclusions.
>"Q.1: How many message exchange tasks could be used overall (Figure 2.)"
**A.1** The proposed RCDN does not add extra message exchanges. As for communication cost, similar to DiscoNet (NeurIPS 2021), we only utilize the RGB labels during the training stage, meaning we shift the communication burden to the training stage and do not introduce extra information during inference.
>"Q.2: Will baseline code be published in combination with the dataset? When?"
**A.2** We will publish the corresponding code and dataset as soon as the paper is accepted.
>"L.1: The most relevant limitation is the missing real-time applicability."
**A.1** We provide the corresponding latency times in the supplementary materials: the proposed static module takes approximately 4.47 ms, the dynamic module takes about 3.94 ms, and each rendering process takes about 21 ms. The training time ranges from 20 to 30 minutes. For more details, please refer to lines L556-583. Currently, RCDN requires further code optimization to meet real-time application requirements.
---
Rebuttal Comment 1.1:
Title: Thank You
Comment: The authors addressed my comments very well. Thank you.
I would recommend addressing W.1/A.1 in the final paper.
Overall, I stay at my decision due to the weaknesses described above.
---
Reply to Comment 1.1.1:
Title: Thanks for your comment !
Comment: Thank you for your feedback and for acknowledging our efforts to address your comments. We appreciate your recommendation regarding W.1/A.1 and will ensure that these points are carefully addressed in the final version of the paper. Your insights have been invaluable in improving our work, and we are committed to refining it further. | Summary: The paper presents RCDN, a method to aggregate multi-sensor perception signals in dynamic environments.
The key idea is to improve the aggregated multi-agent features with a multi-view rendering loss.
At its core, RCDN gathers input streams from multiple agents at varying timesteps. The gathered images are fused into bird's-eye view (BEV) features and then further decoded into a volume.
The volumetric features are decomposed into static-scene and dynamic-scene components with an NGP-based representation.
The overall procedure is supervised with a rendering loss and (cyclic) optical flow consistency.
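The rendering supervision mentioned here can be illustrated with standard NeRF-style volume rendering (a generic sketch, not RCDN's exact renderer): per-sample densities and colors along a ray are alpha-composited into a pixel value, which is compared against the captured RGB.

```python
import math

def render_pixel(densities, colors, delta=1.0):
    """Alpha-composite samples along a ray: each sample contributes its
    color weighted by its alpha and the accumulated transmittance."""
    transmittance, pixel = 1.0, 0.0
    for sigma, c in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * delta)
        pixel += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return pixel

def rendering_loss(pred, target):
    return (pred - target) ** 2  # per-pixel L2 supervision

# A single near-opaque sample reproduces (almost exactly) its own color.
print(round(render_pixel([10.0], [0.8]), 3))  # -> 0.8
```

Optimizing this loss over many rays from multiple agents is what aligns the volumetric features with the observed images.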
The method is evaluated on a new dataset, OPV2V-N, an updated version of OPV2V with additional masks and optical flow.
The results show that RCDN helps BEV segmentation with various backbones, compared to the models used without RCDN.
Strengths: The main benefit of RCDN is that it is fairly easy to apply to different existing feature backbones, as it is a post-processing step built on top of BEV features.
Experimentally, the usage of RCDN significantly improves the segmentations, which implies that the features are better aligned across the noisy signals.
This makes the work a great off-line data augmentation/preparation pipeline for generating BEV segmentation features.
The paper additionally proposes the OPV2V-N dataset, which may be a somewhat valuable addition to the community.
Aside from the technical perspective, the paper is easy to follow and well-written.
Weaknesses: The paper's main weaknesses are twofold.
1. The paper does not evaluate on tasks other than BEV segmentation.
While I believe that the pixel-aligned features from NGP would give benefits on various vision tasks, the paper only demonstrates them on a smaller domain of work, which undermines its actual potential. It would have been more interesting to compare how it impacts different downstream tasks, such as detection/tracking.
2. Technical contribution seems to lack novelty.
The paper is a mix of two known-to-work solutions: BEV feature decoding for segmentation (used with various baselines in the experiments), and NGP (or radiance-field-based) multi-view pixel/density alignment through a rendering loss. Using a rendering loss to improve segmentation maps is well investigated in the NeRF literature (e.g., Semantic-NeRF).
Technical Quality: 3
Clarity: 4
Questions for Authors: These are few questions that I would like the authors to answer in the rebuttal.
1. How realistic is the synthetic OPV2V-N dataset? In other words, how can features learned on the OPV2V-N dataset be transferred to real-world usage? Moreover, are there any real-world quantitative results for a model trained on synthetic data?
2. Have the authors evaluated the method on downstream tasks other than segmentation? How does one verify that the volumetric features are geometrically correct (i.e., how accurate are the geometric BEV features)?
3. How does the BEV segmentation evaluation differ on non-flat surfaces like hills or bridges?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 1
Limitations: No concerning limitations are found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and valuable comments. Please note our top-level comment with additional experimental and theoretical results. Below we address specific questions.
>"W.1: Have the authors evaluated the method on downstream tasks other than segmentation?"
**A.1** Our proposed RCDN is general to different downstream tasks and is not limited to just BEV segmentation. We focus on BEV segmentation due to its crucial role in autonomous driving, with direct applications to other tasks such as layout mapping, action prediction, route planning, and collision avoidance. Additionally, we have validated RCDN on detection tasks. In our experiments, we replaced the original segmentation head with a detection head (see Figure 2 and Table 3 in the attached PDF). Table 3 shows that for CoBEVT, using RCDN improves the AP@0.50 and AP@0.70 metrics by 19.05\% and 24.99\%, respectively.
>"W.2: Technical contribution"
**A.2** Robust perception is a significant challenge in single-agent systems; however, few studies have addressed this issue in the context of collaborative perception. Instead of adding additional modal sensors, such as LiDAR, as done in single-agent systems, our approach leverages the unique multi-view property to mitigate the impact of noisy views. Notably, NeRF (Neural Radiance Fields) achieves photorealistic rendering by optimizing over 2D multi-view images. To the best of our knowledge, we are the first to apply NeRF to the field of collaborative perception to handle robust settings. Meanwhile, to better adapt NeRF to the collaborative perception setting, we propose the geometry BEV features, which improve the PSNR by about 9.30\% (see more details in Appendix B.6) and avoid NeRF network memorization. Additionally, we optimize the collaborative neural field by decoupling it into static and dynamic parts for better performance, which improves the segmentation performance of dynamic vehicles by 12.27\% compared to the non-decoupled approach.
>"Q.1: How real is the synthetic OPV2V-N dataset? In other words, how can features learned in OPV2V-N dataset be translated to real-world usage?"
**A.1** The OPV2V-N dataset is recorded by the co-simulation with SUMO (traffic manager) under the realistic platform CARLA simulator. As for the "How can features learned in OPV2V-N dataset be translated to real-world usage?", this pertains to the broader issue of sim-to-real transfer. Whether the RCDN pre-trained on OPV2V-N can be effectively applied to other datasets or real-world scenarios depends on the domain gap between the BEV (Bird’s Eye View) feature space learned from OPV2V-N and that of other datasets or real-world environments. Recent research, such as the DUSA approach (ACM-MM 2024), addresses sim-to-real adaptation for collaborative perception. DUSA proposes a unified unsupervised BEV feature adaptation module. Since our RCDN operates as a post-processing step built upon BEV features, it is theoretically possible for RCDN to leverage DUSA to bridge the BEV feature space gap between OPV2V-N and real-world scenarios.
>"Q.2: Moreover, are there any real-world quantitative results on model trained on synthetic data?"
**A.2** Please refer to our top-level comment and the attached PDF for further details. In summary, we can employ sim2real BEV feature space adaptation techniques, such as DUSA, to validate models trained on synthetic data in real-world scenarios. However, the current real-world collaborative perception datasets cannot meet RCDN's camera setting demands; we will keep track of real-world dataset development and implement RCDN in real-world settings once the camera settings meet the demands.
>"Q.3: How does one verify that the volumetric features are geometrically correct? (how accurate is the Geometric BEV features?)"
**A.3** Our proposed module only inputs RGB images without introducing additional features such as depth images. Specifically, we supervise the geometry of volumetric features by using multi-view consistency and corresponding downstream tasks. From the experimental results, the visualization results (please kindly refer to Figure 5 in the manuscript Page 9.) demonstrate the high multi-view consistency of the rendered results. The generated BEV features successfully improve the performance of perception tasks in 3D space (i.e., BEV segmentation/detection), which further indicates the geometrical correctness of our BEV feature.
>"Q.4: How is BEV segmentation evaluation differ on non-flat surfaces like hills or bridges?"
**A.4** Thank you for your questions. Currently, the OPV2V-N segmentation labels are available only in 2D image format. As a result, non-flat surfaces such as hills or bridges are projected onto these 2D segmentation images. Consequently, from the perspective of static-part segmentation, our BEV decoder classifies static elements into two categories: road and line. | Summary: The paper proposes a bird's-eye view (BEV) semantic segmentation pipeline for collaborative perception, robust to motion blur, sensor noise, occlusion, and even failure. The authors propose a pipeline that adapts neural rendering techniques to overcome noise/malfunction in camera capture as well as occlusion. With the proposed method combined with prior methods, performance on OPV2V-N (the proposed BEV semantic segmentation dataset) is improved.
Strengths: The paper proposes to apply the neural rendering concept to ‘robust’ collaborative-perception BEV segmentation. It is a natural way of thinking to overcome noise/malfunction in the capture system, but the way the paper adapts neural rendering to BEV segmentation is novel. The performance is verified with the OPV2V-N dataset.
Weaknesses: Evaluation is only performed with the OPV2V-N dataset, which may result in overfitting. More evaluation with different datasets is required. The authors may need to compare methodologies on other datasets, even though the existing datasets do not have noise. The authors could also add random noise to a prior dataset and run experiments.
The manuscript was hard to read and understand, and should be rewritten. The comments below are given without fully understanding the supplemental materials.
- The way the proposed algorithm is combined with prior methods is unclear. The reviewer guessed that the MCP module can be replaced with prior methods, but it is not stated.
- Many abbreviations are not explained sufficiently, and the terminologies the author defined are ambiguous and may be incorrect.
- MCP is short for the multi-agent collaborative perception process, but the paper does not explain the MCP module in detail and gives no reference.
- BEV: no full name, no reference.
- “Camera-insensitivity” can be understood as terminology related to camera sensor sensitivity (how many photons the camera sensor accepts…).
- Robust Camera-Insensitivity: Robust == camera-insensitivity? The latter one may be redundant.
- Line 6, “introduce a new robust camera-insensitivity problem”: can be replaced with “introduce BEV segmentation when the camera captures are unreliable (or noisy)”. It should be more concrete, without ambiguous words.
- Line 19: what does “ported to” mean?
- There are more unclear sentences.
Technical Quality: 3
Clarity: 2
Questions for Authors: .
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful comments. Please note our top-level comment with additional experimental and theoretical results. Below we address specific questions.
>"W.1: Evaluation is only performed with OPV2V-N dataset which may result in overfitting. More evaluation with different dataset is required. "
**A.1** This point was also raised by other reviewers, and we have addressed it in the general response (above). Please refer to our top-level comment and the attached PDF for detailed information. In summary, we i) validated our approach on a newly collected V2XSet-N-mini dataset, demonstrating that the proposed RCDN stabilizes performance under noisy camera conditions; and ii) utilized the OPV2V-N pre-trained RCDN module for direct inference on the V2X-Sim 2.0 dataset, showing that RCDN effectively stabilizes the perception results.
>"W.2: The way the proposed algorithm is combined with prior methods is unclear. The reviewer guessed that the MCP module can be replaced with prior methods, but it is not stated."
**A.2** The MCP module stands for the Multi-agent Collaborative Perception module. Existing state-of-the-art (SoTA) MCP modules share a common pipeline: an encoder-fusion-decoder architecture. To ensure fairness in collaborative perception experiments, different MCP modules use the same encoder-decoder architecture but differ in the fusion process. The fusion process is responsible for the bird's-eye view (BEV) feature aggregation. Therefore, the MCP module can be replaced by simply switching between different BEV feature aggregation processes.
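The swappable-fusion design described above can be sketched as follows; the class and the toy scalar "features" are illustrative, not taken from the released code:

```python
class MCPPipeline:
    """Encoder-fusion-decoder collaborative perception pipeline: keeping
    the encoder and decoder fixed and swapping only the fusion callable
    switches between MCP baselines for a fair comparison."""

    def __init__(self, encode, fuse, decode):
        self.encode, self.fuse, self.decode = encode, fuse, decode

    def __call__(self, agent_inputs):
        bev_feats = [self.encode(x) for x in agent_inputs]  # per-agent BEV features
        fused = self.fuse(bev_feats)                        # baseline-specific aggregation
        return self.decode(fused)                           # e.g. BEV segmentation head

# Toy usage with scalar "features": max-fusion vs. mean-fusion variants.
max_model = MCPPipeline(lambda x: x * 2, max, lambda f: f + 1)
mean_model = MCPPipeline(lambda x: x * 2, lambda fs: sum(fs) / len(fs), lambda f: f + 1)
print(max_model([1, 3, 2]))   # -> 7
print(mean_model([1, 3, 2]))  # -> 5.0
```

Only the middle callable changes between baselines, which is why the comparison across fusion strategies stays fair.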
>"W.3: Many abbreviations are not explained sufficiently and terminologies the author defined are ambiguous and may be incorrect. MCP is short for the multi-agents collaborative perception process but the paper did not explain MCP module in details with no reference"
**A.3** Thank you for pointing out the misuse of abbreviations. We will ensure to double-check and eliminate any ambiguous abbreviations. For the multi-agent collaborative perception (MCP) module, we employ mainstream collaborative methods based on intermediate BEV features for our experiments. The general setting of the MCP module is described in L570-578 supplements. To avoid any ambiguity, we will include additional technical details about the different MCP baseline methods, such as their respective pipelines and feature fusion processes, in the final supplements.
>"W.4: BEV, no full name, no reference"\"There are more unclear sentences."
**A.4** We will reintroduce the concept of bird’s-eye view (BEV) and include relevant references, such as BEVFormer (ECCV 2022). Additionally, we will address the misuse of the terms s\_fw and s\_bw, which refer to scene flow for forward and backward directions, respectively. We will also revise the paper to clarify any unclear sentences.
>"W.5: “Camera-insensitivity” can be understood terminologies related to camera sensor sensitivity (how much the camera sensor accept photon…).
"\"Robust Camera-Insensitivity: Robust == Camera-sensitivity? The latter one may be redundant
Line 6. introduce a new robust camera-insensitivity problem: cam be replaced “introduce BEV segmentation when the camera capture are unreliable (or noisy)?” Should be more concrete without ambiguous words"
**A.5** Thank you for your careful and valuable feedback. Our intent in using the term “robust” was to emphasize the concept, but we will revise the description to avoid redundancy based on your suggestions. We will replace the phrase “introduce a new robust camera-insensitivity problem” with “introduce BEV segmentation when the camera capture is unreliable (or noisy).”
>"W.6: Line19 “Ported to” mean?"
**A.6** Apologies for any confusion regarding the term "Ported to". The "Ported to" means that it is fairly easy to apply to different existing feature backbones, as it is the post-processing step built on top of BEV features.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: When revising the writing, it is recommended below:
1. Assume that the reader is unfamiliar with the topic.
2. Replace ambiguous or subjective words/sentences to more concrete and specific ones.
---
Reply to Comment 1.1.1:
Title: Thanks for your comment !
Comment: Thanks for your positive feedback and valuable insights! In our revised manuscript, we will ensure that the writing is clear and accessible, assuming that the reader may be unfamiliar with the topic. We will also replace any ambiguous or subjective language with more concrete and specific terms. Additionally, we will clearly outline all technical details to ensure a comprehensive understanding for all readers. | Rebuttal 1:
Rebuttal: **Please see the attached PDF for a one-page PDF with a summary of added experimental results.**
We thank all reviewers for their constructive comments on our work. We found one comment that was common amongst more than one reviewer, hence we highlight it here.
>"Have you tested RCDN on any datasets other than OPV2V-N?[qUQv]"\
>"The author may need to compare methodologies on other dataset.[y98a]"\
>"are there any real-world quantitative results on model trained on synthetic data?[YPsN]"
**A1.1** **For other datasets**, to adapt to the multi-view based robust collaborative perception setting, we spent considerable time manually recording the start times, end times, and corresponding car IDs of the existing multi-view overlaps, and generated corresponding masks and flows (more details are shown in Appendix A). Due to time constraints, we did our best to convert a single scene from the V2XSet dataset (ECCV 2022) into the V2XSet-N-mini dataset for our experiments. As shown in Table 1 of the attached PDF, RCDN significantly improves the average performance of drivable area, lane, and dynamic vehicle detection by 74.40\%, 110.69\%, and 201.74\%, respectively, compared to baseline methods without RCDN. Additionally, we conducted open-set inference on the V2X-Sim 2.0 dataset (RA-L 2022), deploying the pre-trained RCDN model from OPV2V-N directly onto the V2X-Sim 2.0 dataset. Figure 1 in the attached PDF demonstrates that RCDN effectively stabilizes the perception results.
**A1.2** **For the real-world datasets**, the existing open-source real-world collaborative perception datasets are DAIR-V2X (CVPR 2022) and V2V4Real (CVPR 2023). i) V2V4Real focuses on two-agent vehicle-to-vehicle collaboration and currently only provides LiDAR data without the camera modality, targeting the downstream task of 3D detection. For our proposed RCDN, we do not introduce the LiDAR modality (as single-agent perception methods do) to deal with the robust perception setting; instead, we want to focus on utilizing a special property of collaborative perception: its multi-view nature. Hence, V2V4Real cannot be utilized for our purposes. ii) DAIR-V2X focuses on the collaboration between one road-side infrastructure unit (equipped with one camera and one LiDAR) and one connected vehicle (equipped with one camera and one LiDAR). During our experiments, we found that the infrastructure cameras are positioned much higher than those on the vehicle, resulting in fewer overlapping views compared to vehicle-to-vehicle collaboration. Due to these limitations, we are also unable to utilize DAIR-V2X. Theoretically, the proposed RCDN compensates for the negative impact of noisy cameras through accurate multi-view reconstruction. The distinction between synthetic and real datasets primarily affects the distribution of RGB images, influencing the later BEV feature space but not conflicting with the fundamental principles of RCDN. Therefore, the effectiveness of RCDN remains unchanged whether synthetic or real datasets are used. RCDN has been thoroughly validated in the synthetic OPV2V-N experiments, demonstrating stable performance under noisy camera conditions. However, we will keep track of real-world dataset development and implement RCDN in real-world settings when the camera setups meet our requirements.
Pdf: /pdf/015ff3f360701b1e1054c02f6224ca48c1139b58.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces RCDN, a novel method for robust camera-insensitivity collaborative perception. This method aims to overcome challenges associated with noisy, obscured, or failed camera perspectives by using dynamic feature-based 3D neural modeling. RCDN constructs collaborative neural rendering field representations to recover failed perceptual messages sent by multiple agents. The proposed system consists of two collaborative field phases: a time-invariant static background field and a time-varying dynamic field. To validate RCDN, a new dataset called OPV2V-N was created. The paper demonstrates that RCDN improves the robustness of baseline methods in extreme camera-insensitivity settings.
Strengths: *Innovative Problem Addressing*: The paper tackles a significant real-world problem of camera insensitivity in multi-agent collaborative perception, which is crucial for autonomous systems.
*Novel Methodology*: The introduction of dynamic feature-based 3D neural modeling and the construction of collaborative neural rendering field representations are innovative approaches.
*Comprehensive Dataset*: The creation of the OPV2V-N dataset, which includes various camera failure scenarios, provides a robust platform for testing and validating the proposed method.
*Performance Improvement*: The extensive experiments and quantitative evaluations show significant improvements in robustness and performance over baseline methods.
*Detailed Evaluation*: The paper includes both quantitative and qualitative evaluations, along with ablation studies, which thoroughly demonstrate the effectiveness of RCDN.
Weaknesses: *Complexity and Computation*: The proposed method involves complex modeling and multiple steps. The author should provide the latency.
*Generalizability*: The performance of RCDN is primarily validated on the OPV2V-N dataset, which may limit the generalizability of the results to other datasets or real-world scenarios.
*Failure Cases*: It would be nice if the authors provided failure cases, which is important.
Technical Quality: 2
Clarity: 3
Questions for Authors: *Dataset Diversity*: Have you tested RCDN on any datasets other than OPV2V-N? How does it perform on real-world data?
*Real-Time Feasibility*: What are the computational requirements of RCDN, and how feasible is it for real-time applications in autonomous systems?
*Scalability*: How well does the method scale with an increasing number of agents and cameras? Are there any performance bottlenecks?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please see weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your time in reviewing our work and your feedback on the paper's value and clarity! Please note our top-level comment with additional experimental and theoretical results. Below we address specific questions.
>"Q.1: Have you tested RCDN on any datasets other than OPV2V-N? How does it perform on real-world data?"
**A.1** This point was also raised by other reviewers, and we have addressed it in our general response (above). Please refer to our top-level comment and the attached PDF for detailed information. In summary, i) we validated our approach on the newly collected V2XSet-N-mini dataset, demonstrating that the proposed RCDN stabilizes performance under noisy camera conditions; ii) we also deployed the OPV2V-N pre-trained RCDN module for direct inference on the V2X-Sim 2.0 dataset, where RCDN continued to effectively stabilize perception results; iii) we will keep track of real-world dataset development, implementing the RCDN in real-world settings when the camera settings meet our requirements.
>"W.1: The performance of RCDN is primarily validated on the OPV2V-N dataset, which may limit the generalizability of the results to other datasets or real-world scenarios."
**A.1** Since the proposed RCDN operates as a post-processing step on top of BEV features, its direct applicability to other datasets or real-world scenarios depends on the domain gap between the BEV feature space learned on OPV2V-N and those of other datasets or the real world. i) For other datasets, we performed open-set validation by directly applying the OPV2V-N pre-trained RCDN module to the V2X-Sim 2.0 dataset. The corresponding results, shown in Figure 1 of the attached PDF, indicate that RCDN effectively stabilizes perception results. This is expected since both V2X-Sim 2.0 and OPV2V-N are recorded using the CARLA simulator, resulting in a minimal domain gap. ii) For real-world scenarios, the recently published DUSA (ACM-MM 2024) identified a significant domain gap between simulated and real-world environments and proposed an unsupervised BEV feature adaptation module to bridge this gap. Therefore, in theory, the proposed RCDN could leverage DUSA to mitigate the BEV feature space gap between OPV2V-N and real-world datasets.
>"Q.2: What are the computational requirements of RCDN, and how feasible is it for real-time applications in autonomous systems?"
**A.2** We provide the corresponding latency times in the supplementary materials: the proposed static module takes approximately 4.47 ms, the dynamic module takes about 3.94 ms, and each rendering process takes about 21 ms. The training time ranges from 20 to 30 minutes. For more details, please refer to lines L556-583. Currently, RCDN requires further code optimization to meet real-time application requirements.
>"W.3: It would be nice if the authors provide failure cases"
**A.3** Based on the assumption of the multi-view setting, the multi-view RCDN may fail when agents are at the edge of the communication range, resulting in minimal view overlap. This limitation motivated our effort to collect the OPV2V-N dataset for our experiments.
>"Q.4: How well does the method scale with an increasing number of agents and cameras? Are there any performance bottlenecks?"
**A.4** Regarding the increasing number of agents and cameras, we validated the impact of adding more cameras using the OPV2V-N dataset (the corresponding scenario types are T-section and midblock, respectively) with the CoBEVT baseline. From Table 2 in the attached PDF, we observe the following: i) with a single overlapping camera view, the proposed method significantly improves baseline performance; and ii) while theoretically more cameras can provide a larger overlap range, the addition of multiple cameras (depending on their positions) may introduce redundant viewing angles, resulting in less significant performance improvements. | null | null | null | null | null | null |
On-Road Object Importance Estimation: A New Dataset and A Model with Multi-Fold Top-Down Guidance | Accept (poster) | Summary: 1. This paper contributes a new large-scale dataset named Traffic Object Importance (TOI) to address the problem of on-road object importance estimation, which utilizes video sequences captured from the driver’s perspective as input.
2. The authors also propose a model that integrates multi-fold top-down guidance with bottom-up features.
Strengths: 1. This paper describes the proposed methodology and the structure of the models in great detail.
2. The scarcity of large-scale publicly available datasets hinders the development of on-road object importance estimation, and the proposed TOI dataset addresses this gap.
3. This paper considers the effect of traffic rules on object importance and successfully models this abstract concept by proposing an adaptive object-lane interaction mechanism.
Weaknesses: 1. On page 3, the authors mention that traffic rules are crucial for object importance and focus on traffic lane rules, but the influence of traffic rules is varied (e.g., signalization). Therefore, in Table 1 on page 4, the authors should provide statistics on the scenario categories of the TOI dataset and the traffic rule constraints within the dataset.
2. On page 6, the authors use three common intention behaviors in driving to reflect the driver intention (i.e., turning left, going straight, and turning right). Since the video clip length is set at 16 frames, it is important to clarify whether each of the three intentions corresponds to individual frames within the 16-frame clip cut during the training and testing phases, or whether multiple intentions are present within the 16 frames. The authors should further elaborate and provide the proportion of each intention in the dataset.
3. Insufficient evaluation metrics in the experimental section. The authors could add another evaluation metric.
4. Section 3 could include a schematic diagram of the annotation process for the dataset.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. I wonder whether the 16 frames are sampled at intervals or continuously, and how many types of intention behaviors can be expressed using 16 frames in the paper.
2. The authors could add another evaluation metric for the experiment.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer mvxo
Thank you very much for your positive comments and we appreciate your thoughtful feedback and suggestions.
> ***Q1:**
In page 3 the author mentions that the traffic rule is crucial for object importance and focus on the traffic line rules, but the influence of traffic rules is varied, such as signalization. Therefore, in page 4 of table 1, the author is able to provide statistics on the scenario categories of TOI dataset and the traffic rule constraints within the dataset in experiment.*
**A1:**
Thanks for this kind suggestion. We have added scenario categories and traffic rules to Table 1 in our revised manuscript.
---
> ***Q2:**
In page 6, the author uses three common intention behaviors in driving to reflect the driver intention (i.e., turning left, going straight, and turning right). Since the video clip length is set at 16 frames, it is important to clarify if each of the three intentions corresponds to individual frames with the 16-frame clip cut during the training and testing phases, or if multiple intentions are present within the 16 frames. I consider whether 16 frames constitute interval sampling or continuous sampling, and how many types of intentional behaviors can be expressed using 16 frames in the paper. The authors should further elaborate and provide the proportion of each intention in the dataset.*
**A2:**
We apologize for any confusion. In both the training and testing phases, the 16 frames input to the model are consecutive images. These 16 frames span 1.6 s, which is a short time; therefore, they share the same kind of intention. Following your suggestion, as shown in Fig. R4 of the rebuttal PDF file, we provide the proportion of each intention in the whole dataset, as well as the statistics of driving intentions in each scene.
---
> ***Q3:**
The author may can add another evaluation metric for the experiment.*
**A3:**
Thank you very much for this suggestion. Accordingly, we add another evaluation metric (i.e., accuracy), as shown in Tab. R1 of the rebuttal PDF file.
---
> ***Q4:**
The section three can include a schematic diagram of the annotation process for the dataset.*
**A4:**
Thank you very much for this suggestion. We illustrate the annotation process, as shown in Fig. R1 of the rebuttal PDF file. | Summary: This paper collects a new large-scale dataset and proposes a novel method that integrates multi-fold top-down guidance with the bottom feature to address the problem of on-road object importance estimation. Specifically, the dataset is almost three times larger than the current publicly dataset for on-road object importance. In addition, this paper considers an adaptive mechanism for object-lane interaction, effectively modeling the impact of traffic rules on object importance. Experiments on several benchmarks validate the effectiveness of the proposed method.
Strengths: This paper makes several key contributions and demonstrates strengths for on-road object importance estimation:
(1) This paper introduces a novel, extensive dataset, set to be released to the public, which is nearly three times the size of the current largest public dataset.
(2) The method is well-motivated and straightforward. It estimates the importance of objects on the road, integrating various top-down guidance factors with bottom-up features, marking the first of its kind.
(3) The proposed method addresses the pivotal role of traffic rules in estimating object importance, an aspect previously overlooked by existing methods. It successfully encapsulates this concept through an innovative, adaptive mechanism for object-lane interaction.
Weaknesses: This paper has also two weaknesses:
(1) The paper does not provide a detailed discussion on the computational efficiency of the proposed method, which is crucial for real driving scenarios. Moreover, it is recommended to compare the model parameters and latency with other methods.
(2) Another concern lies in the practicality of the method. This method and the proposed dataset are both for single-camera scenarios, but in real autonomous driving scenarios, surrounding view is a more widely used type and a safer option. Will the proposed method also work well in the surrounding view?
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Can the proposed method be applied to surrounding view images? I suggest that the authors should consider the application on the current perception pipeline for vision-based autonomous driving pipeline.
(2) I suggest that the authors should analyze the latency of the proposed method, which determines whether the method can be integrated into the practical driving scenarios.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The proposed method only considers the effect of three types of driver intentions on object importance estimation, which is not sufficient for complex driving scenarios. I carefully checked the paper and found no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer Yrjv
Thank you very much for your positive comments and we appreciate your thoughtful feedback and suggestions.
> ***Q1:**
The paper does not provide a detailed discussion on the computational efficiency of the proposed method, which is crucial for real driving scenarios. Moreover, it is recommended to compare the model parameters and latency with other methods.*
**A1:**
According to your suggestion, we computed the latency and parameters of our method and other methods, as reported in Tab. R1 of the rebuttal PDF file. Our model requires a relatively large number of parameters (i.e., 173M) because multi-fold top-down guidances are involved in the model. However, 173M parameters occupy only a fraction of storage space, so the parameter count will not hinder the application of our method. In comparison, the latency of a model largely affects its deployment on practical platforms. We can observe that the latency of our method is only longer than that of Ohn-Bar. We note that Ohn-Bar is an early model using a relatively simple VGG backbone, thus presenting the shortest latency.
---
> ***Q2:**
Another concern lies in the practicality of the method. This method and the proposed dataset are both for single-camera scenarios, but in real autonomous driving scenarios, surrounding view is a more widely used type and a safer option. Will the proposed method also work well in the surrounding view? Can the proposed method be applied to surrounding view images? I suggest that the authors should consider the application on the current perception pipeline for vision-based autonomous driving pipeline.*
**A2:**
Thanks for this insightful comment. Surrounding view images in autonomous driving can be broadly categorized into three types: front view, side view, and back view. Our method might not be applicable to side-view images, since the DISG (Driver Intention and Semantics Guidance) module makes use of the intention guiding mask, which is mainly designed for the front view. Our method should be suitable for back-view images, since they are quite similar to front-view images. Thank you again for this comment, which inspires us to extend the model to all surrounding views in the future.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: Thanks for the rebuttal. Most of my concerns are well-addressed therefore I tend to keep my positive rating. | Summary: This paper presents a novel dataset for on-road object importance estimation. More data about which objects are important for self-driving is included and is promised to be released. Moreover, a novel method that integrates driven intention, semantic context, and traffic rule is devised to tackle the related problem. The paper is well-written.
Strengths: A new dataset is introduced with rich data and labels. The presented method is novel and shown to be effective for the studied problem. Details about the dataset and the method are comprehensive and technically sound. Results are also promising.
Weaknesses: Some of the concepts lack sufficient details to explain. See questions below.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Regarding the task, my major concern is the definition of importance. It is shown that surrounding objects that follow the traffic rules are not considered as important. Only the objects ahead of the car or have an intersection with the ego-car's direction are important. I doubt whether this is strictly appropriate. For example, if a pedestrian walking along the road, he/she will not be considered as important. However, what if this pedestrian suddenly steps into the road ahead, potential collisions would happen. Therefore, I think a nearby walking pedestrian should be considered as important or at least recognized into a third category like "needs care". I wonder how the authors solve this problem in the dataset.
(2) Regarding the driver's intention, it is indeed difficult to define appropriately. The authors have mentioned this in the paper, but the strategy introduced to accommodate this is still not clear to me. The authors mentioned learning the intention values based on driving behaviors, but how do we know the driving behaviors? Are these behaviors (e.g. turning left) already provided in the dataset?
(3) More visualization about the labels and method comparisons are better to be presented for more clarity.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have mentioned limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer 8N7U
Thank you very much for your positive comments and we appreciate your thoughtful feedback and suggestions.
> ***Q1:**
Regarding the task, my major concern is the definition of importance. It is shown that surrounding objects that follow the traffic rules are not considered as important. Only the objects ahead of the car or have an intersection with the ego-car's direction are important. I doubt whether this is strictly appropriate. For example, if a pedestrian walking along the road, he/she will not be considered as important. However, what if this pedestrian suddenly steps into the road ahead, potential collisions would happen. Therefore, I think a nearby walking pedestrian should be considered as important or at least recognized into a third category like "needs care". I wonder how the authors solve this problem in the dataset.*
**A1:**
Our core idea for defining object importance is whether an object affects safe driving. With this core idea in mind, we summarize four types of importance evaluation guidelines, including object attribute, driver intention, traffic semantic context, and traffic rule, as detailed in Lines 50-63 of the original version. Considering that a certain individual guideline might not be strictly applicable to some traffic scenarios, we introduce the double-checking and triple-discussing annotation mechanisms to comprehensively make use of the four mutually-dependent guidelines to handle disputed and complex scenarios. The scenario you mention above is complex. In this scenario, object attribute (e.g., distance and object category) is the dominant guideline: the closer an object is to the ego-car, the more important it is. In addition, pedestrians are a special object category; the behavior of a pedestrian is more random than that of a vehicle, and traffic rules impose fewer constraints on pedestrians than on vehicles. Moreover, the pedestrian is a crucial category in traffic scenes. Under similar conditions, a pedestrian is more important than objects of other categories. Therefore, a nearby walking pedestrian should be annotated as important.
---
> ***Q2:**
Regarding the driver's intention, it is indeed difficult to define appropriately. The authors have mentioned this in the paper, but the strategy introduced to accommodate this is still not clear to me. The authors mentioned learning the intention values based on driving behaviors, but how do we know the driving behaviors? Are these behaviors (e.g. turning left) already provided in the dataset?*
**A2:**
Actually, the annotations of driving behaviors are not provided. However, the KITTI dataset contains detailed IMU/GPS data, and we can determine whether the vehicle is turning left, turning right, or going straight based on the vehicle's lateral angular velocity derived from the IMU data.
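As an illustrative aside, the rule described above can be sketched in a few lines. The threshold value, the "positive yaw rate = turning left" sign convention, and the function name below are hypothetical assumptions for illustration, not details from the paper or from KITTI.

```python
# Hypothetical sketch: labeling a clip's driving intention from IMU yaw rate.
# The 0.05 rad/s threshold and the sign convention are illustrative
# assumptions, not values taken from the paper.
def classify_intention(yaw_rates, threshold=0.05):
    """Label a clip 'left', 'right', or 'straight' from per-frame yaw rates (rad/s)."""
    mean_rate = sum(yaw_rates) / len(yaw_rates)
    if mean_rate > threshold:
        return "left"
    if mean_rate < -threshold:
        return "right"
    return "straight"

# A 16-frame clip with a sustained positive yaw rate.
print(classify_intention([0.12] * 16))  # left
print(classify_intention([0.0] * 16))   # straight
```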
---
Rebuttal Comment 1.1:
Comment: In fact, I did not get much new information from the authors' response to my question 1, though this does not affect my original rating. Overall, this is an interesting paper. I only suggest that the authors present more discussion of the annotation process, not only how the annotations are obtained but also the strengths and weaknesses of the applied annotation policy. Regarding the answer to my question 2, I get the idea, and I also suggest adding more related discussion in the paper. | Summary: This work addresses the issue of estimating the importance of on-road objects using video sequences from a driver’s perspective, a critical task for enhancing driving safety. The authors introduce the Traffic Object Importance (TOI) dataset, which is significantly larger and more diverse than existing datasets, and propose a novel model that integrates multi-fold top-down guidance factors—driver intention, semantic context, and traffic rules—with bottom-up features for more accurate importance estimation. Experimental results demonstrate that the proposed model significantly outperforms state-of-the-art methods in on-road object importance estimation.
Strengths: 1. The introduction of the Traffic Object Importance (TOI) dataset, which is significantly larger and more diverse than existing datasets, provides a robust foundation for training and evaluating models in on-road object importance estimation, thereby addressing a major limitation in the field.
2. The proposed model effectively integrates multi-fold top-down guidance factors—driver intention, semantic context, and traffic rules—with bottom-up features, which showed good performance for the TOI task.
Weaknesses: 1. Lack of description of the annotation details.
How many annotators are involved in the annotation procedure? It would be good if the authors can provide some annotation procedure samples regarding the double-checking annotation mechanism and the triple-discussing annotation mechanism.
2. It seems the annotations will vary according to different traffic rules. Since KITTI was collected in Germany, the annotators should be familiar with German traffic rules. However, the authors did not mention this information in their submission, so the label quality is doubtful.
3. The authors are encouraged to build up the first benchmark based on the proposed dataset by using various existing object detection methods, e.g., Yolo, with the proposed head or simpler head. It is interesting to see how the existing object detectors work on this new task.
4. More statistics of the dataset are encouraged to be given, e.g., the number of important object of different categories, etc.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How many annotators were involved in the annotation procedure for the dataset? Can the authors provide detailed examples of their double-checking and triple-discussing annotation mechanisms?
2. Were the annotators familiar with German traffic rules, given that the dataset was collected in Germany (KITTI dataset)? How was the expertise of the annotators in relation to German traffic laws ensured and validated?
3. Have the authors considered building the first benchmark using their dataset with existing object detection methods, such as YOLO? What were the performance outcomes of these existing methods when applied to the new task?
4. Can the authors provide more detailed statistics about the dataset, such as the number of important objects in different categories? How do these statistics compare to other datasets in the same domain?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: yes the authors mentioned it in the appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer xrYx
Thank you very much for your positive comments on our proposed dataset addressing a major limitation in the field and our model showing the good performance.
> ***Q1:**
Lack of description of the annotation details. How many annotators are involved in the annotation procedure?*
**A1:**
The details of annotating a sequence are shown in Fig. R2 of the rebuttal PDF file. We also illustrate the annotation process in Fig. R1 of the rebuttal PDF file, from which we can observe that the annotation process involves three types of annotators, namely the first, second, and third annotator. Six experienced drivers were recruited as volunteers. It is worth noting that every volunteer is able to take on the role of the first, second, or third annotator.
---
> ***Q2:**
It would be good if the authors can provide some annotation procedure samples regarding the double-checking annotation mechanism and the triple-discussing annotation mechanism.*
**A2:**
To illustrate the double-checking and triple-discussing annotation mechanisms, we provide a schematic of our annotation process in Fig. R1 of the rebuttal PDF file. To better explain the two mechanisms, an example is provided; please see the image (with a van and a cyclist inside) in the box "discuss together" in Fig. R1. The first annotator annotates both the van and the cyclist as important objects, considering that both objects are along the driving path. When the second annotator checks the annotation, he claims that the annotation for the cyclist is reasonable but the annotation for the van is disputed, since the van is far from the driver (though it is along the driving path) and the driver will only pay attention to the cyclist. At this point, the first and second annotators discuss together to convince each other. If they cannot reach an agreement, the triple-discussing mechanism is activated, and the third annotator joins the discussion. The third annotator holds the view that the van is also a potential object that might affect driving safety, so the final annotation is that both the van and the cyclist are important objects.
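For illustration only, the escalation flow described above can be modeled as a small resolution rule. Real annotation involves human discussion rather than automatic voting; the function name and the majority-vote shortcut below are hypothetical simplifications.

```python
# Hypothetical sketch of the double-checking / triple-discussing flow:
# agreement between two annotators settles a label; otherwise a third
# annotator joins and the majority label is kept. (In the actual process
# the annotators discuss to reach consensus rather than simply vote.)
def resolve_label(first, second, third=None):
    if first == second:  # double-checking: the second annotator agrees
        return first
    if third is None:
        raise ValueError("dispute: triple-discussing needs a third annotator")
    votes = [first, second, third]  # triple-discussing: majority label wins
    return max(set(votes), key=votes.count)

print(resolve_label("important", "important"))                 # important
print(resolve_label("important", "unimportant", "important"))  # important
```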
---
> ***Q3:**
It seems this annotation will be varied according to different traffic rules. Since KITTI is collected in Germany, the annotators should be familiar to germany traffic rules. However the authors did not mention this information in their submission, thereby the label quality is doubtful.*
**A3:**
During our annotation process, we focus primarily on universal traffic rules (e.g., vehicles should not drive across solid lane lines). Therefore, country-specific traffic rules have little effect on the annotations. In fact, we find the main difficulty comes from the fact that many objects exist simultaneously in an image. Thus, to guarantee the reliability of the annotations, the first annotator annotates only one object during each pass through the whole sequence; the annotation of a sequence is finished only when all objects have been annotated, as illustrated in Fig. R2 of the rebuttal PDF file. In addition, although multiple annotators perform the annotations in existing datasets, it is not mentioned that the initial annotations are checked by other annotators. In contrast, our double-checking and triple-discussing annotation mechanisms are introduced to further guarantee the quality of our annotations. We hope our explanation alleviates your concern.
---
> ***Q4:**
The authors are encouraged to build up the first benchmark based on the proposed dataset by using various existing object detection methods, e.g., Yolo, with the proposed head or simpler head. It is interesting to see how the existing object detectors work on this new task.*
**A4:**
We have conducted an experiment that uses YOLO to build up a benchmark. To make YOLO suitable for our task, we adjust the output dimension of the final fully connected layer in the YOLO detection head to 2, indicating important and unimportant. The experimental results are reported in Tab. R1 of the rebuttal PDF file.
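As a rough sketch of the head modification described above: the class name, feature dimension, and plain-Python layer below are illustrative assumptions, standing in for the actual framework layer that replaces YOLO's final fully connected layer.

```python
import random

# Hypothetical stand-in for a detection head whose final fully connected
# layer has an output dimension of 2 ("important" vs. "unimportant").
# Plain Python is used instead of a deep-learning framework; the random
# weights are purely illustrative.
class BinaryImportanceHead:
    def __init__(self, in_features, seed=0):
        rng = random.Random(seed)
        self.out_features = 2  # important / unimportant
        self.weight = [[rng.uniform(-0.1, 0.1) for _ in range(in_features)]
                       for _ in range(self.out_features)]
        self.bias = [0.0] * self.out_features

    def __call__(self, features):  # features: one object's feature vector
        return [sum(w * x for w, x in zip(row, features)) + b
                for row, b in zip(self.weight, self.bias)]

head = BinaryImportanceHead(in_features=256)
logits = head([0.5] * 256)
print(len(logits))  # 2: one logit per importance class
```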
---
> ***Q5:**
Can the authors provide more detailed statistics about the dataset, such as the number of important objects in different categories? How do these statistics compare to other datasets in the same domain?*
**A5:**
We present more statistics about the dataset in Fig. R4 of the rebuttal PDF file, including the statistics on different driving intentions and the number of important objects in different object categories. Unfortunately, due to the missing information in other datasets, we could not provide the corresponding information of other datasets.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Thank you for your response. My concerns are mostly solved. I would like to improve my score to 6. | Rebuttal 1:
Rebuttal: # General Response
We thank reviewers for their valuable feedback. We are encouraged by the reviewers’ positive comments on our work. Specifically, they find our model novel (8N7U) and effective (xrYx, 8N7U), our idea well-motivated (Yrjv), our proposed dataset sound (8N7U), our paper detailed (8N7U, mvxo) and well-written (8N7U). Reviewer xrYx remarks that our dataset *"provides a robust foundation for training and evaluating models in on-road object importance estimation, thereby addressing a major limitation in the field"*.
Reviewer Yrjv comments that *“The proposed method addresses the pivotal role of traffic rules in estimating object importance”*.
We will provide detailed responses to reviewers' questions and feedback, hoping to address any confusion and concern.
Pdf: /pdf/0528e8f76c9728346e36341f598681e4f2e13ae2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CIFD: Controlled Information Flow to Enhance Knowledge Distillation | Accept (poster) | Summary: Some existing methods alleviate the capacity gap between teacher and student by introducing Teacher Assistants (TAs), which adds a large number of additional parameters and computational costs. Against this backdrop, this paper proposes to train multiple RDM modules and attach multiple independent classification heads, generating branches with different performance levels to simulate TA models. The authors argue that this hierarchical extraction of teacher knowledge can help alleviate the capacity gap between teacher and student.
Strengths: * Assistants and teachers sharing shallow modules are more efficient in terms of parameter quantity compared to multiple independent Assistants models.
* Although a large number of fully connected layers have been introduced, the proposed method hardly introduces any additional training overhead.
Weaknesses: * I noticed that there is a significant difference in the baseline performance between Table 1 and the original papers, and Table 3 uses only a single assistant for TAKD and DGKD while the authors' proposed method uses three RDM heads, which is not a fair comparison.
* There are significant differences in the value of R across different datasets, R=1 for CIFAR but R=10^4 for ImageNet. This means that the selection of hyperparameters on unfamiliar datasets is challenging, and the parameter tuning process may introduce multiple computational costs, which limits the versatility of the method.
* I noticed that the method proposed by the authors introduces and trains at least $3N$ additional MLP layers, and the cost of the forward passes and loss computation during distillation also increases several times. Yet the training cost is reported to be the same as NormKD, a method based solely on logits distillation with no additional modules (Figure 1 (b)). Can you present the results with specific numerical values? What is the key to introducing so many parameters without introducing additional computational overhead?
Technical Quality: 2
Clarity: 3
Questions for Authors: * The experiments were conducted where there is not much difference between the teacher and student models (a common setting in current KD tasks). They cannot prove that the proposed method alleviates capacity differences, as many works have shown that such settings (e.g., Res34 $\to$ Res18) do not require an additional assistant model.
* Why use the penultimate layer feature of the teacher? Is this from theoretical analysis or empirical summary?
* For this paper, I think it would be better to place Related Works at the front.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have discussed a limitation in Conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort. Detailed comments follow the summary.
**Summary**
- We showed that our proposed method outperforms TAKD and DGKD even with only 1 RDM, ensuring a fair comparison. Moreover, in terms of training cost, our method with 3 RDMs, which performs far better, is also far cheaper (as seen in Fig. 1).
- We clarified that the low cost of our method is due to the relatively cheap computation of RDMs compared to the student/teacher models, the ability to train RDMs in parallel, and training them for fewer epochs. We also extended our numerical training-cost analysis in Appendix C.3 and provided more information.
- We directly addressed the concern on the efficacy of CIFD over a large student-teacher gap. We distilled ResNet18 using ResNet152, ResNet101, and ResNet50 as teachers (Table 13) and showed that the proposed CIFD yields increasing student performance with teacher size. We also analyzed CLIP models (Tables 16, 17) and found that it provides more improvement over the baseline when the teacher-student capacity gap is large.
**Detailed response**
> Difference between TAKD, DGKD and reported numbers
We would like to clarify whether the reviewer is referring to the difference between the TAKD and DGKD performance in Table 1 and the numbers in their original papers. If so, the cause is that both papers used the test set for parameter fine-tuning ([A] and [B]). After rectifying these errors, we obtained the above numbers.
[A] TAKD first author's comments on GitHub: \url{https://github.com/imirzadeh/Teacher-Assistant-Knowledge-Distillation/issues/19#issuecomment-732454350}
[B] DGKD implementation by the first author. Validation function (defined in line 133-137) called in line 143, uses the test set to select the best accuracy across epochs in \url{https://github.com/wonchulSon/DGKD/blob/main/train.py}
> Fairness between TAKD, DGKD, and CIFD
Regarding the fairness of comparing TAKD, DGKD, and CIFD in Table 3: one axis of fairness is training cost, and as seen in Fig. 1, we incur a lower training cost despite using three RDMs while significantly outperforming them. Further, as seen below, even when comparing our 1-RDM results with theirs, our proposed method is better.
```
+----------------------+-------+-------+
| RN34 to RN18 on IN1k | Top-1 | Top-5 |
+----------------------+-------+-------+
| TAKD                 | 71.37 | 90.27 |
| DGKD                 | 71.73 | 90.82 |
| Proposed (1 RDM)     | 72.05 | 90.70 |
| Proposed (3 RDMs)    | 72.32 | 90.88 |
+----------------------+-------+-------+
```
> Computation costs
We would like to clarify that NormKD, DistKD, IPWD, IFD, and KD are indeed slightly cheaper than our method. However, our algorithm sits on the Pareto front of the training-cost versus performance curve.
We use the number of MACs in the forward pass as a measure of computation. Our method stays cheap for three reasons. First, the MACs consumed by the RDMs are insignificant compared to a forward pass of the teacher model during RDM training: one forward pass of the RDM per image is $1.31 \times 10^6$ MACs (RDM for RN34), whereas an RN34 forward pass is $3.7 \times 10^9$ MACs, so the teacher's forward pass is the dominant cost of RDM training. Second, each RDM is trained for only 30 epochs. Third, the RDMs are trained in parallel, amortizing the cost of the teacher's forward pass during training. Together, these reduce the training cost. Below we give the cost computations (also presented in Appendix C.3).
Our method costs 762 PMACs (peta, or $10^{15}$, MACs) in total for distilling RN18 from RN34. First, we compute the RDM training cost. Each RDM is a three-layer fully connected network with a computational cost of $1.31$ MMACs ($10^{6}$ MACs) per image per forward pass. A 30-epoch training run therefore costs $151$ TMACs ($10^{12}$ MACs) in total for three RDMs; the dominant cost of RDM training comes from running the teacher model in inference mode ($142$ PMACs). The rest of the compute comes from the student model training, where the three RDMs, student, and teacher are all used; the RDM forward passes are computationally insignificant at this stage, only $0.38$ PMACs. Finally, we compute the training cost of existing methods such as NormKD, DistKD, IPWD, IFD, and KD as $704$ PMACs, using the cost of the student and teacher models.
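The headline RDM-training numbers above can be reproduced with back-of-the-envelope arithmetic. A sketch, assuming an ImageNet-1k training set of roughly 1.28M images (our assumption; the response does not state the image count):

```python
# Back-of-the-envelope reproduction of the RDM training-cost arithmetic.
N_IMAGES = 1.28e6   # assumed ImageNet-1k training-set size
EPOCHS_RDM = 30     # RDM training epochs (from the response)
MACS_RDM = 1.31e6   # one RDM forward pass per image, in MACs
MACS_RN34 = 3.7e9   # one ResNet-34 forward pass per image, in MACs

# Three RDMs trained for 30 epochs: the RDM forward passes themselves.
rdm_cost = 3 * MACS_RDM * N_IMAGES * EPOCHS_RDM       # ~151 TMACs

# The dominant cost of RDM training: the shared teacher forward passes.
teacher_cost = MACS_RN34 * N_IMAGES * EPOCHS_RDM      # ~142 PMACs

print(f"RDM forwards: {rdm_cost / 1e12:.0f} TMACs")
print(f"Teacher forwards: {teacher_cost / 1e15:.0f} PMACs")
```

Both values match the figures quoted in the response ($151$ TMACs and $142$ PMACs) under the assumed dataset size.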
> Larger teacher experiments
We distilled RN18 from RN152, RN101, and RN50 teachers. DistKD [5] showed that ResNet152 and ResNet101 to ResNet18 distillation faces the issue of a large capacity gap (Table 3 of DistKD). The parameter ratio from the largest teacher to the student is 5.12. Experimental results are in Table 13. First, we observed that student performance increased with teacher size. Second, our results outperformed those of DistKD. This directly addresses the reviewer's concern about a large teacher-student capacity gap.
Additionally, we also compare CLIP models. Due to resource constraints, we compared the improvement provided by the proposed CIFD over the nearest baseline, CLIPKD [31]. Specifically, we compared the difference when a ViT-L-14 teacher was used to train ViT-B-16 and ViT-S-16 students. The parameter ratios are 2.8 and 6.9, respectively. Results are in Tables 16 and 17. CIFD shows greater improvement over the baseline when the teacher-student ratio is large for 3 of the 5 zero-shot classification datasets and almost always for the two zero-shot retrieval datasets. The capacity gap in CLIP-like models is much larger than in traditional KD settings such as RN34-18 and RN50-MobileNet V1.
> Layer for teacher embedding
This is based on the idea that the penultimate layer output is usually considered the embedding of the network. Additionally, the embedding holds all the information important for classification, and we can easily remove that information using our RDM.
> Related work position
Thank you for the feedback; we will absolutely move it.
---
Rebuttal 2:
Comment: Thank you for providing a detailed response that resolved some of my doubts. The range of uncertainty in hyperparameter R is still one of my concerns, but experiments based on CLIP are indeed very distinctive. Based on the opinions of other reviewers and my understanding, I will increase the score to 5.
---
Rebuttal 3:
Title: Thank you!
Comment: Thank you for your time and effort, we greatly appreciate your response. We also wanted to acknowledge your inputs in helping us improve the paper, thank you.
We apologize for missing the point on the hyper-parameter R. From our analysis, we found that the values of R are inversely proportional to the capacity gap between the teacher and the student. Since $R^{-1}$ controls the weight of the bottleneck rate, too small an $R$ and the RDM will focus more on compression than retaining features crucial for accuracy, resulting in a poor RDM. Since the RN34 to RN18 gap is small (both in size and performance), smaller $R$ values were more appropriate and hence the selection. We plan to add this insight to our paper in the final version.
If you have any further questions or concerns to improve our paper, please let us know and we are happy to discuss. Thank you again. | Summary: Inspired by Shannon’s rate-distortion theory, this paper proposes two modules, namely the Rate-Distortion Module and the Information Bottleneck Module, to construct intermediate representations for knowledge distillation. Extensive experiments on various datasets demonstrate the effectiveness of this method.
Strengths: 1. This paper is well-presented and easy to understand.
2. This method not only works for traditional CNN networks but also performs well on modern CLIP models.
3. Extensive experiments demonstrate the effectiveness of this method.
Weaknesses: 1. The author's motivation and explanation for TA distillation are not very convincing. In my view, the RDM and IBM proposed in this work can be interpreted as two adapters connected to the teacher and the student respectively for distillation, and the principle is similar to the FT method.
2. In Table 2, 9 and 10, most of the compared methods were published in 2022 or before. The authors are encouraged to compare your method with recent state-of-the-art methods such as MLKD[1], CTKD[2], and LSKD[3].
3. In Eq. 5, I am a little confused about the author's formula representation. Generally speaking, the left side of the comma is the network to be trained, and the right side is the learning target. But the author seems to have it reversed here.
References:
[1]. Multi-Level Logit Distillation. CVPR 23.
[2]. Curriculum Temperature for Knowledge Distillation. AAAI 23.
[3]. Logit Standardization in Knowledge Distillation. CVPR 24.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. $q(\hat{Y})$ is not clearly marked in Fig. 2. Which part of the network produces it?
2. It would be more beneficial if the author could add a figure on how to perform CIFD distillation on the CLIP model.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort. Detailed comments follow the summary.
**Summary**
- By using one RDM without IBM, we showed that the key to our superior performance compared to Factor Transfer (FT) [28] is the principled loss function used to train the RDM. This is in addition to the multiple RDMs and IBM which have no equivalents in FT.
- We showed superior performance against newer works like MLKD, CTKD, and LSKD.
**Detailed response**
> Motivation and comparison with FT [28]
Before we respond, let us quickly summarize the work of FT [28]. Instead of using the teacher's logits for KD, FT proposed using the teacher's embeddings, transferring knowledge through a paraphraser network with a convolutional autoencoder architecture. The paraphraser encoder outputs a vector whose dimension is $k$ times that of its input, and the decoder, used to reconstruct the encoder's input, is discarded after training the encoder. Kim et al. explored both dimensionality reduction ($k=0.5$) and expansion ($k=4$), settling on $k=0.5$ as the best performing. During student training, they use a translator network, similar to the paraphraser's encoder, tasked with mapping the student embeddings to the output domain of the paraphraser encoder. Below, we highlight the main difference between FT and our method and explain why this difference is crucial for better performance.
While both the paraphraser and our RDM aim to perform compression, the mechanisms used to train the compression modules differ. Kim et al. train the paraphraser like an autoencoder; paraphraser training limits information through dimensionality reduction (when $k<1$). We train our network with the rate-distortion criterion, the mathematically principled way of performing lossy compression. The crucial difference is that the RDM limits the rate of information flow by minimizing the mutual information between the latent representation ($\hat{Y}$ in Fig. 2(a)) and the input ($X$), in addition to the reconstruction error (compare the loss function in our eqn. (7) with FT's eqn. (1)). This training scheme makes a significant difference in performance, as shown in Table 15. There, we compare our method using only one RDM against FT, and our method is superior despite removing all our other innovations, including multiple RDMs and the IBM. This shows that the principled training of the RDM plays a significant role in improving the performance of the student network. As a further comparison, we also consider IFD [20], which uses an ensemble of three paraphraser networks; our method with 3 RDMs and no IBM still outperforms it. This shows that relying only on diverse initialization of paraphrasers does not yield benefits, whereas training RDMs in a principled manner to focus on different levels of information (fine to coarse) helps train the student better.
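To make the contrast concrete, here is a minimal numpy sketch (our own illustration, not the paper's code) of the two training criteria. The linear encoder/decoder and the Gaussian differential-entropy rate proxy are toy assumptions; the actual RDM uses a learned entropy model (Balle et al., 2017):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 64))                  # toy stand-in for teacher embeddings
W_enc = rng.normal(scale=0.1, size=(64, 16))    # toy linear "encoder"
W_dec = rng.normal(scale=0.1, size=(16, 64))    # toy linear decoder

Y_hat = X @ W_enc                               # latent representation \hat{Y}
X_rec = Y_hat @ W_dec                           # reconstruction of X

distortion = np.mean((X - X_rec) ** 2)          # reconstruction term, shared by both

# Crude rate proxy: sum of per-dimension Gaussian differential entropies,
# 0.5 * log(2*pi*e*var); a stand-in for the learned bound on I(X; \hat{Y}).
var = Y_hat.var(axis=0)
rate = float(np.sum(0.5 * np.log(2 * np.pi * np.e * var)))

R_inv = 1.0                                     # 1/R, the bottleneck-rate weight
ft_style_loss = distortion                      # FT paraphraser: reconstruction only
rd_style_loss = distortion + R_inv * rate       # rate-distortion criterion
```

The autoencoder-style loss penalizes only reconstruction error; the rate-distortion-style loss adds an explicit rate penalty on the latent, which is the difference the response attributes the performance gain to.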
To summarize our differences w.r.t. FT:
1. We propose a principled way of compressing the features of the teacher model using insights from rate-distortion theory. This principled training loss leads to performance gains even in a standalone manner (71.83 > 71.43).
2. We propose multiple RDMs to help the student network learn at different compression ratios. This has no equivalent in Kim et al.
3. We propose the use of an IBM in the student network to control the information flow in the student during training. This has no equivalent in Kim et al.
While at a certain level of abstraction RDMs can be regarded as adapters, the key is the loss function used to train them. As the new results show, this principled loss function is critical for good performance.
> Comparing with MLKD, CTKD, and LSKD
Thank you for pointing them out. Our performance is superior to MLKD, CTKD, and LSKD, as seen in Table 14.
> How is $q(\hat{Y})$ computed?
$q(\hat{Y})$ is not produced by a network. Instead it is produced using a non-parametric distribution approximation process proposed in \cite{balle2017end}. The method also computes an upper-bound on the entropy as detailed in \cite{balle2017end}.
> W3 and Q2
Thank you for the inputs, we will rectify them.
---
Rebuttal Comment 1.1:
Title: Thank you and summary of changes
Comment: We wanted to thank the reviewer again for their insightful questions that have helped us improve the paper. We wanted to quickly summarize our previous response.
- By using one RDM without IBM, we showed that the key to our superior performance compared to Factor Transfer (FT) [28] is the principled loss function used to train the RDM. This is in addition to the multiple RDMs and IBM which have no equivalents in FT. This sets our work apart from FT.
- We showed superior performance against newer works like MLKD, CTKD, and LSKD which the reviewer requested (Table 14).
Additionally, based on other reviewers' questions
- We showed that the proposed CIFD scales well with an increasing student-teacher gap. In Table 13, we showed that when distilling from ResNet152 to ResNet18, CIFD provides a 0.36% improvement in top-1 ImageNet accuracy over the nearest competitor, DistKD. We also showed that CIFD monotonically improves student accuracy with teacher size, something not observed with DistKD.
- We also compared with other works like Masked Generative Distillation and Diffusion KD, and showed superior performance against both of them.
- We showed that CIFD works well for CLIP-like models, where the teacher-student gaps are larger than for ResNets (Tables 16 and 17). By outperforming CLIP-specific distillation methods, we also demonstrated the generality of the proposed idea.
We are very grateful for the time and effort the reviewer devoted to providing a review that has helped improve our paper. We are happy to engage further if they have any questions. Thank you again. | Summary: The paper presents a new distillation method, CIFD, designed based on *Shannon’s Rate-Distortion theory* and the *Information Bottleneck Principle (IBP)*. CIFD contains Rate-Distortion Modules (RDMs) for the teacher, substituting for heavy Teacher Assistants (TAs), and an Information Bottleneck Module (IBM) for the student to mimic the features from several RDMs. Experiments demonstrate the effectiveness of the method.
Strengths: 1. The paper is organized well.
2. The experiments on CLIPs are good, verifying the broader effectiveness of the method.
Weaknesses: My main concerns are from three aspects: **i) the story of the paper; ii) the reason why CIFD works; iii) insufficient experiments and comparisons.** Some concerns are mixed among the three aspects. And I will list them one by one.
1. **Insufficient experiments on verifying the basic settings of the paper.**
The story starts with *"When the teacher model is significantly larger than the student, previous works that utilize TAs induce high training costs."* I therefore take the basic setting of this work to be teacher-student network pairs with large differences in parameter scale. From this point of view, the paper should contain more systematic experiments to verify efficacy under this setting. Specifically, CIFD should be compared with previous methods on ImageNet using teacher-student pairs with large parameter-scale differences, not just the traditional ResNet-34 -> ResNet-18 and ResNet-50 -> MobileNet-V1.
2. **The trade-off between the story and the empirical solutions.** In my opinion, the paper is a little overdecorated and overclaimed. The authors invoke concepts such as *Shannon’s Rate-Distortion theory* and the *Information Bottleneck Principle (IBP)*, and claim **"This is the first application of Shannon’s Rate-Distortion theory to aid knowledge distillation"**. I do not mean that this statement is misleading. But if we go deeper into the design, the reason the method works may come from the **noise-adding and noise-removing process**. Many previous works have verified that this process can benefit learning in computer vision, such as MIM and diffusion models, where it has become an established empirical solution. In KD, there also exist distillation methods following MIM and diffusion models, such as MGD and DiffKD. From this point of view, the authors should not only claim ***"the first"***, but make a deeper analysis of related methods and detailed comparisons. ***I strongly encourage the authors to strike a good balance between the story and the verified empirical solutions.*** Even though the result may seem less novel than this version, it would provide readers with more useful knowledge and insights.
3. **The design may alter the network architecture of the student.** It seems that the IBM module would also be included in the validation stage. If my judgment is true, the added module (though lightweight) would also benefit the performance. Under such circumstances, the comparisons with previous methods, especially for light-weight models, are unfair.
Technical Quality: 2
Clarity: 2
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations are included in the main paper and the Appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort in providing feedback. Detailed response after summary.
**Summary**
- We directly addressed the concern on the efficacy of CIFD over a large student-teacher gap. We distilled RN18 using RN152, RN101, and RN50 as teachers (Table 13) and showed that the proposed CIFD yields increasing student performance with teacher size. We also analyzed CLIP models (Tables 16, 17) and found that it provides more improvement over the baseline when the teacher-student capacity gap is large.
- We extended our discussion in Appendix C.1 to showcase that the Masked-Image-Modeling (MIM) objective is an approximation of an upper bound on the IB objective, whereas ours is a direct approximation of the IB objective. We compared and showed superior performance compared to Masked Generative Distillation (MGD).
- We compared with DiffKD and showed our performance is superior. Further, it takes two methods, DistKD and DiffKD to achieve similar performance.
- We clarify that we did not modify the student architecture to ensure fairness
**Details**
> Large teacher distillation experiments
Using CIFD, we distilled RN18 from RN152, RN101, and RN50 teachers. DistKD [5] showed that RN152 and RN101 to RN18 distillation faces a large capacity gap (Table 3 of DistKD). The parameter ratio from the largest teacher to the student is 5.12. Experimental results are in Table 13. We observed that student performance increased consistently with teacher size and also outperformed DistKD. This directly addresses the reviewer's concern about a large teacher-student capacity gap.
Additionally, we also compare CLIP models. Due to resource constraints, we compared the improvement provided by the proposed CIFD over the nearest baseline, CLIPKD [31]. Specifically, we compared the difference when a ViT-L-14 teacher was used to train ViT-B-16 and ViT-S-16 students. The parameter ratios are 2.8 and 6.9, respectively. Results are in Tables 16 and 17. CIFD shows greater improvement over the baseline when the teacher-student ratio is large for 3 of the 5 zero-shot classification datasets and almost always for the two zero-shot retrieval datasets. The capacity gap in CLIP-like models is much larger than in traditional KD settings such as RN34-18 and RN50-MNV1.
> Connections of IBP to MIM and DiffKD
We call the modules RDMs because they capture different levels of information depending on the rate constraint. Given existing work on learning-based data compression [9, 34, 35], which explicitly optimizes a rate-distortion objective (as we do), we felt it appropriate to use this name.
We were aware of the similarity between MIM and the IBP, which is why we included Appendix C.1 in our submission. Building on the insights in Appendix C.1, we show that the MIM and MGD objectives are upper bounds on the IBP objective. Consider MIM as shown in Fig. 7a. The information bottleneck objective is $\min -I(X;\hat{U}) + \lambda I(T;\hat{U})$. The first term is usually approximated as a reconstruction error between $X$ and $\hat{U}$. For the second term, we can write $I(T;\hat{U}) \leq H(T)$, where $H$ denotes entropy. Since $H(T)$ does not depend on the neural network parameters, it can be dropped. This yields the MIM objective, $\min -I(X;\hat{U})$, i.e., minimize the reconstruction error. A similar simplification also holds for MGD. So both MIM and MGD minimize an upper bound on the IBP objective obtained by dropping the second term. In our case, we compute an approximation of the second term as well, making our objective tighter to the original IBP objective. The IBP works better because it forces the student to focus on the features necessary to predict the teacher embedding (first term) and to remove information not useful for that prediction (second term), whereas MIM and MGD lack the information-removal step. Recent work studying how the IBP relates to generalization error ("How does information bottleneck help deep learning?", ICML 23) provides further support for the proposed idea. Finally, our method outperforms MGD [Yang et al., 22], as seen in Table 14.
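As a compact restatement of this argument (using the same symbols as in the paragraph above), the bound can be written as:

```latex
% IB objective, with the bound I(T;\hat{U}) \le H(T):
\min_{\theta}\; -I(X;\hat{U}) + \lambda\, I(T;\hat{U})
\;\le\;
\min_{\theta}\; -I(X;\hat{U}) + \lambda\, H(T)
% Since H(T) does not depend on the network parameters \theta,
% the constant \lambda H(T) can be dropped, leaving
\;\equiv\;
\min_{\theta}\; -I(X;\hat{U})
\quad \text{(the MIM/MGD objective: minimize reconstruction error).}
```

Minimizing the right-hand side, as MIM and MGD do, thus minimizes an upper bound on the IB objective rather than the objective itself.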
We cited DiffKD in the related work. Based on our understanding (Fig. 3 in DiffKD), it appears that extra modules are added during inference, which changes the student architecture and is not a fair comparison. For completeness, however, we compare against it in Table 14. The proposed CIFD outperforms DiffKD except in top-1 accuracy for RN50 to MNV1 distillation. It takes the combination of two works, DiffKD [25] and DistKD [5], to obtain better top-1 performance than CIFD in both RN34 to RN18 and RN50 to MNV1; our top-5 accuracy is better in both cases, indicating the competitiveness of the proposed CIFD.
We respectfully disagree with the characterization that CIFD is just a noise addition and removal process and thus similar to MGD and DiffKD (the latter only removes noise). **The key differentiators are the principled loss functions used to train the models in the presence of noise.** If we were simply adding and removing noise from the student's embeddings, the result would be a simple, one-stage procedure similar to MGD/DiffKD. The central theme of the paper is controlling information flow during distillation, and one of the best tools to accomplish this is adding noise. Further, the **task accomplished by the noise in the teacher (mimicking TAs) is different from that in the student model.** Finally, the effect of all this is seen in the **performance gains obtained by our proposed method.**
> Student architecture changed?
We do not modify the student architecture. We designate a layer of the unmodified student model as the bottleneck (usually the penultimate layer). The layers preceding the bottleneck act as the IB encoder, and the layer(s) following it act as the decoder. Noise is added to the output of the bottleneck layer during training, and noise addition is disabled during inference.
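A minimal sketch of this train/inference asymmetry (our hypothetical code, not the authors'; the noise scale `sigma` is an assumed hyperparameter):

```python
import numpy as np

def bottleneck_forward(x, sigma=0.1, training=True, rng=None):
    """Output of the designated bottleneck layer: noise only during training."""
    if training:
        rng = rng if rng is not None else np.random.default_rng()
        return x + rng.normal(scale=sigma, size=x.shape)
    return x  # inference: the unmodified student architecture

x = np.ones((2, 8))
train_out = bottleneck_forward(x, training=True, rng=np.random.default_rng(0))
eval_out = bottleneck_forward(x, training=False)
```

At inference the bottleneck is an identity over the existing layer's output, so no extra module is carried by the deployed student.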
---
Rebuttal 2:
Title: Post Rebuttal comments by Reviewer sfSd
Comment: Thanks for your detailed response, which addresses my partial concerns.
Here are my remaining concerns.
I personally believe that lacking deep discussions about related techniques in the main paper is inappropriate. I encourage the author to add this content in the main paper, not just in the Appendix.
Although the motivations seem novel, the main improvements come from the multiple RDMs. Similar distillation designs have been explored in [1-3]. On the other hand, reading the paper and the authors' response, I find no obvious technical flaws. Therefore, I am a little confused about whether the effectiveness comes from the theoretical framework the authors claim.
As I think the above arguments are not strong enough to reject the paper, I would consider my final rating according to both the response and other reviewers' comments.
[1] Chen, Yudong, et al. "Improved feature distillation via projector ensemble." Advances in Neural Information Processing Systems 35 (2022): 12084-12095.
[2] Zhu, Xiatian, and Shaogang Gong. "Knowledge distillation by on-the-fly native ensemble." Advances in neural information processing systems 31 (2018).
[3] Liu, Xiaolong, et al. "Norm: Knowledge distillation via n-to-one representation matching." arXiv preprint arXiv:2305.13803 (2023).
---
Rebuttal 3:
Title: Response to Reviewer sfSd's post rebuttal comments
Comment: We thank the reviewer again for their time and response. We greatly appreciate their interest in helping us improve the paper.
> Moving discussion between IBP and MIM to main paper
- We absolutely agree that having this discussion in the main paper will enrich it. In fact, it was in the main body of the initial draft. However, the space requirements proved very restrictive and we were forced to move the discussion to the appendix. We agree with you that it should be put back into the main paper and plan to do so in the final version.
> Although the motivations seem novel, the main improvements are from multiple RDMs.
**Difference between [1], [3] and our work.**
While multiple-projector approaches have existed, as pointed out in [1] (which is also cited as [20] in our paper), we performed comparisons specifically with [1] in the rebuttal PDF (Table 15). Note that *[1] and [3] use multiple projectors at the student* (see Fig. 1(c) in [1] and Fig. 1, right subfigure, in [3]), whereas *our method uses multiple RDMs at the teacher*. Thus, these methods are quite different. For completeness, we include comparisons with these methods in the table below. Our method with 1 RDM and IBM performs close to [3], and with 3 RDMs it outperforms [3]. Additionally, our method with 3 RDMs and IBM outperforms [3] even when [3] uses 8 projectors at the student.
To further address your concern about whether the proposed RDM training is what gives the performance improvement, we compare with Factor Transfer, Kim et al. [28], in Table 15 (rebuttal PDF). Our proposed method with one RDM and no IBM outperforms FT [28], where the only difference is the loss function used to train the RDM. This shows that the improvement comes from the new, principled loss function used to train the RDM network.
**Difference between [2] and ours.**
We note that [2] addresses online distillation *without a teacher*. This setting differs from ours, which uses a teacher. While [2] does use an ensemble of projection modules, the difference in training setting makes a direct comparison difficult: specifically, they use a gated ensemble of branch modules to simulate a teacher using the student's backbone. *In the table below we include [2] for reference only; the setting is different and the numbers should not be compared directly.*
```
+------------------------------+-------+-------+
| RN34 to RN18 on IN1k         | Top-1 | Top-5 |
+------------------------------+-------+-------+
| FT (Kim et al.)              | 71.43 | 90.29 |
| IFD [1]                      | 71.94 | 90.68 |
| ONE [2] (No RN34 teacher)    | 70.55 | 89.59 |
| NORM [3]                     | 72.14 |  ---  |
+------------------------------+-------+-------+
| Proposed (1 RDM), no IBM     | 71.83 | 90.69 |
| Proposed (1 RDM), with IBM   | 72.05 | 90.70 |
| Proposed (3 RDMs), with IBM  | 72.32 | 90.88 |
+------------------------------+-------+-------+
```
**Summary**
We have shown the efficacy of our proposed method over multiple experiments. See Table 2 on CIFAR-100, Table 3 and Table 14 on ImageNet, Table 13 for large student-teacher gaps on ImageNet, and Table 5 on CLIP models. Specifically, we have addressed the question of whether our proposed method is the reason for the performance improvement by disabling various parts of the proposed idea and showing improved performance over existing methods (see Table 15). **These results specifically showcase the importance of the proposed loss functions and their improved performance.**
We again thank the reviewer for their time and effort. If you have any further questions, please feel free to reach out to us. Thank you!
---
Rebuttal Comment 3.1:
Title: Comments by Reviewer sfSd
Comment: Thank you very much for your response.
As I am on the fence about this submission, I will make my final decision after discussing with other reviewers.
---
Reply to Comment 3.1.1:
Title: Thank you!
Comment: Thank you again for your time and effort in helping improve our paper.
We wanted to specially acknowledge the question on large student-teacher capacity gap experiments, which pushed us to provide stronger results during the rebuttal. It helped us show that the proposed CIFD not only shows consistent improvement with increasing teacher size but also shows significant improvement over existing methods (+0.36% ImageNet top-1 accuracy for ResNet152 to ResNet18 over existing DistKD), in Table 13. Further, our method showed similar trends for larger student-teacher gaps in CLIP-like models (Tables 16 and 17).
Additionally, based on your questions,
- We showed that the Masked-Image-Modeling objective is an upper bound on the Information Bottleneck objective
- We showed improved performance over works like MGD, DiffKD, IFD, and NORM.
- We showed how our multiple-RDM method, which acts on the teacher embeddings, is different from multi-projector methods like [1] and [3], which act on student embeddings. Specifically, our RDMs are used to mimic Teacher Assistants, whereas projectors are used to better help the student model learn the teacher embedding by providing gradients via an ensemble of projectors. Additionally, we showed improved performance over these methods.
If you have any further questions, please feel free to ask and we are happy to discuss. If not, thank you again for your time and effort. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time, effort, and feedback. We summarize the main points raised and our responses. Detailed responses can be found in the reviewers' individual responses. Tables 13 - 17 and Fig. 7 are in the response PDF.
---
### Summary
- Teacher Assistants have had significant success in knowledge distillation in facilitating knowledge transfer between the larger teacher and the smaller student. However, they are expensive to train. To alleviate this, we propose modules called Rate-Distortion Modules (RDMs) to mimic teacher assistants by reusing the teacher embeddings. Since RDMs are only two to three layers, they are significantly less costly to train.
- We propose the use of the Information Bottleneck Module (IBM) in the student model during training. We find that IBM on its own provides benefits but is also a crucial regularizer as the number of RDMs increases.
- Across multiple large-scale datasets and models, such as classification on ImageNet and CLIP, we show that our proposed method outperforms existing methods.
---
### Strengths as per the reviewers
- Multiple reviewers (sfSd and iLhT) appreciated the broad effectiveness of the proposed work. Specifically, they highlighted the contribution of the proposed idea in distilling CLIP-like models.
- Reviewer vQtp appreciated the idea of using RDMs which in turn enable the proposed method to create cheaper Teacher Assistants by reusing the shallow modules of the teacher network.
---
### Major questions from the reviewers
While we respond to the individual queries in the reviewer specific rebuttals, here we list some of the important ones and our response.
- Reviewers sfSd and vQtp requested experimental results with large teacher-student capacity gaps. To address this, we conducted more experiments, and our results show that CIFD works well when the teacher-student capacity gap is large. Details of the conducted experiments are as follows:
1. We trained a ResNet18 student with ResNet34, ResNet50, ResNet101, and ResNet152 teachers (Table 13). We showed that the CIFD-trained student model shows consistent improvement with increasing teacher size (and hence increasing capacity gap) and comfortably outperforms DistKD [5], which experimented on the same combinations. These models were chosen because DistKD [5] showed that knowledge distillation for the ResNet101-ResNet18 and ResNet152-ResNet18 teacher-student combinations suffers from the large capacity gap.
2. Additionally, we analyzed results with larger capacity gaps in the CLIP scenario. Here, CIFD-based models showed more improvement over the baseline (CLIPKD [31]) when the student-teacher capacity gap was larger. Specifically, CIFD gives greater improvement when the capacity gap is larger for 3 out of 5 zero-shot classification datasets (Table 16) and almost always for two zero-shot retrieval datasets (Table 17), compared to the case when the capacity gap is smaller. Here, the parameter ratio between teacher and student is 6.9.
- Reviewer sfSd requested a comparison between the Information Bottleneck Module and Masked-Image-Modeling (MIM) based distillation. Building on our discussion in Appendix C.1, we provide the novel insight that the MIM minimization objective is an upper bound on the Information Bottleneck objective (and hence the IB objective is better). Further, we showed significant improvement over an existing MIM-based distillation method, Masked Generative Distillation.
- Reviewers sfSd and iLhT requested comparisons with more recent works like MLKD, CTKD, LSKD, and Diffusion KD. In Table 13, we showcase that we outperform them all. In fact, it takes the combination of two works, DiffKD [25] and DistKD [5], together to obtain results similar to ours.
Pdf: /pdf/de751190c33560ac3765faf11345b8a887757e9d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Double-Bayesian Learning | Reject | Summary: This paper appears to suggest that any decision is composed of two Bayesian decisions, and it tries to evaluate the implications of this idea.
I am very confused by this paper and really don't know what to make of it. For example, the conclusion seems to be only a brainstorming session of random ideas, and the rest of the paper does not appear to be much better.
At the very least, it is not well written, at worst the proposed approach does not make any sense.
Strengths: Given that I don't properly understand what exactly the authors want to achieve, I am unable to formulate the strengths of this paper.
Weaknesses: The presentation is very messy. The paper jumps from topic to topic without me understanding their relations to each other.
Technical Quality: 1
Clarity: 1
Questions for Authors: see above
Confidence: 2
Soundness: 1
Presentation: 1
Contribution: 1
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 1
Code Of Conduct: Yes | null | Summary: The paper discusses the implications of Bayes' theorem, making assumptions inspired by a thought experiment of communicating a message. Prior (and model) elicitation by solving a fixed point equation is discussed.
Strengths: * The paper takes a fresh look at decision making under uncertainty, which is at the center of machine learning.
* The generality of the setting makes the discussion applicable to virtually all of ML.
Weaknesses: While I am sympathetic to the topic of prior and model elicitation from coherence arguments, I believe the paper needs a thorough revision focusing on clarity. While I have some intuition now, it is still not crystal clear to me what the exact goals or claims of the paper are. See the bullets below for constructive comments.
## Major
1. Section 4: what is the probability $P$? What is the underlying space and sigma algebra? What are they supposed to represent?
2. Section 4 introduces several very strong assumptions, like $1-P(A\vert B) = P(B\vert A)$ (is it for all $A,B$ in some sigma-algebra or for a specific pair of events?), that are motivated by an analogy about communicating a message. It is not clear why I should be prepared to make these strong assumptions. The fact that I don't know what $P$ is supposed to model or serve as does not help. Is it a joint probability over the variables describing a decision problem, as in decision theory? In that case, will it be used in conjunction with a loss function to make decisions? Will it be judged by some measure of decision accuracy? Or are we in a de Finetti framework, coming up with a personal probability $P$ which we will use to make predictions about unobserved variables? My intuition is that we are dealing with the latter kind, but this should be explained. And the strong assumptions need to be motivated by more than an analogy about communication.
3. The information analogy which motivates imposing the fixed point equation (9) is unclear, as is the question of which probability and which events it should apply to.
4. p5 L179: the sentence about the parameter being a dynamic parameter for a learning system is unclear. We haven't discussed any learning algorithm yet.
5. I am not sure I see where Eqn (11) comes from. $\lambda$ has been chosen to derive (10) from Bayes' theorem, but it doesn't have to be the right base to write (11), right? Same remark for (18).
## Minor
1. p7 L248: Although neural networks have been a popular class of models and algorithms, supervised learning is not synonymous with neural network training.
2. p7 L252: the meaning of "the $\lambda$ expression" is unclear.
Technical Quality: 2
Clarity: 1
Questions for Authors: * Can you formally rephrase the goal and claims of the paper?
* Can you explain what $P$ is representing? Is it a personal probability in the spirit of de Finetti, or a model of the data generating process? Or maybe something else?
* Can you formalize and list the assumptions you make on $P$, and justify them in the context of predicting a categorical variable?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: This is fundamental work that does not have any immediate negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Comment: I don't see a rebuttal in OpenReview. In any case, I believe that my score would have been hard to move at this stage, and that the manuscript needs a thorough revision before being resubmitted. | Summary: The purpose of this paper is to investigate the optimality of a classifier. It is known that the Bayes classifier is optimal, and it is likewise known that an explicit computation of the Bayes classifier is often very challenging if not impossible. This paper offers an analysis of the Bayes classifier as a sequential solution of two problems. An analysis and interpretation of a vase / faces example is presented and some theory is developed to further understand it. The paper concludes with an application.
Strengths: The authors are exploring an idea which is novel, and the whole thinking about Bayes classifiers as comprising two sub-problems seems novel and worth pursuing.
Weaknesses: I did not really understand the discussion with the vase, the sender, and the receiver. Perhaps the authors should somehow connect the Bayesian ideas to the description of the problem earlier? I think the paper would really benefit from rewriting Section 4 with the vase as a running example, because it is hard to connect the various decisions with the probabilities. Maybe it's worth adding more illustrations / diagrams for this? The authors are presenting novel ideas, and it's hard to understand them as they are currently presented.
For the theoretical implications, I think it would be better to illustrate the approach on a simpler model like a linear one.
The paper started by mentioning the Bayes classifier but does not come back to it as an example.
The paper states that the Bayes classifier is broken up into two decisions, but those are just briefly mentioned in the vase / faces example. The authors should carry this thread of reasoning through the whole paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: In line 134, you say that "...if the message is known, then whether the foreground needs to be swapped is unknown." But isn't knowing the message "vase" or "faces" enough? How will swapping the foreground help?
How are the fixpoint solutions connected to the whole vase / faces example?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | null | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Towards Human-AI Complementarity with Prediction Sets | Accept (poster) | Summary: The paper analyzes decision support systems based on prediction set algorithms. The authors show that: (i) the usage of conformal prediction techniques is generally sub-optimal in terms of accuracy; (ii) the problem of finding the optimal prediction sets under human assistance is NP-hard. Moreover, they provide (iii) a greedy algorithm that is guaranteed to find prediction sets that are better than those provided by conformal predictors. Experimental evaluation on synthetic and real data show the effectiveness of the considered approach.
Strengths: The main strengths of the paper are:
1. the actual paper contribution is well framed;
2. the theoretical analysis is sound;
3. the proposed algorithm improves over existing approaches.
Weaknesses: I think this work is a good paper, without major weaknesses, as it provides solid theoretical insights.
The concerns I have are mainly due to typos/missing details. I will point these out here, along with a few remarks that might be considered for the final version of the paper.
1. It seems to me that Table 2 and Figure 3 are missing the BRUTE FORCE baseline.
2. regarding the style of the paper, I found lines 135-146 very dense. Maybe providing a more concrete example (e.g., what could 1, 2, 3 represent?) might help the reader get through it.
3. In Algorithm 1, I think adding a comment to the pseudo-code (from lines 4 to 13) could be useful
4. regarding the limitation section (evaluation) a useful reference might be [Stutz et al., 2023], where the authors evaluate the possibility that human experts might not be approximating the true probability distribution
5. the experimental analysis (on real data) could be enriched with other popular Learning-to-Defer datasets, such as Cifar10H or hatespeech.
[Stutz et al., 2023] - Stutz, David, Abhijit Guha Roy, Tatiana Matejovicova, Patricia Strachan, Ali Taylan Cemgil, and Arnaud Doucet. "Conformal prediction under ambiguous ground truth." Transactions on Machine Learning Research (2023).
Technical Quality: 3
Clarity: 3
Questions for Authors: I have a couple of questions/remarks:
1. Can you elaborate a bit more on lines 91-93? I am not fully sure I understand the point there.
2. Can you add the results for BRUTE FORCE SEARCH in Table 2 and Figure 3?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper adequately discussed the limitations and societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Lines 125-146 & Algorithm 1]** To ease readability, we will rewrite 135-146 and we will add comments to the pseudo-code in Algorithm 1.
**[Limitation section]** In the limitation section under "Evaluation", we will add a discussion of and citation to Stutz et al., 2023.
**[Other popular datasets]** We agree that it would be interesting to evaluate our greedy algorithm in other datasets besides ImageNet-16H. However, there is a scarcity of publicly available datasets with multiple expert predictions per sample, a relatively large number of samples, and more than two/three classes. In the suggested datasets, either the performance of the human experts on their own is very high (CIFAR-10H) or the number of classes is very small (hatespeech) for decision support systems based on prediction sets to be practical.
**[Lines 91-93]** If we allow the expert to predict label values from outside the prediction set, the expert would have to decide when to believe that the true label is in the prediction set and thus predict a label from the prediction set, and when to question that the true label is in the prediction set and thus predict a label from outside the prediction set.
In this context, it is worth pointing out that, as shown in a large-scale human subject study by Straitouri et al. (ICML 2024) [19], if we allow the expert to predict label values from outside the prediction sets, the number of predictions in which the prediction sets do not contain the true label and the experts succeed is consistently smaller than the number of predictions in which the prediction sets contain the true label and the experts fail, as shown in Figure 10 in their paper.
**[Brute force search]** Since the greedy algorithm achieves the same performance as brute force search, we omitted brute force search from Table 1, Table 2 and Figure 3. However, we neglected to clarify that in the captions of Table 2 and Figure 3. We will clarify it in the revised version of the paper.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: I thank the authors for their clarifications. I have no further questions.
After reading the rebuttals and the other reviews, I am keeping my score as I think this is a sound paper. | Summary: The authors first show that conformal prediction sets may not lead to human decision optimality. The authors then introduce a greedy algorithm to generate candidate prediction sets that improve human decisions regarding the accuracy metric.
Strengths: The authors find the sub-optimality of conformal prediction sets on providing candidates for human decisions. Thereby, they propose a novel method to produce prediction sets that helps to improve human prediction.
Weaknesses: * The presentation is unclear in places:
* Line 86. Please break the sentence properly.
* Line 40/43/48: It is unclear for readers when the authors mention "optimal" multiple times but delay its explicit definition until later.
* Line 197: It is confusing when the authors refer to the role of $a$. What is the value of $a$?
* The authors claim they propose an efficient algorithm. However, I am not sure which part is efficient. Are there any numerical metrics, e.g., running time, supporting this contribution? Additionally, how should we understand this restriction of “for a large class of non-conformity scores and expert models” in line 51?
* Line 90: But you also miss the possibility outside the prediction set, especially when the prediction set is not that good. I think the authors need to discuss the exploitation-exploration dilemma.
* The authors use the scores related to softmax and APS. Other papers propose alternative scores like RAPS and SAPS. I think they should be included.
Technical Quality: 2
Clarity: 2
Questions for Authors: * Typo in the title: Predictions --> Prediction
* Did you include the results from the conformal prediction sets by varying the value of $\alpha$?
* Why choose those values of $\omega$? I think their magnitudes are close. The authors may consider even smaller and larger values of $\omega$ to show its sensitivity.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Please see the above sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Optimal]** We will define what we mean by optimal the first time we mention optimal in the revised version of the paper.
**[Role of $a$]** In the generative model we used in the synthetic experiments, out of 20 features per sample, $d=4$ of these features correlate with the label value and thus are informative; the rest are just noise. In this context, $a$ denotes the value of one of these four informative features. We will bring this information from Appendix C to the main text in the revised version of the paper.
**[Efficient algorithm]** The computational complexity of our greedy algorithm is polynomial on the size of the label set $|\mathcal{Y}|$, as shown in lines 171-176 in Section 4 and Appendix B. We will clarify that, by efficient, we refer to "computational efficiency" in the revised version of the paper.
**[Class of non-conformity scores and expert models]** The guarantees of our greedy algorithm with respect to conformal prediction (Proposition 1) apply to any non-conformity score that is nonincreasing with respect to the classifier scores $f_y(x)$ and expert models parameterized by a mixture of multinomial logit models (MNLs). We will clarify this in the revised version of the paper.
**[Exploitation-exploration dilemma]** Straitouri et al. (ICML 2024) [19] conducted a large-scale human subject study where they compared a setting where users are not allowed to select label values outside the prediction sets against another setting where users are allowed to select label values outside the prediction sets. They found that, in the latter setting, the number of predictions in which the prediction sets do not contain the true label and the experts succeed is consistently smaller than the number of predictions in which the prediction sets contain the true label and the experts fail, as shown in Figure 10 in their paper. As a consequence, humans perform better in the former setting than in the latter setting, as shown in Figure 3 in their paper. Following the reviewer's suggestion, we will include such a discussion in the revised version of the paper.
**[RAPS and SAPS]** Following the reviewer's suggestion, we conducted extensive additional experiments using the best conformal predictors created with RAPS and SAPS [*], and we will include them as baselines in the revised version of the paper. Both methods present additional hyperparameters ($k_{reg}$ and $\lambda_{raps}$, for RAPS, and $\lambda_{saps}$ for SAPS) we optimize using a held-out validation set and the procedure outlined in the original papers (e.g., see Appendix E [49]).
The empirical evaluation shows that their performance is worse than our greedy algorithm for synthetic and real-world tasks (Tables 1 and 2 of the rebuttal PDF). Moreover, RAPS and SAPS offer comparable performances to NAIVE and APS in all the tasks. The complete results can be found in the PDF attached to the general rebuttal comment above.
[*] Huang, Jianguo, et al. "Conformal prediction for deep classifier via label ranking." ICML (2024).
**[$\alpha$ value]** In Table 1 and Table 2, we report the results for the value of achieving the highest empirical success probability for each non-conformity score and classification task, as noted in lines 211-212 and 262-263. In Appendix G, for the experiments with real data, we report the result for all values of $\alpha$.
**[$\omega$ values]** The ImageNet-16H dataset contains only images with the $\omega$ values we used and thus we cannot experiment with smaller or larger values. However, we would like to point out that, even if the levels of phase noise $\omega$ seem close, the accuracy of the predictions made by humans on their own varies significantly among $\omega$ values, as shown in Figure 4S of Steyvers, Mark, et al. "Bayesian modeling of human–AI complementarity." Proceedings of the National Academy of Sciences 119.11 (2022): e2111547119.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the authors' response and I keep my initial score. | Summary: The paper shows the conformal prediction set may not be the optimal set recommendation to humans if humans follow certain choice models. The authors then propose a greedy algorithm by modeling $P(y|x)$ and the choice model of humans assuming it follows MNL model. Authors compare the proposed method against the standard conformal prediction set under synthetic human experts and the proposed method has a slightly better performance compared to traditional conformal sets.
Strengths: The authors consider conformal prediction in the human-in-the-loop setting, which is an important problem. The first part of the paper shows that the conformal prediction set may not be the best recommendation set for humans, which is easy to understand, since most conformal sets arrange the set in ranked order and we can play with the human choice models to create an example where conformal sets may not be the best recommendation set.
Weaknesses: The problem setting is not realistic: the authors do not allow humans to select outside the conformal prediction set. However, in the setups of most empirical successes of human-AI collaboration with conformal prediction, this is allowed. Similarly, if the authors do not allow humans to select outside the conformal prediction set, the humans' value is greatly reduced, and the optimal thing to do may be just to use fully automated AI prediction; in all the toy examples the authors provided, kicking humans out of the loop is the optimal system (humans only make things worse).
The theoretical analysis seems useless: I think the theoretical analysis is useless for two reasons: 1) while identifying the optimal set is NP-hard, in practice the metric we care about is $\mathbb{E} g(S|x)$, not identifying the optimal set. If an algorithm can achieve a good rate of convergence for this regret, then the problem is not hopeless, so I think the authors need to show, for all conformal prediction algorithms, the regret lower bound for $\mathbb{E} g(S|x)$; 2) while I can see that sometimes the label set can be large, in practice the theoretical results may not be a big issue for many problems, since most problems have a small label set (binary or three classes). These negative results may not be as severe as the authors present them in the paper.
The solution is disconnected and not useful in human-AI collaboration: 1) The proposed solution does not enjoy the distributionally-free guarantee, which is the main reason why people use conformal prediction. I would expect the authors to provide a conformal prediction algorithm that is human-centered, rather than directly switching lanes to traditional prediction methods. 2) The proposed solution requires $P(y|x)$ and the true human choice model, which is too strong to be realistic. If I know $P(y|x)$, why should I involve humans in the loop anymore (recall that the authors can restrict humans to select only from the prediction set, so humans are not necessary in the system)? The optimal strategy would be to directly use $P(y|x)$ to select actions.
Baselines: For human-AI collaboration tasks, I expect to see the proposed solution is better than human working alone or AI working alone. The authors should compare with AI only baseline using $P(y|x)$. Based on the toy example and my current understanding of the paper, the proposed solution cannot beat AI only baseline.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Problem setting]** Straitouri et al. (ICML 2024) [19] conducted a large-scale human subject study where they compared the setting we adopted, where users are not allowed to select label values outside the conformal prediction sets, against the setting the reviewer suggests, where users are allowed to select label values outside the sets. The results of their study (refer to Figure 3 in their paper) suggest that humans perform better under the setting we adopted, which Straitouri et al. refer to as the strict implementation of their system, than under the setting the reviewer suggests, which they refer to as the lenient implementation of their system.
Further, we would like to clarify that, in our experiments, we found that kicking humans out of the loop is **not** the optimal system (humans do **not** make things worse) in the setting we adopted. More specifically, in the experiments with synthetic data, both the accuracy of the classifier $P(Y'=Y)$ we used (shown in the top row of Table 1) and the accuracy of the human (rows under NONE in Table 1) are always lower than the accuracy of the human using the prediction sets provided by our greedy algorithm (shown in rows under GREEDY of Table 1). This suggests that even if the human has low performance overall, it may still be beneficial to use prediction sets tailored to the performance of humans rather than "kicking them out".
In the experiments with real data, the accuracy of the (fine-tuned) VGG-19 classifier we used ($0.896$ for $\omega=80$, $0.894$ for $\omega=95$, $0.857$ for $\omega=110$ and $0.792$ for $\omega=125$) and the accuracy of the human working alone ($0.9$ for $\omega=80$, $0.859$ for $\omega=95$, $0.771$ for $\omega=110$ and $0.603$ for $\omega=125$), which we neglected to report in our paper, are also always lower than the accuracy of the humans using the prediction sets provided by our greedy algorithm (shown in rows under GREEDY in Table 2). In the revised version of the paper, we will report the accuracy of the (fine-tuned) VGG-19 classifier we used and of the human working alone.
**[Theoretical analysis]** Our hardness analysis implies that finding the optimal prediction sets that maximize the metric $\mathbb{E}[g(\mathcal{S}|x)]$ is NP-hard, and even finding a prediction set that approximates this to a reasonable factor (e.g. a constant factor) is also NP-hard. As a consequence, there is no polynomial-time algorithm with a sublinear regret (i.e., a _good_ regret) with respect to the (oracle) algorithm that creates optimal prediction sets that maximize $\mathbb{E}[g(\mathcal{S}|x)]$. We will clarify this in the revised version of the paper.
While there are certainly many problems with small label sets, there are also many problems with large label sets. In fact, two of the most popular benchmark datasets in the machine learning literature used by thousands of papers, ImageNet and its more commonly known subset ImageNet-1k, contain over 20,000 and 1,000 different label values, respectively. As an additional example of tasks with large label space, we mention a clinical text annotation task where each span of text has to be mapped to a _"concept label in a large (>400,000) medical vocabulary"_ [1].
[1] Levy, Ariel, et al. "Assessing the impact of automated suggestions on decision making: Domain experts mediate model errors but take less initiative." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.
**[Distributionally-free guarantees of conformal prediction]** One of the main contributions of our paper is to demonstrate that, in human-AI collaboration, the distributionally-free guarantees offered by conformal prediction may be insufficient to achieve optimal performance. More specifically, in Section 3, we show that, under common choices of non-conformity scores, there are many data distributions for which the optimal prediction set under which the human expert achieves the highest accuracy **cannot** be constructed by **any** conformal predictor. As suggested by the reviewer, one may think of developing human-centered conformal predictors that incorporate information about the distribution of experts’ predictions in the definition of the non-conformity score. However, as argued in lines 118-122, our hardness results show that one cannot expect to fully close the performance gap with such human-centered conformal predictors.
**[Knowledge of $P(Y|X)$]** Even if one knows the conditional distribution of the ground-truth label $P(Y|X)$, one may benefit from involving humans in the loop if the humans have access to additional features besides X, as pointed out in footnote 4. For instance, in the example in lines 106-117, the optimal prediction set under which the human achieves the highest accuracy does not contain the label value with the highest $P(Y|X)$ value. Further, under this prediction set, the human achieves higher accuracy than a classifier that picks the label value with the highest $P(Y|X)$ value (0.6 vs 0.4).
**[Human-only and AI-only baselines]** In our experiments, both with synthetic and real data, our proposed solution is better than humans working alone or AI working alone. Regarding the comparison with AI working alone, please, refer to our previous reply under [Problem setting]. Regarding the comparison with human working alone, in the experiments with synthetic data, the accuracy of the human working alone is reported under "NONE" and, in the experiments with real data, the accuracy of the human working alone is $0.9$ for $\omega=80$, $0.859$ for $\omega=95$, $0.771$ for $\omega=110$ and $0.603$ for $\omega=125$, which we missed reporting in our paper and we will report in the revised version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the clarification. I revisited the toy example and realized the human+AI is actually better than the ground-truth set ranked by probability; this is an interesting example, which addressed my concern. I raised my score accordingly. | Summary: This paper aims to construct optimal prediction sets under which experts can achieve the highest accuracy. The authors claim that human experts cannot attain maximum accuracy with the prediction sets generated by conformal predictors. To address this issue, the paper proposes an efficient greedy algorithm based on maximum marginal gain to find prediction sets that outperform those generated by conformal predictors. The paper offers two main theoretical contributions: the first proves that finding the optimal prediction set is an NP-hard problem, while the second demonstrates that the proposed method enables experts to achieve higher accuracy than conformal predictors. Empirical results further validate the effectiveness of the proposed approach.
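To illustrate the kind of greedy, maximum-marginal-gain construction the summary describes, here is a schematic sketch. This is our illustrative reconstruction, not the paper's exact objective or code: `expected_success` is an assumed surrogate for $\mathbb{E}[g(\mathcal{S}|x)]$ that weights the confusion-matrix success model by the classifier's confidence, and all names and toy numbers are hypothetical.

```python
import numpy as np

def expected_success(C, probs, S):
    """Illustrative surrogate for E[g(S | x)]: average over candidate true
    labels y (weighted by classifier confidence probs[y]) of the chance the
    human picks y when restricted to set S, under confusion matrix C."""
    S = list(S)
    total = 0.0
    for y in S:
        denom = C[S, y].sum()  # mass the human puts on S when the truth is y
        if denom > 0:
            total += probs[y] * C[y, y] / denom
    return total

def greedy_set(C, probs):
    """Repeatedly add the label with the largest marginal gain in the
    surrogate objective; stop as soon as no candidate improves it."""
    S, best = set(), 0.0
    candidates = set(range(len(probs)))
    while candidates:
        gains = {y: expected_success(C, probs, S | {y}) for y in candidates}
        y_star = max(gains, key=gains.get)
        if gains[y_star] <= best:
            break
        S.add(y_star)
        best = gains[y_star]
        candidates.remove(y_star)
    return S

C = np.array([[6.0, 1.0, 1.0],
              [2.0, 7.0, 2.0],
              [2.0, 2.0, 7.0]])   # rows: human prediction, columns: true label
probs = np.array([0.6, 0.3, 0.1])  # classifier confidence for one input x
chosen = greedy_set(C, probs)      # adds label 0, then 1, then stops at {0, 1}
```

In this toy case, adding label 2 would dilute the human's attention more than it adds correct-answer mass, so the greedy loop stops at a two-label set.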
Strengths: 1. The paper is well-motivated and easy to follow.
2. The authors provide a theoretical analysis for their motivation and offer a theoretical guarantee for the superior performance of the proposed greedy algorithm.
3. The paper presents an extensive set of experiments, including both synthetic and real data.
Weaknesses: 1. Further validation on more realistic datasets, such as ImageNet and CIFAR100, could strengthen the main points of the paper.
2. The experiments lack comparison with other classical score functions, such as Regularized Adaptive Prediction Sets.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Figure 3, how is the Empirical Success Probability for each image calculated?
2. In line 210, why does the score function of APS discard the random variable? In other words, does the random variable affect the performance of the empirical average test accuracy?
3. Can you report the empirical coverage of the Greedy algorithm, since valid coverage is the fundamental guarantee for conformal prediction?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: They are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[More realistic datasets]** The dataset ImageNet-16H is among the only publicly available datasets that we found containing multiple expert predictions per sample, a relatively large number of samples, more than two/three classes and a reasonable level of difficulty. The suggested datasets, ImageNet and CIFAR100, do not contain multiple expert predictions per sample. Another dataset for a multiclass classification task is CIFAR-10H which also has multiple expert predictions per sample and 10 classes. However, we found that the experts on their own already achieve high accuracy ($\sim 0.95$), making it a less challenging scenario than ImageNet-16H.
**[RAPS]** Following the reviewer's suggestion, we conducted extensive additional experiments using the best conformal predictors created with RAPS and SAPS [*], and we will include them as baselines in the revised version of the paper. Both methods present additional hyperparameters ($k_{reg}$ and $\lambda_{raps}$, for RAPS, and $\lambda_{saps}$ for SAPS), which we optimized using a held-out validation set and the procedure outlined in the original papers (e.g., see Appendix E [49]).
The empirical evaluation shows that their performance is worse than our greedy algorithm's on both synthetic and real-world tasks (Tables 1 and 2 of the rebuttal PDF). Moreover, RAPS and SAPS offer performance comparable to NAIVE and APS in all the tasks. The complete results can be found in the PDF attached to the general rebuttal comment above.
[*] Huang, Jianguo, et al. "Conformal prediction for deep classifier via label ranking." ICML (2024).
**[Empirical success probability]** For each image $x$ and prediction set $\mathcal{S}$, we estimate the empirical success probability using the mixture of multinomial logit models (MNLs), i.e., $P_{\mathcal{S}}(\hat Y = y | X=x, Y=y) = \frac{C_{yy}}{\sum_{y' \in \mathcal{S}} C_{y'y}}$, where $\hat{Y}$ denotes the prediction by the human, $Y$ denotes the true label and, in the experiments with real data, the confusion matrix $\mathbf{C}$ is estimated using predictions made by real human experts on their own.
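To make this estimator concrete, here is a minimal NumPy sketch of the success-probability computation described above; the confusion-matrix values are toy numbers, not data from the paper.

```python
import numpy as np

def success_probability(C, S, y):
    """P_S(Yhat = y | X = x, Y = y): probability a human with confusion
    matrix C picks the true label y when restricted to prediction set S.
    C[i, j] holds the (estimated) rate of predicting i when the truth is j."""
    S = list(S)
    denom = C[S, y].sum()  # mass the human puts on labels in S given truth y
    return C[y, y] / denom if denom > 0 else 0.0

# Toy 3-class confusion matrix (rows: prediction, columns: true label).
C = np.array([[6.0, 1.0, 1.0],
              [2.0, 7.0, 2.0],
              [2.0, 2.0, 7.0]])

# Restricting to {0, 1} removes the mass the human wasted on label 2.
full = success_probability(C, {0, 1, 2}, y=0)     # 6 / 10 = 0.6
restricted = success_probability(C, {0, 1}, y=0)  # 6 / 8  = 0.75
```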
**[Randomization in APS]** We did not find the randomization in APS, which is just needed to achieve $1-\alpha$ coverage exactly, to influence the empirical success probability in our experiments. Therefore, for simplicity, we decided to omit it. We will clarify this in the revised version of the paper.
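For context, the score in question is, to our understanding, the standard APS non-conformity score of Romano et al. (2020): the total probability mass of labels ranked at or above the candidate label, with the randomized variant subtracting $u \cdot p_y$ for $u \sim \mathrm{Uniform}(0,1)$. A minimal sketch of both variants:

```python
import numpy as np

def aps_score(probs, y, u=None):
    """APS non-conformity score for label y: mass of all labels whose
    predicted probability is at least probs[y].  Passing u in [0, 1]
    gives the randomized variant, which is only needed to achieve
    1 - alpha coverage exactly."""
    mass = probs[probs >= probs[y]].sum()
    return mass if u is None else mass - u * probs[y]

probs = np.array([0.5, 0.3, 0.2])
det = aps_score(probs, y=1)          # 0.5 + 0.3 = 0.8
rnd = aps_score(probs, y=1, u=1.0)   # 0.8 - 0.3 = 0.5
```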
**[Empirical coverage of Greedy]** We will report the empirical coverage achieved by the greedy algorithm and the best conformal predictors used as baselines in the revised version of the paper. Here below we show the empirical coverage for the synthetic tasks.
In summary, prediction sets constructed with our greedy algorithm present empirical coverage comparable to that of the best conformal predictors (Naive and APS) in both the synthetic and ImageNet-16H tasks. Moreover, in those few settings in which the conformal predictors' coverage is higher (e.g., $P(\hat{Y} = Y) = 0.3$ and $\gamma = 0.3$), the conformal predictors' empirical success probability is lower (see Table 1 of the main paper), which underlines how coverage alone can be a poor proxy for the empirical human accuracy. Finally, please note that the empirical success probability of the human acting alone with the full label set, which achieves perfect coverage (1.0), is always worse than that of the human + prediction sets combination in all of our experiments.
| $\gamma$ | Method | $\mathbb{P}[\hat{Y} = Y] = 0.3$ | $\mathbb{P}[\hat{Y} = Y] = 0.5$ | $\mathbb{P}[\hat{Y} = Y] = 0.7$ | $\mathbb{P}[\hat{Y} = Y] = 0.9$ |
|:-----:|--------|:-----------------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------:|
| 0.3 | Naive | $0.637 \scriptstyle\pm 0.066$ | $0.802 \scriptstyle\pm 0.058$ | $0.908 \scriptstyle\pm 0.020$ | $0.973 \scriptstyle\pm 0.007$ |
| | Aps | $0.603 \scriptstyle\pm 0.083$ | $0.804 \scriptstyle\pm 0.045$ | $0.900 \scriptstyle\pm 0.026$ | $0.967 \scriptstyle\pm 0.006$ |
| | Greedy | $0.502 \scriptstyle\pm 0.024$ | $0.764 \scriptstyle\pm 0.019$ | $0.920 \scriptstyle\pm 0.012$ | $0.976 \scriptstyle\pm 0.004$ |
| 0.5 | Naive | $0.583 \scriptstyle\pm 0.104$ | $0.770 \scriptstyle\pm 0.044$ | $0.897 \scriptstyle\pm 0.021$ | $0.968 \scriptstyle\pm 0.009$ |
| | Aps | $0.557 \scriptstyle\pm 0.100$ | $0.741 \scriptstyle\pm 0.036$ | $0.879 \scriptstyle\pm 0.016$ | $0.961 \scriptstyle\pm 0.007$ |
| | Greedy | $0.489 \scriptstyle\pm 0.027$ | $0.732 \scriptstyle\pm 0.015$ | $0.902 \scriptstyle\pm 0.013$ | $0.970 \scriptstyle\pm 0.003$ |
| 0.7 | Naive | $0.535 \scriptstyle\pm 0.104$ | $0.676 \scriptstyle\pm 0.047$ | $0.823 \scriptstyle\pm 0.044$ | $0.938 \scriptstyle\pm 0.013$ |
| | Aps | $0.519 \scriptstyle\pm 0.115$ | $0.661 \scriptstyle\pm 0.036$ | $0.853 \scriptstyle\pm 0.015$ | $0.941 \scriptstyle\pm 0.013$ |
| | Greedy | $0.473 \scriptstyle\pm 0.017$ | $0.696 \scriptstyle\pm 0.014$ | $0.861 \scriptstyle\pm 0.014$ | $0.958 \scriptstyle\pm 0.005$ |
| 1.0 | Naive | $0.499 \scriptstyle\pm 0.075$ | $0.608 \scriptstyle\pm 0.034$ | $0.750 \scriptstyle\pm 0.025$ | $0.905 \scriptstyle\pm 0.014$ |
| | Aps | $0.453 \scriptstyle\pm 0.062$ | $0.631 \scriptstyle\pm 0.038$ | $0.806 \scriptstyle\pm 0.026$ | $0.912 \scriptstyle\pm 0.013$ |
| | Greedy | $0.457 \scriptstyle\pm 0.024$ | $0.664 \scriptstyle\pm 0.015$ | $0.839 \scriptstyle\pm 0.014$ | $0.951 \scriptstyle\pm 0.006$ |
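For reference, the empirical coverage reported above is simply the fraction of test points whose prediction set contains the true label; a minimal sketch with toy sets:

```python
def empirical_coverage(prediction_sets, labels):
    """Fraction of test points whose prediction set contains the true label."""
    hits = sum(1 for S, y in zip(prediction_sets, labels) if y in S)
    return hits / len(labels)

sets = [{0, 1}, {2}, {1, 3}, {0}]
labels = [0, 2, 0, 1]
cov = empirical_coverage(sets, labels)  # 2 of 4 sets contain the label -> 0.5
```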
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I still have only one concern:
[**Empirical coverage of Greedy**] Regarding the selection of $\alpha$, is it justifiable to report results for the $\alpha$ value at which the expert attains the highest average test accuracy? The test set is intended for evaluating the performance of various methods, not for choosing hyper-parameters. Therefore, it may be more appropriate to adjust $\alpha$ based on a separate hold-out dataset.
---
Reply to Comment 1.1.1:
Comment: **[Selection of $\alpha$]** We would like to thank the reviewer for their follow-up message. In our experiments, we report the results for the $\alpha$ value at which the expert attains the highest average accuracy because our goal was to show that our greedy algorithm achieves better results than conformal prediction for _any_ value of $\alpha$. However, we agree with the reviewer that, in practice, one would need to select $\alpha$ using a held-out dataset. Therefore, to avoid any misunderstanding, we will add a clarification and, if the reviewer feels it is necessary, we will also add an Appendix where we adjust $\alpha$ based on a separate held-out set. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their careful and insightful comments, which will help improve our paper. Please, find a point-by-point response below and a one-page pdf with additional results attached.
Pdf: /pdf/44abba03dc7f080cdc9489bfbca32ca7899ed3d9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Toward Approaches to Scalability in 3D Human Pose Estimation | Accept (poster) | Summary: Existing data in 3D human pose estimation are typically collected indoors with human actors. To address this scalability issue, the authors propose to synthesize 3D human pose data via an osteo-kinematic model and introduce biomechanical constraints for better physical plausibility. Additionally, to deal with the inherent ambiguity in single-view depth estimation, the authors introduce Binary Depth Coordinates to explicitly model the relative spatial relation between adjacent joints. Extensive experiments verify the effectiveness of the proposed approach.
Strengths: 1. Leveraging biomechanical prior knowledge to synthesize physically plausible human data is $\textbf{intuitive}$ and $\textbf{interesting}$.
2. Comprehensive experiments verify the effectiveness of the proposed data augmentation approach (BPG) and Binary Depth Coordinates (BDC). Specifically, BDC can be applied to different methods, e.g., image-based and lifting-based, showing superior generalization ability.
Weaknesses: 1. $\textbf{Repeated text}$: The first paragraph of Sec.2 appears to be a copy-paste from the abstract, which is highly discouraged.
2. $\textbf{Requirement of camera intrinsics}$: While BDC shows notable performance gains to baselines, solving depth requires camera intrinsics (principal point and focal length), typically not required by current 3D HPE methods. This requirement may introduce additional constraints for in-the-wild inference.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In Fig. 2 and Fig.4, adding synthesized data consistently decreased performance for some baseline methods, e.g., GFpose. This seems counterintuitive to me. As the authors mentioned, overfitting might be a reason; do the authors have any other insights regarding this? Does this phenomenon indicate there is still a gap between the real data and synthesized data?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have included the limitations in Sec.6. However, the requirement for camera intrinsics may also be considered a limitation and discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable and detailed review. Your feedback on the use of biomechanical knowledge and the practical limitations of our approach provides important guidance for further improving our work.
## Repetition in the First Paragraph of Section 2
We thank the reviewer for pointing out the repetition in the first paragraph of Section 2 and for the careful, constructive feedback. We will revise this paragraph to remove the duplicated text and enhance the clarity and originality of our manuscript.
## Requirement of camera intrinsics
We appreciate the reviewer's insightful comment regarding the requirement of camera intrinsics for solving depth with BDC. We understand that this may seem like an additional constraint compared to some current 3D HPE methods.
However, we would like to clarify that the use of camera intrinsics in our equations primarily serves to illustrate the theoretical framework. In practice, similar to other research [1,2,3], we employ approximations for both inference and training, which effectively mitigate the need for precise camera intrinsic parameters. This approach allows our method to be applied more flexibly in in-the-wild scenarios without introducing significant constraints.
[1] J. Sosa et al., "Self-supervised 3D Human Pose Estimation from a Single Image", CVPR 2023
[2] B. Wandt et al., "ElePose: Unsupervised 3D Human Pose Estimation by Predicting Camera Elevation and Learning Normalizing Flows on 2D Poses", CVPR 2022
[3] Z. Yu et al., "Towards alleviating the modeling ambiguity of unsupervised monocular 3d human pose estimation", ICCV 2021
## Insights on Decreased Performance with Synthesized Data
We appreciate the reviewer's question regarding the decreased performance when adding synthesized data for some baseline methods, such as Gfpose, as shown in Figures 2 and 4.
By examining Figure 6 (Appendix), we can provide additional insights into this phenomenon. It appears that the inclusion of excessive synthesized data can exacerbate the depth ambiguity problem for some baseline methods. As a result, these models tend to predict the middle ground rather than accurately differentiating between the front and back, leading to a degradation in performance.
However, our Binary Depth Coordinates (BDC) approach mitigates this issue effectively. BDC's novel decomposition strategy and its handling of depth ambiguity ensure that the model maintains high performance even with the addition of synthesized data. This demonstrates the robustness of our method in addressing depth ambiguity and improving overall accuracy, unlike the observed degradation in baseline methods.
## **Concluding Remarks**
We deeply appreciate the insightful comments provided by reviewer Gv4b and will thoughtfully incorporate their valuable suggestions in our rebuttal. We are confident that these responses will address the concerns raised and contribute to an improved evaluation of our manuscript.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors' efforts to prepare the response. Most of my concerns were addressed. | Summary: This paper introduces two components aimed at addressing challenges in 3D human pose estimation, specifically in terms of scalability and generalization. The authors propose a Biomechanical Pose Generator (BPG), which incorporates biomechanical principles to generate plausible new 3D poses. They also introduce Binary Depth Coordinates (BDC), a component designed to mitigate the depth ambiguity encountered when lifting a 2D pose to 3D. The paper includes ablation studies to demonstrate the impact of each component, and compares these new approaches to existing pose augmentation methods.
Strengths: The paper’s focus on addressing the challenge of limited datasets and enhancing the generalizability of the method is interesting and to the best of my knowledge the idea of biomechanical pose generator which does not rely on a source dataset is novel. Also, the authors’ attention to the depth ambiguity in 3D pose estimation from a single image adds a value to the field. The authors have conducted comprehensive experiments and ablation studies, which provide valuable insights into the effectiveness of the proposed components. The inclusion of cross-dataset evaluation is crucial, as it allows for a robust assessment of the Biomechanical Pose Generator (BPG) component’s effectiveness.
Weaknesses: 1- The paper is generally well-written, but some parts could be clearer. Including a figure to illustrate the entire system could significantly help reader comprehension. For example, a diagram showing the VPose (or any baseline) architecture and the integration of the BDC component might be more effective than a text-only description. Additionally, including some implementation details about the BDC component in the main paper could improve the flow of information.
2- There are some ambiguities in the experiment section that need clarification. When referring to the “source-dataset”, it would be helpful to specify whether this refers to the Human 3.6M dataset or the newly synthesized poses. Similarly, when discussing evaluations on 3DHP and 3DPW, it would be beneficial to mention the specific subset used, such as the test set.
3- There appears to be some confusion between Table 1 and the results in Figure 4 (left). While Table 1 shows improvements in the Human 3.6M results when adding new poses generated from BPG, Figure 4 (left) indicates that adding more data increases the MPJPE error (without integrating BDC). This seems contradictory and could benefit from further explanation.
4- Typos: There are a couple of typographical errors that need correction. On Line 177, (xi) is repeated twice instead of yi. On Line 287, BDC should be corrected to BPG.
Technical Quality: 3
Clarity: 2
Questions for Authors: As I mentioned in the weaknesses section, I would like to learn more about the effect of adding more synthesized poses to Human 3.6M and evaluating on the same source of data as currently the results in the Table 1 and Figure 4 are a bit confusing to me.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comprehensive and constructive review. Your insights on the scalability and generalization aspects, along with suggestions for improved clarity, are invaluable and will be instrumental in enhancing our paper.
## Suggestions for Improving Clarity
We appreciate the reviewer's positive feedback on the overall quality of our paper and their constructive suggestions for improving clarity. We agree that including a figure to illustrate the entire system could significantly aid reader comprehension. The diagram showing the baseline architecture and the integration of the BDC component has been added to the PDF linked in the General Response. Additionally, we have provided more detailed implementation specifics of the BDC component in the main paper. To aid understanding of the BDC, pseudo-code is included and can be found in Appendix D, Algorithm 2. These enhancements will make our manuscript clearer and more informative.
## Ambiguities in the Experiment Section
We appreciate the reviewer's observation regarding ambiguities in the experiment section. We will carefully address these ambiguities and provide the necessary clarifications to improve the clarity and comprehensibility of our manuscript. Bringing this to our attention helps ensure that our revisions will enhance the overall quality of our work.
## Confusion between Table 1 and Figure 4 (left)?
This is a valid point. We clarified this issue in the general response under "Confusion on Experimental Results.”
## Typos
We acknowledge the reviewer's attention to detail regarding the typographical errors in our manuscript. We will correct the repeated (xi) on Line 177 to yi and change BDC to BPG on Line 287. Highlighting these issues helps us improve the accuracy and readability of our manuscript.
## **Concluding Remarks**
We are grateful for the constructive feedback from reviewer 696C and will incorporate their valuable suggestions to refine our manuscript. We believe that our comprehensive responses will satisfactorily address the concerns and result in a favorable reassessment.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. I still believe that while the idea of the paper is novel in some aspects, the paper contains some ambiguities in several parts and it needs revision to be ready for a good publication. Given that and also according to other concerns raised by other reviewers, I'd rather keep my original rating. | Summary: The authors propose a 3D human pose estimation framework that incorporates data augmentation and depth ordering information. The main contributions are two-fold: First, the proposed Biomechanical Pose Generator (BPG) generates plausible body poses based on kinematic constraints, which is used for data augmentation. Second, the Binary Depth Coordinates (BDC) disambiguate the projective depth of each joint by classifying whether the joints are positioned towards or away from the camera. The proposed framework achieved state-of-the-art performance in single-frame 3D human pose estimation settings.
Strengths: - The proposed method achieves state-of-the-art results in various 3D HPE datasets.
- The effect of data augmentation is validated in cross-domain learning settings.
Weaknesses: My major concern lies on the novelty of the contribution.
- There are numerous research papers that regularize 3D human pose based on kinematic constraints. The authors did not clarify the distinctiveness of BPG from these conventional works, except for stating that BPG achieved better performance. An analysis showing how the proposed BPG generates more plausible poses compared to previous augmentation methods is required, either by displaying the generated poses or by showing qualitative estimation results.
- The concept of BDC is similar to [1] which learns ordinal depth information. The authors should cite the paper and discuss the difference.
The paper also contains ambiguously explained parts or lacks details about their methods. Please refer to Questions section.
[1] G. Pavlakos et al., "Ordinal depth supervision for 3d human pose estimation", CVPR 2018
Technical Quality: 3
Clarity: 2
Questions for Authors: Method
- In line 155, the focal length of the camera matrix is set to 1 for BPG, is it also the case for the datasets used? Or the camera matrix provided in the datasets are used?
- In line 179, what is the meaning of "depth relative to the plane of the image". I guess $s_i$ is the depth relative to the preceding joint not the image plane.
Experiments
- Why did the authors use different baseline architectures in Sec. 5.1 and 5.2?
- How much portion of augmented data from BPG used for experiments in Sec. 5.1?
- Given that using only BPG increases the error in Fig. 4 left, how could it be possible to achieve better performance in Table 1 and 2 when only BPG is used?
- Why didn't the authors use BPG in Table 6?
- What is the difference between Variant E and BPG in Table 8?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Suggestions
- In Fig. 4, it should be clearly stated what * means. It would be better to use GFpose+BDC instead of GFpose*.
- More detailed explanation about how $T$ in Eq (1) and ${d}_{m,n}$ in Eq (2) are formulated would clarify the methods.
Typos
- Line 185, by the projection from -> by back-projecting a ray from
- Line 212, duplicated sentences.
- Line 115, to peed -> of
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough and thoughtful review. Your comments on the novelty and clarity of our contributions, as well as your specific questions, are extremely valuable and will guide us in refining our manuscript.
## **How does BPG differ from existing kinematic constraint-based methods?**
Thank you for your insightful question. Our Biomechanical Pose Generator (BPG) differentiates itself from existing kinematic constraint-based methods in several key aspects.
Firstly, BPG employs a new biomechanical approach that adheres to realistic human movement constraints, unlike existing methods that often focus solely on statistical kinematic constraints of the given dataset. While existing methods also use kinematic constraints, they depend on source datasets, which can introduce biases due to the limited and specific conditions under which the data was collected. This dependency can lead to biased pose generation that is difficult to eliminate. In contrast, BPG utilizes the biomechanical principle of NROM (Normal Range of Motion) independently of datasets, ensuring diverse and unbiased pose sampling.
Secondly, the concept of a biomechanical pose generator that does not rely on source datasets is novel. While many methods require pre-existing datasets to generate poses, BPG can automatically generate diverse and reliable human poses without being constrained by dataset variability and bias.
The performance improvement of BPG is due to the removal of biases in existing data, achieved by integrating NROM without relying on source datasets.
To further illustrate BPG's advantages, qualitative estimation results are shown in Figure 7 of the appendix, and the generated poses are depicted in Figure 8. Additionally, the general response PDF provides a comparison of pose distributions with existing augmentation methods. Existing methods follow the limited distribution of their datasets, whereas our method ensures robustness by eliminating bias. We believe these details clearly demonstrate the diversity and realism of poses generated by BPG compared to existing methods.
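To make the dataset-free sampling idea concrete, here is a minimal sketch (our illustration, not the paper's implementation) for a two-bone planar arm: joint angles are drawn uniformly within normal-range-of-motion limits and turned into joint positions by forward kinematics. The NROM bounds and bone lengths are placeholders, not clinical values.

```python
import math
import random

# Illustrative NROM limits in degrees (placeholders, not clinical values).
NROM = {"shoulder_flexion": (0.0, 180.0), "elbow_flexion": (0.0, 150.0)}
BONES = [0.30, 0.25]  # upper arm and forearm lengths in metres (placeholders)

def forward_kinematics(angles_deg):
    """Chain the bones: each joint angle is relative to its parent bone."""
    x = y = theta = 0.0
    joints = [(x, y)]
    for length, a in zip(BONES, angles_deg):
        theta += math.radians(a)
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        joints.append((x, y))
    return joints

def sample_pose(rng=random):
    """Sample angles within NROM, then run forward kinematics: every
    sampled pose is biomechanically bounded, with no source dataset."""
    angles = [rng.uniform(*NROM["shoulder_flexion"]),
              rng.uniform(*NROM["elbow_flexion"])]
    return forward_kinematics(angles)

# With fixed angles the chain is deterministic: the arm points straight up.
joints = forward_kinematics([90.0, 0.0])  # shoulder, elbow, wrist positions
```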
## **BDC concept is similar to [1]. The authors should cite and discuss the difference.**
We appreciate the concerns regarding the conceptual similarity of BDC. However, we would like to emphasize that our approach is differentiated from existing methods by introducing several novel components specifically designed to address depth ambiguity.
We discussed this thoroughly in the general response under "More Need for Comparison to Similar Works on BDC.”
## **Questions**
> In line 155, the focal length of the camera matrix is set to 1 for BPG, is it also the case for the datasets used? Or the camera matrix provided in the datasets are used?
>
We appreciate the reviewer's attention to detail. We used the camera matrix provided in the datasets, not the simplified focal length of 1 as set for BPG.
> In line 179, what is the meaning of "depth relative to the plane of the image"? I guess $s_i$ is the depth relative to the preceding joint, not the image plane.
>
Thank you for pointing out this ambiguity. You are correct; it should indeed be the depth relative to the preceding joint. We appreciate your careful reading and will correct this in the manuscript.
> Why did the authors use different baseline architectures in Sec. 5.1 and 5.2?
>
In Section 5.1, we employed a common architecture typically used in augmentation work to facilitate a fair comparison with other augmentation methods. In contrast, Section 5.2 utilizes multi-hypothesis models, which are specifically designed to address depth ambiguity. This distinction was made to highlight how our methodology better handles depth ambiguity compared to traditional approaches.
> How much portion of augmented data from BPG used for experiments in Sec. 5.1?
>
As indicated in line 282, the augmentation ratio was set to 1. This means the amount of augmented data was equal to the original data.
> Given that using only BPG increases the error in Fig. 4 left, how could it be possible to achieve better performance in Table 1 and 2 when only BPG is used?
>
This is a valid concern. We addressed this question in detail in the general response under "Confusion on Experimental Results."
> Why didn't the authors use BPG in Table 6?
>
The results for multi-hypothesis models are presented in Figure 4. The integration of image features with BPG was not feasible due to the inability to generate realistic images. Additionally, as described in the Limitations section of the paper, BPG cannot generate temporal poses, which limits its application in scenarios requiring sequential data.
> What is the difference between Variant E and BPG in Table 8?
>
There is a typographical error in the manuscript. Variant E involves using only NROM and PC. Additionally, the results in Table 8 were trained using only BPG-generated data and were evaluated using the H36M test set.
## **Suggestions**
> In Fig. 4, it should be clearly stated what * means. It would be better to use GFpose+BDC instead of GFpose*.
>
Thank you for your suggestion. We will take that into account and ensure that the figure is updated to use "GFpose+BDC" instead of "GFpose*" for better clarity.
> More detailed explanation about how $T$ in Eq (1) and $d_{m,n}$ in Eq (2) are formulated would clarify the methods.
>
We appreciate your request for more details. We will add a more detailed explanation of how $T$ in Eq. (1) and $d_{m,n}$ in Eq. (2) are formulated to clarify the methods.
## Typos
Thank you for pointing out these errors. We will correct them to improve the clarity and accuracy of our manuscript.
## **Concluding Remarks**
We will diligently address the feedback from reviewer 9ZcB and integrate their valuable suggestions to enhance our manuscript. We trust that our detailed responses will resolve the highlighted concerns and lead to a positive reevaluation.
---
Rebuttal Comment 1.1:
Comment: I sincerely appreciate the authors' thorough response to my concerns and questions. I acknowledge the novelty of the method in generating samples from the distribution of possible 3D poses rather than from the dataset distribution. The response clarified the methods and experiments for me. Based on this, I am raising my score to borderline accept. | Summary: This paper address the task of 3D Human Pose Estimation from monocular RGB. The authors make two main contributions: The Biomechanical Pose Generator (BPG) and the Binary Depth Coordinates (BDC). BPG is a 3D human pose generator that leverages the "Normal Range of Motion" (NROM) that is used in the medical field to describe standard biomechanical limitations. With it, BPG is capable of generating biomechanically sound 3D human poses by randomly sampling joint angles and bones that lie within a certain ratio to each other.
BDC is a coordinate system that decompose a 3D pose into constituents. Specifically, it decomposes it into the 2D coordinate, bone length, a binary depth parameter indicating the closeness to the image plane as well as the 3D coordinates of the parent joint. This decomposition, so the authors claim, allows models to better deal with depth ambiguity.
Experimental results demonstrate that the proposed approach achieves better performance over the compared related work on a variety of datasets (cf. Tbl 1-4). Ablative studies demonstrate that BDC helps keep performance steady even in the face of larger depth ambiguity (Tbl. 5) and that related work can benefit as well from switching to the proposed coordinates (Tbl 6.)
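The decomposition described in the summary can be made concrete with a small geometric sketch (our illustrative reconstruction, not the paper's code): the child joint lies on the camera ray through its 2D pixel, at one of the two depths where that ray intersects the bone-length sphere around the parent joint, and a binary toward/away flag selects which root. All names and the toy numbers below are hypothetical.

```python
import numpy as np

def recover_child(parent, uv, bone_len, away, K=np.eye(3)):
    """Intersect the camera ray through pixel uv with the sphere of
    radius bone_len centred on the parent joint.  The ray generally
    meets the sphere at two depths (roots of a quadratic); the binary
    flag `away` selects the root farther from the camera."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    a = ray @ ray
    b = -2.0 * (ray @ parent)
    c = parent @ parent - bone_len ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        raise ValueError("ray misses the bone-length sphere")
    sign = 1.0 if away else -1.0
    depth = (-b + sign * np.sqrt(disc)) / (2.0 * a)
    return depth * ray

# Parent 2 m deep; the true child (0.1, 0, 2.5) projects to uv = (0.04, 0)
# under identity intrinsics and lies farther from the camera (away=True).
parent = np.array([0.0, 0.0, 2.0])
child = recover_child(parent, (0.04, 0.0), np.sqrt(0.26), away=True)
```

Flipping the flag returns the other intersection, in front of the parent, which is exactly the front/back ambiguity the binary depth parameter is meant to resolve.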
Strengths: - The authors properly motivate and evaluate their approach. Depth ambiguity in monocular RGB is a challenging problem to address. I particularly liked Tbl. 5 that demonstrated that BDC is capable of handling even larger depth ambiguities.
- The paper was easy to digest and understand.
- One of the main strengths of this paper is that BDC can be combined with other related work, yielding improvements (Tbl. 6)
Weaknesses: - My biggest concern about the paper is that BDC is conceptually very similar to "Hand Pose Estimation via Latent 2.5D Heatmap Regression", Iqbal et al., ECCV'18. Yet there is no mention of the paper, let alone any comparisons. The mentioned paper also addresses depth ambiguity by decomposing the 3D pose into a 2D pose and a root-relative depth vector. Addressing the differences and performing comparisons with this approach would better contextualize as well as strengthen the contribution of the paper.
- BPG is shown to improve performance by improving the 2D to 3D lifting component. Yet its contribution is rather sparse, as it essentially amounts to performing forward kinematics on bounded joint angles and bone lengths. It does not take into consideration statistics on poses. Certain poses are more common because they correspond to actual human movement patterns (such as walking) that are affected by gravity. Randomly sampling poses without taking such statistics into consideration may generate a range of unrealistic synthetic poses, leading to non-optimal improvements.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How would BPG compare to randomly sampling SMPL poses?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors address limitations of their methods, such as not taking temporal dynamics into account.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and insightful review. Your feedback on the similarities to existing research and suggestions for further comparisons are greatly appreciated and will help strengthen our work.
## How does BDC differ from "Hand Pose Estimation via Latent 2.5D Heatmap Regression" by Iqbal et al. (ECCV'18)?
We appreciate the concerns regarding the conceptual similarity of BDC. However, we would like to emphasize that our approach is differentiated from existing methods by introducing several novel components specifically designed to address depth ambiguity.
We discussed this thoroughly in the general response under "More Need for Comparison to Similar Works on BDC."
## BPG's contribution seems sparse, lacking pose statistics consideration
### Response to Concerns on BPG's Consideration of Pose Statistics
Thank you for raising concerns about BPG's consideration of pose statistics. While we acknowledge the potential issues, we believe the contribution of BPG extends beyond merely improving the 2D to 3D lifting component and addresses fundamental biases present in existing training datasets.
#### Addressing Sparse Contribution and Forward Kinematics
- **Beyond Forward Kinematics**: BPG is not limited to performing forward kinematics on bounded joint angles and bone lengths. Instead, it incorporates Normal Range of Motion (NROM) to ensure biomechanically plausible poses, which existing methods fail to generate. This approach enhances the realism and applicability of synthesized poses, moving beyond the constraints of traditional forward kinematics.
#### Consideration of Pose Statistics
- **Intentional Exclusion of Pose Statistics**: We deliberately chose not to incorporate pose statistics from existing datasets to avoid reinforcing inherent biases. These datasets often include everyday poses that do not adequately challenge the model during training and testing. By randomly sampling poses, BPG mitigates these biases, leading to a more robust training environment.
- **Example**: Existing datasets dominated by walking and standing poses could lead to a model that performs well in such common scenarios but poorly in less frequent, more complex poses. Our approach aims to prevent this by ensuring diverse and unbiased pose sampling.
#### Mitigating Unrealistic Pose Generation
- **Ensuring Realism with NROM and Pose Confidence**: While random sampling might raise concerns about unrealistic poses, we have introduced NROM and Pose Confidence metrics to ensure physical and biomechanical validity. Figures 1 and supplementary Figure 5 illustrate how BPG effectively removes biases and avoids generating implausible synthetic poses.
#### Evidence of Performance Improvement
- **Empirical Results**: The performance improvements of BPG are well-documented in Table 1 and Table 2, where even without the inclusion of pose statistics, BPG outperforms existing methodologies. This indicates that our approach not only enhances 2D to 3D lifting but also contributes to generating more varied and realistic poses.
- **Table 4 Insights**: Specifically, BPG demonstrates significant performance gains in challenging environments such as 3DPW, underscoring its robustness and generalization capabilities across different scenarios.
In summary, while BPG does not incorporate traditional pose statistics, its innovative use of NROM and random sampling addresses inherent biases and enhances the realism of generated poses. This approach not only improves performance but also ensures the model's applicability to a wider range of real-world scenarios, ultimately contributing to more robust and generalized training outcomes.
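The sampling-plus-forward-kinematics pipeline described above can be sketched in a few lines. This is a toy illustration with made-up angular bounds and bone lengths standing in for the paper's NROM tables and skeleton (both are assumptions of ours), using a planar 2-bone chain:

```python
import numpy as np

rng = np.random.default_rng(42)
# Illustrative joint-angle limits (radians) standing in for NROM bounds,
# and illustrative bone lengths (meters); neither is taken from the paper.
bounds = [(-1.0, 2.5), (0.0, 2.4)]
bone_lengths = [0.35, 0.30]

def sample_pose():
    """Draw joint angles uniformly inside the bounds, then run forward
    kinematics along a planar 2-bone chain rooted at the origin."""
    angles = [rng.uniform(lo, hi) for lo, hi in bounds]
    joints, p, theta = [np.zeros(2)], np.zeros(2), 0.0
    for ang, length in zip(angles, bone_lengths):
        theta += ang                      # accumulate relative joint angles
        p = p + length * np.array([np.cos(theta), np.sin(theta)])
        joints.append(p)
    return np.array(joints)               # shape (3, 2): root + two joints

pose = sample_pose()
# Bone lengths hold by construction, whatever angles were sampled.
assert np.isclose(np.linalg.norm(pose[1] - pose[0]), bone_lengths[0])
```

Every pose produced this way respects the stated angular limits by construction; filtering poses with a confidence score, as the rebuttal describes, would be an additional step on top of such sampling.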
## How would BPG compare to randomly sampling SMPL poses?
We appreciate your insightful question regarding the comparison between BPG and randomly sampling SMPL poses. When generating poses using random sampling with SMPL, we observed a performance metric of 120.4, which aligns closely with the performance of the Kinematic model without the Normal Range of Motion (NROM) constraints, as shown in Table 8. This result underscores the critical role of NROM in our method.
Without the biomechanical constraints provided by NROM, both a simple Kinematic model and a sophisticated model like SMPL yield similar performance outcomes. This similarity suggests that the primary advantage of BPG lies in its incorporation of NROM, which ensures that the generated poses are biomechanically plausible and reflect realistic human motion patterns.
Furthermore, even when NROM is incorporated into SMPL, the performance remains comparable to existing methodologies. However, integrating NROM directly into SMPL results in a much higher computational cost and longer processing times, making it less efficient for practical applications.
Thank you for bringing this question to our attention, as it highlights the fundamental strengths of our approach.
## **Concluding Remarks**
We will carefully address the concerns raised by reviewer 1aXZ and incorporate their valuable suggestions in our rebuttal to improve our manuscript. We hope that these responses will adequately resolve the concerns and lead to a favorable reevaluation.
---
Rebuttal 2:
Comment: Dear Reviewer 1aXZ,
Thank you very much for your thorough review and valuable feedback on our submission. We have carefully considered your comments and have submitted a detailed response addressing the points you raised.
As the discussion period is nearing its end, we would greatly appreciate it if you could review our response soon and confirm whether it addresses your concerns. If you have any further questions or need additional clarification, please feel free to let us know.
Thank you again for your time and consideration. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their insightful feedback and thought-provoking questions regarding our work. We greatly appreciate the recognition of the clarity, relevance, and novelty of our contributions.
We were pleased to receive positive comments from many reviewers.
**Reviewer 696C** acknowledged the novelty of BPG, stating, "The idea of a biomechanical pose generator which does not rely on a source dataset is novel," and appreciated our attention to depth ambiguity in 3D pose estimation.
**Reviewer Gv4b** found BPG intuitive and effective, mentioning, "Leveraging biomechanical prior knowledge to synthesize physically plausible human data is intuitive and interesting," and noted its superior generalization ability across different methods.
**Reviewer 1aXZ** highlighted our handling of depth ambiguity, stating, "Tbl. 5 demonstrates BDC's capability in managing larger depth ambiguities," and praised its combinability with other work for yielding improvements.
**Reviewer 9ZcB** recognized our method's strong performance, stating, "The proposed method achieves state-of-the-art results in various 3D HPE datasets," and validated its data augmentation effects in cross-domain learning settings.
Some concerns were raised by more than one reviewer, so we decided to address them here in this general response. The other concerns are addressed in individual responses.
## Comparison to Similar Works on BDC (1aXZ, 9ZcB)
**Handling Depth Ambiguity**: By estimating poses in discrete space rather than the continuous space used in traditional methods ([1], [2]), we can better handle depth ambiguities. This is achieved through a binary depth parameter that indicates the relative depth to the preceding joint, allowing us to manage the uncertainty of relative depth. Unlike methods that must consider a continuous range of possible depths, our approach only needs to handle two discrete possibilities for depth, significantly simplifying the prediction task and improving robustness. As a result, our approach demonstrates robust performance even for poses with significant depth ambiguity, as shown in Figure 2.
**Decomposition Strategy**: Unlike [1], which primarily decomposes the problem into 2D pose and depth, our method further decomposes it into 2D coordinates, bone length, a binary depth parameter indicating the relative depth to the preceding joint, and the 3D coordinates of the parent joint. This comprehensive decomposition allows for a more granular handling of depth ambiguity, making it easier to isolate and address specific sources of error in the pose estimation process. **The results of the NDC representing the same method as in [1] are shown in Table 7, highlighting the improved granularity and performance.**
**Direct 3D Pose Estimation**: While the depth-ordered learning method [2] employs additional models to reconstruct 3D structures and predict poses in continuous space, our methodology eliminates this step. By leveraging geometric principles, our BDC framework enables a direct and efficient transformation into 3D poses without the need for additional models. This approach simplifies the overall process and improves robustness. The simplicity of not requiring multiple models or additional reconstruction steps means our approach can be more efficient and less prone to compounding errors. As demonstrated in our comparative analysis in Table 6, our method can be seamlessly integrated into various models, showcasing its versatility and effectiveness across different architectures.
[1] U.Iqbal et al., "Hand Pose Estimation via Latent 2.5D Heatmap Regression", ECCV 2018
[2] G.Pavlakos et al., "Ordinal depth supervision for 3d human pose estimation", CVPR 2018
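To make the binary-depth idea above concrete, here is a hedged geometric sketch (not the authors' implementation): under an assumed pinhole camera with focal length `f`, a child joint is recovered from its 2D coordinate, its bone length, the parent's 3D position, and one bit selecting between the two ray-sphere intersections, which is exactly the two-way ambiguity the binary depth parameter resolves.

```python
import numpy as np

def reconstruct_child(parent_xyz, uv, bone_len, depth_bit, f=1.0):
    """Intersect the camera ray through pixel (u, v) with the sphere of
    radius bone_len centred at the parent joint; depth_bit picks the
    nearer (0) or farther (1) of the two candidate 3D positions."""
    # Back-projected ray direction for a pinhole camera at the origin
    # with (assumed) focal length f.
    d = np.array([uv[0] / f, uv[1] / f, 1.0])
    # Points on the ray are t*d; solve |t*d - parent|^2 = bone_len^2,
    # a quadratic in t with up to two roots: the depth ambiguity.
    a = d @ d
    b = -2.0 * (d @ parent_xyz)
    c = parent_xyz @ parent_xyz - bone_len ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:                      # ray misses the sphere: no solution
        return None
    roots = sorted([(-b - np.sqrt(disc)) / (2.0 * a),
                    (-b + np.sqrt(disc)) / (2.0 * a)])
    return roots[depth_bit] * d       # one binary bit resolves the ambiguity

child = reconstruct_child(np.array([0.0, 0.0, 2.0]), (0.2, 0.0),
                          np.sqrt(0.5), depth_bit=1)
# child is approximately [0.5, 0.0, 2.5]
```

The point of the sketch is that prediction only has to choose between two discrete candidates per joint, rather than regress a continuous depth, matching the argument above.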
## Confusion on Experimental Results (9ZcB, 696C)
We sincerely appreciate the reviewers' observations regarding potential confusion between the results presented in Tables 1, 2, and Figure 4. We would like to clarify that these results are derived from different types of tests, each serving to evaluate specific aspects of our method's performance.
Firstly, it is important to note the distinct types of tests from which the results in Tables 1 and 2 and those in Figure 4 are derived:
1. Single Hypothesis Test (Tables 1 and 2): These results reflect the prediction of a single 3D pose. This evaluation provides a straightforward measure of our model's performance under a standard testing scenario where one pose is predicted for each instance.
2. Multi-Hypotheses Test (Figure 4): The results depicted in Figure 4 arise from a multi-hypotheses test, where multiple possible 3D poses are predicted. This approach is particularly useful for assessing the model's capability to generate diverse pose predictions and handle greater ambiguity.
To ensure clarity and avoid any potential confusion, we provide below the performance metrics for the single 3D pose prediction scenario, using the same evaluation criteria as those employed in Figure 4, to facilitate a direct comparison:
|Model\Amount|0|0.1|0.2|0.4|0.6|0.8|1|2|4|6|8|
|-|-|-|-|-|-|-|-|-|-|-|-|
|Vpose|52.7|51.9|51.1|49.0|47.5|47.0|46.9|48.5|50.2|52.5|53.9|
These results indicate that performance in the Single Hypothesis Test initially improves in a different pattern from the Multi-Hypotheses Test. This demonstrates that using BPG alone is more effective than other existing augmentation methods. Additionally, the value at an amount of 1 matches the results presented in Table 1.
We acknowledge that presenting results from different models and tasks can create complexity and potential confusion. To enhance the comprehensibility of our findings, we are open to switching or adding performance plots for the same model. This adjustment will streamline our presentation and make it easier for readers to consistently interpret the results.
Pdf: /pdf/ef5460822bc73e2ae832a36fe62c48f045382c8b.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
Inverse Factorized Soft Q-Learning for Cooperative Multi-agent Imitation Learning | Accept (poster) | Summary: This paper extends the IQ-Learn method to cooperative multi-agent settings. The main insight is to use mixing networks to enable centralized training via decentralized Q functions.
Strengths: - The paper is quite relevant to NeurIPS and it is indeed important to extend IQ-Learn (or similar inverse learning algorithms) to multi-agent systems.
Weaknesses: - The major concern that I have is that, if my understanding is correct, the paper assumes access to the global state information. This is not realistic. In real applications, this will never be the case. So the algorithm does not seem useful in practice.
- Typo: In line 62, it should be "generalization" instead of "generation".
- In line 72, \citet should be used instead of \cite or \citep so that the author names will become a part of the sentence.
- In line 162, \eqref should be used instead of \ref so that the parenthesis will appear around the equation number.
- The architecture figure is on page 7. It would significantly increase the readability if it came earlier.
- By the time the reader reads line 191, the IGC principle is still undefined. This makes reading very difficult.
- The same thing is true at line 203, too.
- Typo: In line 241, it should be "makes" instead of "make".
- Typo: In line 242, it should be "yields" instead of "yield".
Technical Quality: 2
Clarity: 3
Questions for Authors: - How do the agents have access to the global state information? If this is the case, why does the paper even define observations? Is the global state information available only in training or after deployment, too? In what settings is this applicable?
- How could one adapt this algorithm for non-cooperative settings? Is there a straightforward way or does it require completely new approaches?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The paper does not discuss the broader impacts. I disagree that there is no potential societal impact. I invite the authors to think about the applications their algorithm may have and then consider how their algorithm would affect those applications (both positively and negatively).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > How do the agents have access to the global state information? If this is the case, why does the paper even define observations? Is the global state information available only in training or after deployment, too? In what settings is this applicable?
Thank you for the question! We would like to clarify that our model does *not* assume access to global state information of the entire environment. As briefly described in Section 4.2.1, each agent in our MARL setting only has local observations of other agents (enemies or allies) in the agent’s neighborhood. *The state information notion S in our MIFQ model is simply a combination of these local observations of our agents.* We will make this distinction clearer in the revised version of the paper. We note that this setting of local observations is standard in many previous MARL studies. Such local observations are available in both training and deployment and, we believe, are highly realistic in practical applications.
>How could one adapt this algorithm for non-cooperative settings? Is there a straightforward way or does it require completely new approaches?
We thank the reviewer for the question. Theoretically, our approach can be applied to non-cooperative settings. However, in practice, it would require a completely new algorithm. With conflicting rewards, the non-cooperative setting is much more challenging to train compared to the cooperative setting. We will definitely explore this in future work.
We also thank the reviewer for pointing out the typos, which we highly appreciate and will correct.
*We hope that the above responses address your concerns. If you have any other comments or concerns, we are more than happy to address them.*
---
Rebuttal Comment 1.1:
Title: Limitations
Comment: Regarding the broader impact of our work, since our research focuses on imitation learning in multi-agent systems, it may have potential applications similar to areas where imitation learning has been impactful, such as autonomous driving, healthcare, and game theory. There are also potential negative impacts. For instance, imitation learning could be used for surveillance purposes, following and monitoring individuals in public spaces, or for developing autonomous weapons. We thank the reviewer for bringing this up and will elaborate such impacts in detail in our revised version.
---
Rebuttal Comment 1.2:
Comment: I thank the authors for their response. After reading their answers, I did another pass of the paper and I believe I now have a better understanding of their algorithm. I will update my score and trust the authors that they will (1) clarify the confusion about global state information and the meaning of $$S$$, and (2) add the discussion about broader impacts in their camera-ready version.
---
Reply to Comment 1.2.1:
Comment: We highly appreciate the reviewer for taking the time to read our responses and for the positive feedback on our work. We will definitely improve our discussion on global state information and include a discussion on the broader impacts of our work. | Summary: This paper addresses the problem of extending a single-agent imitation learning algorithm, inverse soft-Q learning (IQ-learn, Garg et al. Neurips 21) to the multi-agent cooperative setting. The proposed algorithm, MIFQ, leverages the ideas of mixing networks and the individual-global-max (IGM) principle, to perform the extension. Experimental evaluations of MIFQ are conducted on SMAC-v2, MPE, and Gold Miner, and demonstrate that MIFQ improves over baselines across various domains and with varying numbers of demonstrations.
Strengths: The paper addresses the challenge of generalizing a key imitation learning (IL) algorithm from single-agent to multi-agent settings, offering a novel approach with MIFQ. The problem is clearly specified and represents an important contribution to the MARL literature.
The empirical results are robust:
- MIFQ outperforms most baselines with various demonstrations.
- Extensive experiments across multiple domains and tasks confirm MIFQ's superior performance.
- Comprehensive comparisons with baselines (BC, independent IQ learning, alternative Q-factorization methods, etc.) highlight MIFQ's advantages.
Weaknesses: 1. Some aspects of the method do not seem fully justified to me:
- The authors claim in lines 143-148 that a shortcoming of the IQ Learn method is that the objective depends on the centralized state and joint action. However, Section 5.4 of the IQ Learn paper presents a state-only objective (independent from the actions). I wonder if the authors could discuss whether a simple state-only extension of IQ Learn, where the critic depends on the centralized state as usual but the actor depends on the observations, would be sufficient to sidestep many of the concerns addressed by IQ Learn?
- The authors also claim in Section 4.1.2 that the straightforward Independent Inverse Q-learning is not a satisfactory solution because the method "…has limitations in addressing the interdependence between agents and the global information available during the training process." Can the authors more explicitly discuss why an independent version of IQ-learn is not satisfactory? Does it suffer from convergence problems?
2. The current experimental analysis is somewhat shallow, and essentially amounts to a description of the plots. The authors could improve the analysis of MIFQ by considering the following additional questions:
- The original IQ learn paper plots the rewards to validate that their method recovers the ground truth reward. Can the same be done here?
- Why does MIFQ perform worse than BC on MPE, particularly the reference and spread tasks?
3. There are some issues with how the experimental results have been reported.
- What is the number of trials for each of the results? Please include this in the main paper.
- The caption of Figure 2 is missing key information to understand the figure. What is the number of demonstrations used to train each of the methods? What does the shaded region mean? Based on the std devs reported in the Appendix, I assume it is the standard deviation; please see the note below and instead compute 95% confidence intervals.
- No measurements of uncertainty are provided in Table 2, and standard deviations are provided only in the Appendix. Standard deviations reflect the underlying variance in models learned by the algorithm, rather than providing a measure of statistical significance. Please also compute 95% confidence intervals to enable readers to judge the statistical significance of the gaps in mean test returns -- ideally, bootstrapped confidence intervals. See this paper for a reference on best practices: https://arxiv.org/abs/2304.01315
4. There are also some minor clarity issues:
- IGC is used in line 192, but is only explained in the following Section 4.2.2
- Definition 4.2 - this definition is not specific enough to be useful. It handwaves by only requiring that the joint policy be 'equivalent' to the collection of individual optimal policies. Equivalent in what sense?
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Questions about experiments:
- What are some reasons why MIFQ does not achieve expert level performance? While the other methods also do not achieve expert level performance, the original IQ learn algorithm does have this ability.
- How does the method perform with demonstrations not sourced from MAPPO (an algorithm that learns gaussian policies)? For example, demonstrations sourced from QMIX, which learns 'hard max' policies?
- Why does the method need an order of magnitude more demonstrations than IQ Learn needs on complex single-agent tasks?
2. Method:
- Why is it necessary to maintain Q and V networks separately? Why not derive the global V function by computing the softmax of the Q functions as described in line 163-164?
- Why is it necessary to compute Q^tot via Q^tot = -M (-Q)? What is the purpose of the double negation? The stated justification is that this enables the method "to achieve the IGC principle and the convexity", but why exactly is this? Requiring the networks to be multi-layer feedforward with nonnegative weights and convex activation functions (lines 194-195) is enough to ensure that Q^tot is monotonic w.r.t. the local Q functions, thus ensuring the IGC principle and convexity.
- Would major changes be necessary to enable this algorithm to operate on continuous action spaces? Did the authors consider continuous action space settings?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reading our paper and providing us with valuable questions and suggestions.
> I wonder if the authors could discuss whether a simple state-only extension of IQ Learn ...
Our argument in lines 143-148 simply means that directly using the global Q, V, and global state would be impractical in multi-agent settings. This is not a limitation of IQ-Learn but a well-known challenge when extending single-agent models to multi-agent settings. This is also why the centralized training decentralized execution (CTDE) approach has become appealing for MARL. The state-only approach in Section 5.4 of the IQ-Learn paper is only useful when actions are not available. Applying this in our context is unsuitable because action observations are available.
> Can the authors more explicitly discuss what the shortcomings of an independent version of IQ-learn ...
If we learn the local Q independently by solving (4) for each agent, it implies that we neglect the interactions between agents. This approach ensures convergence to individual local policies but does not maintain consistency between local and global policies, as required by well-known principles such as IGO and IGM, which are necessary for a successful MARL algorithm.
>The original IQ learn paper plots the rewards to validate that their method recovers the ground truth reward. Can the same be done here?
Visualizing rewards in multi-agent settings is much more challenging compared to the single-agent setting due to the vast joint state and action space. So far, we are unsure how to obtain meaningful visualizations for rewards in multi-agent tasks. Therefore, we will keep this for future investigation.
>Why does MIFQ perform worse than BC on MPE, particularly the reference and spread tasks?
As mentioned in the paper, MPEs are deterministic environments (i.e., no dynamics), and BC typically performs well on such deterministic tasks.
>What is the number of trials for each of the results?
We briefly mentioned these numbers in Table 2 of the appendix. Each number in Table 1 is computed based on 4 seeds and 32 evaluation runs per seed. We will add this information to the main paper.
>What is the number of demonstrations used to train each of the methods? What does the shaded region mean? ...
The number of trajectories is 128 for MPEs and 4096 for Miner and SMAC-v2. We will clarify this in the caption of Figure 2. Additionally, the reviewer's point regarding the 95% confidence interval is well taken. We will compute these and update the paper accordingly.
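For the 95% confidence intervals mentioned above, a percentile-bootstrap computation might look like the following sketch (the returns here are synthetic stand-ins, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic per-episode test returns standing in for one method's runs.
returns = rng.normal(loc=50.0, scale=5.0, size=128)

# Resample with replacement and collect the mean of each resample.
boot_means = np.array([
    rng.choice(returns, size=returns.size, replace=True).mean()
    for _ in range(10_000)
])
# Percentile bootstrap: the 2.5th and 97.5th percentiles bound the 95% CI.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
assert lo < returns.mean() < hi
```

Reporting `mean [lo, hi]` per table cell would let readers judge whether the gaps between methods are statistically meaningful, as the reviewer requested.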
>IGC is used in line 192, but is only explained in the following Section 4.2.2. Definition 4.2 - this definition is not specific enough to be useful. ...
We appreciate the reviewer for pointing these out. We will remove the mention of IGC in line 192. In Definition 4.2, equivalence means that the joint policy is equal to the product of local policies. We will clarify this.
> What are some reasons why MIFQ does not achieve expert level performance? ...
This was stated as a limitation of our approach (and other existing multi-agent IL algorithms as well). Multi-agent tasks are much more complex, making it difficult to recover the expert policy. Increasing the amount of expert demonstrations might help, but it also leads to an excessively large replay buffer, causing out-of-memory issues. Addressing this limitation will require further efforts, which we plan to pursue in future work.
>How does the method perform with demonstrations not sourced from MAPPO ? ...
In our context, MAPPO achieves the best policy in MARL, so we use it as an expert. The main reason is that it is not reasonable to use a sub-optimal policy as an expert for imitation, as a sub-optimal solution to the imitation learning problem could yield better rewards than the expert, thus biasing the evaluation.
>Why does the method need an order of magnitude more demonstrations than IQ Learn needs on complex single-agent tasks?
The main reason is that multi-agent tasks are much more complex than single-agent tasks, with much larger action and state spaces. Therefore, much more data is needed to understand the environment, requiring significantly more demonstrations for the imitation learning.
> Why is it necessary to maintain Q and V networks separately? Why not derive the global V function by computing the softmax of the Q functions as described in line 163-164?
We have discussed this in Section B.4 of the appendix. The main reason for our approach is to make the algorithm practical. Directly computing $V$ through the global $Q$ is generally impractical because it requires sampling over a joint action space, which is exponentially large. We actually attempted this approach, but it did not work at all—the algorithm couldn't learn anything, and the win rates were always zero. Therefore, we did not include this approach in the comparison. In our approach, we compute the global $V$ using local $Q$ values (which only require sampling over the local action space, making it much more feasible) and then aggregate the global $V$ using the mixing network.
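The tractability point above can be illustrated with the soft value V = logsumexp_a Q(s, a). A joint computation enumerates |A|^n joint actions, while per-agent local values need only n * |A| terms. The sketch below uses illustrative numbers, and assumes an additive joint Q purely for the consistency check (unlike the paper's learned mixing network):

```python
import itertools
import numpy as np

n_agents, n_actions = 3, 5
rng = np.random.default_rng(1)
local_q = rng.normal(size=(n_agents, n_actions))  # per-agent Q(s, a_i)

def lse(x):
    """Numerically stable log-sum-exp (the soft value operator)."""
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

# Local soft values: only n_agents * n_actions = 15 terms in total.
local_v = np.array([lse(q) for q in local_q])

# A joint soft value enumerates n_actions**n_agents = 125 joint actions,
# exponential in the number of agents in general.
idx = np.arange(n_agents)
joint_v = lse([local_q[idx, a].sum()
               for a in itertools.product(range(n_actions), repeat=n_agents)])

# For an additive joint Q the joint soft value factorises exactly into
# the sum of local soft values, matching the cheap per-agent computation.
assert np.isclose(joint_v, local_v.sum())
```

With a non-additive mixing network the exact factorisation no longer holds, which is why aggregating the local values through the mixer, as the rebuttal describes, is the practical route.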
> Why is it necessary to compute Q^tot via Q^tot = -M (-Q)?
The main reason for this double negation is not only monotonicity, as $Q^{tot} = M(Q)$ would be sufficient for that purpose. We use this approach to ensure that the global objective function is concave in $Q$ (Theorem 4.5).
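A minimal numpy sketch of the monotone mixing construction under discussion (illustrative random weights and a ReLU activation; an assumption-laden toy, not the paper's MIFQ network):

```python
import numpy as np

rng = np.random.default_rng(0)
# Nonnegative weights + convex nondecreasing activations make M convex
# and monotone nondecreasing in its inputs.
W1 = np.abs(rng.normal(size=(4, 3)))
W2 = np.abs(rng.normal(size=(1, 4)))

def M(q):
    return (W2 @ np.maximum(W1 @ np.asarray(q, dtype=float), 0.0)).item()

def q_tot(q):
    """Q_tot = -M(-Q): still monotone in the local Qs, but concave."""
    return -M(-np.asarray(q, dtype=float))

# Monotonicity check: raising any local Q never lowers Q_tot.
q = np.array([1.0, -0.5, 2.0])
assert all(q_tot(q + eps * np.eye(3)[i]) >= q_tot(q)
           for i in range(3) for eps in (0.1, 1.0))
```

M(q) alone would also be monotone; the double negation is what turns the convexity of M into concavity of Q_tot in the local Qs, which is the extra property the objective requires.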
>Did the authors consider continuous action space settings?
So far, our algorithm is generally not suitable for continuous action spaces. However, all the environments under consideration have discrete action spaces and are taken from prior SOTA MARL works. Extending the approach to continuous action spaces would require further investigation. We plan to explore this in future work.
*We hope that the above responses address your concerns. If you have any other comments or concerns, we are more than happy to address them*
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing most of my concerns and questions. The only question whose answer I wasn't completely satisfied with is why the Q and V networks are maintained separately.
The authors argued that sampling actions from the joint policies is intractable due to the size of the joint action space. However, the paper only addresses scenarios where the number of agents is relatively limited (up to 10 agents). Further, this cost would only be incurred during the training phase. Since sample efficiency is not a primary objective of this paper, I don't think this is a key issue.
Since the authors performed the experiment, perhaps they can add the results of directly computing V through Q to the appendix.
In any case, I am largely satisfied with the author's rebuttal and will raise my score.
---
Reply to Comment 1.1.1:
Title: Thank you for the reply!
Comment: We thank the reviewer for reading our responses and for maintaining a positive outlook on our paper.
> The authors argued that sampling actions from the joint policies is intractable due to the size of the joint action space. However, the paper only addresses scenarios where the number of agents is relatively limited (up to 10 agents). Further, this cost would only be incurred during the training phase. Since sample efficiency is not a primary objective of this paper, I don't think this is a key issue.
At this point, we have found that this approach (computing V directly via Q) does not work in our context. There might be ways to overcome this, and we will explore them in the future. Thank you for your feedback!
> Since the authors performed the experiment, perhaps they can add the results of directly computing V through Q to the appendix.
Thank you for the suggestion. We will definitely include these additional experiments (directly computing V through Q) in our paper. | Summary: This paper presents a novel algorithm, Multi-agent Inverse Factorized Q-learning (MIFQ), for cooperative multi-agent imitation learning (IL). It extends the inverse soft-Q learning framework to multi-agent settings by introducing a mixing network architecture for centralized training with decentralized execution. This enables learning local and joint value functions effectively. The authors conducted extensive experiments across multiple challenging environments, demonstrating that their approach outperforms existing methods.
Strengths: - The introduction of a multi-agent extension of inverse soft-Q learning using factorized networks is a significant and novel contribution to the field of IL.
- This paper is well-written and organized, and provides a sound theoretical analysis.
- The empirical results across three different environments, including a complex version of the StarCraft multi-agent challenge, are impressive. The proposed method outperforms existing baselines.
Weaknesses: As someone who is not an expert in the field of imitation learning, I perceive no significant weaknesses in this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Figure 2, the semi-transparent curves are not standardly explained. If these do not represent standard deviations, what statistical measure do they depict?
- Minor Error: On Line 62, the term "generation" is used where "generalization" might be intended. Could the authors clarify or correct this in the context?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for reading our paper and for the positive feedback.
> In Figure 2, the semi-transparent curves are not standardly explained. If these do not represent standard deviations, what statistical measure do they depict?
They represent standard deviations. We will clarify this in the paper.
> Minor Error: On Line 62, the term "generation" is used where "generalization" might be intended. Could the authors clarify or correct this in the context?
It should be "generalization"; this is a typo. We will correct it.
*We hope that the above responses address your concerns. If you have any other comments or concerns, we are more than happy to address them.* | Summary: The paper addresses the imitation problem in cooperative Multi-Agent Reinforcement Learning (MARL). It extends inverse soft-Q learning to the multi-agent domain by leveraging value factorizations under the Centralized Training with Decentralized Execution (CTDE) paradigm. Experimental results demonstrate the effectiveness of the proposed approach across several environments.
Strengths: - The study of imitation learning in MARL is a valuable and relevant research problem, and the paper provides promising solutions.
- The experimental results are robust and convincingly support the proposed method's effectiveness.
Weaknesses: - The paper's organization could be improved. The current structure alternates between theory and architecture without a clear flow.
- The similarity between IGC and IGO[1] requires further clarification.
- The objective function (6) introduces sub-optimality compared to the original objective (3) due to the restriction that $Q^{tot}$ and $V^{tot}$ must be monotonic. Additionally, since $Q^{tot}$ and $V^{tot}$ use different mixing networks, the relationship between them violates Equation (2). This indicates that Equation (6) does not represent the same objective as Equation (3), even without considering the sub-optimality introduced by factorization. These issues need further theoretical exploration and discussion.
- Although the experimental results are promising, the superior performance seems to stem from the QMIX algorithm's advantage over other MARL algorithms. An important missing baseline is the soft actor-critic version of IQ-Learn, which uses a centralized Q function with decentralized critics and does not seem to violate the original objective.
[1] Zhang, et al., FOP: Factorizing Optimal Joint Policy of Maximum-Entropy Multi-Agent Reinforcement Learning, ICML 2021.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Is BC trained online, given that it shows learning curves with environment steps? If so, why not use DAGGER?
2. Could the authors explain why MIFQ significantly outperforms IQVDN? Is it solely due to the factorization structure?
3. Why does the paper state that QPLEX is unsuitable for the proposed method? QPLEX also has $\partial Q/\partial Q_i>0$.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are discussed in conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for reading our paper and the insightful comments and questions.
> The paper's organization could be improved.
Thank you for the feedback! We will revise our writing and improve our exposition.
> The similarity between IGC and IGO[1] requires further clarification.
Thank you for mentioning this. IGO and our IGC are indeed similar. However, IGC is more specific in that it requires the local policies, obtained by solving the local objective functions, to be equivalent to the global policy obtained by the global objective function.
> The objective function (6) introduces sub-optimality compared to the original objective (3) due to the restriction that $Q^{tot}$ and $V^{tot}$ must be monotonic. Additionally, since $Q^{tot}$ and $V^{tot}$ use different mixing networks, the relationship between them violates Equation (2). This indicates that Equation (6) does not represent the same objective as Equation (3), even without considering the sub-optimality introduced by factorization. These issues need further theoretical exploration and discussion.
Thank you for the insightful comments. We agree that the factorized Q-learning objective violates Eq. (3). The main reason we follow this approach is that the relationship between $V$ and $Q$ cannot simultaneously hold at both the global and local levels, i.e., (2) and (5) cannot hold simultaneously (we discussed this in Section B2 of the appendix). On the other hand, maintaining the relationship as in (2) is impractical because it requires computing $V^{tot}$ via a global Q-function and global policy $\Pi$. Therefore, we choose to keep (5) valid and build our factorization approach on it (computing local V functions via (5), and computing $Q^{tot}$ and $V^{tot}$ via the mixing networks, is indeed less challenging and more practical).
Furthermore, since (5) holds, each individual objective helps match the individual learning policy with the corresponding individual expert agent. Since IGC holds, our training can ensure global convergence and consistency across all agents.
> Although the experimental results are promising, the superior performance seems to stem from the QMIX algorithm's advantage over other MARL algorithms. An important missing baseline is the soft actor-critic version of IQ-Learn, which uses a centralized Q function with decentralized critics and does not seem to violate the original objective.
Thank you for the comment and suggestion. There are two reasons we did not extend the soft actor-critic version of IQ-Learn to our multi-agent setting. First, as mentioned, it requires computing the global V function via the global Q function and global $\Pi$, which is impractical for multi-agent settings. Second, soft-actor-critic (SAC) methods only work well for continuous-action-space environments. All the environments we considered (following prior SOTA MARL papers) have discrete action spaces, making direct Q-learning algorithms more suitable. To support this argument, we have conducted an additional experiment, detailed in the attached 1-page PDF, where we compare a SAC IQ-learn adapted to multi-agent tasks. The results generally show that SAC-IQ performs worse than our algorithm, MIFQ. We will include these results in the paper.
>Is BC trained online, given that it shows learning curves with environment steps? If so, why not use DAGGER?
Our BC was trained offline. The learning curves actually reflect our evaluations after certain training steps. We will clarify this in our paper.
>Could the authors explain why MIFQ significantly outperforms IQVND? Is it solely due to the factorization structure?
Yes, IQVDN is simply a linear combination of local functions, while MIFQ leverages our two-layer mixing networks with learnable parameters. Previous work has also shown that QMIX generally outperforms VDN for this same reason.
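To make this contrast concrete, here is a minimal, hypothetical sketch (our own illustration, not the paper's code) of VDN-style summation versus a QMIX-style monotonic mixing network; all shapes, weights, and the ReLU/abs choices are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: VDN mixes local Q-values with a plain sum,
# while a QMIX-style mixer passes them through a small network whose
# weights are made non-negative (via abs) so that dQ_tot/dQ_i >= 0.

def vdn_mix(q_locals):
    # Linear combination: a simple sum of local Q-values.
    return float(np.sum(q_locals))

def monotonic_mix(q_locals, w1, b1, w2, b2):
    # abs() enforces non-negative mixing weights; ReLU keeps monotonicity.
    h = np.maximum(0.0, np.abs(w1) @ q_locals + b1)
    return float(np.abs(w2) @ h + b2)

rng = np.random.default_rng(0)
q = rng.normal(size=3)                                # 3 agents' local Q-values
w1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)  # learnable mixing params
w2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

q_tot = monotonic_mix(q, w1, b1, w2, b2)
```

Raising any agent's local Q-value can never decrease the mixed `q_tot`, which is the monotonicity property the mixing structure provides over a plain sum.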
>Why does the paper state that QPLEX is unsuitable for the proposed method? QPLEX also has $\partial Q/\partial Q_i>0$.
The monotonicity of the Q function is simply a corollary of our mixing structure and is not a key target when constructing our learning objective. Additionally, while QPLEX utilizes the advantage function (A = Q - V), our objective is different and such an advantage function is unsuitable to use. We will elaborate more on this point in the updated paper.
*We hope that the above responses address your concerns. If you have any other comments or concerns, we are more than happy to address them.*
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I have increased my initial score.
---
Reply to Comment 1.1.1:
Title: Thank you for reading our responses!
Comment: We thank the reviewer for reading our responses and for the prompt reply! | Rebuttal 1:
Rebuttal: We thank the reviewers for carefully reading our paper and providing constructive feedback and questions, which we have been happy to consider and clarify. Please find a summary of our responses below.
**Reviewer GGqd** raised a concern about the fact that equation (2) does not hold under our mixing architecture. In response, we have clarified that the relationship between V and Q cannot be satisfied at both local and global levels simultaneously. Therefore, we chose to keep the equation valid at the local level, making our algorithm practical. In contrast, maintaining the V-Q equation at the global level (Eq. 2) would require sampling over the joint action space, which is impractical, especially for large-scale tasks such as SMAC_v2.
The reviewer also mentioned a soft-actor-critic (SAC) IQ-learn as a missing baseline. In response, we argued that such an SAC algorithm is neither suitable nor practical in our multi-agent setting. To support our arguments, we have conducted an additional experiment, detailed in the attached 1-page PDF, where we compare a SAC IQ-learn adapted to multi-agent tasks. The results generally show that SAC-IQ performs worse than our algorithm, MIFQ. We will include these results in the paper.
We have also provided detailed responses to other questions regarding why DAGGER is not used, why MIFQ outperforms IQVDN, and why QPLEX is not suitable. We will update our paper to clarify these points.
**Reviewer vZKY** has a clarification question about the curves in Figure 2 and pointed out a typo. We have provided a response to address this.
**Reviewer v3qG** raised several questions and requested clarification on the following points: (i) whether the state-only approach in the IQ-learn paper is applicable, (ii) why independent IQ-learn is limited, (iii) why MIFQ cannot achieve expert performance as in the single-agent IQ-learn setting, (iv) why MIFQ performs worse than BC on MPE, (v) how the method performs with demonstrations not sourced from MAPPO, (vi) why our method needs more demonstrations than single-agent IQ Learn, (vii) why it is necessary to maintain Q and V networks separately, and (viii) whether the authors considered continuous action space settings. In response, we have provided detailed answers to each question.
The reviewer also requested clarification regarding the number of trials for our results, the number of demonstrations used in the experiments, and the standard deviations reported in Figure 2. We will clarify these points in the paper.
**Reviewer 9wvM** raised a concern about the use of global state information in our training algorithm, which makes it impractical. In response, we clarified that we only assume access to local observations of neighboring agents. These local observations are available in both training and deployment and are highly realistic in practical applications. The use of such information is also standard in previous multi-agent reinforcement learning algorithms. We will clarify this point in the updated paper.
The reviewer also asked whether our algorithm can be applied to non-cooperative settings. In response, we believe that the non-cooperative setting would be much more challenging and would require a new MARL algorithm, which we will explore in future work. The reviewer also pointed out some typos, which we highly appreciate and will correct.
*We thank all the reviewers for their comments and feedback, which we have tried to address and clarify. If you have any further questions, we are more than happy to discuss and clarify them.*
Pdf: /pdf/315dbb6100d43c87c342398eddd5ea980963c3cd.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Approximated Orthogonal Projection Unit: Stabilizing Regression Network Training Using Natural Gradient | Accept (poster) | Summary: The paper proposes a novel training framework for regression tasks called the Approximated Orthogonal Projection Unit (AOPU), optimized using truncated natural gradients. The authors utilize the Rank Rate (RR) of the augmented data covariance matrix as a metric. They demonstrate that their method offers more stable training than existing architectures and optimizers, which is crucial for industrial applications requiring online training during production. Additionally, the authors provide a comprehensive analysis of their setup's convergence.
Strengths: 1. Detailed introduction on the background and intuition.
2. The method is very simple.
3. A thorough theoretical analysis of the method was provided.
Weaknesses: 1. Poorly arranged paper; conclusions are at the end of the appendix.
2. In the introduction, the authors claim that their methods improve interpretability, but they do not explain later why that matters. Also, many existing works explain the behavior at a neuron level; it is not clear why one has to track the parameter itself.
3. Experimental qualities are not good; no hyperparameter search is mentioned in the paper, which is essential when the authors claim that their method improves training stability.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Did the authors try to tune the learning rate or other hyperparameters carefully for all other methods? Those bad-performing/unstable training curves in Fig 5 might be because the authors used only one specific learning rate across all those settings.
2. Could the authors offer more insights on the augmented data $\tilde{x}$? Why did the authors choose a random Gaussian matrix as the go-to augmentation?
3. Could the authors explain how the data was fed into the model? Suppose we have sensors U1-U7 in Debutanizer, each with 2394 records. I assume those data are the inputs to the network, so what are the targets? Also, for sequence models, the way those data were fitted into sequence matters, so how did the authors implement that?
I am willing to increase my rating if the authors could address my concerns.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to reviewer SnqU:**
We are grateful for the time and effort you have invested in reviewing our manuscript. We take each of your concerns seriously and are confident that we can address all issues raised to your satisfaction.
**Response to weaknesses 1:**
Due to our focus on rigorous experimental validation and theoretical foundations, the manuscript may have become overly extensive. If possible, we will reduce the narrative and formulas in the main text and move quantitative analysis, experimental details, and conclusions into the main text to enhance the integrity of the paper.
**Response to weaknesses 2:**
There are many neuro-level studies of NN focusing on mimicking biological features or mathematical patterns, presuming these structures are advantageous and subsequently validating them through experiments. However, such methodologies often lack practical guidance. For instance, the autocorrelation block introduced in Autoformer, which substitutes attention with autocorrelation, is presented as an effective enhancement. However, it is challenging to ascertain at a neurological level whether neurons adhere faithfully to the assumptions during deployment, i.e., capturing terms with the largest autocorrelation coefficients. Moreover, it is difficult to evaluate whether these assumptions positively or negatively impact NN. As a result, we are limited to analyzing outcomes without theoretical guidance, making such approaches novel yet lacking in interpretability.
AOPU’s interpretability differs from conventional research by emphasizing a deep understanding of the overall behavior of networks through the analysis of RR. For example, if the RR is very low, it indicates that the mini-batch data is more homogeneous, potentially leading to performance decline due to accuracy loss in inverse calculations and homogenized data. Conversely, if the RR is high, it suggests that the data quality is good, and the truncated gradient is effective, approximating the natural gradient (Newton method). This situation is akin to achieving a minimum variance estimation, which typically results in good performance.
**Response to weaknesses 3:**
The hyperparameter selection is guided by two principles: first, to ensure the model size of the various comparative methods remains comparable; second, to choose hyperparameters that optimize model performance. We present the detailed hyperparameter information in our **global response**, Fig. 2, where we can see that for SRU a smaller setup is recommended, while for the Debutanizer a bigger model possibly leads to better, but still limited, performance. However, the model size of the compared methods increases dramatically with layers and hidden dims, which means that parameter efficiency drops. Therefore, we chose hyperparameter settings that keep the model size comparable to that of AOPU, maintaining a balance between performance and efficiency.
**Response to question 1:**
We fully understand the reviewer's perplexity. As noted in the **Experiment Implementation** section, lines 246 to 247, the learning rate (lr) of AOPU is set 200 times larger than that of the other methods, yet training remains even more stable. This is because the natural gradient is much more conservative than the conventional gradient. We conducted two experiments in our **global response**: Fig. 1 vividly demonstrates the difference between the two gradient methods, explaining the superiority of our approach, and Fig. 3 complements the experiments with different lr setups to address the reviewer's concerns.
The major difference between the natural gradient and the conventional gradient is not the step size but the direction. The conventional gradient ignores the parameter manifold and treats every parameter equally. The natural gradient divides the gradient by its second derivative, thus treating sensitive parameters carefully (small updates) and non-sensitive parameters boldly (large updates). That adjustment results in different gradient directions and contributes to better convergence.
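As a toy illustration of this point (our own sketch, not AOPU's implementation), the snippet below compares a plain gradient step with a natural-gradient step using a diagonal Fisher approximation; all names and numeric values are hypothetical.

```python
import numpy as np

# Toy comparison: a plain gradient step versus a natural-gradient step that
# rescales each coordinate by the inverse of its (diagonal) Fisher entry.

def plain_step(theta, grad, lr):
    # Every parameter is treated equally.
    return theta - lr * grad

def natural_step(theta, grad, fisher_diag, lr, eps=1e-8):
    # Sensitive parameters (large Fisher entries) move cautiously;
    # insensitive ones (small entries) move boldly.
    return theta - lr * grad / (fisher_diag + eps)

theta = np.array([1.0, 1.0])
grad = np.array([2.0, 2.0])        # identical raw gradient in both coordinates
fisher = np.array([100.0, 0.01])   # coordinate 0 is far more sensitive

p = plain_step(theta, grad, lr=0.1)
n = natural_step(theta, grad, fisher, lr=0.1)
# The plain step moves both coordinates by the same amount, while the natural
# step barely moves the sensitive coordinate and moves the other strongly,
# i.e., the *direction* of the update changes, not just its length.
```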
Due to space limitations, we presented the results of different lr setups only for LSTM and AOPU. Our findings indicate that AOPU is significantly more robust than LSTM with respect to changes in lr.
**Response to question 2:**
We find there is a misunderstanding regarding the construction of the aug model in AOPU. It is not necessary for AOPU to follow the specific method we used in the manuscript, which adheres to the traditions of the Broad Learning System and RVFLNN. AOPU’s aug projection can utilize any compatible DNN architecture to model nonlinearity.
Differentiating from traditional DNNs that use various stacked nonlinear structures for end-to-end modeling, AOPU separates nonlinearity and input-output modeling into two independent areas of study and focuses on the latter one.
Due to space limitations, we defer detailed clarification to our **global response.**
**Response to question 3:**
We thank the reviewer for emphasizing the problems.
The targets for the Debutanizer and the SRU (Sulfur Recovery Unit) are the butane content in the bottom flow and the SO2 concentration in the tail gas, respectively. We apologize for any inconvenience caused by the lack of detailed and clear descriptions of the datasets. These datasets are referenced in the book Soft Sensors for Monitoring and Control of Industrial Processes, Appendix A, on pages 231 and 234, respectively.
During training, we first construct an input matrix with the shape [b, s, d], where b represents the batch size, s the sequence length, and d the input dimensions. For models requiring sequential computation or global query operations, such as LSTM, Autoformer, and Informer, we feed the model with the matrix in this form. For other models that do not explicitly require sequential operations, the matrix is reshaped to [b, s*d] before being fed into the model. This process is described in detail in the **Experiment Implementation** section, lines 248 to 254.
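A minimal sketch of this preprocessing step, with hypothetical dimensions:

```python
import numpy as np

# Hypothetical shapes: b = batch size, s = sequence length, d = input dims.
b, s, d = 4, 8, 7
x = np.random.randn(b, s, d)

seq_input = x                      # fed as-is to LSTM / Autoformer / Informer
flat_input = x.reshape(b, s * d)   # fed to models without sequential operations
```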
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed response and extra experiments. The response addressed part of my concern about hyperparameter scanning.
I am still very puzzled by the argument about untrackable parameters. Suppose we have an augmentation block like $f = w_2 \phi(w_1 x)$, then the derivative needed for calculating Fisher Information can be easily obtained by following chain rules:
$$
\begin{aligned}
\nabla_{w_1} L &= \frac{\partial L}{\partial f} \frac{\partial f}{\partial w_1} \\\\
&= \frac{\partial L}{\partial f} (w_2 \phi'(w_1 x) x)
\end{aligned}
$$
Similarly for $\nabla_{w_2} L$.
Could the authors elaborate more on the importance of untrackable parameters?
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer’s careful review of our response and thank you for emphasizing this issue.
**The untrackability is associated with accelerating the Natural Gradient(NG) computation**. The key to AOPU’s capability to rapidly approximate the NG is its ability to bypass the Fisher Information Matrix (FIM) computation and its inverse. The key to the capability to skip the FIM lies in our utilization of Eq. 13, line 572, which separates the model parameters from the random variables and data. In this context, we can use the gradient matrix of the natural parameter for the expectation parameter to replace the FIM (without actually performing this calculation), thereby substituting the original complicated NG computation with an equivalent conventional gradient computation, i.e., $\nabla_\lambda m=F(\lambda)$ ⇒ $\nabla_m \lambda=(F(\lambda))^{-1}$, so that
$(F(\lambda))^{-1}\nabla_{\lambda}\mathcal{L}$ ⇒ $\nabla_{m}\mathcal{L}$.
**This allows us to use the automatic differentiation toolbox for rapid calculations.**
The reviewer's perspective is indeed correct. With automatic differentiation toolboxes and the chain rule, computing the derivatives of the parameters (first and second derivatives, as well as partial derivatives) is straightforward, but this approach lacks the trackability that separates parameters from random variables and data. In this context, **we must explicitly compute the inverse of the network’s FIM, which is very time-consuming and memory-intensive.** For instance, a network with 20 kB of 32-bit parameters, which equates to 5120 trainable parameters, requires inverting a 5120-dimensional matrix at each training iteration. This requirement grows with model size and can easily lead to GPU memory shortages. **More critically, such large matrix inversions often lead to significant numerical precision loss, severely impairing model performance.**
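A back-of-the-envelope check of the cost mentioned above (the 5120-parameter figure comes from our example; the rest is simple arithmetic):

```python
# 20 kB of 32-bit (4-byte) parameters -> 5120 trainable parameters.
n_params = 20 * 1024 // 4

# A dense float32 FIM over these parameters would have to be materialized
# (and inverted) at every training iteration.
fim_bytes = n_params ** 2 * 4
fim_mib = fim_bytes / 2**20  # memory for the FIM of this small network alone
```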
Indeed, the process of computing the truncated gradient in AOPU also introduces a new matrix inversion, which prompted our analysis of RR. Unlike the FIM, which grows with model size, the inverse matrix size introduced by the dual parameter is manageable, equal to the batch size, and provides clear guidance for model improvements.
We have great confidence in the contributions and impact of AOPU, but we are also very honest about its limitations and are open to your critiques. We welcome any additional concerns the reviewer may have and are committed to addressing them promptly.
---
Reply to Comment 1.1.2:
Title: The aug block presented cannot be updated so far
Comment: We find the reviewer may be confused regarding the parameter training of the aug block, we would like to clarify that.
For the presented aug block in the manuscript, the random Gaussian weight matrix is fixed and will not be updated, as we stated in line 172. This adheres to the tradition of the Broad Learning System and RVFLNN.
For the potential research mentioned in our **global response**, we pointed out a valuable direction to incorporate other DNN models with AOPU to clarify that AOPU is not limited. However, the implementation detail is lengthy and complicated, e.g. the partial natural gradient. While valuable, such details are out of the scope of AOPU so we did not dive deep in the manuscript.
The manuscript focuses on the online deployment of models in the industrial context, introduces the structure of AOPU, theoretically demonstrates why dual parameter and truncated gradient should be computed, and experimentally verifies its stable convergence ability.
We believe the manuscript is already very informative.
We proposed AOPU as a unified soft sensor deep learning architecture that focuses on **the second step**. To avoid tedious repetition, the reviewer can find more information in the **global response** and in **Response to weaknesses 2 of reviewer 8Lrh**.
---
Reply to Comment 1.1.3:
Comment: Dear reviewer SnqU
We hope our response can provide clarity and assistance. If you have any additional concerns, we will be more than happy to help address them.
---
Rebuttal 2:
Comment: We sincerely appreciate the reviewer’s recognition, your support has greatly encouraged us!
We will briefly provide an overview of the modifications to the manuscript, followed by a detailed explanation of the adjustments made in each section.
**Overview**: The modifications to the manuscript mainly focus on three areas: 1. Reducing or compressing less important content in the main text, 2. Moving important content from the appendix to the main text, and 3. Integrating significant content from the rebuttal-discussion into the main text.
For the **Abstract**, we will retain it completely.
For the **Introduction** and **Related Work**, we will keep the content of the **Introduction** as it forms the basis of the application value of AOPU. We will integrate the **Related Work** into the **Introduction,** as this section provides less insight, forming a new section termed **Introduction and Related Work,** which will save us about 1 page of space.
For the **AOPU Methodology,** we will first delete the introductory paragraphs of each section and subsection. The **Trackable and Untrackable** section will be moved to the **Mathematic Proof** in appendix. We will refine the descriptions in the **Network’s Structure** and finally integrate the **Network’s Mechanism** into the **Network’s Structure**. These adjustments will allow us to better explain the physical significance of the dual parameter and truncated gradient without causing a disjointed reading experience, saving us about 1 page of space.
For the **Experiments and Analysis**, we will also delete the introductory paragraphs of each section and subsection. We will compress the content of the **Dataset Description**. Since the **Baselines** and **Experiment Implementation** are very important, they will remain unchanged. For the **Main Result**, we will compress the content in **How certain we are about the inverse** and move it to the appendix. The section **Is the training stable**, due to its importance, will not be modified. These changes will save us about 0.5 pages of space.
NeurIPS provides an additional 1 page of space for the revised manuscript. Currently, we have saved approximately 3.5 pages of space.
We plan to integrate our discussion of the **two steps** from the **global response** into the **Introduction and Related Work** section, providing more details on the differences between AOPU and conventional DNN research. This will require about 0.3 pages of space.
In the **AOPU Methodology** section, before the **Network’s Structure** section, we will add an additional section named **Natural Gradient vs. Gradient,** incorporating our explanation of their differences (conservative and radical) from the **global response.** This will require about 0.3 pages of space.
At the end of the **AOPU Methodology** section, we will emphasize why AOPU can achieve efficient NG computation, the compromises AOPU has made, and supplement this with the discussion we had with reviewer SnqU. This will require about 0.2 pages of space.
We can move the important **Quantitative Analysis** and **Conclusion and Limitations** sections from the appendix to the end of the main text after refinement, enhancing the structure and completeness of the manuscript. This will require about 2.7 pages of space.
Finally, in the appendix, we can add the descriptions from our **global response** regarding hyperparameter selection and the experimental results under different learning rate setups. This will be helpful for readers who hold concerns about hyperparameter selection.
Through this reorganization of AOPU's structure, we have integrated the content from the authors-reviewers rebuttal and discussion process into the manuscript, greatly enhancing the manuscript's completeness and readability, and highlighting the contributions, impact, and value of AOPU.
We hope these improvements will lead the reviewer to consider raising the rating for AOPU.
Title: Manuscript improvements
---
Rebuttal Comment 2.1:
Comment: Given the discussion I had with the authors, I believe the authors could update the manuscript properly and prepare an intact, camera-ready version. I have updated my score accordingly.
---
Reply to Comment 2.1.1:
Comment: We want to thank the reviewer for your recognition, it means a lot to us!
If you believe there is further room for improvement with AOPU, we will promptly begin optimizing it. | Summary: The paper introduces the Approximated Orthogonal Projection Unit, the basis for a new neural network, designed to enhance the stability and interpretability of regression models, particularly in industrial soft sensor applications. The primary aim is to address the need for stable and immediate optimization in online settings, where traditional NN training techniques fall short. The paper introduces the theoretical background and demonstrates the effectiveness on two tasks, while also introducing ablations and comparisons to several other techniques.
Strengths: - The proposed method appears novel and straightforward.
- The paper provides a solid theoretical foundation.
- The paper improves interpretability of the neural network's behavior and training dynamics by differentiating between trackable and untrackable parameters.
- The authors demonstrate superior performance of AOPU in experiments with two chemical process datasets, showcasing its practical effectiveness in achieving stable convergence compared to existing models.
- Practical Relevance: Tailors the AOPU framework specifically for industrial soft sensor applications, addressing the need for immediate optimization and stability in online settings.
- Limitations, such as numerical stability issues during matrix inversion in the training process, are discussed.
Weaknesses: - Code not published. The justification provided is somewhat questionable, since easy reproducibility should also enable the authors to provide code (possibly mirrored from the code implemented at the company).
- While the page limit is formally met, the authors make extensive use of the Appendix, including core elements of the paper. The Conclusion and Limitations, for example, are in the appendix.
- There is no mention of thorough hyperparameter tuning and its results.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The reasoning/phrasing of lines 25-29 is not clear to me. Could you please elaborate?
- I am not sure how the augmentation of Eq. (4) helps. Isn't it just a linear transformation that introduces no new representation?
- Could you please provide details of how the hyperparameters (Section 4.3) were found and potentially how sensitive these methods are to changes? It appears that the true value of some of the algorithms might be obscured by improper hyperparameter settings. How many random seeds were used? It is unclear how robust the results are against random variations (see for example the second DNN plot of Figure 5, which appears to be somewhat of an outlier compared to the shorter and longer sequence length).
- I noticed the following typos: Line 9: parameters'; Line 14: missing 'the', Line 106: integrated; Line 153; Line 156: a
- Please keep heading capitalization consistent!
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to reviewer PyNt:**
We are deeply grateful for the time and effort the reviewer has invested in reviewing our manuscript. Your recognition and support are crucial to us. We take each of your concerns seriously and have addressed them thoroughly.
**Response to weaknesses 1:**
To expand the influence of AOPU and promote community progress, we have proposed to the company that the code be desensitized and then made public. We will continue to advocate for this initiative.
**Response to weaknesses 2:**
Thank you for your understanding. We also acknowledge that the organizational structure of the manuscript was not optimal. In order to clearly articulate our reasons for studying AOPU and how it is deployed, we devoted a significant portion of the manuscript to this discussion, which necessitated relegating many experimental results and conclusions to the appendix.
However, we believe that this issue can be corrected. As we have planned in the response to reviewer 8Lrh, we are capable of optimizing the structure of the AOPU paper for better presentation.
**Response to weaknesses 3:**
We acknowledge that two reviewers have expressed concerns regarding the selection of hyperparameters. We've presented comprehensive hyperparameter results in the **global response**, Fig. 2. We can see that for SRU the smaller setup is recommended, while for Debutanizer a bigger model possibly leads to better, but still limited, performance. However, the model size of the compared methods increases dramatically with layers and hidden dims, which means that the parameter efficiency drops. Therefore, we chose hyperparameter settings that keep the model size comparable to that of AOPU, maintaining a balance between performance and efficiency.
**Response to questions 1:**
Certainly! Existing deep learning models are trained on offline datasets where they can employ numerous tricks using epoch information, such as:
1. Early Stopping: After each epoch, if the val loss increases over several consecutive epochs, it is assumed that the model has overfitted, and training is halted to select the model with the optimal val loss for testing.
2. Warm Up: In the initial epochs, a smaller lr is used to avoid rapid convergence to local optima, later switching to a constant learning rate.
3. Learning Rate Schedule: The lr is dynamically adjusted during training to help the model overcome local optima and fine-tune, using methods like step decay, exponential decay, cosine annealing, or cycle policy—all based on epochs information.
For the online streaming update processes in actual production, there is no concept of epochs, making these tricks inapplicable.
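To make these epoch-based tricks concrete, here is a minimal pure-Python sketch (our own illustration, not code from the manuscript; all function names, constants, and loss values are hypothetical):

```python
# Illustrative sketches of the three epoch-based tricks; every name and
# number here is a hypothetical choice for demonstration purposes.

def warmup_then_constant_lr(epoch, base_lr=0.01, warmup_epochs=5):
    """Warm up: use a smaller learning rate for the first few epochs,
    then switch to a constant rate."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    return base_lr

def step_decay_lr(epoch, base_lr=0.01, drop=0.5, epochs_per_drop=10):
    """Learning-rate schedule: halve the rate every few epochs."""
    return base_lr * (drop ** (epoch // epochs_per_drop))

def early_stopping_epoch(val_losses, patience=3):
    """Early stopping: halt once the validation loss has not improved
    for `patience` consecutive epochs; returns the stopping epoch."""
    best, since_best = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
        if since_best >= patience:
            return epoch
    return len(val_losses) - 1
```

All three functions take an epoch counter (or a per-epoch validation loss) as input, which is exactly the quantity an online streaming update process does not have.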
**Response to questions 2:**
We thank the reviewer for pointing out this issue. This was indeed an oversight. The expression in Eq. (4) lacked the notation of activation function $\text{acti}$, and it should be represented as $\hat{x}=\text{acti}(\hat{G}^Tx)$.
**Response to questions 3.1:**
Regarding the hyperparameter information, we have already conducted a comprehensive analysis in our previous **response to weakness 3**, and therefore, we will not reiterate it here.
**Response to questions 3.2:**
We fully understand the reviewer's perplexity regarding some unexpected experimental outcomes, specifically for Autoformer. We used the code provided in Autoformer's GitHub repository. We believe there are two plausible explanations for the outputs of Autoformer:
1. Time Embedding: The code in this repository includes embedding techniques for year, month, day, hour, and minute. Since our dataset does not contain these detailed timestamps, we replaced this with conventional position embedding.
2. Historical Label: The repository’s code uses historical labels to predict current labels, which is inappropriate for soft sensors, thus we removed this feature.
For Informer, we replicated the experiments using the code from the same repository. As a comparison, Informer performs consistently well with shorter sequence lengths. The contrast between Autoformer and Informer not only verifies the correctness of our code deployment but also highlights several interesting observations:
1. Autoformer, compared to Informer, has a stronger dependency on label data, such that even after removing the historical labels, Informer still captures the input-output relationships correctly, while Autoformer collapses.
2. In industrial time series data, Transformer models do not necessarily model long-term dependencies better. In fact, with larger batch sizes, LSTM models perform better.
**Response to questions 3.3:**
As stated in the **Quantitative Analysis** section of our manuscript, we conducted 20 independent repetitions for each model configuration without specifically restricting random seeds. The mean results are presented in regular font size in Tables 2 to 5, and the std. are presented as lowercase subscripts. Thus, the experimental results can be considered significant. For the visualization of iteration loss, we did not average the results of 20 experiments; instead, we displayed the outcome of a single experiment. While the dynamic changes in iteration loss may indeed be influenced by randomness, they are still representative as the result aligns with our analysis of std. detailed in the **Quantitative Analysis**, lines 827-833, thereby affirming the validity and significance of our findings regarding iteration loss.
There are also additional experiment results in the **global response**, Fig. 3, where we conducted experiments under different lr setups, we believe such results can also corroborate the aforementioned statement.
**Response to questions 4:**
Thank you for your meticulous review. We will promptly inspect the presentation of AOPU to avoid typos like that.
**Response to questions 5:**
Indeed! We will ensure that all section and subsection titles are consistently capitalized. Thank you again for your reminder.
---
Rebuttal Comment 1.1:
Title: Questions on rebuttal
Comment: Thank you for your detailed response!
On weakness 3, what exactly do you mean by efficiency? Was the learning rate properly tuned? I only see three rates and only for the LSTM models.
On your response to Question 1:
Couldn't you just apply these principles after a number of training steps instead of epochs?
---
Reply to Comment 1.1.1:
Comment: Dear reviewer PyNt,
We are pleased to inform you that, following a thorough and detailed discussion, Reviewer SnqU has raised the rating of AOPU from 3 to 5.
We are confident in AOPU's contribution and impact, and we believe it is an excellent piece of work worthy of publication.
If the reviewer has further concerns, we will be more than happy to help address them.
Best wishes,
All authors
---
Rebuttal 2:
Comment: Dear reviewer PyNt
If you have any additional concerns, we will be more than happy to help address them.
---
Rebuttal 3:
Comment: We appreciate the reviewer’s careful review of our response and thank you for pointing out this issue.
**Additional response to weaknesses 3:**
Efficiency refers to parameter efficiency, meaning that if models of different sizes achieve the same performance on the same dataset, the smaller model is more efficient (as it requires fewer parameters).
In Fig. 3 of the **global response** PDF, we present the experimental results of 2 models (LSTM and AOPU) across 6 different learning rate settings (0.016, 0.008, 0.004, 0.002, 0.001, 0.0005 for LSTM and 2.4, 1.6, 0.8, 0.4, 0.2, 0.1 for AOPU) on two datasets (debutanizer and SRU). Due to page and time constraints, we chose LSTM as our compared method, the most representative model in the soft sensor domain.
The experimental results indicate that as the learning rate decreases, LSTM indeed becomes more stable; however, when the learning rate is extremely low (0.0005), the model suffers from overfitting. In contrast, AOPU maintains stable performance across a wide range of learning rate adjustments, only exhibiting performance degradation when the learning rate is extremely high (2.4 and 1.6). Further, AOPU is much more stable than LSTM under any learning rate setup. The stability of AOPU is strongly supported by theoretical analysis, as illustrated in Fig. 1 and in our **global response**.
**We suspect that NeurIPS's privacy policies might have caused some content in our uploaded PDF to be missing.** However, the downloaded PDF appears normal on mobile and PC devices.
**We would like to confirm the contents of the PDF with the reviewer. Specifically:**
1. Fig. 1 contains 4 subplots, visually demonstrating the fundamental reason why natural gradients, compared to conventional gradients, achieve stable updates—natural gradients account for manifold information.
2. Fig. 2 contains 10 subplots, supplementing the results of the hyperparameter experiments.
3. Fig. 3 contains 24 subplots, arranged in two rows. The first row displays experiments conducted on the debutanizer dataset, and the second row on the SRU dataset. The first three columns in each row represent LSTM results, while the last three columns represent AOPU results. Each subplot records the experimental results of different models under different learning rate setups on different datasets.
**Additional response to questions 1:**
The reviewer has raised an insightful question. However, in practical applications, we have observed that even when using a certain number of iterations to replace epochs, several issues persist. For example:
1. Learning rate schedule is not applicable: Techniques like step decay and exponential decay are based on epochs (or iterations) and will gradually reduce the learning rate to zero.
2. Early stopping is not applicable: Due to the drift in conditions (an industrial phenomenon in which changes in production status may be caused by shifts in the production environment, alterations in operational parameters, modifications in control strategy, or variations in data quality), while the validation set remains fixed, the feature representations learned by the model on new conditions may not be suitable for the old validation set. Consequently, even if the model's performance on the validation set decreases, it should not necessarily stop training.
Essentially, when the model is offline, we have some prior knowledge about the dataset: it is limited in size, and the conditions are stable. Therefore, we apply certain tricks at the start, midpoint, or end of the model training process. As we transition into an online learning context, these prior assumptions no longer hold, making these tricks less applicable. A model that does not rely on these tricks is evidently more valuable in an online learning context.
We hope our response can provide clarity and assistance. If there is indeed a case of missing information, we can jointly report this to the ACs. | Summary: This paper introduces a new model for soft sensor tasks, the Approximated Orthogonal Projection Unit (AOPU), to enhance the stability and interpretability of regression networks. AOPU incorporates trackable and dual parameters, which are treated differently during the inference and training processes. AOPU truncates the gradient backpropagation at dual parameters, optimizes the trackable parameter updates, and enhances the robustness of training. The paper provides theoretical proof that AOPU is an approximation of both MVE and Natural Gradient Descent (NGD). Experimental results on two chemical process datasets demonstrate that AOPU outperforms other models in achieving stable convergence.
Strengths: 1. The proposed method is novel and has a strong theoretical basis. The authors provide detailed proofs of theorems in the appendix.
2. If the contents in the appendix are considered, this paper analyses the proposed AOPU from many aspects, and provide sufficient experimental results and ablation study to validate the advantage of AOPU.
Weaknesses: 1. Due to the limitation of paper length, the contents of the formal paper are incomplete. Much important content, like the quantitative analysis and ablation study, is put in the appendix. The formal contents also lack a conclusion section. For the quality of publishing, I suggest submitting the paper to other platforms like an IEEE Transactions journal, where the paper length can be longer.
2. The proposed method is not incorporated into DNN structures; therefore its expressive power is limited in more complicated tasks. Considering the requirements of industrial soft sensor tasks, this is not a critical flaw, but it still keeps AOPU away from challenging AI applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: I do not have specific questions about the paper.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of AOPU are discussed in the paper, specifically the need for an understanding of the RR distribution to guide the selection of hyperparameters. I do not see potential negative societal impact of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to reviewer 8Lrh:**
We greatly appreciate the time and effort the reviewer has dedicated to reviewing our manuscript and thank you for recognizing our work. We will do our best to address the weaknesses and hope that our answers will lead you to consider raising the AOPU rating.
**Response to weaknesses 1:**
Due to our focus on rigorous experimental validation and theoretical foundations, the manuscript may have become overly extensive. If possible, we will reduce the narrative and formulas in the main text and move the quantitative analysis, experimental details, and conclusions into the main text to enhance the integrity of the paper. To be specific:
1. We will merge the Introduction and Related Work sections to conserve space.
2. We will briefly introduce the concepts of trackable and untrackable elements, deferring detailed discussion to the appendix.
3. We will refine and concisely describe the network’s structure.
By reallocating space saved from these modifications, we can elaborate on crucial sections such as quantitative analysis and conclusions. This reorganization ensures that each critical part of the manuscript maintains readability and coherence.
We hope to be able to present our research at NeurIPS; being recognized by a top global AI conference would be immensely encouraging and affirming.
**Response to weaknesses 2:**
**We kindly clarify that this is a misunderstanding.** AOPU can in fact be incorporated into the DNN research framework. Unlike traditional DNNs, which use various stacked nonlinear structures for end-to-end modeling, AOPU separates nonlinearity and output modeling into two independent areas of study.
Specifically, AOPU’s augmentation projection can be any compatible DNN architecture, which enhances RR. The essence of RR is to measure the number of linearly independent items in a matrix; thus, if RR increases with the choice of augmentation projection, it indicates that the selected projection possesses stronger nonlinear modeling capabilities (transforming linearly dependent items into independent ones). Consequently, we can extensively test and measure the nonlinear modeling capabilities of various DNN frameworks, such as Transformers, Diffusions, GNNs, RNNs, CNNs, etc., as long as they are compatible with time series data.
**From this perspective, AOPU essentially serves as a unified framework for constructing nonlinear modules using DNNs.**
The reason we did not conduct these DNN experiments in the paper is that we focused on proving **AOPU’s structural consistency with minimal variance estimation, parameter convergence, and gradient effectiveness, as well as its superior stable training qualities.** AOPU represents a significant contribution and influence in the field of soft sensor deep learning, focusing on research into frameworks that enhance optimization performance and stability. We look forward to in-depth discussion and research on RR and various aug modules with the deep learning community.
We are grateful that the reviewer recognizes the strengths of our work, such as its **strong theoretical basis**, **detailed proofs of theorems**, **sufficient experimental results**, and **comprehensive ablation study**. We believe we have addressed all concerns raised by the reviewer regarding the limited expressive capability of AOPU. We are confident in AOPU's significant practical value (stable training) and its potential for further research value (future work on RR augmentation). We hope the reviewer can reconsider raising the AOPU rating.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: I would like to thank the authors for their careful reading of my comments and the detailed response. I would like to say that the paper is a valuable work that has a good theoretical basis and sufficient experimental results. However, regarding the raised concerns, I could not suggest accepting the paper in its premature form to be published on a platform as highly rated as NeurIPS.
I am aware that the proposed method can be incorporated into DNNs when necessary, but these contents should appear in the paper, as well as some experimental results.
Considering that the full contents of the paper are good and self-contained enough, I would raise my rating to 5. But I still think the paper should be largely modified to reach the standard of acceptance. I believe the authors could take the comments from all reviewers into account and revise the paper before publication, but please realize that the rebuttal is not a chance for revision and the rating should be given according to the quality of the original submission.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer 8Lrh,
We sincerely appreciate the reviewer's recognition of AOPU's innovation and contributions, and we are grateful for your thorough review of our response. Your decision to raise AOPU's rating is immensely encouraging for us!
We greatly respect NeurIPS as one of the top global AI conferences. Therefore, we sincerely hope that our best research can be acknowledged by the community.
The original manuscript of AOPU is highly informative, containing detailed experimental results, theoretical analyses, and proofs, making it a pioneering work in the field of soft sensors.
During the rebuttal-discussion period, we carefully considered the reviewers' suggestions and made the contributions of AOPU more explicit and intuitive in our responses. Most of the explanations we provided were aimed at offering a more comprehensible interpretation of the content in the original manuscript.
For the research on integrating AOPU with DNN, it is actually out of the scope of the manuscript, as our primary focus is on: 1. Why it is important to calculate dual parameters and truncated gradients (minimum variance estimation and natural gradient) 2. What are the advantages of this framework (leading to more stable model convergence in training) 3. Providing thorough theoretical proof and experimental validation.
Overall, we are deeply grateful to the reviewer for improving the rating of AOPU. We believe that we share the same goals as the reviewer—to present breakthrough research with practical value to the deep learning community. We are confident that AOPU meets this criterion, and we will continue to strive for its acceptance. Thank you for your recognition again!
Best wishes,
All authors
---
Rebuttal 2:
Comment: Dear reviewer 8Lrh
If you have any additional concerns, we will be more than happy to help address them.
---
Rebuttal 3:
Comment: Dear reviewer 8Lrh,
We have reorganized the manuscript thoroughly to give a clearer and better presentation on AOPU’s contribution and impact to the deep learning community.
We believe such adjustments have well addressed the reviewer’s concern on weaknesses 1. To avoid tedious repetitions the reviewer can go to the **Response to reviewer SnqU: Manuscript improvements** for more information.
Sincerely,
All authors
---
Rebuttal 4:
Comment: Dear reviewer 8Lrh,
We are pleased to inform you that, following a thorough and detailed discussion, Reviewer SnqU has raised the rating of AOPU from 3 to 5.
We are confident in AOPU's contribution and impact, and we sincerely hope the reviewer could consider raising AOPU's rating.
If you believe there is further room for improvement with AOPU, we will promptly begin optimizing it.
Best wishes,
All authors | null | null | Rebuttal 1:
Rebuttal: We want to thank all reviewers for dedicating their time and effort to scrutinizing the manuscript. We have noted that the reviewers have some concerns and misunderstandings regarding the manuscript's presentation. We wish to clarify the contributions and impact of AOPU on the soft sensor deep learning community more clearly and intuitively here, aiding the reviewers in better assessing the AOPU rating.
AI methods in the soft sensor field can **be broadly divided into two steps:** initially extracting abstract features or latent variables from data, followed by modeling the input-output relationships based on these features or variables. Most NN studies **focus on the first step**, proposing a structure at the neuron level that mimics certain biomimetic features or mathematical patterns, assuming that this effectively extracts abstract features, and validating this assumption experimentally. The drawback of this approach is the inability to monitor whether the neurons adhere faithfully to such biomimetic features or mathematical patterns during implementation. Moreover, it is challenging to determine whether the feature extraction structure is conducive to regression problems, much less guide model improvement directions.
AOPU differs from these studies by **focusing on the second step**, namely how to model the input-output relationships. Assuming there is a robust feature extraction module (augmentation block), AOPU focuses on better optimization and more stable convergence. We introduced RR as an interpretability index to provide deep and comprehensive insights into the network dynamics. When RR is low, we expect bad performance because the mini-batch data has much-homogenized information. When RR is high, we expect good performance because AOPU's output approximates the minimum variance estimation, and AOPU's gradient approximates the natural gradient (NG).
AOPU's research has significant contributions and impacts, which can be summarized as follows:
1. It is fully compatible with historical research results on feature extraction; any model compatible with time series can be attempted as an augmentation block to test model performance.
2. We proposed a unified interpretability index, RR, that guides model improvement directions—to enhance RR—regardless of the model used as the augmentation block.
3. We introduced a unified framework for optimizing soft sensor problems, enabling more stable training and providing solutions for the online deployment of neural networks.
However, combining different deep learning modules with AOPU is not simple. Due to the untrackable nature, we cannot efficiently compute natural gradients for the parameters of the augmentation module. According to the chain rule, the gradient of the augmentation module is a partial natural gradient, and existing optimizers cannot perform such calculations. Furthermore, the dynamic changes in RR for different augmentation blocks during training must also be studied. **These topics are too lengthy for the main purpose of this manuscript—introducing the AOPU framework and performing theoretical derivations and experimental validations—hence, corresponding experiments were not conducted**. We believe this content holds significant potential for future research and are eager to explore it with the community.
We also wish to demonstrate the advantages of natural gradients over conventional gradients through a simple experiment. In Fig. 1 of the global response, we conducted a simple GPR experiment. This GPR had only two parameters, the bias of the mean and the coefficient of the kernel matrix, both constant values. We sampled 100 instances from this GPR and updated these two parameters 100 times using these samples. It was observed that natural gradients require a higher learning rate, while conventional gradients only need a smaller one. The major difference between natural gradients and conventional gradients lies in their directions. Conventional gradients ignore the parameter manifold and treat every parameter equally. Natural gradients, by dividing the gradient by its second derivative, treat sensitive parameters cautiously (low gradient) and non-sensitive parameters boldly (high gradient). This adjustment results in different gradient directions and contributes to better convergence.
Nevertheless, the calculation of natural gradients involves the inverse of the Fisher information matrix, thereby introducing computational complexity cubic in the number of parameters, making it entirely infeasible for neural networks. Existing research on natural gradients is almost entirely focused on conventional machine learning, e.g., considering more complex distributions (such as products of multiple exponential families) for computing natural gradients. Research in the neural network domain on natural gradients mostly centers on second-order optimizers (such as AdamW), which are merely first-order approximations of second-order natural gradients.
AOPU efficiently computes truncated gradients and ensures they approximate natural gradients through many compromises, leading us to analyze RR. We demonstrate in the manuscript that when RR consistently equals 1, AOPU calculates the natural gradient, and the model outputs the minimum variance estimation. Hence, we could observe in experiments that AOPU's iteration loss surpasses all comparative models in stability and is less prone to overfitting. Conversely, when RR is too low, AOPU's output is bad, highlighting the compromises.
In summary, we reiterate and emphasize AOPU's significant contributions, impact, and potential for future research on soft sensor deep learning. We intuitively explain that AOPU's stable optimization capability stems from its consideration of manifold information. We trust this fully resolves the reviewers' concerns and clarifies all misunderstandings, and we hope the reviewers will consider increasing AOPU's rating.
Pdf: /pdf/06a9203f0f17fd400b8fc81ad521642bbbd73ff3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Is Programming by Example Solved by LLMs? | Accept (poster) | Summary: This paper investigates the effectiveness of Large Language Models (LLMs) in solving Programming-by-Example (PBE) tasks. Evaluations are conducted on three classic PBE domains including lists and strings, as well as a graphics programming domain. The findings suggest that while pretrained LLMs are not inherently effective for PBE, fine-tuning significantly enhances their performance on in-distribution tasks.
Strengths: - Thorough evaluation and detailed analysis.
- Clear cases and illustrations.
- Addressing the challenge of small datasets for fine-tuning LLMs.
Weaknesses: - In the experiments, there are no LLM competitors in the graphics domain. Any reasons?
- Why are only FlashFill and LambdaBeam compared in the experiments of Figure 6?
- The adaptation method used to improve out-of-distribution performance exposes the model to the test set content beforehand. Especially in string tasks, directly selecting the adaptation seed program from all test cases may be unfair.
- The examples used in the experiments are relatively weak and do not closely resemble real-world programming tasks.
- If the adaptation's seed program is not provided, even after fine-tuning, the out-of-distribution generalization ability of LLMs still appears to be quite weak.
Typos:
in abs: potentially increasingly the flexibility -> potentially increasing the flexibility
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does GPT-4 perform on the entire PROSE dataset?
- Would the key factors that lead to the success or failure of LLMs differ across problems in three different domains?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review. Please see below for a new experimental results that you suggested we run, together with our responses to your questions.
> In the experiments, there are no LLM competitors in the graphics domain. Any reasons?
Thank you for your suggestion! We added the GPT-4o and GPT-4o-mini multimodal model results with image input. We summarize the results on the graphics domain as below:
|System |Method Type|Accuracy|
|---|---|---|
|Ours-33b | LLM + Symbolic | 90%|
|Ours-7b | LLM + Symbolic | 89% |
|GPT-4o | VLM + Symbolic | 59%|
|Regal (ICML’24) |LLM + Library Learning | 57%|
|LILO (ICLR’24) | LLM + Symbolic | 41%|
|DreamCoder (PLDI’21) |Neurosymbolic | 31% |
|GPT-4o-mini | VLM + Symbolic | 25%|
(We updated our LOGO results because we found a small code issue post-submission; fixing that issue significantly improved our system, hence why our system's numbers above are better from the original submission, but the qualitative conclusions are the same.)
> The examples used in the experiments are relatively weak and do not closely resemble real-world programming tasks.
We disagree for the following reasons:
1. Text editing PBE is used *millions of times every week* [1]
2. PBE is not about e.g. repo-level edits, but is instead about creating individual subroutines, including for users who cannot program at all. Accordingly, the datasets we use cover real-world situations where an individual subroutine is desired. For example, the Shi et al. List dataset comprises 100 programming tasks manually designed to be interesting and useful in practice, while PROSE/SyGuS test common spreadsheet operations.
3. Our benchmarks are typical ones for PBE on neural/symbolic systems, allowing comparisons of LLMs against the recent neurosymbolic literature, done as follows:
|Domain| Work| Venue|
|---|----|----|
|List | LambdaBeam| NeurIPS ‘23|
||Fleet|Nature Comms ‘24|
|LOGO| LILO| ICLR ‘24|
|| Regal| ICML ‘24|
|| DreamCoder | PLDI ‘21|
|String| FlashFill++| POPL ‘23|
[1] https://blog.sigplan.org/2021/09/14/the-story-of-the-flash-fill-feature-in-excel/
> Why are only FlashFill and LambdaBeam compared in the experiments of Figure 6?
For lists, LambdaBeam is the primary comparison because it was designed specifically to solve the list benchmark Fig 6 evaluates on (and it does well on that benchmark).
For strings, FlashFill is the primary comparison because we are evaluating on holdout test inputs, and FlashFill++ does not report numbers for holdout tests (and FlashFill++ is not publicly available so we can't run it ourselves).
> The adaptation method used to improve out-of-distribution performance exposes the model to the test set content beforehand. Especially in string tasks, directly selecting the adaptation seed program from all test cases may be unfair.
Motivated by your comment, we ran the OOD-adapted model on a random sample of never-before-seen PROSE problems (unseen during adaptation). We see that the domain gap is approximately halved:
|strings| train SyGuS, test PROSE|
|----|----|
|before adaptation |57.7%|
|after adaptation | 65.2%|
|finetune in-distribution | 72.5% |
> If the adaptation's seed program is not provided, even after fine-tuning, the out-of-distribution generalization ability of LLMs still appear to be quite weak.
Yes! Without (unlabeled) adaptation data, out-of-distribution generalization is quite weak. We hope to paint a nuanced picture of LLM abilities rather than claim LLMs are universally better along every dimension.
> Would the key factors that lead to the success or failure of LLMs differ across problems in three different domains?
Primarily if the problems are in-distribution, which is why we focus on OOD adaptation methods.
Thank you for your input and please let us know if you have any further questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses, which have addressed some of my concerns. I have increased the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your support and for helping us improve our work! | Summary: The paper focuses on the classical task of Programming By Example (PBE): given some (input,output) pairs, the goal is to generate a program that "fits" these examples (producing the outputs when given the inputs), and also generalizes well to new inputs.
The paper evaluates mostly 7B and also 33B LLMs on three PBE tasks. The paper finds that finetuning these LLMs on these tasks further boosts their accuracy.
The paper also investigates out-of-distribution (OOD) generalization, and finds that OOD can be improved using a semi-supervised approach, where the model is given (input,output) pairs from the new domain (but not the desired program); then the LLM samples potential programs that solve the (input,output) pairs; if the program is correct (which can be validated) - it is added to the training set and the LLM is trained / finetuned again iteratively.
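The iterative loop described above can be sketched in Python (a minimal illustration with hypothetical `sample_program`/`finetune` helpers, not the paper's actual implementation):

```python
def adapt(llm, ood_tasks, rounds=3, samples_per_task=10):
    """Semi-supervised adaptation: grow a training set from unlabeled
    (input, output) pairs drawn from the new domain.

    ood_tasks: list of tasks, each a list of (input, output) pairs;
    no ground-truth programs are provided.
    """
    train_set = []
    for _ in range(rounds):
        for io_pairs in ood_tasks:
            for _ in range(samples_per_task):
                prog = llm.sample_program(io_pairs)  # propose a candidate program
                # Correctness is verifiable: the program must reproduce
                # every given output from its input.
                if all(prog(x) == y for x, y in io_pairs):
                    train_set.append((io_pairs, prog))
                    break
        llm = llm.finetune(train_set)  # retrain on the self-labeled data
    return llm
```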
Strengths: 1. The paper is very clear and easy to follow, and it contains many examples that are visualized nicely.
1. The paper connects modern LLMs with the classical problem of Programming By Example (PBE)
Weaknesses: 1. Undefined, non-scientific message - the title of the paper is "Is Programming by Example solved by LLMs?". This title leads the paper ("We investigate here the extent to which large language models pretrained on source code can solve PBE"), but I think that it's an undefined question. What does "solve" mean? By construction, and according to the "no free lunch" theorem, PBE can never be "solved". So "solving" PBE just depends on the difficulty of the questions. Even if we could define "solve PBE", how would you measure it? Is 90% considered "solved"? Is 80% "solved"? This problem is further expressed in L214: "absolute performance in LOGO remains poor" - 16% accuracy is not "poor" when you do not compare it to anything. Any accuracy number below 100% is as "unsolved" as any other number, and 100% is not possible on a hard enough dataset (because of "no free lunch").
1. Novelty - this is mostly an evaluation paper that does not introduce any new approach or technique. Further, from the empirical evaluation, the answer to the question "Is Programming by Example solved by LLMs?" is, as expected, "somewhat, but not quite": nothing in the empirical results was surprising or unusual: (a) finetuning LLMs on task-specific data works well; (b) semi-supervision on OOD data helps; (c) using Python as the output programming language works much better than the DSLs of classical work, because modern LLMs were trained on much more Python data than niche DSLs.
1. The OOD claim is a bit weak, because it is only in the relevant section that the paper states "assuming we have access to problems drawn from the testing distribution" (without their labels, but these labels can be sampled and validated).
1. The paper compares its approach (a finetuned LLM) to classic symbolic, DSL-based (non-learning / non-neural) approaches several times throughout the paper, and speaks in favor of the LLM-based approach. This comparison to classic approaches is a bit of a strawman, since it is quite obvious that 33B LLMs are much more powerful than Flashfill (which is a paper from 2011) (Table 1).
The paper also mentions that:
>We also find that the resulting system can cover a broader scope of problems than classic symbolic methods, owing to the use of a Turing-complete language, which, at least theoretically, allows learning any computable function.
And I think that such claims completely miss the point: the reason that LLMs are better than classic symbolic methods is **not** the use of Turing-complete languages. LLMs would have been better than classic symbolic methods even if the classic symbolic DSLs were Turing-complete as well. The reason is that LLMs were trained on trillions of Python tokens.
6. Another trivial claim: in Section 4.2, the authors find that "posterior description length is more predictive than program size and prior description length". Simplified, without probabilistic vocabulary, the paper's claim basically says: the perplexity of the desired output sequence is predictive of its accuracy on downstream tasks. I think that this claim is quite trivial, and is very common in practice in LLM training: measuring perplexity on a validation set is usually closely correlated with success on downstream tasks. Isn't this posterior exactly what the model was *trained* to predict?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Can the authors evaluate the baseline where the "program" is the LLM itself? That is, the LLM is trained/prompted to predict output to unseen inputs, without going through an explicit program. I am asking this specifically in light of Figure 4 - the examples there seem to be much easier to solve directly with an LLM (in a few-shot prompting fashion, possibly with chain-of-thought) than to write an explicit program for.
1. In L96 the authors write: "we use samples from a generative model to train an inference network, but we do not further train the generative model itself" - what does this exactly mean? What kind of model is each of the "generative model" and "inference network"? Which of them is a pretrained LLM? And why not further training the generative model itself?
1. What exactly does the "Search Budget (Num Samples)" mean in the experimental section? Does that mean "accuracy@k" - sample $k$ different outputs, and consider the output as correct if *any* of these $k$ outputs is correct?
1. In Figure 3 - What temperature was used, and what other temperatures did the authors explore, for their finetuned model and for the baselines such as GPT-4? Since evaluation depends on sampling of up to 200 outputs, the temperature might have a drastic effect on the success of each model. With a proper tuning of temperature, the order of curves in Figure 3 might be different.
## Summary
Overall, the paper is not wrong and is presented nicely, but its novelty is limited, I'm not sure about the validity of some of the results such as Figure 3, and most of its conclusions are expected. I am thus voting for a borderline reject.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review. We really believe that our responses and new experiment can address your concerns, and hope that you will agree. Please see below.
> empirical results was not surprising or unusual
Papers in the past year find negative results for LLMs on PBE [1-3], and none try unusual PBE domains like our visual graphics programs. For example, our system beats the neurosymbolic LambdaBeam [3], and in LambdaBeam, they also compare to a strong LLM baseline - a Python fine-tuned LLM internal to Google - which was found to *not* do well on PBE. Our system also beats systems like DreamCoder [4] on unusual domains such as LOGO, where DreamCoder-like approaches are thought to excel. Our results should surprise the cited authors, and are unusual in that sense.
[1] Shi et al. ‘24 ICLR [2] Rule et al. 2024 Nature Comms. [3] Shi et al. ‘23 NeurIPS [4] Ellis et al. ‘21 PLDI
> does not introduce any new approach or technique
A core part of the paper is a wake-sleep inspired algorithm for OOD generalization. Would you mind pointing us to a citation that we can refer to which previously introduced our specific approach/technique for adaptation?
> The OOD claim is a bit weak... "assuming we have access to problems drawn from the testing distribution" (without their labels, but these labels can be sampled and validated).
Deployed PBE systems are presented with a stream of "testing problems" generated by end users. In the wild, the requisite data is available for our adaptation technique.
> What does "solve" mean?
We’ll revise to make explicit our different definitions of solve:
1. Beating SOTA on mainstream benchmarks, including those used for deployed real world PBE systems [we solve in that sense]
2. Beating SOTA on unusual benchmarks very different from pretraining data [we solve in that sense: LOGO]
3. Achieving 100% success rate [we don’t solve in that sense, no free lunch]
4. Being practical for deployment [we don’t solve in that sense]
5. Generalizing out of distribution [partly: see our response above to "The OOD claim is a bit weak"]
> “absolute performance in LOGO remains poor" - 16% accuracy is not "poor" when you do not compare it to anything
We compare against 3 baselines (Fig 3)
> Can the authors evaluate the baseline where the "program" is the LLM itself?
Thanks for the suggestion. Please see below for the result of your proposed baseline, which does well on strings but not on lists.
- Strings
|Model|Accuracy|
|--|--|
|gpt-4-turbo (LLM as program)| 74.0%|
|gpt-4-turbo (Python program)|73.9%|
|deepseek-33b (LLM as program)| 49.0%|
|deepseek-33b (Python program)| 70.9%|
|ours-7b|76.9%|
|ours-33b|81.2%|
- Lists
|Model|Accuracy|
|--|--|
|gpt-4-turbo (LLM as program)| 45.5%|
|gpt-4-turbo (Python Program)|58.6%|
|deepseek-33b (LLM as program)| 23.4%|
|deepseek-33b (Python program)|46.2%|
|ours-7b|75.8%|
|ours-33b|79.0%|
> Another trivial claim: in Section 4.2, the authors find that "posterior description length is more predictive than program size and prior description length"... basically says: the perplexity of the desired output sequence is predictive of its accuracy on downstream tasks
Thank you for catching an error in the writing: By posterior we meant the marginal probability of solving the task under $q$ (marginalizing over sampled programs we found that pass), not evaluating perplexity of a single ground-truth target sequence. While it’s unsurprising this marginal is *at least* as predictive as the prior, it is noteworthy that it is *more* predictive than the prior, showing the LLM does not engage in blind guess-and-check, and did not merely learn to sample from the self-instruct distribution.
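One way to read this quantity concretely: the marginal is estimated by summing the probability mass $q$ assigns to the sampled programs that pass, as in this sketch (our reading of the response, not code from the paper):

```python
from math import exp, log

def marginal_solve_logprob(passing_logprobs):
    """Log of the total probability mass q assigns to sampled programs that
    pass the examples. This lower-bounds the true marginal probability of
    solving the task, since only programs we happened to sample are counted."""
    return log(sum(exp(lp) for lp in passing_logprobs))
```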
> the reason that LLMs are better than classic symbolic methods is not the use Turing-complete languages... The reason is that LLMs were trained on trillions of Python tokens
LLMs benefit from both massive data *and* expressive Turing-complete programming languages. As a thought experiment, consider an LLM trained on trillions of programs in the FlashFill++ DSL: It would still be utterly unable to solve the problems in Fig 4.
> In L96 the authors write: "we use samples from a generative model to train an inference network, but we do not further train the generative model itself" - what does this exactly mean?
We sample from the generative model $\mathcal{G}$, defined by prompting an LLM with seed problems (see L86). The inference network $q$ is our fine-tuned model (see L91). The prior $\mathcal{G}$ is a function of the seeds, so by holding the seeds fixed, we don’t update/train the generative model further, deferring such updates to Eq 5 (Adaptation).
> What exactly does the "Search Budget (Num Samples)" mean in the experimental section?... [Do you] sample different outputs, and consider the output as correct if any of these outputs is correct?
Thanks for inviting this clarification. We model PBE in-the-wild where we commit to a single program and must run it on new unseen test inputs. This means we sample $k$ programs (the search budget), filter by the training input-outputs, pick a program randomly if more than one passes that filter, and finally report success only if that program correctly predicts all test input-outputs (following Eq 1). We will revise to clarify.
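A minimal sketch of this protocol (with an illustrative `sample_programs` callback standing in for the fine-tuned model; not the paper's implementation):

```python
import random

def evaluate_task(sample_programs, train_io, test_io, k=200):
    """Commit to a single program chosen via the training examples,
    then score it on held-out test examples.

    sample_programs(n): returns n candidate programs (callables).
    train_io, test_io: lists of (input, output) pairs.
    """
    candidates = sample_programs(k)  # search budget: k sampled programs
    # Filter: keep only programs consistent with all training examples.
    passing = [p for p in candidates
               if all(p(x) == y for x, y in train_io)]
    if not passing:
        return False
    chosen = random.choice(passing)  # commit to one program
    # Success only if the committed program fits every held-out example.
    return all(chosen(x) == y for x, y in test_io)
```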
> What temperature was used… Since evaluation depends on sampling of up to 200 outputs, the temperature might have a drastic effect on the success of each model
The first sentence of the appendix gives the temperature ($T=1$). We used this somewhat high temperature (by the standards of code generation) because we wanted more diversity in the outputs, and did not tune this parameter because of the high cost of our experiments.
Please let us know if you have further questions.
---
Rebuttal 2:
Title: Response to authors
Comment: Thank you for your response.
>Papers in the past year find negative results for LLMs on PBE [1-3]
>and in LambdaBeam, they also compare to a strong LLM baseline - a Python fine-tuned LLM internal to Google - which was found to not do well on PBE
>Our system also beats systems like DreamCoder [4]
* LambdaBeam compared to PaLM 62B - PaLM is also ~2 years old, and as far as publicly known, pre-RLHF. I am not surprised that the most recent GPT-4 performs significantly better.
* DreamCoder (2020), while being algorithmically clever, was also in the pre-LLM era.
* I'm not sure what "Shi et al. ‘24 ICLR" refers to - can you please give a link? I don't think it is cited in your paper.
* I also cannot find a public link to "[2] Rule et al. 2024 Nature Comm" (although it is cited, I can't find a PDF).
>A core part of the paper is a wake-sleep inspired algorithm for OOD generalization. Would you mind pointing us to a citation that we can refer to which previously introduced our specific approach/technique for adaptation?
As I understand from the paper, the wake-sleep inspired algorithm is explained under standard "finetuning", and "adaptation" starts only later (in page 4). Is the wake-sleep inspired algorithm for finetuning or adaptation?
Further, regarding the novelty of the wake-sleep algorithm, as mentioned in the paper:
>This method is closely related to self-instruct [29] and wake-sleep [30]. Like self-instruct, we use prompting to bootstrap a large dataset from a small manually-constructed one. Our method differs by using the LLM to generate a hidden latent variable (the program) while a different generative process produces an observed variable (the program outputs).
So, what is the difference between this paper and [29,30]? That a **different** model generates the program outputs, instead of being the same model?
>Deployed PBE systems are presented with a stream of "testing problems" generated by end users. In the wild, the requisite data is available for our adaptation technique.
It's a similar setting for a variety of machine-learning-based systems (e.g., Google Translate, Siri, etc.).
If a system depends on "a stream of testing problems generated by end users", how does it respond to the first end-user queries? The authors' argument is almost like saying: "in a real system, we will wait for a few (hundreds? thousands?) queries to be sent, and only then we will be able to respond".
In contrast, a machine learning system that "truly" generalizes OOD, should ideally generalize starting from the first OOD example.
I know that OOD generalization starting from the first OOD example is not trivial and is still one of the open problems in machine learning - I am just saying that the OOD argument is a bit weak.
>Beating SOTA on mainstream benchmarks, including those used for deployed real world PBE systems [we solve in that sense]
I would still argue that beating SOTA does not mean that the problem is "solved".
>>“absolute performance in LOGO remains poor" - 16% accuracy is not "poor" when you do not compare it to anything
>We compare against 3 baselines (Fig 3)
I don't understand this response - LOGO is "graphics", right? Which means Fig 3(c)? Isn't "Ours" better than the baselines there? So why is performance considered "poor"?
> Please see below for the result of your proposed baseline, which does well on strings but not on lists.
Thank you for these additional results.
>LLMs benefit from both massive data and expressive Turing-complete programming languages. As a thought experiment, consider an LLM trained on trillions of programs in the FlashFill++ DSL: It would still be utterly unable to solve the problems in Fig 4.
But would it change the downstream results if we improved any DSL to be Turing complete?
> We used this somewhat high temperature (by the standards of code generation) because we wanted more diversity in the outputs, and did not tune this parameter because of the high cost of our experiments.
Tuning the best temperature for each model separately can significantly affect the results and the order between the models.
This tuning is only needed at test time.
---
Rebuttal Comment 2.1:
Comment: Thank you for your engagement. Our biggest disagreements concern the novelty of the methods and whether the empirical results are surprising. Please see below.
## **Methodological Novelty**
> So, what is the difference between this paper and [self-instruct, wake-sleep]?
Our adaptation algorithm is a novel hybrid of self-instruct and wake-sleep. Unlike self-instruct it can update its prior using new (unlabeled) data. Unlike wake-sleep it can adapt from few examples, via in-context learning. Mathematically this can be understood by looking at Equation 5 and making the following observations---which we’re revising to include right after Equation 5:
> Equation 5 should be seen as a wake-sleep algorithm where "dreaming" corresponds to training $q$ on fantasy data (first equation) while "waking" corresponds to running inference and updating the prior $\mathcal{G}$ (by updating the seed, second pair of equations).
Thank you for pushing us to clarify the writing concerning the novel conceptual aspects of our work. The revision will include the above two paragraphs.
(Last, you should think of adaptation as wake-sleep: finetuning is like only having the sleep phase.)
## **Significance of Empirical Results**
To avoid subjectivity, it’s helpful to consult the literature to understand what those in the area would find surprising/noteworthy/significant. Papers in the last year find negative results on LLMs for PBE:
1. [ICLR ‘24, from DeepMind](https://arxiv.org/pdf/2307.13883) pg 7: **"LLMs in general perform poorly on program synthesis tasks specified only through I/O examples"**, *even for PaLM2 Unicorn, the largest PaLM2 model*
2. [Nature Comms. ‘24, Rule et al](https://www.nature.com/articles/s41467-024-50966-x): see Fig 3
3. [ICLR ‘24: Hypothesis Search](https://arxiv.org/abs/2309.05660) evaluates on ARC **finding GPT4 underperforms older symbolic solvers** (see [here](https://arxiv.org/abs/2402.03507) and [here](https://arxiv.org/pdf/2103.05823))
4. [NeurIPS ‘23: LambdaBeam](https://arxiv.org/abs/2306.02049): As you point out this compares against a first-gen medium-size PaLM, but the above ICLR ‘24 paper compares against the latest-and-greatest PaLM2 Unicorn and arrives at similar conclusions.
Our findings go beyond merely cheerleading LLMs. We instead show how to train small open models (7B, not RLHF’d) to surpass both bespoke neurosymbolic methods and massive closed-source systems, and also investigate uncommon creative applications such as visual graphics code. (Surprise is subjective, but we were *shocked* when LOGO graphics worked!)
## **Miscellaneous**
> I just said that the OOD argument is a bit weak
Although weakness/strength is subjective/relative, it’s helpful to consult the relevant literature for comparison. OOD for neural program synthesis has been previously studied using [architecture/prompt engineering](https://arxiv.org/pdf/2307.13883) or [handcrafted feature engineering](https://arxiv.org/pdf/1912.12345). Our (wake-sleep$\cap$self-instruct) algorithm instead does unsupervised domain adaptation, requiring **just tens of unlabeled OOD examples**. This is more generic and scalable than handcrafting features/prompts/architectures, hence a major strength of the approach relative to the prior art.
> “absolute performance in LOGO remains poor" - 16% accuracy is not "poor" when you do not compare it to anything.
The full quotation reads: "absolute performance in LOGO remains poor (*compare Fig. 6c to Fig. 3c*)". Contrasting Fig. 6c (OOD) to Fig. 3c (in-distribution) shows LOGO OOD performs poorly (but please see the updated LOGO results in the global response).
> But would it change the downstream results if we improved any DSL to be Turing complete?
Yes, because symbolic systems like FlashFill++ hinge on the clever design of non-Turing complete languages to judiciously restrict the search space. Going Turing-complete destroys the tractability of search and significantly impairs performance for (neuro)symbolic methods. (See [DreamCoder page 13 ](https://dl.acm.org/doi/pdf/10.1145/3453483.3454080): So intractable, search used a *year* of CPU time).
> I would still argue that beating SOTA does not mean that the problem is "solved".
Yes, hence why we propose five different subjective notions of “solve”, instead of merely declaring victory upon beating SOTA. The "scare quotes" were meant to emphasize the subjectivity of the quoted term, but we can revise to avoid "solve", including changing the title of the paper.
## **Parting Words**
Obviously, we’re butting heads. But arguing these points has clarified to us how to communicate precisely what it is that is methodologically novel and empirically significant, and we’re confident that refactoring the writing can make that come through in the final paper. While we’d love to have your support, even if we don’t, thanks for forcing us to communicate these issues more clearly. | Summary: The paper performs a relatively thorough study on using LLM for example-guided program synthesis tasks. The results presented in the paper suggest that LLMs make strong progress toward solving the typical suite of example-guided synthesis tasks, potentially increasingly the flexibility and applicability of PBE systems.
Strengths: - The PBE problem is interesting and well-motivated. Major papers in the field are well cited and referenced
- Extensive amount of traditional datasets are being evaluated
- The insights derived from experiments are somewhat valuable
Weaknesses: - CoT and other simple prompting methods are not evaluated
- While there is an extensive amount of experiments and comparisons, we find that the outcome is relatively predictable.
- While the writing is generally okay and easy to understand, multiple typos and mistakes are found in the writing (also mentioned in questions). Please consider fixing them.
- The LOGO visual examples are converted to an ASCII grid of characters (Fig. 8b). This might not be the most intuitive representation. Details about the transformation are not shown, such as how each number (0-9) is derived, the resolution of the ASCII grid, etc. With this design, it does not make sense for a non-fine-tuned LLM to solve the task. But technically you could still fine-tune GPT-3.5 with these inputs, but I guess it is okay to not include this experiment.
Technical Quality: 2
Clarity: 3
Questions for Authors: - (Typo) line 150, there should be a space between (Tbl. 1,Fig. 3b)
- (Typo) figure 6a “Sygus” -> “SyGuS”
- (Grammar) last sentence of Figure 4 caption has grammar mistakes
- (Grammar) last sentence of Table 1 caption has grammar mistakes
- Appendix A.2 is empty
- I see in the prompt the authors wrote “You are a CS professor”. As far as I know this might not be the perfect prompt for code generation (this is just a joke).
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer e1ch for the thoughtful review. Please see the global response PDF for the requested LOGO graphics details, and below for other new experiments and responses to your specific questions.
> CoT and other simple prompting methods are not evaluated
Thanks for the suggestion. We evaluated chain-of-thought and summarized the results below.
- String Domain
| Model | Accuracy |
| --------------- | -------- |
| gpt-4-turbo with CoT | 76.5% |
| gpt-4-turbo | 73.9% |
| deepseek-33b | 70.6% |
| ours-7b | 76.9%|
| ours-33b|**81.2%**|
- List Domain
| Model | Accuracy |
| --------------- | -------- |
| gpt-4-turbo with CoT | 60.7% |
| gpt-4-turbo | 58.7% |
| deepseek-33b | 46.3% |
| ours-7b | 75.8%|
| ours-33b|**79.0%**|
We'll update the manuscript to include these results.
> the ASCII grid [for LOGO], etc. With this design, it does not make sense for a non-fine-tuned LLM to solve the task. But technically you could still fine-tune GPT-3.5 with these inputs, but I guess it is okay to not include this experiment.
Motivated by your observation, we ran a new experiment using the multimodal GPT-4o and GPT-4o-mini with a few-shot prompt as a baseline for LOGO. This baseline was prompted with example programs and images and then given a new test image to write a program for. Please see below:
|System |Method Type|Accuracy|
|---|---|---|
|Ours-33b | LLM + Symbolic | 90%|
|Ours-7b | LLM + Symbolic | 89% |
|GPT-4o (new result) | VLM + Symbolic | 59%|
|GPT-4o-mini (new result) | VLM + Symbolic | 25%|
|Regal (ICML’24) |LLM + Library Learning | 57%|
> The LOGO visual examples are converted to an ASCII grid of characters (Fig. 8b). This might not be the most intuitive representation. Details about the transformation is not shown, such as how each number (0-9) is derived, the resolution of the ASCII grid, etc.
Please see the attached PDF, which shows an example of the ASCII transformation with a detailed conversion process in the caption. Interestingly, because the transformation down-samples the image, it is able to somewhat generalize to hand drawings, which we also show in the attached PDF, and would include in a revision.
> Outcome is relatively predictable
Papers in the past year find negative results for LLMs on PBE [1-3], and none try unusual PBE domains like our visual graphics programs. For example, our system beats the neurosymbolic LambdaBeam [3], and in LambdaBeam, they also compare to a strong LLM baseline - a Python fine-tuned LLM internal to Google - finding that the LLM is a poor PBE solver, counter to our findings. Our system also beats systems like DreamCoder [4] on unusual domains such as LOGO, where DreamCoder-like approaches are thought to excel. Our results are not predictable given the state of the recent literature.
[1] Shi et al. ‘24 ICLR [2] Rule et al. ‘24 Nature Comms. [3] Shi et al. ‘23 NeurIPS [4] Ellis et al. ‘21 PLDI
> Typos and grammar mistakes
Fixed! Thank you for pointing them out.
> I see in the prompt the authors wrote “You are a CS professor”. As far as I know this might not be the perfect prompt for code generation (this is just a joke).
We will consider adding to the prompt “You are reviewer #2, please improve the code like how you would improve a paper” (this is just a joke).
Thanks again for the review and please let us know if we can answer any further questions. | Summary: This paper investigates whether the long-studied programming by example task is "solved" by large language models with Turing-complete languages like python.
Their evaluation is on three domains: lists, strings, and LOGO/Turtle graphics.
They evaluate three LLM-based approaches, including a self-instruct-like fine-tuning approach that tunes LLMs on synthetic labeled data, and an adaptation approach assuming access to problems (not solutions) from the testing distribution.
Compared to several symbolic, neurosymbolic, and LLM baselines, the proposed approaches perform better.
The analysis of the correlation between different aspects of the target program indicates that the fine-tuned model is beyond blind guess-and-check.
Strengths: 1. The experiments are comprehensive, and the analysis of different predictors of model performance is helpful in understanding the extent to which LLMs solve PBE.
2. The proposed methods make use of the fact that PBE problems can be accurately synthesized using model-generated inputs and programs. The experiment results show that they are effective in solving in-domain problems and adapting out-of-distribution ones at test time.
3. This paper answers some interesting questions regarding the role of LLMs for PBE and points out what researchers might work on in the future.
Weaknesses: Contamination. As the authors acknowledged on Line 148, the problems could be in LLMs' pretraining data. I wonder if the authors have an idea of how much of a role such potential contamination plays in LLMs' superior performance. Is there any way to rule out or minimize the impact of that confounder?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How much does a Turing-complete language help in solving PBE, excluding the fact that LLMs have seen lots of Python code? Is the expressiveness of a Turing-complete language itself helpful?
2. How far can the adaptation go? Right now the adaptation discussed is still within the same category of problems (such as lists); I imagine a more general PBE system might be able to adapt to problems that are more different.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review, and for your support. Please see the global review for a PDF with new results (including a fun new LOGO experiment). We address your specific questions below.
> Regarding Contamination
We avoided contamination as follows:
1. String dataset: The datasets contain only input/output examples, and thus the program could not be in pretraining data.
2. List dataset: The dataset from Rule et al. does not include program solutions, and furthermore, the dataset is in BigBench, which is conventionally excluded from LLM pretraining data.
3. LOGO dataset: The Python LOGO program dataset (from ReGAL) was released within 1 week of the beginning of training of DeepSeek (the model that we fine-tune), so it is almost certainly not in pretraining. Just in case, we further tried prompting the model to confirm that it cannot complete a partial ground-truth Python LOGO program.
We also tried running our system on new problems that we created by hand, illustrated in Figure 4, and also illustrated in the attached main-response PDF, where we ran the LOGO synthesizer on a hand drawing we made. The system was surprisingly able to handle all of these new problems that could not possibly have been in the pretraining data.
> How much does a Turing-complete language help in solving PBE, excluding the fact that LLMs have seen lots of Python code? Is the expressiveness of a Turing-complete language itself helpful?
We believe the expressiveness itself is helpful due to the fact that a handcrafted restricted programming language may not be able to capture the space of all user-desired programs, even within a single domain. For example, in Fig 4 (top), we can see that a simple example—numbering the lines of a paragraph—can easily be solved by our model but cannot be solved by FlashFill++. We believe this is a general problem with handcrafted domain-specific programming languages: Although they make the search space smaller and therefore more tractable, they inevitably exclude important computations.
In other words, LLMs benefit from both massive data *and* expressive Turing-complete programming languages. As a thought experiment, consider an LLM trained on trillions of programs in the FlashFill++ DSL: It would still be utterly unable to solve the problems in Fig 4.
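As a concrete illustration of this expressiveness argument, the line-numbering task mentioned above takes only a few lines of general-purpose Python (an illustrative sketch; the exact specification in Fig 4 may differ):

```python
def number_lines(paragraph: str) -> str:
    # Prefix each line with its 1-based index -- a simple transformation
    # expressible directly in a Turing-complete language.
    return "\n".join(f"{i}. {line}"
                     for i, line in enumerate(paragraph.splitlines(), 1))

print(number_lines("alpha\nbeta"))  # → "1. alpha\n2. beta"
```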
> How far can the adaptation go? Right now the adaptation discussed is still within the same category of problems (such as lists); I imagine a more general PBE system might be able to adapt to problems that are more dissimilar.
Adaptation hinges on solving at least some problems in the OOD target, and we have observed that after fine-tuning for domain A, the model can still solve some problems in domain B. We believe this is because LoRA fine-tuning does not cause *too* much catastrophic forgetting about domain B.
We'll update the paper to include experiments on cross-domain adaptation (list tasks adapted to string tasks, and vice versa), which could be very interesting as progress toward truly general-purpose program synthesis. Thanks for the suggestion! | Rebuttal 1:
Rebuttal: Thank you all for the helpful reviews. Please see your individual responses, but here we wish to include a PDF illustrating:
1. The conversion to ASCII art requested by reviewer e1ch. Interestingly, we also found that by downsampling the image to ASCII art, the model is able to somewhat generalize to hand drawings (also shown).
2. New baselines including multimodal models for graphics programming (GPT-4o and GPT-4o-mini), chain-of-thought prompts, and others. (Our LOGO results are now higher than in the original submission because we fixed a small programming issue that was degrading performance.)
Pdf: /pdf/d71785d8584a9cda78aa9eb6583c52c749bd6319.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Scene Graph Generation with Role-Playing Large Language Models | Accept (poster) | Summary: This paper proposes SDSGG, a novel open-vocabulary scene graph generation (OVSGG) algorithm that leverages the reasoning capability of an LLM to better determine the relations between objects in a scene. It achieves this goal by first prompting an LLM with multiple persona prompts to expand a simple relational predicate into a list of detailed visual descriptions, which are subsequently used to augment the classification process. It also introduces a novel mutual visual adapter, which better captures the interaction between subjects and objects. Experiments show that these proposed designs are effective.
Strengths: 1. Incorporating an LLM to augment the predicate labels for scene graph generation is a novel idea. This paper provides meaningful insights for future works in this area.
2. The experiment results (table 1-2) are strong, significantly outperforming previous methods.
3. The authors conducted extensive ablation studies on various design elements.
Weaknesses: 1. Prompting the LLM is a key element of the method; however, some crucial details are missing. For example, how are the prompts constructed? While the authors provided the prompt in Appendix Fig 5, it is unclear how the "{scene content to be discussed}" is generated. The authors did show some examples throughout the paper, but they are not sufficient for the reader to understand the underlying process. In particular, in L167, the authors showed example #1 "Imagine there is an animal that is eating, ". In Fig 1c, there is example #2 "Assuming that the scene has a man riding a horse." These two descriptions have two different granularities, as one only includes the generic concept of "an animal that is eating" while the other has specific class names "man" and "horse". The authors should clearly describe what information is included in the prompt, and discuss the scalability and cost of generating such prompts. I suppose if the prompts are like example #1, they can be generated offline based on the predicate label set. However, if the prompts are like example #2, they need to be generated for every possible triple of (subject, predicate, object) over the label space, or be generated online over possible objects in a scene. It is unclear which is the case.
2. Additional discussions and experiments are required to justify some of the design choices. For example,
2.1 in eq 8, the loss of descriptions marked by possible coexistence is to make the prediction "close to those of CLIP." (L255). If this is the case, why not directly use CLIP results for these possible coexistence descriptions at inference time (eq 2)?
2.2 some discussion is needed on if CLIP is good at classifying the generated descriptions. What are the nature of these descriptions and do they fit well with CLIP's pretraining pipeline (i.e. object-level image caption)? As a concrete example, can CLIP properly distinguish descriptions involving counting, such as "with four legs", and "with two legs", mentioned in the examples?
2.3 what happens if we discard "possible coexistence" descriptions and only use definite coexistence and contradiction? Table 8 shows that it is ideal to have a low weight for the "possible coexistence" loss. What happens if we set the weight to 0 and remove it from the inference pipeline?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed limitations and societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer xMLQ for the valuable time and constructive feedback. We provide point-to-point response below.
**Q1**: **Clarifying the rule of prompt construction.**
**A1**: The prompts **are like example #1** and **are generated offline**. The "{scene content to be discussed}" is constructed based on a given *subject-predicate* or *predicate-object* pair. For instance, in L167 (example #1), the "{scene content to be discussed}" is constructed based on a *subject-predicate* pair where the subject is “animal” and the predicate is “eating”. Details about the scalability and cost of generating descriptions are provided in Q2.
Regarding Fig. 1c, it is indeed misleading. In Fig. 1c, we specify a triplet instead of a pair for easier understanding of the later discussion process. We appreciate the keen observations and recognize that this creates an inconsistency. We will revise Fig. 1c for clarity. The revised version is shown in the PDF (Fig._R1). Thank you.
**Q2**: **Scalability and cost of generating descriptions.**
**A2**: As you noticed, iterating all possible triples of <subject, predicate, object> is **computationally unaffordable** (150 * 50 * 150 for 150 object categories and 50 predicate categories). Therefore, we use *subject-predicate* and *predicate-object* pairs to generate descriptions. Following [a], we use 3 object categories (*i.e*., human, animal, and product) instead of 150 object categories to save computational costs. The description generation process (as mentioned in L653-667) involves the following steps:
1. Initial Description Generation. This step calls OpenAI's API 2 * 3 * 50 times (2 stands for *subject-predicate* and *predicate-object* pairs).
2. Summarizing Descriptions. This step calls OpenAI's API 1 time.
3. Description-Relation Association. This step calls OpenAI's API 50 times.
4. Opposite Description Generation. This step calls OpenAI's API 1 time.
As illustrated, the primary cost is in the initial description generation (i.e., discussion by multi-persona collaboration). The generation is one-time and is completed **offline**, and would not delay the inference stage. Since NOT all triples are iterated, the cost remains acceptable even when the number of object/predicate categories increases. We respectfully refer the reviewer to {reviewer RK4c Q2} for the cost of online deployment.
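For illustration, the offline call budget implied by the four steps above can be tallied as follows (a hypothetical sketch using the counts stated in this rebuttal, not the authors' actual script):

```python
def description_api_calls(n_object_groups: int, n_predicates: int) -> int:
    """Tally the offline OpenAI API calls for description generation.

    Counts follow the rebuttal: 2 pair types (subject-predicate and
    predicate-object) x object groups x predicates for initial
    generation, plus 1 summarization call, one description-relation
    association call per predicate, and 1 opposite-description call.
    """
    initial = 2 * n_object_groups * n_predicates  # step 1
    summarize = 1                                 # step 2
    associate = n_predicates                      # step 3
    opposites = 1                                 # step 4
    return initial + summarize + associate + opposites

# With the paper's 3 object groups and 50 predicate categories:
total = description_api_calls(3, 50)  # 300 + 1 + 50 + 1 = 352
```

Since the total grows linearly (not cubically) in the number of predicates, the one-time offline cost stays modest as categories scale.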
**Q3**: **Usage of possible coexistence descriptions.**
**A3**: The goal of making the prediction of possible coexistence descriptions close to those of CLIP is to regularize the training process and avoid overfitting (L255-257). Such an approach brings more supervision signals during training and is inspired by the knowledge distillation-based open-vocabulary frameworks [b]. It is important to note that possible coexistence descriptions are used only during training. **During inference, they DO NOT contribute to the similarity measurement**, as the correlation $C_{r}^{n}$ equals 0 (see Eq. 1 & 2).
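A minimal sketch of how a zero correlation weight would exclude possible coexistence descriptions from the inference-time score (a simplification of Eq. 1 & 2; the symbols and weighting scheme here are illustrative assumptions, not the paper's exact formulation):

```python
def relation_score(similarities, correlations):
    """Correlation-weighted sum of description-image similarities.

    correlations: +1 for definite coexistence, -1 for contradiction,
    0 for possible coexistence -- so possible-coexistence descriptions
    contribute nothing at inference, matching the rebuttal's claim.
    """
    return sum(s * c for s, c in zip(similarities, correlations))

# Middle description (possible coexistence, weight 0) is ignored:
score = relation_score([0.8, 0.6, 0.9], [1, 0, -1])  # 0.8 - 0.9
```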
**Q4**: **More ablative experiments on possible coexistence descriptions.**
**A4**: In Table 8, $\alpha_{indef}$ is a scaling factor (Eq. 8, L258) for training targets rather than a weight of a loss. Per your request, we remove $\mathcal{L}_{indef}$ in Eq. 9 to study its influence. The results are as follows:
| Method | Base | | Novel | |
| --- | --- | --- | --- | --- |
| | mR @50 | mR @100 | mR @50 | mR @100 |
| Ours | 12.3 | 14.7 | 25.2 | 31.5 |
| w/o $\mathcal{L}_{indef}$ | 12.1 | 14.3 | 23.6 | 30.1 |
As seen, the $\mathcal{L}_{indef}$ leads to considerable improvements on the novel set (e.g., 1.6% mR@50 and 1.4% mR@100) and does not affect the performance on the base set. This further verifies our hypothesis (*i.e.*, it regularizes the training process and avoids overfitting, as stated in L255-257). We will incorporate the above results into the appendix of our revision. Thanks.
**Q5**: **Discussion on descriptions and CLIP's recognition capability.**
**A5**: The generated description can be viewed as a **semantically informative** representation compared to the raw representation with only category names. They can convey extra concepts *without modifying model architecture or introducing additional learning processes*. Concretely, they 1) provide comprehensive concepts that help CLIP differentiate between relations (see Fig. 1 in [c]), and 2) offer simpler and more common concepts for relation categories that may not have been encountered during CLIP's pre-training (see Fig. 5 in [d]).
Note that these detailed, fine-grained descriptions **may not align perfectly with CLIP's pre-training pipeline**. By collecting 10 horse images from the Web and comparing the vision-language similarities, we empirically find that CLIP struggles to properly distinguish between concepts with slight differences (*e.g.*, "with four legs" *vs*. "with two legs"). This indicates an inherent drawback of previous description-based methods that uniformly process all descriptions as affirmative classifiers (L50), and motivates us to introduce the renormalization mechanism (L69-76) so as to adaptively weight the generated descriptions. We will incorporate the discussions into the appendix of our revision. Thank you.
References:
[a] Unbiased Heterogeneous Scene Graph Generation with Relation-aware Message Passing Neural Network, AAAI 2023.
[b] Towards Open Vocabulary Learning: A Survey, TPAMI 2024.
[c] Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models, NeurIPS 2023.
[d] Visual Classification via Description from Large Language Models, ICLR 2022.
---
Rebuttal Comment 1.1:
Comment: The authors successfully addressed my concerns. I especially appreciate the clarifications of Q1. I maintain my recommendation for acceptance.
---
Reply to Comment 1.1.1:
Title: Thanks for your review
Comment: Dear Reviewer xMLQ,
We sincerely appreciate your time and effort in reviewing our submission and providing valuable comments. Please let us know if you'd like any further information.
Sincerely yours,
Authors. | Summary: This paper aims to solve the open-vocabulary scene graph generation problem. Previous methods mainly adopt scene-agnostic prompts as text classifiers. The authors argue that using the fixed text classifiers not only struggles to model visual relations with high variance, but also falls short in adapting to distinct contexts. Therefore, the authors propose the scene-specific description based OVSGG framework. They employ an LLM and ask it to play different roles. Besides, they design the mutual visual adapter to encode visual features. Extensive experiments show that the proposed method significantly outperforms top-leading methods.
Strengths: The motivation and idea of this paper are innovative and interesting. Simply applying LLM to SGG cannot effectively reason the relationships. The authors consider employing the context and introducing multiple roles of LLM, which is shown to be effective for solving the OVSGG problem.
Besides, the experiments are convincing. Plenty of ablation studies are provided.
Weaknesses: My main concern is Computational Complexity: The proposed framework involves multiple stages, including generating descriptions, renormalizing them, and applying mutual visual adapters. This multi-step process could be computationally intensive, making it less practical for real-time applications or scenarios with limited computational resources.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please read the weaknesses part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer RK4c for the valuable time and constructive feedback. We provide point-to-point response below.
**Q1**: **Computational complexity of description generation (offline).**
**A1**: Suppose there are 3 common object categories (*i.e.*, human, animal, and product [a,b]) and 50 predicate categories. The description generation process (L653-667) involves the following steps:
1. Initial Description Generation. This step calls OpenAI's API 2 * 3 * 50 times.
2. Summarizing Descriptions. This step calls OpenAI's API 1 time.
3. Description-Relation Association. This step calls OpenAI's API 50 times.
4. Opposite Description Generation. This step calls OpenAI's API 1 time.
The execution time of the above steps depends on the network and the speed of OpenAI's response. Since these descriptions are generated offline, **they DO NOT incur any computational cost during deployment.** We respectfully refer the reviewer to {reviewer xMLQ Q2} for the scalability and cost of generating descriptions.
**Q2**: **Computational complexity of model (online).**
**A2**: Since the renormalization and similarity measurement (Eq. 2) involve only a few matrix operations that can be omitted from the complexity analysis, we will focus on reporting the inference time of the following three main modules:
| Module | Inference Time (ms) |
| --- | --- |
| CLIP’s Visual Encoder | 6.5 |
| Mutual Visual Adapter | 0.2 |
| CLIP’s Text Encoder | 5.4 |
As seen, the inference time of CLIP’s visual and text encoder is significantly higher than that of our mutual visual adapter. Compared to [b] which also uses CLIP, the delay of the newly-introduced MVA (0.2ms) is neglectable. In addition, it is important to note that during deployment, **only the visual part** (*i.e.*, CLIP’s visual encoder and our mutual visual adapter) **requires computational resources.** This is because the descriptions are generated offline for all categories and remain unchanged during deployment so that their text embedding can be pre-computed and stored. Therefore, the overall framework can meet the requirement of real-time applications or resource-limited scenarios. We will incorporate the results about computational complexity into the appendix of the revision.
Reference:
[a] Unbiased Heterogeneous Scene Graph Generation with Relation-aware Message Passing Neural Network, AAAI 2023.
[b] Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models, NeurIPS 2023.
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the responses. I will keep my original rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your review
Comment: Dear Reviewer RK4c,
Thank you again for your kind review and comments. Please let us know if you'd like any further information.
Sincerely yours,
Authors. | Summary: This paper starts by discussing methods for Open-vocabulary Scene Graph Generation (OVSGG) based on the CLIP model, highlighting the issue that current OVSGG methods do not differentiate between various scenes, which limits their effectiveness. The authors introduce SDSGG, a scene-specific description-based OVSGG framework that improves both the textual and visual parts, enhancing the model's open-vocabulary relationship prediction capabilities.
Strengths: 1. The novelty of this paper lies in its analysis of the issues present in current OVSGG methods, leading to the conclusion that differentiating between scenes is necessary to enhance the performance of OVSGG. The proposed Scene-specific Descriptions are particularly insightful.
2. The paper validates its findings on two datasets, VG and GQA, with experimental results showing significant performance improvements over previous state-of-the-art methods.
Weaknesses: 1. The description in Sec3.1, Scene-specific Text Classifiers, of the paper is somewhat confusing. This confusion arises primarily because the text section includes multiple different naming conventions and several distinct modules. It is recommended that this section be rewritten to make it easier for readers to understand. Additionally, the terminology used in this section is inconsistent with that in lines 64~77, leading to comprehension difficulties.
2. For the OVSGG method, it is suggested to also train the model on a full set of relations and compare its performance with conventional SGG methods to ensure that it achieves good performance under standard settings.
3. Is the model robust to different base/novel splits? It is recommended to train and test the model on different base/novel dataset divisions to assess its robustness.
4. It is advised to train and test the model on the PSG dataset as well.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Regarding the selection of multiple personas, the ablation study shows that not using this approach results in a significant performance decrease. My question is, what exactly are the "standard prompts" referred to in line 329 of the document? What would be the effect if only one persona is used, and among the three personas mentioned in the document, which persona demonstrates the most significant performance?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer qa1m for the valuable time and constructive feedback. We provide point-to-point response below.
**Q1**: **Presentation.**
**A1**: Our apologies. We will revise Section 3.1 to improve clarity and coherence. Our revisions will focus on:
1. Streamlining the naming conventions: We will eliminate the term "text classifiers" and consistently use "descriptions" throughout.
2. Delineating modules: We will add a summary of the newly-involved modules and their utility for description generation.
3. Ensuring consistency: We will standardize the terminology across the manuscript, aligning it with the definitions provided in L64~77.
**Q2**: **Comparison with conventional SGG methods.**
**A2**: Thanks for your suggestion. Per your request, we trained our model with frequency bias on the full set of relations. The results are as follows:
| Method | mR@50 | mR@100 |
| --- | --- | --- |
| Motifs$_{CVPR'18}$ | 14.9 | 16.3 |
| VCTree$_{CVPR'20}$ | 16.7 | 17.9 |
| RU-Net$_{CVPR'22}$ | - | 24.2 |
| PE-Net(P)$_{CVPR'23}$ | 23.1 | 25.4 |
| VETO$_{ICCV'23}$ | 22.8 | 24.7 |
| DRM$_{CVPR'24}$ | 23.3 | 25.6 |
| Ours | 28.7 | 34.2 |
As seen, **our model also demonstrates good performance under standard settings**. We will incorporate the above results into the appendix of our revision. Thanks.
**Q3**: **Experiments on different base/novel splits.**
**A3**: Good suggestion! As you noticed, the mean and variance are reported in Tables 1-3 to ensure the performance advantage is reliable. Per your request, we trained our model on different base/novel splits to investigate the robustness further. Specifically, we 1) change the proportion of the base and novel split and 2) change the categories within the base and novel split (*i.e.*, different No. for the same ratio). The results are as follows:
| | | Base | | Novel | |
| --- | --- | --- | --- | --- | --- |
| No. | base:novel | mR @50 | mR @100 | mR @50 | mR @100 |
| 1 (paper) | 35:15 | 12.3 | 14.7 | 25.2 | 31.5 |
| 2 | 35:15 | 12.4 | 14.8 | 24.3 | 28.2 |
| 3 | 32:18 | 11.9 | 14.4 | 23.9 | 28.4 |
| 4 | 32:18 | 13.6 | 15.9 | 20.6 | 26.7 |
| 5 | 38:12 | 11.8 | 14.2 | 23.7 | 29.9 |
| 6 | 38:12 | 11.5 | 13.7 | 22.6 | 27.1 |
As seen, **our model is robust to different base/novel splits.** We will incorporate the above results into the appendix of our revision. Thank you.
**Q4**: **Experiments on panoptic scene graph generation.**
**A4**: Thanks for your suggestion. Our current framework is not directly applicable to PSG. Due to time constraints, we are unable to provide empirical results on the PSG dataset, as significant modifications and engineering efforts are needed. Nonetheless, we recognize the importance of exploring this direction and will definitely consider this as our future work. Thanks.
**Q5: Standard prompts in L329.**
**A5**: Given a *subject-predicate* or *predicate-object* pair, we ask LLM to generate corresponding descriptions. The prompt is defined as:
> Imagine [*subject-predicate* / *predicate-object* pair]. Think about what the scene should look like. Summarize each descriptive statement of the scene in about 15 words each.
>
We respectfully refer the reviewer to {reviewer xMLQ Q1} for the rule of prompt construction.
**Q6**: **Clarifying multi-persona collaboration.**
**A6**: As mentioned in L336, the involvement of multiple personas in the **discussion process** enhances the diversity of generated descriptions. The key point of our multi-persona collaboration is about the “**collaboration**” rather than a specific persona. Actually, using only one persona can even decrease the diversity of generated descriptions and hurt the performance, as it can only generate descriptions from its own viewpoint without discussion with others. From the perspective of the number of generated descriptions, each persona's contribution is almost the same.
In addition, we use the standard prompts and change the system prompt of the LLM from the default (e.g., "you are a helpful AI assistant") into a persona-specific one (e.g., "you are a biologist"). We then evaluate the performance of our model with these generated descriptions. We only report the effect of the biologist and engineer personas because of the tight rebuttal deadline. The results are as follows:
| Method | Base | | Novel | |
| --- | --- | --- | --- | --- |
| | mR @50 | mR @100 | mR @50 | mR @100 |
| Multi-persona Collaboration | 12.3 | 14.7 | 25.2 | 31.5 |
| Biologist Persona | 6.3 | 8.4 | 12.9 | 18.4 |
| Engineer Persona | 7.3 | 9.7 | 8.4 | 11.5 |
| Physicist Persona | | | | |
As seen, using only one persona results in a significant performance decrease, which is consistent with the findings in Table 4. We will give the results of the physicist persona *asap*. Thank you.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer qa1m,
The results of the physicist persona are as follows:
| Method | Base | | Novel | |
| --- | --- | --- | --- | --- |
| | mR @50 | mR @100 | mR @50 | mR @100 |
| Physicist Persona | 4.8 | 6.9 | 9.1 | 14.7 |
We appreciate the constructive feedback you provided, and we will incorporate all relevant results and details into our revision.
Sincerely yours,
Authors. | null | null | Rebuttal 1:
Rebuttal: To all reviewers:
Thank you so much for your careful review and suggestive comments. We have revised our paper according to your comments. The major changes are as follows:
1. We improve the presentation of Sec. 3.1, according to Reviewer qa1m's comments.
2. We add an experiment to evaluate the performance of our model under standard settings, according to Reviewer qa1m's suggestion.
3. We add an experiment to evaluate the robustness of our model *w.r.t.* different base/novel splits, according to Reviewer qa1m's suggestion.
4. We clarify the motivation and working mechanism of our multi-persona collaboration, according to Reviewer qa1m's suggestion.
5. We add an experiment to evaluate the performance when using only one persona, according to Reviewer qa1m's suggestion.
6. We offer more detailed discussions regarding the generation process of descriptions, according to Reviewer qa1m's and xMLQ's comments.
7. We give a detailed discussion regarding the computational complexity, according to Reviewer RK4c's comments.
8. We clarify the usage of possible coexistence descriptions, according to Reviewer xMLQ's comments.
9. We add an ablation study on the effect of integrating possible coexistence descriptions into training, according to Reviewer xMLQ's comments.
10. We discuss the nature of descriptions and CLIP's recognition capability, according to Reviewer xMLQ's comments.
In addition, we include supplementary figures in the PDF of this "global" response for the following aspects:
1. The revised version of Fig. 1c.
Please refer to our response for more details. We have strived to address each of your concerns and welcome further discussions and insights.
Sincerely yours,
Authors.
Pdf: /pdf/943aa071ec088602b6e50edd4efe1a9da4b90b48.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation | Accept (spotlight) | Summary: The paper presents a framework that integrates large language models (LLMs) into sequential recommendation systems (SRS) to tackle the long-tail challenges. The framework includes dual-view modeling, which combines semantic embeddings from LLMs with collaborative signals, and a retrieval-augmented self-distillation method to enhance user preference representation. The authors validate their approach through extensive experiments on three real-world datasets, demonstrating significant improvements over existing methods.
Strengths: 1) The dual-view modeling and retrieval-augmented self-distillation methods are novel contributions that enhance the performance of SRS.
2) Utilizing LLMs to derive semantic embeddings for items and users adds a new dimension to the traditional collaborative filtering methods.
3) The extensive experimental evaluation, including comparisons with multiple baselines and ablation studies, strengthens the validity of the findings.
4) The paper provides comprehensive details on the methodology, including mathematical formulations and algorithmic steps, facilitating reproducibility.
Weaknesses: 1) There is a risk that the semantic embeddings might overfit to the training data, especially if the textual descriptions are not diverse enough.
2) The performance of the framework might be sensitive to the choice of hyper-parameters, which is not extensively explored in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) How do the authors mitigate the risk of overfitting with semantic embeddings, especially in scenarios with limited textual data?
2) Can the authors elaborate on the hyper-parameter tuning process and its impact on the performance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed several limitations, but there is room for more in-depth discussion on potential biases introduced by semantic embeddings and the sensitivity to the choice of hyper-parameters.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable time and insightful suggestions, which are important to our paper. We deliver the point-by-point response as follows.
> W1 & Q1
Thank you for highlighting the potential overfitting issue when textual descriptions are not diverse enough. We agree with the reviewer’s concerns. When textual prompts lack diversity, semantic embeddings may overfit the collaborative signals present in the training data. To address this, our paper proposes freezing the semantic embeddings and designing a trainable adapter during the training of LLM-ESR, which helps maintain subtle semantic differences between items.
In response to the reviewer's suggestion, we conducted additional experiments in scenarios with limited textual data. To simulate this situation, we removed all attributes from the item descriptions except for "name" and "categories" when constructing the textual prompts for the Yelp dataset (originally using 8 attributes). This reduced the average word count of the textual prompts from 38.38 to 20.33. We used SASRec as the backbone model in these supplementary experiments, with results presented in __Table 2__ of the __Rebuttal PDF__.
For convenience, we briefly note that the overall HR@10 of LLM-ESR with the original prompt drops from 0.6673 to 0.6069 when the semantic embedding layer is not frozen. By comparison, the HR@10 of LLM-ESR with the limited prompt drops from 0.6477 to 0.6025.
In the table, __Full__ and __Crop__ represent the use of the complete and cropped prompts, respectively. __w/o F__ denotes training LLM-ESR without freezing the semantic embedding layer. Compared with Full, the results show a decrease in performance for Crop due to the limited textual prompt. Moreover, Full w/o F and Crop w/o F yield similar results, indicating that semantic embeddings suffer from overfitting with both complete and cropped prompts. In contrast, freezing the semantic embedding layer improves performance in both scenarios and significantly benefits long-tail items, demonstrating that our design effectively alleviates the overfitting issue.
We will include these experiments and analyses in the ablation study of our revised paper to further validate the effectiveness of LLM-ESR.
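A toy sketch of the freezing strategy described above, with a hypothetical per-dimension scale-and-bias adapter standing in for the paper's actual adapter design (the class name, adapter form, and update rule are all illustrative assumptions):

```python
class FrozenEmbeddingWithAdapter:
    """LLM-derived semantic embeddings are frozen; only a small
    adapter is trained, so subtle semantic differences between
    items are preserved and overfitting is mitigated."""

    def __init__(self, semantic_embeddings):
        # Frozen: these vectors are never updated during training.
        self.frozen = [list(v) for v in semantic_embeddings]
        dim = len(semantic_embeddings[0])
        self.scale = [1.0] * dim  # trainable adapter parameter
        self.bias = [0.0] * dim   # trainable adapter parameter

    def forward(self, item_id):
        v = self.frozen[item_id]
        return [s * x + b for x, s, b in zip(v, self.scale, self.bias)]

    def train_step(self, grads_scale, grads_bias, lr=0.1):
        # Gradients flow only into the adapter, never the embeddings.
        self.scale = [s - lr * g for s, g in zip(self.scale, grads_scale)]
        self.bias = [b - lr * g for b, g in zip(self.bias, grads_bias)]
```

After any number of `train_step` calls, `self.frozen` is unchanged while the adapter parameters move, which is the behavioral property the ablation above tests.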
> W2 & Q2
We appreciate your concern regarding the hyper-parameters of our LLM-ESR. The key hyper-parameters are the number of similar users used for self-distillation ($N$) and the scale of the self-distillation loss ($\alpha$). We performed a grid search to optimize these hyper-parameters in the experiments. Actually, the results of this hyper-parameter tuning have been presented in __Figure 3__ of the current paper, with a detailed analysis in __Section 4.4__.
The hyper-parameter $\alpha$ determines the extent to which self-distillation influences the optimization process. As $\alpha$ varies from 1 to 0.01, the recommendation accuracy initially improves and then declines. Larger values of $\alpha$ can lead to overemphasis on self-distillation, negatively impacting the convergence of the ranking loss. Conversely, smaller values of $\alpha$ reduce the effectiveness of self-distillation, highlighting its importance for our LLM-ESR.
Regarding the number of retrieved users $N$, the optimal value is found to be 10. This is because a higher number of users provides more informative interactions, which helps mitigate the adverse effects of users with diverse interactions. However, if $N$ is too large, it may reduce the relevance of the retrieved users and lead to performance degradation.
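A hedged sketch of such a grid search over $\alpha$ and $N$ (the candidate grids and the `evaluate` function are placeholders, not the paper's actual training loop):

```python
from itertools import product

# Candidate grids are placeholders; the paper tunes alpha and N but does
# not list its exact grid here.
alphas = [1, 0.5, 0.1, 0.05, 0.01]
ns = [2, 4, 6, 8, 10, 12]

def evaluate(alpha, n):
    """Stand-in for training LLM-ESR and measuring validation HR@10.
    This toy surface peaks near alpha = 0.1 and N = 10, mirroring the
    trends described above (N = 10 is the reported optimum)."""
    return 1.0 - abs(alpha - 0.1) - 0.01 * abs(n - 10)

best_alpha, best_n = max(product(alphas, ns), key=lambda p: evaluate(*p))
```

With this toy objective, the search returns $\alpha = 0.1$ and $N = 10$.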
---
Rebuttal Comment 1.1:
Comment: The authors have answered all my concerns. I am therefore inclined to recommend this paper for acceptance.
---
Reply to Comment 1.1.1:
Comment: We really thank you for taking the time to carefully assess our work and provide thoughtful feedback. Your suggestions greatly help improve our paper. | Summary: This paper introduces a novel framework designed to address the long-tail challenges in sequential recommendation systems (SRS). By leveraging semantic embeddings from large language models (LLMs) and combining them with collaborative signals, the authors propose a dual-view modeling framework and a retrieval-augmented self-distillation method. This approach aims to enhance recommendations for both long-tail users and items without adding significant inference load. Extensive experiments on three real-world datasets demonstrate the effectiveness of the proposed framework.
Strengths: 1. The paper successfully integrates LLMs with SRS to address long-tail challenges, a novel approach that leverages the semantic understanding of LLMs while maintaining low inference costs.
2. The dual-view modeling framework effectively combines semantic and collaborative signals, providing a comprehensive enhancement for SRS.
3. This method innovatively uses interactions from similar users to enhance user preference representation, addressing the long-tail user challenge.
4. The proposed framework is model-agnostic and can be adapted to any sequential recommendation model, making it highly applicable in real-world scenarios.
Weaknesses: 1. The proposed dual-view and self-distillation methods add layers of complexity to the SRS, which may pose challenges in practical implementation.
2. The framework assumes a certain level of similarity in user interactions, which might not hold true for highly diverse user bases.
3. Impact on Popular Items: While the focus is on long-tail items and users, the potential impact on recommendations for popular items is not thoroughly explored.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Could the authors provide more details on the practical implementation challenges and how they can be mitigated?
2. How does the framework handle highly diverse user interactions where finding similar users may be challenging?
3. Balanced Performance: What measures have been taken to ensure that the enhancement for long-tail users and items does not adversely affect recommendations for popular items?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Discussing potential negative societal impacts, such as reinforcing biases in recommendations, would be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer has raised these valuable questions to help us refine our paper. We have discussed these questions carefully and responded to them as follows.
> W1 & Q1
We appreciate the reviewer's advice on providing more details regarding practical implementation. There are two challenges we encountered during the implementation:
1. __Extra Inference Burden__: The dual-view modeling and self-distillation modules may increase the inference burden on SRS models. To address this challenge, we propose two efficiency solutions:
- To reduce the size of the semantic embedding layer ($|\mathcal{V}|\times d_{llm}$), we cache all the semantic embeddings transformed by the adapter before inference. Consequently, only the reduced semantic embeddings ($|\mathcal{V}|\times d$) need to be loaded during inference.
- The self-distillation module is eliminated during inference, as it only provides guidance for training.
With our efficient implementation, LLM-ESR introduces only a minimal additional inference burden. Originally, the model size of LLM-ESR (with SASRec as the backbone) was 20.3M, and it required 12.92 seconds to test all users on the Yelp dataset. After applying our implementation tricks for inference, the size and time consumption were reduced to 2.32M and 12.62 seconds, respectively (compared to 1.70M and 11.94 seconds for SASRec).
2. __Optimization Difficulty__: Due to the complexity of dual-view modeling and self-distillation, optimizing the entire framework can be challenging. We propose two implementation solutions to significantly alleviate this difficulty:
- The semantic and collaborative embedding layers are trained in distinct stages, resulting in optimization difficulty. Thus, we use dimension-reduced LLM item embeddings to initialize the collaborative embedding layer instead of random initialization.
- The dimension of LLM embeddings is much larger than the semantic embedding $\mathbf{e}^{se}$, making convergence difficult when the adapter is designed as a single linear layer. We found that a two-layer design significantly alleviates this problem. Specifically, we first reduce the dimension to half of $d_{LLM}$ and then to the item embedding size $d$.
To illustrate the effectiveness of these two implementation solutions, we compared one-layer adapter and random initialization variants of LLM-ESR in supplementary experiments. The results, shown in __Table 2__ of the __Rebuttal PDF__, indicate that both variants underperform the original LLM-ESR, verifying the success of our special designs.
We will highlight these implementation details in the revised version of our paper.
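The two implementation tricks above, the two-layer adapter and the pre-inference caching, can be sketched together in NumPy (sizes are illustrative, and the ReLU between the two linear layers is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d_llm, d = 1000, 64, 8   # illustrative sizes, not the paper's

# Two-stage adapter (d_llm -> d_llm/2 -> d) as described above; the ReLU
# between the two linear layers is our assumption.
W1 = rng.standard_normal((d_llm, d_llm // 2)) / np.sqrt(d_llm)
W2 = rng.standard_normal((d_llm // 2, d)) / np.sqrt(d_llm // 2)

def adapter(e_llm):
    return np.maximum(e_llm @ W1, 0.0) @ W2

# Cache once after training: only the small |V| x d table is needed at
# inference, instead of the |V| x d_llm table plus an adapter forward pass.
llm_emb = rng.standard_normal((n_items, d_llm))
cached = adapter(llm_emb)

def lookup(item_ids):
    return cached[item_ids]
```

Serving then only indexes the cached $|\mathcal{V}|\times d$ table, which is where the reported model-size reduction comes from.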
> W2 & Q2
We appreciate the reviewer's insight into the challenge of finding similar users with highly diverse interactions.
On one hand, LLMs have been shown to effectively understand user interactions through textual prompts [1,2]. Therefore, LLM embeddings of user interactions can serve as robust semantic representations of users. This suggests that embeddings generated by LLMs can assist in retrieving truly similar users.
On the other hand, we acknowledge the challenge of identifying similar users due to their diverse interactions. To mitigate the adverse effects of users with diverse interactions, we average the representations of the top-N retrieved users to guide the self-distillation process. The hyper-parameter experiments shown in __Figure 3__ partially validate the reviewer's concern and the effectiveness of our design.
The results indicate that LLM-ESR performs sub-optimally when N is relatively small (e.g., N=2), suggesting that inaccurate similar users can negatively impact performance. However, as N increases from 2 to 10, performance improves continuously, demonstrating that our strategy of using top-N users helps mitigate these adverse effects.
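A minimal sketch of the top-N retrieval-and-averaging step described above (cosine similarity is our assumption for the retrieval metric):

```python
import numpy as np

def distill_target(query, bank, n=10):
    """Average the embeddings of the top-n users most similar to `query`.

    query: (d,) LLM-based embedding of the target user's interactions.
    bank:  (U, d) embeddings of all candidate users.
    Cosine similarity is assumed as the retrieval metric.
    """
    sims = bank @ query / (np.linalg.norm(bank, axis=1)
                           * np.linalg.norm(query) + 1e-12)
    top = np.argsort(-sims)[:n]
    return bank[top].mean(axis=0)   # teacher signal for self-distillation
```

Averaging over the top-n rows is what dampens the influence of any single inaccurately retrieved user.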
Additionally, we will include a discussion on this topic in the revised paper to clarify our top-N retrieval design and its benefits.
[1]. Harnessing large language models for text-rich sequential recommendation. ACM on Web Conference 2024.
[2]. Tallrec: An effective and efficient tuning framework to align large language model with recommendation. 17th ACM Conference on Recommender Systems.
> W3 & Q3
Thank you for your advice on evaluating popular items and experienced users for our LLM-ESR.
In our overall experiments (__Table 1__ in the paper), we categorize users and items into tail and head groups. Here, "Head User" and "Head Item" refer to popular items and experienced users, respectively. The results show that our LLM-ESR not only improves the performance for long-tail users and items but also enhances the performance of head users and head items compared to all baseline methods.
Furthermore, for a more detailed analysis of balanced performance, we tested LLM-ESR across more granular groups in __Section 4.5__. The results in __Figure 4__ demonstrate that while LLM-ESR slightly affects the performance of extremely popular items, it benefits all other groups. These findings confirm that LLM-ESR has minimal adverse effects on popular items and experienced users.
> Limitation
We really appreciate your suggestion to discuss potential societal impacts. Regarding the long-tail item issue, recommending only popular items can easily create filter bubbles, skewing users' perspectives and trapping them in a narrow range of content. Regarding the long-tail user issue, less active minority groups, such as the elderly, may be further marginalized, leading to greater unfairness in society. Handling long-tail problems in recommender systems is therefore essential. Due to word limitations, we will add more discussion to the revised paper.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, we sincerely appreciate your valuable time and insightful suggestions on our paper again. We hope that we address your concerns by our responses. Since the reviewer-author discussion deadline is approaching, please let us know if you have any other questions. We are glad to further respond to your concerns. | Summary: The paper addresses the challenges in sequential recommender systems (SRS), particularly the long-tail user and long-tail item issues, which complicate user experience and seller benefits in real-world applications. The authors propose the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR) to mitigate these challenges. The LLM-ESR framework leverages semantic embeddings derived from large language models (LLMs) to enhance SRS without increasing inference load. To tackle the long-tail item problem, the framework employs a dual-view modeling approach that integrates semantics from LLMs with collaborative signals from traditional SRS. For the long-tail user issue, a retrieval augmented self-distillation method is introduced to improve user preference representation by utilizing more informative interactions from similar users.
Strengths: - The work includes extensive experiments, testing multiple aspects of the model's capabilities.
- The approach is quite new. Recommender systems based on LLMs are a promising direction.
Weaknesses: - The paper does not sufficiently and deeply discuss existing work, making the motivation and core idea of the paper seem less convincing, and the innovation of the paper is also insufficient.
- The baselines used in the experiments are limited.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In line 44, the authors mention that existing studies perform poorly due to "ignorance of the true relationship between items." What is the true relationship between items, and how does it affect recommendations?
- Although SASRec is a classic model, it is not reasonable to conclude that all SRSs perform poorly in long-tail scenarios solely based on SASRec. Have the authors analyzed why SASRec performs poorly in long-tail scenarios? Do models that use other techniques specifically for long-tail scenarios have this problem? What are their limitations?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the meticulous and insightful comments, which help us polish the paper. Please find our point-by-point responses to the reviewer's concerns below.
> W1 & Q1
We greatly appreciate the reviewer's suggestion on refining the motivation of our paper. Existing research on long-tail issues, including CITIES [1] and MELT [2], aims to enhance the representation of long-tail items through similar popular items. However, these approaches rely on co-occurrence patterns, which overlook the semantic (expressed as "true" in the current paper) relationships between items as highlighted in our paper. Co-occurrence records inherently have an uneven distribution, often skewing item embeddings toward popularity and exacerbating the long-tail problem. In contrast, semantic relationships encoded by LLMs are unaffected by item popularity, which motivates our use of LLMs to correct the embedding distribution in SRS.
For illustration, we visualized the item embeddings of SASRec, CITIES, MELT, our LLM-ESR (concatenate the semantic embedding $\mathbf{e}^{se}$ and collaborative embedding $\mathbf{e}^{co}$), and LLM using t-SNE, as shown in __Figure 1__ of the __Rebuttal PDF__. We grouped the items into four categories based on their popularity. The t-SNE figures reveal that the embeddings of SASRec, CITIES, and MELT tend to cluster according to item popularity. In contrast, the distribution of LLM embeddings is more uniform, indicating that the semantic relationships are not skewed by popularity. Furthermore, the embeddings of our LLM-ESR also show a more even distribution, validating that our method effectively corrects the embedding distribution in SRS and enhances the performance of long-tail items.
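A hedged sketch of this visualization procedure with stand-in data (`emb` and `pops` are random placeholders, not real item embeddings or interaction counts):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
emb = rng.standard_normal((120, 16))     # stand-in item embeddings
pops = rng.integers(1, 1000, size=120)   # stand-in interaction counts

# Split items into four popularity groups via quartiles, then project
# the embeddings to 2-D with t-SNE for plotting (coloring by `groups`).
groups = np.digitize(pops, np.quantile(pops, [0.25, 0.5, 0.75]))
xy = TSNE(n_components=2, perplexity=20, random_state=0).fit_transform(emb)
```

A scatter plot of `xy` colored by `groups` then reveals whether the embedding space clusters by popularity.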
We commit to including the supplementary distribution experiments and refining the illustration of our motivation in the revised paper.
[1]. Jang, Seongwon, et al. Cities: Contextual inference of tail-item embeddings for sequential recommendation. 2020 IEEE International Conference on Data Mining (ICDM). IEEE, 2020.
[2]. Kim, Kibum, et al. MELT: Mutual Enhancement of Long-Tailed User and Item for Sequential Recommendation. Proceedings of the 46th international ACM SIGIR conference on Research and development in information retrieval. 2023.
> W2 & Q2
Thank you for the advice to illustrate the limitations of existing methods more clearly.
1. Due to space limit, we present the preliminary experimental results on SASRec in the current paper. In __Figure 5__ and __Figure 6__ of __Appendix C.2__, we display the grouped performance of GRU4Rec and Bert4Rec based on item popularity and user interaction count. These results demonstrate that Bert4Rec and GRU4Rec also exhibit unsatisfactory performance for items and users with fewer interaction records, verifying that they also suffer from long-tail problems, not just SASRec.
2. As mentioned in our response to Q1, the main reason for long-tail problems lies in the skewed distribution of item embeddings. Additionally, to illustrate that other SRS models also have the long-tail issue, we present the item embedding distributions of GRU4Rec and Bert4Rec in __Figure 2__ of the __Rebuttal PDF__. The results indicate that the embeddings of GRU4Rec and Bert4Rec are also clustered according to item popularity. Our LLM-ESR method helps alleviate these skewed distributions, thereby enhancing the performance of long-tail items.
3. MELT and CITIES are two significant works designed for long-tail scenarios. We have compared their performance in __Table 1__ and conducted a more detailed group analysis of their long-tail results in __Figure 4__ of the current paper. The results demonstrate that the proposed LLM-ESR consistently outperforms these two long-tail techniques. As mentioned in response to Q1, their main limitations lie in their reliance on co-occurrence records to enhance long-tailed items and users, which did not address the skewed embedding distribution issue (as shown in __Figure 1 (b)__ and __(c)__ in the __Rebuttal PDF__).
---
Rebuttal Comment 1.1:
Comment: Thanks for the analysis and clarification, and I will raise the rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply. Again, we really appreciate your valuable time and insightful suggestions. | null | null | Rebuttal 1:
Rebuttal: We really appreciate your valuable time and insightful suggestions. The referred figures and tables of results in the rebuttal are included in the supplement PDF (i.e., __Rebuttal PDF__).
Pdf: /pdf/249b73a4192de7232766b14590739771e58f2163.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Evaluate then Cooperate: Shapley-based View Cooperation Enhancement for Multi-view Clustering | Accept (poster) | Summary: This paper studies multi-view clustering and seeks to investigate the view cooperation issue. The authors consider DMVC as an unsupervised cooperative game and regard each view as a participant. Compared with the existing methods, this consideration is new and interesting. Based on the novel idea, the authors proposed SCE-MVC, a novel shapley-based cooperation enhancing multi-view clustering method. The paper is well-organized. The experiments are convincing.
Strengths: 1. The paper proposes a new point also an interesting point for multi-view clustering tasks, i.e., considering the multi-view collaboration as a cooperative game.
2. The experiments are sufficient and convincing. The authors validate the method from many aspects. The proposed SCE-MVC obtains much better performance on six diverse datasets.
Weaknesses: 1. Figure 2 is confusing. The specific structure of View Cooperation Enhancing Module is not clearly presented.
2. There are many formulas and symbols. It is suggested to add a notation table.
3. Although the authors try to explain model (1), it is still difficult to understand Shapley Value from the model. In addition, many variables are not clearly explained. The authors should present more information about the model and explain all variables used in this model, such as S_i, {i}, s\{i}, etc.
Technical Quality: 4
Clarity: 3
Questions for Authors: The article involves rich theoretical and mathematical knowledge. I have a question to the designation of the model: Which design is the key to improving model performance?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Illustration in Figure 2:**
Thanks. Improvements have been made to the View Cooperation Enhancing Module of Figure 2, as shown in Figure 1 of the global response, which now presents the details of gradient modulation within this module.
**2. Notation table:**
Thanks for your suggestions. A notation table will be added in the final version.
**3. Key to improving model performance:**
Thanks. Previous DMVC methods did not adequately consider **view cooperation**; the joint methods in particular may let the fusion result collapse onto one or a few views. Building upon the fundamental assumption of multi-view learning that each view contains complementary information beneficial for downstream tasks, fully leveraging each view and enhancing their cooperation is the key to better model performance.
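As a purely hypothetical sketch of contribution-aware gradient modulation (the paper's actual rule in Eqs. 15-16 may differ), one could slow down the updates of the dominant view:

```python
def modulation_coefficients(phis, alpha=1.0):
    """Hypothetical per-view learning-rate scalers derived from Shapley
    contributions: views contributing more than average are slowed down,
    letting under-contributing views catch up during training."""
    avg = 1.0 / len(phis)
    return [min(1.0, (avg / p) ** alpha) if p > 0 else 1.0 for p in phis]

coeffs = modulation_coefficients([0.9, 0.1])   # dominant view is damped
```

With contributions `[0.9, 0.1]`, the dominant view's coefficient drops below 1 while the weak view trains at full rate, pushing contributions toward balance.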
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. All my concerns have been addressed. After reading all the reviewers' comments and responses, I decided to raise the score.
---
Reply to Comment 1.1.1:
Title: Follow Up for reviewer CDST
Comment: Thank you for your valuable insights on our work. We will eloquently present our idea of view cooperation in the final version. | Summary: The author introduces a Shapley-based cooperation enhancement framework aimed at fostering collaboration among different views. The SCE-MVC method incorporates cooperative game theory, considering each view as a participant in the model and assessing their contributions using the Shapley Value.
Strengths: Viewing each view as an individual player within game theory represents a fresh perspective in multi-view clustering. Also, enhancing clustering performance through balancing view contribution is both well-founded and innovative.
Weaknesses: 1. Using the SCE module in an alignment-based framework only provides a marginal improvement to the model. Does this imply that the SCE module is ineffective in the alignment-based framework?
2. The view contributions of alignment-based method is much balanced than view contributions of joint methods. Does this imply that the alignment-based method is much better than the joint method? It's not reasonable since the clustering performance of alignment-based methods may not necessarily be better than that of joint methods.
3. Is the complexity of computing Shapley values truly O(n!)? When dealing with a larger number of views, can this evaluation framework still be utilized for computation?
4. Are the loss functions L in Eqs (15) and (16) on page 6 the same? If so, there is a problem of inconsistent dependent variables. In addition, $D_ij$ in Eq. (9) is a scalar and should not be bolded.
Technical Quality: 3
Clarity: 2
Questions for Authors: The alignment-based method proposed in Theorem 1 will make the contribution values of several views the same. Combined with the experimental results in Table 2, does this mean that the end point of the view contribution optimization proposed in this paper is contrastive learning? If not, please explain in detail the difference between the method in this paper and the contrastive learning method?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. the use of SCE module on alignment-based methods:**
Thanks. The SCE method consists of two modules: the View Contribution Evaluation Module and the View Cooperation Enhancing Module. For alignment-based methods, the View Contribution Evaluation Module obtains the contributions of views, where these contributions are remarkably close in the alignment-based frameworks, agreeing with the results deduced from our theory.
| Dataset | Method | $\phi_1$ | $\phi_2$ |
|------------|--------------|-----|-----|
| CUB | InfoNCE+Kmeans | 0.491 | 0.509 |
| CUB | ProIMP | 0.556 | 0.443 |
| Caltech101-7 | InfoNCE+Kmeans | 0.484 | 0.516 |
| Caltech101-7 | ProIMP | 0.489 | 0.511 |
Building upon these already-consistent view contributions, the limited enhancement brought by the View Cooperation Enhancing Module can be explained. Our experiments and analysis of alignment-based methods with/without SCE in the paper are not aimed at showcasing the enhancement SCE brings to the model, but rather at validating the integrity of our theory and the soundness of the View Contribution Evaluation Module.
**2. Is the alignment-based method superior to the joint method?**
Thanks. Given a DMVC framework, the SCE module can evaluate the cooperation among views and make optimizations based on the assessment. If the cooperation among views is insufficient, it indicates potential for improvement upon the original clustering performance, while it does not imply that the original clustering results are poor. For example, on the Caltech101-7 dataset, while DMJC may not match ProIMP in view cooperation, it exhibits superior clustering performance. Moreover, the inconsistent view cooperation of DMJC suggests that using the SCE module can yield remarkable enhancements to the model. Our work, from the perspective of view contributions, highlights potential limitations of joint methods, and does not assert that alignment-based methods are superior to joint methods in clustering performance.
| Dataset | Method | $\phi_1$ | $\phi_2$ | ACC |
|--------------|--------|-------|-------|-------|
| Caltech101-7 | ProIMP | 0.489 | 0.511 | 0.382 |
| Caltech101-7 | DMJC | 0.968 | 0.032 | 0.469 |
**3. Algorithmic complexity of SCE module:**
Thanks. SCE module involves computing the fusion distribution for any combination of views. With V views, the complexity of SCE amounts to $O(2^V)$. While the $O(2^V)$ complexity is manageable for a small number of views, it can become cumbersome with a larger number of views. To mitigate the computational burden of calculating Shapley values for numerous views, approximate algorithms can be employed. For instance, the TreeSHAP [1][2] algorithm can compute Shapley values **in polynomial time**, offering a more efficient approach.
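The exact computation behind this $O(2^V)$ cost can be sketched as follows (the toy fusion-quality function is illustrative, not the paper's):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating every coalition (O(2^V) value calls)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # weight of this coalition in the Shapley average
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[p] += w * (value(S | {p}) - value(S))
    return phi

# Toy fusion-quality function over two views, where view 1 dominates.
quality = {frozenset(): 0.0, frozenset({1}): 0.9,
           frozenset({2}): 0.1, frozenset({1, 2}): 1.0}
phi = shapley_values([1, 2], quality.get)
```

By the efficiency axiom, the values sum to the grand coalition's quality, so normalized contributions fall out directly; here `phi` comes out to 0.9 and 0.1 for the two views.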
**4. Representation of symbols in the formula:**
Thanks. The $L$ in Equation 15 is an abbreviation for $L(\theta^{(v)}_t)$ in Equation 16. The representation of the formulas will be unified in the final version.
**Reference**
[1] Yang J. Fast TreeSHAP: Accelerating SHAP value computation for trees[J]. arXiv preprint arXiv:2109.09847, 2021.
[2] Muschalik M, Fumagalli F, Hammer B, et al. Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(13): 14388-14396. | Summary: The study centers on improving task performance via deep multi-view clustering (DMVC) and fostering cooperation among different views. Specifically, the study evaluates view contributions, emphasizing the significance of strengthening cooperation among views.
Strengths: Considering multi-view tasks from a collaborative standpoint represents a novel approach, with the paper's motivation being notably fresh. Moreover, the paper elucidates potential contribution imbalances in the joint method and addresses them through the SCE method, thereby enhancing cooperation among views.
Weaknesses: When dealing with datasets comprising more than two views, such as three views, how can one assess whether the contribution of the views has become more evenly distributed after employing SCE? While the paper visually presents the contributions of the views, could a quantitative method be provided for this evaluation?
Technical Quality: 3
Clarity: 3
Questions for Authors: In the unsupervised multi-view scenario, what is the physical meaning of the contribution value of each view proposed in this paper? What is the relationship between the quantitative value of the view's contribution and the clustering performance of a single view ?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Quantification of the equilibrium level of view contributions:**
Thanks. Due to the normalized characteristic of view contributions, the cooperation level among views with/without SCE can be compared by calculating the variance of view contributions, denoted as $D(\phi)$. A smaller variance indicates a more consistent contribution among views, reflecting improved cooperation between views.
The table below illustrates the values of $D(\phi)$ for DMJC with/without SCE. It is evident that using SCE results in a lower $D(\phi)$, indicating better cooperation among views.
| $D(\phi)$ | Caltech101-7 | CUB | UCI-digit | HandWritten | STL10 | Reuters |
|--------|:------------:|:-----:|:---------:|:-----------:|:-----:|:-------:|
| DMJC | 0.219 | 0.225 | 0.057 | 0.054 | 0.040 | 0.006 |
| DMJC+SCE | 0.004 | 0.154 | 0.046 | 0.049 | 0.029 | 0.001 |
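The $D(\phi)$ values above can be reproduced with a plain population variance (our assumption about the exact formula):

```python
def contribution_variance(phis):
    """Population variance of normalized view contributions; we assume
    D(phi) in the table is this plain population variance. Smaller means
    more balanced cooperation across views."""
    mean = sum(phis) / len(phis)
    return sum((p - mean) ** 2 for p in phis) / len(phis)

# Reproduces the Caltech101-7 entries: DMJC vs DMJC+SCE, using the view
# contributions (0.968, 0.032) and (0.567, 0.433) reported elsewhere
# in the rebuttal.
d_dmjc = contribution_variance([0.968, 0.032])
d_sce = contribution_variance([0.567, 0.433])
```

Rounded to three decimals these give 0.219 and 0.004, matching the table.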
**2. Physical meaning of view contributions:**
Thanks. In the unsupervised multi-view scenario, the physical meaning of view contributions lies in their influence on fusion: the greater a view's contribution, the more significant its influence on the fusion process.
**3. Relationship between view contribution and view performance:**
Thanks. There is no direct link between a view's contribution and its individual performance. Based on the fundamental assumption of multi-view learning that each view contains complementary information beneficial for downstream tasks, examining the performance of a single view in isolation is of little meaning. Furthermore, it is infeasible to evaluate the quality of a single view under unsupervised conditions. Hence, our aim is to enhance cooperation among views by ensuring a more balanced view contribution.
Strengths: 1) The paper integrates the Shapley value from game theory into DMVC, allowing for precise assessment of each view's contribution.
2) Theoretical analysis is thorough, with clear and intuitive figures.
3) The manuscript is well-organized and clearly written.
Weaknesses: The article categorizes DMVC into alignment-based and joint methods. What criteria were used for this classification? Furthermore, only one DMJC method is used as a representative for joint methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Figure 3(a) indicates that the method does not equalize the contribution value of each view. Why do the contribution values become identical after adding the SCE module to the comparison-based method in Table 2? Please provide a detailed discussion.
2) What criteria were used to classify DMVC?
3) Is DMJC representative of joint methods? Have other joint methods employed similar frameworks?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Different characteristics of joint method (Figure 3(a)) and alignment-based method(Table 2):**
Thanks. Figure 3(a) illustrates the change of view contributions of DMJC (a joint method) with/without SCE. In the joint methods, views' representations are optimized in their respective spaces, leading to uneven contributions in the fusion process. Even with the utilization of the SCE module, it can only alleviate the imbalance in view contributions without guaranteeing complete consistency. This is an inherent characteristic of the joint method framework.
| Dataset | Method | $\phi_1$ | $\phi_2$ | ACC |
|--------------|------------|-------|-------|-------|
| CUB | DMJC | 0.974 | 0.026 | 0.758 |
| CUB | DMJC+SCE | 0.892 | 0.108 | 0.797 |
| Caltech101-7 | DMJC | 0.968 | 0.032 | 0.469 |
| Caltech101-7 | DMJC+SCE | 0.567 | 0.433 | 0.583 |
On the other hand, the two alignment-based methods in Table 2 inherently bring views' representations closer, leading to more consistent contributions between views. With the application of the SCE module, it is possible to achieve essentially consistent view contributions. These experimental results validate the theoretical framework proposed in our study.
| Dataset | Method | $\phi_1$ | $\phi_2$ | ACC |
|--------------|------------|-------|-------|-------|
| CUB | ProIMP | 0.556 | 0.443 | 0.825 |
| CUB | ProIMP+SCE | 0.484 | 0.516 | 0.832 |
| Caltech101-7 | ProIMP | 0.489 | 0.511 | 0.382 |
| Caltech101-7 | ProIMP+SCE | 0.499 | 0.501 | 0.382 |
**2. Criteria of classifying DMVC:**
Thanks. Our classification of DMVC methods is inspired by [1], which categorizes multi-view representation learning into (1) joint methods, (2) alignment methods, and (3) shared and specific methods, based on whether the representations of views are brought closer in the same space. In shared and specific methods, shared representations are brought closer to each other in the same space, while specific representations are individually optimized in different spaces, which can be seen as a combination of (1) and (2). Similar classification methods are also found in [2], [3], [4]. Drawing on this classification approach, our work divides DMVC into joint methods, alignment-based methods, and other methods.
**3. Similar frameworks to DMJC:**
Thanks. DMJC-style methods have found numerous applications in MVC research in recent years. Approaches such as SURER [5], SGDMC [6], and DFP-GNN [7] employ similar strategies, namely optimizing view representations in their respective spaces and using the sharpening of fused distributions as self-supervised signals to guide training.
**Reference**
[1] Jia X, et al. Human collective intelligence inspired multi-view representation learning—Enabling view communication by simulating human communication mechanism[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(6): 7412-7429.
[2] Baltrušaitis T, Ahuja C, Morency L P. Multimodal machine learning: A survey and taxonomy[J]. IEEE transactions on pattern analysis and machine intelligence, 2018, 41(2): 423-443.
[3] Li Y, Yang M, Zhang Z. A survey of multi-view representation learning[J]. IEEE transactions on knowledge and data engineering, 2018, 31(10): 1863-1883.
[4] Jia X, et al. Semi-supervised multi-view deep discriminant representation learning[J]. IEEE transactions on pattern analysis and machine intelligence, 2020, 43(7): 2496-2509.
[5] Wang J, Feng S, Lyu G, et al. SURER: Structure-Adaptive Unified Graph Neural Network for Multi-View Clustering[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(14): 15520-15527.
[6] Huang Z, Ren Y, Pu X, et al. Self-supervised graph attention networks for deep weighted multi-view clustering[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2023, 37(7): 7936-7943.
[7] Xiao S, Du S, Chen Z, et al. Dual fusion-propagation graph neural network for multi-view clustering[J]. IEEE Transactions on Multimedia, 2023, 25: 9203-9215.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses, which have addressed my concerns. I am in favor of the cooperation idea for MVC and vote for acceptance.
---
Reply to Comment 1.1.1:
Title: Follow Up for reviewer TWtF
Comment: Thank you for your positive comments. Cooperation among views offers a fresh perspective for the DMVC methods. | Rebuttal 1:
Rebuttal: We thank the SAC, AC, and PCs for their efforts and constructive comments, which are helpful in further improving the quality of our manuscript. We respond to your questions carefully one by one, and we hope our responses can address your concerns.
Note that there are five tables and one figure in the attached PDF, corresponding to RQ1 for Reviewer TWtF, RQ1 for Reviewer URj2, RQ1 and RQ2 for Reviewer v5TM, and RQ1 for Reviewer CDST.
Pdf: /pdf/106e52cfea65f5853ad3bf5fed58aa88764a9544.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper first considers DMVC as an unsupervised cooperative game in which each view can be regarded as a participant. The authors then introduce the Shapley value and propose a novel MVC framework termed Shapley-based Cooperation Enhancing Multi-view Clustering (SCE-MVC), which evaluates view cooperation with game theory. In summary, this paper is well written and shows clear merit.
Strengths: -- An MVC framework was designed that utilizes game theory and Shapley values to evaluate and elevate inter-view cooperation.
-- The experiments were sufficient, and the analysis of the experimental results was adequate.
Weaknesses: -- In this paper, why utilize $\phi_i$ to measure the contribution of views instead of the view weight $w_i$? The article's explanation on this is not clear enough, and there is a lack of experiments to demonstrate the relationship between $\phi_i$ and $w_i$.
Technical Quality: 2
Clarity: 3
Questions for Authors: I have the following questions:
-- What will happen if the view contributions are pushed away from each other?
-- Are there scenarios where narrowing the contribution between views fails to enhance the effectiveness of multi-view clustering?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. The relationship between $\phi$ and $w$:**
Thanks. Evaluating the view contribution using $\phi$, calculated via the Shapley value, instead of relying solely on pre-set weights $w$, stems from a systemic perspective on the process of multi-view clustering. In multi-view representation learning and fusion, conventional static weights ($w$) overlook the intrinsic interactions among views. In contrast, the Shapley value's distinctive marginal analysis capability allows for precise quantification of each view's average incremental contribution to the system's overall performance. Therefore, $\phi$ reveals not only the fundamental worth of each view but also the additional value the views generate through dynamic interactions.
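To make the marginal-analysis idea concrete, here is a minimal, generic sketch of exact Shapley value computation for two views. The coalition accuracies in `acc` and the view names are hypothetical placeholders for illustration, not numbers from the paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: each player's weighted average marginal
    contribution over all coalitions of the remaining players."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Hypothetical clustering accuracy achieved by each coalition of views.
acc = {
    frozenset(): 0.0,
    frozenset({"view1"}): 0.70,
    frozenset({"view2"}): 0.40,
    frozenset({"view1", "view2"}): 0.80,
}
phi = shapley_values(["view1", "view2"], lambda s: acc[s])
# phi["view1"] = 0.55, phi["view2"] = 0.25; by the efficiency property
# they sum to the accuracy of the full coalition (0.80).
```

Note how view1's $\phi$ exceeds its standalone accuracy contribution weighting would suggest: the Shapley value credits it for the increment it brings when joining view2, which a static weight $w$ would not capture.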
**2. Pushing away contributions:**
Thanks. Pushing the contributions of different views away from each other can be seen as the antithesis of our SCE module. In this context, the fusion of views collapses onto one or a few views, failing to leverage the complementary information across views.
**3. Counterexamples:**
Thanks. There are situations where bringing the contributions of views closer together does not enhance clustering performance. The fundamental assumption of DMVC is that each view contains complementary information beneficial for downstream tasks, so better view cooperation leads to better model performance. However, if a dataset violates this assumption, i.e., a certain view contains a significant amount of noise and erroneous information that is detrimental to the clustering task, increasing the contribution of that view will result in an overall decrease in clustering performance.
---
Rebuttal Comment 1.1:
Title: Thanks to the reply.
Comment: Thank you for your kind responses. Most of my concerns have been resolved.
---
Rebuttal Comment 1.2:
Title: Thanks for your detailed responses.
Comment: I have carefully reviewed the authors' responses. Since all my concerns have been tackled, I understood the motivation, contribution, and experimental analysis more clearly. Combined with the comments from other reviewers, I decided to raise the score of this paper. | null | null | null | null | null | null |
Unified Lexical Representation for Interpretable Visual-Language Alignment | Accept (poster) | Summary: The authors propose a method based on lexical representation for Visual-Language Alignment (VLA). The method relies on aligning two strong unimodal models, namely DINOv2 for the visual modality and Llama 2 for the text modality. Each backbone is fine-tuned with a few adapters or additional layers. The two modalities use separate codebooks mapping to a joint vocabulary. The authors also propose an overuse penalty to limit the excessive activation of irrelevant tokens. Finally, the authors introduce the PatchDis metric to measure patch-level alignment. Evaluation on zero-shot cross-modal retrieval datasets shows state-of-the-art performance of the method with the compared baselines. Additional experiments on the patch-level representation and sparsity showing the effectiveness of the method are also reported.
Strengths: - The authors proposed an effective and interpretable Lexical Representation approach for Visual-Language Alignment
- The proposed method is described clearly
- The experimental results show state-of-the-art performance in comparison to the baseline selected
Weaknesses: - The vocabulary is based on the Llama tokenizer which, as stated in the limitations, may split words into meaningless sub-word tokens and may also lack longer relevant words.
- The latent baselines for zero-shot cross-modal retrieval do not include recent methods such as BEiT-3 [Wang, Wenhui, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal et al. "Image as a foreign language: Beit pretraining for vision and vision-language tasks." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19175-19186. 2023.]
- One main difference with the compared methods could be the use of the DINOv2 visual backbone and the Llama 2 textual backbone, it is possible the proposed method benefits from these strong backbones. All methods' visual and text backbones (and their potential pretraining) should be discussed in detail to enable the readers to properly judge the merit of the proposed method
Technical Quality: 3
Clarity: 3
Questions for Authors: - Have the authors explored a simpler approach of just selecting the nouns, adjectives, and non-auxiliary verbs in a caption instead of the LLM-based lexical predictor? How many keywords are extracted by the LLM on average per caption? Does it vary with the length of the caption?
- Eq (3), what is x with any index?
- Table 1: it would be good to also indicate the amount of (unimodal) pretraining data (if any) used for each method e.g. the amount of data used for DINOv2 and LLama 2 for the proposed method. What are the test splits used for this experiment? Commonly, results are reported based on the splits in [Karpathy, Andrej, and Li Fei-Fei. "Deep visual-semantic alignments for generating image descriptions." In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3128-3137. 2015.]. If these are not used it would be good to use them as well.
- Figure 3: it would be good to provide the class <-> color mapping here.
- Figure 4: bottom row, how were the local patches selected?
- Figure 5: what does the vertical black dotted line represent? How were the different sparsity level selected?
- The authors mention in the limitations that their “lexical vocabulary based on the Llama 2’s tokenizer (...) splits a word into several sub-word tokens.” does that also mean that some rather rare long words would not appear in the vocabulary? Have the authors studied what are these missing words? Further down the authors state “Given that random initialize a word-level vocabulary and additionally learn a projector from sub-word tokens to word-level tokens works poorly, we regard designing a word-level vocabulary that still benefit from the LLMs as a future work.”, it seems the author did conduct some experiments towards that. Even if the results were not conclusive it would be interesting to share what was tried and what was the performance.
Typos etc:
- p2-l52: missing space “LexVLAto”
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1. Tokenizer-based vocabulary is not perfect.
Thanks. Please note that LexVLA has achieved SOTA performance in most experiments, demonstrating its effectiveness. While it is nontrivial to design a perfect vocabulary that handles all corner cases, we take this as important future work and have already discussed it thoroughly in the limitations section (L289-295). Nevertheless, our main contributions are still well supported by our empirical evidence, and our work is self-contained.
## W2. Compare with BEiT-3?
Thanks for suggesting the paper of BEiT-3 [1]. We will address how it significantly differs from our work (and thus not included in our original submission), while we will cite and discuss it in the related work in the revision. Particularly,
1) Differences in Goals and Training Strategies: BEiT-3 uses a masked data modeling strategy to develop a general-purpose model utilizing both single- and multi-modal data; in contrast, our approach focuses on developing a specific lexical representation for images and texts.
As our target is to learn interpretable lexical representation for images/texts, and BEiT-3 has no such interpretable representations, comparing our method with BEiT-3 would not be informative for analyzing the effectiveness of lexical methods.
2) The comparison between lexical methods and dense methods is to demonstrate that introducing lexical representation does not degrade alignment performance, as shown in the retrieval task. Hence we compare with the contrastive-trained methods, i.e., CLIP and its variants, which are more relevant for assessing the effectiveness of lexical representation.
## W3. The proposed method possibly benefits from strong backbones?
Thanks for this comment. But actually, we didn't propose a general alignment method for any two visual and textual backbones. LexVLA does employ the unique properties of DINOv2 and Llama2. Specifically, as highlighted in our first contribution (L68-L70), we utilize single-modal pre-trained models for vision-language alignment tasks to exploit their unique properties, which cannot be fully captured by contrastive objectives alone. The analysis of the results in Sect 4.1 and 4.2 reflects the advantages of the chosen backbone. As we use standard DINOv2 and Llama2, we will elaborate their pretraining details into Supplementary of our paper as suggested.
## Q1-1. Simpler baseline?
Thanks for your suggestion. We have actually explored the approach you suggested, which is a typical and simple method called Bag-of-Words (BOW). The results were poor. This aligns with findings in previous studies [2, 3, 4]; hence we did not include these results in our submission. We report the results in the rebuttal PDF.
The BOW method limits the model's ability to use information effectively. It struggles with vocabulary [5] and semantic [6] mismatches. LexVLA aims to address these limitations by leveraging language models for lexical prediction rather than relying solely on captioning.
## Q1-2. How many keywords are extracted? Does it vary with the length of the caption?
We want to clarify that as stated in Sec. 3.2, when we use a LLM as a lexical predictor, the caption is mapped into a token embedding $y_{text}$, which is then mapped to a lexical representation $s_{text}$.
The token embedding is not made up of specific keywords but a compact representation.
## Q3. Details on pretraining data and test splits?
1) All pretrained models are public released models. Llama 2 is trained on 2 trillion tokens and DINOv2 is trained on 142M images.
2) Following previous approaches, the test splits are from [7].
We will include this information in the revision.
## Q4. Questions about Figures.
1) Fig 3: Thanks. We have added the label mapping in the rebuttal PDF.
2) Fig 4: In the bottom row, we randomly select one object in the image and then annotate the patches that cover the object to calculate the lexical representation on patches in the free form.
3) Fig 5: a) Vertical black dotted line: The vertical black dotted line represents the sparse level of our final LexVLA model, which is reported (98.27\%, 296 activated tokens) on P9, L278.
b) Different sparsity levels: As on P4, L122-L127, sparse levels can be selected via Top-k or Value thresholding. The sparse levels in Fig. 5 were selected using Value thresholding.
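For readers unfamiliar with the two schemes, here is a generic illustration (not the authors' code) of Top-k versus value-threshold sparsification of a lexical score vector; the scores are made-up placeholders.

```python
def topk_sparsify(scores, k):
    """Keep only the k largest activations; zero out the rest."""
    keep = set(sorted(range(len(scores)), key=lambda i: scores[i])[-k:])
    return [s if i in keep else 0.0 for i, s in enumerate(scores)]

def value_sparsify(scores, tau):
    """Keep activations at or above the threshold tau; zero out the rest."""
    return [s if s >= tau else 0.0 for s in scores]

# Hypothetical activation scores over a tiny 5-token vocabulary.
scores = [0.05, 0.90, 0.30, 0.02, 0.60]
sparse_topk = topk_sparsify(scores, 2)    # [0.0, 0.9, 0.0, 0.0, 0.6]
sparse_val = value_sparsify(scores, 0.5)  # [0.0, 0.9, 0.0, 0.0, 0.6]
```

Top-k fixes the number of surviving tokens per example, whereas value thresholding lets the count vary with the activation distribution, which is why sweeping the threshold yields the different sparsity levels plotted in Fig. 5.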
## Q5. Question about lexical vocabulary.
Thanks. We tokenized all the words in the test set and found that even the most frequently split words did not appear often, and most of them retained their semantics, for example:
bathroom -> bath, room;
motorcycle -> motor, cycle;
sidewalk -> side, walk.
While corner cases exist, LexVLA has demonstrated effectiveness and achieved SOTA performance in most experiments. We would take it as an important future work and have thoroughly discussed it in the limitations section of our paper (L289-295).
Please note that our main contributions are well supported, and our work is self-contained.
## Q2 & Typos
Thanks!
The $x$ in Eq(3) should have index $img$.
We will revise all the typos pointed out by the reviewers in the final version and continue to proofread the manuscript to reduce any potential confusion.
[1] Wang et al. “Image as a Foreign Language: BEIT Pretraining for Vision and Vision-Language Tasks.” CVPR 2023
[2] Gao et al. "COIL: Revisit exact lexical match in information retrieval with contextualized inverted list." arXiv:2104.07186.
[3] Xiong, et al. "End-to-end neural ad-hoc ranking with kernel pooling." SIGIR 2017
[4] Formal et al. "SPLADE: Sparse lexical and expansion model for first stage ranking." SIGIR 2021
[5] Croft et al. Search engines: Information retrieval in practice. Vol. 520. Reading: Addison-Wesley, 2010.
[6] Peters, E. et al. “Deep Contextualized Word Representations.” ArXiv abs/1802.05365
[7] Karpathy et al "Deep visual-semantic alignments for generating image descriptions." CVPR 2015
---
Rebuttal Comment 1.1:
Comment: I have read all the reviews and the responses from the authors, they have addressed some of my concerns and I believe integrating most of the discussion would improve the paper enough to update my rating to `5: Borderline accept`. | Summary: The paper proposes LexVLA, a more interpretable VLA framework that learns a unified lexical representation for both modalities without complex design.
LexVLA uses DINOv2 as the visual model and Llama 2 as the language model, proposing an overuse penalty to avoid false discoveries.
LexVLA outperforms baselines on cross-modal retrieval benchmarks, even when fine-tuned on a modest dataset.
Extensive experiments were conducted to analyze LexVLA's performance.
Strengths: 1. The paper is easy to follow.
2. The framework does not require complex design or training configurations, making it more accessible and efficient.
3. LexVLA outperforms baselines on cross-modal retrieval benchmarks, even when compared to models trained on larger datasets.
4. Ablation demonstrates the decision choice and effectiveness of proposed components.
Weaknesses: 1. I can't quite get the novelty of this work. The lexical representation mentioned in the paper is somehow a way to select important information and then map it to the code book. However, the codebook strategy was explored [1]. Especially the visual part, where does the concept of Lexical come in? Can the author elaborate more on this?
2. In Table 1, the improvement is pretty limited in the bottom block compared to using CLIP in the last and first blocks. It makes readers question whether the performance was gained by the DINOv2 representation.
3. The alignment was tested on only one task, it will be more interesting to test on other multimodal tasks such as zeroshot classification, or even grounding since it has DINOv2 representation.
[1] Duan, Jiali, et al. "Multi-modal alignment using representation codebook." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address the questions in the weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Q1-1. Is lexical representation a way to select important information and map it to the codebook?
Thank you for your question. We respectfully disagree with this characterization. Lexical representation is not a codebook strategy in [1]. Learning a unified codebook for multi-modal data is fundamentally different from learning a lexical representation. Here’s how they differ:
1. **Purpose and Approach:** As discussed in Sec. 3, our goal is to learn an interpretable lexical representation for images and image patches. In contrast, the codebook strategy [1] aims to learn unified codes for image and text that are neither interpretable nor explicitly semantically meaningful.
2. **Alignment Mechanism:** As explained in Sec. 3.1, our alignment is conducted at the vocabulary level, not the codebook level. We focus on aligning indexes, which provides more flexibility when training our lexical encoders compared to training codebook learners. In contrast, the codebook strategy [1] attempts to align the codebook embeddings.
3. **Enhanced Interpretability and Efficiency:** As mentioned in the Sec. 2, the lexical representation is widely studied in the field of text information retrieval. This method has been employed in vision-language models due to its good interpretability and high retrieval efficiency. The dense codebook strategy does not meet these properties.
[1] Duan, Jiali, et al. "Multi-modal alignment using representation codebook." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
## Q1-2. Where does the concept of Lexical come in regarding the visual part?
As we introduced on P4, L134-L137, the lexical representation is the output of the visual encoder. As stated on P3, L102-L104, this output is a score vector whose elements indicate the similarity between the image and the corresponding words in the vocabulary.
## Q2. Limited improvement over CLIP?
Thank you for raising this concern. We would like to clarify that our improvements over CLIP are significant and demonstrate our advantages over competitors. Our main paper and the rebuttal PDF provide comprehensive experiments and analysis to support this. Specifically, as discussed in the analysis of the experimental results (L238-L248)
1. **Comparison of different frameworks:** Comparing LexVLA(CLIP) with dense CLIP, our method shows comparable or even better performance. This indicates that our approach achieves well-aligned lexical representations with significantly less multimodal training data (12M) compared to CLIP (1.1B).
Additionally, our model ensures semantic correctness of the tokens, which is more challenging than merely aligning latent features, as discussed on P1, L32-L36.
2. **Further Enhancement with DINOv2 Features:** Replacing CLIP with the DINOv2 encoder, which provides local-inclined features, further improves our results. This demonstrates that good interpretability contributes positively to retrieval performance. This also aligns with our motivation.
3. **Overall Improvement:** Our full model shows substantial improvement over CLIP in our experimental setup, offering superior retrieval performance and better interpretability, as detailed in Table 1 of the rebuttal PDF file.
## Q3. More interesting to test the alignment on grounding multimodal tasks.
Thank you for your suggestion. Our work primarily focuses on learning interpretable lexical representation for images, our experiments are extensive and support our claims and contributions. Importantly, our paper does include multiple multimodal tasks. Specifically:
As stated in Section 3.4 and Section 4.2, we proposed a grounding task called PatchDis. This task is inspired by patch-level segmentation but is designed for models that are not trained on fine-grained alignment tasks such as segmentation or detection. Our experiments, as reported in Table 2, demonstrate that our model performs significantly better on this grounding task. This highlights the effectiveness of our approach.
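As a rough illustration of how a patch-level mIoU of the kind PatchDis reports could be computed, here is a generic sketch; the per-patch labels are hypothetical and this is not the paper's exact evaluation code.

```python
def patch_miou(pred, gt, num_classes):
    """Mean intersection-over-union across classes, computed from
    per-patch predicted vs. ground-truth class labels."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both pred and gt
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Hypothetical class labels for six image patches.
pred = [0, 0, 1, 1, 2, 2]
gt   = [0, 1, 1, 1, 2, 0]
score = patch_miou(pred, gt, num_classes=3)  # (1/3 + 2/3 + 1/2) / 3 = 0.5
```

A higher score means the patch-level lexical representations assign the correct semantics to the correct spatial regions, which is what the mIoU gap between LexVLA and CLIP in Table 2 quantifies.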
We agree that exploring additional multimodal downstream tasks would be interesting. We are keen to investigate further applications of our proposed LexVLA in the future. Since our current work is self-contained, this could be a valuable direction for future research.
We will clarify these points further in our revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal!
Upon examining the comparison between the original CLIP model and its modified version, I observed that there is no definitive winner; the original CLIP showed superior performance on MSCOCO, while the new Lexical version excelled on Flickr30k.
This outcome is somewhat anticipated, as the aim is to find an exact match between the tokens. However, this might not hold true in retrieval tasks due to the presence of a lot of redundant information in the background. I believe that an alternative evaluation setting is needed to showcase the full potential of this model.
---
Rebuttal 2:
Comment: Thanks for the opportunity to address your further concerns in response to our rebuttal.
## Regarding the alternative evaluation:
Thank you for the suggestion. We report the PatchDis evaluation result of LexVLA (CLIP) as follows:
| **Model** | **mIoU** |
|----------------|-----------|
| Random Dis. | 5.0 |
| CLIP | 5.3 |
| VDR | 12.6 |
| *LexVLA (CLIP)*| *13.9* |
| LexVLA | **36.3** |
These results clearly show that our LexVLA (CLIP) provides a better and more interpretable lexical representation of visual patches compared to the original CLIP and previous lexical method. This further confirms the effectiveness of our proposed approach.
## Regarding the results in Table 1
Thank you for your careful review. We would like to supplement that the results in Table 1 demonstrate the effectiveness of LexVLA, even with significantly less multi-modal training data (12M vs. 1.1B).
Particularly, while CLIP (1.1B) outperforms LexVLA (CLIP) in MSCOCO image-to-text results, it underperforms in MSCOCO text-to-image and Flickr30k image-to-text and text-to-image experiments. Moreover, our full LexVLA model outperforms CLIP (1.1B) in all settings.
---
Rebuttal 3:
Comment: Dear Reviewer a8Qd,
We would like to know if our previous response addressed your concern. We have reported the result of the alternative evaluation in another comment section, which shows LexVLA (CLIP) provides a better and more interpretable lexical representation of visual patches compared to the original CLIP. If there are any additional comments or suggestions you would like to share, we would be more than happy to discuss them further.
Thank you again for your valuable feedback.
Best regards,
The Authors | Summary: This paper presents LexVLA, a vision language alignment method integrating a pretrained vision model and a pretrained language model. To retain the original capabilities of pretrained single-modal models, it adopts a unified lexical representation with unique codebooks. Moreover, the vision model is tuned with a projector, and the text model is tuned with LoRA. A metric for patch-level alignment is proposed to evaluate interpretability. Experiments are conducted on retrieval benchmarks.
Strengths: - The paper is well-written and easy to follow.
- The content is rich. An architecture, an objective, and a metric are proposed.
- Inserting lightweight components to tune vision and language models to learn lexical representation while refraining from original capability degradation is intuitive.
- The LexVLA can be applied to various architectures.
- Experiments are conducted on multiple benchmarks.
Weaknesses: - Even though a new metric is proposed, the effectiveness of its reflection on interpretability is not verified quantitatively or qualitatively.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How accurately or reliably does the proposed PatchDis metric evaluate/reflect the interpretability of patch-level visual lexical representations?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: While multiple technical contributions have been made in this paper, some of the components lack rigorous verification.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Q1. How accurately or reliably does the proposed PatchDis metric reflect the interpretability of patch-level visual lexical representation?
Thank you for your concern. We have discussed and analyzed this in the main paper. Our proposed PatchDis is a direct metric for assessing the interpretability of patch-level visual lexical representation. Here’s how:
**1) Interpretability of Lexical Representation:** Lexical representation is inherently interpretable, as each dimension corresponds to the activated value of a word/token in the vocabulary. This has been elaborated on the Introduction (P1-L29) and Related Work (P3-L91), particularly referencing works on lexical representation [1, 2, 3].
**2) PatchDis reflects patch-level interpretability:**
For a lexical representation of selected patches to be considered interpretable, it should accurately reflect local semantic information without being influenced by non-selected patches or incorrect semantics. In this context, interpretability indicates how well the image patches are represented with clear and understandable semantics.
Our new metric, PatchDis, is explicitly designed to measure how effectively visual patches represent their intended semantics. As discussed in Sec. 3.4 and Sec. 4.2., PatchDis allows us to measure and compare the interpretability of patch-level features across different models, providing a quantitative evaluation of visual feature interpretability at the patch level.
**3) Visualization and Evaluation:** Our visualization of PatchDis, based on the popular MSCOCO 2017 dataset, provides a qualitative reflection of its effectiveness in terms of interpretability. The segmentation results offer detailed insights into how the model makes matching decisions. This aligns with the definition of interpretability by Biran and Cotton [4], which refers to the degree to which an observer can understand the cause of a decision.
In summary, the evaluation of lexical representation, both quantitative and qualitative, directly reflects this interpretability.
We hope these points address your concern, and we will update these information accordingly if accepted.
[1] Formal, Thibault, Benjamin Piwowarski, and Stéphane Clinchant. "SPLADE: Sparse lexical and expansion model for first stage ranking." Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2021.
[2] Luo, Ziyang, et al. "Lexlip: Lexicon-bottlenecked language-image pre-training for large-scale image-text sparse retrieval." Proceedings of the IEEE/CVF international conference on computer vision. 2023.
[3] Zhou, Jiawei, et al. "Retrieval-based Disentangled Representation Learning with Natural Language Supervision." The Twelfth International Conference on Learning Representations. 2024.
[4] Biran, Or, and Courtenay Cotton. "Explanation and justification in machine learning: A survey." IJCAI-17 workshop on explainable AI (XAI). Vol. 8. No. 1. 2017.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns. I will keep my previous rating. | null | null | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to all the reviewers for their insightful and constructive feedback on our paper. We are delighted that our work has been positively received, and we appreciate the time and effort each reviewer has put into evaluating our submission.
We are particularly grateful for the recognition of several strengths: our paper’s clarity and ease of understanding (Reviewer KFJh, Reviewer a8Qd, Reviewer CyPa), the rich content and novel contributions including the architecture, objective, and metric (Reviewer KFJh, Reviewer CyPa), and the intuitive approach of inserting lightweight components without degrading original capabilities (Reviewer a8Qd, Reviewer CyPa). The flexibility of the LexVLA framework (Reviewer KFJh, Reviewer a8Qd) and the comprehensive experiments conducted on multiple benchmarks (Reviewer KFJh, Reviewer a8Qd) were also appreciated. Additionally, the straightforward design and efficiency of our framework (Reviewer a8Qd), its superior performance on cross-modal retrieval benchmarks (Reviewer KFJh, Reviewer CyPa), and the clear demonstration of the effectiveness of our proposed components (Reviewer KFJh, Reviewer a8Qd) were well-received. Lastly, we are pleased that our method’s effectiveness and interpretability and the state-of-the-art performance compared to baselines (Reviewer CyPa) have been acknowledged.
We also extend our gratitude to the Area Chair for their guidance and support throughout the review process.
Thank you once again for your valuable feedback.
Pdf: /pdf/2725db32c33378a2d22d071b0afa3ac2f57367cd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Hybrid Top-Down Global Causal Discovery with Local Search for Linear and Nonlinear Additive Noise Models | Accept (poster) | Summary: This paper studies the structure learning problem for additive noise models (ANMs) in both linear and nonlinear settings. It proposes a hybrid constraint-based approach to learn the DAG by leveraging local ancestral relationships. The algorithm consists of two steps: ordering search and edge discovery. Correctness is shown, and simulations are conducted to compare with other approaches.
Strengths: - Though the ANM has long been shown to be identifiable, e.g., by RESIT, the high computational complexity and the hardness of nonparametric regression and CI tests stand as roadblocks. The finer analysis and exploitation of local structure in the proposal show potential to tackle this task efficiently;
- The introduction of the proposed method is well written and easy to follow for researchers working in relevant areas.
Weaknesses: - The main contribution of this work is the exploitation of local structure to reduce the number of nonparametric regressions and CI tests. However, despite the quick discussion below Theorems 3.7 and 4.5, there is no explicit and formal statement emphasizing this contribution, nor a comparison with other methods, e.g., RESIT.
- The experiments are preliminary. More setups should be considered to demonstrate the superiority of proposal: e.g. different graph types like scale-free graphs, different number of edges, different noise, recovery criterion like F1 for linear setting, more benchmarks like CAM, GSGES, etc.
- See Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: - As the main contribution of the paper, why are the runtime results in the appendix? It is also suspicious that the runtime for the linear case is slower than the benchmarks in Figure 7, and that the runtime for d=12 is faster than for d=8 in Figure 8. There does not seem to be a significant improvement empirically;
- Since theoretically the number of nonparametric regression and CI tests are reduced, is it possible to establish some statistical guarantee and sample complexity dependence on the sparsity, e.g. in-degree?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No limitation is discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their suggestions on how we might better emphasize the contributions of our work. We respond to Weakness 2 in the general response, adding to our preliminary experiments by comparing our algorithms against many state-of-the-art baselines (CAM, NoGAM, GES, GRaSP, GSP, etc.). We find that both our nonlinear topological sorting algorithm NHTS and our edge pruning algorithm ED outperform baselines across all settings, consistently achieving the highest median $A_{top}$ and $F1$. _These strong empirical results provide convincing support for the superiority of our proposal_. We give further clarification on all other comments below (following the order of the review).
**Formalizing Computational Complexity Contribution**
We thank the Reviewer for their suggestion: we now include a formal analysis to compare the number of high-dimensional nonparametric regressions run by our method with baselines such as RESIT and NoGAM in the worst case, i.e., a fully connected DAG. _In particular, we formally show that NHTS runs significantly fewer high-dimensional nonparametric regressions_ to identify the correct topological ordering. We have included the following Theorem in Section 4 (with proof in appendix) of our paper:
"Theorem 4.6: Consider a fully connected DAG $G=(V,E)$ with nonlinear ANM. Let $d:= |V|$.
Let $n^\mathrm{NHTS}_k$ be the number of nonparametric regressions with covariate set size $k\in[d-2]$ run by NHTS when sorting $V$; we similarly define $n^\mathrm{RESIT}_k$ and $n^\mathrm{NoGAM}_k$ respectively. Then, $n^\mathrm{NHTS}_k = d - k$, and $ n^\mathrm{RESIT}_k = n^\mathrm{NoGAM}_k = k+1$. This implies that for all $k > \frac{d}{2}$, $n^\mathrm{NHTS}_k<n^\mathrm{RESIT}_k = n^\mathrm{NoGAM}_k$."
We would like to note that it is difficult to analyze the reduction in computational complexity in the general case, as NHTS is sensitive to the specific local graph structure of the underlying data generating process. We leave such a characterization to future work: for now, we demonstrate our method's improved runtime and increased sample efficiency in new experimental results; _NHTS ran $9\times$ faster than NoGAM with higher median accuracy on a set of 20 dense ER4 graphs (see PDF attached to general response)_.
Below, we provide a proof sketch to Theorem 4.6:
RESIT [1] and NoGAM [2] both identify leaf vertices in an iterative fashion, regressing each unsorted vertex on the rest of the unsorted vertices; RESIT tests the residual for independence with the covariate set, while NoGAM uses the residual for score matching. Hence, in each step both methods run a number of regressions equal to one plus the covariate set size: when the covariate set size is $k$, there are $k+1$ regressions.
In the case of a fully connected DAG, the first stage of NHTS only runs pairwise regressions with empty conditioning sets. After the first stage, NHTS regresses each unsorted vertex onto all sorted vertices, finding vertices with independent residuals. Hence, the number of regressions run equals $d$ minus the size of the covariate set: when the covariate set size is $k$, there are $d - k$ regressions.
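The counting argument in the proof sketch can be checked numerically. A minimal illustrative sketch (our own; the per-step counts are hard-coded from the proof sketch rather than derived from actual regressions):

```python
def regressions_nhts(d, k):
    # NHTS: after its pairwise first stage, each step regresses every
    # unsorted vertex (d - k of them) onto the k already-sorted vertices.
    return d - k

def regressions_resit_or_nogam(d, k):
    # RESIT / NoGAM: each step regresses every unsorted vertex on the
    # other unsorted vertices, i.e. one regression per unsorted vertex.
    return k + 1

d = 10
fewer = [k for k in range(1, d - 1)
         if regressions_nhts(d, k) < regressions_resit_or_nogam(d, k)]
print(fewer)  # -> [5, 6, 7, 8]: NHTS runs fewer regressions once k exceeds roughly d/2
```

Consistent with Theorem 4.6, the strict inequality $d - k < k + 1$ holds for every covariate set size $k > d/2$.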
**F1 Score**
We thank the Reviewer for the suggestion. However, we respectfully point out that the Precision subcomponent of the F1 score requires a well-defined notion of false positives and false negatives: _it is unclear how to translate these definitions into the setting of topological sorts_. We instead follow a stream of causal discovery papers [1, 2, 3] that validate their methods using topological divergence measures.
**Runtime Results**
While the runtime results of our algorithms are an important contribution, _our work focuses on achieving greater sample efficiency by leveraging local causal substructures to obtain smaller conditioning sets and run fewer high-dimensional nonparametric regressions_, as demonstrated in our new experimental results (see PDF attached to general response). Given the limited space in the manuscript at the time of submission, we chose to use the main text to highlight gains in accuracy demonstrated in preliminary experiments. Overall, our methods were faster than some baselines, while slower than others; we believe that the mentioned statistical anomalies were a result of the chance occurrence of extremely dense DAGs in some trials. We will make these points clear in the additional page afforded to camera-ready submissions.
**Formal Sample Complexity Result**
We thank the Reviewer for pointing out the need for statistical guarantees of sample complexity dependent on sparsity, and agree that this is a crucial next step. Previous works [4] have successfully demonstrated such statistical guarantees of sample complexity under the assumption of a nonlinear ANM with Gaussian noise, utilizing concentration inequalities for Gaussian random matrices. However, extending these results to nonparametric and non-Gaussian models requires the development of an entirely new set of analytical tools for non-Gaussian matrices, which is _beyond the scope of the present paper._ We will address this challenge in future research.
**Citations**
- [1] Peters et. al, Causal Discovery with Continuous Additive Noise Models, (2014).
- [2] Montagna et. al, Causal Discovery with Score Matching on Additive Models with Arbitrary Noise, (2023).
- [3] Rolland et. al, Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models, (2022).
- [4] Zhu et. al, Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling, (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, which addressed most of my concerns. I would like to retain my score.
---
Reply to Comment 1.1.1:
Comment: We thank the Reviewer for their reply. We are happy to further clarify any concerns that the Reviewer believes were not adequately addressed in our initial response. | Summary: In this paper, the authors present a causal discovery method by firstly determining the order of the causal variables, then determining the existence of edges between any two variables. The experimental results demonstrate the superiority of the proposed method compared to relevant methods.
Strengths: I thank the authors for their detailed clarifications, which address most of my concerns. I increase my score to 5.
------------------------------------
Despite the theoretical results being simple, the idea and the method are interesting and somewhat novel.
The theoretical results seem sensible.
Weaknesses: 1. Lack of necessary discussions: I think there are some similar ideas in the literature, such as [1], where they maintain the order of the variables. What is the advantage of this method over [1]? The proposed method should be compared to [1] as well.
[1] L. Solus, Y. Wang, C. Uhler, and L. Matejovicova. Consistency guarantees for permutation based causal inference algorithms. ArXiv preprint arXiv: 1702.03530 (2017)
2. Lemma 4.1 is confusing. In the condition, it is required that $x_i$ is one of the parents of $x_j$. Why is it possible that $x_i$ and $x_j$ are not causally related?
typo:
Line 202: the
Technical Quality: 3
Clarity: 2
Questions for Authors: Could the authors further elaborate on Line 205 - 207? It is not quite clear to me why Alg. 1 cannot be used for the non-linear case.
I am happy to adjust my score according to the authors' rebuttal.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for bringing [1] to our attention. We respond to Question 1 in the general response. We provide clarification on Weakness 1 and 2 below.
**Necessary Discussion of Related Literature**
Methods in [1] tackle a different setting than our paper: they obtain an MEC under the Sparsest Markov Representation assumption, while our methods obtain a unique DAG under the assumption of an ANM. The main contributions of our paper are twofold: 1) our methods _scale to high dimensional datasets by directly building a valid topological sort_, rather than searching across a large permutation space, 2) our methods _improve sample complexity by using smaller, local conditioning sets_ for edge identification. We clarify further below.
Methods in [1] belong to a literature of "permutation-based" causal discovery algorithms that search over the space of permutations to find a DAG that achieves an optimal score. These methods build upon the original Sparsest Permutation (SP) algorithm [2], which follows a two step procedure. In the first step, SP constructs a DAG $G_{\pi} = (V, E)$ for each permutation $\pi$, where $\pi(i)\rightarrow \pi(j) \in E \iff i < j$ and $\pi(i)$ is not independent of $\pi(j)$ conditioned on $\{\pi_k\}$ for all $k < j, k \neq i$. In the second step, SP outputs the set of $G_{\pi}$ that obtain the minimal number of edges, which corresponds to the MEC. SP exploits the fact that all DAGs in the MEC have the same skeleton structure, implying all valid permutations $\pi_v$ produce DAGs $G_{\pi_v}$ that achieve the minimal number of edges; permutations $\pi_i$ not in the MEC are guaranteed to produce $G_{\pi_i}$ with strictly larger edge counts [2]. Unfortunately, SP requires a search over all $d!$ permutations to obtain the MEC; _the key contribution of [1] is to introduce a consistent Greedy Sparsest Permutation algorithm (GSP)_ that uses DFS to efficiently explore the space of permutations, again returning the set of $G_{\pi}$ with minimal edge count (MEC).
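The SP procedure described above can be illustrated with a toy sketch (our own, not the implementation from [1] or [2]): a 3-variable linear Gaussian chain, with conditional independence decided exactly via partial correlations computed from the population covariance.

```python
import numpy as np
from itertools import permutations

# Hypothetical chain x0 -> x1 -> x2 with unit weights: x = B x + e, e ~ N(0, I)
B = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
I3 = np.eye(3)
cov = np.linalg.inv(I3 - B) @ np.linalg.inv(I3 - B).T  # population covariance

def ci(i, j, S, tol=1e-9):
    # Exact CI oracle for the linear Gaussian model:
    # x_i independent of x_j given x_S iff the partial correlation is zero.
    idx = [i, j] + sorted(S)
    theta = np.linalg.inv(cov[np.ix_(idx, idx)])  # precision of the subset
    pcorr = -theta[0, 1] / np.sqrt(theta[0, 0] * theta[1, 1])
    return abs(pcorr) < tol

def sp_edge_count(pi):
    # Edges of G_pi: pi(i) -> pi(j) for i < j, kept iff dependence remains
    # given all vertices before pi(j) in pi, excluding pi(i).
    edges = 0
    for j in range(len(pi)):
        for i in range(j):
            S = {pi[k] for k in range(j) if k != i}
            if not ci(pi[i], pi[j], S):
                edges += 1
    return edges

counts = {pi: sp_edge_count(pi) for pi in permutations(range(3))}
minimal = [pi for pi, c in counts.items() if c == min(counts.values())]
print(min(counts.values()), minimal)
# -> minimal edge count 2, achieved only by permutations inducing DAGs in the MEC
```

As expected, the permutations achieving the minimal edge count (2) are exactly those whose induced DAGs lie in the chain's MEC; all other permutations produce 3 edges.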
While GSP improves on SP, it suffers from 1) the need to use heuristics to bound runtime when run on high-dimensional DAGs, and 2) a loss in sample efficiency as conditioning sets grow large. Authors in [1] clarify that it is often necessary in practice to ``bound the search depth $d$ and number of runs $r$ allowed before the algorithm terminates'', reducing the accuracy of GSP as the permutation search space grows. In contrast, our ordering methods do not utilize a heuristic for high-dimensional discovery: _they directly build the correct topological sort, vertex by vertex, entirely avoiding a search procedure._ Additionally, when computing each $G_{\pi}$, to determine whether the edge $\pi(i)\rightarrow \pi(j)$ exists, GSP runs a CI test that conditions on all vertices prior to $\pi(j)$ in $\pi$: in contrast, our edge pruning algorithm ED is sensitive to the sparsity of the underlying DAG, identifying $\pi(i)\rightarrow \pi(j)$ with _conditioning sets that are bounded above by $|\text{Pa}(\pi(i))| + |\text{Pa}(\pi(j))|$_ (see Step 2 of ED). By leveraging local conditioning sets, ED avoids sample complexity issues suffered by GSP [3].
To accommodate the Reviewer's suggestion, we have inserted the following two paragraphs at the beginning of our Related Work section:
``Our work is related to two kinds of methods that explicitly leverage the topological structure of DAGs in their discovery procedures: 1) permutation-based approaches and 2) functional causal model-based approaches.
Recent permutation-based approaches, such as SP [2], GSP [1], and GRaSP [4], constitute a family of score-based approaches that utilize permutations for efficient discovery of a MEC. SP searches over the space of variable orderings to find permutations that induce DAGs with minimal edge counts. GSP introduces greedy variants of SP that maintain asymptotic consistency; GRaSP relaxes the assumptions of prior methods to obtain improvements in accuracy. These methods highlight the importance of using permutations for efficient causal discovery, but suffer from poor sample efficiency in high dimensional settings [5]."
We have also included various permutation-based algorithms (GSP, GRaSP) as baselines in our new experimental results (see PDF in global response), in order to better evaluate the efficacy of our methods against the discovery literature. _We find that our ordering algorithm NHTS systematically outperforms permutation-based methods._
**Lemma 4.1**
We thank the Reviewer for catching a typo in line 217: it is possible that $x_i,x_j$ are not causally related. The sentence now reads "... where $x_i$ is one of the _potential_ parents of $x_j$ ...".
**Citations**
- [1] Solus et. al, Consistency guarantees for permutation based causal inference algorithms, (2017).
- [2] Raskutti et. al, Learning directed acyclic graphs based on sparsest permutations , (2013).
- [3] Lu et. al, Improving Causal Discovery By Optimal Bayesian Network Learning, (2021).
- [4] Lam et. al, Greedy Relaxations of the Sparsest Permutation Algorithm, (2022).
- [5] Niu et. al, Comprehensive Review and Empirical Evaluation of Causal Discovery Algorithms for Numerical Data, (2024).
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your detailed clarifications, which address most of my concerns. I increase my score to 5.
---
Rebuttal 2:
Comment: We thank the Reviewer for their reply. We are happy to further clarify any concerns that the Reviewer believes were not adequately addressed in our initial response. | Summary: The paper presents theoretical results about extensions of the partial order induced by a causal DAG and uses these results to propose new constraint-based algorithms for ANMs.
**Edit**: increased rating from 3 to 5, soundness from 1 to 2, and contribution from 2 to 3.
**Edit 2**: increase rating from 5 to 6, after the authors fixed $A_{top}$ calculation; solid paper, but I think the impact of a hybrid causal discovery method like this is limited
Strengths: - Takes a simple idea (which seems original but also somewhat obvious from an order theory perspective) and turns it (creatively, originally) into causal discovery algorithms with consistency guarantees, broad applicability (ANMs), and good identifiability (specific DAG instead of MEC)
- Very clearly written, as far as grammar, organization, motivation (but importantly, not mathematical notation)
- Based on the theoretical results, the algorithms have potential to be very significant to the field of causal discovery
Weaknesses: 1. The main (and fatal) weakness is the claims of strong performance in the abstract combined with the inadequate experimental results:
1. the abstract makes a claim of "achieving greater accuracy than current methods", but the limited experiments compare only against a few closely related algorithms (as opposed to a selection of classic or state-of-the-art methods, such as PC or GRaSP), on simulated data rather than real data, in settings the authors have already explained are challenging for existing algorithms (very sparse DAGs, rather than a range of sparsities); even then, the proposed algorithm doesn't seem to do especially well. It also seems the NHTS algorithm is missing from the experiments.
2. A smaller but nonetheless important weakness is notation that contradicts mathematical conventions, making the writing unnecessarily difficult:
1. consulting introductory texts on partial orders and order theory would help clear up some of the confusion. For example, a topological sort is conventionally a linear extension of a partial order, making the introduced terms "linear topological sort" and "hierarchical topological sort" confusing. Replacing the former introduced term with just "topological sort", "linear order", or "total order", and replacing the latter introduced term with something that more clearly indicates it is 'between' a partial order and a linear order (i.e., it extends the partial order, but not completely into linear order), would be more natural/conventional and easier to understand.
2. the authors seem to use $\mapsto$ to indicate the domain and image of the ordering functions, but $\mapsto$ conventionally denotes how a specific element in the domain is mapped to a specific element of the image, hindering precise and easy comprehension.
3. other notation in Definition 2.1, such as inconsistent/unexplained indexing of $\pi$ make the definition harder to understand/not rigorous
4. it's unclear what the difference between $x_j \dashrightarrow x_i$ (called a directed path) and $x_j \dashrightarrow \ldots \dashrightarrow x_k$ (called a front door path) is.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Aren't there just $d \choose 2$ (i.e., number of entries above the diagonal of a corresponding adjacency matrix) possible edges in a DAG for a given linear order, rather than the $d^2$ claimed on line 303?
2. Suggestion: Include more explicit theorem statements and proofs for the complexity results.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have adequately described what assumptions their methods require (and hence to which settings the results are limited).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their detailed comments on how we might improve our notation, and better support our theoretical results. We respond to Weakness 1 in the general response, providing many new experiments that compare our algorithms against many state-of-the-art baselines (CAM, NoGAM, GES, GRaSP, GSP) across a range of sparsities. We find that both our nonlinear topological sorting algorithm NHTS and our edge pruning algorithm ED outperform baselines across all settings, consistently achieving the highest median $A_{top}$ and $F1$. _These strong empirical results provide convincing support for the gains in sample efficiency promised by our theoretical results_. We reply to all other comments below (following the order of the review).
**Unclear Mathematical Notation**
We thank the Reviewer for identifying unclear notation; we are committed to improving the accessibility of our work, as this allows for communication between disparate academic communities. _We have replaced Definition 2.1 in the main text with the following definitions, incorporating the Reviewer's subpoints 1, 2, and 3:_
- "Definition 2.1: Consider a given DAG $G = (V,E)$. A topological sort (linear order) is a mapping $\pi: V \rightarrow \{0,1,\ldots,|V|-1\}$, such that if $x_i \in \text{Pa}(x_j)$, then $x_i$ appears before $x_j$ in the sort: $\pi(x_i) < \pi(x_j)$."
- "Definition 2.2: A layered sort (between a partial and linear order) is a mapping $\pi_L: V \rightarrow \{0,1,\ldots,|V|-1\}$, such that if $\text{Pa}(x_i)=\emptyset$, then $\pi_L(x_i)=0$, and if $\text{Pa}(x_i)\neq \emptyset$, then $\pi_L(x_i)$ equals the length of the longest directed path from a root vertex to $x_i$, i.e., $\pi_L(x_i)= 1 + \max\{\pi_L(x_j): x_j \in \text{Pa}(x_i)\}$."
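For concreteness, the two definitions can be sketched in code. This is our own toy example on a hypothetical 5-vertex DAG, not code from the paper:

```python
# Hypothetical DAG given as parent lists: A -> {B, C}, {B, C} -> D, D -> E
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

def layered_sort(parents):
    # Definition 2.2: roots get layer 0; every other vertex gets
    # 1 + max over its parents' layers, i.e. the length of the
    # longest directed path from a root to it.
    memo = {}
    def layer(v):
        if v not in memo:
            memo[v] = 0 if not parents[v] else 1 + max(layer(p) for p in parents[v])
        return memo[v]
    return {v: layer(v) for v in parents}

def is_topological_sort(pi, parents):
    # Definition 2.1: every parent must appear strictly before its child.
    return all(pi[p] < pi[v] for v in parents for p in parents[v])

layers = layered_sort(parents)
print(layers)  # -> {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 3}

# Flattening the layers (breaking ties arbitrarily) yields a valid linear order:
order = {v: r for r, v in enumerate(sorted(parents, key=layers.get))}
print(is_topological_sort(order, parents))  # -> True
```

The flattening step at the end illustrates why the layered sort sits between a partial and a linear order: any tie-breaking within a layer produces a valid topological sort.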
Following a request from Reviewer LEV3, our updated definition of the layered sort enforces uniqueness. Additionally, we respectfully note that previous work in the literature [1] has used the term "hierarchical topological ordering"; however, if the Reviewer strongly believes that this would be confusing to a general audience, we are ok with changing the name to a "layered sort" to indicate that the sort is between a partial and linear order.
Regarding subpoint 4: a directed path $x_i \dashrightarrow x_j$ is the same as a front door path $x_i \dashrightarrow \ldots \dashrightarrow x_j$. For clarity, line 105 now reads "A frontdoor path is a _directed_ path...''.
**Possible Edges given a Linear Order**
We thank the Reviewer for catching a typo in Line 304. The sentence now reads "Edge discovery checks _$O(d^2)$_ possible edges allowed by $\pi_L$".
**Explicit Theorems for Complexity Results**
We thank the Reviewer for the suggestion: _we have added the following theorems for our complexity results to the main text_ (Sections 4, 5, and 6), with the corresponding formal proofs in the appendix. We provide proof sketches below for clarity.
- _Theorem 1_: Given $n$ samples of $d$ vertices generated by a LiNGAM, the worst case runtime complexity of LHTS is upper bounded by $O(d^3n^2)$.
- Proof sketch: in Stage 1, LHTS runs $O(d^2)$ marginal independence tests that each have $O(n^2)$ complexity. In each step of Stage 2 and Stage 3, LHTS runs $O(d^2)$ marginal independence tests, each with $O(n^2)$ complexity. In the worst case of a fully connected DAG, there are $O(d)$ steps in total across Stage 2 and Stage 3: this is because in each step one layer of the layered sort is identified, and a fully connected DAG has $d$ layers. Therefore, the overall runtime complexity of LHTS is $O(d^3n^2)$.
- _Theorem 2_: Given $n$ samples of $d$ vertices generated by an identifiable nonlinear ANM, the worst case runtime complexity of NHTS is upper bounded by $O(d^3n^3)$.
- Proof Sketch: In Stage 1, NHTS runs $O(d^2)$ marginal independence tests that each have $O(n^3)$ complexity. In Stage 2, NHTS runs $O(d^2)$ nonparametric regressions and $O(d^2)$ marginal independence tests, each of which has $O(n^3)$ complexity. In Stage 3, NHTS runs at most $O(d^2)$ conditional independence tests, each of which has $O(n^3)$ complexity. In the worst case of a fully connected DAG, NHTS goes through $O(d)$ steps in Stage 4: in each step of Stage 4, NHTS runs $O(d)$ nonparametric regressions and $O(d^2)$ marginal independence tests, each of which has $O(n^3)$ complexity. Therefore, the overall runtime complexity of NHTS is $O(d^3n^3)$.
- _Theorem 3_: Given $n$ samples of $d$ vertices generated by a model corresponding to a DAG $G$, the runtime complexity of ED is upper bounded by $O(d^2n^3)$.
- Proof Sketch: ED checks for the existence of every edge permitted by a topological sort $\pi$ by running one conditional independence test that has complexity $O(n^3)$. In the worst case, there are $O(d^2)$ possible edges, so the overall complexity is $O(d^2n^3)$.
**Citations**
- [1] Wu et. al, Hierarchical Topological Ordering with Conditional Independence Test for Limited Time Series, (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks for the very thorough rebuttal! I think the updated notation, added explicit complexity results, and substantial simulation results drastically improve the paper. However, I only increase my rating to 5, because I worry that such significant revisions might be outside the intended scope of this rebuttal process and rather warrant resubmission and another round through the review process (I will inquire more and further increase my rating in case I'm mistaken about this).
I also have an important follow-up question, which I've posted to the general results-focused rebuttal.
---
Reply to Comment 1.1.1:
Title: Scope of Rebuttal Process
Comment: We thank the Reviewer for appreciating our rebuttal. We respond to the Reviewer's follow-up question in the comment section of the general response.
We respectfully note that _most essential features of our paper were **not** revised_, including the problem setup, background assumptions, theoretical results characterizing local causal substructures, and algorithms LHTS, NHTS, and ED. The revisions were mainly restricted to 1) _providing additional experimental results_ that support the superiority of our methods across a wider variety of settings, and 2) _clarification of established contributions_ through updated notation and formalization of existing discussion.
We believe that these changes are _within the scope of a conference rebuttal_, and would be happy to provide additional details if the Reviewer believes that further evaluation is needed. | Summary: The paper mainly focuses on proposing efficient search algorithms for finding the hierarchical sort ordering (linear topological sort) of variables. As mentioned in Section 5, finding such hierarchical orders can significantly improve the efficiency of causal discovery of edges, making the algorithm tractable (traditional algorithms such as PC are exponential). The paper studies two cases: linear (LiNGAMs) and non-parametric, where a complete algorithm based only on path analysis is developed for the linear case, and a combination of path analysis and layer-wise search is developed for the non-parametric case. Both algorithms improve the discovery of hierarchical order.
Strengths: The paper is well structured and clearly written. The theoretical contributions, including the causal path analysis and corresponding algorithms, are interesting and also important in practice as can be told from the analysis of computational complexity. All results are properly formulated as definitions and theorems and proofs are included in the appendix. Experiments are also conducted and their results are discussed in depth in Section 6. In general, I enjoyed reading it.
Weaknesses: - In general, I suggest adding more examples to demonstrate the procedure of algorithms, probably for NHTS (Algorithm 2) so that we can see a clear cut between the two stages (root-identification and layer identification).
- While the authors touched a bit at the beginning of Section 4, non-experts may benefit more if the paper could include additional details about the difference between the linear and non-linear cases (especially how they affect conditional independencies if any).
- For definition 2.1, it will be great to provide a hierarchical topological sort that cannot be trivially converted to a linear topological sort; that is, we cannot simply add more layers to a hierarchical sort to obtain a linear topological sort.
- Lemma 4.1 is a little confusing to me: if $x_i$ is a parent of $x_j$, how are PP1 and PP4 possible? $x_i$ must be a direct cause of $x_j$, right? Also, when you say "$x_i$ and $x_j$" are not causally related, does it mean that there is no directed edge from $x_i$ to $x_j$ or no directed path? Does "active path" mean any unblocked dependency path (backdoor or frontdoor)?
Technical Quality: 4
Clarity: 3
Questions for Authors: - I'm curious if it's all the results can also be explained using independencies (d-separations) instead of regressions? This allows us to think only in terms of graphs. I guess regressions in the non-parametric setting are equivalent to d-separations, how about the linear case? Are there any independencies that hold in the linear case but not in the non-parametric case?
- For algorithm 1 (LHTS), is stage 2 really needed? It seems that stage 2 is a special case of stage 3 when mutual ancestors = $\emptyset$.
- For experiments, the paper mentions a tradeoff between accuracy and encoded causal information. Would it be more fair to restrict the ordering length (say limit it to some length $k$) and compare the ordering accuracy?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: OK
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their insightful questions about how our methods work, and suggestions for how to clarify our explanations and results. We respond to Weakness 2 in the general response and address everything else below (following the order of the review).
**Explanatory Examples**
We thank the Reviewer for their suggestion; in the appendix of our paper, we have added a figure for each algorithm (LHTS, NHTS, ED) with corresponding text that walks through the steps of each algorithm on an exemplary DAG. We hope that this makes it clearer to readers which vertices or edges are being identified in which step, and easier for readers to understand how the identification occurs. _We include the figure corresponding to NHTS in the PDF attached to the general response and include the corresponding text here:_
"In Stage 1, NHTS finds that A, B, C, D, and E are all mutually not in PP1 relations. In Stage 2, NHTS discovers that A is in PP2 relation with B and C, while D is in PP2 relation with E: therefore, {A, E} are the candidate root set. In Stage 3, NHTS confirms that A is indeed the root vertex, adding it to the first layer of the hierarchical sort. In the first iteration of Stage 4, NHTS recovers an independent residual when regressing B or C on A; B, C are added to the next layer in the hierarchical sort. Similarly, NHTS recovers an independent residual when regressing D and E in the second and third iteration of Stage 4, respectively: therefore, NHTS recovers the hierarchical topological sort [[A],[B, C],[D],[E]] by the end of Stage 4."
**Definition 2.1**
Theoretically, any hierarchical topological sort can be converted into a linear topological sort by flattening the hierarchical structure: vertices in the same layer can be placed in an arbitrary order relative to each other, while vertices in lower layers are placed before vertices in upper layers.
_By "trivially converted," we take the Reviewer to be asking whether, given a DAG, the hierarchical topological ordering is unique._ Indeed, we meant that the hierarchical topological sort in our paper is unique. We have corrected Definition 2.1 to reflect this change:
"Definition 2.1: A hierarchical topological sort (between a partial and linear order) is a mapping $\pi_L: V \rightarrow \{0,1,\ldots,|V|-1\}$, such that if $\text{Pa}(x_i)=\emptyset$, then $\pi_L(x_i)=0$, and if $\text{Pa}(x_i)\neq \emptyset$, then $\pi_L(x_i)$ equals the length of the longest directed path from a root vertex to $x_i$, i.e., $\pi_L(x_i)= 1 + \max\{\pi_L(x_j): x_j \in \text{Pa}(x_i)\}$.''
We thank the Reviewer for catching this, and _we are happy to clarify further if this interpretation is incorrect._
**Lemma 4.1**
We thank the Reviewer for catching the typo in line 217: the sentence now reads "...where $x_i$ is one of the _potential_ parents of $x_j$...". Therefore, $x_i$ is not necessarily a parent of $x_j$: PP1 and PP4 are possible.
We thank the Reviewer for pointing out the unclear notation: $x_i,x_j$ are not causally related if and only if no active path exists between $x_i,x_j$, which includes both directed edges and directed paths. For clarity, the sentence in line 218 now reads: "PP1) _no active path exists between_ $\mathbf{x_i,x_j}$...''.
The Reviewer is correct: we use the term "active path" to refer to any unblocked dependency path, which can either be a backdoor path or a frontdoor path.
**Relationship between Results, D-Separations and Regressions**
We agree that _some of our results may be explained entirely by d-separations, but not all_. We have further clarified the following points in Section 4 of our paper.
- Our edge pruning method ED is entirely constraint-based, and therefore is explainable via d-separations in both the linear and nonlinear case. We refer the Reviewer to our proof of correctness for ED in Appendix C.2 for more detail.
- Our topological ordering algorithms (LHTS, NHTS) are not fully explainable by d-separations: d-separations only characterize a MEC of DAGs, and different topological sorts may exist within the same MEC. To identify a unique topological sort, we exploit the additive noise assumption: the causal parents of a variable $x_i$ are independent of the noise $\varepsilon_i$ corresponding to $x_i$. In practice, we use this property by running regressions and checking if the residual is independent of the covariates.
- Further, we note that the conditions under which the residual is independent of the covariates are different in the linear and nonlinear case; we refer the Reviewer to the general response for more detail.
**Stage 2 of LHTS**
We agree with the Reviewer.
We distinguish between Stages 2 and 3 in Algorithm 1 for explanatory purposes, emphasizing that all AP3 relations are discovered in Stage 2, and all AP2 and AP4 relations are discovered in Stage 3. These relations constitute distinct local causal substructures, motivating a conceptual separation between Stages 2 and 3.
We have further clarified this point in Line 181 of our revised paper.
**Tradeoff between Accuracy and Encoded Information**
Restricting the ordering length does not impact ordering accuracy.
_The loss of accuracy of our method comes from classifying vertices into layers according to a cutoff_, instead of pinpointing the 'best' candidate to add to a linear topological sort, which involves selecting the vertex that achieves the best test statistic among the remaining vertices. Both kinds of algorithms are asymptotically consistent. While cutoff-based methods may be slightly more error-prone in low-sample settings, they inherently capture more information about the underlying DAG, reducing the complexity of the edge-pruning stage and making the overall process more efficient when the graph is sparse.
---
Rebuttal Comment 1.1:
Comment: Thanks for your clarifications. The new Definition 2.1 looks good to me. I'll keep my score.
---
Reply to Comment 1.1.1:
Comment: We thank the Reviewer for their consideration. | Rebuttal 1:
Rebuttal: We thank the Reviewers for their insightful comments and questions, as they have helped improve the clarity of our paper. We have addressed all raised concerns in this rebuttal, and incorporated the feedback into our manuscript.
We thank the Reviewers for unanimously acknowledging the novelty and potential utility of our proposed local search approach. In particular, Reviewer rTf8 appreciated how we leveraged a simple idea to develop "causal discovery algorithms with consistency guarantees, broad applicability (ANMs) and good identifiability (specific DAG instead of MEC)." Reviewer fR7W appreciated how "the finer analysis and exploitation of local structure" shows potential to efficiently tackle "the high computational complexity and hardness in nonparametric regression and CI tests." In terms of writing, Reviewers LEV3 and rTf8 both found the paper generally "clearly written", and Reviewer fR7W said it was "easy to follow for researchers working in relevant area."
In this general response, we will provide the following clarifications to all Reviewers:
- Contextualize our paper's **importance to the field of causal discovery** (Reviewer LHcu Weakness 1, Reviewer fR7W Question 1).
- Provide **additional experimental results** (Reviewer rTf8 Weakness 1, Reviewer fR7W Weakness 2).
- Specify the **difference between linear and nonlinear case** (Reviewer LHcu Question 1, Reviewer LEV3 Weakness 2, Reviewer LEV3 Question 1).
**Importance to the Field of Causal Discovery**
Traditional causal discovery algorithms (PC, GES, GSP, etc.) utilize conditional independence relations to recover a Markov Equivalence Class (MEC) of causal models. To enable the identification of a unique DAG, our methods operate under the assumption of an additive noise model (ANM), leveraging independent residuals from regressions to find a topological ordering, and CI tests to prune edges. Our main contributions are twofold: 1) our nonlinear topological ordering algorithm NHTS _exploits local search to run fewer high-dimensional regressions_ than traditional ANM methods, achieving lower sample complexity than baselines, 2) our constraint-based algorithm ED uses the discovered topological ordering to _nonparametrically prune edges with smaller conditioning sets_ than traditional sparse regression methods, outperforming baselines by improving sample efficiency.
**Additional Experimental Results**
In response to Reviewers fR7W and rTf8, we provide new experimental results (see the attached PDF) that test our nonlinear topological sorting algorithm NHTS and our edge pruning algorithm ED against more benchmarks and in many different settings. _We find that both NHTS and ED systematically outperform baseline methods, providing strong evidence for the superiority of our methods_.
We are happy to provide additional experimental results upon request. We provide two representative examples of runtime results in the attached PDF. We test our methods in 24 different settings: two choices of dimensionality ($d=10$, $d=20$), three types of noise distribution (uniform, Gaussian, Laplacian), and four types of Erdos-Renyi [1] graphs with different sparsities, where the expected number of edges equals $d$ (ER1), $2d$ (ER2), $3d$ (ER3), or $4d$ (ER4). In each experiment, methods are evaluated on 20 DAGs generated by nonlinear causal mechanisms, with $n=300$. We include NoGAM for comparison as a state-of-the-art ANM-based method [2]. As suggested by Reviewers rTf8, LHcu, and fR7W, we include CAM, GES, GRaSP, and GSP as additional baselines. GES, GRaSP and GSP return only a MEC; to enable a fair comparison, we randomly select one topological ordering permitted by the outputted MEC for evaluation. CAM is excluded from trials with $d=20$ due to extremely long runtimes, taking over 23 minutes to complete a single trial.
We exclude PC and RESIT since in general they perform much worse than baseline methods [2].
**Difference between Linear and Nonlinear Case**
The reason why LHTS (Algorithm 1) cannot be naively applied is that regressions yield independent residuals under different conditions in the nonlinear case. _LHTS leverages the assumption of linear causal mechanisms when running pairwise regressions in Stage 2, which is insufficient in the nonlinear setting_. For clarity, we demonstrate how LHTS fails to correctly recover causal relationships in an exemplary 3-node DAG with nonlinear causal mechanisms, whereas NHTS (Algorithm 2) succeeds.
Consider DAG $G$ with three vertices $x_1,x_2,x_3$, where $x_1 \rightarrow x_3, x_2 \rightarrow x_3$. The functional causal relationships are nonlinear, given by $x_1 = \varepsilon_1, x_2 = \varepsilon_2, x_3 = x_1x_2 +\varepsilon_3$, where the $\varepsilon$'s are mutually independent noise variables. We focus on whether LHTS or NHTS can recover the parent-child relationship between $x_1$ and $x_3$. Both LHTS and NHTS find that the relationship between $x_1,x_3$ is unknown in Stage 1.
In Stage 2, LHTS runs pairwise regressions between $x_1,x_3$ but _incorrectly concludes that $x_1,x_3$ are not in AP3 relation_ because neither pairwise regression provides an independent residual; both parents of $x_3$ must be included in the covariate set for an independent residual to be recovered. In contrast, NHTS correctly constructs the conditioning set $P_{13} = \{x_2\}$ for the pairwise regression of $x_3$ on $x_1$; NHTS recovers an independent residual, _identifying that $x_1,x_3$ are in PP2 relation_, i.e., $x_3$ is a child of $x_1$.
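The failure mode above can be checked numerically. The sketch below is our own illustration (not the paper's implementation): it uses least squares with an added product feature as a stand-in for the nonparametric regression in NHTS, and a crude squared-correlation proxy in place of a formal independence test, to simulate the 3-node DAG and compare the two residuals:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = x1 * x2 + 0.5 * rng.normal(size=n)   # nonlinear mechanism plus eps_3

def resid(y, cols):
    # least-squares residual of y on an intercept plus the given columns
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def dep_score(r, z):
    # crude dependence proxy: the plain correlation of an OLS residual with
    # a covariate is ~0 by construction, so we correlate the *squares*
    return abs(np.corrcoef(r**2, z**2)[0, 1])

r_pair = resid(x3, [x1])                 # pairwise regression, as in LHTS
r_cond = resid(x3, [x1, x2, x1 * x2])    # conditioning set {x2} included

print(dep_score(r_pair, x1))  # clearly nonzero: residual still depends on x1
print(dep_score(r_cond, x1))  # near zero: residual ~ eps_3, independent of x1
```

In the pairwise case the residual is approximately $x_1 x_2 + \varepsilon_3$, which remains dependent on $x_1$; once $x_2$ enters the covariate set, the residual reduces to (approximately) $\varepsilon_3$ and the dependence vanishes.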
We have incorporated the above explanation into the main text in Section 4. Additionally, as suggested by Reviewer LEV3, for reader clarity we have added figures and text to our paper's appendix that walk through the steps of LHTS, NHTS and ED (see attached PDF for the NHTS figure).
**Citations**
- [1] Erdos et. al, On the evolution of random graphs, (1960).
- [2] Montagna et. al, Causal Discovery with Score Matching on Additive Models with Arbitrary Noise, (2023).
Pdf: /pdf/d8b717ff37c6d7c94f6ae59036c421dfef12f87c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Take A Shortcut Back: Mitigating the Gradient Vanishing for Training Spiking Neural Networks | Accept (poster) | Summary: The paper trains SNNs using surrogate gradient learning. In order to mitigate the gradient vanishing problem, the paper proposed the Shortcut Back-propagation method and utilizes an evolutionary algorithm framework to balance the training of shallow and deep layers. The effectiveness of the proposed method is demonstrated through many experiments.
Strengths: 1) The shortcut backpropagation method and the evolutionary training method are novel.
2) This paper can well handle the gradient vanishing problem.
3) The paper is well-written.
4) The paper shows the effectiveness of the proposed methods through many experiments.
Weaknesses: 1) The author should add more mathematical proof to demonstrate that the mentioned residual structure in SNN is not very effective? The introduction of shortcut branches might add complexity to the network architecture, which could affect the interpretability of the model.
2) Some recent SOTA works should be compared with too. The authors can also compare with paper [1][2] which obtains really good results by MS-ResNet-18 backbone with 1 or 6 timesteps on large imageNet datasets.
[1]Yao M, Zhao G, Zhang H, et al. Attention spiking neural networks[J]. IEEE transactions on pattern analysis and machine intelligence, 2023.
[2] Qiu X, Zhu R J, Chou Y, et al. Gated attention coding for training high-performance and efficient spiking neural networks[C]. Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(1): 601-610.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Why are the bolded values not always the best values?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I find no limitation about the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our novel method, notable results, and good writing. The response to your questions is given piece by piece as follows.
**W1**: The author should add more mathematical proof to demonstrate that the mentioned residual structure in SNN is not very effective? The introduction of shortcut branches might add complexity to the network architecture, which could affect the interpretability of the model.
**A1**: Thanks for this advice. **The residual structure cannot completely solve the gradient vanishing problem of SNNs.** A standard skip connection can be expressed as $o=g(f(x)+x)$, where $f(x)$ denotes the convolutional layers and $g(\cdot)$ is the activation function; the standard ResNet is composed of multiple such blocks cascaded together. In ANNs, ReLU is used for $g(\cdot)$; since ReLU is unbounded on the positive side, the gradient can be passed directly to the input of the block. However, with LIF neurons in SNNs, the gradient is attenuated by the surrogate gradient. Thus, the skip connections of standard ANNs still suffer from the gradient vanishing problem in SNNs, as Figure 2 in the paper visually illustrates. In contrast, our shortcut back-propagation method transfers the gradient from the output to the input of the block directly in SNNs.
Furthermore, the proposed methods are only active during the training phase and do not affect the inference phase of the SNN.
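To make the attenuation argument concrete, here is a toy numerical sketch (our own illustration, not from the paper): with a triangular surrogate derivative, whose expected value per block is below 1, the factor multiplying the upstream gradient on even the identity branch of $o=g(f(x)+x)$ shrinks geometrically with depth:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_grad(u, thresh=1.0, gamma=1.0):
    # triangular surrogate derivative, a common stand-in for the
    # (zero-almost-everywhere) derivative of the spike function
    return np.maximum(0.0, 1.0 - np.abs(u - thresh) / gamma) / gamma

L = 20                                        # number of residual blocks
u = rng.normal(0.5, 1.0, size=(L, 100_000))   # membrane potentials per block

# In o = g(f(x) + x), every block still multiplies the upstream gradient by
# g'(u), so even the identity branch is attenuated. With a mean surrogate
# derivative c < 1 per block, the factor decays roughly like c**L.
per_block = surrogate_grad(u).mean(axis=1)
skip_path_factor = np.prod(per_block)

print(per_block.max())    # every block's mean factor is below 1
print(skip_path_factor)   # vanishingly small after 20 blocks
```

With ReLU the corresponding per-block factor is exactly 1 on active paths, which is why the same skip connection does not vanish in ANNs.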
---
W2: Some recent SOTA works should be compared with too. The authors can also compare with paper [1][2] which obtains really good results by MS-ResNet-18 backbone with 1 or 6 timesteps on large imageNet datasets.
**A2**: Thanks for this advice. We will add comparisons with these recent methods in the revised version, as you suggested. Thanks.
---
Q1: Why are the bolded values not always the best values?
**A1**: Sorry for the confusion. With the same model and the same timesteps, our method achieves the best results. We will further clarify this in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I think this is a nice bit of discussion and could be added to the manuscript.
In light of the additional discussion, I'd like to raise my score to a 6. This is an interesting piece of work and would be a nice addition to NeurIPS.
---
Reply to Comment 1.1.1:
Title: thanks
Comment: Thanks very much for your reply and recognition. We are happy to see that your concerns have been addressed. | Summary: This paper proposes a simple method to mitigate the gradient vanishing problem in the training of SNNs. This method introduces some early classification heads (including a pooling layer and a fully connected layer) to the SNN. Because the gradients from the early classification heads pass through fewer surrogate gradients, this method helps the SNN address the gradient vanishing problem. The authors also suggest an evolutionary training framework that changes the loss function to gradually adjust how important the early classification head outputs are during the training phase. The proposed methods are only active during the training phase and do not affect the inference phase of the SNN.
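The training objective described in the summary above can be sketched as follows. This is a hypothetical illustration: the linear decay schedule is our own assumption, standing in for the paper's evolutionary adjustment of the head weights:

```python
import numpy as np

def shortcut_bp_loss(final_loss, head_losses, epoch, total_epochs):
    """Hypothetical combination of early-head losses with the final loss.

    Early in training the auxiliary heads drive learning in shallow layers;
    by the end only the final SNN output matters, so the heads (pooling +
    fully connected layers) can be dropped at inference. The linear decay
    below is an assumed schedule, not the paper's exact evolutionary rule.
    """
    alpha = 1.0 - epoch / total_epochs          # assumed decay schedule
    return final_loss + alpha * float(np.mean(head_losses))

# heads matter early in training, then fade out
early = shortcut_bp_loss(1.0, [2.0, 3.0], epoch=0, total_epochs=100)
late = shortcut_bp_loss(1.0, [2.0, 3.0], epoch=99, total_epochs=100)
print(early, late)  # 3.5 1.025
```

Because the heads only appear in the loss, removing them after training leaves the inference graph untouched.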
Strengths: This proposed method partially alleviates the gradient vanishing problem in the training of SNN with surrogate gradients. Furthermore, the method has demonstrated excellent performance across multiple datasets. The Short-BP method can be easily integrated into the SNN training process without introducing excessive computational overhead. Furthermore, the evolutionary training framework effectively mitigates the short-BP problem, which may make the network pay more attention to early classification heads than the final SNN output. The writing in this paper is clear and concise.
Weaknesses: 1. In this paper, the author only demonstrates a change in gradient distribution in the first layer. Presenting the changes in the mean and variance of the absolute gradients for each layer would provide a more direct proof of their argument.
2. The author should provide a more detailed mathematical proof to explain why the use of surrogate gradients in deep SNN would lead to gradient vanishing, as well as why direct use of residual learning will not address the problem.
3. The author has not demonstrated their method on much deeper network architectures where the gradient vanishing problem is more severe.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How is the network divided into multiple blocks? Are there any additional rules for the insertion position and number of early classification heads?
2. The results of using short-BP to train ResNet 18 in Table 1 and Table 2 are quite different. There may be a transcription error here.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our simple method, notable results, and good writing. The response to your questions is given piece by piece as follows.
**W1**: In this paper, the author only demonstrates a change in gradient distribution in the first layer. Presenting the changes in the mean and variance of the absolute gradients for each layer would provide a more direct proof of their argument.
**A1**: Thanks for this advice. Here we add the mean and variance of the absolute gradients for the first 10 layers in vanilla ResNet18 and our ResNet18, respectively, on CIFAR-100. It can be seen that our ResNet18 handles the gradient vanishing problem well.
| Layer | Vanilla (mean/variance) | Ours (mean/variance) |
| --- | --- | --- |
| 10 | 0.2094/0.0361 | 0.3846/0.0739 |
| 9 | 0.1460/0.0158 | 0.2489/0.0432 |
| 8 | 0.1136/0.0083 | 0.2001/0.0243 |
| 7 | 0.0957/0.0056 | 0.1699/0.0173 |
| 6 | 0.0753/0.0034 | 0.1344/0.0106 |
| 5 | 0.0638/0.0023 | 0.1101/0.0070 |
| 4 | 0.0513/0.0014 | 0.0889/0.0045 |
| 3 | 0.0349/0.0007 | 0.0611/0.0021 |
| 2 | 0.0197/0.0002 | 0.0341/0.0006 |
| 1 | 0.0178/0.0001 | 0.0310/0.0005 |
---
W2: The author should provide a more detailed mathematical proof to explain why the use of surrogate gradients in deep SNN would lead to gradient vanishing, as well as why direct use of residual learning will not address the problem.
**A2**: Sorry for the confusion. **The residual structure cannot completely solve the gradient vanishing problem of SNNs.** A standard skip connection can be expressed as $o=g(f(x)+x)$, where $f(x)$ denotes the convolutional layers and $g(\cdot)$ is the activation function; the standard ResNet is composed of multiple such blocks cascaded together. In ANNs, ReLU is used for $g(\cdot)$; since ReLU is unbounded on the positive side, the gradient can be passed directly to the input of the block. However, with LIF neurons in SNNs, the gradient is attenuated by the surrogate gradient. Thus, the skip connections of standard ANNs still suffer from the gradient vanishing problem in SNNs, as Figure 2 in the paper visually illustrates. In contrast, our shortcut back-propagation method transfers the gradient from the output to the input of the block directly in SNNs.
---
W3: The author has not demonstrated their method on much deeper network architectures where the gradient vanishing problem is more severe.
**A3**: Thanks for this advice. We have added more experiments for deeper network architectures. It can be seen that our method also works well with these architectures.
| Model | Accuracy |
| --- | --- |
| ResNet18 | 72.22% |
| Our ResNet18 | 74.78% |
| ResNet34 | 69.98% |
| Our ResNet34 | 75.67% |
| ResNet50 | 65.22% |
| Our ResNet50 | 76.81% |
| ResNet101 | 53.71% |
| Our ResNet101 | 76.88% |
| ResNet152 | 42.53% |
| Our ResNet152 | 75.49% |
**Q1**: How is the network divided into multiple blocks? Are there any additional rules for the insertion position and number of early classification heads?
**A1**: Thanks for the question. These modern architectures are usually partitioned into stages, each corresponding to a downsampling of the input feature map and an increase in the number of channels. We add the branches at the exits of these stages.
---
**Q2**: The results of using short-BP to train ResNet 18 in Table 1 and Table 2 are quite different. There may be a transcription error here.
**A2**: Thanks for the advice. We confused the results of Evolutionary Training with the results of Shortcut Back-propagation. We will correct this in the revised version. Thank you again.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. All of my concerns have been addressed, and I'd like to increase my score.
---
Reply to Comment 1.1.1:
Title: thanks
Comment: Thanks very much for your reply and recognition. We are happy to see that your concerns have been addressed. | Summary: This paper proposes shortcut connections between layers to mitigate the gradient vanishing problem in SNNs. Additionally, the authors present a way to phase out the shortcut connections over training so that inference can be done without these additional connections. The experiments show that this method improves training performance in several image classification tasks.
Strengths: 1.The idea is small, but interesting and effective enough.
2.The performance improvement over the existing SNN methods is noticeable.
3.The paper is well-written.
Weaknesses: 1.The proposed method will increase the training time.
2.In the experimental section, some newer methods should be compared with this method.
3.Figure 2 lacks horizontal and vertical coordinates, and the readability and comprehensibility of the picture need to be improved.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1.Does the proposed method lead to an increase in the calculation of gradient backpropagation? How much is the increased training time.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our interesting ideas, notable results, and good writing. The response to your questions is given piece by piece as follows.
**W1**: The proposed method will increase the training time.
**A1**: Thanks for this question. The additional training time is trivial. Here we compare the training time of the vanilla SNN and our Shortcut Back-propagation SNN for 1 epoch with batch size 128 on CIFAR10, using ResNet20 on a single V100. Only about 10% additional training cost is incurred.
| Method | Timestep | Time |
| --- | --- | --- |
| Vanilla SNN | 2 | 22.17s |
| Shortcut Back-propagation | 2 | 24.49s(10.46%) |
| Vanilla SNN | 4 | 43.68s |
| Shortcut Back-propagation | 4 | 47.67s(9.13%) |
---
**W2**: In the experimental section, some newer methods should be compared with this method.
**A2**: Thanks for the advice. We will add more recent methods like these in the below papers in the revised version.
[1]Yao M, Zhao G, Zhang H, et al. Attention spiking neural networks[J]. IEEE transactions on pattern analysis and machine intelligence, 2023.
[2] Qiu X, Zhu R J, Chou Y, et al. Gated attention coding for training high-performance and efficient spiking neural networks[C]. Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(1): 601-610.
---
**W3**: Figure 2 lacks horizontal and vertical coordinates, and the readability and comprehensibility of the picture need to be improved.
**A3**: Thanks for the advice. We have improved the picture as you suggested. Please see the general response.
---
**Q1**: Does the proposed method lead to an increase in the calculation of gradient backpropagation? How much is the increased training time?
**A1**: Thanks for this question. The additional training time is trivial. Here we compare the training time of the vanilla SNN and our Shortcut Back-propagation SNN for 1 epoch with batch size 128 on CIFAR10, using ResNet20 on a single V100. Only about 10% additional training cost is incurred.
| Method | Timestep | Time |
| --- | --- | --- |
| Vanilla SNN | 2 | 22.17s |
| Shortcut Back-propagation | 2 | 24.49s(10.46%) |
| Vanilla SNN | 4 | 43.68s |
| Shortcut Back-propagation | 4 | 47.67s(9.13%) | | null | null | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our interesting ideas, notable results, and good writing. Here we provide the revised picture in the PDF.
Pdf: /pdf/1db36038c0ae23510f9a5e9ec08a7cc70ee30036.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reasons and Solutions for the Decline in Model Performance after Editing | Accept (poster) | Summary: This paper addresses the challenges associated with the decline in performance of LLMs after undergoing knowledge editing. The study identifies the primary factors contributing to performance degradation from both data and model perspectives. By constructing a Multi-Question Dataset (MQD) and analyzing the impact of editing objectives, token length, and diversity, the paper finds that perplexity associated with editing objectives significantly affects model performance. From the model perspective, a strong correlation was observed between the L1 norm of parameter layers and editing accuracy. The paper proposes a novel method called Dump for sequence (D4C), which effectively manages the parameter growth and improves model performance post-editing.
Strengths: - Innovative Methodological Approach: The study introduces a new method, D4C, which addresses the explosive growth in parameter norms and optimizes model performance post-editing. This approach is both innovative and practical for managing edited models.
- Comprehensive Data Analysis: The construction of the Multi-Question Dataset and detailed analysis of how different types of data affect model performance provide valuable insights into the mechanics of model editing.
- Clear Identification of Problems and Solutions: The paper clearly identifies specific problems associated with knowledge editing in LLMs, such as catastrophic forgetting and performance bottlenecks, and provides targeted solutions to these issues.
- Empirical Validation: The experiments conducted in this paper offer empirical evidence supporting the proposed methods, enhancing the credibility and applicability of the findings.
Weaknesses: - Generalizability of Findings: The study focuses on specific scenarios and datasets, which may limit the generalizability of the findings across different types of LLMs or editing tasks.
- Potential Overfitting to Edited Scenarios: There is a risk that the model may become overly optimized for the edited scenarios, potentially affecting its performance on unedited or unrelated tasks.
- Complexity of Implementation: The proposed D4C method, while effective, may be complex to implement and integrate into existing systems due to its sophisticated handling of parameter layers.
- Unsuitable Citation Format: The citations in this paper are in the format “XXX et al. [YEAR]”, which is not suitable; it would be better to change to the format [1], [2], [3], ……
Technical Quality: 3
Clarity: 2
Questions for Authors: - Adaptability of D4C Method: How adaptable is the D4C method to different types of LLMs and knowledge editing tasks beyond those tested in your experiments?
- Impact on Unedited Model Performance: How does the D4C method affect the performance of the model on tasks that have not been edited? Is there any evidence of performance trade-offs?
- Handling of Diverse Editing Objectives: Could you elaborate on how the D4C method manages the complexity and diversity of editing objectives without compromising the model’s overall integrity and coherence?
**Missing References**
- Editing Large Language Models: Problems, Methods, and Opportunities (EMNLP 2023)
- Knowledge Editing for Large Language Models: A Survey (2023)
- A Survey on Knowledge Editing of Neural Networks (2023)
- A Comprehensive Study of Knowledge Editing for Large Language Models (2024)
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - Dependency on Specific Data Characteristics: The effectiveness of the proposed solutions may depend heavily on the characteristics of the data used for training and testing, which might not be consistent across different domains or applications.
- Evaluation Metrics: While the paper introduces new evaluation methods, the reliance on perplexity (PPL) and L1 norm metrics might not completely capture all aspects of model performance and health, especially in nuanced or context-dependent scenarios.
- Limited Experimentation: The experiments (Section 5) in this paper are too limited to demonstrate the conclusion.
- Scope of Editing Objectives: The study might not fully capture the impact of highly diverse or complex editing objectives that could be encountered in real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedbacks on the paper! We have added detailed explanations for the important questions asked in the review.
$\textbf{Q1: }$ How adaptable is the D4C method to different types of LLMs and knowledge editing tasks beyond those tested in your experiments?
$\textbf{A1:}$ We conducted additional experiments and expanded the datasets to include the Mquake [1] and Counterfact [2] datasets. The experimental results on GPT-J are as follows:
|GPT-J-Mquake |Eff.|Par.|Mul.|Avg.|
|---- |---- |---- |---- |---- |
|FT|`17.00`|`6.22`|`0.00`|`7.74`|
|ROME|0.00|0.00|`0.00`|0.00|
|MEMIT|0.00|0.00|`0.00`|0.00|
|D4C|**97.40**|**75.54**|**21.84**|**64.93**|
|GPT-J-Counterfact |Eff.|Par.|Spe.|Avg.|
|---- |---- |---- |---- |---- |
|FT|32.80|9.00|1.00|14.27|
|ROME|0.60|0.70|0.60|0.63|
|MEMIT|86.20|**59.50**|31.30|59.00|
|GRACE|**100.00**|0.50|**100.00**|`66.83`|
|D4C|`99.10`|`47.00`|`63.70`|**69.93**|
At the same time, we extended the model to Llama2, and the results are shown below:
|Llama2-Mquake |Eff.|Par.|Mul.|Avg.|
|---- |---- |---- |---- |---- |
|FT|41.43|44.93|**28.24**|38.20|
|ROME|`76.85`|**78.41**|3.67|`52.97`|
|MEMIT|0.00|0.00|0.00|0.00|
|D4C|**85.30**|`72.68`|`28.16`|**62.05**|
|Llama2-Counterfact |Eff.|Par.|Spe.|Avg.|
|---- |---- |---- |---- |---- |
|FT|8.46|4.07|2.03|4.85|
|ROME|27.83|`16.03`|5.66|16.50|
|MEMIT|0.00|0.00|6.72|2.24|
|GRACE|**99.9**|0.25|**99.97**|`66.71`|
|D4C|`96.68`|**46.66**|`72.45`|**71.93**|
PS: **Bold** indicates the best results, while suboptimal is `highlighted`.
The results show that our method has superior performance on different types of datasets and models. Furthermore, we have expanded our experiments to 10,000 edits, and our performance far exceeds that of other methods. **You can find the details in the supplementary PDF of the Global Response.**
$\textbf{Q2: }$ How does the D4C method affect the performance of the model on tasks that have not been edited? Is there any evidence of performance trade-offs?
$\textbf{A2: }$ As highlighted in Section 4.2, we observed a strong correlation between the rise in the L1-norm of the parameter layer and the performance of the editing task. Simultaneously, we noted a significant decrease in the performance of the edited model on unedited tasks. We posit that the increase in the L1-norm results in decreased performance on tasks that have not undergone editing. Our approach effectively mitigates the norm growth of the edited model by consolidating all edited samples from the sequence (see Figs. 6.a and 6.b). A succinct theoretical proof is presented in Reviewer MFeF's 'Q6'. Figure 6.c demonstrates that the D4C method ensures that the edited model maintains superior performance on unedited tasks.
Furthermore, we extended the number of edits to 10,000, and the edited model still maintained good performance.
|Num of Edits|arc_challenge|hellaswag|mmlu|truthfulqa_mc|winogrande |
|---- |---- |---- |---- |---- |---- |
|0|43.34|57.14|41.35|38.97|69.14|
|10,000|42.41 ($\downarrow 0.93$)|52.99 ($\downarrow 4.15$)|39.76 ($\downarrow 1.59$)|38.56 ($\downarrow 0.41$)|68.75 ($\downarrow 0.39$)|
Existing knowledge editing methods either have poor editing performance (see Table 2) or have a storage complexity of $O(n)$ (e.g., GRACE [3]), and cannot provide experimental results for 10,000 edits.
$\textbf{Q3: }$ Could you elaborate on how the D4C method manages the complexity and diversity of editing objectives without compromising the model’s overall integrity and coherence?
$\textbf{A3: }$ The complexity and diversity of editing objectives can affect the performance of the edited model; this is the conclusion we have drawn from a data perspective. Our further experiments indicate a correlation between editing objectives and parameter layer norm growth. For example, if the editing objective for a true/false question is "yes/no", the norm growth caused by this dataset is slow, while if the editing objective is a directly generated entity or event, the norm growth is fast. This also confirms our conclusion from a model perspective that the performance degradation of the edited model is related to norm growth. We will present the specific experimental results in the revised version.
Our D4C method can effectively reduce the norm growth of the edited model (see Fig. 6.a and 6.b), even when the number of edits reaches 10,000 (see W2 above). Therefore, D4C can manage the complexity and diversity of editing objectives without compromising the model's overall integrity and coherence.
$\textbf{Q4: }$ The proposed D4C method, while effective, may be complex to implement and integrate into existing systems due to its sophisticated handling of parameter layers.
$\textbf{W4: }$ Thank you for recognizing the effectiveness of our method. In fact, our method is not complex to implement: the core code is only 200 to 300 lines. We included our code in the Supplementary Material when submitting the paper. You can download the file and follow the instructions in the README.md to set up the code.
$\textbf{Ref}$
[1] Mquake: Assessing knowledge editing in language models via multi-hop questions. EMNLP 2023.
[2] Mass-editing memory in a transformer. ICLR 2023.
[3] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors. Neurips 2023.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you very much for the clarification! Most of them have addressed my concerns.
I appreciate the response, especially the newly conducted experiments. Including those in the revised draft will strengthen the paper.
Overall, I acknowledge the interesting idea of this work, and have decided to increase my original score from 5 to 6.
Best Regards,
Reviewer Kq3N
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. We promise to add additional experiments in the revised version. | Summary: Recent research has shown varying degrees of decline in model performance following small changes made by certain model editing methods. This paper is the first to comprehensively analyze the reasons behind such performance declines. Through extensive experiments, it identifies two main factors: data and model. For data-specific factors, the paper finds that perplexity and token length significantly influence performance. For model-specific factors, the L1 norm of the edited layer is identified as a key influence. Building upon these insights, the paper proposes a method named Dump for sequence (D4C), which significantly improves model performance.
Strengths: - The paper is well-motivated: Exploring the reasons behind and impact of small changes made by model editing techniques on the performance of unedited samples is of great significance.
- The analysis of the data-specific and model-specific factors is supported with diverse datasets and comprehensive experiments. The model-specific analysis, in particular, is evaluated rigorously, addressing the forgetting issue that prior works often overlooked.
- The observation of the influence of editing on the model norm is intriguing. High-norm parameters can be sensitive to noise and numerically unstable. It would be beneficial if the authors could also provide an L2-norm plot for comparison.
- The experimental results are impressive, demonstrating significant improvements and validating the effectiveness of the proposed method.
Weaknesses: - My main concern with the data-specific analysis is whether the conclusion is about correlation or causation. Many variables can be changed about the input data. Plotting a single Figure 3 might be insufficient to justify that perplexity and token length are the main reasons for the decline in model performance after editing.
- Unfortunately, the constructed dataset is not open-sourced.
- Recent research [1] has shown that model editing methods (e.g. ROME, MEMIT) are not good at handling multi-hop questions, how would D4C perform in such more challenging scenarios?
- Some theoretical analysis can be conducted to demonstrate that D4C does not lead to an increase in norms.
[1] Mquake: Assessing knowledge editing in language models via multi-hop questions. EMNLP 2023
Technical Quality: 3
Clarity: 4
Questions for Authors: - Can the authors add a section in the appendix to expand on the dataset mentioned in 3.1 (i.e., provide examples and details about the editing objectives) for better readability?
- What dataset was employed in Section 5?
- I encourage the authors to release the full code to enhance reproducibility.
- (Minor) Consider reducing v-space in some parts of the paper (e.g., the bottom of page 2).
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the importance, effort in method, and applications of our work. We outline our response to the main concerns:
$\textbf{Q1: }$ Can the authors add a section in the appendix to expand on the dataset mentioned in 3.1
$\textbf{W1: }$ Thank you for your suggestion. We will provide a detailed introduction to the dataset in the appendix. For example, for the DG category in Table 1, the main structure of the dataset is as follows:
|DG category | | |
|---- |---- |---- |
|Key | Text | Annotation |
|Prompt | {}, resulting in PersonY, | Component Prompt |
|Subject | PersonX accepts PersonY appointment | Edited Subject |
|Relation_id | resulting in | Relation Category |
|Target_new | {‘str’: shakes PersonX hand,} | Target Answer |
$\textbf{Q2: }$ The main concern with the data-specific analysis is whether the conclusion is about correlation or causation.
$\textbf{W2: }$ The conclusion is about causation. We agree with your observation that many input-data variables could vary. Our research found that perplexity and token length are causes of the performance degradation of the edited model. To control variables, as shown in Table 1, the 'Subject' and 'Relation_id' are kept consistent, i.e., 'PersonX accepts PersonY appointment, resulting in PersonY, '. To make the model output answers that match the question type, the prompts differ slightly. Each question type consists of 4,000 samples. Although it is difficult to exclude all irrelevant variables, the experimental results support the validity of the conclusion, which helps explain the decline in model performance after editing.
$\textbf{Q3: }$ Recent research [1] has shown that model editing methods (e.g. ROME, MEMIT) are not good at handling multi-hop questions, how would D4C perform in such more challenging scenarios?
$\textbf{W3: }$ Our work focuses more on sequence editing, which is a common and important scenario in real-life applications. Therefore, we did not explore multi-hop questions before. Inspired by your suggestion, we conducted experiments on the multi-hop dataset Mquake [2]. The experimental results showed that our method also achieved superior performance.
|Llama-Method|Eff.|Par.|Mul.|Avg.|
|---- |---- |---- |---- |---- |
|FT|41.43|44.93|**28.24**|38.20|
|ROME|`76.85`|**78.41**|3.67|`52.97`|
|MEMIT|0.00|0.00|0.00|0.00|
|D4C|**85.30**|`72.68`|`28.16`|**62.05**|
|GPT-Method|Eff.|Par.|Mul.|Avg.|
|---- |---- |---- |---- |---- |
|FT|`17.00`|`6.22`|`0.00`|`7.74`|
|ROME|0.00|0.00|`0.00`|0.00|
|MEMIT|0.00|0.00|`0.00`|0.00|
|D4C|**97.40**|**75.54**|**21.84**|**64.93**|
PS: **Bold** indicates the best results, while suboptimal is `highlighted`. Since GRACE [1] is not suitable for the data structure of Mquake, we did not include it in the comparison.
$\textbf{Q4: }$ What dataset was employed in Section 5?
$\textbf{W4: }$ For our sequence editing experiment, we use the ZsRE [3] dataset. **We also expanded our experiment; you can find the details in the Global Response.**
$\textbf{Q5: }$ I encourage the authors to release the full code to enhance reproducibility.
$\textbf{W5: }$ Thank you for your suggestion! We have included our code in the Supplementary Material when submitting the paper. You can download the file and follow the instructions in the README.md to set up the code.
$\textbf{Q6: }$ Some theoretical analysis can be conducted to demonstrate that D4C does not lead to an increase in norms.
$\textbf{W6: }$ For simplicity, we can consider the update of parameters edited by MEMIT after $n$ edits:
$$\Delta W_{MEMIT}=\sum_{i=1}^{n} (r_ik_i^T)(K_0K_0^T+k_ik_i^T)^{-1}$$
Regarding the D4C method, we have:
$$\Delta W_{D4C}=(\sum_{i=1}^{n} r_ik_i^T)(K_0K_0^T+\sum_{i=1}^{n} k_ik_i^T)^{-1}=\sum_{i=1}^{n} (r_ik_i^T)(K_0K_0^T+\sum_{i=1}^{n} k_ik_i^T)^{-1}$$
Since both $K_0K_0^T+k_ik_i^T$ and $K_0K_0^T+\sum_{i=1}^{n} k_ik_i^T$ are positive definite, and the latter dominates the former in the positive semi-definite order (it adds the remaining positive semi-definite terms $k_jk_j^T$, $j\neq i$), the inverse of $K_0K_0^T+\sum_{i=1}^{n} k_ik_i^T$ is smaller than that of $K_0K_0^T+k_ik_i^T$ in the same order. Therefore, the norm of $\Delta W_{D4C}$ is smaller than that of $\Delta W_{MEMIT}$. We are considering adding a detailed theoretical analysis in the revised version.
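As a quick numerical sanity check of this argument (toy random matrices, not the paper's actual model weights; the scale and shapes here are illustrative assumptions), one can compare the L1-norm of an accumulated per-edit update against the consolidated one:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 200                        # toy hidden size and edit count

K0 = rng.standard_normal((d, 4 * d))  # keys of preserved knowledge
C0 = K0 @ K0.T                        # K0 K0^T, positive definite here
ks = rng.standard_normal((d, n))      # edit keys k_i
rs = rng.standard_normal((d, n))      # residuals r_i

# MEMIT-style: each edit solves against C0 + k_i k_i^T; updates accumulate
dW_memit = sum(
    np.outer(rs[:, i], ks[:, i]) @ np.linalg.inv(C0 + np.outer(ks[:, i], ks[:, i]))
    for i in range(n)
)

# D4C-style: all edits consolidated into a single batch-like solve
dW_d4c = (rs @ ks.T) @ np.linalg.inv(C0 + ks @ ks.T)

print(np.abs(dW_memit).sum(), np.abs(dW_d4c).sum())
```

On random data of this kind the consolidated update comes out with a clearly smaller L1-norm, matching the positive-definiteness argument above.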
$\textbf{Q7: }$ It would be beneficial if the authors could also provide an L2-norm plot for comparison.
$\textbf{W7: }$ Thank you for acknowledging our analysis. We have added comparative experiments on the L2-norm, and the results are as follows:
|GPT2-ROME |0|100|200|300|400|500|600|700|800|900|
|---- |---- |---- |---- |---- | ---- |---- |---- |---- |---- | ---- |
|L1-norm ( e-2 ) |1.06 | 1.58 | 2.20 | 2.71 | 3.14 | 3.52 | 3.87 | 4.19 | 4.49 | 4.77 |
|L2-norm ( e-6 ) |1.67| 2.54| 3.68| 4.62| 5.40| 6.10| 6.71| 7.28| 7.82| 8.31|
|GPT2-D4C |0|100|200|300|400|500|600|700|800|900|
|---- |---- |---- |---- |---- | ---- |---- |---- |---- |---- | ---- |
|L1-norm ( e-2 ) | 1.06 | 1.09 | 1.00 | 1.11 | 1.11 | 1.11 | 1.11 | 1.11 | 1.12 | 1.12 |
|L2-norm ( e-6 ) | 1.67| 1.68| 1.69| 1.70| 1.70| 1.71| 1.72| 1.73| 1.73| 1.74|
The results show that the L2-norm and L1-norm conclusions are consistent: our D4C method effectively suppresses norm growth.
$\textbf{Ref}$
[1] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors. Neurips 2023.
[2] Mquake: Assessing knowledge editing in language models via multi-hop questions. EMNLP 2023.
[3] Zero-shot relation extraction via reading comprehension. CoNLL 2017.
---
Rebuttal 2:
Comment: I appreciate the authors' efforts in addressing my questions and am generally satisfied with the responses provided. However, I would like to request additional clarification regarding the connection with the paper [1], which already received 23 citations to date. This paper shares the same motivation as yours—addressing performance degradation—and also proposes regularization as a solution. While I see the references in the related work, the novelty appears somewhat overlapping. Could the authors further elaborate on how their work differentiates itself from this reference and justify the uniqueness of their approach in light of this existing work?
[1] https://arxiv.org/abs/2401.04700v3
---
Rebuttal 3:
Comment: Thank you for your prompt response. Below, we elaborate on the key distinctions between our work and the referenced studies [1], highlighting our novel contributions and advantages.
* **Novelty:** Our research stands apart by pinpointing norm growth as the primary culprit behind model performance degradation, a revelation that precedes the publication of the V3 [1] version of the references. Notably, at the time of our NeurIPS 2024 submission deadline, only the V2 [2] version was available, which merely acknowledged a decline in model performance post-editing without delving into the underlying causes or proposing solutions. By contrast, we are the pioneers in identifying and addressing this critical issue.
* **Evaluation Methodology:** Even if the V3 version is subsequently released, our work retains significant advantages. Specifically, we identified a fundamental flaw in V3's evaluation approach, which relies solely on the original single edit (e.g. evaluating the current editing performance after editing one sample). This approach fails to align with the standard experimental setup for sequence editing, where the performance of all previously edited samples is assessed after multiple edits. Our Figure 7 visually demonstrates the limitations of V3's method and underscores its inability to validate the resolution of catastrophic forgetting.
* **Addressing Editing Bottlenecks:** Our paper goes beyond merely acknowledging the existence of editing bottlenecks, as exemplified by the 850-edit limitation in MEMIT. We boldly extended our experiments to 10,000 edits, demonstrating the remarkable resilience of our model. In stark contrast, V3's experiments were confined to a mere 20 edits (as shown in Figure 7), indicating a lack of depth in exploring and addressing the editing bottleneck challenge. Furthermore, the disparity in evaluation methodologies and the limited scope of V3's edits hinder a fair performance comparison.
* **Mitigating Norm Growth:** Both our work and V3 recognize norm growth as a pivotal factor in performance degradation. However, we present compelling evidence in Figure 6 that our D4C method effectively curbs norm growth, thereby safeguarding model performance. In contrast, V3's regularization strategy, which involves setting certain parameters to zero, lacks empirical support for its ability to significantly alleviate norm growth issues.
Thank you again for your reply. If you have any remaining concerns or need further clarification, we welcome your additional input. Thank you for your continued consideration.
**Ref**
[1] V3 https://arxiv.org/pdf/2401.04700v3
[2] V2 https://arxiv.org/pdf/2401.04700v2
---
Rebuttal 4:
Comment: Thank you for providing such thorough and compelling responses. Despite the existing critiques on knowledge editing, this paper offers the most intuitive and straightforward demonstration of performance degradation, along with effective solutions. I am confident that this work will significantly advance the field, and I will be increasing my score to further support it.
However, I would like to note that the current code provided in the supplementary materials is incomplete. For example, the MQD dataset that was constructed is not included. Ensuring the code's completeness and improving its visibility would likely enhance the impact of this work even further.
---
Rebuttal Comment 4.1:
Comment: Thank you for your reply. We promise to publicly release the dataset and complete code in the future. | Summary: The paper investigates the reasons behind performance decline in sequential model editing approaches that selectively update parameters based on both data and model factors. To address the issues causing this decline, the authors propose a method to save editing history, thereby transforming sequential editing into batch editing with minimal computational overhead.
Strengths: Extensive experimentation is conducted to empirically demonstrate how factors such as dataset characteristics, editing objectives, and model-specific properties affect performance in sequential model editing.
A simple matrix storage solution is introduced, which enables the conversion of sequential editing into batch editing.
Weaknesses: The study is restricted to two closely related editing approaches.
Experimentation is limited in demonstrating the efficacy of the D4C method. Different datasets and a larger number of edits for a more thorough evaluation are needed.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and for recognizing the novelty of our method. Below, we address the weaknesses raised:
$\textbf{Q1: }$ The study is restricted to two closely related editing approaches.
$\textbf{W1: }$ First and foremost, our innovation goes beyond merely translating sequence editing into batch editing: it employs a batch-editing-like approach to implement sequence editing, resulting in a significant performance boost. As indicated in Table 2, previous methods like MEMIT [1] and PMET [2] were designed for batch editing but struggled when applied to sequence editing tasks. In contrast, as we highlighted in L173-L176, our focus on sequence editing aligns more closely with real-world requirements and highlights the limitations of existing knowledge editing techniques. Our method enhances the performance of sequence editing while maintaining a storage complexity of $O(1)$ (a succinct theoretical proof is presented in Reviewer MFeE's 'Q6'). We aim to advance the practical application of knowledge editing technology.
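To make the $O(1)$ storage claim concrete, here is a minimal hypothetical sketch (assumed names and shapes; not the authors' released code) of dump-style consolidation: however many edits arrive sequentially, only two fixed-size accumulators are stored, and the final update is computed with one batch-like solve.

```python
import numpy as np

class SequentialDump:
    """Illustrative O(1)-storage consolidation of sequential edits.

    Instead of storing every edit (O(n), as in key-value adaptor
    approaches), keep two fixed-size accumulators: sum_i r_i k_i^T
    and sum_i k_i k_i^T.
    """

    def __init__(self, d_in, d_out, K0):
        self.RK = np.zeros((d_out, d_in))  # running sum of r_i k_i^T
        self.KK = np.zeros((d_in, d_in))   # running sum of k_i k_i^T
        self.C0 = K0 @ K0.T                # preserved-knowledge covariance

    def add_edit(self, k, r):
        # memory footprint stays constant regardless of edit count
        self.RK += np.outer(r, k)
        self.KK += np.outer(k, k)

    def delta_w(self):
        # single batch-style solve over all consolidated edits
        return self.RK @ np.linalg.inv(self.C0 + self.KK)

d = 8
rng = np.random.default_rng(1)
dump = SequentialDump(d, d, rng.standard_normal((d, 4 * d)))
for _ in range(1000):                      # 1,000 sequential edits
    dump.add_edit(rng.standard_normal(d), rng.standard_normal(d))
dW = dump.delta_w()
print(dW.shape)  # still a fixed d x d update after any number of edits
```

A GRACE-style key-value adaptor would instead store every $(k_i, r_i)$ pair, growing linearly with the number of edits.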
Furthermore, we conducted additional experiments by extending the baselines with the latest state-of-the-art method, GRACE [3]. It is worth noting that the GRACE method performs 22% worse than ours, and because its storage complexity is $O(n)$, it cannot provide experimental results for 10,000 edits.
|GPT-J-ZsRE (1,000 times)|Eff.|Par.|Spe.|Avg.|
|---- |---- |---- |---- |---- |
|ROME|2.44|`2.23`|0.92|1.86|
|MEMIT|0.00|0.02|0.00|0.01|
|GRACE|**100.00**|0.07|**100.00**|`66.69`|
|D4C|`95.98`|**91.17**|`70.17`|**88.77**|
PS: **Bold** indicates the best results, while suboptimal is `highlighted`.
$\textbf{Q2: }$ Experimentation is limited in demonstrating the efficacy of the D4C method. Different datasets and a larger number of edits for a more thorough evaluation are needed.
$\textbf{W2: }$ Thanks again for your advice! We further demonstrate the effectiveness of the method with additional datasets and more edits. First, we expanded the datasets to include Mquake [4] (a multi-hop dataset) and Counterfact [1] (a factual dataset). The results are as follows:
|Llama2-Mquake (1,000 times)|Eff.|Par.|Mul.|Avg.|
|---- |---- |---- |---- |---- |
|FT|41.43|44.93|**28.24**|38.20|
|ROME|`76.85`|**78.41**|3.67|`52.97`|
|MEMIT|0.00|0.00|0.00|0.00|
|D4C|**85.30**|`72.68`|`28.16`|**62.05**|
|Llama2-Counterfact (1,000 times) |Eff.|Par.|Spe.|Avg.|
|---- |---- |---- |---- |---- |
|FT|8.46|4.07|2.03|4.85|
|ROME|27.83|`16.03`|5.66|16.50|
|MEMIT|0.00|0.00|6.72|2.24|
|GRACE|**99.9**|0.25|**99.97**|`66.71`|
|D4C|`96.68`|**46.66**|`72.45`|**71.93**|
PS: **Bold** indicates the best results, while suboptimal is `highlighted`.
Additionally, we extended the number of edits of our method to 10,000 (Llama2) and 9,000 (GPT), and evaluated the performance of the edited GPT models on downstream tasks. As shown in the table below, the edited model maintains strong performance on downstream tasks, demonstrating that the editing method causes minimal damage to the model.
|Num of Edits|arc_challenge|hellaswag|mmlu|truthfulqa_mc|winogrande |
|---- |---- |---- |---- |---- |---- |
|0|43.34|57.14|41.35|38.97|69.14|
|10,000|42.41 ($\downarrow 0.93$)|52.99 ($\downarrow 4.15$)|39.76 ($\downarrow 1.59$)|38.56 ($\downarrow 0.41$)|68.75 ($\downarrow 0.39$)|
**Specific results can be seen in Fig. 1 and Fig. 2 of supplementary PDF and our Global Response.** The experimental results show that our method can still achieve superior performance after 10,000 edits.
$\textbf{Ref}$
[1] Mass-editing memory in a transformer. ICLR 2023.
[2] PMET: Precise Model Editing in a Transformer. AAAI 2024.
[3] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors. Neurips 2023.
[4] Mquake: Assessing knowledge editing in language models via multi-hop questions. EMNLP 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, I have updated the score, the additional experiments should be added to the revised draft to show the efficacy of the approach.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. We promise to add additional experiments in the revised version. | Summary: This paper investigates the reasons and solutions for the decline in model performance of model editing. The authors conduct experiments from two perspectives: data and model. Specifically, to clarify the impact of data on the performance of edited models, the authors first evaluate how editing different types of data affects model performance. Then, the authors construct a Multi-Question Dataset (MQD) and identified that the performance of the edited models is primarily influenced by the diversity of the editing objectives and the length of the tokens. Secondly, the authors explore the factors that affect model performance from a model perspective. Experiments revealed a strong correlation between the L1 norm of the edited model layers and the editing accuracy, and identified an editing quantity bottleneck. To enhance the performance of edited models, the authors propose a Dump for sequence (D4C) method that effectively improves the performance of edited models and overcomes the previous editing bottleneck issue. This method allows for multiple effective edits with minimal impact on model performance.
Strengths: This paper investigates the impact of data on the performance of edited models. Evaluations are conducted across multiple tasks, revealing that the editing objective is the primary factor influencing model performance.
The authors found that the decline in edited model performance is correlated with the explosive growth of the L1 norm of parameter layers during the editing process.
This paper proposes a caching sequence edit method that leverages O(1) space complexity to retain past knowledge and regulate the explosive growth of the parameter layer norm.
Weaknesses: The writing of this paper should be improved. There is no overview of this paper, which makes it hard to follow the details of Section 3 and 4.
The motivation of the proposed method is not clear.
There are many typos such as line 182.
There are many missing references such as:
Knowledge Editing for Large Language Models: A Survey
Stable Knowledge Editing in Large Language Models
A Comprehensive Study of Knowledge Editing for Large Language Models
Editing Large Language Models: Problems, Methods, and Opportunities
Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
The limitations are on page ten. I am unsure if this counts as exceeding the page limit.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive recommendations and valuable feedback!
$\textbf{Q1: }$ There is no overview of this paper, which makes it hard to follow the details of Section 3 and 4.
$\textbf{W1: }$ We provided an overview of this paper using text and figures. Firstly, in the Introduction section, L36-L43 and L44-L50 briefly introduce the relevant content of Sections 3 and 4, respectively. At the same time, we visually present the overview of this paper in Figure 1, including both data and model perspectives. Finally, in L93-L94 and L168-L169, we provide overviews of Sections 3 and 4, respectively.
$\textbf{Q2: }$ The motivation of the proposed method is not clear.
$\textbf{W2: }$ The motivation for this paper is to investigate the causes of performance degradation in edited models and to optimize them. We have explicitly expressed it multiple times in the paper. Firstly, in L4-L5 of the Abstract section, we present this motivation and provide our proposed solution. In addition, in L36-L37, we reiterated this motivation and provided specific experimental settings. Meanwhile, L51-L54 and L62 have repeatedly demonstrated this motivation. Finally, in the conclusion section, L280 and L283 express the motivation behind this paper.
$\textbf{Q3: }$ About Limitations section.
$\textbf{W3: }$ We learned from the NeurIPS website [1] that the Limitations section is not counted as part of the main text of the paper, so it can be placed on page ten.
$\textbf{Q4: }$ About some missing references and typos.
$\textbf{W4: }$ We promise to add missing references and correct typos in the revised version.
$\textbf{Ref}$
[1] CallForPapers. Neurips 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply, my concern has been addressed. I have raised my score. | Rebuttal 1:
Rebuttal: # Global Response
We thank the reviewers for their thoughtful feedback. We are glad the reviewers find that
* Our motivation is innovation and has great significance
* "The paper is well-motivated: Exploring the reasons behind and impact of small changes made by model editing techniques on the performance of unedited samples is of great significance." - MFeE
* "This approach is both innovative and practical for managing edited models." - Kq3N
* Our paper presents a thorough, novel, and significant analysis
* "The model-specific analysis, in particular, is evaluated rigorously, addressing the forgetting issue that prior works often overlooked." -MFeE
* "detailed analysis provide valuable insights into the mechanics of model editing." - Kq3N
* Our solutions are novel and effective and our experiments are well-conducted
* "The experimental results are impressive, demonstrating significant improvements and validating the effectiveness of the proposed method." - MFeE
* "Extensive experimentation is conducted to empirically demonstrate." - rpPG
* "The experiments enhance the credibility and applicability of the findings." - Kq3N
**[Main Motivation]**
As the scale of models continues to grow, how to update model knowledge affordably has remained a persistent challenge. Despite recent intriguing proposals, such as knowledge editing [1], these methods have yet to see practical implementation. The primary issue lies in the significant drop in performance in the edited model, leading to a lack of trust from users. Understanding the causes behind this decline in model performance post-editing is increasingly crucial.
By deriving solutions from the analysis findings and minimizing model disruptions during knowledge updates, the edited model can earn user trust and drive the adoption of this technology in real-world scenarios. As highlighted by reviewer MFeE, this work holds considerable significance.
**[Supplementary Experiments]**
* **1. Dataset Expansion Experiment.** We conducted additional experiments and expanded the datasets to encompass the Counterfact [1] and Mquake [3] datasets. The experimental outcomes for the Counterfact dataset are outlined below
|Llama-Method|Eff.|Par.|Spe.|Avg.|
|---- |---- |---- |---- |---- |
|FT|8.46|4.07|2.03|4.85|
|ROME|27.83|`16.03`|5.66|16.50|
|MEMIT|0.00|0.00|6.72|2.24|
|GRACE|**99.9**|0.25|**99.97**|`66.71`|
|D4C|`96.68`|**46.66**|`72.45`|**71.93**|
|GPT-Method|Eff.|Par.|Spe.|Avg.|
|---- |---- |---- |---- |---- |
|FT|32.80|9.00|1.00|14.27|
|ROME|0.60|0.70|0.60|0.63|
|MEMIT|86.20|**59.50**|31.30|59.00|
|GRACE|**100.00**|0.50|**100.00**|`66.83`|
|D4C|`99.10`|`47.00`|`63.70`|**69.93**|
PS: **Bold** indicates the best results, while suboptimal is `highlighted`.
Additionally, taking into account Reviewer MFeE's suggestions, we investigated the performance of existing baselines under **sequence multi-hop editing** with Mquake [3] dataset:
|Llama-Method|Eff.|Par.|Mul.|Avg.|
|---- |---- |---- |---- |---- |
|FT|41.43|44.93|**28.24**|38.20|
|ROME|`76.85`|**78.41**|3.67|`52.97`|
|MEMIT|0.00|0.00|0.00|0.00|
|D4C|**85.30**|`72.68`|`28.16`|**62.05**|
|GPT-Method|Eff.|Par.|Mul.|Avg.|
|---- |---- |---- |---- |---- |
|FT|`17.00`|`6.22`|`0.00`|`7.74`|
|ROME|0.00|0.00|`0.00`|0.00|
|MEMIT|0.00|0.00|`0.00`|0.00|
|D4C|**97.40**|**75.54**|**21.84**|**64.93**|
PS: Since GRACE is not suitable for the data structure of Mquake, we did not include it in the comparison.
* **2. Method expansion experiment.** We conducted further experiments by extending the baselines with the latest state-of-the-art method, GRACE [2]. Notably, the GRACE method performs 22% worse than ours, and its storage complexity of $O(n)$ prevented it from conducting experiments involving 10,000 edits.
|GPT-1000 times|Eff.|Par.|Spe.|Avg.|
|---- |---- |---- |---- |---- |
|ROME|2.44|`2.23`|0.92|1.86|
|MEMIT|0.00|0.02|0.00|0.01|
|GRACE|**100.00**|0.07|**100.00**|`66.69`|
|D4C|`95.98`|**91.17**|`70.17`|**88.77**|
* **3. Editing frequency expansion experiment.** We extended the number of edits to 10,000, and the edited model still maintained great performance.
|Num of Edits|arc_challenge|hellaswag|mmlu|truthfulqa_mc|winogrande |
|---- |---- |---- |---- |---- |---- |
|0|43.34|57.14|41.35|38.97|69.14|
|10,000|42.41 ($\downarrow 0.93$)|52.99 ($\downarrow 4.15$)|39.76 ($\downarrow 1.59$)|38.56 ($\downarrow 0.41$)|68.75 ($\downarrow 0.39$)|
Existing knowledge editing methods either have poor editing performance (see Table 2) or have a storage complexity of $O(n)$ (e.g., GRACE [2]), and therefore cannot provide experimental results for 10,000 edits.
**The supplementary PDF provides more detailed experimental results.**
$\textbf{Ref}$
[1] Knowledge editing for large language models: A survey. arXiv 2023.
[2] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors. NeurIPS 2023
[3] Mquake: Assessing knowledge editing in language models via multi-hop questions. EMNLP 2023
Pdf: /pdf/4477d3f1c2c031bbc7589790e9e0da593a4317c4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DDK: Distilling Domain Knowledge for Efficient Large Language Models | Accept (poster) | Summary: This paper proposes DDK, a knowledge distillation (KD) framework that distills large language models (LMs) into small LMs. Unlike previous KD methods, DDK dynamically adjusts the domain weights during distillation. Experiments show that DDK outperforms other KD baselines across various tasks.
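The dynamic domain re-weighting described in this summary could be sketched as follows (the gap-softmax weighting formula here is a hypothetical assumption for illustration, not necessarily the paper's exact scheme):

```python
import numpy as np

def domain_weights(teacher_loss, student_loss, temperature=1.0):
    # Sample more from domains where the student lags the teacher most.
    gap = np.maximum(np.asarray(student_loss) - np.asarray(teacher_loss), 0.0)
    logits = gap / temperature
    w = np.exp(logits - logits.max())  # numerically stable softmax
    return w / w.sum()

# Three toy domains; the student lags most on the second one.
w = domain_weights(teacher_loss=[1.0, 2.0, 1.5], student_loss=[1.2, 3.5, 1.6])
print(w.round(3))  # largest weight lands on the second domain
```

Recomputing such weights periodically during distillation would keep the sampling distribution focused on the domains with the largest teacher-student gap.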
Strengths: 1. The paper is well written and the method is easy to follow.
2. The experiments show that DDK outperforms other KD baselines on various tasks.
Weaknesses: The extra computation introduced by DDK should be considered. It seems DDK requires inference of a large LM during the training of the small LM. When the teacher model is much larger than the student model (Qwen-1.5 14B vs. Qwen-1.5 1.8B), the inference cost of the teacher model would be even larger than the cost of training the student model. Therefore, it is more reasonable to compare the performance of the distilled model and the baselines given the same FLOPs.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What are the training data for the baselines (CPT, TED, KD, and MiniLLM)? Is the data for DDK the same as that for the baseline methods?
2. In lines 178-179, is the learning rate 3e-5 ($3\times 10^{-5}$) rather than $3e^{-5}$?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful reading and constructive suggestions. We will address your concerns shown below in detail.
**Q1: Extra computation introduced by DDK should be considered. Compare the performance of the distilled model and the baselines given the same FLOPs.**
**A1**: Thanks for your insightful suggestions. In Table 13 of Appendix B.2, we report the training costs (i.e., TFLOPs) of different baseline methods, and we observe that the additional computation costs are acceptable compared with the baseline KD method. Moreover, the ratio between the TFLOPs of DDK and CPT is 5.401e8/1.456e8 = 3.709. We therefore also performed an experiment that continues pre-training the student model on about 56B tokens, for which the TFLOPs is about 5.4e8, matching DDK's budget. The results are shown in the following table. We observe that continuing to pretrain the student model on more tokens without teacher guidance yields little gain. We attribute this to the Qwen1.5 model having been trained on >3T tokens, so its convergence is already stable under standard pretraining. In contrast, with the teacher guidance of DDK, we observe consistent performance improvements in Table 1, Table 2, Table 7, and Table 8, which further demonstrates the effectiveness of our DDK.
| Models | TFLOPs | CEval | MMLU | RACE | C3 | W.G. | GSM8K | C.QA | Arc-E | Arc-C | H.E. | MBPP | Avg. |
| --------------------- | ------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| Teacher (Qwen1.5-14B) | - | 78.68 | 64.34 | 89.95 | 77.38 | 68.74 | 67.63 | 82.06 | 87.58 | 80.59 | 37.80 | 44.00 | 70.80 |
| Student (Qwen1.5-1.8B) | - | 59.66 | 44.48 | 69.57 | 58.27 | 57.85 | 38.4 | 64.70 | 70.23 | 50.31 | 11.87 | 18.00 | 49.39 |
| +CPT (15B tokens) | 1.456e8 | 60.13 | 45.01 | 69.00 | 60.30 | 56.98 | 42.50 | 64.78 | 72.00 | 51.03 | 13.12 | 20.45 | 50.48 |
| +DDK (15B tokens) | 5.401e8 | 63.75 | 46.01 | 71.56 | 65.53 | 59.10 | 53.54 | 66.75 | 75.01 | 55.03 | 27.13 | 26.10 | 55.41 |
| +CPT (56B tokens) | 5.436e8 | 60.15 | 45.00 | 69.16 | 60.43 | 57.06 | 42.91 | 64.86 | 72.14 | 51.19 | 13.41 | 20.74 | 50.64 |
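As a quick sanity check, the compute matching above can be reproduced in a few lines (a minimal sketch using the TFLOPs figures from the table, not our training code):

```python
# Compute-matched comparison between DDK and continued pretraining (CPT).
# TFLOPs figures are taken from the table above.
ddk_tflops = 5.401e8   # DDK on 15B tokens (includes teacher inference)
cpt_tflops = 1.456e8   # CPT on 15B tokens (student only)

# Ratio of total training compute.
ratio = ddk_tflops / cpt_tflops
print(round(ratio, 3))  # 3.709

# To match DDK's compute, CPT must process roughly ratio * 15B tokens.
matched_tokens_b = 15 * ratio
print(round(matched_tokens_b))  # ~56 (billion tokens)
```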
We will include the above discussion in our revised version.
**Q2: Is the data for DDK the same as that for the baseline methods?**
**A2**: The training data is the same, and we adopt the same seed for shuffling the dataset, which ensures the training data is identical across the different methods. We will clarify this detail in our new version.
**Q3: Learning rate issue.**
**A3**: Thanks for pointing this out. We will fix this typo in the new version.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer 6kDi
Comment: I thank the authors for their response. After reading the response, I think my current score is appropriate.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback. We will carefully address your concerns in our new version. | Summary: The paper introduces a new framework called Dynamic Domain Knowledge Distillation (DDK) to enhance the efficiency of knowledge distillation for large language models (LLMs). Unlike traditional methods that overlook domain performance differences between student and teacher models, DDK dynamically adjusts the distillation dataset composition based on these differences, ensuring a more stable and effective knowledge transfer. This approach addresses the issue of excessive focus on domains with minimal performance gaps and enhances overall model performance. Extensive evaluations demonstrate that DDK significantly outperforms existing knowledge distillation methods and continuously pretrained baselines.
Strengths: - The proposed dynamic dataloader for KD is technically sound.
- Numerical experiments well validate the efficacy of the method.
Weaknesses: - The dynamic dataloader requires knowing the training data distribution and categories beforehand.
- Missing references. Similar ideas have been explored in pruning LLMs, such as Sheared LLaMA and LoRAShear, which recover knowledge after pruning. The paper should discuss them in the related work section given the close relation between pruning and KD.
Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the method perform under other KD losses, such as reversed KLD, JSD, and skew-KLD?
On-policy distillation of language models: Learning from self-generated mistakes.
DistiLLM: Towards Streamlined Distillation for Large Language Models.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and constructive suggestions. We address each of your concerns in detail below.
**Q1: Require knowing the training data distribution and category beforehand.**
**A1**: Our DDK requires the training data distribution and categories to divide the domains, and we acknowledge this limitation. However, existing pretraining datasets usually provide the source composition. For example, RedPajama [https://www.together.ai/blog/redpajama] includes 7 domains (i.e., CommonCrawl, C4, GitHub, Wikipedia, Books, ArXiv, and StackExchange), The Pile [https://arxiv.org/pdf/2101.00027] includes 22 domains (e.g., Pile-CC, PubMed Central, Books3, and OpenWebText2), and Dolma [https://arxiv.org/pdf/2402.00159] includes 6 domains (e.g., Common Crawl, GitHub, Reddit, and Semantic Scholar).
Meanwhile, we also tried another option that divides the training data into 10 domains with a clustering method to address this limitation: we first extract the feature of each document using [https://huggingface.co/BAAI/bge-m3] and then use k-means to divide the training corpus into 10 implicit domains. We then apply DDK on these implicitly divided domains; we call this method DDK (clustering), and it does not require knowing the training data distribution and categories. In the following table, we observe that DDK (clustering) achieves performance comparable to the original DDK. We assume this is because the domains explicitly divided by source (e.g., The Stack, OpenWebMath) are already distinct. We will continue to investigate the clustering method, which is more scalable.
| Models | CEval | MMLU | RACE | C3 | W.G. | GSM8K | C.QA | Arc-E | Arc-C | H.E. | MBPP | Avg. |
| ----------------------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| Student (1.8B)+DDK | 63.75 | 46.01 | 71.56 | 65.53 | 59.1 | 53.54 | 66.75 | 75.01 | 55.03 | 27.13 | 26.1 | 55.41 |
| Student(1.8B)+DDK(clustering) | 63.19 | 46.11 | 72.41 | 67.03 | 59.03 | 51.59 | 65.01 | 74.91 | 55.95 | 25.43 | 26.38 | 55.19 |
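The clustering-based split described above can be sketched as follows. This is a minimal, self-contained illustration: in practice each document is embedded with BAAI/bge-m3 and partitioned with k-means into 10 implicit domains, whereas here we use toy 2-D embeddings and a tiny hand-rolled k-means loop.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means for illustration: assigns each point to one of k clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Update step: recompute each center as the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return assign

# Toy "document embeddings": two well-separated blobs standing in for domains.
docs = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1), (5.0, 5.1), (5.1, 5.0), (4.9, 5.2)]
domains = kmeans(docs, k=2)
print(domains)  # the two blobs end up in different implicit domains
```

With real embeddings, a library implementation such as sklearn's KMeans would replace the toy loop.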
**Q2: Cite and discuss Sheared LLaMA and LoRAShear.**
**A2:** Please see **General Response** (**G.Q2**) for the discussion of Sheared LLaMA and DDK.
The discussion between LoRAShear and DDK is as follows.
LoRAShear is proposed for structured pruning of LLMs: it first creates dependency graphs over the LoRA modules and then performs progressive structured pruning on the LoRA adaptors. To recover the knowledge lost during pruning, LoRAShear proposes a Dynamic Knowledge Recovery scheme on both the pre-training and instruction fine-tuning datasets to effectively narrow the performance gap to the full models.
In contrast to LoRAShear, DDK aims to efficiently transfer the domain knowledge of the teacher network, and proposes a factor smooth updating strategy to stabilize the domain-knowledge-guided sampling process.
We will cite these works in our new version.
**Q3: How the methods perform under other KD losses (reversed KLD, JSD, skew-KLD)?**
**A3:** To address the mode-averaging problem of the student model, where the student learns an overly smooth distribution in an attempt to cover the teacher's entire support set, the reversed KLD in MiniLLM and the generalized JSD in [R1] were proposed. However, these on-policy approaches show lower efficiency, so the skew-KLD in [R2] introduces an adaptive off-policy approach to improve efficiency. In contrast to these works, our DDK focuses on efficiently transferring the domain knowledge of the teacher network based on the standard forward KD loss; both the motivation and the technical details differ substantially from these works. Besides, our DDK is orthogonal to these KD losses: we can directly replace the KL divergence in Eq. 3 with the losses from MiniLLM, [R1], and [R2]. In the following table, we report the results of our DDK using different distillation losses, and we observe that these losses do not provide additional performance gains over our KLD-based DDK. We assume this is because DDK targets the pre-training setting with a relatively large training dataset, where data quality and data mixture are very important (see **G.Q1**), whereas MiniLLM and [R1, R2] mainly target the SFT phase, improving learning efficiency by addressing the mode-averaging problem. Therefore, we assume that the main improvement comes from changing the domain mixture in our DDK. In the future, we will continue to investigate the optimal distillation loss formulation for DDK following [R1, R2].
| Models | CEval | MMLU | RACE | C3 | W.G. | GSM8K | C.QA | Arc-E | Arc-C | H.E. | MBPP | Avg |
| ------------------------------ | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| Student (Qwen1.5-1.8B) | 59.66 | 44.48 | 69.57 | 58.27 | 57.85 | 38.4 | 64.7 | 70.23 | 50.31 | 11.87 | 18 | 49.39 |
| +DDK (Default KLD in Eq.3) | 63.75 | 46.01 | 71.56 | 65.53 | 59.1 | 53.54 | 66.75 | 75.01 | 55.03 | 27.13 | 26.1 | 55.41 |
| +DDK (Reversed KLD in MiniLLM) | 62.19 | 45.85 | 70.66 | 65.03 | 54.67 | 50.6 | 65.11 | 74.51 | 52.38 | 23.44 | 24.67 | 53.56 |
| +DDK(JSD in [R1]) | 62.63 | 45.72 | 71.7 | 65.58 | 57.14 | 53.07 | 67.08 | 75.34 | 54.95 | 22.61 | 25.15 | 54.63 |
| +DDK(skew-KLD in [R2]) | 62.96 | 45.42 | 71.84 | 65.71 | 58.59 | 53.45 | 67.16 | 75.46 | 55.03 | 26.05 | 25.98 | 55.24 |
[R1]: On-policy distillation of language models: Learning from self-generated mistakes, https://arxiv.org/pdf/2306.13649
[R2]: DistiLLM: Towards Streamlined Distillation for Large Language Models, https://arxiv.org/pdf/2402.03898
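For reference, the loss variants compared in the table can be written out on discrete token distributions as follows. This is a minimal sketch; in particular, the α-skew form of skew-KLD follows our reading of DistiLLM, so treat its exact parameterization as an assumption.

```python
import math

def kl(p, q):
    """Forward KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def reverse_kl(p, q):
    """Reverse KL, KL(q || p), as used by MiniLLM (teacher p, student q)."""
    return kl(q, p)

def jsd(p, q):
    """Jensen-Shannon divergence: a symmetric, bounded mixture of two KLs."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def skew_kl(p, q, alpha=0.1):
    """Alpha-skew KL: KL(p || alpha*p + (1-alpha)*q); our reading of DistiLLM."""
    mix = [alpha * pi + (1 - alpha) * qi for pi, qi in zip(p, q)]
    return kl(p, mix)

# Toy next-token distributions for a 3-token vocabulary.
teacher = [0.7, 0.2, 0.1]
student = [0.5, 0.3, 0.2]
print(kl(teacher, student), reverse_kl(teacher, student),
      jsd(teacher, student), skew_kl(teacher, student))
```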
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the additional discussions and results, and I have decided to increase my rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback. We will carefully address your concerns in our new version. | Summary: The work introduces a novel framework for knowledge distillation (KD) for LLMs. The key innovation of DDK is its dynamic adjustment of the distillation dataset composition based on domain performance differences between the teacher and student models. The paper presents extensive evaluations demonstrating that DDK significantly improves performance in various KD settings, outperforming both continuous training baselines and existing KD methods.
Strengths: 1. The authors provide extensive empirical evidence demonstrating the effectiveness of DDK in improving the performance of student models across various benchmarks.
2. As the computational and storage demands of LLMs are significant barriers to their widespread deployment, KD is a promising solution. The proposed KD method is simple and easy to follow.
Weaknesses: 1. Discuss the difference between DDK and the Dynamic Batch Loading proposed by Sheared LLaMA [1], which is also proposed to adjust domain proportions dynamically when training smaller models. They also identify discrepancies in loss between smaller and larger models across various domains and, accordingly, sample more data from domains where the discrepancy is more pronounced. While they concentrate on structural pruning, their approach is akin to DDK. Consequently, I perceive the novelty of DDK as being somewhat limited.
2. The results of Qwen 1.5 in Table 1 are not significantly convincing. The MMLU/HumanEval of Qwen 1.5 1.8B in the Qwen official blog are 46.8/20.1 while the authors' report is 44.5/11.9. In addition, compared to the official results, we can see that the DDK fails to improve the model of the students on MMLU. The authors need to check this and provide **more robust results of baselines**.
[1] SHEARED LLAMA: ACCELERATING LANGUAGE MODEL PRE-TRAINING VIA STRUCTURED PRUNING. Xia et al., 2023
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In the paper's setting, the domains are predefined. How can the DDK framework be extended to a new domain during the distillation training process? Could you provide more experiments on continual domain learning settings?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
**Q1: Discuss with Sheared LLaMA. Novelty.**
**A1:** Please See **General Response** (**G.Q1** and **G.Q2**).
**Q2: Qwen1.5 results on MMLU and Humaneval.**
**A2:** For MMLU, the accuracy from the Qwen blog is based on a 5-shot setting, while the result in Table 1 is based on a zero-shot setting (see Line 182). Besides, we provide the 5-shot MMLU result in Table 12 and observe it is 45.59, which is close to the reported result (46.8) from the Qwen blog. We assume the remaining gap between 46.8 and 45.59 comes from the evaluation prompt. Specifically, the evaluation prompt for DDK is as follows:
```python
# Prompt-1
for opt in ['A', 'B', 'C', 'D']:
    prompt = "Answer the question.\nQuestion: {question}\nChoices:\nA. {A}\nB. {B}\nC. {C}\nD. {D}\nAnswer: " + opt
```
The ``{question}``, ``{A}``, ``{B}``, ``{C}``, ``{D}`` are the corresponding inputs for each evaluation sample.
We also tried the following prompt from OpenCompass [https://github.com/open-compass/opencompass], where the task description is also added to the evaluation prompt. Note that ``mmlu_all_sets`` contains 57 tasks. With this prompt, we obtain 46.88 and 47.05 under the zero-shot and few-shot settings, respectively.
```python
# Prompt-2
mmlu_all_sets = ['college biology', 'college chemistry', 'college computer science', ..., 'human aging', 'us foreign policy', 'conceptual physics']
for task in mmlu_all_sets:
    for opt in ['A', 'B', 'C', 'D']:
        prompt = "The following are multiple choice questions (with answers) about " + task + '\n\n' + "{question}\nA. {A}\nB. {B}\nC. {C}\nD. {D}" + '\n' + 'Answer: ' + opt
```
We also provide the following results and DDK achieves consistent improvements in MMLU.
| | Prompt-1 | Prompt-2 | Prompt-1(few-shot) | Prompt-2(few-shot) |
| -------------- | -------- | -------- | ------------------ | ------------------ |
| Student (1.8B) | 44.48 | 46.88 | 45.59 | 47.05 |
| +DDK | 46.01 | 47.95 | 47.59 | 48.12 |
For HumanEval, we observe that the results of Qwen1.5-1.8B are very sensitive to the prompt; we use the naive testing prompt (accuracy of 11.87) as follows:
```python
# Prompt-1
prompt = '{prompt}'
```
Similarly, we also tried the following two prompts from OpenCompass:
```python
# Prompt-2
prompt='Complete the following python code:\n{prompt}'
# Prompt-3
prompt='You are an intelligent programming assistant to produce Python algorithmic solutions.\nCan you complete the following Python function?\n```python\n{prompt}\n```'
```
We observe that the results with ``Prompt-2`` and ``Prompt-3`` are 23.17 and 26.83, respectively, both higher than the reported result (20.1) on the Qwen blog.
Meanwhile, we report the DDK results under these three evaluation prompt settings. In the following table, we observe that DDK still achieves better performance than the baseline. In the new version, we will provide a detailed analysis of the evaluation prompt and more solid, robust results.
| | Prompt-1 | Prompt-2 | Prompt-3 |
| -------------- | -------- | -------- | -------- |
| Student (1.8B) | 11.87 | 23.17 | 26.83 |
| +DDK | 27.13 | 29.59 | 31.08 |
**Q3: Extend DDK for new domains during distillation.**
**A3:** When a new domain is introduced during distillation, we first prepare the validation dataset for this domain following Appendix B.1 (Lines 490-510). Then, following Eq. (1), based on the current model checkpoint, we compute the perplexity scores over the validation sets of all domains for the student and teacher, respectively, and obtain the newly initialized domain discrepancy factor, which is then used to change the domain mixture following Algorithm 1 of the main paper.
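A hypothetical sketch of this re-initialization step is shown below; the exact form of Eq. (1) and the softmax normalization are illustrative assumptions, as are the perplexity values.

```python
import math

def discrepancy_factors(ppl_student, ppl_teacher, temperature=1.0):
    """Illustrative domain discrepancy factor: softmax over student-teacher
    validation-perplexity gaps, so domains where the student lags more
    receive more sampling weight. Not the paper's exact Eq. (1)."""
    gaps = [s - t for s, t in zip(ppl_student, ppl_teacher)]
    exps = [math.exp(g / temperature) for g in gaps]
    total = sum(exps)
    return [e / total for e in exps]

# Three existing domains plus one newly added domain (last); PPLs are made up.
ppl_student = [10.2, 18.5, 25.8, 30.0]
ppl_teacher = [9.0, 17.9, 20.4, 21.0]
weights = discrepancy_factors(ppl_student, ppl_teacher)
print(weights)  # the new, weakest domain receives the largest sampling weight
```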
**Q4: Experiments on the continual domain learning setting.**
**A4:** We provide the results of introducing a new source domain (i.e., Law) using the Pile-of-Law dataset [https://huggingface.co/datasets/pile-of-law/pile-of-law]. Specifically, we first train on the original 10 domains for 5B/10B tokens. Then, we append the Law domain to the training dataset. After that, following Algorithm 1 of the main paper, we continue the distillation training of the student for an additional 10B/5B tokens on these 11 domains, which enables a fair comparison with the DDK baseline trained on 10 domains for about 15B tokens. To show the effect of introducing the Law domain, we additionally report the results on the Law subset of MMLU-Pro [https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro] (MMLU-Pro has 14 high-quality subsets with challenging questions). We observe that by introducing the Law domain, better results on MMLU-Pro (Law) are obtained while results on the other datasets are preserved well.
| Models | MMLU | RACE | W.G. | GSM8K | Arc-E | Arc-C | MMLU-Pro(Law) | Avg. |
| ------------------------------------------------------------ | ----- | ----- | ----- | ----- | ----- | ----- | ------------- | ----- |
| Student (Qwen1.5-1.8B) | 44.48 | 69.57 | 57.85 | 38.4 | 70.23 | 50.31 | 14.4 | 49.32 |
| Student+DDK(train 5B on original 10 domains) | 44.12 | 69.79 | 58.17 | 44.35 | 72.39 | 52.75 | 14.8 | 50.91 |
| Student+DDK(train 5B on original 10 domains+10B with additional new law domain) | 45.98 | 71.43 | 58.61 | 51.12 | 74.78 | 54.95 | 16.29 | 53.31 |
| Student+DDK(train 10B on original 10 domains) | 45.85 | 70.96 | 59.19 | 49.66 | 74.07 | 53.18 | 14.68 | 52.51 |
| Student+DDK(train 10B on original 10 domains+5B with additional new law domain) | 46.12 | 71.02 | 58.8 | 52.51 | 74.36 | 54.27 | 15.75 | 53.26 |
| Student+DDK(train 15B on original 10 domains, baseline in Table 1) | 46.01 | 71.56 | 59.1 | 53.54 | 75.01 | 55.03 | 14.74 | 53.57 |
---
Rebuttal Comment 1.1:
Title: Looking forward to Feedback as Discussion Deadline Approaches
Comment: Hi, we sincerely thank you very much for these constructive comments and evaluation of our manuscript. As the discussion phase will be closed on Aug 13 11:59 pm AoE, we would like to kindly ask you to take a look at our responses and reevaluate our work based on our clarifications. Please let us know whether our response addresses your concerns or whether there is any further detail we can provide to help address these concerns.
Thank you again for dedicating your time to reviewing our paper. | Summary: This work proposes a KD strategy for LLMs. Specifically, with access to the domain-specific performance of both the teacher and student LLMs, DDK uses domain-knowledge-guided sampling to dynamically update the data mixture. In addition, the paper conducts a statistical analysis of the domain distribution of the datasets involved. The training process is relatively straightforward and easy to generalize. The experimental results also show that DDK's training method improves the average performance across different datasets.
Strengths: 1. A complete training algorithm is designed, and the process is explained clearly. The process of the DDK algorithm is easy to extend to the training process of other models.
2. The authors conducted a comprehensive knowledge distillation experiment on two large model families and a comprehensive ablation study.
Weaknesses: 1. Although the method proposed in this paper is easy to understand and effective, I doubt that the method in this paper is limited to LLMs. In other words, this paper does not mention (or needs to explain) how previous researchers (before LLMs) performed domain-enhanced distillation for domain-biased datasets, and why these previous methods cannot be applied to the distillation of LLMs to achieve similar results. The advantages and novelty of this paper's domain sampling method over previous work that may be transferable to LLMs need further explanation.
2. In the experimental part, there is a lack of key comparison between DDK and other methods that focus on similar domain sampling. The baseline actually involves the work that focuses on domain in KD (cited as [60], etc.), but the subsequent analysis only compares the total average score of DDK and these works, which seems to lack comparison and analysis of similar works. As far as I know, other baselines are more general KDs, and do not focus on domain information.
It is certainly worth noting that DDK performs better than baselines such as MiniLLM, but I think what can better illustrate the effectiveness and novelty of this paper is the comparison with similar domain data sampling, including experimental analysis.
3. In the experimental section, you can add experiments on the dataset and the scale property of the teacher model. This is a possible suggestion.
Technical Quality: 3
Clarity: 3
Questions for Authors: The questions I expect to ask would be similar to the above section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The method proposed in this paper is very complete and solves the problem well, but there are perhaps two points to note:
1. The distillation method designed in this paper does not seem to be necessarily limited to LLMs. One of the main difficulties of KD on LLM may be that the distribution difference between the teacher and student models is too large, but the key points that this paper focuses on and solves seem to be orthogonal to this point. So it also leads to a similar question: why previous similar methods that focus on domain sampling cannot be migrated to LLMs, and what are the advantages and novelties of this paper's design.
2. Following point 1, in the experiment, what are the specific advantages of this method over the predecessors in the domain problem (not just the overall average, of which result shows DDK outperformed others). This may be what I am very curious about after reading the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your nice comments and suggestions.
**Q1: Compare domain-enhanced KD methods.**
**A1:** After investigating the KD survey [R1] and its applications to LLMs [R2], we observe that existing domain-enhanced KD methods can be divided into two categories. The first is cross-domain KD for domain adaptation, which improves generalization to unseen target domains based on seen source domains; in contrast, DDK aims to improve training efficiency and effectiveness given multiple seen domains, so these cross-domain KD methods cannot be applied to LLM pretraining. The second is based on a multi-teacher strategy [R3, R4], which first trains different domain expert models and then distills the overall domain capacities into a single model. Thus, we implement two baselines following [R3, R4].
[R1]. Knowledge Distillation: A Survey
[R2]. A Survey on Knowledge Distillation of Large Language Models
[R3]. BAM! Born-Again Multi-Task Networks for Natural Language Understanding
[R4]. Knowledge Fusion of Large Language Models
Specifically, we first train domain expert models on 10 different domains as teacher models, where each expert is trained on about 5B tokens from the corresponding domain. Then, for the first baseline [R3], we select the corresponding teacher to produce logits for each sample based on its domain. For the second baseline [R4], we ensemble logits from the different domain teacher models. The table below lists the PPL on the validation datasets of the 10 domains.
| | CC | C4 | The Stack | Wikipedia | Books | Arxiv | StackExchange | Chinese Books | Chinese CC | OpenWebMath |
| ------- | ----- | ----- | --------- | --------- | ----- | ----- | ------------- | ------------- | ---------- | ----------- |
| Student | 10.15 | 18.50 | 4.14 | 13.27 | 25.81 | 12.97 | 10.04 | 23.54 | 21.20 | 9.50 |
| +[R3] | 10.00 | 19.14 | 4.14 | 10.12 | 22.73 | 12.90 | 8.37 | 20.59 | 20.19 | 4.75 |
| +[R4] | 10.06 | 19.09 | 4.38 | 9.54 | 26.01 | 12.72 | 9.46 | 20.51 | 20.12 | 5.97 |
| +DDK | 10.02 | 18.94 | 3.95 | 8.36 | 20.43 | 11.23 | 7.22 | 19.69 | 20.33 | 4.62 |
We also report the downstream results and observe that the baseline methods are inferior to DDK. We assume this is because these methods only consider producing better teacher guidance and do not consider the effect of the domain mixture on LLM pretraining.
| Models | MMLU | RACE | W.G. | GSM8K | Arc-E | Arc-C | H.E. | Avg. |
| ------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| Student | 44.48 | 69.57 | 57.85 | 38.4 | 70.23 | 50.31 | 11.87 | 48.96 |
| +[R3] | 45.02 | 68.25 | 58.8 | 47.24 | 71.17 | 52.44 | 19.09 | 51.72 |
| +[R4] | 45.53 | 68.32 | 58.25 | 39.65 | 74.33 | 53.84 | 20.26 | 51.45 |
| +DDK | 46.01 | 71.56 | 59.1 | 53.54 | 75.01 | 55.03 | 27.13 | 55.34 |
**Q2: Experiments on dataset and scale property of teacher.**
**A2:** For experiments on datasets, we have performed experiments using StarCoder on the Stack v2 dataset in Table 11. Besides, we perform experiments on The Pile dataset [https://arxiv.org/pdf/2101.00027] with 22 domains, using Qwen-1.5 14B and Qwen-1.5 1.8B as teacher and student following Table 1. Note that as The Pile focuses on English, we do not evaluate on CEval and C3.
| Models | MMLU | RACE | W.G. | GSM8K | C.QA | Arc-E | Arc-C | H.E. | MBPP | Avg. |
| ----------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| +CPT (main paper) | 45.01 | 69.00 | 56.98 | 42.50 | 64.78 | 72.00 | 51.03 | 13.12 | 20.45 | 48.32 |
| +DDK (main paper) | 46.01 | 71.56 | 59.10 | 53.54 | 66.75 | 75.01 | 55.03 | 27.13 | 26.10 | 53.36 |
| +CPT(The Pile) | 44.74 | 68.86 | 56.43 | 40.56 | 64.29 | 71.13 | 50.34 | 12.20 | 18.48 | 47.45 |
| +DDK(The Pile) | 45.72 | 69.98 | 58.80 | 45.41 | 65.44 | 74.03 | 54.52 | 22.56 | 22.59 | 51.01 |
We observe that DDK achieves better performance than the CPT baseline on The Pile, and that results on The Pile are lower than the results in our main paper. We assume the data quality used in our main paper (see Lines 166-171) is better than that of The Pile, as we directly use the original 2021 Pile dataset without additional cleaning strategies.
For the scale property of the teacher, we have used teacher models of different sizes (i.e., Qwen1.5-14B and Qwen1.5-7B) to distill Qwen1.5-1.8B in Table 1 and Table 8; the average results distilled by Qwen1.5-14B and Qwen1.5-7B are 55.41 and 53.64, respectively, which means that better results are achieved with a relatively larger teacher.
**Q3: Advantages and novelties of DDK.**
**A3:** Please See **General Response** (**G.Q1**).
**Q4: Advantages over predecessors (not the overall average).**
**A4:** First, in Fig. 1, DDK substantially reduces the performance gaps between teacher and student in many domains. For example, with DDK, the PPL on the ``Books`` domain is much lower than that of other methods, and the PPL on the ``StackExchange`` domain is very close to the teacher's.
Second, in Table 1, we observe that a large relative performance gap between teacher and student usually exists on reasoning-related tasks. For example, the relative performance gap ratios are 43.22% [(67.63-38.4)/67.63] and 68.59% [(37.80-11.87)/37.80] for GSM8K and HumanEval, respectively. In contrast, for knowledge-related tasks, the relative performance gap ratios are 24.17% [(78.68-59.66)/78.68] and 30.87% [(64.34-44.48)/64.34] for CEval and MMLU. After using DDK, the relative performance gaps are 20.84%, 28.23%, 18.98%, and 28.49% on GSM8K, HumanEval, CEval, and MMLU. In contrast, for ``CPT & DoReMi'' and MiniLLM, the relative performance gaps are [32.18%, 76.85%, 21.91%, 30.15%] and [27.69%, 55.34%, 21.63%, 29.95%], respectively. Thus, we assume that DDK can greatly improve the weaknesses on reasoning-related tasks.
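The relative performance gap ratio used in this comparison is simply:

```python
def relative_gap(teacher_score, student_score):
    """Relative performance gap ratio between teacher and student, in percent."""
    return (teacher_score - student_score) / teacher_score * 100

# Figures from Table 1: gaps are much larger on reasoning-related tasks.
print(round(relative_gap(67.63, 38.40), 2))  # GSM8K: 43.22
print(round(relative_gap(78.68, 59.66), 2))  # CEval: 24.17
print(round(relative_gap(37.80, 11.87), 2))  # HumanEval: ~68.6
```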
---
Rebuttal Comment 1.1:
Title: Looking forward to Feedback as Discussion Deadline Approaches
Comment: Hi, we sincerely thank you very much for these constructive comments and evaluation of our manuscript. As the discussion phase will be closed on Aug 13 11:59 pm AoE, we would like to kindly ask you to take a look at our responses and reevaluate our work based on our clarifications. Please let us know whether our response addresses your concerns or whether there is any further detail we can provide to help address these concerns.
Thank you again for dedicating your time to reviewing our paper.
---
Rebuttal Comment 1.2:
Title: Combine MiniLLM with the domain sampling method of DDK.
Comment: The comparison with MiniLLM under a similar domain sampling strategy is also discussed in **Q3** from **Reviewer oZHE**, where we directly replace the KL divergence loss in Eq. 3 with the reversed KLD loss of MiniLLM. In **A3** for **Q3** from **Reviewer oZHE**, we call this alternative method **DDK (Reversed KLD in MiniLLM)**, and we observe that replacing the distillation loss does not bring further gains over DDK with the default KL divergence loss. As discussed in **A3**, we assume this is because our DDK targets the pre-training setting with a relatively large training dataset, where data quality and data mixture are very important (see **G.Q1**), while MiniLLM targets the SFT phase to address the mode-averaging problem in distillation. Therefore, we assume that the main improvement comes from changing the domain mixture in our DDK.
We will include this discussion in our new version.
---
Rebuttal Comment 1.3:
Comment: Dear **Reviewer foae**,
Hello! We appreciate your detailed suggestions on our paper. We would like to kindly ask you to take a look at our responses and reevaluate our work given our clarifications. Again, we sincerely thank you for these constructive comments and your evaluation of our manuscript. | Rebuttal 1:
Rebuttal: ## **General Response**
Thanks a lot for handling/reviewing our submitted manuscript. We thank the reviewers for their thoughtful and constructive comments and suggestions. By addressing each of the issues raised by the reviewers, we believe the quality and clarity of our DDK paper can be substantially improved. The general responses are summarized as follows:
**G.Q1: The advantages and novelties of this paper's design.**
**A1:** The pretraining datasets for LLMs are typically sampled from a mixture of many domains. Data from different domains interact with each other, showing complex interchangeable, unrelated, or contradictory relationships. Many works have shown that the data mixture of the pretraining data greatly affects the effectiveness of LLMs [R1,R2,R3,R4,R5,R6], and selecting the optimal domain mixture is a challenging problem. Based on knowledge distillation, our DDK aims to produce effective lightweight LLMs by transferring the domain knowledge of the teacher network. Specifically, in Fig. 1 of the main paper, we observe that the performance gap between student (Qwen-1.5 1.8B) and teacher (Qwen-1.5 14B) varies significantly across domains, which means we need to adjust the domain mixture, reallocating more data sampling weight to the weaker domains. Besides, in Table 1, Table 2, Table 7, and Table 8, our DDK substantially outperforms the baseline KD, which shows the effectiveness of changing the domain mixture during distillation. To our knowledge, we are the first to investigate the effectiveness of the domain mixture for distilling LLMs, and we introduce DDK to enhance student LLMs. Meanwhile, we observe that naively changing the domain weights compromises the stability of distillation (see Fig. 3), and we propose the factor smooth updating strategy to stabilize it.
Overall, our DDK is motivated by the effect of the domain mixture on lightweight LLMs, and we design corresponding strategies to better transfer the domain knowledge of the teacher network.
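The factor smooth updating strategy mentioned above can be sketched as follows. This is a hypothetical EMA-style stand-in: the actual update rule is Algorithm 1 of the paper, so the blending form and coefficient here are assumptions.

```python
def smooth_update(old_weights, new_weights, momentum=0.9):
    """Exponential-moving-average smoothing of domain sampling weights.

    Blending the freshly estimated domain mixture with the previous one
    prevents the abrupt mixture changes that destabilize distillation.
    """
    blended = [momentum * o + (1 - momentum) * n
               for o, n in zip(old_weights, new_weights)]
    total = sum(blended)  # renormalize to a valid sampling distribution
    return [b / total for b in blended]

old = [0.40, 0.35, 0.25]
# A naive re-estimate that would drastically shift the mixture.
new = [0.05, 0.15, 0.80]
smoothed = smooth_update(old, new)
print(smoothed)  # moves toward `new` only gradually
```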
**G.Q2: Discuss with Sheared LLaMA.**
**A2:** Datasets for training LLMs are sampled from many domains, and the data mixture of the pretraining data greatly affects the effectiveness of LLMs [R1,R2,R3,R4,R5,R6]. We should emphasize that while the motivation of adjusting domain mixtures is similar for Sheared LLaMA and our DDK, the tasks addressed and the technical details differ substantially.
First, regarding the tasks addressed: Sheared LLaMA is proposed for structured pruning, while our DDK is proposed for knowledge distillation of LLMs. Therefore, our DDK is orthogonal to Sheared LLaMA, and we can further improve the performance of small models pruned by Sheared LLaMA. In the following table, following the setting of Table 2, we use LLaMA2-13B to distill the Sheared-LLaMA-1.3B model, which was pruned by Sheared LLaMA from LLaMA2-7B.
| Models | MMLU | RACE | W.G. | Arc-E | Arc-C | Avg. |
| ---------------------------- | ----- | ----- | ----- | ----- | ----- | ----- |
| Student (Sheared-LLaMA-1.3B) | 25.71 | 23.62 | 56.81 | 41.53 | 28.14 | 35.16 |
| +DDK | 28.25 | 27.43 | 58.12 | 44.92 | 31.85 | 38.14 |
Second, regarding the technical details: when pruning the LLaMA model series, Sheared LLaMA first needs to fit a scaling function using three open-sourced LLaMA models (i.e., LLaMA2-7B/13B/70B), so the fitting data points are very limited. The Sheared LLaMA authors themselves note that the estimated reference losses for different domains are biased because of the limited data points used to estimate the scaling-law parameters. Besides, in many real-world pruning scenarios, a series of models trained under the same setting is usually unavailable, so the domain losses of the small model cannot be predicted. In contrast, DDK directly uses the domain losses predicted by the teacher model as the reference losses without fitting a scaling function; these teacher-predicted losses provide accurate guidance for improving the student model. Moreover, Sheared LLaMA is designed to improve the training efficiency of the continued pre-training of the pruned model, without guidance from a teacher model. In addition, our DDK introduces the factor smooth updating strategy, which is not used in Sheared LLaMA and makes the changes to the domain mixture more stable. Finally, in the following table we replace our factor smooth updating strategy with the strategy of Sheared LLaMA and observe that DDK is substantially better, which further demonstrates the effectiveness of our factor smooth updating strategy.
| Models | MMLU | RACE | W.G. | Arc-E | Arc-C | Avg. |
| ------------------------------------------------------------ | ----- | ----- | ----- | ----- | ----- | ----- |
| Student+DDK(Qwen-1.5-1.8B) | 46.01 | 71.56 | 59.1 | 75.01 | 55.03 | 61.34 |
| Student+DDK using Sheared LLaMA updating strategy (Qwen-1.5-1.8B) | 45.03 | 70.16 | 57.09 | 73.98 | 53.18 | 59.88 |
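The stabilizing role of factor smooth updating can be illustrated with a small sketch; the EMA-style rule and the momentum value below are illustrative assumptions, not the exact DDK update.

```python
def smooth_update(old_factors, new_factors, momentum=0.9):
    """EMA-style update: keep most of the previous domain factors and move
    only a fraction toward the newly computed ones, so the effective domain
    mixture changes gradually instead of jumping between steps."""
    return {d: momentum * old_factors[d] + (1 - momentum) * new_factors[d]
            for d in old_factors}

# a large jump in the raw factors is damped into a small step
updated = smooth_update({"code": 0.5, "math": 0.5}, {"code": 1.0, "math": 0.0})
```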
[R1]. An empirical analysis of compute-optimal large language model training, https://arxiv.org/abs/2203.15556
[R2]. Data selection for language models via importance resampling, https://arxiv.org/abs/2302.03169
[R3]. Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning, https://arxiv.org/abs/2310.06694
[R4]. RegMix: Data Mixture as Regression for Language Model Pre-training, https://arxiv.org/abs/2407.01492
[R5]. Data Mixing Laws: Optimizing Data Mixtures by Predicting Language Modeling Performance, https://arxiv.org/abs/2403.16952
[R6]. DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining, https://arxiv.org/abs/2305.10429 | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models | Accept (poster) | Summary: For real-world images corrupted by multiple simultaneous degradations, this paper first analyzes the limitations of using all-in-one restoration models and various task-specific models. The authors then introduce RestoreAgent, which automatically identifies the types of degradation in a degraded image, determines the sequence of restoration tasks, and selects suitable models from the model pool. RestoreAgent presents an automated restoration pipeline that requires only an input image and a general human instruction, without any prior knowledge of the involved degradation tasks or manually predefined task sequences.
Strengths: 1. The paper comprehensively analyzes the challenges and limitations of employing all-in-one models and multiple task-specific expert models with fixed or random task sequences, as well as fixed or random models for each task.
2. The authors evaluate various configurations of RestoreAgent using diverse objective image quality metrics (PSNR, SSIM, LPIPS, DISTS, and their combinations), all of which outperform the human expert model on the corresponding metric.
3. RestoreAgent exhibits good scalability, extending to new tasks and models with minimal computational resources.
4. The presentation, including writing, analysis, and visualization, is clear and easy to follow.
Weaknesses: 1. Incomplete descriptions about data construction.
- Authors randomly select up to four types of degradation from a degradation set (noise, blur, JPEG, rain, haze, and low-light) to construct paired training data. According to data synthesis strategies in [1,2], JPEG compression is typically performed after noise and blur, and in the final order. Is the degradation order of JPEG compression in this paper the same? If not, the authors should discuss the reasonableness of random sampling.
- What are the components of 23k paired data? One degraded image for each high-quality image or many degraded versions for each high-quality image?
- What is the configuration in ablation studies about training data amount? Simultaneously scaling up low & high-quality images or synthesizing more low-quality images for each high-quality image? If it’s the former, will increasing the number of degraded images while keeping the number of high-quality images unchanged improve performance?
2. Inference time for input images with diverse resolution.
- The authors are suggested to report the running time for input images of various resolutions. This should include the total time, the running time for the RestoreAgent, and the running time for the subsequent restoration models. The reviewer is curious whether the agent's response time exceeds that of the restoration models when processing high-resolution images, such as those with 4K resolution.
3. Scalability for new tasks and models.
- Section 4.5 demonstrates that the proposed RestoreAgent can extend to new tasks and models in just half an hour, surpassing human expert-level performance on the new task. However, it is unclear whether adaptation to the new task results in performance degradation on prior tasks, similar to the catastrophic forgetting problem in continual learning. The authors are encouraged to report the performance of the fine-tuned model on the previous tasks to address this concern.
[1] Wang X, Xie L, Dong C, et al. Real-esrgan: Training real-world blind super-resolution with pure synthetic data[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2021: 1905-1914.
[2] Zhang K, Liang J, Van Gool L, et al. Designing a practical degradation model for deep blind image super-resolution[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 4791-4800.
Technical Quality: 3
Clarity: 4
Questions for Authors: Addressing concerns in the Weaknesses with thorough explanations and additional experiments would significantly enhance my confidence in this work. A satisfactory response to these points may lead to a reconsideration of the current evaluation.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The manuscript includes the checklist guidelines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `Q1`: **Degradation order of JPEG compression.**
Thank you for bringing up this important point regarding the degradation order. In our study, the order of JPEG compression is not fixed and is entirely random, unlike the sequence suggested in references [1,2]. The strategy of placing JPEG compression at the end, as mentioned by the reviewer, represents a subset of the data we constructed. Therefore, our dataset inherently includes this configuration while also supporting a broader range of degradation combinations.
Our primary objective is to validate the robustness and feasibility of our proposed pipeline. By incorporating a wider variety of degradation sequences, we can more effectively demonstrate the generalizability and effectiveness of our method across different scenarios. Moreover, the specific degradation order can be flexibly defined and adjusted based on user requirements, ensuring adaptability to various applications.
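The fully random sequencing described above can be sketched as follows; function and variable names are illustrative, not from the paper's code.

```python
import random

DEGRADATIONS = ["noise", "blur", "jpeg", "rain", "haze", "low-light"]

def sample_degradation_sequence(max_types=4, rng=None):
    """Draw 1..max_types distinct degradations in a fully random order,
    so JPEG-last sequences (as in Real-ESRGAN-style pipelines) arise as
    one case among many rather than being fixed."""
    rng = rng or random.Random()
    k = rng.randint(1, max_types)
    return rng.sample(DEGRADATIONS, k)  # sample() also randomizes the order

seq = sample_degradation_sequence(rng=random.Random(0))
```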
`Q2`: **What are the components of 23k paired data? One degraded image for each high-quality image or many degraded versions for each high-quality image?**
Our dataset comprises one degraded image for each high-quality image. This means that each image pair consists of a unique degraded image and its corresponding high-quality counterpart, ensuring diverse image backgrounds. Specifically, we utilized images from the DIV2K and Flickr8K datasets, as well as the RESIDE dataset to construct images containing haze. Additionally, the LOLv2 Real dataset was employed to create low-light images.
`Q3`: **What is the configuration in ablation studies about training data amount?**
As previously mentioned, each high-quality image in our dataset corresponds to a single degraded image. Therefore, when we scale up or down the training data, both low-quality and high-quality images increase or decrease simultaneously.
Increasing the number of degraded images while keeping the number of high-quality images unchanged would indeed improve performance. As discussed in our manuscript, image degradation is complex, and even minor variations can lead to different degradation characteristics, requiring varied handling strategies. Thus, adding more degraded images extends the range of degradations the model is exposed to, enhancing its ability to manage a broader spectrum of degradation scenarios.
`Q4`: **running time**
The RestoreAgent in our approach is constructed based on a multimodal large language model. Specifically, the computational time for the RestoreAgent is approximately 0.4 seconds. Therefore, the additional running time introduced by the RestoreAgent is minimal. In our experiments, the image patch size used is 512x512 pixels. In practice, the MLLM resizes all inputs to a fixed size, such as 224x224. For processing 4K resolution images, there are several strategies to consider:
(1) Single Patch Representation: Often, image degradation across different regions is uniform. In such cases, a single patch can represent the overall degradation of a 4K image.
(2) Patch-wise Processing: The image can be divided into smaller blocks, with each block represented by a patch.
(3) Resizing: The 4K image can be resized appropriately before making decisions using the multimodal model.
Given these strategies, decision-making for 4K images does not significantly increase the required time.
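Strategy (2) above can be sketched as a simple tiling routine; the non-overlapping grid and the clipping of border patches are illustrative choices.

```python
def to_patches(h, w, patch=512):
    """Top-left / bottom-right coordinates of a non-overlapping patch grid
    covering an h x w image; border patches are clipped to the image edge."""
    return [(top, left, min(top + patch, h), min(left + patch, w))
            for top in range(0, h, patch)
            for left in range(0, w, patch)]

boxes = to_patches(2160, 3840)  # a 4K UHD frame tiles into a 5 x 8 grid
```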
Regarding the running time of the restoration models, it is important to note that our pipeline utilizes restoration models based on user preferences. Consequently, we can select models of varying sizes and running times according to specific needs. Therefore, concerns about the running time of the restoration models may not be necessary as the flexibility in our pipeline allows for optimization and customization based on user requirements.
`Q5`: **it is unclear whether adaptation to the new task results in performance degradation on prior tasks, similar to the catastrophic forgetting problem in continual learning.**
Thank you for pointing out this important concern. As shown in Table 1, we report the performance of the fine-tuned model on previous tasks after adaptation. The changes in performance are minimal; we attribute these minor variations primarily to the **inherent randomness of the training process**, and we expect that careful tuning could enhance the model's performance further.
This demonstrates a crucial point: the current capabilities of multimodal large language models are more than sufficient to handle this task. Their robust reasoning abilities support the management of scenarios far more complex than those presented in our experiments.
We believe one reason for this is that the newly added degradation task does not conflict with the existing tasks; in some aspects, it is very similar. For example, in the combination of snow + noise + JPEG, the original model already has extensive knowledge of handling noise + JPEG, which can be easily applied to the new combination.
We are confident that in more complex scenarios (with more restoration tasks and more restoration models), RestoreAgent will demonstrate its significant advantages compared with human. In such complex scenarios, human decision-making will be less effective since the space is much bigger, whereas the powerful capabilities of learning-based multimodal large language models will exhibit even greater superiority.
| method | Average Ranking on All Previous Datasets | Ranking on the New Task |
|---|---|---|
| Human Expert | 19.5% | 21.2 / 64 |
| RestoreAgent | 12.9% | - |
| RestoreAgent + New Task | 13.1% | 4.5 / 64 |
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed responses. After considering the other reviews and the replies provided, I can confirm that the authors have addressed all my concerns about data construction, catastrophic forgetting, and running time. Additionally, the authors provided potential solutions for handling high-resolution inputs. Thus, I raised my final rating to "accept." I also encourage the authors to explore the incorporation of RLHF and human preference optimization in future updates. I believe that this agent-based paradigm has the potential to spark a new wave of research in the low-level vision community. | Summary: This paper proposes a new pipeline to address multiple degradation, like noise, blur and low light. Besides, a RestoreAgent with multimodal large language models is introduced to assess the type and extent of degradations in the input images and perform dynamic restorations.
Strengths: 1. The paper is well-written and well-organised.
2. The whole pipeline seems to be novel and reasonable.
3. The method achieves SOTA performance on several benchmarks and different degradation tasks.
Weaknesses: The overall motivation of this paper is commendable, but I have a few concerns:
1. The author mentions that RestoreAgent can autonomously assess the type and extent of degradation in input images and perform restoration. This strategy is interesting. However, I am wondering how the order of different enhancement techniques is defined. For example, if the input has noise and rain streaks, how is the order of dehazing and denoising techniques determined? Will this affect performance?
2. In contrast to other image enhancement techniques, the proposed RestoreAgent should first find a suitable restoration task and then select the most appropriate model to enhance the quality of the input. Therefore, I am concerned whether this process will increase the inference time. The authors should provide some computational analysis.
3. The enhancement capabilities of this work rely heavily on existing enhancement frameworks. If existing frameworks cannot work well in some cases, such as extreme noise effects, I guess the proposed RestoreAgent may also fail. Is this true? If so, I suggest the authors mention this in the limitations section.
4. The explanation of "ranking" and "balanced" in Table 1 is still unclear. The authors should clarify the definitions of these terms.
5. It would be better to show more visual comparisons of the RestoreAgent.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are not mentioned in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `Q1`: **How the order of different enhancement techniques is defined. For example, if the input has noise and rain streaks, how is the order of dehazing and denoising techniques determined? Will this affect performance?**
Thank you for highlighting this important aspect. The order of applying different enhancement techniques can significantly impact the restoration results, as illustrated in Figure 2 (b.1, b.2) of our manuscript.
**Impact of Degradation Order on Patterns**
When constructing training data, the sequence of applying degradations like noise, JPEG compression, and blur affects the resulting degradation patterns. For instance, adding noise before JPEG compression/blur alters the noise characteristics, making it smoother due to the compression/blur effects. Conversely, adding JPEG compression/blur before noise results in unaltered noise characteristics. This interplay between degradations creates distinct patterns based on their application order. The MLLM can identify different task sequences through degradation patterns.
**Influence of Restoration Order on Performance**
Similarly, the order of applying restoration techniques affects the degradation characteristics. For example, applying a deblurring model first can sharpen noise and rain streaks, altering their features. Conversely, applying a denoising model first can reduce noise but may impact rain streak features, making subsequent deraining less effective. An interesting case is when noise and rain streaks coexist: low noise levels may not impact deraining, but high noise levels can render deraining ineffective. Conversely, denoising first might alter rain streak features, affecting deraining performance.
Our proposed RestoreAgent, being a learning-based method, is trained on extensive data to recognize degradation patterns and make informed decisions based on subtle variations. It autonomously determines the optimal sequence of restoration steps to effectively remove degradations. This is achieved through a comprehensive understanding of how different degradations and restoration techniques interact.
`Q2`: **Inference time.**
RestoreAgent is built on an MLLM. In our experiments, the image patch size used is 512x512 pixels. In practice, the MLLM resizes all inputs to a fixed size, such as 224x224, and the inference time of RestoreAgent is approximately 0.4 seconds on an RTX 4090. The increase in inference time is very minimal.
`Q3`: **If existing frameworks cannot work well in some cases, such as extreme noise effects, I guess the proposed RestoreAgent may also fail. Is this true? If so, I suggest the authors mention this in the limitations section.**
We appreciate the reviewer's concern regarding the generalization capability of our model when faced with degradations outside the predefined scope. Here, we would like to further elaborate on this aspect.
First of all, it is true that there might be generalization issues when encountering unseen types or levels of degradation. We acknowledge this limitation and have discussed it in detail in our manuscript.
However, the primary and most significant contribution of our work is to demonstrate the feasibility and effectiveness of the proposed pipeline. The essence of our work lies in utilizing multimodal large models as agents for image restoration, which has shown promising results across various conditions. Our experiments show that the RestoreAgent performs exceptionally well in most complex scenarios, thus proving its viability.
In addition, our approach allows for flexible integration of different tools. If the model encounters degradations beyond the predefined scope, additional tools can be seamlessly incorporated. This adaptability ensures that our pipeline remains robust even when faced with unexpected degradation types.
Essentially, the generalization issue is more related to the specific image restoration models (tools) used within the pipeline, rather than the pipeline itself.
Lastly, one of our future research directions is to significantly expand the scope of restoration tasks and incorporate a greater variety of restoration models. By doing so, we aim to enhance the system's capability to handle real-world complex degradations more effectively.
This expansion will involve integrating models trained on a broader spectrum of degradation types and more generalizable restoration models, ensuring that the pipeline can generalize better to unseen conditions.
`Q4`: **The explanation of "ranking" and "balanced" in Table 1 is still unclear. The authors should clarify the definitions of these terms.**
We have added the following clarification to the paper:
**The explanation of "ranking":**
We have clarified the ranking system:
- **Individual test sets** (X/Y format): X is the average ranking and Y is the total number of pipeline combinations. For example, 6.4/64 means the method ranks 6.4th best on average out of 64 options.
- **Overall average**: we use percentages because different restoration tasks have different numbers of combinations. We first compute the percentage ranking for each test set, then average these values. For example, 12.9% means the method is in the top 12.9% on average across datasets.
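The two-step percentage-ranking computation can be sketched as follows (function names are illustrative):

```python
def percentage_ranking(rank, total):
    """Convert an average rank out of `total` combinations to a percentage."""
    return 100.0 * rank / total

def average_percentage_ranking(per_set):
    """per_set: list of (average_rank, total_combinations), one per test set.
    Converting to percentages first lets sets with different combination
    counts be averaged on a common scale."""
    return sum(percentage_ranking(r, t) for r, t in per_set) / len(per_set)

# e.g. 6.4/64 and 12.8/128 both map to 10%, so they average cleanly
avg = average_percentage_ranking([(6.4, 64), (4.5, 64), (12.8, 128)])
```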
**The explanation of "balanced":**
We have also provided a detailed explanation of "balanced" in Section 4.1 of our manuscript, specifically within the Scoring Function subsection. Balanced refers to the sum of the PSNR, SSIM, LPIPS, and DISTS metrics after each has been standardized. To ensure clarity, we have also added additional explanatory notes directly in Table 1. We hope these changes address your concerns.
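A minimal sketch of such a balanced score follows, assuming LPIPS and DISTS are negated after z-scoring because lower values are better for those metrics; that sign convention, and all names below, are our assumptions rather than the paper's exact normalization.

```python
import statistics

HIGHER_BETTER = {"psnr", "ssim"}  # lpips / dists: lower is better (assumed)

def balanced_scores(results):
    """results: one dict {metric: value} per candidate pipeline.
    Standardize each metric across candidates (z-score), flip the sign of
    lower-is-better metrics, and sum into a single balanced score."""
    metrics = list(results[0])
    stats = {}
    for m in metrics:
        vals = [r[m] for r in results]
        stats[m] = (statistics.mean(vals), statistics.pstdev(vals))
    scores = []
    for r in results:
        s = 0.0
        for m in metrics:
            mean, std = stats[m]
            z = (r[m] - mean) / std if std else 0.0
            s += z if m in HIGHER_BETTER else -z
        scores.append(s)
    return scores

scores = balanced_scores([
    {"psnr": 30.0, "ssim": 0.90, "lpips": 0.20, "dists": 0.10},
    {"psnr": 28.0, "ssim": 0.80, "lpips": 0.30, "dists": 0.20},
])
```

Standardizing before summing keeps any one metric's scale from dominating the combined score.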
`Q5`: **Show more visual comparisons.**
Thank you for your valuable feedback. We have updated our manuscript to include additional visual comparisons of the RestoreAgent. These enhancements provide a clearer demonstration of the effectiveness of our proposed method. Please refer to the updated **PDF file** we have uploaded, which contains these new visual results.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed response. After reviewing all the explanations and the provided visual results, I can confirm that all of my concerns have been fully addressed. | Summary: This paper presents an image restoration pipeline designed to handle various degradation types and levels by leveraging MLLM’s capabilities to select the appropriate model and determine the execution order. It begins with an analysis of why execution order and utilizing multiple models for different degradation levels are crucial for restoring complexly degraded images. The paper then constructs an instruction dataset and fine-tunes the MLLM. Experimental results demonstrate the effectiveness of the proposed restoration pipeline.
Strengths: 1. This work presents a compelling analysis of complex image restoration. This insight is valuable given that degraded images in real-world scenarios often involve multiple types of degradation.
2. This approach leverages the strengths of different models for handling specific noise levels, thereby eliminating the trade-off between generalization and performance.
3. This paper formally defines the problem of handling multiple degradations and model selection in image restoration.
4. Extensive experiments demonstrate the superiority of such a pipeline in processing degraded images with multiple degradations.
Weaknesses: 1. In the introduction, it would be helpful to explain how the multimodal large language model (MLLM) excels at understanding different types and levels of image degradation. This will show why the MLLM is well-suited for handling complex combinations of image degradation. Providing this clarity will make the benefits of using an MLLM for image restoration more evident.
2. When incorporating a new type of degradation, the cost extends beyond merely training the MLLM. Please also discuss the process of constructing training data for the newly added degradation and how it integrates with previously trained data.
3. In lines 211-212, please clarify what the mean and standard deviation are calculated over. The subscript "i" is already used for degradation type and it might be clearer to use another character.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. What if the degradation of the input image falls outside the predefined degradation scope? This could present a generalization issue, as the model might not perform well on unseen types or levels of degradation not covered in the predefined scope. Please discuss it.
2. In Table 2, it would be clearer to highlight the best method for each evaluation criterion. Additionally, please specify which methods the ranking improvement is compared against for better context.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `Q1`: **In the introduction, it would be helpful to explain how the MLLM excels at understanding different types and levels of image degradation.**
Thank you for your valuable suggestion. We will incorporate the following explanation into our introduction to clarify how MLLMs excel at understanding and handling different types and levels of image degradation.
MLLMs are well-equipped to handle various types and levels of image degradation due to their extensive training on diverse datasets. This training enables them to develop a deep understanding of the relationships between different modalities, which is crucial for identifying and addressing complex degradation patterns.
MLLMs excel in image restoration due to their robust reasoning abilities and advanced pattern recognition skills. They can analyze and interpret intricate details of various degradations, discerning subtle variations in noise, blur, compression artifacts, and other forms of image deterioration. This capability allows MLLMs to make precise decisions on applying the most appropriate restoration techniques, even when faced with complex combinations of degradation effects. For instance, when encountering an image with mixed noise, blur, and compression artifacts, an MLLM can accurately determine the optimal sequence and type of restoration techniques to apply.
Furthermore, the adaptability of MLLMs allows them to quickly learn and optimize for new degradation scenarios through fine-tuning. This flexibility is essential for real-world applications where the types and combinations of degradations can vary widely. By continually updating their knowledge base, MLLMs can maintain high performance across a broad spectrum of image restoration tasks.
`Q2`: **Please also discuss the process of constructing training data for the newly added degradation and how it integrates with previously trained data.**
The process of constructing training data for new degradations follows the same methodology as for existing ones. For instance, if we were to add a new degradation type such as snow, we would randomly combine it with other existing degradations like noise and JPEG.
Specifically, we experiment with different permutations and combinations of these degradations and find the best one to generate diverse training data.
Once the new training data is generated, it is directly added to the original training dataset. This combined dataset is then used to fine-tune the model.
Adding a new task requires a relatively small amount of data and has a quick training time. It allows for continuous improvement of the model without the need for complete retraining from scratch.
We have updated our manuscript to include a more detailed discussion of this process.
`Q3`: **In lines 211-212, please clarify what the mean and standard deviation are calculated over. The subscript "i" is already used for degradation type and it might be clearer to use another character.**
We have revised lines 211-212. The mean and standard deviation are calculated separately for each metric. Specifically, for each metric we have results from all possible permutations and combinations of restoration pipelines, and we compute the mean and variance over all these data points for that metric.
As you correctly pointed out, using the subscript "i" for both degradation type and in the summation could lead to confusion. We have updated our notation to use different characters for clarity.
`Q4`: **What if the degradation of the input image falls outside the predefined degradation scope?**
We appreciate the reviewer's concern regarding the generalization capability of our model when faced with degradations outside the predefined scope. Here, we would like to further elaborate on this aspect.
First of all, it is true that there might be generalization issues when encountering unseen types or levels of degradation. We acknowledge this limitation and have discussed it in detail in our manuscript.
However, the primary and most significant contribution of our work is to demonstrate the feasibility and effectiveness of the proposed pipeline. The essence of our work lies in utilizing multimodal large models as agents for image restoration, which has shown promising results across various conditions. Our experiments show that the RestoreAgent performs exceptionally well in most complex scenarios, thus proving its viability.
In addition, our approach allows for flexible integration of different tools. If the model encounters degradations beyond the predefined scope, additional tools can be seamlessly incorporated. This adaptability ensures that our pipeline remains robust even when faced with unexpected degradation types.
Essentially, the generalization issue is more related to the specific image restoration models (tools) used within the pipeline, rather than the pipeline itself.
Lastly, one of our future research directions is to significantly expand the scope of restoration tasks and incorporate a greater variety of restoration models. By doing so, we aim to enhance the system's capability to handle real-world complex degradations more effectively.
This expansion will involve integrating models trained on a broader spectrum of degradation types and more generalizable restoration models, ensuring that the pipeline can generalize better to unseen conditions.
`Q5`: **Specify Table 2**
Thank you for your suggestion to improve the clarity of Table 2. We have updated Table 2 to highlight the best performing method for each evaluation criterion. We also have added a clear note to Table 2 specifying that the ranking improvement is compared against human experts.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response; my concerns were well addressed.
Strengths: 1. This paper represents a innovation and a good contribution in image restoration and potentially opens up a new research direction for this area.
2. The motivation is strong. The authors effectively demonstrate the importance of task execution order and model selection in multi-task scenarios. The designed system adeptly addresses these issues.
3. Experimental results indicate that RestoreAgent's decision-making capabilities in handling complex degradations surpass those of human experts. This kind of pipeline also surpass all-in-one models.
4. The paper is generally well written and clear to understand.
Weaknesses: 1. The paper constructs a training dataset for training the multimodal large language model and a testing dataset as a benchmark for evaluating performance across multiple tasks. More details and explanations regarding the construction methods of these datasets would be beneficial.
2. Table 1 presents performance rankings using both ordinal and percentage forms. The definitions and explanations for these ranking forms are somewhat lacking, which might require readers to spend extra time understanding them. Clearer explanations would facilitate better comprehension.
3. The proposed Autonomous Restoration Agent represents a novel paradigm that is likely to encounter numerous new challenges. Beyond the issues already mentioned in the paper, the authors could consider discussing additional limitations and future research directions for this paradigm. This would help future researchers better follow and improve upon this work.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The current method appears to predict all execution steps at once for a given input image. In Figure 3, each image has a dashed line pointing to the input. Does this imply that after each execution, the result can be fed back as input? (Based on my understanding, this system supports this) The paper seems to lack analysis and experiments related to this aspect. Could the authors provide more details on this part?
2. The authors have proposed a testing dataset to evaluate multi-task processing capabilities. Will this dataset be made publicly available to facilitate further research by other researchers?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `Q1`. **More details regarding the construction methods of these datasets**
Thank you for your feedback. We have significantly expanded the relevant sections in the revised version of our paper to offer a much more comprehensive explanation of our data preparation process.
Regarding the training dataset, the following is part of the newly added detailed content:
Construction of the training pairs begins with applying various types of degradation to an image. Subsequently, we determine the optimal restoration pipeline using model tools for processing. This involves generating all possible permutations of task execution sequences and model combinations, applying each pipeline to the degraded image, and assessing the quality of the restored outputs using a scoring function. The models for the different restoration tasks are shown in **Table 1**.
**Table 1: Model tools for different restoration tasks.**
| Task | Model Tools |
|---|---|
| Gaussian denoising | Restormer (trained on small / medium / large noise level) |
| DeJPEG | Restormer (trained on low / high quality factor) |
| Dehazing | RIDCP |
| Deraining | Restormer |
| Motion deblurring | DeblurGANv2 |
| Low-light enhancement | Retinexformer |
**Table 2** illustrates 5 scenarios incorporated into our dataset, designed to enhance the versatility and robustness of the RestoreAgent model:
**Table 2: Five scenarios for dataset construction and corresponding examples.**
| Index | Input | Answer | Function |
|---|---|---|---|
| 1 (Primary) | How to enhance the quality of this image? Execution history: None. | 1. Denoising, low noise level, 2. Dehazing, 3. DeJPEG, high quality factor. | Initiate full enhancement sequences for degraded images. |
| 2 | How to enhance the quality of this image? Execution history: 1. Denoising, low noise level. | 1. Denoising, low noise level, 2. Dehazing, 3. DeJPEG, high quality factor | Dynamically adjust strategies based on intermediate results. |
| 3 | How to enhance the quality of this image? Execution history: 1. Denoising, low noise level, 2. Dehazing | Rollback. | Identify and correct suboptimal steps through rollback mechanisms. |
| 4 | How to enhance the quality of this image? Execution history: 1. Denoising, low noise level. Rollback from Dehazing. | 1. Denoising, low noise level, 2. DeJPEG, high quality factor, 3. Dehazing. | Avoid repetition of ineffective procedures post-rollback. |
| 5 | How to enhance the quality of this image? Execution history: 1. Denoising, low noise level, 2. DeJPEG, high quality factor, 3. Dehazing. | Stop. | Recognize when image quality has reached its optimal state. |
As for the testing dataset used as a benchmark, the details are presented in **Table 3**, which demonstrates our construction of various combinations of degradation types. Each image in the set contains a minimum of one and a maximum of four types of degradation, with the entire set comprising 200 images.
**Table 3: Testset details.**
| Degradation | # Images |
|---|---|
| Noise + JPEG | 50 |
| Noise + Low light | 30 |
| Motion Blur + Noise + JPEG | 30 |
| Rain + Noise + JPEG | 20 |
| Haze + Noise + JPEG | 30 |
| Haze + Rain + Noise + JPEG | 20 |
| Motion Blur + Rain + Noise + JPEG | 20 |
| Total | 200 |
---
`Q2`. **Explanations of "rankings" in Table 1**
We've clarified the ranking system:
For individual test sets:
X/Y format: X is the average ranking, Y is total combinations.
Example: 6.4/64 means average 6.4th best out of 64 options.
For overall average:
We use percentages due to varying combination counts of different restoration tasks.
Process: Calculate percentage ranking per set, then average.
Example: 12.9% means top 12.9% on average across datasets.
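This two-step computation (per-set percentage ranking, then averaging) can be sketched in a few lines; the (rank, total) pairs below are made-up numbers for illustration, not results from the paper.

```python
def percentage_ranking(per_set_rankings):
    """Average the per-set percentage rankings (average rank / total combinations)."""
    percents = [rank / total for rank, total in per_set_rankings]
    return sum(percents) / len(percents)

# e.g., three test sets with (average rank, total pipeline combinations)
sets = [(6.4, 64), (3.2, 16), (12.8, 128)]
print(f"{percentage_ranking(sets) * 100:.1f}%")  # → 13.3%
```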
---
`Q3`. **Additional limitations and future research directions**
The primary limitation of our study is the confined scope of models and tasks examined. While our research offers valuable insights into RestoreAgent’s performance across several degradation scenarios, it does not encompass the full spectrum of restoration models or image degradation tasks currently available.
Another limitation pertains to the limited generalization capability of current image restoration models. These models often exhibit a notable decrease in performance or fail to respond adequately when faced with even minor variations in image degradation patterns. This limitation greatly narrows our selection of model tools, requiring us to choose more robust and generalizable model tools. The challenge underscores a critical need in the field of image restoration: future models must go beyond simply overfitting training data.
Our future work will focus on significantly expanding the range of image restoration models incorporated into our multimodal large language model. This expansion aims to enhance RestoreAgent’s capabilities across a broader scope of restoration tasks and degradation types. By integrating a more diverse set of state-of-the-art models, we seek to create a more comprehensive and versatile restoration framework.
---
`Q4`. **Step-wise Re-planning and Rollback.**
You are correct in your understanding. In the revised version of our paper, we have added a detailed discussion on this feature, which we call "Step-wise Re-planning and Rollback."
In the answer to Q1, we have actually added some details (indexes 2, 3, and 4 of **Table 2**).
**Table 4** shows experiments on a complex dataset with four image degradation types. While single prediction performs well, iterative step-wise replanning offers modest improvements. This indicates strong initial performance, with replanning serving as a refinement tool for incremental enhancements.
**Table 4: Step-wise planning.**
| | balanced | ranking /64 |
|---|---|---|
| Human Expert | 5.42 | 21.2 |
| RestoreAgent | 6.35 | 5.7 |
| RestoreAgent + Step-wise | **6.38**| **4.5** |
---
`Q5`. **Will this test set be made publicly available?**
Yes, we will.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their efforts and replies. The authors' rebuttal addressed my questions about implementation details. It's refreshing to tackle image restoration with the Agent paradigm, and I'm sure this work can inspire the community a lot! Overall, this work is quite good and I wish to see the follow-up work. | Rebuttal 1:
Rebuttal: Dear AC and all reviewers,
We sincerely appreciate your time and efforts in reviewing our paper. We are glad to find that reviewers recognized the following merits of our work:
- **Innovative contribution and strong motivation [DJ9b, NhBR, Ux72, 4Dbc]**:
The proposed RestoreAgent addresses the challenges of image restoration by effectively demonstrating the importance of task execution order and model selection in multi-task scenarios. This novel approach opens up new research directions in the field.
- **Impressive performance [DJ9b, NhBR, Ux72, 4Dbc]**: Experimental results indicate that RestoreAgent's decision-making capabilities in handling complex degradations surpass those of human experts and existing all-in-one models.
- **Well-written and clear [DJ9b, Ux72, 4Dbc]**: The paper is generally well written and clear to understand, with comprehensive analysis and visualization.
We also thank all reviewers for their insightful and constructive suggestions, which help further improve our paper. In addition to the pointwise responses below, we summarize the major revision in the rebuttal according to the reviewers’ suggestions:
- **Detailed data construction process [DJ9b, 4Dbc]**:
We have added detailed explanations on the construction methods for the training and testing datasets.
- **Discussion on Limitations and Future Directions [DJ9b, NhBR, Ux72]**:
We have expanded the discussion on the potential challenges and future research directions, including the integration of new degradation types, generalization to unseen degradations, and the limitations of existing frameworks.
- **Manuscript update [DJ9b, Ux72, 4Dbc, NhBR]**: We have included more visual comparisons, inference time, experimental results, and discussions in the main paper and Appendix. Furthermore, we have clarified descriptions that were previously unclear.
We hope our pointwise responses below can clarify all reviewers' confusion and address the raised concerns. We thank all reviewers' efforts and time again.
Best,
Authors
Pdf: /pdf/b36cf10e276de7c005fe32022389033bdc1f01a3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FedGMark: Certifiably Robust Watermarking for Federated Graph Learning | Accept (poster) | Summary: This paper investigated the problem of watermarking the Federated Graph Learning (FGL) models. This paper proposed the first backdoor-based FGL watermarking framework, called FedGMark. Specifically, to tackle the issues of ineffectiveness and vulnerability of existing methods, FedGMark designed two modules respectively. One is a Customized Watermark Generator (CWG). CWG aimed to generate the watermarked trigger samples (graphs) using each client's secret key. The other is the Robust Model Loader (RML). RML guaranteed that the watermarked models were certifiably robust against layer perturbation attacks.
Strengths: - The first attempt to watermark federated graph learning models.
- The watermarked models are certifiably robust against attacks.
- Experiments on various datasets and models validate the effectiveness of FedGMark.
Weaknesses: My major concerns are as follows.
1. Unclear threat model: The threat model and the problem formulation of this paper are unclear. What are the capabilities of the adversary and the defender? And, more importantly, who is the adversary that steals the FGL model? This paper proposes to watermark the FGL model from the client side, which means the clients should be trustworthy. Is the central server an adversary in this paper? To the best of my knowledge, the typical threat model of various attacks in FL (e.g., backdoor attacks or Byzantine attacks) assumes that some of the clients may be malicious. The authors should add a section on the threat model or problem formulation and clarify why they make these assumptions. This may help readers better understand the problem the authors tried to solve.
2. Privacy concern: I also worry that utilizing FedGMark may raise privacy concerns. In Section 3.4, the watermarked client needs to use a subset of its training graphs as the watermarked graphs. However, in FL, the client's graphs are privacy-sensitive, and using them to verify ownership may lead to privacy leakage. This is contrary to the original purpose (preserving privacy) of FL.
3. Missing experiments on the robustness against backdoor defense: This paper considers three different watermark removal attacks. However, since FedGMark utilizes backdoor-based watermarking methods, it is important to validate whether FedGMark is robust against backdoor defenses.
4. Missing introduction to ownership verification: This paper lacks an important section to introduce the ownership verification procedure of FedGMark.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Clarify the threat model.
2. Address the privacy concern.
3. Analysis or experiments on the robustness against backdoor defenses.
4. Clarify the procedure of ownership verification in FedGMark.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: This paper does not include a discussion of the limitations. However, I think there is a strong assumption that the clients need to be trustworthy in FedGMark. A discussion on this assumption is necessitated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the reviewer for appreciating the novelty of the studied problem and the proposed certified robust watermarking scheme against attacks.**
**W1: Clearly define the Threat Model and Problem; Who is the adversary to steal the FedGL model); Are clients and central server an adversary? Make assumptions clear**
Thanks for the suggestion! See Response to Comment#1 in the global rebuttal.
**W2: Privacy concerns raised by FedGMark**
**Response:** FedGMark does not pose additional data privacy concerns. Like classic FL, all (watermarked) clients *locally* process their watermark data and train the model, and then submit the trained model to the server. Hence, the server cannot access the private data.
**W3: Test backdoor defenses based attacks against FedGMark**
See Response to Comment#4 in the global rebuttal.
**W4: Introduce the "ownership verification" procedure of FedGMark**
Thanks for the suggestion! See Response to Comment#2 in the global rebuttal.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I still have two questions on W1 and W2.
- **About W1**: I think the assumption that all the clients are benign may be too strong. It is acceptable to assume that the clients (including the adversary) follow the training protocol to get a well-trained model. However, during the ownership verification, some clients may be offline and cannot provide the watermark samples. It is also possible for malicious clients to provide fake watermark samples. The discussion of this issue may help improve the soundness of this work.
- **About W2**: In practice, the model owner needs to send the watermark samples to a third-party judge or the adversary to calculate the outputs and verify the ownership. In this case, I think using a subset of the client's training graphs as the watermark samples may lead to privacy leakage. Existing client-side watermarking methods (e.g., [1]) tend to utilize noise-based watermark samples which are not privacy-sensitive. Can FedGMark address this issue?
[1] Watermarking in Secure Federated Learning: A Verification Framework Based on Client-Side Backdooring. TIST 2023.
---
Reply to Comment 1.1.1:
Title: Response by Authors
Comment: **Thanks for the great comments! Below we provide more clarifications and justifications.**
**Response to Comment#1:** Good point! We deem that our ownership verification is still robust against offline clients and malicious clients (if their number is less than 50%).
During ownership verification, each client provides its own watermark data to the trusted judge. When some clients are offline, the trusted judge can simply neglect them and only use participating clients’ watermark data for verification.
When facing malicious clients, their negative effect can be circumvented through a majority voting-based approach. Specifically, all clients provide their own watermark data to the judge and obtain the watermark accuracy per client. Though the watermark accuracy on malicious clients could be very low, the majority of benign clients will produce more high watermark-accuracy results than low ones. When the judge uses the majority-vote strategy, the final watermark accuracy is still high, ensuring an accurate ownership claim for benign clients.
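The majority-vote strategy just described can be sketched minimally as follows; the accuracy threshold and the per-client accuracies are illustrative assumptions, not values from the paper.

```python
def majority_vote_verify(per_client_wa, threshold=0.5):
    """Claim ownership if a majority of clients' watermark accuracies pass the threshold.

    threshold is an illustrative choice, not the paper's setting.
    """
    votes = [wa >= threshold for wa in per_client_wa]
    return sum(votes) > len(votes) / 2

# three benign clients plus two malicious ones reporting fake (low) accuracy
accs = [0.90, 0.85, 0.88, 0.10, 0.05]
print(majority_vote_verify(accs))  # → True
```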
**Response to Comment#2:** We first clarify that watermark data are not necessarily generated from the training/test samples. Remember the primary goal of backdoor-based watermarking is to force the model to memorize the relationship between the backdoor trigger (in the watermark samples) and the target label, while the samples into which the trigger is injected have no constraints, i.e., they can be from training samples or artificially synthesized (which does not contain privacy information of any training/test data). For convenience, existing methods inject backdoor triggers into the training/test samples (including Ref[1]). To validate this, we synthesize a set of random graphs using the popular Erdős–Rényi model (via the NetworkX toolbox) and the watermark samples are generated by injecting the learnt watermark into the synthesized graphs. Under the default setting, we test on Fed-GIN and show results below, where we observe WAs are very close to those shown in the paper on the four datasets.
|Watermark |MUTAG | PROTEINS | DD | COLLAB |
|:------------|--------------:|:-------------:|:-------------:|:-------------:|
| on train/test graphs | 0.81 / 0.90 | 0.72 / 0.86 | 0.73 / 0.65 | 0.73 / 0.75 |
| on synthesized graphs| 0.80 / 0.88 | 0.71 / 0.84 | 0.72 / 0.64 | 0.72 / 0.73 |
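The synthesis step above is done with NetworkX's `nx.erdos_renyi_graph(n, p)`; a stdlib-only equivalent is sketched below for illustration (the graph size and edge probability are arbitrary choices, not the paper's settings).

```python
import random

def erdos_renyi_graph(n, p, seed=None):
    """Return an edge list in which each of the n*(n-1)/2 possible edges appears with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

# synthesize a non-private carrier graph; a learnt structural watermark
# would then be injected into such graphs to form watermark samples
edges = erdos_renyi_graph(n=20, p=0.2, seed=0)
```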
Furthermore, since all clients intend to verify model ownership, it is reasonable to believe that these clients are willing to provide their watermark data—whether generated from private training/test data or non-private synthesized data—exclusively to a trusted judge, with informed consent and in accordance with legal and ethical standards. From this perspective, the data is confidential between each client and the trusted judge. We acknowledge it is very interesting future work to design a provably private mechanism for model ownership verification that the verifier cannot access the watermark data but can guarantee the correctness of verification. | Summary: This manuscript introduces FedGMark, a backdoor-based watermarking method specifically designed to protect Federated Graph Learning (FedGL) models from illegal copying and model theft. They claim that the proposed FedGMark is the first method to safeguard the intellectual property of FedGL models, offering certified robustness against watermark removal attacks, leveraging unique graph structures and client information to create customized and diverse watermarks. Experiments demonstrate its effectiveness and robustness.
Strengths: The paper introduces FedGMark to address the overlooked vulnerability of FedGL model ownership and identifies three main challenges in current watermarking techniques: inapplicability to graph data, vulnerability to removal attacks, and lack of formal guarantees. The proposed method, including CWG and RML, is clear and intuitive, and the authors have provided comprehensive experiments to support their approach.
Weaknesses: 1. I strongly recommend setting a "Threat Model" subsection to clarify the potential security threats to FedGL. In my opinion, since the authors consider watermark removal attacks like distillation and finetuning, FedGL operates under a white-box setting.
2. The paper assumes attackers know the internal information of the target watermarked model, enabling distillation, finetuning, and layer-perturbation attacks. However, I find the white-box setting narrow and trivial. The authors should consider black-box attacks, which are more challenging and meaningful. Many studies on black-box attacks can be found.
3. In watermarking-related literature, robustness and fidelity are more frequently used terms than watermark accuracy and task accuracy.
4. In the "Inapplicable or Ineffective" item, the authors state, "For instance, they require input data to have the same size, while graphs can have varying sizes," which is not entirely accurate. For example, some Wavelet and DCT-based watermarking methods can be scalable.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to Weaknesses part
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to Weaknesses part
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the reviewer for appreciating the intuition and motivation of the proposed solution and the comprehensive evaluations to support the solution.**
**W1: Clearly define the "Threat Model"**
Thanks for the suggestion! See Response to Comment#1 in the global rebuttal.
**W2: Consider black-box attacks**
We emphasize this is a defense paper aiming to design a robust watermarking scheme for FedGL. As many lessons have been learned, and to avoid a false sense of security [Carlini et al. 2019], an effective defense should be tested against the strongest white-box attacks. This is because a defense that is effective against (impractical) weaker attacks is not necessarily effective against stronger attacks. To this end, we assume the attacker knows all the information about the target watermarked model, and *this setting actually makes our defense design the most challenging*. Moreover, if the defense can successfully defend against the strongest white-box attack on the watermarked FedGL model, it is naturally effective against any weaker attacks (thus including black-box attacks).
Nicholas Carlini et al., On evaluating adversarial robustness. arXiv, 2019.
**W3: Replace watermark accuracy and main task accuracy with robustness and fidelity.**
Thank you for the suggestion. We will change the terms.
**W4: Statement on “... while graphs can have varying sizes” is not entirely accurate**
We agree that certain watermarking methods (such as wavelet-based ones) can handle fixed-size inputs. However, our statement concerns general graph datasets, whose graph sizes vary.
---
Rebuttal Comment 1.1:
Title: Comment by Authors
Comment: Dear Reviewer,
As the interaction period is drawing to a close, we would like to kindly inquire whether our rebuttal has satisfactorily addressed all of your comments. Please let us know if further clarifications are needed.
Best,
Authors
---
Reply to Comment 1.1.1:
Title: Comment by Authors
Comment: Dear Reviewer,
We greatly appreciate your time and effort in reviewing our paper and providing constructive comments. We have dedicated significant efforts to addressing all of your feedback to the best of our abilities. As the interactive discussion period is drawing to a close, we are concerned about whether we have fully addressed your comments, as we have not received your feedback on our response. We would be grateful if you could confirm whether our responses meet your expectations or if more clarifications are needed within the limited discussion time.
Thank you once again for your valuable input.
Best,
Authors | Summary: This paper addresses the problem of protecting model ownership in the emerging domain of Federated Graph Learning (FedGL) by proposing FedGMark, a backdoor-based watermarking technique. The authors argue that existing watermarking approaches are either inapplicable to graph data or exhibit weaknesses in terms of robustness against removal attacks and lack of formal guarantees. FedGMark aims to overcome these limitations by leveraging graph structure and client information to learn customized watermarks, employing a novel graph learning (GL) architecture that enhances robustness, and providing certified robustness guarantees against layer-perturbation attacks.
Strengths: - The paper clearly outlines the limitations of existing watermarking techniques and presents a well-motivated approach to address them. The design of FedGMark, with its CWG and RML modules, is tailored to the specific challenges of watermarking in FedGL.
- FedGMark demonstrates promising empirical performance in terms of both main task accuracy and watermark accuracy. It outperforms the baseline approach (random graph-based watermarking) significantly, especially under watermark removal attacks.
- The paper provides theoretical guarantees for the robustness of FedGMark against layer-perturbation attacks, a unique and valuable contribution in the watermarking literature.
Weaknesses: 1. The reliance on pre-defined private keys for watermark generation may not be practical in all scenarios, and alternative key management methods should be explored.
2. The assumption of limited attacker knowledge about the watermarked model may not hold in practice. Evaluating FedGMark against more knowledgeable adversaries would provide a more realistic assessment.
3. The focus on FedAvg for model aggregation limits the exploration of other aggregation methods and their impact on watermark robustness.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you quantify the communication overhead of FedGMark during federated training, especially compared to random graph-based watermarking (in terms of local training time, size of watermarked data, etc.)?
2. How do you envision FedGMark being deployed in a real-world FedGL system? What practical challenges might arise during implementation and watermark verification?
3. How would the certified robustness guarantees be affected by more advanced watermark removal attacks beyond layer perturbation (e.g., those involving trigger reverse engineering)?
4. How would the effectiveness of FedGMark be affected if the attacker had more knowledge about the watermarking process, such as access to the CWG architecture or the private key generation method?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. FedGMark's evaluation focuses solely on FedAvg for aggregating client models. The impact of alternative aggregation methods (e.g., those prioritizing clients based on data quality or model performance) on both watermark robustness and overall FedGL model performance remains unexplored.
2. The paper acknowledges the increased computational cost of using more submodels (S) in RML but doesn't fully analyze the scalability of FedGMark. Further investigation is needed to understand how performance scales with different numbers of clients.
3. FedGMark relies heavily on structural modifications of the graph as the watermark. The effectiveness and robustness of alternative trigger designs, such as feature-based triggers, hybrid triggers, or combinations of different trigger types, have not been explored.
4. The paper lacks specific details about the hyperparameters used for training the GL models on the client-side. The impact of client training dynamics, particularly the choice of learning rate and the number of local epochs, on the watermarking performance and robustness of FedGMark remains unclear and requires further investigation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the reviewer for appreciating the well-motivated approach, promising performance, and robustness guarantees.**
**W1: Alternative key management methods**
We clarify the predefined key is used by the Watermark Generator to know which local watermark is learnt for which client. It is like an identifier of the client and only needed and known by the client’s watermark generator to generate the customized watermark. This is different from the role of the key management methods in crypto. To avoid confusion, we will not use the term “private key”, but “client ID”.
**W2 (Q4): Against more knowledgeable adversaries.**
As suggested, we test the attacker that has access to CWG to manipulate the FedGL training. We assume some clients are malicious and test two attacks. First, we consider a *passive* attack where all malicious clients DO NOT use CWG to generate customized local watermarks. Model (b) in Table 5 shows the *maximum* WA decrease is 9\%, where *all* clients do not use CWG.
Second, we test an *active* attack where malicious clients modify their watermark data’s label to obfuscate the training. Specifically, all malicious clients’ watermark data are labeled (e.g., 2) differently from the target label (e.g., 1). The results below show MA/WA is marginally affected even with 20% malicious clients.
||0%|10%|20%|30%|40%|
|-|-|-|-|-|-|
|MU|.81/.90|.80/.90|.80/.89|.80/.80|.80/.71|
|PR|.72/.86|.72/.85|.72/.76|.71/.67|.70/.67|
|DD|.73/.65|.72/.63|.72/.58|.71/.53|.71/.52|
|CO|.73/.75|.74/.74|.73/.68|.73/.61|.72/.60|
**W3 (L1): Other aggregation methods**
We test M-Krum [Blanchard et al 18] and T-mean [Yin et al 19], which consider data quality (e.g., by removing outlier clients). M-Krum filters out a fraction $p$ of clients whose gradients deviate largely from the others’, while T-mean trims off a fraction $q$ of the highest and lowest values for each parameter in the clients’ models. We set $p=10\%$ and $q=10\%$, and the MA/WA results are as follows. We see these aggregators achieve a robustness-utility tradeoff, and MA and WA do not differ largely.
||Avg|M-Krum|T-mean|
|-|-|-|-|
|MU|.81/.90|.78/.92|.75/.93|
|PR|.72/.86|.73/.85|.70/.87|
|DD|.73/.65|.72/.63|.70/.65|
|CO|.73/.75|.73/.74|.71/.77|
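For reference, the per-parameter T-mean rule described above can be sketched as follows; the fraction `q=0.2` and the client values are illustrative, not the experiment's settings.

```python
import statistics

def trimmed_mean(values, q=0.1):
    """Drop the fraction q of the lowest and highest values, then average the rest."""
    k = int(len(values) * q)
    kept = sorted(values)[k:len(values) - k] if k else sorted(values)
    return statistics.fmean(kept)

# one parameter's values across six clients; two are outlier updates
client_params = [0.9, 1.0, 1.1, 1.05, 5.0, -3.0]
print(trimmed_mean(client_params, q=0.2))
```

In an actual aggregator this would be applied independently to every model parameter across clients, so the outliers never dominate the global update.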
**Q1(L2): Scalability**
See Response to Comment#3 in the global rebuttal.
**Q2: (1) Deployment of FedGMark; (2) Challenges**
**(1) Deployment**: *Stage I: Server-client model for FedGMark training*
- Setup: Server and clients make an agreement on, e.g., the GL model, the aggregator, and server initializes the global model.
- Local training: Clients download the global model, define their watermark data, locally train their watermarked GL model (with the CWG and RML module) using the global model, and upload the updated local model to the server;
- Server aggregation: Server aggregates local watermarked models of selected clients to update the global watermarked model.
The final global watermarked model is shared by all clients for legal use.
*Stage II: Ownership verification.* See Response to Comment#2 in the global rebuttal.
**(2) Challenges:** We consider security and privacy threats, and data quality.
- **Security**: How to guarantee all clients and the server are benign or detect / remove malicious ones? How to mitigate more advanced attacks?
- **Privacy**: Though FL methods do not access the data, they may still leak data privacy in practice. How to provably protect the data privacy, while keeping the utility?
- **Data quality**: Low quality data negatively affects the utility. This could reduce the interest for clients with high-quality data to participate in the FL. How to ensure all data have high quality?
**Q3: Attacks use trigger reverse engineering**
See Response to Comment#4 in the global rebuttal.
**L3: Alternative triggers design**
**Response:** We add some details to adjust FedGMark to the suggested alternative triggers.
To learn feature-based triggers, we first select a set of nodes from a graph as the target nodes, and learn the watermarked features for the target nodes. We use a graph $G^i = (V^i, E^i, X^i) $ from client $i$ for illustration, where $X^i$ is the node feature matrix. We then define a feature-mask $M_f^i[v_j]=1$ if $v_j \in V_w^i$ and 0 otherwise, where $V_w^i$ is the watermark node set described in the paper. Then, we introduce a feature network (FeaNet) that learns watermarked node features as $X_w^i = FeaNet(X^i) \odot M_f^i$. The FeaNet takes input $X^i$ and outputs a matrix having the same size as $X^i$, e.g., it has the same architecture as GatingNet but adjusts the input size. The corresponding watermarked graph is defined as $G_w^i=(V^i, E^i, X_w^i)$. By generating a set of watermarked graphs {$G_w^i$} for client $i$, we minimize the loss on client $i$’s both clean graphs {$ G_c^i$} and {$G_w^i$}. More details about training refer to Section 3.4.
Further, to learn feature-structure triggers, we combine FeaNet (which produces $X_w^i$) with GatingNet/KeyNet (which produces $E_w^i$), and the watermarked graphs are $G_w^i=(V^i, E_w^i, X_w^i)$. We then minimize the loss on {$G_c^i$} and {$G_w^i$}.
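For illustration, here is a minimal numpy sketch of the masked feature-trigger construction $X_w^i = FeaNet(X^i) \odot M_f^i$; the tanh map standing in for FeaNet and all sizes are our own illustrative choices, not the implementation in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, feat_dim = 6, 4
X = rng.normal(size=(num_nodes, feat_dim))   # node feature matrix X^i

# Watermark node set V_w^i: feature mask M_f^i is 1 on these rows, 0 elsewhere.
watermark_nodes = [1, 3]
M_f = np.zeros((num_nodes, 1))
M_f[watermark_nodes] = 1.0

# Stand-in for FeaNet: any learnable map from X to a same-shaped matrix.
W = rng.normal(size=(feat_dim, feat_dim))
def fea_net(X):
    return np.tanh(X @ W)

# X_w^i = FeaNet(X^i) elementwise-multiplied by the mask: learned features
# appear on watermark nodes only, all other rows are zeroed out.
X_w = fea_net(X) * M_f

assert X_w.shape == X.shape
assert np.allclose(X_w[0], 0.0)      # non-watermark node is masked out
assert not np.allclose(X_w[1], 0.0)  # watermark node carries learned features
```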
We evaluate these triggers and the results (MA/WA) are as follows. We observe that structure information alone is sufficient for designing effective triggers.
||f| s| f-s|
-|-|-|-
MU|.81/.78|.81/.90|.79/.92
PR|.72/.77|.72/.86|.73/.87
DD|.72/.53|.73/.65|.74/.66
CO|.73/.67|.73/.75|.72/.76
**L4: Performance on hyperpara.**
By default, we set the learning rate (lr) to 0.01 and the \#local epochs (le) to 5. Results with additional lr and le values are below:
||lr=.01 (le=5)|lr=.05|lr=.1|le=5 (lr=.01)|le=10|le=20
-|-|-|-|-|-|-
MU|.81/.90|.84/.89|.79/.90|.81/.90|.84/.9|.76/.92
PR|.72/.86|.71/.79|.71/.73|.72/.86|.72/.87|.72/.89
DD|.73/.65|.71/.57|.70/.53|.73/.65|.73/.66|.73/.69
CO|.73/.75|.72/.71|.70/.65|.73/.75|.74/.75|.73/.77
We see that a large lr may reduce WA, while WA slightly increases as le grows, indicating that more thorough training improves our method’s performance.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. I am satisfied with the authors' responses and explanations. I will raise my rating to 6 and look forward to seeing the final version of the paper.
---
Reply to Comment 1.1.1:
Title: Response by Authors
Comment: Thanks for raising the score! We are happy that our response has addressed all your concerns. We promise to include all the above results, clarifications, and discussions in the next version. | Summary: This work studies watermarking for federated graph learning (FGL) to protect the ownership of participants. It proposes a customized watermark generator for local clients that can capture the local graph structure and private client information, and a robust model loader consisting of multiple GL submodels and a majority-voting-based ensemble classifier, which can defend against the proposed layer-perturbation attack.
Strengths: 1. This work claims to be the first to study watermarking for FGL models.
2. The method can leverage local graph and client information to generate customized watermarks.
3. The paper introduces a layer-perturbation attack to further demonstrate the certifiably robustness of the proposed backdoor-based watermarking for FGL.
4. The work is well-motivated with preliminary studies.
Weaknesses: 1. The concept of ownership in FGL can be confusing and is not well-defined in this paper. For example, can every client claim ownership of the federated trained model? Since the watermarks from different clients are different, can any single client claim entire ownership? Additionally, for clients who participate in the FL but do not have watermarks, how can they claim ownership?
2. The motivation for using local customized watermarks is not clear. The following problems arise: (1) It is unclear how to conduct ownership verification. Should it use the global watermark or the local watermarks? (2) If using a global watermark, what is the necessity of employing customized watermarks, or what is the adequate way to aggregate the global watermark from customized watermarks? If using local watermarks, how can the customized watermarks be used across clients?
3. The method requires specific GL models (to be split to multiple submodels), which can be hard to adapt to existing FGL methods, especially for advanced FGL methods.
4. The motivation for incorporating submodels for GL is missing. Why is this design necessary?
5. (1) What does “layer indexes” for splitting GL models mean? From section 3.3, it is not clear how the submodels are split and how the split submodels are decoupled from each other regarding cascaded structures. (2) Additionally, structural information can be important for graph learning. How would discarding such structural information impact in this setting?
6. The global model is obtained by simply averaging uploaded clients’ models (not weighted by data size, or applying proxy terms for regularization). Can this method address the potential heterogeneity issue when local watermarks are highly disparate from each other?
7. The proposed method can introduce efficiency issues, as it significantly increases the number of parameters and computation time.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. When the set of selected clients for aggregation is different from the set of watermarked clients, can the method achieve stable convergence?
2. Is the layer-perturbation attack applied before or after submodel splitting? If it is applied after, does it perturb all submodels or not?
3. Out of curiosity, is it possible to federated learn the local watermarking? How do you expect this would perform?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please see Weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the reviewer for appreciating the motivation and novelty of this work (first to study robust watermarking for FedGL models).**
**W1: Clarify the concept of ownership in FedGL**
In typical FL, a server and multiple clients collaboratively train a global model stored in the server, which is used by all clients for their tasks. Accordingly, in our ownership verification problem in FedGL, *all* clients design their own watermark data and *collaboratively train the watermarked global model*, which is for *joint ownership by all participating clients*.
Further, since all clients have devoted computation and data to the training, they have a strong incentive to jointly protect their ownership of the model. Hence, we do not consider the case where clients did NOT participate in watermark training but still claim ownership of the model (in practice, such clients would not even know how to do so).
**W2: (1) Global or local watermarks for ownership verification; (2) Motivation of developing local customized watermarks**
(1) We use the global watermark (an integration of the local watermarks) for ownership verification, in line with the fact that the watermarked global model is collaboratively learnt from all clients with their local watermarks. Figure 6 in the Appendix also shows that the global watermark is more effective than the local watermarks.
(2) Learning local customized watermarks utilizes the unique properties of each client: different clients can have different properties (e.g., data distributions), so their optimal watermarks can differ. The learnt customized watermarks also enhance ownership verification performance; for instance, Table 5 shows that watermark accuracy improves by 8% with local customized watermarks.
**W3: Require split specific GL models to multiple submodels**
Sorry for the confusion! In our design, we do not split the existing GL model into multiple submodels. Instead, we can use *any* GL model (e.g., a 3-layer GIN) as a submodel, and each client’s GL model is an ensemble of a set of submodels (more details in the Response to **W5**). Hence, our approach can be easily adapted to any FedGL method, with any aggregator or base GL model. For instance, our experiments on Fed-GIN use the average aggregator and GIN as the submodel.
**W4: Why incorporate submodels for GL**
Our goal is to ensure the *provable robustness* of our watermarked FedGL model against layer-perturbation attacks. Recall that we propose a majority-voting based ensemble classifier on the submodels of our GL. Based on this, we can guarantee the ownership verification is provably accurate, when the \#perturbed layers on GL models satisfies Eqn (1) in Thm 1.
**W5: (1) What does “layer indexes” mean? (2) Structural information is important.**
(1) A GL model contains multiple layers. E.g., an 8-layer GIN can be represented with layer indexes {$l_0, \ldots, l_7$}. Splitting this GIN into 4 submodels $\{GIN_1, \cdots, GIN_4\}$ with layer indexes {$l_0, l_1$} … {$l_6, l_7$} means $GIN_i$ contains layers $l_{2(i-1)}$ and $l_{2i-1}$ from the GIN. *However, submodels split in this way are coupled with each other, making them unable to defend against layer-perturbation attacks*. To tackle this problem, we *design a novel GL model that is an ensemble of a set of independent submodels*, where each submodel is a base GL model, e.g., GIN or GCN.
(2) Yes, it is. All our submodels are GL models that take the whole graph as input, and hence retain the graph structure information.
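To illustrate the ensemble design, here is a minimal sketch of majority voting over the labels predicted by independent submodels (the tie-breaking convention is our own illustrative choice, and the submodels are stubbed as lists of predicted labels):

```python
from collections import Counter

def ensemble_predict(submodel_preds):
    """Majority vote over integer labels predicted by independent submodels.

    An attacker perturbing layers inside a few submodels can flip at most
    those submodels' votes, which is what underlies the certified bound.
    """
    votes = Counter(submodel_preds)
    # Break ties deterministically in favor of the smaller label.
    best = max(votes.items(), key=lambda kv: (kv[1], -kv[0]))
    return best[0]

# 4 submodels vote on a graph's label; one perturbed submodel flips its vote,
# but the majority still returns the target label.
assert ensemble_predict([1, 1, 1, 0]) == 1
# With 2 of 4 votes flipped, the tie goes to the smaller label here.
assert ensemble_predict([1, 1, 0, 0]) == 0
```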
**W6: Can this method address the potential heterogeneity issue?**
Good question! Per the Response to **W2**, local watermarks are learnt by considering the unique properties of each client. Such unique properties may include the heterogeneity across clients’ data. To validate this, we test our method with non-IID graphs across clients, where each client holds data from only a single label. The MA/WA results are below:
|Dataset |MUTAG | PROTEINS | DD | COLLAB |
|:------------|--------------:|:-------------:|:-------------:|:-------------:|
| paper results | 0.81 / 0.90 | 0.72 / 0.86 | 0.73 / 0.65 | 0.73 / 0.75 |
| non-IID| 0.80 / 0.89 | 0.72 / 0.83 | 0.72 / 0.63 | 0.72 / 0.75 |
We can see FedGMark also performs well with non-IID datasets. This implies the learnt customized watermarks indeed capture the heterogeneity of clients’ graphs.
**W7: Efficiency issues**
See Response to Comment#3 in the global rebuttal.
**Q1: Selected clients for aggregation different from watermarked clients**
In our experiments, the server randomly selects a fraction (e.g., 50\%) of clients for aggregation in each training round. Also, all clients participating in model ownership verification hold watermarked data. Hence, the case where the set of clients selected for aggregation differs from the set of watermarked clients does not arise.
**Q2: Layer-perturbation attack before or after submodel splitting?**
It is before submodel splitting (ensemble).
**Q3: Learning the local watermarks in a federated manner?**
In our method, local watermarks are learnt using the global model (see Step 2 in Section 3.4), which is trained in a *federated* manner from all clients’ local models. From this point of view, the local watermarks are also federatedly learnt by all clients.
---
Rebuttal Comment 1.1:
Title: Comment by Authors
Comment: Dear Reviewer,
As the interaction period is drawing to a close, we would like to kindly inquire whether our rebuttal has satisfactorily addressed all of your comments. Please let us know if further clarifications are needed.
Best,
Authors
---
Reply to Comment 1.1.1:
Title: Comment by Authors
Comment: Dear Reviewer,
We greatly appreciate your time and effort in reviewing our paper and providing constructive comments. We have dedicated significant efforts to addressing all of your feedback to the best of our abilities.
As the interactive discussion period is drawing to a close, we are concerned about whether we have fully addressed your comments, as we have not received your feedback on our response. We would be grateful if you could confirm whether our responses meet your expectations or if more clarifications are needed within the limited time.
Thank you once again for your valuable input.
Best,
Authors | Rebuttal 1:
Rebuttal: **We thank all reviewers for their constructive comments! We first summarize the global response to the common comments raised by the reviewers and then reply to individual reviewers’ comments.**
**Comment#1:Threat Model (djtK-W1 and yEMY-Q1)**
**Response:** Thanks for the suggestion! We add more details about our motivation, assumptions, threat model, and problem definition.
**Motivation, Adversary and Assumptions:** Watermarking is designed to safeguard **well-trained models** against threats like illegal copying and unauthorized distribution. An adversary could be, e.g.,
- a business competitor seeking to replicate a model to gain competitive advantages by significantly reducing development costs
- a malicious user who sells the model for profits
- a cybercriminal who uses the stolen model for malicious purposes such as conducting large-scale spam campaigns
In the paper, we follow existing methods [Shafieinejad et al., 21, Bansal et al., 22, Xu et al. 23, Jiang et al., 23], where the adversary is assumed to *know all details of the pretrained watermarked FedGL model*, but does NOT tamper with the training process. *This means all clients and the server are benign and follow the federated training protocol*, and the attack happens at the testing/inference time. We highlight this is in stark contrast to the *training-time* Byzantine attack on FL where some clients are malicious and they manipulate the training process.
**Threat model:**
- *Attacker’s knowledge:* The attacker has white-box access to the pretrained watermarked FedGL model. In addition, the attacker may also know some clean (unlabeled or labeled) training data, as well as watermarked data.
- *Attacker’s capability:* The attacker can modify the pretrained model by leveraging its white-box access to the trained model and the training and watermarked data it holds. For instance, the attacker can finetune the pretrained model on the labeled training data. More details on the capabilities of the considered attacks are described in Sec 2.3.
- *Attacker’s goal:* To remove the watermark based on its knowledge and capability, while maintaining the model utility. This allows it to illegally use the model without detection.
**Problem definition:** As the defender, we focus on protecting the ownership of the trained FedGL model from the aforementioned security threats. In particular, we aim to build a *certifiably robust* watermarking scheme for FedGL against the worst-case layer-perturbation attack, such that the learnt watermarked FedGL model can achieve two goals:
- High task accuracy (fidelity): predict correct labels for as many clean test graphs as possible
- High (certified) watermark accuracy (robustness): (provably) predict the target label for as many watermarked test graphs as possible
**Comment#2: Ownership Verification (ZEKF-Q2 and yEMY-W4)**
**Response:** As FedGMark uses backdoor-based watermarking, its ownership verification procedure is similar to that of standard backdoor-based watermarking (Lines 35-40). Specifically, when suspecting that the target FedGMark model is illegally used by others, the model owner (all the participating clients or their representative) can recruit a trusted judge for ownership verification. Typically, the judge requests both the true model owner and the suspected illegal party to provide test data for verification, and confirms ownership only for the party that can correctly anticipate the target model's predictions on the test data provided by both parties. In particular, besides the clean data (on which the model behaves normally) provided by both parties, the true model owner additionally provides the designed watermarked data, on which only the owner knows how the model behaves. As a result, both parties can predict the model's outputs on clean data, but the illegal party can hardly predict them accurately on the watermarked data provided by the true owner.
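As a toy illustration of this verification protocol, the following sketch (entirely our own; the threshold, names, and trigger representation are hypothetical) checks whether a suspect model reproduces the secret watermark behavior:

```python
def watermark_accuracy(predict, watermarked_inputs, target_label):
    """Fraction of watermarked inputs on which the suspect model outputs
    the secret target label."""
    hits = sum(predict(x) == target_label for x in watermarked_inputs)
    return hits / len(watermarked_inputs)

def verify_ownership(predict, watermarked_inputs, target_label, tau=0.8):
    """Confirm ownership if watermark accuracy clears a threshold tau."""
    return watermark_accuracy(predict, watermarked_inputs, target_label) >= tau

# Toy suspect model that learned the backdoor: predicts label 1 whenever
# the (hypothetical) trigger feature is present.
predict = lambda x: 1 if x.get("trigger") else 0
wm = [{"trigger": True}] * 9 + [{"trigger": False}]
assert verify_ownership(predict, wm, target_label=1)          # 0.9 >= 0.8
assert not verify_ownership(lambda x: 0, wm, target_label=1)  # no backdoor
```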
**Comment#3: Scalability of FedGMark (Y9Y7-W7 and ZEKF-Q1)**
**Response:** Compared with graph-based or non-robust watermarking methods, the computation overhead of FedGMark mainly comes from the introduced submodels (fixing all other parameters, such as \#clients and \#iterations, to be the same). In particular, the overhead scales linearly with the number of submodels $S$. Here we copy the runtime results from Table 9 in the Appendix for reference, where $S=1$ is the runtime of the existing method.
|Runtime(s)|MUTAG|PROTEINS|DD|COLLAB|
|-|-|-|-|-|
|S=1|0.13|0.72|2.99|37.65|
|S=4|0.46|2.79|11.14|161.46|
|S=8|0.74|5.10|20.12|296.68|
|S=16|1.32|9.49|36.74|563.51|
We believe the computation overhead is not an issue in practice, compared with the importance of designing a provably robust watermark that aids the ownership verification of the FedGL model. Note that as all clients have devoted computation and data to the training, they have a strong intention to jointly protect their ownership of the FedGL model.
**Comment#4: More watermark removal attacks (ZEKF-Q3 and yEMY-W3)**
**Response:** Many existing works [Wang et al 20, Zhang et al. 21] show that trigger-reverse-based backdoor detection/removal is vulnerable to “stealthy” backdoors. This is because the effectiveness of trigger-reverse attacks largely depends on the statistical differences between clean and backdoored data. Since we are not aware of any trigger-reverse attack for graph backdoors, we instead propose to quantitatively test the structural similarity between the generated watermarked graphs and the clean graphs. Here we use the metrics NetSim and DeltaCon proposed in [Wills and Meyer ’20], which range over [0,1], with higher values indicating larger similarity. The results are below:
Dataset|MUTAG|PROTEINS|DD|COLLAB
-|-|-|-|-
NetSim|0.97|0.98|0.99|0.99
DeltaCon|0.98|0.98|0.99|0.99
We observe that the watermarked graphs and their clean counterparts are structurally very close. This implies that the proposed watermarks are hard to detect or remove. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Combining Statistical Depth and Fermat Distance for Uncertainty Quantification | Accept (poster) | Summary: This paper introduces a new method for Out-of-Distribution detection based on the concepts of Lens Depth and Fermat distance. This method is used to see whether a sample has a similar representation in the penultimate layer of a Neural Network as the samples in the training data. The method is subjected to various tests of Out-of-Distribution detection and is shown to be on-par or exceeding alternative methods. However, the proposed method does not intrude on the training process of the model, and therefore cannot have a negative impact on the classification performance. Alternative methods assume a Gaussian Distribution in the hidden representation, but the use of (a modification of) Lens Depth allows estimating the “similarity” of the sample without assuming a certain distribution.
Strengths: - The application of Femat Distance and Lens Depth introduces mathematical concepts that are not common knowledge and not obvious to a Machine Learning audience. The application of these methods in OoD detection is new (originality)
- Previous literature is well cited, and the mathematical concepts are clearly and intuitively introduced, with clearly stated relevance (clarity)
The claims made follow naturally from the evidence and are not overstated. The evaluation is in line with common practice in the field of OoD detection (quality)
- The paper is well written and consistently builds a clear argumentation (clarity)
- Mathematical concepts are introduced with both formalism, and an intuitive explanation (clarity).
- The proposed method is competitive with other methods, and is minimally invasive to the training process. This could be helpful when then training process is outside of the control, for example for large pre-trained models (significance)
Weaknesses: - Small claims are not entirely accurate. Line 4 says there are “no assumptions” about the form of the distribution, but there are only minimal assumptions (see question 3). Line 262 claims that the proposed measure is a good measure of “uncertainty estimation”, but it’s only evaluated for OoD detection, so it may be wildly over/underconfident and behave poorly on aleatoric uncertainty. Line 323 conjectures that OoD detection may ensure fairness, but I see no reason why. Line 5 claims that the proposed method is applicable to any classification model, but the performance is only tested for Neural Networks (quality/clarity)
- The explanation of Lens Depth may be made more intuitive with a visualisation to support Lines 94-99 (clarity)
- Presented results are not substantially better than previous methods. Authors argue that the main benefit is that the proposed method is minimally invasive to the training process, but the authors do not make a strong case on why this is necessary (significance)
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How computationally expensive is LD after the improvements discussed in Section 4.5? Is it substantially faster/slower to do inference than e.g. DDU?
2. In Figure 4.2 you show that the LD still works with 200 samples to claim that the method also works for small datasets. At what dataset size does the method start to fail, and how catastrophic is that? A plot like Figure 4.2.B with decreasing sizes of the dataset may give this insight.
3. Consider Figure D.1. What if two of the “blobs” belong to cluster A and the last to cluster B, so that there are two classes (C=2) but in three clusters. Would LD then still behave as desired? If LD then gives undesirable results, wouldn’t you say that there is at least some assumption about the shape of the distributions?
4. How would the model perform if the two moons have more spread, to the point that the two classes might touch/overlap? Is there “uncertainty” between the two classes? I understand this is not the point of OOD-detection, but it can be a point of UQ. This might be a ‘limitation’ worth mentioning. LD is good at OOD-detection, but not for the general task of uncertainty estimation. Specifically, Line 262 says that LD is a good measure for uncertainty estimation, but only OOD-detection and being monotonically decreasing with accuracy are demonstrated. Estimating heteroscedastic aleatoric uncertainty and uncertainty calibration are not tested, but are properties of good uncertainty estimation. On Line 264 “uncertainty quantification” is said, while OOD-detection is investigated, though I think they are not exactly the same.
5. In Figures 5.2b-5.2d the accuracy seems to plateau. Do the authors have any suggestions on what might be causing this, and how this might impact applications using LD?
6. One important use case I’d consider for minimally invading the training process is OoD detection with pre-trained models. Can you elaborate on whether this would be a good use case for your method? If it is, consider stating this in the paper as well, to argue clearly for why minimally invasive OoD detection is desirable.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors claim that their method works on all classification models, and without any assumptions on the distribution of the data. However, this is missing evidence. Authors only demonstrate effectiveness in Neural Networks on Computer Vision data. While it is true that the method may be applied to other models and other data, more research is needed to establish its effectiveness there. Other limitations are demonstrated and addressed. The positive conclusions are appropriately based on the findings and are not over-optimistic.
The authors discuss the high computational cost and demonstrate methods to make it more efficient, but it’s not clear what the remaining computational cost is.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, thank you for your time and for this review. Here are our answers.
### Q1. How computationally expensive is LD after the improvements? faster/slower to do inference than e.g. DDU?
The computational expense depends on the number n of points in the proposed reduced LD and the number of classes C. The complexity is of order O(Cn^2). At inference, it is slower than DDU, as DDU uses a Gaussian assumption and an empirical covariance estimate, so it relies on simple, standard matrix linear algebra that is well supported by standard Deep Learning packages.
With our method, we use non-standard operations, making it slower. We believe there is much room for improvement using more native implementations, more parallelization, etc., especially when computing shortest paths for LD. But we did not consider this a priority.
The objective of our paper is mainly to introduce an approach that is fairly new to the Deep Learning community.
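For intuition, here is a minimal (unoptimized) sketch of the combination: all-pairs sample Fermat distances via Floyd-Warshall, then the empirical lens depth of a query point. All parameter choices and data are illustrative, not from the paper:

```python
import numpy as np
from itertools import combinations

def fermat_distances(points, beta=2.0):
    """All-pairs sample Fermat distance: shortest paths through the data
    cloud with edge cost ||x_a - x_b||^beta (Floyd-Warshall, O(n^3))."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diff, axis=-1) ** beta
    for k in range(len(points)):
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

def lens_depth(d, idx):
    """Empirical lens depth of point `idx`: the fraction of pairs (i, j)
    whose 'lens' contains it, i.e. max(d[idx,i], d[idx,j]) <= d[i,j]."""
    others = [i for i in range(len(d)) if i != idx]
    pairs = list(combinations(others, 2))
    inside = sum(max(d[idx, i], d[idx, j]) <= d[i, j] for i, j in pairs)
    return inside / len(pairs)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(30, 2))
pts = np.vstack([cloud, [[0.0, 0.0]], [[8.0, 8.0]]])  # central vs far query
d = fermat_distances(pts, beta=2.0)
central, far = lens_depth(d, 30), lens_depth(d, 31)
assert central > far  # the central point gets higher depth than the outlier
```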
### Q2. At what dataset size does the method start to fail?
To answer your question, we have added an experiment on the spiral dataset where we use only a certain percentage of the original data until LD fails to capture the original distribution; please see the figure in the pdf file attached to the global rebuttal. Note that we randomly sample a small portion of the original points, so the sampled points can be concentrated in a small region instead of being distributed along the spiral, and may therefore not represent the original distribution very well. With that said, it is not surprising that $LD$ fails to capture the original support at around $5$-$6\%$ of the original size. We hope the figure gives you more insight into our method in the small-data regime. Finally, thank you for your recommendation; we will add this to the appendix.
### Q3. Consider Figure D.1. What if two of the “blobs” belong to cluster A and the last to cluster B, so that there are two classes (C=2) but in three clusters. Would LD then still behave as desired? If LD then gives undesirable results, wouldn’t you say that there is at least some assumption about the shape of the distributions?
This is a very interesting remark. In this example, we intentionally make 3 clusters very far away from each other to see the effect of our method. In the extreme scenario that you propose, there is a class with 2 clusters.
One could argue that in such a case, these 2 clusters should not be too distinct. This is because the main model is trained to classify well, so semantically similar inputs should be close to each other, leading to a fairly connected cluster for each class. Hence, the "bad" effect could exist but should be very minimal. In general, however, we agree that the cluster of each class should be sufficiently connected to obtain an ideal result. We will add this to our discussion.
### Q4. How would the model perform if the two moons have more spread?
In the case you are mentioning, we argue this is not a case of OoD uncertainty but a case of decision uncertainty. For this, other metrics such as predictive entropy should be a good candidate as it's related to uncertainty in the decision.
OoD detection, on the other hand, deals with data scarcity. The aim of LD is to measure out-of-domain uncertainty, which arises in zones with no (or very little) data. Since the model is not trained on these zones, we would prefer that it abstain from predicting there, as it can behave very erratically due to the scarcity of training data. This is unlike the case where the two moons have more spread (and even overlap), in which we have enough data in the zone between the 2 classes.
Finally, we agree with your remark and it's worth mentioning in the discussion.
### Q5. In Figures 5.2b-5.2d the accuracy seems to plateau
We think the plateau corresponds to "difficult" regions where InD and OOD data are less distinguishable, so the rejected data could contain both InD and OOD samples.
Such a phenomenon is not specific to LD and appears for other methods as well (see DUQ, for example).
### Q6. One important use case I’d consider for minimally invading the training process is OoD detection with pre-trained models:
Yes! This is indeed a point we have in mind, as SOTA models often become too large to retrain ourselves. But perhaps we did not emphasize this enough; it will be added to the introduction.
Thank you for your recommendation. Indeed, this is a good use case: we have no idea about the distribution of the data, and we want to keep the original model intact to make sure its performance on the main task is not impacted.
Kind regards,
The authors
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional insights and additions. The method is indeed promising, even if computationally expensive.
The proposed work is interesting and promising.
I thank the authors for their submission and the following discussion. | Summary: The paper presents a non-parametric approach to out-of-distribution (OOD) detection. Given a trained neural network classifier, it is proposed to combine the Lens Depth (LD) with the Fermat distance (in an improved form) to capture the geometry and density of the data in feature space. Without assuming any prior distribution, the paper classifies OOD samples for toys and small scale benchmarks.
Strengths: - The combination of the Lens Depth with the sample Fermat distance for the out-of-distribution problem is a solid and interesting contribution.
- The paper is well written and easy to follow. In general, the approach is clearly described.
- The results on small scale experiments are convincing.
- The approach presented does not include the training process of the model.
Weaknesses: - An extension of the related work to include papers on OOD would be necessary for the content of the paper.
- An additional evaluation metric would be helpful, e.g. FPR-95, ECE. This point should be addressed.
- A large-scale evaluation, e.g. ImageNet, is also missing. This is the main limitation of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the reason for not performing the ImageNet evaluation, given that it is quite common in the topic?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper has a broader impact statement to discuss the idea of robust decision making.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, thank you for your kind review. Here are our answers.
### Weaknesses
>1. Related work to include papers on OOD:
Thank you for your recommendation. We will add more references on OOD in related work.
> 2. An additional evaluation metric:
We do appreciate your kind recommendation, which is legitimate.
However, our choice of sticking to AUROC is almost forced upon us because single-forward UQ methods like DDU and DUQ, unfortunately, do not report metrics such as FPR-95.
Furthermore, the references we found such as [1] (and papers it cites) deal with ECE as a metric for OOD generalization but not OOD UQ (Uncertainty Quantification)
Thus, it is difficult for us to include a fair comparison with these metrics.
[1] Wald, Yoav, et al. "On calibration and out-of-domain generalization." Advances in neural information processing systems 34 (2021): 2215-2227.
> 3. Large scale evaluation.
See next question.
### Question:
This very legitimate question has been raised by another reviewer, and we reproduce the answer for your convenience.
Regarding the complexity of evaluation, we need a trained model on InD of size N, and for every OoD example, the complexity is O(C N^2). Even if this is reasonable for a single example, which is relevant for applications, it can indeed become quite large for a full-scale evaluation with ImageNet as InD. We estimate that would require approximately 60 hours on our hardware for a single run, which was not possible for this rebuttal, especially since multiple runs are needed.
We did run CIFAR100 / Tiny-ImageNet over 5 independent runs, as it required only approximately 10 hours of experiments, and the C=100 setting hopefully shows that the method scales well to larger datasets. For a fair comparison, we use the Wide-ResNet-28-10 model and the same training scheme as in the DDU paper to train models on CIFAR100. We see that the performance of our method is better than or on par with strong baseline methods.
### Table. AUROC with CIFAR100 as InD data and Tiny-ImageNet as OOD data. Results of other methods are taken from the DDU paper, which used the same Wide-ResNet-28-10 model.
| Method | AUROC |
|-|-|
|LD (ours)|$0.8310 \pm 0.0013$|
|Softmax Entropy|$0.8153 \pm 0.0005$|
|Energy-based|$0.8133 \pm 0.0006$|
|SNGP|$0.7885 \pm 0.0004$|
|DDU|$0.8313 \pm 0.0006$|
|5-Ensemble|$0.8295\pm 0.0009$|
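As an aside, the AUROC used throughout reduces to a Mann-Whitney rank statistic on per-example uncertainty scores. A minimal numpy sketch (illustrative only; the function and variable names are ours, not the evaluation code behind the table):

```python
import numpy as np

def auroc(scores_in, scores_out):
    # Probability that a random OOD example receives a higher uncertainty
    # score than a random InD example (Mann-Whitney U statistic; assumes
    # distinct scores -- no tie handling in this toy version).
    s = np.concatenate([scores_in, scores_out])
    ranks = np.argsort(np.argsort(s)) + 1.0
    n_in, n_out = len(scores_in), len(scores_out)
    u = ranks[n_in:].sum() - n_out * (n_out + 1) / 2
    return u / (n_in * n_out)
```

A perfect detector scores 1.0; chance level is 0.5.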
### Conclusion
Given our answers, we sincerely hope you will consider raising your score.
Kind regards,
The authors
---
Rebuttal Comment 1.1:
Comment: The revision has addressed most of my and other reviewers' points. I would still like to see more metrics in the evaluation protocol. For example, FPR-95 is quite useful for understanding how the proposed approach works. However, I am also in favour of the paper given the positive scores.
---
Rebuttal 2:
Title: Any comments?
Comment: Dear TY6a. The end of the discussion period is close. I would be grateful if you could provide feedback regarding the authors' answers to your review. | Summary: This paper proposes a new method for OOD detection/scoring based on the lens depth and Fermat distance, arguing that it has advantages over prior methods by being non-parametric, non-invasive, (almost) tuning-parameter-free, and quite effective in adapting to the unknown structure of the data to identify OOD points.
Strengths: 1. Subject matter is important
2. I found the paper really easy and fun to read.
3. Section 4.2 is a nice, simple, and practical modification—very natural and clearly successful!
4. Both the Lens Depth and Fermat Distance are nice, intuitive notions, and it is natural and fun to think about their combination!
5. I raise a number of conceptual issues below, but at the end of the day the demonstration of the method on standard data sets, comparing it to state-of-the-art methods, is fairly compelling, hence my high score.
Weaknesses: 1. LD is interesting and intuitive but what happens when the data falls into two disjoint clusters? Then won’t LD (with basically any distance I can think of, including Fermat distance) consider points in between those two clusters to be extremely central, despite the fact that, since they lie in neither distribution, they could reasonably be considered very OOD? Related: it seems the FD is infinite (whenever \beta>0) between two points separated by a region of zero density, suggesting that the sample version will be highly unstable in this setting, as it is should not converge at all but instead diverge to infinity. I see this is addressed in 4.4 by computing sample FD separately per cluster, but how were the clusters computed? Clustering is no trivial task, and given that things go wrong without clustering, I imagine S(x) in eq (4.2) depends rather heavily on the clustering. This (seems to me important) aspect of the proposed method seems underexplored/underexplained in the paper.
2. How does the convergence of the sample FD to the population FD depend on dimension? It’s a bit hard to believe it doesn’t suffer from some sort of curse of dimensionality, since it depends on a density and density estimation very much suffers from the curse of dimensionality. It seems many of the nice demonstrations of it in this paper occur in 2 dimensions (with the data lying nearly on a set of dimension 1), which doesn’t seem very representative of NN feature spaces.
3. Claim of “no trainable parameter” in the abstract is rather misleading, given the need for choosing both \alpha (ok there is a case made that maybe this isn’t too important) and the clustering.
4. Lit review is well-organized, but very focused on methods for NN OOD detection. The paper makes a big deal out of the method being non-intrusive, but another way of saying this is just that the proposed method is a way of scoring a point being OOD with respect to a distribution, which is a problem that, in general, has nothing to do with NNs or their feature representations. Surely there is a large body of work on outlier detection in statistics that could be considered in a similar light to this method, where one takes an off-the-shelf outlier detection method’s score and just applies it to the data transformed to be in the feature space of the NN? That is essentially what this paper is doing (though for a novel method, and I am not questioning its novelty). I just wonder what other existing methods are out there that could be doing something similar, even if they haven’t been explicitly applied to NNs.
5. Section 4.5 and Appendix E: choices II and III seem like they would rather seriously break the connection between the estimated LD and the true LD, since the k-means clustering will in general (and in typical circumstances) have clusters with very different numbers of points in them, so by reducing to the cluster centers (or center+’s), you are representing very different numbers of points with different centers. Another way to say it is that the density of the n points via methods II and III is quite different from that of the original N points (or via method I), and hence using them to compute the LD will be quite different in nature from using method I or the original N points. I would expect these methods (II and III) to not even have any kind of consistency property to the true LD of the original points, given their change in the density.
6. I appreciated the authors' honesty in reporting LL ratio results as being better than their method (of course, it comes with a more complex process), but it seems worth noting that it is substantially better. Since all the AUROC scores are close to 1, it is natural to look at 1-AUROC (so smaller is better), in which case the LL ratio gets 0.006 and LD gets 0.029, almost 5x higher. I don't think the authors were misleading in presenting these results, but I found the two sentences (lines 252-254) highlighting the challenges associated with the LL ratio to be a bit vague, and the results might be more convincing if those challenges were made more explicit (possibly in an appendix if there isn't room in the main paper).
7. I don’t find Fig 5.2 very convincing, since the monotonicity here is a pretty weak property and no comparison is made with other methods—my guess would be that many methods satisfy monotonicity. Is that not the case?
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. What is \alpha in Fig 4.1? Is it the same for all panels?
2. Nothing about the proposed method seems to have anything to do with NNs or their feature space, and in particular, it is never mentioned why the method is applied to data points in the feature space, as opposed to the raw data points. I can imagine the reason is that the method works better with relatively “nice” densities, with fewer clusters and continuous densities supported on smooth manifolds, but there is no mention of this in the paper, and it seems like it merits discussion. I did see the last sentence mentions the method can be applied to any model with a feature space, but again, why is a feature space (or a classification model) even needed?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: I guess some of my points listed under “weaknesses” could be interpreted as limitations, and I would like to see them better addressed/discussed. If they are (even if the authors don’t change their method at all), that would raise my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, thank you for your kind and insightful review. Here are our answers.
Many aspects of the discussion will be added to the paper.
### Weaknesses
> W1: Two disjoint clusters in the data?
Very natural question. You are perfectly right that the population (ideal) Fermat distance in Eq. (3.4) would be infinite due to vanishing density $f$. However, the sample FD would remain finite -- of the order of the distance to the power $\alpha$. The finite size effect thus stabilizes the value.
In fact, Theorem 2.3 in the reference [6] proves convergence of rescaled sample FD to population FD, with rescaling $n^\beta$ with $\beta = (\alpha-1)/d$. More precisely
$$ n^\beta D_{Q_n, \alpha}(x,y) \rightarrow D_{f, \beta}(x,y) $$
when the samples $Q_n = (X_1, X_2, \dots)$ are either $n$ i.i.d. points with density $f$ or a Poisson process with intensity $n f$. It is crucial for $f$ to remain bounded from above and away from zero. As such, one could say that between clusters we need a "very small density" but not a "zero density"; hence the connectedness requirement.
This purely mathematical answer is rather unsatisfying. A more practical take would be to argue that if a class is disconnected, there is a problem with the feature space. We agree that a better clustering would solve the issue.
But here, the clustering is given by the labeling, since we are in a supervised classification setting. For a focus on (unsupervised) clustering, we found reference [1] on the use of FD for clustering.
[1] Sapienza et al. "Weighted geodesic distance following Fermat's principle."
[6] Groisman et al. Nonhomogeneous euclidean first-passage percolation and distance learning.
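To make the sample FD concrete: it is the shortest-path distance over the sample under edge weights $|q_i - q_j|^\alpha$. A minimal numpy/scipy sketch under our own naming (illustrative only, not the paper's implementation):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def sample_fermat_distances(points, alpha=3.0):
    # Complete graph over the sample with edge weights |q_i - q_j|^alpha;
    # with alpha > 1, many short hops through dense regions are cheaper than
    # one long jump across a low-density gap. Assumes distinct points
    # (csgraph treats zero entries of a dense matrix as absent edges).
    w = cdist(points, points) ** alpha
    return shortest_path(w, method='D', directed=False)
```

For three collinear points 0, 1, 2 with alpha = 2, the FD between the endpoints is 2 (two hops of cost 1) rather than the direct cost 4.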
> W2: Curse of dimensionality in the convergence sample FD to population FD.
In the previous convergence extracted from [6], the scaling does indeed depend on the dimension, thus reflecting a curse of dimensionality.
However, one could argue that this dependence is not too bad. Moreover [Theorem 2.7, 6] tackles the case of data living in an embedded manifold of dimension $d$, much smaller than the embedding space $\mathbb{R}^D$. In this case, only $d$ matters and not $D$.
This is a nice property for ML, where it is a common belief that data lives in a much smaller dimension than that of the raw data points.
In the end, these are sensible arguments for the lack of a curse of dimensionality in using FD.
> W3. “No trainable parameter”
By "trainable parameter", one refers to parameters that are optimized (by gradient descent, for example) during training. Here, $\alpha$ is chosen a priori and kept fixed throughout. And as you point out, its exact value is not very important.
Also, perhaps there is a misunderstanding, but we point out (as in W1) that the clustering is part of the data. It is given by the classes in our supervised classification problem, which we want to make more robust thanks to OOD.
> W4. Focus on "non-intrusive" and "NNs", ignoring that the method applies beyond NNs.
Yes, we agree with your remark. A comprehensive answer is given under the related Question 2.
> W5: Section 4.5 seem like they would rather seriously break the connection between the estimated LD and the true LD
Yes, your remark is correct. This could lead to some change in the original density. However, in the end, our objective is to measure how "central" a point is w.r.t. our data, and only the LD matters. So our motivation for using reduction methods is to find a configuration of points that covers the support of the original data well. If this is the case, then even if the density changes, the change in LD is minimal and the ordering of points by LD is hardly affected. That is, points that are "central" will keep a large LD, and points near the frontier of the original support will keep a small LD.
> W6: LL ratio
In this method, instead of directly using the main model, one needs to train two supplementary generative models to estimate distributions: a first model trained on ID data and a second trained on perturbed inputs, so that the second model captures only the background statistics. Under suitable assumptions, the authors show that the ratio between these two likelihoods cancels out the background information. Consequently, the LL ratio focuses on the semantic part of the input and can be used as a score to distinguish ID from OOD data. This method needs an adequate noise, in the sense that the perturbed inputs should contain only background information. Choosing this noise is itself complicated, as it requires a supplementary dataset. Moreover, one needs to train these two generative models very carefully so that they reflect the true underlying input density. This is quite complex. We will add this discussion to the appendix.
>W7: Fig 5.2 on the monotonicity.
We made the figure because similar ones appear in the literature. We agree it is not fundamental. Nevertheless, we believe that this monotonicity is a reasonable sanity/reality check, which is nice to observe.
### Questions
> Q1. $\alpha$ in Fig 4.1?
We used $\alpha=3$ in this experiment, and it is the same for all panels.
> Q2 (and W4). Beyond NNs, and feature space? Why not raw data points?
You are perfectly right that the combination LD+FD can be used for outlier detection in a more general context than NNs. This is what we had in mind in the very last sentence of the conclusion. We perhaps did not insist on this enough in the paper.
Regarding other literature, we cited the works of Cholaquidis et al. from statistics on LD and the works of Groisman et al. from probability of FD. We are unaware of any works combining the two, aside from ours, for any outlier detection.
Regarding the use of raw data points directly, this is indeed possible, as in [1] for clustering. However, in the context of NNs and vision, working directly on pixels is not a good idea. Feature spaces are more efficient in this context, as they extract the low-dimensional structure of the data at a semantic level.
Kind regards
The authors
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thoughtful and thorough rebuttal. I found it generally quite convincing, and was already quite positive on this paper. Clarifying that the clusters are just defined by the labels is important, sorry for my confusion on that! I do find the word "cluster" a bit unusual to refer to the points corresponding to an observed label, so the authors might consider rewording that. Anyway, I am raising my score to an 8. | Summary: The authors address the problem of out-of-distribution detection in supervised learning, with particular focus on neural network models. The developed method works in some feature (embedding) space by measuring the statistical depth of the query point with respect to some reference set of points. The particular implementation combines the lens depth function with the Fermat distance. The authors validate the proposed approach in a series of experiments on simulated and real-world data.
Strengths: * The paper is very well-written and easy to follow.
* The considered problem is relevant for practice as there is a significant demand in efficient and non-intrusive methods for uncertainty quantification.
* The proposed approach is solid with all the steps being properly motivated.
* The authors did a significant effort to do a comprehensive literature review, experimental evaluation and analysis, though all the steps were not fully successful (see Weaknesses and Questions below).
[After rebuttal comment] I appreciate the answer by the authors and increase my score to 6. My main concerns were addressed.
Weaknesses: * While usage of statistical depth functions and distribution/manifold related distances looks logical, it is not clear why the particular choices of Lens Depth and Fermat distance were made.
* The baselines considered are not comprehensive enough and some of the baselines are not interpreted correctly by the authors of the present paper. In particular:
a. Non-Gaussianity of the embedding distribution was directly considered in [1], aiming to improve over GDA. I think it is worth comparing with this method, as the present paper targets the same issue, though with a completely different approach.
b. I believe that the authors incorrectly say that the difference between papers [2] and [3] is only in the usage of spectral normalization. In my opinion, even more important is that [2] uses the Mahalanobis distance as the uncertainty measure, while [3] considers the density of a Gaussian mixture instead.
* The experiments are done with relatively simple datasets like CIFAR-10 for in-distribution data and SVHN/CIFAR-100/TinyImageNet as OOD. With the proposed approach being relatively lightweight, it is not clear why not to consider CIFAR-100/ImageNet as in-distribution with corresponding OOD choices (like ImageNet-R or ImageNet-O as OOD for ImageNet).
References
[1] Kotelevskii, Nikita, et al. Nonparametric uncertainty quantification for single deterministic neural network. Advances in Neural Information Processing Systems 35 (2022): 36308-36323.
[2] K. Lee, K. Lee, H. Lee, and J. Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. Advances in neural information processing systems, 31, 2018.
[3] J. Mukhoti, A. Kirsch, J. van Amersfoort, P. H. Torr, and Y. Gal. Deep deterministic uncertainty: A new simple baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24384–24394, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why Lens Depth was chosen and not other statistical depth functions like Half-space depth, Simplicial depth, ...
2. Why Fermat distance was chosen? One can consider many alternatives. For example, following manifold learning literature one can consider kNN graph constructed with Euclidean distance over embeddings and then computing shortest paths over the resulting graph.
3. Can you clarify how you implemented "GDA"-based methods? Did you use Mahalanobis distance or GMM-density?
4. Why didn't you do the experiments with more complex datasets? Is it due to the high computational cost of the LD + Fermat distance approach?
5. Have you tested the effectiveness of reduced LD on datasets more complex than MNIST? Apparently, more complex models may lead to more complex embedding structure and require more points for approximation.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Comment on weakness
We believe that [3] uses GDA and not a Gaussian Mixture Model (GMM). More precisely, a GMM computes the density of a point x as $p(x) = \sum_{i=1}^{C} w_i p_i(x)$, so one needs to fit both the weights $w_i$ and the parameters $\theta_i$ of each $p_i$ (here $\theta_i$ is a mean and covariance matrix). GDA, by contrast, fits only the $\theta_i$, and $p(x)$ is taken as $\max_{i} p_i(x)$. As both approaches lead to a mixture of Gaussians, some ML literature confuses GDA with GMM. The most notable difference between the two is that GMM has a smoothing effect on the density between the clusters, yielding larger values in these zones.
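To make the distinction concrete, here is a minimal numpy sketch of the two density constructions (the function names and toy setup are ours, purely for illustration):

```python
import numpy as np

def fit_gaussian(x):
    # Fit theta_i = (mean, covariance) for one class; returns a log-pdf.
    mu = x.mean(axis=0)
    cov = np.cov(x.T) + 1e-6 * np.eye(x.shape[1])
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    d = x.shape[1]
    def logpdf(y):
        z = y - mu
        return -0.5 * (np.einsum('ij,jk,ik->i', z, inv, z)
                       + logdet + d * np.log(2.0 * np.pi))
    return logpdf

def gda_density(x, class_logpdfs):
    # GDA: fit only theta_i per class, then take p(x) = max_i p_i(x).
    return np.exp(np.max([lp(x) for lp in class_logpdfs], axis=0))

def gmm_density(x, class_logpdfs, weights):
    # GMM: additionally fit weights w_i, then p(x) = sum_i w_i p_i(x).
    return sum(w * np.exp(lp(x)) for w, lp in zip(weights, class_logpdfs))
```

At a point between two clusters, GDA keeps only the larger of the two class densities, while GMM mixes them with the weights $w_i$.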
We did mention non-Gaussianity as a desirable feature of our method, as does the method of [1]. Yet it is not the main selling point: see Q1-Q2 below for the synergies of LD+FD. We shall cite [1], but we believe that kernel density estimation
- is not a far departure from GDA,
- does not provide a natural measure of centrality like Lens Depth, and
- needs bandwidth tuning.
Thus instead of adding that benchmark, we chose to focus on Q4.
### Answers to questions
> Questions Q1-Q2. Why Lens Depth vs. Tukey depth (aka half-space depth) or simplicial depth?
> Fermat distance vs. typical manifold learning, e.g., shortest paths after local kNN?
Thank you for these questions, which we answer jointly in order to stress that while choice (1) is made for convenience, choice (2) is more fundamental. Moreover, the combination (1)+(2) is synergistic rather than independent.
The main idea of our method is the combination of a notion of depth or centrality w.r.t a distribution, and a measure of length that gives shorter distances for high density areas. Furthermore, it is highly desirable that the notion of depth adapts to the chosen distance.
Q1. As you relevantly point out, there are many notions of depth, and we chose not to dwell on them in the paper. But indeed, it is easy to give a panorama of pros and cons:
- Tukey depth, aka half-space depth. Pros: computationally simple; naturally normalized. Cons: Euclidean, hence less synergy with the Fermat distance, as half-space separation is a very Euclidean notion.
- Simplicial depth. It counts the number of simplices of sample points that contain a given point. Cons: not normalized into a probability; the number of simplices can grow exponentially; computationally complex/expensive, since algorithms exist for the 2D plane but the problem is not trivial in higher dimensions.
- LD. Pros: accommodates any distance; normalized; moderate computational cost.
In summary, we can structure this discussion into the table.
| Depth notion | Adapts to any distance | Computational cost | Normalized into probability |
|-|-|-|-|
|Lens depth (LD)|Yes|Average|Yes|
|Half-space depth|No|Low|Yes|
|Simplicial depth|No as simplices are Euclidean|High|No|
LD is a well-rounded choice. And among these, it is the only one which can leverage the Fermat distance. If you think this is important, we could turn this explanation into an additional paragraph in the paper.
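For reference, the empirical LD underlying this comparison has a very short implementation once a distance is chosen (Euclidean below for simplicity; a sample Fermat distance can be plugged in instead). A sketch under our own naming:

```python
import numpy as np

def lens_depth(x, sample, dist):
    # Empirical lens depth: fraction of sample pairs (X_i, X_j) whose lens
    # B(X_i, d_ij) ∩ B(X_j, d_ij) contains x, i.e.
    # max(d(x, X_i), d(x, X_j)) <= d(X_i, X_j). Loops over O(n^2) pairs.
    n = len(sample)
    d_x = [dist(x, s) for s in sample]
    count = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if max(d_x[i], d_x[j]) <= dist(sample[i], sample[j])
    )
    return count / (n * (n - 1) / 2)
```

Central points get a depth near 1, while points far outside the support get a depth near 0.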
Q2. Indeed, manifold learning is the logical tool for evaluating distances in a latent space. However, typical methods such as those suggested lack the extra feature of "shorter distances in high-density areas". Indeed, in the Fermat distance, the sum $\sum_i |q_i - q_{i+1}|^\alpha$ is small when the $q_i$'s are close together and $\alpha$ is large.
And the idea of the Fermat distance, inspired by percolation theory in statistical physics, is to use a large parameter $\alpha$. This point is subtle yet important for us; perhaps we should insist more on it.
All in all, we believe we have made a sensible choice in combining LD as the notion of depth with the Fermat distance as the notion of distance.
>Q3.
GDA method: please see our comment on weakness above. The result is a mixture of Gaussians, as illustrated in Fig. 1.1 in our main paper.
> Q4. “Experiments with more complex datasets? High computational of LD + FD approach?”
> With being relatively lightweight, it is not clear why not to consider CIFAR-100/ImageNet as in-distribution with corresponding OOD choices.
As per the weakness you raised, we added CIFAR100 (InD) vs. Tiny-ImageNet (OoD).
Regarding the complexity of evaluation, we need a model trained on InD data of size N, and for every OoD example the complexity is O(C N^2). Even if this is reasonable for a single example, which is the relevant case for applications, it can indeed become quite large for a full-scale evaluation with ImageNet as InD. We estimate that this would require approximately 60 hours on our hardware for a single run, which was not possible during this rebuttal, especially since multiple runs are needed.
We did CIFAR100 / Tiny-ImageNet over 5 independent runs, as it required approximately 10 hours of experiments. The presence of C=100 classes hopefully shows that the method scales well to larger datasets. For a fair comparison, we use a Wide-ResNet-28-10 model and the same training scheme as in the DDU paper.
### Table. AUROC score with CIFAR100 as InD data and Tiny-ImageNet as OOD data. Results of other methods are taken from the DDU paper, which used the same Wide-ResNet-28-10 model.
| Method | AUROC |
|-|-|
|LD (ours)|$0.8310 \pm 0.0013$|
|Softmax Entropy|$0.8153 \pm 0.0005$|
|Energy-based|$0.8133 \pm 0.0006$|
|SNGP|$0.7885 \pm 0.0004$|
|DDU|$0.8313 \pm 0.0006$|
|5-Ensemble|$0.8295\pm 0.0009$|
> Q5. “Reduced LD on more complex dataset than MNIST?”
Yes, this was done.
Indeed, the results summarized in Tables 5.1 and 5.2 of the paper do use reduced LD. This is mentioned in the details provided in Appendix A, "Experiment details".
And while the reduction is sizable, the AUROC results do not degrade.
Finally, let us mention that the details of the CIFAR100/Tiny-ImageNet experiment will be added to the appendix.
### Conclusion
Given our answers, we sincerely hope you will consider raising your score.
Kind regards,
The authors
---
Rebuttal Comment 1.1:
Title: Rebuttal well received
Comment: Dear authors,
your rebuttal was well received and you partially addressed my concerns. I will decide on the score changes after the discussion with other reviewers.
---
Rebuttal 2:
Title: Any comments?
Comment: Dear exfg. The end of the discussion period is close. I would be grateful if you could provide feedback regarding the authors' answers to your review. | Rebuttal 1:
Rebuttal: First of all, thank you all for your time and insightful reviews. Here are the main points in the rebuttal.
- We answered all the questions to the best of our ability. In particular, the questions of Reviewer NNtx brought extensive mathematical discussions.
- We shall add / nuance multiple points about limitations.
- Reviewer kN5D remarked that Figure 4.2 shows the LD still works with 200 samples, and asked at what dataset size the method starts to fail and how catastrophic the failure is.
In the attached pdf, you can find the figure showing the experiment proposed by Reviewer kN5D. We hope this figure gives you more insight into our method in the small-data regime.
- Reviewers exfg and TY6a kindly proposed to test our method on larger-scale datasets to see whether it scales well to more complex data. In the allotted rebuttal time, we added an experiment with CIFAR100 as InD data and Tiny-ImageNet as OOD. The presence of C=100 classes hopefully shows that the method scales well to larger datasets. We performed 5 independent runs. For a fair comparison, we use a Wide-ResNet-28-10 model and the same training scheme as in the DDU paper.
### Table. AUROC score with CIFAR100 as InD data and Tiny-ImageNet as OOD data. Results of other methods are taken from the DDU paper, which used the same Wide-ResNet-28-10 model.
| Method | AUROC |
|-|-|
|LD (ours)|$0.8310 \pm 0.0013$|
|Softmax Entropy|$0.8153 \pm 0.0005$|
|Energy-based|$0.8133 \pm 0.0006$|
|SNGP|$0.7885 \pm 0.0004$|
|DDU|$0.8313 \pm 0.0006$|
|5-Ensemble|$0.8295\pm 0.0009$|
Kind regards,
The authors
Pdf: /pdf/d9d0b252ed25e3fc2f825ffce8d09425d02921f7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Simplified and Generalized Masked Diffusion for Discrete Data | Accept (poster) | Summary: This paper proposes a new framework for masked diffusion models for generative modeling of discrete data. Masked diffusion models offer an alternative to autoregressive models for discrete data but have faced challenges due to complex formulations and unclear relationships between different approaches. This paper presents a simplified and generalized framework to address these issues, enhancing the performance and training of masked diffusion models.
The key contributions includes:
1. Simplification of Model Formulation: The paper establishes properties for the forward process and its time reversal using elementary arguments, provides a simple expression for the Evidence Lower Bound (ELBO), demonstrating it as a weighted integral over time of cross-entropy losses, and shows invariance properties similar to continuous space diffusions.
2. Re-derivation of Training Objectives: The paper demonstrates how various previously proposed discrete diffusion training objectives can be derived from the ELBO objective by altering parameterization, relaxing constraints, or modifying loss weighting.
3. Performance Improvements: The paper demonstrates state-of-the-art likelihood and zero-shot transfer results on text and image tasks using the proposed ELBO objective.
4. Generalized Masked Diffusion Model: The paper proposes a generalized masked diffusion model that allows state-dependent masking schedules, further improving predictive performance on test likelihoods.
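The "weighted integral over time of cross-entropy losses" in point 1 can be sketched as a one-sample Monte Carlo estimator. Everything below is an assumption for illustration (a linear masking schedule $\alpha_t = 1 - t$, a toy vocabulary, and our own function names), not the paper's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
V, MASK = 27, 27  # 27 real tokens (indices 0..26); index 27 is the [MASK] token

def alpha(t):          # assumed linear masking schedule
    return 1.0 - t

def alpha_prime(t):    # its time derivative
    return -1.0

def mc_elbo_loss(x0, log_probs_fn):
    # One Monte Carlo sample of the continuous-time negative ELBO, written as
    # a cross-entropy on masked positions weighted by alpha'(t) / (1 - alpha(t)).
    t = rng.uniform(1e-3, 1.0)
    keep = rng.random(x0.shape) < alpha(t)   # each token kept w.p. alpha_t
    x_t = np.where(keep, x0, MASK)
    logp = log_probs_fn(x_t)                 # shape (..., V): log-probs over real tokens
    ll = np.take_along_axis(logp, x0[..., None], axis=-1)[..., 0]
    return (alpha_prime(t) / (1.0 - alpha(t))) * ll[~keep].sum()
```

With $\alpha'_t / (1 - \alpha_t) = -1/t$ for this schedule and log-probabilities $\le 0$, each sample of the loss is non-negative, consistent with its role as a negative ELBO.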
Strengths: 1. The paper makes a notable contribution to the field of generative modeling for discrete data by introducing a simplified and generalized framework for masked diffusion models.
2. The quality of the paper is reflected in the thoroughness of its methodology and the robustness of its experimental validation.
3. The paper is well-written and clearly structured, making it accessible to both experts and those new to the field.
4. The significance of the paper lies in its potential to impact a wide range of applications in generative modeling for discrete data.
Weaknesses: 1. While the paper provides a robust theoretical foundation, there could be more emphasis on practical applicability. The paper could benefit from additional practical guidelines for implementing the proposed framework, such as more detailed pseudocode and specific implementation challenges.
2. The experimental results presented are strong, but the range of tasks and datasets could be expanded, such as VQ-Diffusion [1] for token-based text-to-image.
3. I am unfamiliar with diffusion models for text generation. For image generation, the paper reports likelihood results but misses other common metrics, such as FID and IS.
[1] Vector Quantized Diffusion Model for Text-to-Image Synthesis
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see weaknesses.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have adequately described the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback! We are glad that you find our contribution notable, our methodology thorough and our experimental results strong and robust. We address each comment below
## Detailed pseudocode and specific implementation challenges
We thank the reviewer for the suggestion. Please refer to our response to questions shared by reviewers for pseudocode of training and sampling algorithms. We will add them in the final version. We have discussed implementation challenges in Appendix A of the paper. Please let us know if you have any additional questions about the implementation.
## The range of tasks and datasets could be expanded, such as VQ-Diffusion [1] for token-based text-to-image.
We thank the reviewer for the suggestion and will pursue this in future work. It's important to note that our pixel-level image modeling task is more challenging than modeling the latent space of vector quantized image encodings. While some discrete diffusion models have tackled high-resolution generation using the latter approach, to our knowledge, we're the first to go beyond 32x32 resolution for pixel-level modeling using discrete diffusion.
## Sample metrics for image generation
In our rebuttal pdf we added an FID evaluation using 50K randomly generated samples from our MD4 models on ImageNet 64x64. We compared different alpha schedules (linear vs. cosine) and show that the cosine schedule leads to significantly better sample quality. We further trained a class-conditional model on ImageNet 64x64 that improves the FID to around 7. Although these are not state-of-the-art FIDs on ImageNet 64x64, we emphasize that our models are optimized for likelihood using the ELBO rather than for sample quality, unlike the reweighted objectives used in continuous diffusion.
---
Rebuttal Comment 1.1:
Comment: I acknowledge having read the authors' rebuttal. My overall assessment of the paper remains unchanged, and I continue to support my current rating.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for the acknowledgement and the positive assessment! | Summary: The paper simplifies the mathematical formula for the absorbing state diffusion process. By doing so, the authors derive a continuous-time ELBO for masked diffusion models. Their method, MD4, achieves better perplexity scores than SEDD on text8 and zero-shot perplexity on numerous datasets.
Strengths: Simplifies the complex mathematical formulations for the absorbing state diffusion for D3PM.
Weaknesses: Weaknesses:
1. Weak empirical results
1. The zero-shot numbers for D3PM in Table 1 look fishy. There are only 2 differences between MD4 and absorbing-state D3PM:
1. Mathematical simplification. In the discrete case (Eqn. 6), even though MD4 features a simplified functional form for the ELBO, it shouldn't get any performance benefits in terms of perplexity, since it is mathematically equivalent to D3PM.
2. The improvement in ELBO could be because of the continuous-time formulation. However, VDM [1] has shown that for Gaussian diffusion, the improvement from discrete time (T=1000) to continuous time (T = $\infty$) improves the likelihood by less than 1%. For this reason, I request the authors to evaluate an already trained model and report the perplexity numbers on text8 or OWT using Eqn (6) with T=100, 1000, 10000. If the numbers reported for D3PM in Table 1 are indeed correct, and if the entire improvement is coming from the continuous-time formulation, then the discrete-time MD4 should get a number comparable to D3PM's zero-shot ppl numbers.
Questions: How did they retrain D3PM? Did they use the same transformer backbone as MD4? Did they use the same model size and data pre-processing scheme? Did they use uniform state or absorbing state diffusion process? The authors need to clarify this.
2. CIFAR10 Experiments. The AR baselines use old transformer models, hence the comparison isn't quite fair. Current SOTA diffusion models on ImageNet 32 achieve an NLL of 2.55 [2], which is far better than the absorbing-state diffusion models. So, I'm unsure about the takeaway from Table 3. In the conclusion section, the authors claim that "… on text and image data, the resulting masked diffusions outperform existing discrete and continuous diffusion models …", which is factually incorrect given that their method largely underperforms against Gaussian diffusion [1, 2].
2. Limited evaluation of GenMD4. The authors mention that GenMD4 performs poorly on zero-shot tasks. I request the authors to quantify this poor performance by providing
1. Validation ppl numbers on OWT
2. zero-shot ppl numbers.
[1] Kingma, D., Salimans, T., Poole, B. and Ho, J., 2021. Variational diffusion models. *Advances in neural information processing systems*, *34*, pp.21696-21707.
[2] Sahoo, S., Gokaslan, A., De Sa, C. and Kuleshov, V., 2024. Diffusion Models With Learned Adaptive Noise. arXiv:2406.07524
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Clarification on D3PM experiments in Table 1 as mentioned in the "weaknesses" section in the reviews.
2. Why did the authors decrease the dropout to 0.02 for OWT experiments and not set it to 0? Diffusion models are heavily regularized due to the randomness in the input to the model and oftentimes don't require additional regularization such as dropout. Hence, an intuitive or an empirical explanation would be helpful.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and the feedback! We clarify a few points.
## D3PM results & differences between MD4 and D3PM
* The reviewer is concerned whether the D3PM results for zero-shot transfer tasks are comparable to MD4. We clarify that these D3PM results are from the SEDD paper (Lou et al., ICML 2024), which retrained D3PM under the same setup as SEDD, including an absorbing-state diffusion. This paper states:
"... baselines are ... D3PM (with Absorbing Transition) … We retrain both models … reuse our model architecture and match hyperparameters (i.e. model size, training specifications)."
To ensure direct comparability with MD4, we reimplemented SEDD and reproduced their results (see "SEDD Absorb (reimpl.)" in Table 1). This ensures a fair comparison between the models.
* The reviewer states that MD4's simplified ELBO form shouldn't provide benefits compared to D3PM as both are ELBOs. However, we emphasize that this simplification is crucial for a well-engineered, numerically stable ELBO implementation, leading to significantly better performance. To illustrate, we analyze D3PM's original implementation of the $KL(q(x_s|x_t, x_0)||p(x_s|x_t))$ terms in the ELBO (s = t - 1 for D3PM) without simplification:
1. Sample $x_t \sim q(x_t|x_0)$
2. Calculate $q_{s|0}(\\cdot|x_0)$ where $\cdot$ takes all values in the vocabulary.
3. Calculate $q_{t|s}(x_t|\\cdot)$.
4. Calculate the logits of $q(x_s|x_t, x_0)$ via $qlogits = \\log (q_{s|0}(\\cdot|x_0) + \\epsilon) + \\log (q_{t|s}(x_t|\\cdot) + \\epsilon)$.
5. Repeat 2-4, replacing $x_0$ with NN output probabilities to get $p(x_s|x_t)$ logits ($plogits$).
6. Compute $KL(q(x_s|x_t, x_0)\\|p(x_s|x_t))$ using log_q = log_softmax(qlogits) and log_p = log_softmax(plogits).
For masked diffusion, as shown by our simplification, many elements of $q(x_s|x_t, x_0)$ and $p(x_s|x_t)$ are zeros (see eq. (40) in App.). These terms can be removed but are instead included by D3PM. Thus, D3PM has to introduce an $\\epsilon$ to avoid inf - inf, which gives NaNs in the KL computation. We performed an experiment on CIFAR-10 to show the numerical issues: we replaced the MD4 objective in our code with D3PM’s ELBO and tried different $\\epsilon$s (1e-20, 1e-12, 1e-8, 1e-6). Only after increasing it to 1e-6 did we avoid NaNs at the start of training. Still, we got NaNs after 300k iterations. The test BPD before NaNs was 3.03, while MD4 at this training iteration reports 2.89.
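To make the inf - inf issue concrete, here is a toy NumPy sketch (not the authors' or D3PM's actual code; the probability vectors are made up): the structurally-zero entries of the posterior make the naive KL produce NaN, an $\epsilon$ pad papers over it at the cost of a perturbed objective, and dropping the zero terms (the simplification in the paper) gives the exact value.

```python
import numpy as np

# Hypothetical posterior q and model p for one masked-diffusion KL term:
# both place exactly zero mass on states the reverse process can never visit.
q = np.array([0.0, 0.7, 0.3])
p = np.array([0.0, 0.6, 0.4])

# Naive KL over all entries hits 0 * (-inf - (-inf)) = nan on the zero terms.
with np.errstate(divide="ignore", invalid="ignore"):
    naive = np.sum(q * (np.log(q) - np.log(p)))

# D3PM-style fix: pad with a small eps before taking logs. This avoids NaNs
# here, but perturbs the objective and is fragile in low precision.
eps = 1e-8
padded = np.sum(q * (np.log(q + eps) - np.log(p + eps)))

# Simplified form: drop the structurally-zero terms before taking logs.
nz = q > 0
exact = np.sum(q[nz] * (np.log(q[nz]) - np.log(p[nz])))
```

Here `naive` comes out NaN while `exact` is the finite KL value, so no $\epsilon$ is ever needed once the zero terms are removed analytically.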
* We agree with the reviewer that, similar to VDM, the continuous-time vs. discrete-time difference will not make a huge difference in likelihood. However, our continuous-time formulation remains a key contribution for these reasons:
* It specifies the diffusion model using a simple monotonic function $\\alpha_t$ representing the unmasking ratio at time t. This enables exploration of schedules not used in discrete-time models like D3PM, where the model is specified with less intuitive jump probabilities $\\beta_i$ rather than masking ratios. One of our key findings is that the cosine $\\alpha_t$ schedule produces the best sample quality. In our rebuttal pdf, we include a comparison between the widely used linear schedule and the cosine schedule, showing the latter leads to significantly better FIDs on ImageNet 64x64.
* It enables easy adoption of training techniques from Gaussian diffusions, e.g., the antithetic sampling used by VDM for estimating the time integral in the ELBO, which reduces training variance; on CIFAR-10, this translates to a ~0.02 improvement in BPD.
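As an illustrative sketch of the VDM-style antithetic (low-discrepancy) time sampling mentioned above: a single shared uniform offset yields evenly spaced times modulo 1, so the batch stratifies the ELBO's integral over t instead of drawing i.i.d. U[0, 1] samples (the function name is ours, not from the paper):

```python
import numpy as np

def antithetic_times(batch_size, rng):
    # One shared uniform offset; times are evenly spaced modulo 1. This
    # stratifies the Monte Carlo estimate of the ELBO's time integral,
    # reducing gradient variance versus independent U[0, 1] draws.
    u0 = rng.uniform()
    return (u0 + np.arange(batch_size) / batch_size) % 1.0

rng = np.random.default_rng(0)
t = antithetic_times(8, rng)
# Exactly one sample lands in each of the 8 equal-width strata of [0, 1).
assert sorted(np.floor(t * 8).astype(int)) == list(range(8))
```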
## CIFAR10 comparison with AR
The reviewer states that “The AR baselines use old transformer models hence the comparison isn't fair.” We have included the best AR results we found for CIFAR10. We're open to comparing with newer models if the reviewer can provide references. Also note our reported MD4 result of 2.78 is not fully optimized, e.g., increasing the batch size to 256 improves the result to 2.75 using the same 20M parameter model.
## CIFAR10 comparison with continuous diffusion
The reviewer states that “Current SOTA diffusion models ... achieve a NLL of 2.55 [2] which is far better than absorbing diffusion.” We believe the 2.55 from [2] (which we assume refers to CIFAR-10, not ImageNet 32) isn't directly comparable to our results in Table 3 for these reasons:
* The 2.55 in [2] estimates exact log likelihood with probability flow ODE, while our results are upper bounds.
* The best variational bound result in [2] is 2.65, which relies on learnable schedules using NNs, while MD4 uses a fixed schedule.
* All continuous diffusion results in [2] Table 2 with BPD below 2.8 use extra image Fourier features (FFs) inputs from VDM. Comparing these with discrete diffusion is unfair since the latter assumes order-agnostic image data, i.e., the model is unaware of the proximity between pixels.
Also, the impact of FFs becomes negligible at larger scales such as ImageNet 64x64: with the dropout rate reduced to 0.0, MD4 on ImageNet 64x64 achieves 3.40 BPD, the same as VDM relying on FFs.
We will revise the imprecise statement in the conclusion as "On text data, our masked diffusions outperform existing discrete and continuous diffusion models. For pixel-level image modeling, we significantly improve discrete diffusion results, outperforming similar-sized AR models and achieving comparable likelihoods to continuous diffusion models without Fourier features."
## GenMD4 on zero-shot
We clarify that we did not state GenMD4 performs poorly on zero-shot tasks. In fact, the results there are mixed. We have included both validation PPL numbers on OWT and zero-shot PPL numbers for GenMD4 in our response to shared questions by reviewers.
## Dropout rate for OWT
Following the reviewer’s suggestion, we conducted dropout-rate tuning and found that a lower rate is better: dropout 0.05 gives eval PPL 24.63, dropout 0.02 gives 22.13, and dropout 0.0 gives 21.86. We’ll update the results in the final version.
---
Rebuttal Comment 1.1:
Title: Update
Comment: >The AR baselines use old transformer models hence the comparison isn't fair.
By newer models, I was referring to newer transformer architecture variations like the Llama 2 flavor of transformers, transformers with RoPE, etc. The comparison to SEDD is a fair comparison as noted, but it would be good to see more AR baselines. Also, non-AR non-sequence-based diffusion baselines should still be listed in Table 3 for the image models, as the performance is nowhere near SOTA for non-sequence-based non-AR models and it may confuse readers who are not familiar with generative models. Comparison to SOTA Gaussian diffusion models should be drawn, even though they are unfavorable to this class of methods.
I would encourage the author to study the numerical stability of their objective more in the final paper and it could make the submission much stronger.
I have raised my score to 6 to reflect that some of my concerns have been addressed.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Dear Reviewer,
Thank you for raising the score! We are glad that the rebuttal addressed some of your concerns.
To address the other points you mentioned, could you clarify on the following points?
* Llama2-style transformers with RoPE embeddings are dense models. When applied to images (CIFAR-10 has a sequence length of 3072), they can be significantly more computationally expensive than the models in Table 3. All AR models in Table 3 are sparse or low-rank, while the diffusion models use UNet architectures rather than transformers to manage computational costs. Are you suggesting we compare to dense AR models? We believe such a comparison would be unfair in terms of computational requirements.
* By non-AR non-sequence-based diffusion baselines, are you referring to continuous diffusion models?
Best,
Authors | Summary: The paper proposes a streamlined and generalized framework for masked diffusion models, addressing the complexities and inefficiencies of existing models, including those based on Score Entropy Discrete Diffusion (SEDD). It introduces a continuous-time variational objective for masked diffusion models, simplifying the evidence lower bound (ELBO) to a weighted integral of cross-entropy losses. Additionally, the paper presents state-dependent masking schedules, enhancing the flexibility and performance of these models. The proposed methods demonstrate state-of-the-art results in text and image tasks, significantly improving likelihood and zero-shot transfer performance.
Strengths: - The paper offers a novel theoretical formulation of the continuous-time variational objective for masked diffusion models, simplifying the training process and ensuring consistency between forward and reverse processes.
- The introduction of state-dependent masking schedules provides a more adaptable approach, catering to the specific characteristics of the data and improving model performance.
- The proposed methods achieve state-of-the-art performance in both text and image generative tasks, significantly enhancing likelihood and zero-shot transfer capabilities.
- By reducing the ELBO to a weighted integral of cross-entropy losses, the paper makes the training and understanding of masked diffusion models more accessible and potentially more stable.
- The paper includes comprehensive experimental validation on various datasets, demonstrating the robustness and superiority of the proposed methods.
Weaknesses: - Despite the theoretical simplifications, the practical implementation of state-dependent masking schedules can still be complex and computationally demanding. Specifically, obtaining the starting x_T is challenging, and since the sampling process lacks stochasticity, sampling cannot be done from the completely masked state.
- The state-dependent models have a tendency to overfit to dataset statistics, which can limit their effectiveness in zero-shot transfer tasks.
- While the paper demonstrates superior performance, a more detailed comparative analysis with other state-of-the-art methods, particularly regarding computational efficiency and training times, would provide a clearer picture of the advantages.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the authors provide more insights into the practical challenges faced during the implementation of the state-dependent masking schedules?
- How does the proposed model ensure consistency between the forward and reverse processes, and how does this impact training stability compared to SEDD?
- Could the authors provide a detailed and separate description of the training and sampling algorithms, similar to what is provided in the Appendix of the SEDD paper, to better and more easily understand the proposed method?
- How sensitive is the proposed method to hyperparameter choices? Do multiple runs with the same hyperparameters yield consistent performance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time you’ve taken to review our work and for the positive and constructive feedback! We are glad that you find our work "offers a novel theoretical formulation" and "achieve state-of-the-art performance" with "comprehensive experimental validation". We address each individual comment below.
## Practical challenges in implementation of state-dependent masking schedules
* Regarding computational cost, the state-dependent masking schedules and corresponding GenMD4 objective require only twice the computation of MD4 with the same network size. While slightly more expensive, this increase remains within affordable limits.
* Regarding sampling, we clarify that x_T is a full mask state even for the state-dependent schedule case. This is because the learned schedule $\\alpha_t = 1 - t^w$ still satisfies $\\alpha_1 = 0$ regardless of the value of $w>0$. We also clarify that the sampling process is stochastic in each step; please refer to the sampling algorithm described in our response to questions shared by reviewers.
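The boundary property claimed above is easy to check numerically: for any $w > 0$, the schedule family $\alpha_t = 1 - t^w$ starts fully unmasked and ends fully masked (a minimal sketch; the function name is ours):

```python
import numpy as np

def learned_alpha(t, w):
    # State-dependent schedule family alpha_t = 1 - t**w, with w > 0 learnable.
    return 1.0 - np.power(t, w)

for w in [0.3, 1.0, 4.0]:
    assert learned_alpha(1.0, w) == 0.0  # alpha_1 = 0: x_1 is fully masked
    assert learned_alpha(0.0, w) == 1.0  # alpha_0 = 1: x_0 is clean data
```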
## Effectiveness of state-dependent models in zero-shot transfer tasks
We have included the results of GenMD4 on zero-shot transfer tasks in our response to shared questions. The results are mixed: on some datasets (Lambada, PTB) GenMD4 results in significantly better zero-shot PPL numbers than MD4 while on other datasets (WikiText) GenMD4 is slightly worse. We believe this indicates some degree of overfitting to the dataset statistics of OpenWebText.
## Comparative analysis regarding computational efficiency and training times
We follow the reviewer’s suggestion to compare the training times of MD4/GenMD4 on OpenWebText using 128 TPUv3. We can see that MD4 is as fast as our reimplementation of SEDD while achieving better results. GenMD4 is slightly slower than MD4 due to the 2x computational cost.
* Gaussian Diffusion: 3.5 steps / s
* SEDD: 4.2 steps / s
* MD4: 4.2 steps / s
* GenMD4: 3.5 steps / s
## How the model ensures forward/reverse consistency and impact on training stability v.s. SEDD
For simplicity let’s consider the single-dimension case. As we have shown in Proposition 1, the true score function for a mask state $x_t=m$ has the following relationship with the conditional mean of $x_0$:
$$
s(m, t)\_j = \\frac{\\alpha_t}{1 - \\alpha_t} E[x_{0,j}|x_t=m]
$$
Note that here $x_0$ is a one-hot representation of the data therefore $\\sum_j x_{0,j} = 1$ and thus also $\\sum_j E[x_{0,j}|x_t=m] = 1$. This implies that
$$
\\sum_j s(m, t)_j = \\frac{\\alpha_t}{1 - \\alpha_t}.
$$
Since the transition rate matrix of the true reverse process depends on the score, the above equation implies a constraint that the reverse process has to satisfy in order to be consistent with the $\\alpha_t$-determined forward process.
As we have explained in Sec. 4 and App. F3, we can interpret both our method and SEDD as learning a model $s_{\\theta}$ for the true score function $s$. The differences are
* In MD4, $s_{\\theta}$ is parameterized as
$$
s_{\\theta}(m, t)\_j = \\frac{\\alpha_t}{1 - \\alpha_t} \\mu_{\\theta}(m)\_j
$$
where $\\mu_{\\theta}$ has a softmax output that produces a probability vector. Therefore, the score model also satisfies $\\sum_j s_{\\theta}(m, t)\_j = \\frac{\\alpha_t}{1 - \\alpha_t}\\sum_j \\mu_{\\theta}(m)\_j = \\frac{\\alpha_t}{1 - \\alpha_t}$.
* In SEDD, $s_{\\theta}$ is a free-form neural network model that outputs a real-valued vector, with no guarantee that it satisfies the constraint. This inconsistency between the forward and reverse processes leads to large variational gaps in the ELBO, which we have empirically shown to cause unstable training (see Figure 3).
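The contrast between the two parameterizations can be sketched numerically (illustrative shapes and values; `md4_score` is our name for the mean parameterization): a softmax output makes the consistency constraint $\sum_j s_j = \frac{\alpha_t}{1-\alpha_t}$ hold by construction, while a free-form positive vector generally violates it.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def md4_score(logits, alpha_t):
    # Mean parameterization: mu_theta is a probability vector, so
    # sum_j s_j = alpha_t / (1 - alpha_t) holds by construction, keeping
    # the reverse model consistent with the forward process.
    return alpha_t / (1.0 - alpha_t) * softmax(logits)

alpha_t = 0.3
s_md4 = md4_score(np.array([0.5, -1.2, 2.0, 0.1, -0.4]), alpha_t)
assert np.isclose(s_md4.sum(), alpha_t / (1.0 - alpha_t))

# A free-form (SEDD-style) positive score output need not satisfy it.
s_free = np.array([0.2, 0.5, 0.1, 0.8, 0.3])
assert not np.isclose(s_free.sum(), alpha_t / (1.0 - alpha_t))
```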
## Detailed description of the training and sampling algorithms
Please refer to our response above to questions shared by reviewers.
## Sensitivity to hyperparameter choices and multiple runs
We test the sensitivity to multiple runs by re-training MD4-S with the same hyperparameter as the original one reported in the paper, where the original PPL on OpenWebText eval split is 22.126 and the re-trained MD4-S gets 22.134. In the pdf we uploaded for rebuttal, we include an analysis of the impact of masking schedule choice on sample quality, showing the benefits of a cosine schedule. | Summary: Summary: This paper introduces a framework for masked diffusions that consolidates previous research on the topic and organizes it into a cohesive structure. The authors also present a generalized model within this framework, which enables the use of state-dependent masking schedules and optimization of scheduler parameters.
Strengths: 1. The GenMD4 framework offers a valuable approach to optimize the forward process. In earlier studies, forward processes were typically manually designed and set within the model. However, GenMD4 adjusts the forward distribution to align with the estimated distribution, thereby improving the forward process. This innovation may serve as a source of inspiration for developing more effective forward processes.
2. This paper summarizes previous formulations of masked diffusion models and establishes the connections between them.
Weaknesses: 1. In line 90. The handling of $p(x_0|x_{t(1)})$ could be enhanced. Assuming $p(x_0|x_{t(1)}) \propto q(x_{t(1)} | x_0)$ is equivalent to assuming that $q(x_0)$ is uniformly distributed. In reality, it should be treated the same as other $p(x_s|x_t)$.
2. In line 114. When discussing multidimensional data, it is not straightforward to assume that the backward process factorizes across tokens. This is because the distribution $p(x_0)$ does not factorize across tokens. Achieving factorization necessitates a small time step dt, which may not be easily observable. Additionally, in the previous single-token scenario, dt does not need to be small, indicating that one step is sufficient to model the distribution $p(x_0 | x_1)$. This aspect is crucial for multidimensional data and should be emphasized in a fundamental paper like this.
3. In Appendix F. The presence of a non-zero $\alpha_1$ may result in the "medium brightness problem" [1]. However, there is no singularity when $\alpha_1$ is zero if log-SNR is not introduced, and the time interval can be extended to [0, 1].
4. In Appendix G2. When applied to masked diffusion, $R_{kj}$ is zero when $j \ne k$ and $j \ne m$. Given that $R_{kk} + R_{km} = 0$, $\tilde{q}$ can only take on one value (m), resulting in no additional variance.
5. In image experiments, MD4 employs masked noise, while $\tau$LDR uses Gaussian noise. We recommend conducting experiments with the same noise scheduler to demonstrate conclusively that MD4 is superior. If the goal of this paper is solely to establish that masked noise outperforms Gaussian noise, we recommend explicitly stating this claim. Additionally, we advise detailing the sampling method, as variations in methodology can influence the quality of generated samples.
[1] Common Diffusion Noise Schedules and Sample Steps are Flawed, Lin et al., 2024
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. GenMD4 has not been tested on image datasets. Could you please share the results of GenMD4 when applied to image datasets?
2. Since introducing GenMD4 results in additional variance, what if all tokens share the same w (referred to as "simplified-GenMD4")? This would result in less variance. Given that GenMD4's performance is close to MD4, can simplified-GenMD4 achieve the same BPC?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The method is only applied to masked diffusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time you’ve taken to review our work and for the positive and constructive feedback! We are glad that you found our paper "offers a valuable approach to optimize the forward process”, “organizes prior work into a cohesive structure”, and "may serve as a source of inspiration" for future work. We respond to each of your comments below.
## The handling of 𝑝(𝑥0|𝑥𝑡(1)) could be enhanced
Thank you for the suggestion. In our implementation t(1) is tiny ($\\alpha_{t(1)} = 10^{-4}$) and we expect p(x0|xt(1)) to be very close to q(xt(1)|x0). We have also tried treating p(x0|xt(1)) the same as other p(xs|xt) and did not observe significant differences.
## Multidimensional data and backward factorization
Thank you for the suggestion. We agree with the reviewer that the true backward distribution becomes factorized only as $s \\to t$, which is why we adopt the continuous-time formulation throughout the work. We will follow the reviewer’s suggestion to make this more clear in the final version.
## Non-zero $\\alpha_1$
We originally introduced the non-zero $\\alpha_1$ to resolve numerical issues in GenMD4, where the REINFORCE LOO gradient requires calculating $\\log \\alpha_t$. We have since used the same schedule for MD4 as well. We agree with the reviewer that the non-zero $\\alpha_1$ is unnecessary for MD4. Although we have not observed any “medium brightness” problems, we will still rerun the MD4 experiment with $\\alpha_1 = 0$ and update the results.
## Appendix G2: Variance of Campbell et al. (2022)
We thank the reviewer for noting this. In the single dimension case, we agree that there is no additional variance via further simplification of the Campbell et al. bound. The additional variance comes from multidimensional data where the sum over $j$ becomes the sum over all neighbors in the forward process, i.e., the states that mask out a single dimension of $x_t$ that has not been masked yet. In this case, the Campbell et al. (2022) bound simplifies to
$$
\\beta(t) \\sum_{i: x_{t, i} \\neq m} \\log (x_{t, i}^\\top \\mu(x_t(i\\to m))_d
$$
where $x_t(i\\to m)$ denotes the state we get by masking out the i-th dimension of $x_t$. This equation requires $d$ function evaluations of the prediction model $\\mu$, thereby incurring additional variance if we estimate the sum with Monte Carlo. We will fix the discussion of Campbell et al. (2022) in the final version.
## 𝜏LDR with masked noise
We followed the reviewer’s suggestion and performed an additional experiment using 𝜏LDR with masked noise. We made sure the neural networks and training hyperparameters used by 𝜏LDR (masked) are the same as in our MD4 experiment. The results are as follows (for comparison we also include the MD4 results):
| Method | BPD (CIFAR-10) |
| :------ | ------: |
| 𝜏LDR (masked) | <= 3.52 |
| MD4 | <= 2.78 |
The results show that 𝜏LDR with masked noise suffers from the high-variance objective even with the same networks and training hyperparameters from MD4.
## Details of the sampling method
Please refer to our response to questions shared by reviewers.
## Results of GenMD4 on image datasets
We ran GenMD4 on both image datasets with the same architecture and hyperparameters as in MD4. On CIFAR-10, we get 2.7749 (GenMD4) v.s. 2.7847 (MD4). On ImageNet 64x64, we get 3.4233 (GenMD4) v.s. 3.4273 (MD4). We see consistent improvements in test BPD over MD4 throughout training although the improvement is smaller than that of text experiments (1.34 v.s. 1.37 BPC on text8 & 21.80 v.s. 22.13 PPL on OpenWebText validation set).
## Simplified GenMD4
We followed the reviewer’s suggestion and performed an additional experiment on text8 that learns only a single scalar w for GenMD4, shared across all tokens. The result is 1.37 BPC, which is almost the same as the result of MD4 and worse than GenMD4 with a vector w (1.34 BPC). We believe this is because the simplified GenMD4 reduces to MD4 with a learnable schedule (we can show this by choosing $\\alpha_t$ in (19) as a scalar schedule times an all-one vector) and MD4 is invariant to masking schedules.
---
Rebuttal Comment 1.1:
Title: Correction for Typos
Comment: Dear Reviewer,
We noticed a few typos in the original rebuttal and would like to correct them:
* It should be $\\alpha_{t(1)} = 1 - 10^{-4}$ in the response to "The handling of 𝑝(𝑥0|𝑥𝑡(1)) could be enhanced"
* The equation in the response to "Appendix G2: Variance of Campbell et al. (2022)" should be
$$
\beta(t) \sum_{i: x_{t, i} \neq m} \log (x_{t, i}^\top \mu(x_t(i\to m))_i)
$$ | Rebuttal 1:
Rebuttal: # Response to comments shared by reviewers:
We thank the reviewers for their feedback. Below we address the questions shared by reviewers. We also uploaded a rebuttal pdf that contains figures used to address individual questions/comments of the reviewers.
## bny3,Y9JA: Details of training and sampling algorithms
A single step of the MD4 training algorithm is described below:
Input: data batch $x_0$, network $\\mu_{\\theta}(\\cdot, t)$, masking schedule $\\alpha_t$ \
Draw a batch of $t$ from U[0, 1] using antithetic sampling \
Sample $x_t$ using $q(x_t|x_0)$ by masking out independently each dimension of $x_0$ with probability $1 - \\alpha_t$ \
Get prediction logits via $\\mu_{\\theta}(x_t, t)$ \
Calculate the cross entropy loss $CE$ using $x_0$ and the prediction logits and sum over the masked dimensions of $x_t$. \
Compute unbiased estimate of the negative ELBO as $-\\frac{\\alpha_t’}{1 - \\alpha_t} * CE$ \
Optimize the negative ELBO via autodiff.
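As a self-contained NumPy sketch of the training step listed above (no autodiff; the linear schedule $\alpha_t = 1 - t$, the `MASK` id, and the `predict_logits` signature are illustrative choices, not the authors' implementation):

```python
import numpy as np

MASK = -1  # illustrative id for the mask token

def md4_train_step(x0, predict_logits, rng):
    # x0: int array [B, N] of token ids; predict_logits(x_t, t) -> [B, N, V].
    B, N = x0.shape
    # Antithetic sampling of t: one shared offset, evenly spaced modulo 1.
    t = (rng.uniform() + np.arange(B) / B) % 1.0
    t = np.maximum(t, 1e-6)                  # guard against t = 0 exactly
    alpha, dalpha = 1.0 - t, -np.ones(B)     # alpha_t = 1 - t, alpha'_t = -1
    # Forward process: mask each dimension independently with prob 1 - alpha_t.
    keep = rng.uniform(size=(B, N)) < alpha[:, None]
    xt = np.where(keep, x0, MASK)
    # Cross entropy between x0 and the prediction logits, on masked dims only.
    logits = predict_logits(xt, t)
    logits = logits - logits.max(axis=-1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    ce = -np.take_along_axis(logp, x0[..., None], axis=-1)[..., 0]
    ce = np.where(keep, 0.0, ce).sum(axis=-1)
    # Unbiased negative-ELBO estimate, weighted by -alpha'_t / (1 - alpha_t).
    return (-dalpha / (1.0 - alpha)) * ce
```

In a real implementation `predict_logits` would be the network $\mu_\theta$ and the returned loss would be minimized via autodiff, per the last step of the algorithm.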
Throughout the paper we simply run ancestral sampling from a discrete-time backward model. Specifically, the sampling algorithm is:
Input: Context size N, discretization grid $0 = t(0) < t(1) < \\cdots < t(T) =1$ \
Init: $x_1 \\gets [m, \\dots, m]$ \
for $i=T, T - 1, \\dots, 1$: \
$t \\gets t(i)$, $s \\gets t(i-1)$ \
for $n \\in [N]$, if $x_t^{(n)} = m$, draw $x_s^{(n)} \\sim \\mathrm{Cat}(\\frac{\\alpha_s - \\alpha_t}{1 - \\alpha_t} \\mu_\\theta^{(n)}(x_t, t) + \\frac{1- \\alpha_s}{1 - \\alpha_t} e_m)$ else $x_s^{(n)} \\gets x_t^{(n)}$ \
return $x_0$
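The ancestral sampling loop above can be sketched as follows (a minimal NumPy version; the `mu` interface and mask id are our illustrative stand-ins for the prediction network):

```python
import numpy as np

def md4_ancestral_sample(mu, N, V, T, alpha, rng):
    # mu(x, t) -> [N, V] per-token probabilities over the vocabulary.
    MASK = V                         # extra id for the mask state
    x = np.full(N, MASK)
    grid = np.linspace(0.0, 1.0, T + 1)
    for i in range(T, 0, -1):
        t, s = grid[i], grid[i - 1]
        a_t, a_s = alpha(t), alpha(s)
        probs = mu(x, t)
        # A still-masked token is revealed with probability
        # (alpha_s - alpha_t) / (1 - alpha_t); otherwise it stays masked.
        p_reveal = (a_s - a_t) / (1.0 - a_t)
        for n in range(N):
            if x[n] == MASK and rng.uniform() < p_reveal:
                x[n] = rng.choice(V, p=probs[n] / probs[n].sum())
    return x
```

Note that at the final step s = 0 gives $\alpha_s = 1$, so `p_reveal` = 1 and every remaining masked token is filled in, matching the categorical in the algorithm above.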
We will follow the reviewer’s suggestion to include both algorithms in the final version.
## Y9JA, Pt68: Evaluation of GenMD4 on OpenWebText validation set and zero-shot tasks
* GenMD4 leads to better perplexity on OpenWebText validation set (this result is already shown in Figure 3 of the submission, here we report the precise values in a table):
| Method | Perplexity |
|-----------------------|-----------------------|
| Gaussian Diffusion | <= 27.28 |
| SEDD Absorb (reimpl.) | <= 24.10 |
| MD4 (Ours) | <= 22.13 |
| GenMD4 (Ours) | <= **21.80** |
* For zero-shot results, on some datasets (Lambada, PTB) GenMD4 results in significantly better zero-shot PPL numbers than MD4 while on other datasets (WikiText) GenMD4 is slightly worse. We believe this indicates some degree of overfitting to the dataset statistics of OpenWebText.
| Method | LAMBADA | WikiText2 | PTB | WikiText103 | IBW
| :------ | ------: | ------: | ------: | ------: | ------: |
| MD4 (Ours) | 48.43 | **34.94** | 102.26 | **35.90** | 68.10 |
| GenMD4 (Ours) | **33.31** | 41 | **65.06** | 41 | **52.1** |
Pdf: /pdf/7ca7196c49c779ad126e60f46e20e018c70c9661.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization | Accept (poster) | Summary: The paper proposes a pipeline method that orchestrates pre-trained foundation models to solve the social relationship classification problem. It uses vision models to extract information about the scene in text form as a caption. Relevant information about individual persons and objects (i.e., age, gender, general description) is also extracted in text form via instance segmentation + masking + captioning. The generated texts are then further converted into a Social Story with an LLM. With the novel prompt-engineering method, GSPO, another LLM then generates the social relationship from the Social Story.
Experimental results on the challenging benchmarks, PIPA and PISC, indicate its strong performance in the zero-shot setup. Extensive ablation studies were also done to evaluate the contributions of the various components. In particular, they clearly showed the merits of the "Social Story" design.
Strengths: The paper proposes a novel method to solve the challenging social relationship classification problem. The proposed method cleverly combines several state-of-the-art foundation models in a logical, intuitive, and yet non-obvious design to achieve state-of-the-art experimental results.
Weaknesses: 1. Besides the clever design of the pipeline, the direct technical contribution is slightly on the weaker side, as there is no obvious technical breakthrough. The proposed GSPO appears to be the main new technique introduced. However, I am not an expert in this area and will defer to other reviewers on its technical novelty and merits.
2. (minor) The use of the generic semantic segmentation model (SAM) may not be the optimal choice. There are much stronger human instance segmentation methods which can replace the paper's custom SAM method. Such methods are specifically trained on person datasets to handle various challenging scenarios unique to human segmentation, e.g., heavy occlusion and human-like objects (e.g., mannequins).
Ling, E., Huang, D., & Hur, M. (2022). Humans need not label more humans: Occlusion copy & paste for occluded human instance segmentation. BMVC.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Have the authors considered using pairwise attributes in addition to the individual person attributes, e.g., relative age between pairs (older/younger) or same/different clothing?
2. Why are only 2 attributes (age/gender) used for the person instance? In prior works, other attributes such as wearing a uniform are important for certain types of social relationships, e.g., team members.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: (minor) There may be some unintended negative consequences of automatic classification of social relationship.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback and the positive assessment of our work! Below, we detail our responses to the review concerns.
**W1: Technical novelty of GSPO**
Thank you for acknowledging the "clever" design of our pipeline. We agree that our major technical contribution lies in our proposed GSPO. Our SocialGPT transforms a visual classification task into a generative task of LLMs, presenting unique challenges in long prompt optimization that cannot be effectively tackled by existing methods. To address this issue, our GSPO method innovatively performs a greedy search utilizing gradient information at the segment level. The experimental results further validate the effectiveness of our proposed GSPO algorithm.
**W2: Human instance segmentation methods**
Thank you for your suggestion! In response, we incorporated the model suggested in [1] into our pipeline to directly compare it with our custom SAM method. The experimental results, presented in Table R4-1, indicate that the SAM model outperforms [1], achieving higher accuracy. The reason for SAM's superior performance is its ability to segment not only human figures but also non-person objects. This capability is particularly beneficial for social relation recognition, which relies on contextual object cues in addition to human figures. We will cite [1] and detail these additional experiments in the revised manuscript to provide a comprehensive view of our methodological choices.
Table R4-1. Results with different segmentation methods
|Segmentation| [1] |SAM|
|-|-|-|
|Accuracy (%)|57.26|64.10|
[1] Humans need not label more humans: Occlusion copy & paste for occluded human instance segmentation, BMVC 2022.
**Q1: Pairwise attributes**
Thank you for your valuable suggestion! We conducted experiments by adding one more relative attribute to our method. Specifically, we extended the queries to BLIP-2 to include not only individual age and gender attributes but also relative age or clothing differences between pairs.
The experimental results are summarized in Table R4-2. While the inclusion of relative age did not significantly alter the performance, suggesting that our original age attribute effectively captures relative age information, the addition of clothing attributes enhanced our accuracy by 1.03%. This indicates that incorporating complementary attributes can indeed enhance the performance of our model.
Table R4-2. Results with more attributes
|Attributes| Ours | + relative age | + clothing |
|-|-|-|-|
|Accuracy (%)|64.10|63.98|65.13|
**Q2: Other attributes**
Thank you for your question! As demonstrated in Table R4-2, incorporating additional attributes such as clothing information indeed enhances the performance of our framework. We would like to highlight that the primary focus of our paper is on developing a foundational model-based framework for zero-shot social relation reasoning and addressing the long prompt optimization issue within this framework. The decision on the number and type of attributes to generate task-oriented captions was driven by the desire to establish a simple baseline, which is not the main focus of our work. We initially selected age and gender as they are commonly referenced attributes that provide foundational insights for our zero-shot social relation reasoning baseline.
While we acknowledge the potential benefits of incorporating more diverse attributes, we opted to limit the scope of this initial study. We believe that exploring a broader array of attributes presents an exciting direction for future research, potentially leading to further improvements in performance.
**L1: Unintended negative consequences**
Thank you for raising this important concern. We recognize that automatic classification of social relationships can indeed lead to unintended negative consequences. To address this, we will expand our discussion in the broader impacts section of the paper.
In this expanded discussion, we plan to outline potential risks and propose mitigation strategies, such as implementing fairness and bias checks, and promoting transparent and responsible usage of our technology. We appreciate this suggestion and believe that addressing these concerns thoroughly will enhance the quality and ethical standing of our work.
---
Rebuttal Comment 1.1:
Title: Looking forward to your feedback
Comment: Dear reviewer,
Thank you for the comments on our paper. We have responded to your initial comments. We are looking forward to your feedback and will be happy to answer any further questions you may have. | Summary: This paper introduces SocialGPT, a modular framework for social relation reasoning that integrates the perception capabilities of Vision Foundation Models (VFMs) with the reasoning capabilities of Large Language Models (LLMs). To optimize prompts, the authors propose GSPO, a segment-based optimization algorithm for automated prompt tuning. Extensive empirical results validate the effectiveness of SocialGPT both quantitatively and qualitatively. GSPO consistently enhances SocialGPT's performance across various LLMs, and case studies demonstrate the framework's generalizability and interpretability.
Strengths: - The paper is well-organized, with a logical flow and clear explanations of each step.
- The proposed SocialGPT framework innovatively combines perception from VFMs with reasoning from LLMs, achieving competitive zero-shot performance and offering potential explanations for its reasoning process.
- Extensive experiments, ablation studies, and case studies comprehensively evaluate the framework's effectiveness.
Weaknesses: Section 3.2 mentions that using precise coordinates can pose challenges for LLM numerical reasoning. However, it appears in Figure 3 that the objects' positional relations in the social story are inferred from numeric coordinates provided in the dense captions with symbols. Does this coordinate-based inference lead to similar numerical reasoning challenges? Additionally, how are relative positional relations conveyed here using referral symbols?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses section above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback and the positive assessment of our work! We are happy that you find our framework **innovative** and our experimental evaluation **comprehensive**. Below, we detail our responses to the review concerns.
**W1-1**
> Does this coordinate-based inference lead to similar numerical reasoning challenges?
Thank you for the question! Indeed, using precise coordinates for social reasoning in LLMs poses substantial challenges as it requires the model to understand spatial relationships and perform social reasoning simultaneously, which can be complex.
To mitigate these challenges, we adopt a two-step approach in our methodology: first, we generate textual social stories by converting coordinates into descriptive textual spatial information; subsequently, we use these stories for social reasoning. This decomposition simplifies the cognitive load on the LLM by separating spatial understanding from social reasoning.
Recent research [1][2] on LLMs supports this approach: they have demonstrated that breaking down complex tasks into simpler, manageable sub-tasks for multi-step reasoning significantly enhances LLM performance by reducing cognitive demands.
Our empirical results further validate this method. As shown in Table 2 of our paper, the performance of our proposed method surpasses that of the coordinate-based method, underscoring the effectiveness of our strategy in reducing the difficulties associated with numerical reasoning in social contexts.
[1] Decomposed prompting: A modular approach for solving complex tasks, ICLR 2023
[2] Least-to-most prompting enables complex reasoning in large language models, ICLR 2023
**W1-2**
> How are relative positional relations conveyed here using referral symbols?
Thank you for the question! To address this issue, we instruct the LLM to describe spatial relationships among objects and people using textual descriptors instead of relying on precise coordinates. This is achieved by using specific prompts that focus on textual descriptions rather than numerical data.
For instance, we use prompts such as: "Depict the spatial relationships between individuals and objects, as well as the spatial relationships between people", "Must use symbols <O..> and <P..> when referring to objects and people", and "Do not use coordinates [x1,y1,w,h], [x1,y1], [w,h] or numbers to show position information of each object". These prompts guide the LLM to generate social stories that describe spatial relationships. The full set of prompts used for generating these social stories is detailed in Figure 7 of our paper, providing a comprehensive view of our methodology.
Figure 3 in our main paper shows an example of how the generated social stories describe the relative positional relations: "At the center of the image, a young <P1> boy", "To his right, a middle-aged <P2> man", "On the left side of the frame, a <P3> woman". We observe that the LLM can reason about the relative positional relations with our carefully designed prompts.
---
Rebuttal Comment 1.1:
Title: Looking forward to your feedback
Comment: Dear reviewer,
Thank you for the comments on our paper. We have responded to your initial comments. We are looking forward to your feedback and will be happy to answer any further questions you may have. | Summary: This paper proposes a framework called SocialGPT for social relation reasoning, which combines vision foundation models and large language models. A greedy segment prompt optimization method is also proposed to prompt the LLM. Experimental results show the effectiveness of the proposed method.
Strengths: ---The paper is well organized and written.
---The idea of combining VFMs and LLMs is reasonable.
Weaknesses: --- The paradigm of using VLMs for perceiving and LLMs for reasoning is currently a common solution for multimodal tasks. The main difference of this paper seems to be the use of a generated social story as the representation of visual content. As stated by the authors, LLMs perform best when working with human-readable natural language and often struggle with arithmetic reasoning tasks, which is why they design an additional process to generate social stories. However, the generation of social stories is also done by LLMs, which also suffer from the above difficulties.
--- The authors propose a candidate set consisting of alternative prompts for each segment and select the best-performing prompt from their combination. The final prompt is obtained by selection rather than generation, which limits the upper bound of the performance on the manually collected candidate set.
--- The function of SAM is to distinguish individuals in the image and obtain their coordinates. However, in the social story generation phase, the LLM (Large Language Model) discards the coordinates, retaining only the semantic information and losing the positional information. Conducting social relationship reasoning purely based on semantics may be insufficient. For example, in Figure 2, the social relationship is identified as a sibling relationship (brother and sister), but there are two boys in the image, both fitting the given description of "stands out in his vibrant red and green striped pajamas," making it unclear which individual P1 refers to.
Technical Quality: 3
Clarity: 3
Questions for Authors: --- Is the design of using LLMs for social story generation optimal, and why? Also, have the authors tried other approaches to generate social stories from dense captions instead of using LLMs?
--- In the part of reasoning with large language models, the social relation reasoning prompt is artificially divided into four partitions: System, Expectation, Context, and Guidance, but the motivation and reasonableness of such a design is not elaborated in the paper.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please see the weakness and limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments! Below we address the detailed questions. We hope that our responses will reflect positively on your final decision.
**W1-1: Common solution**
We are fully aware that leveraging foundation models for vision tasks is a growing trend, which also motivates our work. We would like to emphasize that while our SocialGPT provides a simple zero-shot social relation baseline using foundation models, our major technical contribution lies in our GSPO. It addresses the long prompt optimization issue, automatically optimizing the SocialPrompt.
**W1-2: Difficulties for social story generation**
Recent studies [1][2] on LLMs have shown that decomposing complex tasks, such as mathematical reasoning, into sub-tasks for multi-step reasoning, effectively reduces difficulties, and improves performance. Therefore, instead of directly using LLMs for social reasoning with raw, dense captions, we first integrate these captions into a coherent social story. This story conveys positional information through text rather than numerical coordinates, and then we perform social reasoning with the social stories. This two-step decomposition has reduced the reasoning difficulties and achieved higher performance. We conducted ablation experiments on social stories, and the results in Table R2-1 show that the introduction of social stories greatly improved performance.
Table R2-1. Ablation on social story
|social story|PIPA|PISC|
|-|-|-|
|without|45.31|37.42|
|with|61.58|45.13|
[1] Decomposed prompting: A modular approach for solving complex tasks, ICLR 2023
[2] Least-to-most prompting enables complex reasoning in large language models, ICLR 2023
**W2: Upper bound**
There are many works on automatic prompt tuning for LLMs; one popular direction is performing greedy exhaustive searches or selections over tokens. These methods [3][4] are well-recognized in the NLP field. They typically perform well and do not face any upper-bound issues. For example, AutoPrompt [3] identifies a candidate set of the top-k tokens and performs greedy selection, which has garnered over 1500 citations. Our GSPO further proposes performing a greedy search by utilizing gradient information at the segment level to address the long prompt optimization issue.
We want to clarify that, besides the example segment, the candidate sets for other segments are generated by ChatGPT and are not manually collected. For the example segment, the candidates can be chosen from the entire training set, thus GSPO is also not constrained by a limited candidate set. The results in Table 5 of our paper also validate the effectiveness of our GSPO.
[3] Autoprompt: Eliciting knowledge from language models with automatically generated prompts, EMNLP 2020
[4] Automatically Auditing Large Language Models via Discrete Optimization, ICML 2023
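For illustration, the segment-level greedy selection described above might be sketched as follows. This is a minimal toy sketch: the function name `gspo_greedy_search`, the candidate-list representation, and the `score_fn` interface are our own assumptions, not the paper's implementation, which additionally uses gradient information when ranking candidates.

```python
def gspo_greedy_search(candidates, score_fn):
    """Toy sketch of segment-level greedy prompt selection.

    candidates: one list of alternative texts per prompt segment.
    score_fn:   maps a full prompt (tuple of segments) to a loss (lower is better).
    Segments are optimized one at a time; earlier choices stay fixed.
    """
    chosen = [c[0] for c in candidates]  # start from the first candidate of each segment
    for i, seg_candidates in enumerate(candidates):
        best, best_loss = chosen[i], float("inf")
        for cand in seg_candidates:  # evaluate every candidate for segment i
            trial = chosen[:i] + [cand] + chosen[i + 1:]
            loss = score_fn(tuple(trial))
            if loss < best_loss:
                best, best_loss = cand, loss
        chosen[i] = best  # commit the best candidate before moving to the next segment
    return chosen
```

Unlike token-level methods such as AutoPrompt, the search unit here is an entire prompt segment, which keeps the number of loss evaluations linear in the number of candidates per segment rather than exploding over long prompts.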
**W3: Losing positional information**
Sorry for the misunderstanding. We want to clarify that in our social story generation phase, we explicitly instruct LLMs to preserve positional information, rather than lose it. We discard only the numerical coordinates and use textual sentences to describe positional information. The corresponding prompts
> Illustrate the spatial relationship and depict the interaction between different people
and
> Depict the spatial relationships between individuals and objects, as well as the spatial relationships between people
are shown in Figure 7 of our paper.
Additionally, our generated social story could include very detailed positional cues, as exemplified in Figure 3 of our paper: "At the center of the image, a young <P1> boy", "To his right, a middle-aged <P2> man", "On the left side of the frame, a <P3> woman". As for the example in Figure 2, it actually contains weaker positional information due to the performance fluctuations of LLMs: "gathered", "Sitting beside him is a girl, <P2>". While this is not sufficient to reconstruct the original image details, it is adequate for LLM reasoning, which is our primary requirement.
To further verify the effectiveness of our position-related prompts in the social story generation phase, we compared scenarios with and without these prompts. The results in Table R2-2 show the effectiveness of our positional prompts.
Table R2-2. Ablation on positional prompts
|Method|w/o pos|w. pos|
|-|-|-|
|Accuracy (%)|58.32|61.58|
**Q1: Other approaches to generate social stories**
We considered and evaluated two alternative approaches alongside our LLM-based method: 1) BLIP-2: instructing BLIP-2 to generate social stories. 2) Concatenation: combining all dense captions into a single paragraph. Table R2-3 shows that the LLM-based method attains the best performance.
Table R2-3. Results of different social story generation approaches.
|Method|BLIP-2|Concatenation|LLM|
|-|-|-|-|
|PIPA|43.85|45.31|61.58|
|PISC|40.17|37.42|45.13|
**Q2: Four partitions**
The Four partitions are well motivated and supported by the recent research progress in LLM. Research [5] shows the system prompt "establishes the foundational context for the model’s responses". Study [6] demonstrates that assigning roles to LLMs can enhance their performance, which illustrates the importance of the Expectation prompt. According to [7], incorporating background knowledge benefits LLMs reasoning, strongly motivating the use of the Context prompt. As for the Guidance prompt, using in-context examples has become a common practice to improve LLM performance [8].
Besides the above evidence, Table 2 of our paper provides a detailed ablation, validating their effectiveness.
[5] Jailbreaking GPT-4v via self-adversarial attacks with system prompts, arXiv:2311.09127 (2023).
[6] Better Zero-Shot Reasoning with Role-Play Prompting, NAACL 2024.
[7] Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering, ACL Workshop 2023.
[8] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, NeurIPS 2022.
---
Rebuttal Comment 1.1:
Title: Looking forward to your feedback
Comment: Dear reviewer,
Thank you for the comments on our paper. We have responded to your initial comments. We are looking forward to your feedback and will be happy to answer any further questions you may have.
---
Reply to Comment 1.1.1:
Title: Looking forward to your feedback before the end of discussion period
Comment: Dear reviewer,
Thank you very much for your valuable feedback. As the discussion period is coming to an end, we kindly wish to inquire if our response has fully addressed your concerns. Please let us know if there are any additional concerns or questions. | Summary: This manuscript introduces SocialGPT, a modular framework designed to enhance social relation reasoning by combining Vision Foundation Models (VFMs) and Large Language Models (LLMs). SocialGPT utilizes VFMs to convert image content into a textual social story, followed by LLMs performing text-based reasoning. The paper further introduces the Greedy Segment Prompt Optimization (GSPO) algorithm to optimize prompts for LLMs, addressing the challenges of long prompt optimization. The proposed method achieves competitive zero-shot results on social relation recognition tasks and offers interpretable answers.
Strengths: - The GSPO algorithm provides an efficient method for optimizing long prompts, significantly improving the performance of LLMs in social relation reasoning tasks.
- SocialGPT achieves competitive zero-shot results on PIPA and PISC datasets, demonstrating the effectiveness of the proposed approach without additional model training.
- By leveraging LLMs for reasoning, SocialGPT can generate language-based explanations for its decisions, enhancing the interpretability of the results.
Weaknesses: - The approach involves substantial computational resources for both the perception and reasoning phases, potentially limiting accessibility and scalability for some users.
- The experiments, while promising, are primarily conducted on two datasets. Further testing on a broader range of datasets and tasks would strengthen the generalizability of the findings.
- The method assumes that the visual context provided by VFMs is sufficiently detailed and accurate, which might not always hold true in diverse real-world scenarios.
- The compatibility of the proposed method seems to be limited; Table 5 implies that LLaMA2-based SocialGPT performs very poorly compared to Vicuna. The proposed framework may work only for specific types of models.
Technical Quality: 2
Clarity: 3
Questions for Authors: - How can we evaluate generated social stories? It would be great if the authors could show how GSPO improves the quality of generated social stories.
- How GSPO can be performed without the ground-truth answer? The current formulation in section 4 seems to require the ground-truth to define the loss objective.
- What are the differences between the social story of SocialGPT and social relationships used in baselines? I feel that Image-based text explanation is not new.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See the weakness section and question above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments! Please find our responses to specific queries below. We hope that our responses will reflect positively on your final decision.
**W1: Substantial computational resources**
Our method leverages multiple foundation models, which may initially appear computationally intensive compared to traditional social relation methods that use dedicated networks. However, the paradigm shift introduced by foundation models offers a significant advantage: they enable solving multiple downstream tasks with just one set of models using different prompts, rather than training a separate model for each task. Using a single set of foundation models for diverse tasks, as demonstrated by the success of ChatGPT, is not only economical but also enhances scalability and accessibility. Our method facilitates deployment in scenarios where intelligent agents are expected to handle multiple tasks simultaneously. Therefore, our approach ultimately presents a more scalable and accessible solution, aligning with the future directions of intelligent systems deployment.
**W2: The experiments are conducted on two datasets**
Our choice to conduct experiments on two datasets (PIPA and PISC) aligns with common practices in the field. Previous SOTA methods such as Dual-Glance, GRM, SRG-GN, GR2N, and TRGAT have utilized these datasets, either individually or in combination, as benchmarks.
As there are no other commonly used social relation datasets, following the suggestion, we further collected and annotated a small dataset consisting of 100 samples from diverse real-world scenarios on our own. We then directly tested the zero-shot performance on this dataset, and the results below show that our method significantly outperforms the previous open-sourced SOTA method, GR2N.
Table R1-1. Results on a new dataset
|Method |GR2N|Ours(GPT-3.5)|Ours(Vicuna-13B)|
|-|-|-|-|
|Accuracy (%)|48.0|63.0|62.0|
**W3: Detailed and accurate visual context**
While converting all information from an image into text with VFMs can be challenging, obtaining **social-related** visual information sufficient for later reasoning is easier and feasible. Our SocialGPT introduces several designs to facilitate such a process: generating dense captions with SAM to cover all objects, inquiring social-related questions through BLIP-2 to obtain task-oriented captions, and generating social stories to depict the image from a social relations perspective. With these designs, our SocialGPT, without any fine-tuning on social relation datasets, achieves better performance on the PIPA and PISC datasets compared with SOTA methods that are specifically trained on these social datasets. This illustrates the effectiveness of the proposed framework. Furthermore, compared with previous methods trained on social datasets, foundation models usually provide better generalization in diverse real-world scenarios, which is also validated by the results in Table R1-1.
**W4: The compatibility**
The performance of our method is directly influenced by the reasoning ability of LLMs, as we use LLMs for final social relation prediction. Table R1-2 shows that with a more advanced LLM version, Llama 3 achieves performance comparable to that of other LLMs. These findings illustrate that our framework is adaptable and can effectively leverage advancements in LLM technology, showing the compatibility of our method.
Table R1-2. Results with different LLMs
|LLM|GPT-3.5|Vicuna-13B|Llama-3-8B|
|-|-|-|-|
|PIPA|64.1|66.70|67.82|
|PISC|53.43|65.12|65.34|
**Q1: Evaluate social stories**
To evaluate the quality of generated social stories, we adopt another LLM (GPT-4) as the judge. We input two different social stories for the same image into this judge LLM and ask it to decide which one describes a more vivid and detailed social story. Our social stories are generated using the prompts shown in Figure 7 of our main paper. We compare our results with a baseline method that instructs LLMs to fuse dense captions without our carefully designed prompts. The results in Table R1-3 show that our generated social stories are of high quality.
Table R1-3. Social story evaluation
|Method|Baseline|Ours|
|-|-|-|
|Preference|0.13|0.87|
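The pairwise comparison protocol above could be sketched as follows. This is hypothetical: the `judge` callable stands in for the GPT-4 judge, and the method keys are illustrative, not the authors' code.

```python
def pairwise_preference(judge, pairs):
    """Toy sketch of LLM-as-judge evaluation: for each image, the judge picks
    the better of two social stories; report each method's win fraction."""
    wins = {"baseline": 0, "ours": 0}
    for baseline_story, our_story in pairs:
        winner = judge(baseline_story, our_story)  # returns "baseline" or "ours"
        wins[winner] += 1
    total = len(pairs)
    return {method: count / total for method, count in wins.items()}
```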
> It would be great if the authors could show how GSPO improves the quality of generated social stories.
We want to clarify that GSPO is not used for improving social stories; instead, it is used to find a better SocialPrompt (see Figure 2). This improved prompt can better instruct LLMs in social relation reasoning. We have included the SocialPrompts before and after using GSPO in Figures 9-11 of our main paper.
**Q2: Ground-truth answer**
We want to clarify that GSPO needs the ground truth class labels y but does not have textual ground truth since it deals with LLM outputs. This is the "free-form target" challenge, as mentioned in our paper. Our solution involves constructing the textual ground truth that starts with "The final answer is str(y)" and only supervising the first sentence of the LLM's output with it. More details and strategies to complement this design are presented in lines 194-226 of our main paper.
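As a toy illustration of this construction (the function name and the naive period-based sentence split are our own simplifications; the paper supervises the constructed first sentence within the LLM's output):

```python
def matches_constructed_target(output_text, label):
    """Hypothetical sketch of the 'free-form target' handling: build the textual
    ground truth from the class label y and check only the first sentence of
    the LLM's free-form output against it."""
    target = f"The final answer is {label}."
    first_sentence = output_text.split(".")[0].strip() + "."
    return first_sentence == target
```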
**Q3: Image-based text explanation**
Sorry, we do not fully understand this question and are unsure what is meant by "social relationships used in baselines". Could you please elaborate further? We are more than happy to answer your question. We try to answer it below based on our current understanding.
A social story is a textual paragraph depicting the social context of an image, while social relationships are the target categories, represented by a one-hot numerical vector without any textual explanations in previous social relation methods. Image-based text explanation is not the main focus of this paper. Instead, our paper aims to provide a zero-shot baseline for social relation recognition using foundation models. Building on this framework, the GSPO is proposed to automate the prompting process.
---
Rebuttal Comment 1.1:
Title: Looking forward to your feedback
Comment: Dear reviewer,
Thank you for the comments on our paper. We have responded to your initial comments. We are looking forward to your feedback and will be happy to answer any further questions you may have. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Customized Multiple Clustering via Multi-Modal Subspace Proxy Learning | Accept (poster) | Summary: This paper incorporates multi-modal subspace proxy learning (Multi-Sub) to design a novel end-to-end multiple clustering approach and utilizes the synergistic capabilities of CLIP and GPT-4 to align textual prompts expressing user preferences with corresponding visual representations. The main contributions of Multi-Sub can be summarized as follows:
1. Capturing user’s clustering interest: Existing works struggle to adapt to diverse user-specific needs in data grouping. To overcome this limitation, Multi-Sub explicitly captures a user’s clustering interest by learning the desired clustering proxy under a user’s interest and aligning textual interest with corresponding visual features.
2. Simultaneously optimized framework: Previous methods separated the representation learning and clustering stages. In contrast, Multi-Sub obtains the desired representations and clustering simultaneously, which significantly improves clustering performance and efficiency.
3. Extensive experimental validation: Extensive experiments on all public multiple clustering tasks demonstrate that Multi-Sub outperforms other methods. Moreover, a series of ablation studies further verifies the effectiveness of Multi-Sub.
Strengths: 1. In the real world, data may have multiple aspects by which they can be grouped into different clusters. However, existing methods consider only a single partition, so it is meaningful to propose an effective algorithm to overcome this problem.
2. The authors leveraged large language models (LLMs), including GPT-4 and CLIP, to align image and textual representations in the same subspace. Then, multi-modal subspace proxy learning is introduced to allow for the customized representation of data in terms specific to the user’s interests.
3. Experimental results on public datasets show that the Multi-Sub method yields a significant improvement, indicating the effectiveness of the proposed method.
Weaknesses: 1. To change the two-stage learning approach of previous works, Multi-Sub aims to learn representation and clustering simultaneously. However, Multi-Sub employs a two-phase iterative approach to align and cluster images in training process, including (1) Phase I: Learning and Alignment; (2) Phase II: Clustering and Optimization. I wonder if this is another form of two-stage task.
2. The description of the clustering loss in Section 3.4 is not very clear: how is it determined that samples belong to the same class? By pseudo-labels? Where do the pseudo-labels come from?
3. In this paper, the authors introduce large language models (LLMs) to learn representations and bridge the gap between textual and image features. But does the direct use of a pre-trained large language model introduce a priori information about the categories, which could corrupt the unsupervised setting?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed social impacts and limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **W1: I wonder if this is another form of two-stage task.**
Thanks for your invaluable feedback. We will make it clear in the revision as follows. A two-stage process used by previous methods separates the representation learning and clustering entirely, where the representation learning is fully completed at first. This separation, however, can lead to sub-optimal clustering results, as the learned representations may not be fully aligned with the clustering objective. Instead, we obtain both the proxy word and the clustering alternately. Concretely, we first learn the proxy word in a user-preferred subspace (i.e., **Eqn. (5)** in the submission). Then, we fix the proxy word and open the image encoder to obtain better image representations considering the clustering objective (i.e., the clustering loss at **line 240** in the submission). These two steps are repeated until convergence. Therefore, these two components can improve each other iteratively, which shows better performance than the separated two-stage strategy in previous methods.
---
### **W2: The description of Clustering Loss is not very clear in Section 3.4, how to determine that samples belong to the same class? By pseudo-label? Where did the pseudo-label come from?**
Thanks for pointing this out. During the clustering, the proxy word is fixed, which can be used to obtain the pseudo-labels to determine the inter-cluster and intra-cluster relationships. Concretely, an offline clustering algorithm (e.g., k-means) can be applied to the currently fixed proxy words to group the samples as the pseudo-labels. We will clarify it in the subsequent versions.
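A minimal sketch of this pseudo-labeling step: a pure-Python Lloyd's k-means with a deterministic initialization from the first k samples. The function name and details are illustrative assumptions, not the authors' code, which clusters the proxy-word representations.

```python
def kmeans_pseudo_labels(features, k, iters=10):
    """Group samples by offline k-means and return one pseudo-label per sample."""
    centroids = features[:k]  # deterministic init: first k feature tuples
    labels = [0] * len(features)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, f in enumerate(features):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centroids[c])),
            )
        # update step: recompute each centroid as the mean of its members
        for c in range(k):
            members = [f for f, l in zip(features, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels
```

Samples sharing a pseudo-label are then treated as intra-cluster pairs (and samples with different pseudo-labels as inter-cluster pairs) by the clustering loss.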
---
### **W3: Does the direct use of a pre-trained large language model introduce a priori information about the category, which can lead to unsupervised scenarios being corrupted?**
Thanks for your insightful comments. It should be noted that although the category information released by the LLM can help determine a better subspace for the proxy learning, it is hard to cover all ground-truth labels, especially for user-specific domains as in this work. The categories provided by the LLM can be too narrow, too expansive, or entirely non-overlapping with the data. Therefore, the corresponding categories existing in the data are unknown or uncertain. Moreover, the exact label for each instance is still unknown, as in most unsupervised scenarios. We will make it clear in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Some of my concerns have been addressed. After reading the responses and the comments from other reviewers, I would maintain the original score.
---
Reply to Comment 1.1.1:
Comment: Many thanks again for your insightful comments for improving our work. If applicable, your further suggestion on any remaining issues would be greatly appreciated. | Summary: This paper presents an innovative approach for addressing the limitations of existing multiple clustering methods. By leveraging the synergistic capabilities of CLIP and GPT-4, Multi-Sub aligns textual prompts with visual representations to cater to diverse user-specific clustering needs. This method introduces a novel multi-modal subspace proxy learning framework, which automatically generates proxy words from large language models to represent data in terms specific to user interests. The experimental results demonstrate that Multi-Sub consistently outperforms existing baselines across various datasets. Overall, I believe this paper makes a substantial contribution to the field of deep clustering and holds significant practical application value.
Strengths: The paper offers several notable strengths that contribute to its overall impact and significance in the field of multiple clustering:
1. The integration of CLIP and GPT-4 for multi-modal subspace proxy learning is novel and effectively addresses the limitations of traditional multiple clustering methods.
2. Multi-Sub excels in capturing and responding to diverse user interests, providing tailored clustering results without requiring extensive manual interpretation. Moreover, the performance gains come at a low cost and seem relatively easy to achieve.
3. The writing is clear and easy to follow. The figures are well-drawn, allowing for a quick understanding of the research motivation and methodological design.
4. Extensive experiments on a wide range of publicly available datasets demonstrate the robustness and generalizability of the proposed method.
Weaknesses: Despite its strengths, there are some areas where the paper could be improved to enhance its clarity and applicability:
1. Although the paper mentions the hyperparameters used, a more detailed analysis and discussion on the sensitivity of the method to these parameters would be beneficial.
2. Given the method's iterative nature and the use of large models, there is a risk of overfitting, especially on smaller datasets. I am curious whether regularization techniques were used to address this issue?
3. Table 3 compares the impact of different text encoders on performance. Clearly, there are significant performance differences when using different encoders, and the authors have indeed analyzed this issue. However, I believe the reasons behind this phenomenon could be explored in depth. Intuitively, given that the input text is quite simple, the overall performance should not be particularly sensitive to the choice of text encoder.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Although the paper mentions the hyperparameters used, a more detailed analysis and discussion on the sensitivity of the method to these parameters would be beneficial.
2. Given the method's iterative nature and the use of large models, there is a risk of overfitting, especially on smaller datasets. I am curious whether regularization techniques were used to address this issue?
3. Table 3 compares the impact of different text encoders on performance. Clearly, there are significant performance differences when using different encoders, and the authors have indeed analyzed this issue. However, I believe the reasons behind this phenomenon could be explored in depth. Intuitively, given that the input text is quite simple, the overall performance should not be particularly sensitive to the choice of text encoder.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The limitations are discussed in the paper by the authors. There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **W1: A more detailed analysis and discussion on the sensitivity of the method to these parameters.**
We greatly appreciate your suggestion. To show the sensitivity of the balancing factor $\lambda$, which is the only hyper-parameter in our proposal, we conducted experiments on CIFAR-10. We varied the value of $\lambda$ from 0.1 to 1.0 in increments of 0.1 to observe its effect on the model's performance. As shown in **Figure 1** of the one-page PDF, different values of $\lambda$ can work with our method, and the optimal performance for "Type" \& "Environment" clustering is achieved when $\lambda$ is set to 0.5. When $\lambda$ is too low, the model focuses too much on maximizing the distances between different clusters, which can lead to less cohesive clusters. Conversely, when $\lambda$ is too high, the model emphasizes compactness within clusters at the expense of inter-cluster separation, leading to less distinct clusters. Therefore, we set $\lambda$ to 0.5 for all datasets, which confirms the robustness of our method.
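To make the trade-off concrete, a $\lambda$-weighted objective of the kind described above might look as follows (the exact form of `clustering_loss` is our illustration, not the submission's equation): a small $\lambda$ rewards inter-cluster separation, a large $\lambda$ rewards intra-cluster compactness.

```python
import numpy as np

def clustering_loss(X, labels, centroids, lam):
    """Illustrative lambda-weighted trade-off: minimize intra-cluster
    spread (weight lam) while maximizing inter-cluster separation
    (weight 1 - lam, hence the negative sign)."""
    intra = np.mean([np.linalg.norm(x - centroids[l]) for x, l in zip(X, labels)])
    k = len(centroids)
    inter = np.mean([np.linalg.norm(centroids[i] - centroids[j])
                     for i in range(k) for j in range(i + 1, k)])
    return lam * intra - (1 - lam) * inter

# Two tight, well-separated blobs: the correct grouping scores lower
# (better) than a scrambled one at the balanced setting lam = 0.5.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
good = np.array([0] * 10 + [1] * 10)
bad = np.array([0, 1] * 10)

def centroids_of(lbl):
    return np.array([X[lbl == j].mean(axis=0) for j in (0, 1)])
```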
---
### **W2: Whether regularization techniques were used to address overfitting?**
We would like to clarify that the primary objective of our method is clustering rather than supervised learning, where overfitting is a more prevalent issue. Specifically, clustering focuses on finding inherent patterns and structures in the data rather than fitting to specific target labels. Therefore, regularization in clustering helps constrain the data subspace to reveal data groups rather than alleviate the overfitting of a model. In this work, the subspace constrained by a user's preference serves as this regularization in the proposed method. We will make this clear in the revision.
---
### **W3: There are significant performance differences when using different encoders, this phenomenon could be explored in depth.**
Thanks for your insightful comments. We conducted an additional analysis using the Maximum Mean Discrepancy (MMD) metric to quantify the differences in the feature spaces generated by different text encoders (i.e., CLIP, ALIGN, and BLIP).
| **Dataset** | **Clustering** | **CLIP vs. ALIGN** | **CLIP vs. BLIP** | **ALIGN vs. BLIP** |
| ----------------- | :------------: | :----------------: | :---------------: | :----------------: |
| **Fruit360** | Color | 0.234 | 0.198 | 0.211 |
| | Species | 0.189 | 0.172 | 0.183 |
| **Card** | Order | 0.215 | 0.202 | 0.219 |
| | Suits | 0.198 | 0.184 | 0.192 |
| **CMUface** | Emotion | 0.276 | 0.245 | 0.263 |
| | Glass | 0.231 | 0.217 | 0.225 |
| | Identity | 0.263 | 0.249 | 0.258 |
| | Pose | 0.245 | 0.228 | 0.239 |
| **Stanford Cars** | Color | 0.238 | 0.223 | 0.231 |
| | Type | 0.212 | 0.198 | 0.205 |
| **Flowers** | Color | 0.257 | 0.244 | 0.252 |
| | Species | 0.248 | 0.231 | 0.242 |
| **CIFAR-10** | Type | 0.193 | 0.178 | 0.186 |
| | Environment | 0.178 | 0.162 | 0.174 |
The MMD results indicate that although our text prompts are simple, the feature spaces generated by different text encoders exhibit significant distributional differences.
The effectiveness of a text encoder can vary depending on the specific clustering task. For example, ALIGN tends to excel in tasks with more abstract attributes, such as colors and emotions, while CLIP shows strong performance in identity-related tasks. This variability underscores the importance of selecting an appropriate text encoder based on the specific application requirements. The differences between text encoders may come from their different pre-training tasks, and exploring this will be an interesting future direction.
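For reference, a minimal RBF-kernel MMD in the spirit of this analysis might look as follows (the bandwidth `sigma` is hand-picked for this toy dimensionality, and the synthetic "encoder features" are placeholders; in practice the bandwidth is often set by the median heuristic):

```python
import numpy as np

def mmd_rbf(X, Y, sigma=4.0):
    """Biased estimate of squared MMD between two samples under an
    RBF (Gaussian) kernel: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def k(A, B):
        d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

# Stand-ins for features of the same images under different encoders:
# identical distributions give an MMD near 0, a shifted one a larger value.
rng = np.random.default_rng(0)
feats_a = rng.normal(0.0, 1.0, (200, 16))
feats_b = rng.normal(0.0, 1.0, (200, 16))  # same distribution as feats_a
feats_c = rng.normal(0.5, 1.0, (200, 16))  # shifted distribution
```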
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed responses to my feedback. After carefully reviewing the rebuttal, I am satisfied that they have fully addressed all of my concerns. The datasets used in this work are quite comprehensive, and the detailed description provided so far has greatly helped my understanding. Overall, Multi-Sub shows strong potential for user-friendly clustering tasks, which I find highly valuable. The method's contribution is clear: it effectively aligns user needs with image-level features by using textual information as a bridge, all within an end-to-end training process that integrates both textual representations and clustering-oriented fusion.
In light of the novelty of the proposed method and the improvements made, I'm raising my rating to 7.
---
Reply to Comment 1.1.1:
Comment: We are very grateful for your appreciation and endorsement. Your feedback holds significant value in helping us enhance our work. We will carefully incorporate them in our revision. | Summary: The paper is about Multiple Clustering, which is an interesting topic. The authors propose a novel end-to-end multiple clustering approach that incorporates a multi-modal subspace proxy learning framework. The paper is well written and well organized. However, there are several concerns in the current version of the paper that addressing them will increase the quality of this paper.
Strengths: 1 The authors' idea of using large models to aid clustering is novel.
2 The paper is clearly structured and easy to understand.
3 The paper has sufficient experiments to support its point of view.
Weaknesses: 1 The authors point out that different clustering results can be given for different customization needs of users. This raises several associations (not necessarily accurate): a. What should be done if the user's demand is exactly opposite to the potential clustering distribution? b. The experiments do give different clustering results for different demand types; if the user proposes a new type of demand, can the model also adaptively adjust?
2 Figure 2 is well drawn but could be further improved, some icons and fonts need to be adjusted.
3 The authors point out that their model is capable of outputting clustering results directly, and then there should be a corresponding formula to represent this. In addition, it is hoped that the authors will discuss further why, if it is not a difficult task to output clustering results directly, few previous methods have done so.
4 Authors should add details about the dataset, such as data size, feature types, etc.
Technical Quality: 3
Clarity: 3
Questions for Authors: Considering that the authors did not add an appendix, are there any other discussions or experiments?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **W1: Several associations.**
Thanks for your insightful comments. Multi-Sub works by learning in the user-preferred subspace. Therefore, it is theoretically unlikely that the learned representations are completely opposite to the user's demand under such an aligned subspace. We will make it clear in the revision.
To evaluate how Multi-Sub adapts to new user demands not originally provided in the ground-truth of the dataset, we conducted an additional experiment using the Fruit dataset. Specifically, we introduced a new demand based on the shape of the fruits, with the prompt set as "fruit with the shape of *". We categorized the fruits into two shapes, that is, round and elongated. Although this specific demand may not be common in practical applications, it serves as an exploratory experiment to test the adaptability of our method.
The following results demonstrate that Multi-Sub successfully adapted to the new user demand of shape. The model learned to align the textual descriptions of shapes with the visual features, resulting in a clustering under the new subspace of shape. Thanks for the suggestion and we will include this experiment in the revision.
| **Method** | **NMI** | **RI** |
| ----------------- | ------- | ------- |
| MSC | 0.553 | 0.762 |
| MCV | 0.586 | 0.787 |
| ENRC | 0.603 | 0.825 |
| iMClusts | 0.629 | 0.821 |
| AugDMC | 0.643 | 0.844 |
| Multi-MaP | 0.717 | 0.875 |
| **Multi-Sub (ours)** | **0.752** | **0.891** |
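For completeness, minimal reference implementations of the two metrics in the table (NMI and the Rand Index), written from their standard definitions rather than taken from our evaluation code:

```python
import numpy as np

def rand_index(a, b):
    """Fraction of sample pairs on which two labelings agree
    (both pairs grouped together, or both apart)."""
    n = len(a)
    agree = sum((a[i] == a[j]) == (b[i] == b[j])
                for i in range(n) for j in range(i + 1, n))
    return agree / (n * (n - 1) / 2)

def nmi(a, b):
    """Normalized mutual information: I(a;b) / sqrt(H(a) * H(b))."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    def entropy(x):
        p = np.bincount(x) / n
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1
    p = joint / n
    outer = p.sum(1, keepdims=True) @ p.sum(0, keepdims=True)
    mask = p > 0
    mi = (p[mask] * np.log(p[mask] / outer[mask])).sum()
    return mi / np.sqrt(entropy(a) * entropy(b))
```

Both metrics are invariant to label permutation, e.g. `rand_index([0, 0, 1, 1], [1, 1, 0, 0])` is 1.0.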
---
### **W2: Figure 2.**
Thanks for pointing this out and we will carefully revise Figure 2 in the revision.
---
### **W3: Clustering Loss.**
Thanks for the suggestion and we will make the following clear in the revision. Previous methods often use a two-stage process that separates representation learning from clustering to simplify the optimization. This separation, however, can lead to sub-optimal clustering results, as the learned representations may not be fully aligned with the clustering objective. Instead, we obtain the proxy word and the clustering alternately. Concretely, we first learn the proxy word in a user-preferred subspace (i.e., Eqn. (5) in the submission). Then, we fix the proxy word and open the image encoder to obtain better image representations considering the clustering objective (i.e., the clustering loss at line 240 in the submission). These two steps are repeated until convergence.
---
### **W4: Datasets.**
Thanks for the suggestion. We will make the following descriptions clear in the revision.
1. **Stanford Cars**: This dataset contains 1,200 images of cars annotated with labels for both color and type (e.g., sedan, SUV).
2. **Card**: This dataset includes 8,029 images of playing cards with labels for rank (Ace, King, Queen, etc.) and suits (clubs, diamonds, hearts, spades).
3. **CMUface**: This dataset consists of 640 facial images with annotations for pose, identity, glasses, and emotions.
4. **Flowers**: This dataset includes 1,600 flower images labeled by color and species (e.g., iris, aster).
5. **Fruit**: This dataset comprises 105 images of fruits with labels for species (apples, bananas, grapes) and color (green, red, yellow).
6. **Fruit360**: Similar to the Fruit dataset, Fruit360 contains 4,856 images of various fruits with detailed annotations for species and color.
7. **CIFAR-10**: This dataset is structured to have 60,000 images clustered based on type (transportation, animals) and environment (land, air, water).
The data size, handcrafted features, and clusters for all datasets we used are also summarized in the following table. It is worth noting that, in our experiments, we apply both traditional and deep learning baselines. Traditional methods rely on hand-crafted features, while deep learning methods directly utilize the original images as input.
| **Datasets** | **# Samples** | **# Hand-crafted features** | **# Clusters** |
| :-------------: | :-----------: | :--------------------------------------------: | :------------: |
| Stanford Cars | 1,200 | wheelbase length; body shape; color histogram | 4;3 |
| Card | 8,029 | symbol shapes; color distribution | 13;4 |
| CMUface | 640 | HOG; edge maps | 4;20;2;4 |
| Fruit | 105 | shape descriptors; color histogram | 3;3 |
| Fruit360 | 4,856 | shape descriptors; color histogram | 4;4 |
| Flowers | 1,600 | petal shape; color histogram | 4;4 |
| CIFAR-10 | 60,000 | edge detection; color histograms; shape descriptors | 2;3 |
---
### **Question.**
We do have some additional experiments as in the **one-page PDF** that we will add in the revision. Those experiments include:
- Clustering performance based on shape demand on the Fruit dataset, demonstrating the adaptability of Multi-Sub for new demands not existing in ground-truth clusterings provided by the data.
- Sensitivity analysis of the balancing factor $\lambda$ on the CIFAR-10 dataset, indicating a best value of 0.5, which is expected since when $\lambda$ is too low, the model focuses too much on maximizing the distances between different clusters, which can lead to less cohesive clusters. Conversely, when $\lambda$ is too high, the model emphasizes compactness within clusters at the expense of inter-cluster separation, leading to less distinct clusters. Therefore, 0.5 is used for all datasets.
- Quantifying feature space differences generated by different text encoders using the Maximum Mean Discrepancy (MMD) metric, showing that even with simple text prompts in our work, different pre-trained text encoders vary in their abilities. | Summary: This paper introduces an end-to-end multi-clustering method that integrates a multimodal subspace proxy learning framework. It combines text prompts expressing user preferences with corresponding visual representations to achieve clustering based on user interests.
Strengths: 1.The clustering task, driven by user interests, aligns better with user preferences and is more applicable to real-world scenarios.
2.The experimental results are promising, and the methodology is clear and logical.
Weaknesses: 1.The contributions of the paper are not very clear. At first glance, it appears to merely combine CLIP and GPT, lacking innovative architecture.
2.The baseline methods chosen for comparison are neither cited nor introduced.
3.Section 5 discusses only limitations, lacking a discussion on broader impacts.
Technical Quality: 2
Clarity: 3
Questions for Authors: The evaluation metrics mentioned in the paper require comparing results with ground truth values. How were the multiple clustering ground truth values in the dataset obtained? How is the accuracy of these ground truth values ensured?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors only address the limitations of their work and do not discuss broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **W1: The contributions of the paper.**
Thank you for the suggestion. We will carefully emphasize our contribution in the revision as follows:
Given only a high-level user interest in an unsupervised scenario without any class labels or names, we cannot directly apply CLIP. Instead, we must learn the unknown proxy word in a continuous space, which is challenging. The subspace approach proposed in this work aids effective learning. More importantly, our proposal can simultaneously optimize the proxy word and the clustering result. We will also emphasize this in our framework figure in the revision.
---
### **W2: The baseline methods chosen for comparison.**
Thank you for pointing this out. We will provide detailed descriptions in the revision as follows:
1. **MSC** [2] is a traditional multiple clustering method that uses hand-crafted features to automatically find different feature subspaces for different clusterings.
2. **MCV** [1] leverages multiple pre-trained feature extractors as different views of the same data.
3. **ENRC** [3] integrates auto-encoder and clustering objective to generate different clusterings.
4. **iMClusts** [4] is a deep multiple clustering method that leverages the expressive representational power of deep autoencoders and multi-head attention to generate multiple salient embedding matrices and multiple clusterings therein.
5. **AugDMC** [6] leverages data augmentations to automatically extract features related to different aspects of the data using a self-supervised prototype-based representation learning method.
6. **DDMC** [5] combines disentangled representation learning with a variational Expectation-Maximization (EM) framework.
7. **Multi-MaP** [7] relies on a contrastive user-defined concept to learn a proxy better tailored to a user's interest.
---
### **W3: Broader impacts.**
Thank you for the suggestion. We will add the discussion that our proposal has potential in various applications like personalized marketing. For example, it can enhance advertisement effectiveness by tailoring clustering results to business preferences.
---
### **Question: How were the multiple clustering ground truth values in the dataset obtained?**
All multiple clustering datasets used in our experiments are public and come with pre-defined ground-truth labels in different aspects for evaluation that have been widely used. For datasets like Stanford Cars and CIFAR-10, the clustering ground truth labels are derived from their existing class labels. Below are the details of the datasets:
1. **Stanford Cars**: This dataset contains 1,200 images of cars annotated with labels for both color and type (e.g., sedan, SUV).
2. **Card**: This dataset includes 8,029 images of playing cards with labels for rank (Ace, King, Queen, etc.) and suits (clubs, diamonds, hearts, spades).
3. **CMUface**: This dataset consists of 640 facial images with annotations for pose, identity, glasses, and emotions.
4. **Flowers**: This dataset includes 1,600 flower images labeled by color and species (e.g., iris, aster).
5. **Fruit**: This dataset comprises 105 images of fruits with labels for species (apples, bananas, grapes) and color (green, red, yellow).
6. **Fruit360**: Similar to the Fruit dataset, Fruit360 contains 4,856 images of various fruits with detailed annotations for species and color.
7. **CIFAR-10**: This dataset is structured to have 60,000 images clustered based on type (transportation, animals) and environment (land, air, water).
The data size, handcrafted features, and clusters for all datasets we used are also summarized in the table. It is worth noting that, in our experiments, we apply both traditional and deep learning baselines. Traditional methods rely on hand-crafted features, while deep learning methods directly utilize the original images as input.
| **Datasets** | **# Samples** | **# Hand-crafted features** | **# Clusters** |
| :-------------: | :-----------: | :--------------------------------------------: | :------------: |
| Stanford Cars | 1,200 | wheelbase length; body shape; color histogram | 4;3 |
| Card | 8,029 | symbol shapes; color distribution | 13;4 |
| CMUface | 640 | HOG; edge maps | 4;20;2;4 |
| Fruit | 105 | shape descriptors; color histogram | 3;3 |
| Fruit360 | 4,856 | shape descriptors; color histogram | 4;4 |
| Flowers | 1,600 | petal shape; color histogram | 4;4 |
| CIFAR-10 | 60,000 | edge detection; color histograms; shape descriptors | 2;3 |
---
#### **References**
[1]. J. Guérin and B. Boots. Improving image clustering with multiple pretrained cnn feature extractors.In British Machine Vision Conference 2018, BMVC 2018, 2018.
[2]. J. Hu, Q. Qian, J. Pei, R. Jin, and S. Zhu. Finding multiple stable clusterings. Knowledge and Information Systems, 2017.
[3]. L. Miklautz, D. Mautz, M. C. Altinigneli, C. Böhm, and C. Plant. Deep embedded non-redundant clustering. In Proceedings of the AAAI conference on artificial intelligence, 2020.
[4]. L. Ren, G. Yu, J. Wang, L. Liu, C. Domeniconi, and X. Zhang. A diversified attention model for interpretable multiple clusterings. IEEE Transactions on Knowledge and Data Engineering, 2022.
[5]. J. Yao and J. Hu. Dual-disentangled deep multiple clustering. In Proceedings of the 2024 SIAM International Conference on Data Mining (SDM), 2024.
[6]. J. Yao, E. Liu, M. Rashid, and J. Hu. Augdmc: Data augmentation guided deep multiple clustering. In INNS DLIA@IJCNN, 2023.
[7]. J. Yao, Q. Qian, and J. Hu. Multi-modal proxy learning towards personalized visual multiple clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. All my concerns have been addressed, and I recognize the contributions of this paper. Therefore, I will raise my score.
---
Reply to Comment 1.1.1:
Comment: We highly appreciate your acknowledgment and will carefully take your insightful feedback to improve our revision. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely thank you for the invaluable time and effort invested in reviewing our submission. Your constructive and insightful feedback is greatly appreciated for improving our revision.
We have carefully responded to all the questions and concerns raised in the individual rebuttal sections. Additionally, we have a one-page PDF that includes all the related tables/figures. Please kindly check if necessary.
Thank you once again for your insightful comments and suggestions, which have greatly contributed to the improvement of our work.
Best regards,
Authors
Pdf: /pdf/ee996e2f7e4c0bf630abe090ac30e8b46db36b9f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Feedback control guides credit assignment in recurrent neural networks | Accept (poster) | Summary: The authors explore the relationship between feedback control and learning with recurrent neural networks (RNN). Specifically, they enforce a control signal onto an RNN that is used to generate a trajectory for an outreaching task, and then propose to use local learning rules on the neurons in the RNN. They show that with feedback control the network can adapt faster to perturbations of the task, and show that the local (in time) gradients are better aligned with the global ones.
Strengths: The claims are all very reasonable and well illustrated. I believe this is the first time such feedback-based learning is used in proper control settings, which is surprising given that it is based on control theory.
Weaknesses: Main problem:
My main concern is that the task chosen consists of bringing a system to a desired static target, so it is possible that there is no "out of equilibrium dynamics"; rather, the learning simply consists of bringing the "arm" to the required target, and it just so happens that the shortest trajectory aligns with the velocity profile. While it could be that the trajectory is indeed learned (and with some implicit or explicit regularization it should be the case), the current task is not conclusive. If the point is to really learn a trajectory, the authors should have picked a task where the trajectory is a bit more complex than going to equilibrium. Maybe a limit cycle? Otherwise the work might be a minor modification of Meulemans et al.
Also, I fail to see the "biological circuits". If we are talking about recurrent neural networks, this is fine, but usually when we talk about circuits in biology we would refer to cell types (and this has a lot of constraints). In fact the authors themselves state that they are agnostic to the biological implementation, which is hardly compatible with the title. I would replace it by recurrent neural networks.
Other issues:
- The key findings are not clear in the introduction. The term "inference learning" is only used there and in one of the figures, but it is not clearly defined. If the authors mean that feedback control can train an RNN then this has already been shown. For the second finding, "increased accuracy of approximate local learning rules" would be better stated as increased accuracy WITH local learning rules (or something similar). For the third, the second order gradient is not really injected (this would suggest that the gradient is imposed on purpose); rather, the feedback control is implicitly related to second order optimization methods.
- Line 142: it seems natural that if the network is perturbed from its trajectory the feedback would be stronger to compensate for the perturbation. I don't see why this is "suggested". Also, the sentence is badly written ("suggest that the during task... activity is increasingly by feedback").
- Lines 164 and 165. The authors say that "using a local learning rule without feedback control show an increasing performance gap compared to those trained with feedback control and BPTT". The sentence could be interpreted as if the network is trained with feedback control AND BPTT (combined). A better wording would replace AND by OR.
- Section 3.4 is a bit hard to follow. It seems as if the authors are using an eligibility trace to train the RNN through BPTT, but this intermediate step might not be real BPTT as it is commonly used.
Literature issues:
- The work of Meulemans et al 2022b is credited with alleviating the temporal storage required by BPTT. While they did do that (and it is a good paper), I think that they based the memory load decrease on previous work (Bai et al., Deep Equilibrium Models, 2019), which if memory serves does use BP. The logic of my comment is that by training the equilibrium point of the network one can avoid the memory load, regardless of the training method.
- The connection between feedback-based learning and second order optimization is very closely related to Meulemans, et al. "A theoretical framework for target propagation" 2020. That paper mentions target propagation, but it is very similar to feedback based learning (as the authors probably can infer).
- This is a personal opinion, the authors do not need to take it into consideration: The biological plausibility claims seem to rely on the locality of the learning rules. While it's a requirement that learning rules should not break the laws of physics (or in this case basic biological knowledge), learning rules should at least have some basis on biology, which I am missing here. A brief mention of why would one think that the learning rules are close to biological ones would be welcome. My guess for this feedback-based work would be something with a temporal component such as temporal hebbian rules (ex: Aceituno et al. "Learning cortical hierarchies with temporal Hebbian updates." 2020)
Technical Quality: 3
Clarity: 3
Questions for Authors: I am not 100% sure, but I think that the work of Gilra and Gerstner had a very similar architecture. Could you mention what are the main differences?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: They did mention some limitations, but the key issue of whether this is general motor control (or shaping attractors) is not addressed.
Also, if the paper is about how feedback control guides credit assignment on biological circuits, being agnostic to the biological circuit is a problem, rather than a strength. To make a biological statement there should be some non trivial biological predictions, or some mention of what exactly does this bring to neuroscience that wasn't already known.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work.
> My main concern is that I the task chosen consists on bringing a system to a desired static target...
Thank you for flagging this - we also trained RNNs on an additional, commonly used task, where the network has to generate both the sine and cosine wave given some input frequency (see cartoon in extra figure E). Here, the relevant perturbation is a frequency shift. Our adaptation-to-perturbation results remain qualitatively the same (extra figures F and G), expanding the generality of our method. We will include these results as an additional supplementary figure in the final manuscript.
> Also, I fail to see the "biological circuits".
Thank you for flagging this. In the discussion, we state that we are agnostic to the exact source and implementation of the feedback signal. For the RNN itself, while it does not directly correspond 1-1 to a single brain region, it has been used in the literature as a useful model of a non-linear dynamical system such as the brain. Hence, our study is more about the brain as an integrated system than about a single brain region or circuit per se.
> .. The term "inference learning" is only used there and in one of the figure, but it is not clearly defined.
Thank you for flagging this. We are referring to the inference learning (IL) definition from Alonso et al. 2022, which is based on the predictive coding (PC) literature. In IL, neuron activities are modified first using some gradient, and only after this step are the weights modified. This contrasts with backpropagation, where only the weights are modified. As defining this term in Figure 2/introduction may not be ideal, we have omitted it in our updated manuscript and instead included the relevant citation in the discussion. We now instead refer simply to "approximate learning in the activity space", due to approximate gradient alignment (which was not explicitly shown in previous work).
Alonso, Nick, et al. "A theoretical framework for inference learning." Advances in Neural Information Processing Systems 35 (2022): 37335-37348.
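To make the two-phase scheme concrete, here is a minimal numerical sketch of inference learning on a toy linear predictive-coding layer (our own illustration with made-up dimensions, not code from the paper or from Alonso et al.): activities are relaxed first by gradient descent on an energy, and only afterwards are the weights updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy predictive-coding layer: activity a (initialised at the input x)
# predicts the target y through weights W; all sizes are arbitrary.
W = 0.1 * rng.normal(size=(3, 4))
x = rng.normal(size=4)   # input
y = rng.normal(size=3)   # target
a = x.copy()             # inferred activity

def energy(W, a):
    return 0.5 * np.sum((a - x) ** 2) + 0.5 * np.sum((y - W @ a) ** 2)

e0 = energy(W, a)

# Phase 1 (inference): modify the ACTIVITIES using the energy gradient,
# with the weights held fixed.
for _ in range(50):
    a -= 0.1 * ((a - x) - W.T @ (y - W @ a))
e1 = energy(W, a)

# Phase 2 (learning): only now modify the WEIGHTS, using the relaxed activities.
W = W + 0.02 * np.outer(y - W @ a, a)
e2 = energy(W, a)

print(e0 > e1 > e2)  # each phase lowers the energy -> True
```

In backpropagation the weight gradient would be computed directly, with no activity-relaxation phase; the contrast above is the point of the IL definition.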
> For the second finding ...
Thank you for flagging this - here, we refer to accuracy in terms of approximation of the gradient, not performance accuracy, as the referenced section 3.4 outlines.
> For the third, the second order gradient is not really injected ...
Thank you for flagging this - you are right! We have now changed this to read: "Feedback control enables more efficient weight updates during task adaptation due to the (implicit) incorporation of an adaptive, second-order gradient into the network dynamics."
> Line 142: it seems natural that ...
Thank you for flagging this - we now changed this to simply read: "Thus, during the task perturbation, the network activity is increasingly driven by the feedback signal."
> Lines 164 and 165. The authors say that "using a local learning rule ...
Thank you for flagging this - we corrected this and we thank the reviewer again for these important detailed comments!
> In 3.4 it is a bit hard to follow. It seems as if the authors are using an eligibility trace to train the RNN through BPTT. But this intermediate step might not be real BPTT as it is commonly used.
Thank you for flagging this! At the beginning of section 3.4, our main goal is to briefly outline why biologically plausible learning is particularly hard in RNNs, due to the need for recursive computation of the past network states using the network Jacobian. This, along with the problems of the currently proposed biologically plausible solutions, was explored in more detail in Marschall et al. (2020), where RFLO was shown to be quite a severe approximation of RTRL. This mainly serves as context for our main contribution in this section - that with feedback control, RNNs are driven more by their present than their past states, which simplifies the learning problem and allows for more accurate local learning.
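For readers less familiar with this approximation, a rough sketch (ours, with arbitrary sizes and noise standing in for input drive, not the paper's implementation): RTRL would propagate the sensitivity dh/dW through the full recurrent Jacobian at every step, whereas RFLO keeps only a local, leaky eligibility trace and combines it with a feedback-delivered error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5                                # arbitrary network size
W = rng.normal(size=(n, n)) / np.sqrt(n)
alpha = 0.1                          # dt / tau of a leaky rate unit

h = np.zeros(n)
trace = np.zeros(n)                  # RFLO eligibility trace (one per presynaptic unit)

for t in range(20):
    r = np.tanh(h) + 0.1 * rng.normal(size=n)   # noisy rates stand in for input drive
    h = (1 - alpha) * h + alpha * (W @ r)
    # RTRL would update dh/dW through the recurrent Jacobian
    # (1 - alpha) * I + alpha * W @ diag(tanh'(h)); RFLO drops the
    # recurrent part entirely, keeping only the leak:
    trace = (1 - alpha) * trace + alpha * r

# A feedback-delivered error signal (hypothetical values) then gives a
# fully local, outer-product weight update:
err = rng.normal(size=n)
dW = np.outer(err, trace)            # postsynaptic error x presynaptic trace
print(dW.shape)
```

Because the dropped Jacobian term carries the influence of past states, the claim that feedback control makes networks rely more on present states is what makes this truncation less damaging.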
> The work of Meulemans et al 2022b is credited with alleviating the temporal storage required by BPTT.
Thank you for flagging this. For simplification, as we are not exploring equilibrium systems in this work, we removed this specific claim in the updated manuscript.
> The connection between feedback-based learning and second order optimization ...
Thank you for flagging this citation omission! We now cite it in our updated section on related work.
> A brief mention of why would one think that the learning rules are close to biological ones would be welcome.
Thank you for this suggestion and citation, which has now been added to the discussion section of our updated manuscript.
> I am not 100%, but I think that the work of Gilra and Gerstner had a very similar architecture. Could you mention what are the main differences?
Thank you for flagging this! Our work is indeed similar to that of Gilra and Gerstner 2017 (eLife), with the main differences being: 1) we use a rate-based instead of a spiking network, which simplifies our analysis (e.g., of gradients and Hessians); 2) in their work, the feedback loop is trained separately from the recurrent network, while we pretrain them together; 3) in their work, in contrast to ours, the network output is close to the reference from the very start of learning due to "tighter" feedback control, which further eases the learning problem.
---
Rebuttal Comment 1.1:
Title: Improvements appreciated, title still misleading
Comment: I appreciate the clarifications and extra work, I will upgrade my score.
However, a key concern is that the title is still misleading. The general view, which is nicely explained here [1], is that RNNs are useful models for generating hypotheses for biological circuits, but they are not biological circuits themselves. Generally, there is a crucial difference between proposing a hypothesis/model and implying (in the title!) that the hypothesis is true. As an example, the works of Gilra and Gerstner use the words recurrent neural networks, not biological circuits.
Furthermore, quoting from your reply: "..., it has been used in the literature as a useful model of a non-linear dynamical system such as the brain". The fact that RNNs are general models for non-linear systems makes it even more strange to use the words "biological circuits" in the title, as this reply would indicate that the results are valid for any non-linear system, even beyond neuroscience. I know that this is not what the authors intended, but it highlights the point that RNNs are not biological circuits.
[1] Neural circuits as computational dynamical systems
---
Rebuttal 2:
Title: re: title still misleading
Comment: Thank you so much for reviewing our rebuttal.
We acknowledge your concern with our title. We had a couple of working titles, one of which was "recurrent neural networks" instead of "biological circuits", and we are happy to change it back to this.
We initially chose "biological circuits" because the motivation for our project was/is on the biological learning side, but we agree that revising it is a good idea. | Summary: Feedback controllers are ubiquitous in neuroscience but their functions are not fully understood. This paper studies how feedback control interplays with biologically plausible online learning on a standard motor control task. The authors show that:
- feedback control enables adaptation to task variations without any plasticity, by approximating the gradient of the loss with respect to the hidden states.
- it makes tractable approximations to RTRL more reliable by shrinking the recurrent Jacobian eigenvalues.
- it incorporates some second-order information in the weight updates, leading to faster learning.
Strengths: The paper studies an important understudied question, that is the interplay between feedback and learning.
The paper is overall well-written and is easy to read. The message is clearly delivered.
The experiments are carefully designed and well executed, and support the claims of the paper.
Overall, the paper will be an insightful read to the community.
Weaknesses: While the experiments are overall well executed, there are a few points that should be improved to make the paper's claims more robust:
- In the appendix, it is written that the learning rate is taken to be constant. To make claims about e.g. learning speed, the optimizer, in particular its learning rate, has to be tuned.
- Figure 5b: it is not clear from this experiment that RFLO-c contains some second-order information. The alignment with the 2nd-order gradient result is not convincing, as the estimated gradient is more aligned to the first-order gradient than to the second-order one. This experiment needs to be improved for it to support its claim. The BPTT-c baseline that I mention below may be a good starting point for further analysis, as its update is the gradient of a "controlled loss" (which is not the case for RFLO).
- A BPTT-c/RTRL-c baseline would be an interesting add to disambiguate between the role of feedback control and approximate gradient estimation through RFLO. This baseline would include feedback control in the recurrent neural network dynamics and optimize for the MSE loss at the output. This would be useful in e.g. Fig3b and Fig5b.
Technical Quality: 3
Clarity: 4
Questions for Authors: - l98-99: can the author clarify the link between the use of a local learning rule and the rapid adaptations shown in neuroscience studies?
- Fig1: a, b, c legends are missing in the figure.
- l140-141: "that" missing after "feedback control,"? typo "outout".
- Fig2: "approximate inference learning": what do the authors mean by inference learning? I could not find any definition.
- l167-168: "does" missing + typo for "adaptatation".
- Appendix A.3: the authors mention that they use Adam with weight decay. The standard practice is to use AdamW instead (c.f. the AdamW paper for more detail). Can the author confirm that they are using AdamW?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The paper is theoretical and its limitations have been properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
> In the appendix, it is written that the learning rate is taken to be constant. To make claims about e.g. learning speed, the optimizer, in particular its learning rate, has to be tuned.
Thank you for pointing this out. To address your concern, we have included a learning rate sweep for Figure 3b in the extra figures page (see figure B).
> Figure 5b: it is not clear from this experiment that RFLO-c contains some second-order information.
Thank you for raising this important point. To clarify, we are not claiming that our learning method performs exact 2nd-order optimization. Instead, it effectively interpolates between a 1st and 2nd order method, with the gradient showing a significant projection in both directions. In the extra figure A, we show the measured alignment between the 1st and 2nd order gradient as a baseline (yellow line). This baseline alignment is lower than the overlap of our gradient with both 1st and 2nd order directions, indicating that the observed alignment is not accidental. Additionally, having any curvature information allows the optimizer to better navigate the learning landscape and improve learning efficiency.
> A BPTT-c/RTRL-c baseline would be an interesting add to disambiguate between the role of feedback control and approximate gradient estimation through RFLO...
Thank you for the suggestion. We have added the BPTT-c baseline to Figure 3b in the extra figures page (see D) to help disambiguate the roles of feedback control and approximate gradient estimation through RFLO. It seems that BPTT+c performs worse than BPTT (recall that the test loss is reported in the open-loop setting, i.e., no feedback is provided to the network regardless of whether it was trained with it or not). This suggests that BPTT+c overfits to feedback control here, unlike RFLO+c. However, note extra figure B, which suggests that our choice of learning rate for BPTT+c may simply not be ideal. For Figure 5b, however, this baseline is not applicable because there is no plasticity in the weight matrix at that stage, and both gradients are computed in the activity space. While we could perform the same analysis in the weight space, calculating the Hessians of the full weight matrix is intractable due to poor scaling.
> l98-99: can the author clarify the link between the use of a local learning rule and the rapid adaptations shown in neuroscience studies?
Long-term skill learning is known to involve synaptic changes in the motor cortices (e.g. Roth et al. Neuron 2020). However, we don’t know how rapid learning is implemented in the brain, nor where it happens. In the neuroscience study that motivated our work (Perich et al. 2018), plasticity was not directly measured, due to the extreme difficulty of such measurements. Nevertheless, animals progressively learn to adapt better to persistent perturbations, which requires some form of rapid learning along the sensorimotor pathways. Thus, in the fine-tuning phase of this work, we enable plasticity in the recurrent layer of our RNNs, which serve as a model of a nonlinear dynamical system doing closed-loop control. Additionally, we use a learning rule that could feasibly be implemented in the brain, as it respects the locality constraint. Thank you for flagging this important point - we have now edited the relevant section in the main text to clarify this reasoning better.
> Fig1: a, b, c legends are missing in the figure.
> l140-141: "that" missing after "feedback control,"? typo "outout".
> l167-168: "does" missing + typo for "adaptatation".
Thank you for catching these errors!
> Fig2: "approximate inference learning": what do the authors mean by inference learning? I could not find any definition.
Thank you for flagging this. We are referring to the inference learning (IL) definition from Alonso et al. 2022, which is based on the predictive coding (PC) literature. In IL, neuron activities are modified first using some gradient, and only after this step are the weights modified. This contrasts with backpropagation, where only the weights are modified. As defining this term in Figure 2 may not be ideal, we have omitted it in our updated manuscript and instead included the relevant citation in the discussion.
Alonso, Nick, et al. "A theoretical framework for inference learning." Advances in Neural Information Processing Systems 35 (2022): 37335-37348.
> Appendix A.3: the authors mention that they use Adam with weight decay. The standard practice is to use AdamW instead (c.f. the AdamW paper for more detail). Can the author confirm that they are using AdamW?
We use Adam, and we explicitly add the regularization terms (i.e., weight decay) directly to the cost function. This approach is functionally different from the weight decay correction in AdamW. While we did not use AdamW, it does not impact our results as we do not report any performance gains with Adam and instead use SGD during fine-tuning. We now make this explicit in the detailed methods section.
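To illustrate the distinction (a self-contained toy with made-up numbers, not the actual training code): with L2 added to the loss, the decay term enters the gradient and is largely normalized away by Adam's adaptive denominator, whereas AdamW applies the decay outside that rescaling.

```python
import numpy as np

# Made-up scalar example. On Adam's first step the bias-corrected moments
# reduce to m_hat = g and v_hat = g**2.
w0, g, lam, lr, eps = 2.0, 0.5, 0.1, 0.01, 1e-8

def adam_first_step(grad, w, decoupled):
    w_new = w - lr * grad / (np.sqrt(grad ** 2) + eps)
    if decoupled:
        w_new -= lr * lam * w        # AdamW: decay applied outside the rescaling
    return w_new

# Adam with L2 in the loss: the decay term lam * w rides along in the gradient.
w_l2 = adam_first_step(g + lam * w0, w0, decoupled=False)

# AdamW: the same decay, but decoupled from the adaptive step.
w_adamw = adam_first_step(g, w0, decoupled=True)

print(w_l2, w_adamw)                 # the resulting parameters differ
```

This is why the two schemes are functionally different even with identical hyperparameters, though (as noted above) the choice does not affect the reported results.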
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: I have read the author's rebuttal and keep my score as it is.
Regarding the choice of the learning rate, it seems that the authors do not fully follow the standard ML practice:
- The optimal learning rate seems to be at the border of the lr range considered; it should not be the case.
- The figures comparing different methods should use the optimal learning rate for each method.
I trust that the authors will update their results accordingly.
---
Reply to Comment 1.1.1:
Title: Answer to answer to rebuttal
Comment: Thank you! We will update Figure 3b in the results accordingly given the learning rate sweep. | Summary: Recent work has shown that feedback signals can be critical to rapid adaptation in control tasks, and may explain how biological intelligence can make rapid adjustments when solving such tasks. This paper studies how feedback control achieves this. To do so, the authors train an RNN enhanced with feedback control on a common control task, and study how the feedback signal lead the network to achieve more rapid adjustments when perturbations are introduced. The 3 main findings are that the feedback signals align well with the optimal global gradient of the error, that they help the network better weigh current information (vs. less relevant past information) during perturbations, and that they indirectly inject second-order information to the RNN.
Strengths: - This work focuses on improving the theoretical understanding of an important method. Given that our understanding of many deep learning methods is woefully inadequate, such work is critically important for the field's development.
- The method and the results are clearly presented, the figures are excellent, and the writing is easy to follow.
Weaknesses: I am not familiar with feedback control and motor tasks; hence, I ask the AC to please take this into consideration to appropriately weigh my review. My remarks on the methods could be wrong or trivial. That said, I'll do my best to provide feedback.
- Several sections of the paper seem to just present results from previous work, including section 3.1 and the entirety of the methods section. This makes the contributions of this paper seem rather thin.
- I may be missing something, but some of the results seem minimally surprising. For example, in section 3.2, the authors state "...the feedback contribution to the overall network output increases during perturbation." But how could it not increase during perturbation? Isn't the network explicitly trained to use the feedback information to make corrections during perturbation? The same goes for the alignment between the feedback signal and the optimal global gradient, and the indirect introduction of second-order information: is it not by design that the network uses feedback to make corrections, and thus the larger the correction needed (i.e. the larger the optimal gradient) the larger the feedback signal? And is it not by design that second-order information gets introduced via the recurrent connections that enable the network to "save" information from previous timesteps in the hidden state?
- The authors claim that feedback control guides credit assignment in biological circuits, but uses BPTT during the pretraining phase of the RNN, which they acknowledge is not biologically plausible.
It seems to me that backprop is still doing much of the heavy lifting in terms of solving credit assignment, thus I'm not sure this claim is sufficiently justifiable. A more defensible claim given the current results may be that feedback control may guide motor adaptation in biological circuits.
Similarly, some parts of the intro and abstract strongly suggest that the presented method would perform credit assignment without suffering from the biological implausibilities of backpropagation (e.g. the abstract sets up the problem as "backpropagation is known to perform accurate credit assignment of error, how a similarly powerful process can be realized within the constraints of biological circuits remains largely unclear"), yet the actual method relies heavily on backpropagation.
- The experiments are performed on a single task, using a small single layer RNN with 400 hidden units, and therefore it's unclear whether the findings would scale to other tasks and larger architectures. Given that the primary goal of this paper is to improve understanding of an existing learning algorithm, and most of the analysis are performed via empirical testing, I believe it's important for the authors to demonstrate that their conclusions are robust over a wider range of tasks and hyperparameters/architectures.
Technical Quality: 2
Clarity: 4
Questions for Authors: - How does this work relate to hierarchical predictive coding, and to the feedback connections introduced by Hinton in [1] (and further explored by [2])?
- The learning setting presented in this work seems very similar to the setting of reinforcement learning, which also deals with control tasks and shifting distributions. Do you foresee these same results (i.e. feedback control improves performance) carrying over to some RL tasks? If not, what are the differences that limit these results from applying there?
[1] Hinton, G. (2022). The forward-forward algorithm: Some preliminary investigations. arXiv preprint arXiv:2212.13345.
[2] Ororbia, A., & Mali, A. (2023). The predictive forward-forward algorithm. arXiv preprint arXiv:2301.01452.
Confidence: 2
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: I'd like to see an expanded discussion in the limitations section regarding the remaining aspects of the feedback-enhanced RNN that remain biologically implausible. Particularly, the usage of BPTT in a work that aims to explain how biological credit assignment is performed is quite troubling for me, given its significant biological implausibility. Ideally, I think the authors should show that their results hold on a network trained using a biologically-plausible learning rule enhanced with feedback control.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
> Several sections of the paper seem to just present results from previous work ...
While our work builds upon the task adaptation findings of previous work (Feulner et al. 2022), it offers a substantially distinct focus and set of contributions, as outlined in our introduction. Our primary goal is to investigate the mechanistic interpretability of the model published by Feulner et al. (2022), and to connect their results to the broader literature on online learning in recurrent neural networks. Specifically, we address why rapid adaptation occurs without any plasticity and how this benefits approximate, local learning algorithms, whose limitations have been previously studied by Marschall et al. (2020). Beyond learning accuracy, we also analyse learning efficiency and show significant differences between learning with and without feedback during the fine-tuning stage. Thus, despite the similarity in the problem setup and the methods presented in the main text, our analysis is distinct from Feulner et al. (2022), where the emphasis was on modeling primate observations and data. Here, we ask a different question: based on what we know about local learning in recurrent neural networks, why does this model work so well?
> I may be missing something, but ...
Thank you for your insightful comments. While it may seem intuitive that the feedback contribution to the overall network output would increase during perturbation, it is important to note that this behaviour is not explicitly designed in our networks. The same applies to the incorporation of second-order information. The key contribution of our work lies in empirically showing that these mechanisms naturally emerge from our model. Even if some aspects may seem anticipated, demonstrating this behaviour in a practical and previously unexplored context is a significant step forward in understanding online learning in recurrent neural networks, and specifically how it relates to biologically plausible learning.
> The authors claim that feedback control guides credit assignment in biological circuits ...
Thank you for raising this important question. Yes, this approach reflects principles commonly observed in biological learning systems, specifically the different time scales of learning:
- Foundational Development: Certain core circuits and functions may develop on a slower timescale, guided by an innate basis shaped over evolutionary time. This provides a foundational framework for subsequent learning (analogous to BPTT pretraining).
- Rapid, Contextual Adaptation: Building upon this foundation, biological systems exhibit rapid adaptability to specific environments and tasks. This occurs on a much faster timescale, and is thought to occur through fine-tuning of pre-existing circuits (analogous to fine-tuning with a local learning rule).
For example, infants possess a general capacity for movement, learn to walk relatively slowly, but quickly adapt to different terrains like ice. While BPTT plays a significant role in credit assignment during pretraining, the rest of this work focuses on alternative, biologically plausible credit assignment during rapid, contextual adaptation, on perturbations not seen during pretraining.
> Given that the primary goal of this paper is to improve understanding of an existing learning algorithm ...
Thank you for raising this important point. We now include extra figures with this rebuttal that include the final adaptation to perturbation for different learning algorithms as a function of learning rate and hidden size (extra figures B+C). We also include results from an additional, commonly used synthetic task (where the network has to produce a sine and a cosine wave given some frequency, see cartoon in extra figure E), showing that our results remain qualitatively the same (extra figure F+G), expanding the generality of our method. We will add these results to the supplementary figures of the final paper.
> How does this work relate to hierarchical predictive coding, and to the feedback connections introduced by Hinton in [1] (and further explored by [2])?
In the FF network architecture, the activity vectors from layer l+1 at the previous timestep contribute to the activity vectors at layer l, which is a feature shared with our work. However, the nature of and reason for this contribution are different. In our case, the information communicated between the layers is a global error used to control the activity space online. In FF, the information communicated consists of the layer activities themselves, and serves to synchronize local learning across the different layers.
The follow-up work links FF to the predictive coding framework, where the error neurons are explicitly represented. We link our work to other relevant literature on predictive coding in our discussion.
> The learning setting presented in this work seems very similar to the setting of reinforcement learning...
Even though, in principle, the feedback control studied here resembles a reinforcement learning (RL) problem due to the interaction with the environment, there are some major differences from classic RL problems. For instance, unlike in many RL problems, the feedback here is an explicit function of the network's prediction. Moreover, during pretraining, we backpropagate through the environment itself, which is generally assumed to be impossible in most RL problems. However, our results may be relevant to certain RL subfields, such as movement control in robotics, and we briefly mention this in the "Impact Statement" section of our manuscript. It may also be interesting to extend our work to settings where feedback is less explicit (as we note in our limitations section), which would more closely link it to RL problems.
>I'd like to see an expanded discussion ...
Thank you for this suggestion. As explained above, here we use BPTT for pre-training of the networks only.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response.
I still hold the opinion that this work, while interesting, offers limited novelty over previous works it builds upon, and the findings are not sufficiently surprising given the context to be considered a major contribution. In addition, even though BPTT is used only for pre-training, there are no guarantees that the representations learned via BPTT and via biological learning rules are similar, and thus no guarantees that the findings during fine-tuning would equally apply to biologically learned representations.
However, many of my concerns have been addressed. Having read the reviews of other reviewers and the subsequent discussions, I now believe this work passes the bar for acceptance. I am changing my score from 3 to 5 to reflect this. | Summary: The paper studies the effect of feedback control on motor learning in recurrent neural networks, finding that feedback control improves learning performance and better aligns with the true gradient w.r.t. the task.
Strengths: - Alignment with the true gradient is an interesting result and helps explain why feedback works
- The authors study alignment from different perspectives (e.g. step-wise/full gradients, Newton method)
- The task the authors consider is widely used in monkey experiments, therefore it should be possible to adapt the conclusions to real data or use them to guide new experiments
Weaknesses: - The training setup is rather limited; it would be interesting to see training done for other tasks and architectures (or RNN sizes).
- The paper might benefit from some theoretical analysis of why the feedback signal aligns with the true gradient, although it’s not clear if that can be easily done.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the difference between RFLO and RFLO+c? Does the first lack the feedback term in Eq. 1? This should be clearly stated within Sections 2.2-2.4.
Line 141: “outout”
Line 141: “ is increasingly by”
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
> The training setup is rather limited; it would be interesting to see training done for other tasks and architectures (or RNN sizes).
Thank you for raising this important point. We also trained RNNs on an additional, commonly used task, where the network has to generate both the sine and cosine wave given some input frequency (see cartoon in extra figure E). Here, the relevant perturbation is a frequency shift. Our adaptation to perturbation results remain qualitatively the same (extra figure F and G), expanding the generality of our method. We also show results for different RNN sizes (extra figure C, final adaptation loss as a function of learning algorithm and hidden layer size) - where we mostly see the expected increase in accuracy post adaptation with increased network capacity. We will include both of these results as an additional supplementary figure in the final manuscript.
> The paper might benefit from some theoretical analysis of why the feedback signal aligns with the true gradient, although it’s not clear if that can be easily done.
Given that the feedback weights approximately align with the transpose of the forward weights and the feedback is a linear projection of the loss function derivative (the error), the (approximate) alignment of the feedback with the gradient of the network activities is somewhat expected. However, why this would aid local learning is not entirely clear. In Section 3.4 and Figure 4 of our work, we address this question empirically by demonstrating that the output of networks with feedback control relies less on past and more on present states. This reliance reduces the bias introduced by the severe Jacobian approximation of the eligibility trace, as studied by Marschall et al. (2020), making learning more accurate. We acknowledge that further theoretical analysis could provide additional insights and leave this exploration for future work.
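The "somewhat expected" alignment in the linear-readout case can be checked in a few lines (a sketch of our own, with arbitrary sizes): for an MSE loss, the gradient with respect to the activities is -W_out^T e, so feedback delivered through the transposed readout weights points exactly along the negative gradient, while random feedback weights generally do not.

```python
import numpy as np

rng = np.random.default_rng(0)
n_out, n_hid = 2, 50                 # arbitrary sizes
W_out = rng.normal(size=(n_out, n_hid)) / np.sqrt(n_hid)

h = rng.normal(size=n_hid)           # hidden activities
target = rng.normal(size=n_out)
err = target - W_out @ h             # output error e

# Gradient of the loss 0.5 * ||target - W_out h||^2 w.r.t. the activities:
grad_h = -W_out.T @ err

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

fb_transpose = W_out.T @ err                        # feedback via transposed readout
fb_random = rng.normal(size=(n_hid, n_out)) @ err   # random feedback weights

print(cosine(fb_transpose, -grad_h))    # 1.0 up to float precision
print(abs(cosine(fb_random, -grad_h)))  # typically far below 1
```

With only approximate weight symmetry, the cosine would drop below 1 but typically stay well above the random baseline, which is the regime the rebuttal describes.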
> What is the difference between RFLO and RFLO+c? Does the first lack the feedback term in Eq. 1? This should be clearly stated within Sections 2.2-2.4.
Thank you for highlighting the lack of clarity in Figure 3(b). You are correct in that +c indicates the presence of feedback in Eq.1 during adaptation to persistent perturbation via local or non-local learning. We now clarify this in both the figure caption and methods section 2.4:
"Note that when the same weight matrix $\boldsymbol{W^{fb}}$ is used for both control and learning, we denote this by adding $+c$ to the respective learning rule."
> Line 141: “outout” Line 141: “ is increasingly by”
Thank you for catching these typos!
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and additional experiments! Having read other reviews and responses, I'm keeping the score of 7 as I think it's an interesting work. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their careful review of our manuscript and all the insightful comments!
Here, we attach a single page with extra figures to support our individual rebuttals below.
Pdf: /pdf/78611e01e7397e8abbe130e031a6fcbd24668fab.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning | Accept (poster) | Summary: - This paper presents an information-theoretic approach to obtain a single graph fused from a multiplex graph, which preserves sufficient task-relevant information while removing task-irrelevant noise.
- A learnable graph augmentation strategy is also developed.
- The learned graph and representation can be applied to different types of tasks.
- The effectiveness is supported by extensive experimental results.
Strengths: - This paper is well-motivated.
- The authors find that each graph contains much unique task-relevant information, which is ignored by mainstream contrastive learning-based methods.
- This paper develops the multiplex graph non-redundancy principle, which lays the foundation for multiplex graph data processing.
- Two random and generative graph augmentation strategies are accordingly built to capture view-unique task information.
- The experimental results are promising.
- The framework demonstrates a clear advantage over existing methods, including advanced supervised approaches, highlighting its potential for broad application.
- This paper provides the code and all experimental settings for reproducing the results.
Weaknesses: - The difference between the existing non-redundancy principle and multiplex graph non-redundancy is unclear. Please clarify it.
- The proposed InfoMGF-LA runs out-of-memory on MAG data. The reason should be given.
- It is possible that the proposed method cannot handle real-world large-scale graphs. This should be addressed in the future and discussed in the conclusion part.
- The difference between the proposed method and DMG is unclear.
Technical Quality: 3
Clarity: 3
Questions for Authors: I list them in **Weaknesses**.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Some limitations are addressed in $\S5$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments.
1. **The difference between the existing non-redundancy principle and multiplex graph non-redundancy is unclear. Please clarify it.**
The primary distinction between multiplex graph non-redundancy and existing non-redundancy principles lies in our consideration of the complexity and uniqueness of graph-structured data. Existing work mainly focuses on non-redundancy in bimodal/view data of images and texts [1], considering that different views may have unique task-relevant information. For image and text data, the non-redundancy is intuitively easy to understand and verify. However, real-world graph data is often complex and not intuitive, making the analysis of non-redundancy very tricky and difficult. Therefore, we propose the concept of "The Unique Relevant Edge Ratio" and conduct empirical research, creatively explaining the non-redundant properties of graph data from a structural connectivity perspective. Furthermore, this also demonstrates that relying solely on decoupled node representation learning is insufficient for capturing both shared and task-specific information in multiplex graphs; graph structure refinement or relearning is necessary.
[1] Liang et al. "Factorized Contrastive Learning: Going beyond Multi-view Redundancy." NeurIPS (2023).
2. **The proposed InfoMGF-LA runs out-of-memory on MAG data. The reason should be given.**
As a large-scale real-world multiplex graph dataset, MAG’s PAP graph contains a substantial number of edges—10,067,799 to be precise. The primary reason InfoMGF cannot be successfully trained lies in its use of learnable generative augmentation, which necessitates probabilistic modeling of each edge in the original graphs (Equation 4 of our paper). During model optimization, the weight of each edge contributes to gradient calculations. The significant memory requirements render InfoMGF-LA impractical for training on MAG data. Despite this limitation, our alternative approach, InfoMGF-RA, still achieves remarkable performance. We believe that this outcome underscores the efficacy of our method.
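As a rough back-of-envelope sketch (our own illustrative arithmetic, not taken from the paper): even a single float32 logit per edge, plus its gradient, already scales linearly with $\vert E\vert$, before counting activations, optimizer state, and the decoder, which dominate in practice:

```python
def edge_logit_bytes(num_edges, bytes_per_param=4):
    # Edge-wise probabilistic modeling keeps one learnable logit per edge;
    # during backprop each logit also needs a gradient of the same size.
    return num_edges * 2 * bytes_per_param

# MAG's PAP graph has 10,067,799 edges: ~80.5 MB for logits + gradients
# alone, before activations, Adam's moment buffers (which double the
# parameter footprint again), and the decoder.
print(edge_logit_bytes(10_067_799))  # 80542392
```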
3. **It is possible that the proposed method cannot handle real-world large-scale graph. It should be addressed in the future and discussed in the conclusion part.**
We appreciate your concern regarding the scalability of our method. We will include a discussion on scalability in Section 5 “Conclusion and Limitation,” which will make our contributions and limitations more complete and accurate. To enable InfoMGF-LA to handle real-world large-scale graph data successfully, we will focus on developing scalable and learnable graph augmentation methods to replace the existing edge-wise probability modeling augmentation. According to current research [2], most deep graph generation methods still cannot escape the complexity of $\mathcal{O}(N^2)$ or $\mathcal{O}(\vert E\vert)$, making them unsuitable for large-scale graph data. This issue becomes even more challenging when considering node attribute information. In the future, we may explore using diffusion-based generative models to address large graph augmentation or generation.
[2] Zhu et al. "A Survey on Deep Graph Generation: Methods and applications." LOG (2022).
4. **The difference between the proposed method and DMG is unclear.**
The main difference between InfoMGF and DMG [3] lies in the task objectives and potential applicability. Specifically, we aim to address the reliability of multiplex graph structures by leveraging unsupervised graph structure learning to preserve task-relevant information while removing task-irrelevant noise. In contrast, DMG is a graph-fixed method that focuses solely on node representation learning. This limitation prevents DMG, and similar graph-fixed methods, from effectively handling noisy real-world graph data. InfoMGF, as a data-centric GSL framework, can achieve both graph structure refinement and representation learning, highlighting its broader range of potential applications.
[3] Mo et al. "Disentangled Multiplex Graph Representation Learning." ICML (2023).
Once again, we sincerely appreciate your time and effort in reviewing our paper. Your constructive criticism has been invaluable in refining our work, and we hope these adjustments and explanations can address your concerns satisfactorily.
---
Rebuttal Comment 1.1:
Title: Thanks for the raising
Comment: We are grateful to Reviewer gjsJ for suggesting this improvement! If you have any additional questions or concerns, please feel free to discuss them with us. | Summary: The paper introduces InfoMGF (Information-aware Unsupervised Multiplex Graph Fusion), a novel framework aimed at addressing the issue of graph structure reliability in Multiplex Graphs. The primary goal is to refine graph structures to eliminate noise and maximize task-relevant information. Theoretical analysis and comprehensive experimental results validate its effectiveness.
Strengths: 1. Originality: The paper addresses a critical gap in Unsupervised Multiplex Graph Learning (UMGL) by focusing on the reliability of graph structures, which is often overlooked in existing research.
2. Quality: The proposed InfoMGF framework effectively refines graph structures to eliminate noise and maximizes both view-shared and view-unique task-relevant information. Theoretical analyses provided in the paper validate the effectiveness of InfoMGF in capturing task-relevant information and improving graph fusion. Extensive experiments demonstrate that InfoMGF outperforms various baselines and even sophisticated supervised approaches in different downstream tasks.
3. Clarity: The paper is generally clearly written and well organized.
Weaknesses: 1. Scalability: The framework involves several steps. Though the paper provides the complexity analysis in Appendix for each step, it is still unclear what is the overall complexity.
2. Reproducibility: The authors share the code for reproducibility. However, I didn’t see the datasets.
3. Accuracy: The authors should check for the few grammatical and spelling errors that occur in the text.
Technical Quality: 3
Clarity: 3
Questions for Authors: As above.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments.
1. **Scalability: The framework involves several steps. Though the paper provides the complexity analysis in Appendix for each step, it is still unclear what is the overall complexity.**
We apologize for the unclear and incomplete description of the time complexity. We will revise the time complexity analysis and include the overall complexity. Here is the revised version:
We first analyze the time complexity of each component in InfoMGF. In this paragraph, let $V$, $N$, and $m$ represent the numbers of graphs, nodes, and edges, while $b_1$ and $b_2$ denote the batch sizes of the locality-sensitive $k$NN and the contrastive loss computation. The numbers of layers in the graph learner, the GCN graph encoder, and the non-linear projector are denoted as $L_1$, $L_2$, and $L_3$, respectively. The feature, hidden-layer, and representation dimensions are denoted as $d_f$, $d_h$, and $d$, respectively. We analyze the complexity of $k$NN and GCN in their scalable versions. Before training, scalable SGC is applied with a complexity of $\mathcal{O}(Vmrd_f)$, where $r$ is the aggregation order. During training, we first run the graph learner with scalable $k$NN, requiring $\mathcal{O}(VNL_1d_f+VNb_1d_f)$. For the GCN encoder and non-linear projector, the total complexity is $\mathcal{O}\left(VmL_2d_h+Vmd+VNL_2d_h^2+VNd_h(d+d_f)+VNL_3d^2\right)$. Within the graph augmentation module, the complexity of feature masking is $\mathcal{O}(Nd_f)$. The learnable generative graph augmentation in InfoMGF-LA has a complexity of $\mathcal{O}(VNd_fd_h+Vmd_h+VNd_fd)$, where the first two terms come from the augmented graph generator and the last one from the decoder. For InfoMGF-RA, the random edge dropping requires $\mathcal{O}(Vm)$ time. For the loss computation, the complexity is $\mathcal{O}(V^2Nb_2d)$.
To simplify the overall complexity, we denote the largest of $L_1$, $L_2$, and $L_3$ as $L$, the larger of $d_h$ and $d$ as $\hat{d}$, and the larger of $b_1$ and $b_2$ as $B$. Since the scalable SGC operation only needs to be performed once before training, its impact on training time is negligible; therefore, we only consider the total complexity during the training process. The overall complexity of both InfoMGF-RA and InfoMGF-LA is $\mathcal{O}(VmL\hat{d}+VNL\hat{d}^2+VNd_f(\hat{d}+L)+VNB(d_f+V\hat{d}))$, which is comparable to mainstream unsupervised GSL models, including our baselines. For example, SUBLIME [1] needs to be trained on each graph in a multiplex graph dataset, and its time complexity is $\mathcal{O}(VmL\hat{d}+VNL\hat{d}^2+VNd_f(\hat{d}+L)+VNB(d_f+\hat{d}))$, which differs only slightly in the last term from the time complexity of our method.
[1] Liu et al. "Towards Unsupervised Deep Graph Structure Learning." WWW (2022).
2. **Reproducibility: The authors share the code for reproducibility. However, I didn’t see the datasets.**
At the end of the abstract, there is a link to an Anonymous GitHub repository that contains the complete code for our method. All the used multiplex graph datasets are publicly available, and we have included the corresponding references for each dataset. We will release all the datasets after acceptance and include them along with the complete code in the GitHub repository.
3. **Accuracy: The authors should check for the few grammatical and spelling errors that occur in the text.**
We will carefully review and correct any grammar and spelling errors in the manuscript. We focus on improving the clarity and coherence of our writing while ensuring language accuracy.
Once again, we sincerely appreciate your time and effort in reviewing our paper. Your constructive criticism has been invaluable in refining our work, and we hope these adjustments and explanations could address your concerns satisfactorily.
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for the reply, I have no doubts anymore and I will improve my score.
---
Reply to Comment 1.1.1:
Title: Thanks for raising the score
Comment: We are grateful to Reviewer ei6T for suggesting this improvement! If you have any additional questions or concerns, please feel free to discuss them with us. | Summary: The paper introduces InfoMGF, an innovative framework for Unsupervised Multiplex Graph Learning (UMGL) that addresses the often-overlooked issue of graph structure reliability. InfoMGF refines graph structures by removing task-irrelevant noise and maximizing task-relevant information through mutual information maximization. Extensive experiments demonstrate its superior performance over various baselines and even some supervised methods, validating its effectiveness in enhancing node representation learning.
Strengths: - New Problem Formulation: The paper pioneers the investigation of graph structure reliability in multiplex graphs, which is a significant advancement in the field. Multiplex graphs enrich the representation of real-world systems, and their analysis is inherently difficult.
- Theoretical Analysis: The several theorems are quite interesting and provide a solid foundation for the proposed method. In particular, Theorem 3 proves the necessity of fusing multiplex graphs.
- Extensive Evaluation: The framework is thoroughly tested against various state-of-the-art methods on both node clustering and classification tasks, showcasing its robustness and effectiveness across different tasks. The comparison methods are representative and new.
Weaknesses: - Robustness: Fig. 4 shows that the proposed method is very robust to structure noise. However, more analysis is needed. Both InfoMGF and SUBLIME are structure learning methods. Compared to InfoMGF, why does the performance of SUBLIME degrade rapidly in the case of edge deletions?
- Clarity: The paper develops two algorithms: InfoMGF-RA and InfoMGF-LA. However, it is a little confusing what the difference is between their objective functions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. There are some small errors in Algorithm 1. In particular, its title is InfoMGF-LA; however, line 11 also includes the operation for InfoMGF-RA.
2. The proposed method depends on the assumption of optimal augmentation. How to guarantee that the used feature and structure augmentations are optimal? It is still unclear to me.
3. The authors discuss the robustness against structure noise. How about feature noise? Could you share your intuition on this matter?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper discusses the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments.
**W1. Fig. 4 shows that the proposed method is very robust to structure noise. However, more analysis is needed. Both InfoMGF and SUBLIME are structure learning methods. Compared to InfoMGF, why does the performance of SUBLIME degrade rapidly in the case of edge deletions?**
As you can see in Fig.4, our model is significantly more robust than SUBLIME when deleting edges. This can be attributed to the following factors:
1. When edges are deleted, the task-relevant information contained in the graph tends to decrease. SUBLIME, as a single graph method, cannot fully leverage the task-relevant information from different graphs. In contrast, InfoMGF considers multiplex graph non-redundancy, enabling it to capture both shared and unique task-relevant information from all graphs. Consequently, when the edge deletion ratio is substantial, InfoMGF outperforms SUBLIME significantly.
2. Fig.4 shows the robustness comparison between InfoMGF-LA and other methods. Compared to existing methods (including SUBLIME) that use random graph augmentation, the novel learnable generative augmentation used in InfoMGF-LA is more reliable and interpretable. This approach retains more task-relevant information while reducing irrelevant noise. As a result, our method is more robust across various scenarios.
**W2 & Q1. The paper develops two algorithms: InfoMGF-RA and InfoMGF-LA. However, it is a little confusing what the difference is between their objective functions. There are some small errors in Algorithm 1. In particular, its title is InfoMGF-LA; however, line 11 also includes the operation for InfoMGF-RA.**
We apologize for any confusion. InfoMGF-RA and InfoMGF-LA have the same optimization objective (Equation 10), with the primary difference lying in the generation of augmented graphs. Graph augmentation includes both feature and structure augmentation. Notably, both RA and LA use simple yet effective random feature masking for feature augmentation. However, for structure augmentation, RA and LA employ random edge dropping and learnable generative augmentation, respectively. For InfoMGF-RA, both feature and structure augmentations are non-parametric methods, so its overall objective is Equation 10. In contrast, InfoMGF-LA requires training the learnable generative augmentation module (optimized by Equation 11). Therefore, we propose an **alternating optimization strategy** to iteratively optimize the total model loss ($\mathcal{L}$) and the augmented graph generator loss ($\mathcal{L}_{gen}$), which is described in lines 233-237 of our paper.
Regarding your question about Algorithm 1, there seems to be a misunderstanding. Since InfoMGF-LA also requires random feature masking to generate augmented features, the description in Algorithm 1 is accurate. To make the distinction between InfoMGF-RA and InfoMGF-LA clearer, we will include the algorithm for InfoMGF-RA in the final version.
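For readers unfamiliar with the pattern, the alternating strategy above follows standard block-coordinate optimization: each epoch first updates one block with the other fixed, then vice versa. A minimal, self-contained toy sketch of that pattern (the quadratic objective and closed-form updates are purely illustrative and are not the paper's losses):

```python
def alternating_minimize(x=5.0, y=-3.0, epochs=200):
    # Toy stand-in for the two-pass-per-epoch scheme: pass 1 updates the
    # "model" block x with the "generator" block y fixed, pass 2 updates y
    # with x fixed, on f(x, y) = (x - y)**2 + 0.1 * y**2.
    for _ in range(epochs):
        x = y          # argmin_x (x - y)**2 with y fixed
        y = x / 1.1    # argmin_y: -2(x - y) + 0.2*y = 0  =>  y = x / 1.1
    return x, y
```

Each block update can only decrease the objective, so the iterates converge to the minimizer (0, 0) here; in InfoMGF-LA, the two blocks would be the model parameters (for $\mathcal{L}$) and the augmented-graph generator (for $\mathcal{L}_{gen}$).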
**Q2. The proposed method depends on the assumption of optimal augmentation. How to guarantee that the used feature and structure augmentations are optimal?**
In fact, due to the unsupervised setting where label information is unavailable, obtaining a $G_v^\prime$ with rigorous theoretical guarantees remains challenging. Therefore, we have to approach optimal augmentation from a practical perspective.
Previous research often uses random augmentation, assuming that most of the perturbed information is task-irrelevant. Additionally, existing methods commonly augment features and structures simultaneously, as both typically contain rich task-relevant information. In contrast, our approach takes further steps:
1. **Feature Augmentation**: We continue to employ simple yet effective random feature masking. Extensive pretraining studies in graph learning [1], CV [2], and NLP [3] have demonstrated that random masking performs remarkably well when augmenting information-dense data, without requiring intricate techniques.
2. **Structure Augmentation**: Random edge dropping may lack reliability and interpretability, so we propose the learnable generative augmentation, which implements edge-wise probability modeling in the original graphs. To effectively train the augmented graph generator, we design the loss function $\mathcal{L}_{gen}$ based on **the principle of optimal augmentation**. The reconstruction term constrains the augmented graph to retain essential information, while the mutual information minimization term aims to reduce irrelevant noise within the augmented graph. We delve into the optimization process and advantages of our proposed augmentation in the relevant section of our paper (lines 183-198).
In summary, due to the unavailability of label information in unsupervised tasks, we cannot achieve rigorously theoretically guaranteed augmentation. Nevertheless, we adhere to the principle of optimal augmentation to develop learnable generative augmentation and its optimization objectives. Extensive experiments further validate the effectiveness of our approach.
[1] "Graph Contrastive Learning with Augmentations." NeurIPS (2020).
[2] "Masked Autoencoders Are Scalable Vision Learners." CVPR (2022).
[3] "UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training." ICML (2020).
**Q3. The authors discuss the robustness against structure noise. How about feature noise? Could you share your intuition on this matter?**
Following your advice, we add some additional experiments on this issue, where the experimental results and analysis are in the PDF file of the Global Author Rebuttal.
Once again, we sincerely appreciate your time and effort in reviewing our paper. Your constructive criticism has been invaluable in refining our work, and we hope these adjustments and explanations can address your concerns satisfactorily. | Summary: The authors develop a novel approach to improve Unsupervised Multiplex Graph Learning by refining graph structures to eliminate noise and maximize relevant information. The method utilizes mutual information maximization to integrate multiple graph views effectively. Theoretical validation and comprehensive experiments show that the proposed method outperforms existing methods.
Strengths: 1. Multiplex graphs provide an efficient representation of complex systems. This paper focuses on the non-redundancy issue, which is a new perspective and opens up a new avenue for future research.
2. The proposed method adopts an unsupervised and generalized approach. Its performance surpasses several supervised approaches, underscoring its potential for practical applications.
3. The framework’s performance is validated through comprehensive experiments and compared with more than 20 methods.
4. Visualization is also a strong point of this paper. The figures of node correlation, heatmaps of the subgraph, and unique relevant edge ratio are very illustrative.
Weaknesses: 1. According to Table 1 and 2, it seems that the proposed method improves more on clustering than classification.
2. Overall, this paper is well-organized. However, the writing could be improved in terms of tone and word choice.
3. There are too many notations, which are confusing.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there any explanation about why the method performs better on clustering than classification?
2. How to solve the above issue?
3. The font of $k$ in the caption of Fig. 5a is not correct.
4. In the Appendix, the authors prove Proposition 1; however, there is no corresponding statement in the main paper.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments.
**Q1 & W1 & Q2. According to Table 1 and 2, it seems that the proposed method improves more on clustering than classification. Is there any explanation about why the method performs better on clustering than classification? How to solve the above issue?**
Thanks for your concern. Your observations are accurate and well-founded. The larger improvement of our method on clustering compared to classification can be attributed to the following factors:
1. **Differences in Baselines**: In the clustering task, we primarily compare our method against unsupervised multiplex graph learning (UMGL) methods. As summarized in the Introduction, existing UMGL methods are graph-fixed, which means they struggle to handle real-world noisy graph data. In contrast, for a comprehensive evaluation, we compare our method against a wide range of supervised/unsupervised single graph structure learning (GSL) methods in the classification task. These GSL methods refine the graph structure, leading to better baseline performances. This partially explains the substantial improvement in clustering compared to classification.
2. **Incorporating Pre-trained Knowledge**: In the clustering task, we evaluate the node representations $Z$ of the learned fused graph by K-means. Notably, $Z$ not only contains information from the learned fused graph but also incorporates ample pre-trained knowledge from the GCN encoder parameters. This pre-trained knowledge contains more underlying information inside multiplex graph data, thereby enhancing node clustering performance. However, in the classification task, we retrain a new GCN on the learned fused graph to align with existing GSL paradigms, which means that the classification performance reflects only the quality of the learned graph and lacks the richer pre-trained knowledge. This also explains why the improvement in classification is smaller compared to clustering.
**Future Approaches**: In the future, we will leverage transfer learning to migrate pre-trained knowledge to a broader range of downstream tasks, including classification. For instance, we can initialize the classifier for classification tasks using pre-trained GCN parameters and fine-tune it to improve the performance. Additionally, we will also explore cross-domain transfer learning, enhancing our method’s out-of-distribution detection and generalization capabilities for handling real-world graph data. Furthermore, we will delve into multiplex graph structure learning in supervised or semi-supervised scenarios. By leveraging annotated information, we aim to improve the quality of the learned graphs.
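The clustering evaluation described above (frozen representations $Z$ clustered by K-means) can be sketched with a minimal Lloyd's iteration; the deterministic farthest-point initialization and toy data below are our own simplifications for reproducibility, not the paper's setup:

```python
import numpy as np

def kmeans_labels(Z, k, iters=50):
    # Deterministic farthest-point initialization for reproducibility,
    # then standard Lloyd's assignment/update steps.
    centers = [Z[0]]
    for _ in range(k - 1):
        d = np.min([((Z - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(Z[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        dist = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for c in range(k):
            pts = Z[labels == c]
            if len(pts):
                centers[c] = pts.mean(0)
    return labels
```

On two well-separated blobs this recovers the blob partition; downstream, one would score `labels` against ground truth with metrics such as NMI or ARI.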
**W3. There are too many notations, which are confusing.**
Sorry for the trouble caused by our symbols; we agree that too many notations would confuse readers. Hence, we summarize the frequently used notations in our paper as a **Table** in the PDF file of the Global Author Rebuttal. Meanwhile, we also explain some important symbols below:
1. **Graph symbols.** Such as $G_v=\\{A_v,X\\}$, $G^s_v=\\{A_v^s,X\\}$, and $G^{\prime}_v=\\{A_v^{\prime},X^{\prime} \\}$ to denote the original, refined, and augmented $v$-th graph respectively. The learned fused graph is denoted as $G^s=\\{A^s,X\\}$. It is worth noting that $A^s$ and all $A^s_v$ are generated via the graph learner.
2. **Vectors and matrices used in graph learning.** Most notations appear in the graph learning stage. For instance, we use $X_i^{v}$ to denote the view-specific feature of the $i$-th node in the $v$-th graph. $H^v$ is the node embedding of the $v$-th original graph in the graph learner, which is used to generate the refined graph. $Z^{v}=\mathrm{GCN}(A_{v}^{s}, X)$ is the node representation of the $v$-th refined graph from the GCN encoder, which is used for the loss calculation. These notations cannot be omitted given their diverse uses.
3. **Units for optimization.** During optimization, we introduce the mutual information lower/upper bound and loss function, which combine the aforementioned notations. Superscripts and subscripts are important for distinguishing different views and different nodes.
We will include the “Frequently Used Notations” table and explanations of the symbols in the Appendix. Thanks for your constructive suggestion.
**Q3. The font of k in the caption of fig.5a is not correct.**
Sorry for the mistake. We will correct this issue in the final version of our paper and conduct a thorough review to ensure there are no typographical or language errors.
**Q4. In the Appendix, the authors proof Proposition 1, however, there is no corresponding one in the main paper.**
Thank you for your valuable feedback. We will incorporate the corresponding proposition into the main paper, which will make our theoretical focus clearer and more prominent.
Once again, we sincerely appreciate your time and effort in reviewing our paper. Your constructive criticism has been invaluable in refining our work, and we are more than happy to add clarifications to address any additional recommendations and reviews from you! | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their valuable and insightful comments. We are glad that the reviewers find that our studied problem is novel and significant. Here, we provide a PDF file to further address the reviewers’ concerns regarding the clarity of the paper and the completeness of the experiments.
**P1: The robustness analysis against feature noise. (To Reviewer cF8g)**
Figure 5 in the PDF file shows the performance of InfoMGF and various baselines on the ACM dataset when injecting random feature noise. It can be observed that InfoMGF exhibits excellent robustness against feature noise, while the performance of SUBLIME degrades rapidly. As a single graph structure learning method, SUBLIME’s performance heavily relies on the quality of node features. In contrast, our method can directly optimize task-relevant information in multi-view graph structures (e.g., edges shared across multiple graphs are likely to be shared task-relevant information, which can be directly learned through $\mathcal{L}_s$), thereby reducing dependence on node features. Consequently, InfoMGF demonstrates superior robustness against feature noise.
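The intuition that edges appearing in every view are candidates for shared task-relevant structure can be made concrete with a tiny sketch (the dense boolean adjacency matrices and function name are illustrative assumptions, not the paper's code):

```python
import numpy as np

def shared_edge_mask(adjs):
    # Intersection of edge sets across all views of the multiplex graph:
    # an edge surviving in every view is a strong candidate for shared
    # task-relevant structure, learnable directly from the structures.
    mask = adjs[0] > 0
    for A in adjs[1:]:
        mask &= A > 0
    return mask

A1 = np.array([[0, 1, 1],
               [1, 0, 0],
               [1, 0, 0]])
A2 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])
# Edge (0, 1) is the only edge present in both views.
print(shared_edge_mask([A1, A2]).astype(int))
```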
**P2: The algorithmic process for InfoMGF-RA. (To Reviewer cF8g)**
Thank you for your question about the specific differences between InfoMGF-RA and InfoMGF-LA. In the PDF file, Algorithm 1 provides the algorithmic process for InfoMGF-RA. You can compare it with the algorithmic process for InfoMGF-LA in the Appendix of our original paper for a better understanding of their differences. Specifically, the total loss for both is the same, with the main difference lying in the graph structure augmentation methods. Additionally, InfoMGF-LA employs an alternating optimization strategy, which involves two optimization passes per epoch to optimize both the total loss and the augmented graph generator loss. In contrast, the optimization process for InfoMGF-RA is simpler, requiring only the optimization of the total loss.
**P3: Detailed explanations of frequently used notations. (To Reviewer 2Joh)**
Thank you for your suggestions regarding the notation definitions. To make the symbols used in the paper clearer and easier to understand, we have provided explanations in Table 4 of the PDF file under “Frequently used notations”. We will also include this table in the Appendix of the final version.
**P4: The scalability of InfoMGF. (To Reviewers ei6T and gjsJ)**
Thank you for your concerns about scalability. Here, we provide the overall complexity of our method. Let $V$, $N$, $m$, and $d_f$ represent the numbers of graphs, nodes, edges, and features. To simplify the overall complexity, we denote the largest of the layer numbers of the graph learner, the GCN graph encoder, and the non-linear projector as $L$, the larger of the hidden-layer dimension and the representation dimension as $\hat{d}$, and the larger of the batch sizes of the locality-sensitive $k$NN and the contrastive loss as $B$. The overall complexity of both InfoMGF-RA and InfoMGF-LA during the training process is $\mathcal{O}(VmL\hat{d}+VNL\hat{d}^2+VNd_f(\hat{d}+L)+VNB(d_f+V\hat{d}))$, which is comparable to mainstream unsupervised GSL models, including our baselines. For example, SUBLIME [1] needs to be trained on each graph in a multiplex graph dataset, and its time complexity is $\mathcal{O}(VmL\hat{d}+VNL\hat{d}^2+VNd_f(\hat{d}+L)+VNB(d_f+\hat{d}))$, which differs only slightly in the last term from the time complexity of our method.
[1] Liu et al. "Towards Unsupervised Deep Graph Structure Learning." WWW (2022).
Pdf: /pdf/72ad9522e144fbe50795f1d1ef13bd69c467d01c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Co-occurrence is not Factual Association in Language Models | Accept (spotlight) | Summary: This paper distinguishes two forms of knowledge learning in the model:
1. co-occurrence statistics: from modeling the co-occurrence of entities in the text.
2. factual associations: from modeling entity relations established through implicit associations.
They synthesize two datasets where knowledge is represented in the above two ways. They show that models that learn factual associations can generalize better than models that learn co-occurrence statistics. They also show that models that learn from factual associations can utilize the knowledge better for reasoning.
They further study where the knowledge of these two different representations is stored in the model. They show that co-occurrence statistics are stored in the middle layer, while factual associations are stored in the lower layers. Accordingly, they propose to reset the middle layer while training the model. They show that this approach makes models generalize and do reasoning better.
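The reset idea summarized above (re-initializing a chosen band of layers during training while leaving the others intact) can be illustrated with a minimal, framework-free sketch; the flat-list parameters and function name are invented for illustration and do not reflect the paper's implementation:

```python
def reset_layers(params, layers_to_reset, init_fn):
    # params: mapping layer index -> that layer's weights (toy lists here).
    # Re-initialize only the chosen (e.g. middle) layers, keeping the
    # lower layers, where factual associations are said to reside, intact.
    for idx in layers_to_reset:
        params[idx] = init_fn(len(params[idx]))
    return params

params = {0: [0.5, -0.2], 1: [1.0, 1.0], 2: [0.3, 0.7]}
reset_layers(params, layers_to_reset=[1], init_fn=lambda n: [0.0] * n)
# layer 1 is re-initialized; layers 0 and 2 are untouched
```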
Strengths: 1. The identification of the two forms of knowledge learning sheds valuable insight into how models generalize from the training data.
2. They create a dataset and an associated experiment, which can be used for further studies in the same direction.
3. They study where the knowledge is stored in the model. According to the findings, they propose a simple but effective approach to improve the models’ generalization ability and utilization of the knowledge for reasoning.
4. Their experiment is comprehensive. They utilize a benchmark dataset MQuAKE-T and they include fine-tuning only the lower layers as a baseline.
Because studying how language models acquire knowledge from training data is crucial for developing better training paradigms, and because I found this paper solid and well-presented, I highly recommend it.
Weaknesses: 1. They only experiment with MQuAKE-T where each sentence encodes a piece of knowledge (subject-relation-object). The authors could experiment with some more realistic settings where a single sentence contains more than one piece of knowledge.
2. It would be interesting to see how model scaling affects the behavior. The authors could experiment with models of different sizes in the same model family.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Do you think models learn these two forms of knowledge differently when they are trained from scratch?
2. The paper "On Retrieval Augmentation and the Limitations of Language Model Training" could be related to your work.
3. Also, the title could be more informative.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes, it's addressed in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments, and we really appreciate the suggestions. Please find our response to the comments below:
**More than one piece of knowledge in a sentence**: we agree that the current datasets are limited in the number of pieces of knowledge in a sentence. We thank the reviewer for pointing out the limitation. We will consider expanding the dataset to include more complex sentences with multiple pieces of knowledge in future work.
**Model scaling**: we already have two different size models in the same model family (Llama 3 8B and 70B) in the experiments, and our main findings seem consistent across model sizes. We will consider additional experiments with even larger models (e.g., Llama 3.1 405B) in the revision of the paper.
**Training from scratch**: when trained from scratch, we suspect that the model could be more prone to relying on co-occurrence statistics due to a lack of basic semantic knowledge to understand the relationships in the text. Also, evaluating knowledge learning may be harder in this case, because models trained from scratch may not have sufficient in-context learning ability, making it difficult to evaluate the model's knowledge with reasoning tasks.
**Missing reference**: we thank the reviewer for pointing out the missing reference. We will include the reference in the revision of the paper.
**Improving the title of the paper**: we agree that the title could be improved to better reflect the main contribution of the paper. We will revise the titles in the revision of the paper.
We will also appreciate any further feedback or discussion from the reviewer, and are glad to provide additional information and clarification.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response, which addressed my questions. I would like to keep my score the same. | Summary: This paper studies how language models acquires factual knowledge during finetuning. It shows that narrative input tends to teach a model co-occurrence between entities, while referencing input teaches more about factual association. Models that learn factual association generalizes better to various question answering tasks than models that learn co-occurrence, especially for multi-hop reasoning tasks. By resetting different layers to the pretrained weights in models, the authors show that co-occurrence is mostly learned by the middle layers, while factual association is mostly learned by the lower layers. Based on this observation, the authors propose to reset the upper 2/3 layers to learn factual association when finetuning models on narrative input.
Strengths: - This paper studies how factual knowledge is learned by language models training on pure textual data, which is novel to my knowledge. The authors delivered clear lessons based on synthetic data and per-layer parameter ablation, and provided two solid solutions for real-world reasoning datasets. These lessons are important to the community of language models and reasoning.
- The paper is well structured and very easy to read. There are no typos or grammar errors.
Weaknesses: - The analysis of this paper is limited to triplets, which do not represent all kinds of knowledge in reasoning tasks. Can you extend the conclusions to more general knowledge forms?
- The authors do not provide enough insight into why narrative input tends to teach co-occurrence statistics. The only insight I can find in the paper is that co-occurrence statistics can be learned faster (Lines 245-247). I would suggest the authors discuss this more in Section 3.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The title does not clearly reflect the core contribution of this paper. May consider “How do language models learn factual association in finetuning?” Same for Section 3 header.
- Is it possible that language models learn factual association better from referencing input because it provides the same context for synonyms? I hypothesize that understanding that “is” denotes identity would be harder than learning synonyms under the same context.
- Line 142-143: Are non-trivial positive comparison ratio and negation ratio sufficient to verify factual association? I feel they are only necessary but not sufficient.
- Figure 2: What is log likelihood ratio here? It is hard to get an intuition of what different scales mean here.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. The authors have thoroughly discussed limitations of their analysis in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments, and we really appreciate the suggestions. Please find our response to the comments below:
**Forms of knowledge**: we agree that it will be interesting to study other forms of knowledge, such as quantitative knowledge, procedural knowledge, and probabilistic knowledge. Due to the limited scope of the current paper, we will consider expanding the study to other forms of knowledge in future work.
**Reason for learning co-occurrence statistics from narrative text**: there are quite a few works showing that language models often learn to rely on simple statistical correlations such as word co-occurrence and lexical bias in the data, as we have discussed in the related work section. This is likely due to a general bias in neural network models to learn simple patterns faster and earlier than complex patterns, known as shortcut learning or "spectral bias". We will add more discussion on this in Section 3 in the revision of the paper.
**Improving the title of the paper**: we agree that the title could be improved to better reflect the main contribution of the paper. We will consider revising the titles in the revision of the paper.
**Same context for synonyms**: we are unsure that we understand the question correctly; if "synonyms" refers to entities of the same type, then both the narrative and referencing text have the same context (template) for entities from different triplets. The narrative text also has 10 paraphrasing variations for each triplet, which should provide sufficient contrast between different entities, and sufficient semantic variation within the same entity (triplet), for the model to recognize the relation to learn.
**Purpose of the probing result**: as the reviewer said, the probing result alone is not sufficient to determine factual association. We use probing results to qualitatively show that the behavior of the model trained on referencing text is consistent with true factual associations and inconsistent with co-occurrence statistics, and therefore that the model likely learned true factual associations. We then used reasoning evaluations in the section that follows to confirm that this is the case.
**Log likelihood ratio**: we thank the reviewer for pointing out the confusion, as the lines before Figure 2 only define the likelihood ratio, missing an explanation of the log likelihood ratio used in the graph. The log likelihood ratio is simply the comparison ratio or negation ratio in log scale. We will clarify this in the revision of the paper.
We will also appreciate any further feedback or discussion from the reviewer, and are glad to provide additional information and clarification.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response. Since my score is already positive, I will keep my score. | Summary: The work investigates the deficiencies of pretrained language models in learning factual knowledge, highlighting that these models tend to learn word co-occurrence statistics rather than true factual associations. The authors find that language models, when dealing with explicit relationships, are prone to merely memorize word co-occurrences and perform poorly on tasks that require reasoning.
Strengths: * This work shows that language models tend to learn word co-occurrence statistics instead of true factual associations. This finding is important for improving the knowledge learning of language models.
* The authors propose two methods to improve the learning of factual associations. First, by using text with implicit rather than explicit factual associations, they force the model to learn these associations. Second, by actively forgetting the learned co-occurrence statistics, they allow the model to better learn and retain factual associations.
* The proposed strategies significantly improve the model's performance in multi-hop reasoning tasks on both synthetic and real-world datasets, proving their effectiveness.
Weaknesses: * The generalization across different domains. This work synthesizes Country-City-Animal data, which is somewhat limited.
* Reasoning or memory? The purpose of implicit training is to force the model to understand factual associations through indirect connections, thereby enhancing its reasoning abilities. Because of this training pattern, the approach helps the model perform better on complex, multi-step reasoning questions rather than on simple memory tasks. However, it does not directly prove that the referencing method yields better memorization than the co-occurrence method. Moreover, for simple QA tasks, the Referencing method performs worse than the Narrative method. Different test tasks should be designed to verify knowledge retention: for instance, adding more noise and interference during simple QA tests to evaluate the robustness of memory, or designing memory retrieval tasks that do not require complex reasoning to ensure that the tests only assess the model's ability to recall facts.
* Although it mentions that co-occurrence statistics and factual associations are parameterized in different layers of the Transformer model, it lacks a deep explanation of the specific mechanisms and reasons behind these phenomena.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments, and we really appreciate the suggestions. Please find our response to the comments below:
**Generalization of results across domains**: although we performed analysis mainly with the synthetic Country-City-Animals dataset (in order to ensure the knowledge is completely novel to the model, which helps us to cleanly ablate and evaluate learned knowledge in analysis), we also experimented on real-world datasets such as MQuAKE and 2WikiMultiHopQA in Section 4 which validated the effectiveness of our method.
**Reasoning or memory**: we agree that memorization and reasoning (generalization) are two different aspects of knowledge learning. We thank the reviewer for raising this point and we will more clearly discuss the connection and difference between the two aspects in the revision of the paper.
In existing work on knowledge editing, there have been multiple metrics designed to evaluate memorization, i.e., the model's ability to recall edited facts, such as varying the context and/or introducing semantic variations, similar to what the reviewer suggested. In our work, we mainly focus on the distinction between **memorization vs. reasoning** (generalization) and show that learned knowledge does not automatically generalize well to reasoning despite good memorization and the reasons behind it. We believe our work is a step forward from existing work on knowledge memorization and provides a new perspective on knowledge learning.
**Explanation of the parameterization phenomenon**: we agree that investigating the root cause of the different parameterization would help better understand the LM's behavior in knowledge learning. Model inspection and interpretability methods could be leveraged in this direction. We would leave this as future work and will discuss this in the limitation section of the paper.
We will also appreciate any further feedback or discussion from the reviewer, and are glad to provide additional information and clarification.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thanks for your rebuttal and it addresses most of my question. I will keep the same score. | Summary: This paper investigates the learning of factual knowledge in pretrained language models, distinguishing between knowledge represented as word co-occurrence statistics and true factual associations. The authors find that language models tend to learn co-occurrence statistics, which do not generalize well to reasoning tasks, while factual associations, which generalize better, can be harder to learn. They propose two strategies to improve the learning of factual associations: training on text with implicit associations and using a method called active forgetting to discard learned co-occurrence statistics. Their experiments on synthetic and real-world datasets demonstrate that these strategies significantly enhance the models' ability to generalize factual knowledge in various reasoning scenarios. The paper includes a thorough layer-wise analysis of knowledge parameterization in transformer models finding different localization for co-occurence statistics vs factual knowledge in model weights.
Strengths: I think the strengths of this paper are the following contributions:
- Identification of Knowledge Representations: The paper clearly distinguishes between two forms of knowledge representation in language models: co-occurrence statistics and true factual associations. This distinction is crucial for understanding the limitations of current models. Additionally, the detailed analysis of how co-occurrence statistics and factual associations are parameterized across different layers of transformer models provides valuable insights into the internal workings of pretrained models.
- Empirical Validation: The authors conduct comprehensive experiments using synthetic and real-world datasets to validate their claims. They show that models trained on implicit associations generalize better to reasoning tasks than those trained on explicit co-occurrence.
- Novel Training Strategies: The proposed training strategies to improve factual learning are innovative: training on text with implicit associations, and a method of actively forgetting learned co-occurrence statistics to unblock factual learning.
- Public Release of Resources: Finally, the release of the synthetic corpus and code to reproduce their results can facilitate further research and experimentation in this domain.
Weaknesses: I did not find any major weaknesses in this paper.
The main ones, which are mentioned by the authors when addressing current limitations of their work are the following:
- Synthetic data split: how are you splitting your synthetic data? Are you evaluating on an unseen subset for both synthetic as well as natural dataset? I understood you are testing on unseen data for natural dataset and I am unsure if that's also the case for the synthetic dataset. Please clarify. This is the reason why I am, at the moment, giving a score of 6 for what would otherwise be a clear 7.
- Overhead in Data Preparation: Converting general text to forms with implicit associations for real-world data may require significant effort and sophisticated rewriting techniques, potentially limiting practical applicability.
- Limited Scope of Text Variations: The paper only considers two forms of text (narrative and implicit association). There is a need to explore more diverse textual representations to validate the findings comprehensively.
- Focus on a single type of reasoning: While the paper claims that learning implicit knowledge improves performance on complex reasoning tasks, it focuses on a specific type of reasoning. Other types of reasoning, such as logical or mathematical, should be validated. Additionally, it is unclear whether the proposed finetuning method and data harm existing model performance on standard LLM benchmarks. It would be a nice addition to show that the method in the paper does not conflict with existing model knowledge in other domains.
- Evaluation information: Taken from the appendix "For evaluation on question answering tasks, we report 5-shot exact match accuracy unless otherwise specified." Please add this in the main body of the paper and mention why you use this metric instead of others like F1 for QA tasks. Is it because all your tasks require a single word as gold label? Is this true also for the real-world dataset in table 3 (MQuAKE-T and 2WikiMultiHopQA)? Please add this info together with your generation parameters used at inference time (number of generated tokens/sampling parameters etc.)
---
Minor
- Missing reference: De Cao et al. Editing Factual Knowledge in Language Models, EMNLP 2021. This is an important reference when discussing model editing since it was among the first contribution in this area.
- line 200 the reference to Appendix 3.3 is wrong
----
### Final Recommendation
Overall, I think the claims are backed by well-presented empirical evidence and I vote for the inclusion of this paper to NeurIPS.
### Update post rebuttal
I increase my score from 6 to 7
Technical Quality: 3
Clarity: 4
Questions for Authors: - Have you tried evaluating the model in a 0-shot fashion? Given the model has been finetuned on that data it can be helpful to add 0-shot performance
- How do you compute shaded areas in figure 3? For instance, it seems that MC accuracy of Llama 3 70B Narrative trained does not show decrease performance on the lowest layer for first-to-last ablation while it does for last-to-first ablation, yet you shaded that area for both ablation. It can be informative to add additional info on the criteria you used to shade those areas
- To compute the comparison ratio, the score depends on the choice of the entity in the denominator. Given the small size of your synthetic data, unless you are already doing so, can you marginalize across all other entities? Please clarify how you compute the comparison ratio
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes in the limitations section after the conclusion on page 9
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments, and we really appreciate the suggestions. Please find our response to the comments below:
**Data split**: The training and evaluation data are always disjoint and are of different types. The training data is plain text, while the evaluation data is questions answering of different reasoning types (we described the synthetic dataset in detail in Appendix A.1). We will try to make this more explicit in the revision of the paper.
**Overhead in Data Preparation**: we agree that converting general text to forms with implicit associations may require significant effort. However, we think it is possible that LLMs such as ChatGPT could be prompted to perform such conversions automatically. We will try to investigate automatic methods for text rewriting in future work.
**Scope of Text Variations**: we agree that the text variations are limited in the current work. Finding other forms of knowledge-bearing text would help generalize the findings.
**Types of reasoning**: we included several types of reasoning in the evaluation, including reverse reasoning, indirect reasoning, and 2-hop reasoning. Indirect reasoning requires basic logical comparison between properties of two entities. Regarding mathematical reasoning, it would require numerical facts which are not included in the current dataset. We agree that it would be an interesting extension and we will consider in future work.
**Conflict with existing model knowledge**: we use the MMLU benchmark to evaluate the existing model knowledge in other domains, and found that on the synthetic dataset, training on the narrative text results in a 5-shot MMLU accuracy change of 66.7 -> 66.5 (Llama 3 8B) and 79.2 -> 79.3 (Llama 3 70B), and training on text with implicit associations results in 66.7 -> 66.7 (Llama 3 8B) and 79.2 -> 79.0 (Llama 3 70B). The performance change on MMLU is minimal and seems within the margin of error of the evaluation, indicating that neither training method directly interferes much with the existing model knowledge (catastrophic forgetting is not observed, most likely because the training data size is small).
We will include evaluation on more common benchmarks in the revision of the paper.
**Evaluation details**: we use exact match as metric for QA mainly because all answers in the datasets used are either single words (synthetic, 2WikiMultiHopQA) or entity names (synthetic, MQuAKE, 2WikiMultiHopQA). Generation uses greedy decoding and stops whenever '\n' is generated ('\n' follows the answer of each question in the 5-shot context) or a maximum of 20 tokens is reached. We will include these details in the revision of the paper and move the details about the metrics to the main text for better clarity.
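As an illustration of the protocol described above, here is a minimal sketch of the exact-match evaluation; all function names are hypothetical, not taken from our code:

```python
def extract_answer(generation: str, max_new_tokens: int = 20) -> str:
    """Truncate a raw model continuation as described in the rebuttal:
    generation stops at the first newline (the answer delimiter in the
    5-shot context) or after a maximum number of new tokens.  For this
    sketch, tokens are approximated by whitespace-separated words."""
    first_line = generation.split("\n")[0]
    return " ".join(first_line.split()[:max_new_tokens]).strip()


def exact_match(generation: str, gold: str) -> bool:
    """Exact-match comparison of the truncated answer against the gold
    label (single words or entity names in the datasets used)."""
    return extract_answer(generation) == gold.strip()
```

This is only meant to make the stopping criterion and metric concrete; the real evaluation operates on tokenizer tokens, not words.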
**Missing reference**: we thank the reviewer for pointing out the missing reference. We will include the reference in the revision of the paper.
**0-shot evaluation**: the models are base LMs finetuned only on text corpora (narrative or referencing), and are not instruction-finetuned or finetuned on any QA data. Therefore, we don't expect the models to perform well on 0-shot evaluation on QA tasks, which would not effectively evaluate the model's knowledge or reasoning.
**Criteria for shaded areas in Figure 3**: the shaded area shows the layers of significant performance change averaged over the two ablation directions. The shades are marked qualitatively and are mainly intended as a visual aid to help interpret the ablation curves.
**Computation of the comparison ratio**: the negative samples (entity in the denominator) are chosen from entities of the same type from the dataset, excluding the correct entity itself. The log likelihood ratio is averaged over all possible choices.
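A minimal sketch of this averaging, with an illustrative function name and log probabilities as inputs (not our actual implementation):

```python
def comparison_log_ratio(logp_correct: float, logp_negatives: list) -> float:
    """Mean log likelihood ratio between the correct entity and all
    same-type negative entities, as described above.  Inputs are log
    probabilities of each entity completion under the model."""
    return sum(logp_correct - lp for lp in logp_negatives) / len(logp_negatives)
```

A positive value indicates the correct entity is, on average, more likely than the same-type negatives.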
We will also appreciate any further feedback or discussion from the reviewer, and are glad to provide additional information and clarification.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I want to thank the authors for their work on the rebuttal.
I am glad to see that their method does not impact the model's existing knowledge, as shown by the MMLU results.
The rebuttal addressed most of my concerns. I still believe other types of reasoning tasks and different text variations would make the paper much stronger and are also important contributions for future work. However, the paper as is provides a valid contribution to understanding learning dynamics in LLMs. I am happy to increase my score to 7 and I vote for the inclusion of this work in the conference | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dynamic 3D Gaussian Fields for Urban Areas | Accept (spotlight) | Summary: This paper aims to perform view synthesis for dynamic urban scenes. This paper adopts 3DGS as scene geometry and uses neural fields to model the dynamic appearance of urban scenes. The neural scene graph is introduced to handle the movement of dynamic objects, and a deformation field is used to handle local articulated motions. Experiments show that the proposed approach outperforms baseline methods.
Strengths: 1. The presented pipeline well handles the dynamic appearance of urban scenes.
2. The experiments are sufficient and validate the effectiveness of the proposed approach.
3. The idea of combining neural fields with 3DGS is sound and effective.
Weaknesses: 1. The method presented in the paper takes 0.17 seconds to render an image at a resolution of 1550x2048, which is significantly slower than conventional 3DGS. Is the trade-off of such a significant sacrifice in rendering speed for quality improvement justified? Does the author have any solutions to address this issue?
2. The paper needs to evaluate the extent to which neural fields impact the rendering speed of 3DGS.
3. The pipeline figure of the paper should be clearer. The connections between the various modules are not easily discernible from the figure and its caption. For instance, it is not clearly depicted how the latent codes obtained from the scene configuration are inputted into the neural fields. Then, how are neural fields combined with 3DGS to represent static scenes and dynamic objects? The figure only shows simple association arrows. However, these modules are not merely input-output relationships. There are some combination operations between them.
4. The paper uses neural fields to represent appearance, which reduces the memory footprint but may also significantly impact rendering speed. Has the paper considered how to address this issue?
5. In Figure 2 of the paper, regarding the neural fields section, the symbols for static opacity correction and dynamic deformation are inconsistent with the descriptions in Section 3.2 of the paper. This is quite confusing.
6. I am curious whether the combination of neural fields with 3DGS could make the optimization of 3DGS unstable?
7. The non-rigid objects mentioned in the paper refer to cars, right? Or other objects? I did not see how the paper describes the modeling of cars. Although the paper mentions the use of scene graphs for modeling, I did not see how dynamic cars are represented using scene graphs. Does the paper treat dynamic cars as non-rigid objects directly? In this case, how can the large range of movement of dynamic cars be handled?
Technical Quality: 3
Clarity: 2
Questions for Authors: The presentation of this paper should be improved. Some important technical details are missing. The limitations from the introduction of neural fields should be discussed.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations from the introduction of neural fields should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful feedback and for taking the time to review our manuscript. Below we address the concerns raised.
1. Rendering speed compared to 3DGS: We hope to address this concern in our global response, where we show a) an improved runtime of 0.074 seconds when rendering frames at 1550x2048 resolution using 8.02M 3D Gaussians, and b) a detailed runtime analysis of our method. This analysis reveals that the rasterization and the scene graph evaluation necessary for rendering dynamic scenes account for more than 75% of the total runtime. While we acknowledge that rendering large-scale urban dynamic scenes is more complex than rendering small-scale static 3D scenes and thus the reported rendering speeds are significantly slower than in 3DGS [18], we believe that the complex nature of the scenes we render needs to be taken into consideration when examining these numbers.
2. Figure 2 unclear: We plan to improve the clarity of the figure for the camera-ready version, highlighting the information flow. To render an image at sequence $s$ and time $t$, we first evaluate the scene graph which returns the latent codes (i.e. the conditioning signals for the neural fields), as well as coordinate transformations and the active set of 3D Gaussians (i.e. which objects are present where). We then use the latent codes in conjunction with the sets of 3D Gaussians as input to the neural fields to retrieve the 3D Gaussian color and the opacity correction or deformation. After this, we use the coordinate transformations to compose the scene globally, i.e. placing the objects at the right locations. Finally, we render the scene with the 3DGS renderer.
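The information flow described above could be sketched as follows; every callable here is a hypothetical stand-in for the corresponding module, not our actual implementation:

```python
def render_frame(scene_graph, neural_fields, compose, rasterize, s, t):
    """One rendering pass for sequence s at time t, following the four
    steps described in this response."""
    # 1. Evaluate the scene graph: latent codes (conditioning signals),
    #    coordinate transformations, and the active set of 3D Gaussians.
    latents, transforms, gaussians = scene_graph(s, t)
    # 2. Query the neural fields with latents and Gaussians to retrieve
    #    per-Gaussian color and the opacity correction or deformation.
    appearance = neural_fields(latents, gaussians)
    # 3. Compose the scene globally: place objects at the right
    #    locations using the coordinate transformations.
    world = compose(gaussians, transforms, appearance)
    # 4. Render the composed scene with the 3DGS rasterizer.
    return rasterize(world)
```

The sketch only makes the module boundaries and data flow explicit; the real modules operate on 3D Gaussian parameters rather than opaque objects.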
3. Impact of neural fields on rendering speed: In our global response above, we show that querying the neural fields accounts for only 13% of the total rendering speed. This is because we use efficient hash-grid based neural fields inspired by [53]. Therefore, although the neural fields are slower to query than the spherical harmonics used in 3DGS [18], it does not critically impact rendering speed.
4. Figure 2 symbols: We thank the reviewer for pointing this out, indeed we abbreviated $\delta \mu_k^t$ in the figure to $\delta$ to make it more concise. We will correct this in the camera-ready version.
5. Impact of neural fields on optimization: In our experiments, the optimization was not negatively impacted by the introduction of neural fields. In particular, we show in Tab. 4a that replacing the SH color function with neural fields does not lead to a significant change in performance while halving the GPU memory requirements (row 2 vs. 3).
6. Dynamic vs. non-rigid objects: We apologize for the confusion. In L196-197 of the paper, we refer to vehicles (e.g. cars) as rigid dynamic objects since their motion can be described with a single rigid body transformation, while we refer to pedestrians, cyclists, and others as non-rigid dynamic objects since their motion is articulated, i.e. cannot be represented by a single rigid body transformation. We represent the global motion of all dynamic objects with our scene graph, transforming the objects from an object-centric canonical space to world space with rigid body transformations. Additionally, we represent the local, articulated motion of non-rigid objects like pedestrians as deformation in canonical space. We hope this clarifies the reviewer’s concern and we will adjust our writing in the camera-ready version accordingly.
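A minimal sketch of this composition for the Gaussian means (illustrative only; R and t form the rigid body transformation, and delta is the optional articulated deformation applied in canonical space):

```python
import numpy as np


def to_world(mu_canonical, R, t, delta=None):
    """Map canonical-space Gaussian means (K x 3) to world space.

    Rigid dynamic objects (e.g. vehicles): delta is None, so a single
    rigid body transform (R, t) suffices.  Non-rigid objects (e.g.
    pedestrians, cyclists): an articulated deformation delta is first
    applied in canonical space, then the rigid transform places the
    object globally."""
    mu = mu_canonical if delta is None else mu_canonical + delta
    return mu @ R.T + t
```

This separation is what lets the scene graph handle large global motion (R, t per object per frame) while the deformation field only models small local articulation.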
We hope that our response adequately addresses the reviewer’s concerns. We are happy to provide more information during the discussion phase.
---
Rebuttal Comment 1.1:
Comment: After reading the rebuttal and other reviewers' comments, I think that this paper reaches the bar of NeurIPS. | Summary: This paper proposes a hybrid neural scene representation for dynamic urban driving scene modelling. The method utilizes 3D Gaussians as an efficient geometric scaffold and neural fields to represent appearance, thereby reducing memory. To account for transient scene geometry variations caused by weather, seasons, and other factors, the authors introduce an opacity attenuation field that modulates the scene geometry. For modeling dynamic actors in the scene, an object-centric representation is used, with a non-rigid deformation in the canonical space to animate objects such as pedestrians. Experiments demonstrate that the proposed method achieves state-of-the-art performance while rendering faster than previous methods.
Strengths: * The paper is well-written and easy to follow.
* The decomposed representation of appearance significantly reduces memory usage.
* It models transient scene appearance and geometry, as well as non-rigid objects like pedestrians.
* The evaluation and ablation study are comprehensive.
* The paper demonstrates visually superior results compared to baselines such as SUDS and ML-NSG.
Weaknesses: * The rendering of the proposed scene representation requires querying appearance from the neural fields; it is unclear whether this will impact rendering speed compared to the spherical harmonics representation.
* This paper lacks a comparison with recent neural field baselines such as UniSim and NeuRAD for urban driving scenes. Additionally, there is no comparison of the speed to 3D Gaussian baselines.
* How can the non-rigid objects in the scene be controlled? E.g., animating the pedestrians given a sequence of human poses.
* Is it feasible to render other sensor modalities in autonomous driving, such as LiDAR?
Technical Quality: 3
Clarity: 4
Questions for Authors: This paper addresses a practical and important problem in autonomous driving. The writing is clear, and the results are promising. I look forward to the authors' response to the concerns I raised above.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discussed the limitations of modeling other sensor phenomena such as rolling shutter effects, motion, and more complex camera models. They also discussed the broader societal impacts of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our manuscript and the constructive feedback on our work. Below we address the concerns raised.
1. Impact of neural field query on rendering speed: We address this point in the global rebuttal posted above. In a nutshell, we show that the neural field query accounts for only 13% of the total runtime, so while the neural field query is slower compared to spherical harmonics, the impact on the rendering speed is marginal.
2. Additional comparisons: We thank the reviewer for pointing us to UniSim and NeuRAD. At submission time we were not aware that the code for either of the two papers was public, but we have since noticed that the code for NeuRAD has been released. Therefore, we provide an additional comparison on KITTI MOT in Tab. 2 of the PDF attached to the global response and show that our method outperforms NeuRAD. Regarding further comparison to 3D Gaussian baselines, we refer the reviewer to Tables 2 and 4a of our paper, where we compare to StreetGaussians [73] and the SH color representation used in 3DGS [18] (row 2 vs. 3), respectively.
3. Control of non-rigid objects: We build a model capable of coping with general dynamic urban scenarios, which exhibit a wide range of dynamic actors. Thus, the design of our method is such that it allows for general modeling of non-rigid objects, e.g. pedestrians holding a stroller or shopping bags (see Fig. 8 in our supplemental material for an example), cyclists, and animals. Specifically, the articulated motion of pedestrians is controlled via time input to the deformation head, not via human pose estimates.
4. LiDAR rendering: While we did not consider rendering LiDAR measurements in our work, it is possible with our representation. However, we note that the 3DGS renderer assumes a pinhole camera model, which differs from the LiDAR sensor model. Therefore, ray-tracing-based approaches for rendering 3D Gaussians should be utilized [Yu et al.]. In addition, the LiDAR intensity and ray drop probability should be modeled with the neural fields following e.g. NeuRAD. The LiDAR point cloud can then be rendered from rays generated by the LiDAR sensor model. We consider this as an interesting direction to explore for future work.
We hope this adequately addresses the reviewer’s concerns. We are happy to provide more information during the discussion phase.
[Yu et al.] Gaussian Opacity Fields: Efficient and Compact Surface Reconstruction in Unbounded Scenes | Summary: The paper presents a novel 3D scene representation for novel view synthesis (nvs) in dynamic urban environments where, in particular, under heterogeneous imaging environments. The proposed representation relies on existing ingredients: 3D Gaussian Splatting, learned static/dynamic object instances, and a global scene graph.
The resulting system yields very strong results on a series of public autonomous driving benchmarks.
Strengths: ### + Readability.
Overall, in its current state, the paper's readability is relatively good. The main ideas, concepts, are mostly well discussed, conveyed, and articulated, throughout the paper.
### + Practical usefulness of the considered problem.
### + Structure, and Organization of the Contents.
The presentation is mostly on point and each dedicated section of the paper is properly balanced. The use of text real-estate is fair.
### + Relative simplicity of the conceptual contribution.
### + The amount of implementation details is very good.
### + The reported performance.
### + Implementation details for reproducibility: excellent.
Weaknesses: ### - (1) Positioning of the conceptual contribution vs. the competitive landscape.
In particular, the proposed method looks very much like a revisit of Panoptic Radiance Fields [49] by replacing the NeRF component with 3D Gaussian splats.
While this is perfectly fine, it merits a targeted, transparent discussion in the main paper to help the reader understand how the proposed contribution relates (or not) to such pieces of literature.
### - (2) How much does it cost?
Missing piece of information regarding the resource usage, memory footprint, typical timings etc to better understand the downsides of using the provided method.
### - (3) (To a lesser extent) Certain contents in the paper are unclear.
Figure 4: what is happening? Adding color annotations or boxes would definitely help.
Technical Quality: 4
Clarity: 4
Questions for Authors: I do not have more questions or suggestions than the ones underlying the aforementioned weaknesses.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors provide one dedicated paragraph that reasonably addresses such considerations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful feedback and for taking the time to review our manuscript. Below we address the concerns raised.
1. Positioning of the conceptual contribution vs. the competitive landscape: We addressed this in L114-117 of our paper, but we will make the distinction clearer in the camera-ready version. Specifically, while our work shares similarities to [49] in how the rigid, global motion of dynamic objects is handled, our method fundamentally differs in several key aspects: 1. We introduce a mechanism to handle both transient geometry and varying appearance across multiple, *heterogeneous* captures, a key component that [49] does not address; 2. This mechanism and the efficiency of our approach enable the reconstruction of much larger urban areas with significantly more dynamic objects than in [49]; 3. While [49] excludes non-rigid objects, we model the articulated motion of non-rigid dynamic objects; and 4. We do not rely on semantic priors to represent dynamic objects accurately.
2. Runtime/memory cost: We address it in our global response. In short, we show that the neural fields introduced by our method do not dominate the runtime. We report peak memory consumption in Tab. 4a and show that our method leads to a reduced memory footprint compared to the SH color representation in 3DGS [18].
3. Figure 4: Thank you for pointing out this issue. The important bit is happening in the middle of the image: The lady in the pink sweater is getting out of the dark grey car in front of the ego-vehicle. It illustrates the ability of our method to handle complex, articulated motions. We will add colored boxes to the camera-ready version to make it clear.
We hope our response adequately addresses the reviewer’s concerns. We are happy to provide more information during the discussion phase.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Dear Authors,
Having read (all of) the rebuttal contents, here is my response.
Most of my concerns have been adequately addressed but there is one misunderstanding regarding point 2 in your reply:
My point was referring to runtime and resource usage numbers w.r.t. the competition, in particular [49], with indicative metrics, even at the top level.
Points 1 and 2 above address this without typical ranges; this is what is missing, and I would strongly suggest integrating one line in the camera-ready version to clarify this, as it is an additional plus favoring the proposed contribution.
I will maintain my positive rating.
Warm regards.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the clarification. We will integrate the line clarifying the runtime and resource usage compared to the competition including typical ranges in the camera ready version as suggested. | Summary: This paper works on novel view synthesis (NVS) for large-scale, dynamic urban scenes. This paper proposes a neural scene representation called 4DGF, which uses 3D Gaussians as an efficient geometry scaffold while relying on neural fields as a compact and flexible appearance model. The proposed method integrates scene dynamics via a scene graph at global scale while modeling articulated motions on a local level via deformations. The method significantly outperforms baselines in terms of speed and rendering quality on three benchmarks.
Strengths: 1. The idea of combining Gaussian Splatting and neural fields to model geometry and appearance, respectively, is very interesting. This makes a lot of sense considering the efficiency and the advantages of each of the two representations. This is definitely a more scalable approach to large-scale scenes compared to prior work.
2. Extensive experiments have been conducted to validate the proposed method, including comparisons with recent baselines on three benchmarks and ablation studies that carefully examine each component. Moreover, the rendering quality improvement and the speedup are very significant on all three datasets.
3. The paper is very well-written and easy to follow. Implementation details are sufficiently discussed for reproducibility.
Weaknesses: 1. I appreciate the authors' including a video in the submission. I found sometimes there's a large foggy region near the camera (e.g., the regions on the right during the 5-6th second), do the authors have any explanations on that? Is it caused by any limitations discussed in Sec. 5?
2. I understand that this paper mainly focuses on large dynamic scenes. I am curious how this hybrid representation performs on static 3D scenes (e.g., the benchmarks that the original 3DGS has been tested on). This seems to be a more straightforward way to see the effect of using neural fields instead to model appearance.
Technical Quality: 3
Clarity: 4
Questions for Authors: Minor questions/suggestions:
What does GPU memory in Tab. 4 (a) mean? Is it peak memory?
In Fig. 1 inputs, the blue/orange colors for the image boundaries are also used for "geometry" and "appearance" respectively. I assume there's not such a correspondence between the input and geometry/appearance. So maybe you could change the input boundaries to different colors.
Fig. 2: extra space in "Scene configuration"
Tab. 4 appears before Tab. 3, maybe you could switch the order.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations seem to have been sufficiently discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our manuscript and the constructive feedback on our work. Below we address the concerns raised.
1. Foggy regions in the demonstration video: While these artifacts may be caused by the limitations discussed in Sec. 5 like white balance or focus blur, we suspect it is due to an issue with 3DGS optimization. In particular, we noticed that 3DGS tends to produce a small number of very large, semi-transparent 3D Gaussians, especially when pruning thresholds are set generously. In Fig. 1 of the PDF attached to the global response, we show there are a few 3D Gaussians with a huge mean scale (>1.0), while the vast majority is small (<0.001 mean scale) compared to the scene bounds that are approximately within [-1.0, 1.0]. Concurrently proposed techniques like scale regularization [Kheradmand et al.] could mitigate this issue.
2. Experiments on static 3D scenes: In Tab. 4a, we show that our method performs similarly when using neural fields as color representation compared to the SH representation from 3DGS [18] (row 2 vs. 3). This illustrates that, on single-sequence, homogeneous data, SH and neural fields are equivalently suitable for modeling appearance, both for static and dynamic scene parts. This finding is in line with concurrent work focusing on static 3D scenes [Lu et al.]. In contrast, Tab. 4b shows that, on multi-sequence, heterogeneous data, modeling appearance and transient geometry with neural fields is essential for NVS performance. However, we agree this is an interesting experiment and will add this comparison to the camera-ready version.
3. GPU memory in Tab. 4a: We report peak memory consumption as it is the deciding factor when attempting to train a 3DGS model on a specific scene.
4. Suggestions on Fig. 1/2, Tab. 3/4: Thank you for spotting these issues. We will correct them in the camera-ready version.
We hope our response adequately addresses the reviewer’s concerns. We are happy to provide more information during the discussion phase.
[Kheradmand et al.] 3D Gaussian Splatting as Markov Chain Monte Carlo
[Lu et al.] Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering
---
Rebuttal Comment 1.1:
Comment: I have read through all reviewers' comments and the rebuttals by authors. My concerns have been addressed, and I would like to keep my original score. | Rebuttal 1:
Rebuttal: We thank all reviewers for their helpful and constructive feedback. We appreciate the positive reception of our work. The consensus is that the combination of neural fields and 3D Gaussians as scene representation constitutes an interesting (f8M6), conceptually simple (6enk), and effective (51SN) solution to practical problems with 3DGS [18], i.e. memory consumption and thus scalability to large-scale scenes (f8M6, 7pd2) as well as modeling of complex real-world phenomena such as transient geometry and appearance across captures and non-rigid objects (7pd2). The reviewers point out that the improvements over existing work are convincing with visually superior results compared to prior art (f8M6, 7pd2, 6enk), and acknowledge the comprehensive evaluation and ablation studies (f8M6, 7pd2, 51SN) validating the effectiveness of our approach. Finally, most reviewers agree that the paper is well-written, easy to follow (f8M6, 6enk, 7pd2), and the presentation is mostly on point (6enk). The reviewers also acknowledge the extensive discussion of experimental and implementation details for reproducibility (f8M6, 6enk).
A shared concern among the reviews is that a runtime analysis should be included. We are thankful for this suggestion and a) include a detailed runtime analysis in the attached PDF, and b) *improve the runtime of our algorithm by more than 2x*. In particular, we implement two improvements: First, we significantly reduce the number of queries to the neural fields by skipping out-of-view 3D Gaussians after projection. Second, we implement a vectorized version of the neural field query that retrieves the outputs for all dynamic objects in parallel. Next, we measure the runtime of each pipeline component and report the results in Tab. 1. It shows that the runtime is dominated by rasterization and scene graph evaluation, accounting for more than 75% of the total average runtime, not by the neural field queries (approx. 13%). This is because we render views at high resolution with a large number of 3D Gaussians which is computationally demanding even for the efficient rasterization of 3DGS [18], and because we represent areas with hundreds to thousands of dynamic objects, making the retrieval of the 3D Gaussians and latent codes costly. For the neural fields, we use very efficient hash-grid based encodings in conjunction with tiny MLPs, inspired by InstantNGP [53]. Thus, simply replacing the neural fields with the SH function of [18] would not lead to a significant speedup. Note also that our method is significantly faster in simpler scenarios with fewer 3D Gaussians per scene, e.g. Waymo Open.
In addition, we provide a comparison in terms of time per query of the SH function used in 3DGS [18] versus our hash-grid based neural field query. While the SH function is faster to evaluate than the neural fields, it is also severely limited in representation capacity: It is not capable of handling varying appearance across input sequences (weather, time of day, season), transient geometry (construction sites, tree leaves), articulated motion of dynamic objects (pedestrians, cyclists), and large-scale scenes with several millions of 3D Gaussians due to memory constraints. Thus, we emphasize that the use of neural fields does not merely improve results, but *enables applications that 3DGS [18] is not capable of modeling*. We will dedicate a paragraph to the runtime analysis in the camera-ready version.
Pdf: /pdf/b2d49902b3656b7593b0b48c46f48b4ab9fd39d4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Causal vs. Anticausal merging of predictors | Accept (poster) | Summary: The paper explores the potential differences in predictor merging when approached from causal versus anti-causal directions. The results from MAXENT and CMAXENT indicate that in the causal direction, the solution converges to logistic regression, whereas in the anti-causal direction, it converges to Linear Discriminant Analysis (LDA). The study also examines how the decision boundaries of these two solutions vary when only partial bivariate distributions are observed, highlighting implications for Semi-Supervised Learning (SSL) and Out-Of-Variable (OOV) generalization.
Strengths: The paper investigates the differences that arise in predictor merging from causal and anti-causal perspectives. It demonstrates through MAXENT and CMAXENT that the causal direction results in logistic regression, while the anti-causal direction leads to Linear Discriminant Analysis (LDA). Additionally, the paper analyzes how the decision boundaries of these two methods change when only some bivariate distributions are observed, discussing the implications for Semi-Supervised Learning (SSL) and Out-Of-Variable (OOV) generalization.
Weaknesses: 1. **Small scale dataset**: The main weakness is the small scale of the data and models studied in the paper. I believe the challenge of reducing computational cost with mixture-of-expert models is more relevant to larger models. The authors however only presented results on small bivariate distributions. Experiment results with larger models are appreciated. If experiments with larger models are not feasible, I hope authors can discuss potential limitations of the study under those larger-scale/multivariate scenarios. Do you expect the findings in Eqn 1&2 change in larger-scale/multivariate setups?
2. **Lacks comparison**: This paper lacks sufficient comparisons with other papers. Can you explain the differences and advantages of the proposed method compared to the pi-Tuning method proposed in [1] (Section 3.2 and Section 3.3)?
3. **Contributions are obvious from the observations given in MAXENT and CMAXENT**: The overall paper seems like a consolidation of a few previous papers.
4. **Non-causal**: The paper has studied the causal and anti-causal setups, but from the signal processing point of view please also study the non-causal settings.
5. **Results under the availability of noise and biases**: The paper has shown results in a toy example, which is good for understanding the overall pipeline but not sufficient from an ML perspective. For example: what if the random variables carry some amount of noise and have biases that impose skewness in the distribution?
References:
[1] Wu, Chengyue, et al. "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation." International Conference on Machine Learning. 2023.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. **Noise**: What if we have observation noise (which is very common in MRI dataset)? Will the results still hold?
2. **Co-variance shift**: Not sure whether I am missing anything, but, what if there is a co-variance shift?
3. A curiosity is whether all the theoretical results hold if the distributions are not Gaussian?
4. From a signal processing point of view can we get results for non-causal setups?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The major limitation of the paper is the experimentation that assumes the data distribution to be Gaussian. The paper should include non-causal results and their interactions with causal and anticausal models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We will first give some general comments, then answer each one of the points in the weaknesses and last answer the questions.
We noticed the strengths of the paper are the same as the summary. Is this correct?
Weaknesses:
1. The question about scaling is an important one. We believe that the results of Equations 7 and 10 (and Theorem 5) would remain unchanged whenever we have $D$ experts to merge. We studied the bivariate case because it allowed us to easily analyse and visualise the implications of our theoretical results. Furthermore, we recently obtained an interesting insight into the results in high dimensions. Intuitively, the result is that, using the same data as constraints, the resulting model in the anticausal direction gives more weight, relative to the causal direction, to those variables that have higher covariance with the target variable.
2. We are unaware of other papers that look at merging of predictors in the case of causal and anticausal learning. If the reviewer knows one such analysis we would be happy to compare it to ours. We read [1] in detail but we found that beyond the use of the term pooling of experts there is no strong relation between [1] and our work. Our work studies the asymmetries between merged predictors whenever we can make causal assumptions, whereas [1] proposes a method to do merging of experts in the setup of foundation models with relation to transfer learning, which we do not talk about in our paper. There is no causality involved in [1].
3. We disagree with the reviewer on this statement. CMAXENT has been studied for the causal discovery and data merging scenario by Garrido Mejia et al. (2022). What we study here are theoretical asymmetries arising from merging of experts under different causal assumptions. These are different tasks, and we chose CMAXENT because it allows us to merge data and to include causal information in the final model. Furthermore, we analyse the geometry of the decision boundaries of different models using the same data. This is the only study we know of that performs such an analysis, and we find it to be of high value for the NeurIPS community.
4. We do not see how the analysis of non-causal scenarios is relevant to the purpose of our paper, which is studying the differences arising from causal assumptions when merging experts. There is previous research, referenced in our paper, that studies merging of experts using MAXENT without any causal assumptions (see references [21], [23] and [30] in our paper for some examples).
5. This is an interesting question but, like the previous one, out of scope for the current work. Furthermore, there is research on the results of MAXENT under data noise, whereas there is very little research on merging of experts and causality.
Questions:
1. Although this is an interesting question, we find it to be out of scope as we mentioned above. In this paper we are interested mainly in the asymmetries arising from the differences between causal questions on merging of experts.
2. As usual in causality, if there is covariance shift one needs to understand whether the model is causal or anticausal, as a distribution shift can be interpreted as an intervention under the causal lens. If the model is causal, then it is robust against distribution shifts; if the model is anticausal, then it will not be robust against distribution shifts.
3. We do not assume that the distribution of any covariate is Gaussian. This is a result of MAXENT under mean and (co)variance constraints. Several common distributions are the result of a MAXENT problem under moment constraints: Bernoulli, Exponential, Gaussian, etc. An excellent resource to understand the relation between common distributions and Maximum Entropy is Wainwright and Jordan (2008).
4. If the reviewer is interested in merging of experts without making any causal assumptions, they can check references [21], [23] and [30] in the main document and, more recently:
- Vetter, J., Moss, G., Schröder, C., Gao, R., & Macke, J. H. (2024). Sourcerer: Sample-based maximum entropy source distribution estimation. arXiv preprint arXiv:2402.07808.
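As a side note to answer 3 above, the claim that the Gaussian emerges from MAXENT under moment constraints follows a standard textbook derivation (this sketch is ours, not taken from the paper):

```latex
% MAXENT under first- and second-moment constraints:
\max_{p}\; -\int p(x)\log p(x)\,dx
\quad \text{s.t.} \quad
\int p(x)\,dx = 1,\qquad
\mathbb{E}_p[x] = \mu,\qquad
\mathbb{E}_p[x^2] = \mu^2 + \sigma^2 .
% Stationarity of the Lagrangian yields an exponential-family form
p(x) \,\propto\, \exp\!\left(\lambda_1 x + \lambda_2 x^2\right),
% which, after normalization (requiring \lambda_2 < 0), is exactly the Gaussian
p(x) \;=\; \frac{1}{\sqrt{2\pi\sigma^2}}\,
\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).
```

The same recipe with different sufficient statistics recovers the Bernoulli and Exponential cases mentioned above.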
---
Rebuttal 2:
Title: Kind request to respond
Comment: Dear reviewer,
We hope that we were able to address your concerns and clarified some misunderstandings. Let us know if you have any further concerns and, if not, ask to reconsider the score. We appreciate the time and effort you have spent to review our paper.
Thank you,
The authors | Summary: The authors give a treatment of the mixture of experts problems using the idea of maxent; they use this as a tool to discuss how to merge causal and anti-causal inferences on the same data, in part as a way to assess the quality of the data being analyzed.
Strengths: The discussion of the differences and merging of causal and anti-causal analyses was strong and appreciated.
Weaknesses: The framing of the paper, I felt, missed a lot of literature and possible approaches to the issue being addressed. That is, the paper is framed as a discussion of merging of expert models (which can be an important problem), and maxent is proposed as a method for doing this. But the merging of experts problem is itself framed as a problem of inferring causal graphs where each expert has access to only part of the data. The problem of overlapping datasets has been extensively studied, but no reference is made to that literature, or to any ideas that literature has proposed which may compete with the maxent proposal here. More recently (actually not that recently) the discussion has taken a turn into discussions of privacy, for contexts, e.g., where different hospitals have access to their own dataset but may want to collaborate on building a causal model without risking making available by inference the identities of their respective patients--i.e., the so-called "federated learning" problem. This has been extensively studied in the literature to date, with many proposals given for how to address it. I think this paper could benefit from a literature review of this sort to place the proposed ideas in context, with comparisons made to alternative methods, or at least reasons not to compare to particular methods that make sense.
Also, the paper is mainly theoretical but could have benefitted from discussion of an empirical or simulation example.
Technical Quality: 4
Clarity: 3
Questions for Authors: Would the authors be willing to expand their literature review to encompass more of the literature relevant to what is being called the "mixed of experts" issue?
Would the authors be willing to include an empirical example, if not in the main text, then in the Supplement, with a pointer from the main text? It would be very helpful to know whether these ideas about causal/anti-causal merging are helpful empirically or in simulation.
Would the authors be willing to provide software to allow Neurips readers to easily evaluate the ideas in this paper?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: This is a theoretical paper, entirely, so does not benefit from worked examples, so it is difficult to judge impact. I did not see any discussion of societal impact. I also did not see any discussion of promises of software or data to help users assess reproducibility of any results (as empirical results are not assessed).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our paper in detail and appreciate that they ranked the soundness and contribution of the paper as excellent. We would like to answer the reviewer’s concerns outlined in the weakness section, clarify some points that might have not been clear in the article and answer the reviewer’s questions.
Weaknesses:
About literature and privacy: We appreciate the reviewer’s comment on the literature. However, it is almost always the case that one can include more related work. We will include more work on overlapping datasets and admit that our own point of view is biased by the causal setting. We would also like to invite the reviewer to point us to some relevant papers about privacy and federated learning, which we will be happy to study and reference. We were under the impression that federated learning usually does not study the setting where the variable sets differ (i.e., the merging problem), but following your comment, we have been able to find some papers that study “vertical federated learning”, which we shall discuss. Our impression is that this area focuses more on practical applications and is less interested in theoretical statements (and it does not connect the problem to the marginal problem of statistics), but it will be interesting to make those connections.
About inferring causal graphs: The reviewer mentions in the weaknesses section that we are framing the merging of experts’ problem as one of inferring causal graphs. This is not entirely correct, and we would like to clarify it here (and we will do it in the revised version of the text as well). What we want is to come up with a single predictor of our target variable $Y$, given a combination of predictors. We have the information of the causal graph, so we don’t need to infer it. We explore the differences in the solutions of the merging of experts under different causal assumptions (in our case, whether the variables are causes or effects of our target variable).
Questions:
About literature expansion: We will expand the literature with the overlapping datasets literature, and we are happy to expand it with privacy and federated learning related research.
About an example: This is a fair point. However, we believe that the greatest merit of the paper lies in its theoretical contributions, not in a potential empirical example. Indeed, Figure 2 is such an example for one particular data-generating process, where we can clearly see the difference in the slope of the decision boundary for data with the same moments under different CMAXENT solutions. If the reviewer is curious about the results of a particular example, we would be happy to run those tests and include them in the final version of the paper.
Consider a potential real-world example (which we can include in the main paper) from the medical domain: if $Y$ is the presence/absence of a disease, the causal scenario is given when $X_1,\dots,X_n$ describe risk factors, while $X_1,\dots,X_n$ being symptoms would be anticausal. Research results from different labs may provide predictors of $Y$ from different sets of risk factors and/or different sets of symptoms. Combining them into a joint predictor which uses the union of all features would be highly valuable, but it also requires including the right causal assumptions: risk factors cause diseases, and diseases cause symptoms.
About code: We are happy to share the code for our paper upon acceptance. However, we believe that most of the value of our paper comes from the theoretical insights we are giving on the problem of merging of experts when different causal assumptions are made. In fact, the code to produce Figure 2 consists of a very basic data generation process and a common python library to estimate the decision boundaries using the logistic regression and linear discriminant analysis, as Remark 2 and Theorem 5 of our paper point out.
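As the response notes, Figure 2 needs only a basic data generation process plus routines from a common Python library. A hypothetical minimal sketch (the toy DGP and all names here are our own illustration, not the paper's actual code) of fitting both estimators and comparing the decision-boundary slopes:

```python
# Hypothetical sketch (not the paper's code): fit logistic regression and
# LDA on the same synthetic data and compare the decision-boundary slopes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, size=n)                                   # binary target
X = rng.normal(size=(n, 2)) + y[:, None] * np.array([1.0, 0.5])  # toy DGP

lr = LogisticRegression().fit(X, y)
lda = LinearDiscriminantAnalysis().fit(X, y)

# Each model yields a linear boundary w1*x1 + w2*x2 + b = 0; its slope in
# the (x1, x2) plane is -w1/w2.
slope_lr = -lr.coef_[0][0] / lr.coef_[0][1]
slope_lda = -lda.coef_[0][0] / lda.coef_[0][1]
print(slope_lr, slope_lda)
```

Under this particular toy DGP the two slopes come out similar; the paper's point, as we read the rebuttal, is that with the moment constraints fixed, the causal and anticausal CMAXENT solutions can still yield visibly different slopes (their Figure 2).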
---
Rebuttal Comment 1.1:
Title: Perhaps a survey paper on federated learning would suffice.
Comment: I honestly think that for purposes of your paper, referring to a survey paper for the federated learning literature and maybe looking at some of the references in it would be sufficient--the main goal being to show how your proposal fits into the literature. Here's one from 2021:
Zhang, C., Xie, Y., Bai, H., Yu, B., Li, W., & Gao, Y. (2021). A survey on federated learning. Knowledge-Based Systems, 216, 106775.
This is three years old, but as I said, the literature is no longer new.
---
Reply to Comment 1.1.1:
Title: Survey paper and other references on the camera-ready version of the paper
Comment: We thank the reviewer for the reference. We checked it in detail and, as we mentioned in the rebuttal, we believe there are interesting connections that could be exploited between federated learning (in particular vertical federated learning) and the framework we discuss in our paper, and more generally the marginal problem in causality. We will include this reference in the camera-ready version of the paper.
If the reviewer considers it appropriate, could you please increase the score of our paper? If not, what does the reviewer believe could be clarified or discussed in order to do so? | Summary: This paper studies the problem of learning a mixture of experts (predictors) where individual predictors have been learned with different causal constraints. It studies different asymmetries that arise when we merge different predictors using the Causal Maximum Entropy (CMAXENT) objective. It goes on to show that different data-generating processes lead CMAXENT to reduce to different objectives under some restrictive settings. Next, they show how the learnt predictors will have different decision boundaries under different data moment restrictions.
Strengths: 1. The paper is well-written and easy to follow.
2. The contribution of this paper, though restricted to a simple setup, is novel. The authors show that under different assumptions on the data-generating process, the CMAXENT objective will yield different predictors, and they establish necessary and sufficient conditions under which the predictors are different.
Weaknesses: 1. The connection with the OOV generalization literature is not discussed properly. In particular, it would be interesting to see how this paper's observation relates to the paper "On causal and anticausal learning" (ICML 2012) and the guarantees they have for generalization under distribution shift.
2. In the introduction and abstract, the authors mention that they study the problem of merging the two predictors, i.e., one predictor trained to assume an anti-causal data generating process (DGP) and another assuming causal. Next, in Sections 3 and 4, the authors show the closed form of each predictor separately under different DGPs. However, little is said about the final "combined" predictor and its generalization properties. See Question 1 for more.
Technical Quality: 3
Clarity: 4
Questions for Authors: **Questions**:
1. By merging the predictor, I understand learning a combined model using both the causal and anti-causal predictor. Is my notion of "merging" predictors correct, or am I missing something?
**Typos**:
1. Line 235: I think you mean X2 will be irrelevant in the estimation of the target predictor.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes, they are added to the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading of our paper. We also thank the reviewer for pointing out that the soundness and the contribution of the paper are good and the presentation is excellent. We will begin with a short comment on the reviewer's summary and then answer the weaknesses and questions.
Comment about the reviewer’s summary: When we study the difference in the decision boundaries, we study the case where the moments are the same and not different. This is precisely one of the most surprising results of the paper.
Weaknesses:
We appreciate the reviewer's comment about the connection with this literature; in the revised version of the paper we will be more explicit about the connections between "On causal and anticausal learning" and some of the more recent OOV literature.
In particular, the paper "On causal and anticausal learning" studies common machine learning tasks (distribution shift, semi-supervised learning, transfer learning) in the light of causal assumptions. It answers questions of the type: can we perform this task if the data comes from this assumed *causal* process? In this paper we do the same, but for the task of merging of experts. Here we ask: what are the differences between the resulting models if we assume the predictors come from a causal process versus an anticausal one? We answer this question using CMAXENT because CMAXENT 1) allows merging of predictors, and 2) allows the inclusion of causal information. Nevertheless, we hypothesise that the resulting asymmetry would also hold for any other model with these two properties.
In relation to OOV, we give an example of a DGP and a model for which OOV generalization is possible. The OOV literature is fairly recent; in fact, "On causal and anticausal learning" does not study an OOV scenario. We consider OOD and OOV "dual" modes of generalization, but their precise link is yet to be understood and is not part of the current paper's technical contribution.
We can include an extended discussion of the above in the revised version of the paper.
This is an interesting question, and we indeed have the results for the combined DGP but did not include them in the submission to keep it uncluttered. Since the reviewer believes in the value of the combined DGP, we can include it in the final version. As a summary of the results, the resulting conditional distribution of $Y$ given $X$ is the product of the two conditional distributions found in Proposition 1 and Theorem 5. This is expected: given $Y$, the parents and effects of $Y$ are independent, and so their joint distribution is the product of their distributions.
Questions:
We will improve the introduction and explanation of certain terms in the manuscript; this is valuable feedback. We understand merging of experts as the task of putting predictors together into a single model. However, the results show that it matters whether you assume that the predictors are produced with causal or anticausal variables. If you can make these assumptions, then a different question arises: what do we want the combined model for? If we need the model to estimate causal effects, then we conclude we should not use the assumed anticausal covariates; if it is about prediction, we should merge all available models.
Typo:
Thank you for reading the paper at such a level of detail. We will correct the typo and double-check whether we missed more.
---
Rebuttal Comment 1.1:
Comment: I thank the author for their time and effort spent on the response. The response answers my question. I have increased my score to 6. | Summary: This paper studies the differences and properties that emerge when one uses causal, anticausal features for prediction.
Strengths: **S1.** This work makes several interesting observations of causal and anticausal predictors under their parametric assumptions.
**S2.** This work suggests some potential considerations for practitioners dealing with feature sets that contain both types of information.
Weaknesses: **W1.** The primary weakness of this work is that the connections are underexplored empirically and in more complicated settings, e.g., higher dimensions and discrete data.
**W2.** While I do not have an issue with the simplifications you have made to make the connections clear, the lack of more general results, combined with a lack of real-world datasets that exhibit properties resembling the observations from your analysis, limits the impact of this work and makes it insufficient for the venue.
**W3.** Some of the observations merely confirm properties already known, e.g., the asymmetries on causal and anticausal directions [1-2].
[1] Schölkopf, Bernhard, et al. "On causal and anticausal learning." arXiv preprint arXiv:1206.6471 (2012).
[2] Janzing, D., and B. Schölkopf. "Causal inference using the algorithmic Markov condition." IEEE Transactions on Information Theory. http://arxiv.org/abs/0804.3678 (2008).
Technical Quality: 3
Clarity: 3
Questions for Authors: **Q1.** Do your results hold with incomplete causal sets?
**Q2.** Are there any connections between your observations and robustness to distribution shifts?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The setting considered is too theoretically and empirically simple to be convincing about real-world tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We would like to clarify certain points related to the weaknesses and respond to their questions as best we can.
W1 (about high dimensionality): in the examples we studied, we used two covariates, but the results would be similar with more covariates. That is, with $D$ covariates we would obtain a transformed logistic regression in the form of Equation 7, but with all the variables for which we have $Cov(Y,X_i)$ appearing on the right-hand side of the equation. Likewise, in the anticausal direction we would obtain the LDA algorithm where the Gaussians are $D$-dimensional instead of bivariate. We used only two covariates to be able to visualise the differences in the geometry of the decision boundary.
Furthermore, we recently obtained an interesting insight about the results in high dimensions. Intuitively, given the same data as constraints, the resulting model in the anticausal direction gives more weight, relative to the causal direction, to those variables that have higher covariance with the target variable.
W1 (about discrete data): there are two options here:
- First, the target variable may be discrete, in which case our results remain unchanged. In fact, Theorem 7 is written for a discrete (and not binary) target variable, and is thus a generalisation of the previous results.
- Second, the covariates may be discrete. If the reviewer is referring to this case, we agree that the study of discrete covariates requires a separate analysis. However, to showcase the asymmetries between causal and anticausal merging, we decided to resort to cases that admit an analytical solution, which results in differences that are easier to interpret.
W2 (about a potential application of the results): Merging predictors will become increasingly relevant, for instance in the medical domain. If $Y$ is the presence/absence of a disease, the causal scenario is given when $X_1,\dots,X_n$ describe risk factors, while $X_1,\dots,X_n$ being symptoms would be anticausal. Research results from different labs may provide predictors of $Y$ from different sets of risk factors and/or different sets of symptoms. Combining them into a joint predictor that uses the union of all features would be highly valuable, but it also requires including the right causal assumptions: risk factors cause diseases, which cause symptoms.
W3: We know the content of references [1,2] in detail. Neither of them describes a merging scenario. The paper [1] first studied the implications of causal asymmetries (causal vs. anticausal) for machine learning, for the case of prediction (but not merging). This indeed inspired us, as mentioned in the introduction. The relation to [2] is much more remote: that paper develops a theory of causality that uses Kolmogorov complexity rather than statistical (Shannon) information to study conditional independence properties of causal graphs. It is fundamental for the justification of independent causal mechanisms, which in turn is related to machine learning, but we felt that we did not have to cite it. If the reviewer feels otherwise, we are happy to change this and include a discussion.
Q1: We are not completely sure what the reviewer means by incomplete causal sets. We assume the reviewer is asking about the case where we do not have predictors for all causal parents of $Y$ (that is, in the causal scenario there may be additional causes). If we do not have predictors for each causal parent of $Y$, then we can still combine the predictors into a single model; however, we will not obtain an approximation of the causal mechanism from the causal parents of $Y$ to $Y$ (recall the causal mechanism is simply $P(Y\mid PA(Y))$). As a consequence, we might not be able to compute some causal quantities, like interventional distributions, without further assumptions. Of course, if the goal of merging the experts is purely prediction, there is no consequence of not having all the causal parents; however, as we have shown in the paper, the geometry of the decision boundary (regardless of whether one wants to predict or perform causal tasks) *will* change depending on the causal assumptions.
Q2: This is an interesting question, and it relates to our interpretation of Q1. In particular (in the causal scenario), if we have the sample averages of the cross-moments between the causal parents of $Y$ and $Y$, then CMAXENT gives us an approximation of the causal mechanism of $Y$ that is robust to distribution shifts. To be more precise, Grunwald and Dawid (2004) and Farnia and Tse (2016) prove that MAXENT and conditional MAXENT are robust Bayes within the family of distributions satisfying the predetermined constraints. That is, among the distributions whose expectations match those given as constraints, it is the one that minimises the worst-case expected log loss.
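For background, the standard maximum-entropy result behind this answer (a textbook form, not quoted from the paper under review): constraining expectations and maximising entropy yields an exponential-family solution, which is also the minimax (robust Bayes) choice for log loss over the constraint set.

```latex
% Standard MAXENT with moment constraints (textbook form, not from the paper):
\max_{p} \; H(p)
\quad \text{s.t.} \quad
\mathbb{E}_{p}\left[ f_i(X, Y) \right] = \hat{c}_i, \qquad i = 1, \dots, m,
% whose solution has exponential-family form
p^{*}(x, y) \propto \exp\!\left( \sum_{i=1}^{m} \lambda_i f_i(x, y) \right),
% with the Lagrange multipliers \lambda_i chosen to satisfy the constraints;
% p^* also minimises the worst-case expected log loss among all p that meet
% the same constraints (the robust Bayes property cited in the response).
```

With cross-moment constraints between a binary $Y$ and its covariates, this exponential-family form is what produces the logistic-regression-type conditionals discussed elsewhere in the rebuttal.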
---
Rebuttal Comment 1.1:
Title: Thanks for the clarifications
Comment: Thanks for your clarifications. On Q1. Yes, my question was about "incomplete causal sets," i.e. only some subset of Pa(Y) is observed. It's important to note that the robustness of E[y | pa(y)] is not the same for E[y | S], S \subset pa(y). I want to clarify what, if any, of your results change, assuming one only has access to S rather than pa(y).
Additionally, I have increased my score to a 6. | Rebuttal 1:
Rebuttal: We thank all the reviewers for reading our paper and for the interesting questions they asked. We also appreciate that some of the reviewers consider the paper a valuable contribution to the community, sound, and well presented. We invite the reviewers to increase their score if they feel their questions were satisfactorily answered.
In the revised version of the paper we will include an extended discussion of our results in higher dimensions and some potential applications of our theoretical insights, and we will extend our literature review to include some references on overlapping datasets as well as some interesting papers on federated learning that have a setup similar to ours (though without the causal considerations). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SelfCodeAlign: Self-Alignment for Code Generation | Accept (poster) | Summary: - The paper introduces SelfCodeAlign, a fully transparent and permissive self-alignment pipeline for code generation in LLMs without relying on extensive human annotations or distillation from larger models. SelfCodeAlign generates instruction-response pairs from seed snippets, evaluates responses with test cases, and fine-tunes models based on successful executions. The approach shows superior performance over state-of-the-art methods, including GPT-3.5-Turbo-based distillation, particularly on the HumanEval+ benchmark. The pipeline demonstrates effectiveness across various model sizes, emphasizing the benefits of self-generated data over teacher models with smaller performance gaps.
- Overall, I feel that SelfCodeAlign is a very easy workflow to follow and I see much potential for such pipelines that do not depend on distillation or human annotations. I recommend an accept.
Strengths: ## Originality
- The paper adequately cites related work, clearly identifying gaps such as the lack of transparency in existing methods, which is a key motivation for their work.
## Quality
- The submission is technically sound with both quantitative and qualitative analysis.
- The authors provide detailed experimental results, demonstrating significant performance improvements over baselines.
- The inclusion of both large-scale and small-scale model evaluations further strengthens the quality of the research.
- In terms of ethical considerations, they have considered all terms of use as well as the data in code snippets.
## Clarity
- Well organized paper, except for appendix.
## Significance
- The results are highly significant, as SelfCodeAlign achieves performance improvements, notably surpassing models that are an order of magnitude larger in size. This work addresses the challenge of instruction tuning without human annotations or distillation, offering a scalable and transparent solution that advances the state of the art in code generation.
Weaknesses: ## Originality
- Perhaps similar to this paper [Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models](https://arxiv.org/pdf/2312.06585)? Even if it is different, I think it should also be part of your baseline comparison.
## Quality
- The qualitative examples provided in the appendix are excessively long, which may overwhelm the reader and obscure the main differences and contributions of the methodology. It would be beneficial to reduce the number of examples or to shorten them, focusing on highlighting the key differences and improvements over baseline methods. Additionally, the examples are presented in black and white with no descriptions or annotations, making it difficult to discern their significance. Providing clearer, annotated examples with concise explanations would enhance the readability and impact of this section.
- I do not see any weaknesses discussed in this work. For example, in what scenario do you think this methodology does not work? Why is the score still not perfect (or, e.g., below 80% accuracy)?
Technical Quality: 3
Clarity: 3
Questions for Authors: - What about experiments/benchmarking on models that uses the GPT4 family as part of distillation?
- Why did you only limit it to 33B? What about 70B?
- Line 121: What is the difficulty for? It is subjective, so how does the difficulty aid the model/aid in finetuning of models?
- Line 228-233, Table 7: Any reason why the increasing trend is not consistent, for eg for StarCoder2-15B, the score decreased when tuned with DeepSeek-Coder-33B data?
- Line 232, Table 7: Why did you not fill up the blank cells just like the last row? This would have ensured that your statement is true for all models, because you are basing it off the CodeQwen1.5-7B model only.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are stated. However, I think the authors miss one important point – the reliance on the seed snippets. The performance of the model depends on what it was finetuned on, so if the seed snippets are not sufficiently diverse or representative of the target tasks, this could result in a large drop in accuracy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comprehensive review and important suggestions! Also thanks for pointing out the presentation issues in Appendix, which we will fix in the revision. We provide our responses to your questions and concerns as follows.
> Q1: What about experiments/benchmarking on models that uses the GPT4 family as part of distillation?
Please kindly note that we included two GPT-4-distilled models in Table 1: OpenCodeInterpreter-DS-33B [1] and Magicoder-S-DS-6.7B [2]. We also conducted an experiment directly distilling GPT-4 responses to our 74k SelfCodeAlign dataset. As shown in the table below, GPT-4o still generates 14% erroneous code, and training on the dataset produced by SelfCodeAlign outperformed GPT-4o distillation.
|Teacher Model|Execution Pass Rate|HumanEval+|
|-|-|-|
|GPT-4o|86%|65.9|
|CodeQwen1.5-7B|100%|67.1|
[1] Zheng et al. OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement.
[2] Wei et al. Magicoder: Empowering Code Generation with OSS-Instruct.
> Q2: Why did you only limit it to 33B, what about 70b?
Thank you for your suggestion. Due to resource constraints, we were only able to experiment with the 33B model as the largest. We plan to explore larger models in future work as we acquire additional resources.
> Q3: Line 121: What is the difficulty for? It is subjective, so how does the difficulty aid the model/aid in finetuning of models?
Great question. The difficulty attribute is randomly sampled to create problems of varying complexity, which is important for code instruction-tuning datasets according to prior research [3]. To demonstrate the effectiveness of the "difficulty" attribute, we analyzed the 74k SelfCodeAlign dataset. As shown in the table below, the decreasing pass@1 rate as difficulty increases shows that the attribute is meaningful. SelfCodeAlign generates 10 responses for each instruction and keeps any that pass. This approach leads to higher overall success rates (pass@10) and more similar rates across difficulty levels, creating a more balanced final dataset.
|Difficulty|Pass@1|Pass@10|Number of Samples|
|-|-|-|-|
|Easy|30.8|77.2|25.6k|
|Medium|27.9|75.9|24.6k|
|Hard|25.5|74.6|23.7k|
[3] Luo et al. WizardCoder: Empowering Code Large Language Models with Evol-Instruct.
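The pass@1 and pass@10 figures in the table above can be computed with the standard unbiased estimator from the HumanEval paper (Chen et al., 2021); the rebuttal does not state its exact implementation, so this is a sketch:

```python
# Unbiased pass@k estimator (Chen et al., 2021): given n sampled responses
# of which c pass, the probability that at least one of k responses drawn
# without replacement is correct. Illustrative sketch, not the paper's code.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:       # fewer failures than draws: a pass is guaranteed
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples per instruction and, say, 3 passing:
print(round(pass_at_k(10, 3, 1), 3))   # → 0.3
print(round(pass_at_k(10, 3, 10), 3))  # → 1.0
```

This also explains why pass@10 is both much higher and much flatter across difficulty levels than pass@1: a single passing sample out of ten already makes pass@10 equal to 1 for that instruction.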
> Q4: Line 228-233, Table 7: Any reason why the increasing trend is not consistent, for eg for StarCoder2-15B, the score decreased when tuned with DeepSeek-Coder-33B data?
The inconsistent trend aligns with our observation in lines 226-228: when the performance gap between models is small, a base model may benefit more from self-generated data than from data generated by a slightly stronger teacher. While DeepSeek-Coder-33B has a higher HumanEval score, StarCoder2-15B outperforms it on math and code reasoning benchmarks [4], indicating their overall performance gap is not substantial.
[4] Lozhkov et al. StarCoder 2 and The Stack v2: The Next Generation.
> Q5: Line 232, Table 7: Why did you not fill up the blank cells just like the last row?...
Good point. We intentionally left those cells blank. Most models in Table 7 were evaluated using both self-generated and stronger-model-generated data to demonstrate self-alignment effectiveness and show distribution shift impact. The last row is an exception, as it represents our strongest base model, which doesn't have a "stronger" model to generate data from.
> C1: Perhaps similar to this paper Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models…
Great suggestion. We'll discuss this paper in our revision. The key difference is that SelfCodeAlign expands and improves both input and output spaces systematically, while "Beyond Human Data" focuses solely on gathering high-quality outputs from a fixed original dataset. This highlights our approach's broader scope in generating diverse coding problems and solutions.
> C2: The qualitative examples provided in the appendix are excessively long…
We appreciate your feedback. We will improve our presentation and make all the examples concise, clear, and easy to comprehend in our revised manuscript.
> C3: I do not see any weaknesses discussed in this work…for example, in what scenario do you think does this methodology not work? Why is the score still not perfect?
Thank you for noting this. We did discuss limitations in Section 6, but we can expand further. The main weakness of the methodology is its reliance on test execution. In practice, not all solutions can be verified through execution, and generated tests can also be erroneous. These factors contribute to imperfect scores.
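The test-execution step this limitation refers to (keep a response only if it runs and passes its generated tests) can be sketched roughly as follows. This is an illustrative stand-in, not the paper's actual harness, which would also need proper sandboxing and resource limits:

```python
# Minimal sketch of execution filtering: run a candidate solution together
# with its generated tests in a subprocess and keep it only if they pass.
import subprocess
import sys

def passes_tests(solution: str, tests: str, timeout: float = 10.0) -> bool:
    program = solution + "\n" + tests
    try:
        result = subprocess.run(
            [sys.executable, "-c", program],
            capture_output=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
tests = "assert add(2, 3) == 5"
print(passes_tests(good, tests), passes_tests(bad, tests))  # → True False
```

As the response notes, this kind of filter is only as good as the generated tests themselves, which is one reason scores remain imperfect.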
> C4: ...if the seed snippets are not sufficiently diverse or representative of the target tasks, then it might have resulted in a large drop in accuracy
This is an excellent comment. We ensure seed quality and diversity through rigorous mining, filtering, and deduplication, which we describe in Appendix A. We will further elaborate on this point in our revision. | Summary: The authors proposed SelfCodeAlign that finetunes the model based on the filtered data generated by the same model itself. The authors conduct experiments to show that SelfCodeAlign outperforms most open-sourced models that were finetuned on public code dataset.
Strengths: The code generation problem is important and the results (compared to models trained on public dataset) are promising.
Weaknesses: Compared to models that are distilled/trained on non-disclosed data, the performance of SelfCodeAlign is not as competitive. The presentation can be improved; see **questions**.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Can you properly highlight the row in table 1?
2. I would suggest giving a brief summarization of the component analysis after line 92.
3. It seems crucial to me to understand why SelfCodeAlign outperforms other datasets (for example, GPT-generated datasets); is there an analysis on how these datasets differ? Also, you mentioned distribution shift across different models in 4.1; is there a qualitative/quantitative comparison between the code generated by different models?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable review and suggestions! We provide our response as follows.
> Q1: Can you properly highlight the row in table 1?
Thanks for the feedback. We appreciate your suggestions for improving Table 1. Could you kindly provide more specific details regarding your concerns, such as which rows you believe need to be highlighted? This will help us make the necessary adjustments more effectively.
> Q2: I would suggest to give a brief summarization of the component analysis after line 92.
Absolutely. We will ensure to include the summarization in our revised manuscript.
> Q3: …why SelfCodeAlign outperforms other dataset (for example, GPT-generated dataset), is there an analysis on how these datasets differ? Also, you mentioned distribution shift … is there a qualitative/quantitative comparison between the code generated by different models?
Great questions. Regarding your first question, as shown in Section 4.2, execution filtering and code correctness are important to the effectiveness of self-alignment. The Evol-Instruct [1] and OSS-Instruct [2] datasets used in the paper are direct distillations from GPT-3.5-Turbo, which means they may include incorrect code that harms the model. To verify this, we use the GPT-4o model to generate responses for the 74k SelfCodeAlign dataset and compare it with the original dataset, keeping the instructions the same. The table below shows that while GPT-4o is stronger, it still generates 14% erroneous code, and using its outputs for instruction tuning is less effective than using execution-filtered outputs from CodeQwen1.5-7B:
|Teacher Model|Execution Pass Rate|HumanEval+|
|-|-|-|
|GPT-4o|86%|65.9|
|CodeQwen1.5-7B|100%|67.1|
Second, Section 4.1 indicates that a base model can benefit more from data within its own distribution than from a shifted teacher distribution. In the table below, we compare the perplexity of our base model, CodeQwen1.5-7B, on self-generated data and two GPT-generated datasets. It shows that CodeQwen1.5-7B has a lower perplexity on self-generated data, suggesting it is easier to learn from. This observation is consistent with the finding from [3] that self-generated positive data with lower perplexity yields better finetuning performance.
|Dataset|Perplexity|
|-|-|
|SelfCodeAlign|2.12|
|OSS-Instruct|2.20|
|Evol-Instruct|3.76|
For your second question, we compute the perplexity of CodeQwen1.5-7B on the outputs from different models to quantitatively measure the distribution shift. The table below shows that self-generated data achieves the lowest perplexity:
|Model that Generates Data|Perplexity|
|-|-|
|StarCoder2-3B|2.67|
|Llama3-8B|2.73|
|StarCoder2-15B|2.28|
|DeepSeek-Coder-33B|2.25|
|CodeQwen1.5-7B|2.13|
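The perplexity figures in the two tables above are standard token-level perplexity: the exponential of the average negative log-probability the base model assigns to the data's tokens. A minimal sketch of the metric itself (the log-probabilities below are made-up values; in practice they come from a forward pass of the base model over each dataset):

```python
# Token-level perplexity from per-token log-probabilities:
# PPL = exp(-(1/N) * sum_i log p(token_i | prefix)).
# The log-probs here are hypothetical; real ones come from the model.
import math

def perplexity(token_logprobs: list[float]) -> float:
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

logprobs = [-0.5, -1.2, -0.3, -0.9]   # hypothetical per-token values
print(round(perplexity(logprobs), 3))  # → 2.065
```

Lower perplexity on a dataset means the model finds that text more predictable, which is the sense in which self-generated data is "closer" to the base model's own distribution.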
Please also kindly refer to Appendix C.2, which provides qualitative examples of the outputs generated by different models at each step of the SelfCodeAlign framework.
[1] Luo et al. WizardCoder: Empowering Code Large Language Models with Evol-Instruct.
[2] Wei et al. Magicoder: Empowering Code Generation with OSS-Instruct.
[3] Setlur et al. RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-Fold.
> C1: Compare to models that are distilled/trained on non-disclosed data, the performance of SelfCodeAlign is not as competitive.
We want to kindly highlight that SelfCodeAlign still excels over many larger, proprietary models, including Gemini Pro, Mistral Large, and CodeLlama-70B-Instruct. As also mentioned by Reviewer 4fJs, “The results are highly significant, as SelfCodeAlign achieves performance improvements, notably surpassing models that are an order of magnitude larger in size”. Additionally, in Q3, we demonstrate that our approach outperforms direct distillation from the much stronger GPT-4o model.
Meanwhile, SelfCodeAlign is the first fully transparent and permissive pipeline for self-aligning code LLMs without extensive human annotations or distillation. We are more than happy to further discuss this point during the discussion period.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, my concerns are addressed, and I am raising my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our response. We truly appreciate it! Should you have any new questions or concerns, please don't hesitate to let us know. | Summary: This paper introduces SelfCodeAlign, an entirely transparent and permissive pipeline designed for self-aligning code large language models without the need for human annotations or distillation. By applying SelfCodeAlign to CodeQwen1.5-7B, the authors generated a dataset containing 74k instruction-response pairs. They then fine-tuned CodeQwen1.5-7B using this dataset, resulting in SelfCodeAlign-CQ-7B, which demonstrates robust performance on the HumanEval+ benchmark.
Strengths: 1. The performance is satisfactory: SelfCodeAlign-CQ-7B achieves a pass@1 score of 67.1 on HumanEval+, outperforming larger models like CodeLlama-70B-Instruct (65.2), which is a significant achievement.
2. The process is automated: This paper introduces a novel self-alignment pipeline including concept extraction from seed code, task generation, multiple response generation, and execution validation. This approach is independent of human annotations or large-model distillation, making it easy to apply.
3. Scalability: Experiments demonstrate the method's applicability to models ranging from 3B to 33B parameters, showing good scalability across different model sizes.
Weaknesses: 1. Lack of Diversity in Generated Tasks: While the method aims to produce a variety of coding tasks, it is unclear how this diversity is achieved or measured. There is a risk that the generated tasks may be biased towards certain types of coding problems, which could limit the model's ability to generalize effectively.
2. Overreliance on Self-Generated Tests: The method relies heavily on tests generated by the model itself to validate responses. This self-validation approach could result in a feedback loop where the model learns to create tests that are easy to pass, rather than generating truly challenging or comprehensive tests. The paper does not address how this potential issue is mitigated.
Technical Quality: 3
Clarity: 3
Questions for Authors: Refer to the weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Refer to the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comprehensive review and suggestions! We address your questions as follows.
> Q1: Lack of Diversity in Generated Tasks: While the method aims to produce a variety of coding tasks, it is unclear how this diversity is achieved or measured…
Good question. We ensure task diversity through a rigorous seed gathering pipeline (Appendix A) with quality filtering and deduplication, and 21 carefully designed, diverse few-shot examples (Appendix D) to enhance response generation variety. To measure diversity, we conducted comprehensive evaluations across 7 coding benchmarks covering 4 different problem types. The evaluation results consistently demonstrate SelfCodeAlign's effectiveness in different coding tasks.
> Q2: Overreliance on Self-Generated Tests: …This self-validation approach could result in a feedback loop where the model learns to create tests that are easy to pass, rather than generating truly challenging or comprehensive tests…
This is a great point. We've implemented two key strategies to mitigate the risk of generating overly easy problems. First, during instruction generation, we have an attribute “difficulty” whose value is randomly sampled from easy/medium/hard. Second, for each instruction, we generate 10 different responses and choose any of the passing responses. This ensures that the model is exposed to a variety of problem-solving approaches and not just always the easiest path. In the table below, we analyze the 74k SelfCodeAlign dataset. The decreasing pass@1 rate with increasing difficulty shows the attribute is meaningful, while the higher and more consistent pass@10 rates help to create a balanced dataset:
|Difficulty|Pass@1|Pass@10|Number of Samples|
|-|-|-|-|
|Easy|30.8|77.2|25.6k|
|Medium|27.9|75.9|24.6k|
|Hard|25.5|74.6|23.7k|
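For context, the pass@1 and pass@10 rates in the table above are presumably computed with the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021); this is an assumption on our part, since the rebuttal does not state which estimator was used. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability
    that at least one of k samples drawn (without replacement) from
    n generations, c of which pass the tests, is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 10 generations of which 1 passes, `pass_at_k(10, 1, 1)` gives 0.1, while `pass_at_k(10, 1, 10)` gives 1.0, matching the intuition that pass@10 is far higher than pass@1.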
We also experimented with directly distilling GPT-4o responses to the SelfCodeAlign dataset. As the table shows, our approach outperforms direct distillation from GPT-4o, despite using a weaker base model. Notably, even GPT-4o generates 14% erroneous code, highlighting the importance of execution-based validation in our method.
|Teacher Model|Execution Pass Rate|HumanEval+|
|-|-|-|
|GPT-4o|86%|65.9|
|CodeQwen1.5-7B|100%|67.1|
---
Rebuttal Comment 1.1:
Comment: Hi,
Thank you for your thoughtful response. My main concern has been addressed, and I'd like to update my score to a 5.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback! We sincerely appreciate the time you took to read our response. If you have any further questions or concerns, please feel free to reach out. | Summary: This paper proposes a pipeline for generating synthetic instruction tuning data. The method consists of the following steps: 1. data filtering is applied to seed coding data to select high quality examples; 2. base LLM is used to generate a set of coding concept and category based on the seed data; 3. base LLM is used to generate coding instruction, response and test; 4. generated examples are selected based on the code execution result.
Strengths: 1. the paper focuses on using base model to generate synthetic data to self-improve, which is an interesting and useful angle for synthetic data generation
2. the method is evaluated on several different coding LLM benchmarks which shows the effectiveness of the method
3. there are also ablation experiments verifying the contribution of specific design choices in the framework.
Weaknesses: While using base model to self-improve is an interesting and useful direction, synthetic data generation could be improved by using a stronger LLM than the base model. It is not clear from the paper whether the proposed framework would be effective compared to previous methods if we use a stronger LLM to synthesize the data. The synthetic data generation could also be potentially improved by having multiple rounds of data generation process.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Have you tried this framework using stronger LLM to generate synthetic data?
2. Can you get even better performance by running several rounds of data generation with improved base model?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable review and suggestions! Our responses to your questions are as follows.
> Q1: Have you tried this framework using stronger LLM to generate synthetic data?
Thank you for your question. We want to kindly highlight that we explored this point in Section 4.1, which examines the effectiveness of the SelfCodeAlign framework on various base models. Table 7 demonstrates that all models benefit from the framework, and weaker models (e.g., StarCoder2-3B) can achieve larger performance gains when trained on data synthesized by stronger models (e.g., DeepSeek-Coder-33B).
> Q2: Can you get even better performance by running several rounds of data generation with improved base model?
Thank you for your insightful suggestion. We agree that an improved base model could potentially produce better synthetic data. While time constraints during the rebuttal period prevent us from conducting additional experiments, we appreciate your point and plan to incorporate this idea with relevant experiments in our final revision. | Rebuttal 1:
Rebuttal: We deeply appreciate all the reviewers for their insightful feedback and suggestions for our work. In our responses below, we address each primary question (denoted as Q) or comment (denoted as C) raised by the individual reviewers. Additionally, we will revise our paper to incorporate editorial suggestions. Should there be any misunderstandings of the questions, please kindly let us know; we are eager to communicate with all the reviewers throughout the discussion period. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning | Accept (poster) | Summary: The authors address a key issue in personalized federated learning, which enables clients with heterogeneous model structures to participate in federated learning with consideration of effectiveness and efficiency. This method is based on model assembly and reassembly, in which the blocks and layers can be treated as modules. After that, the server selects the personalized models and assigns them to the clients. The received models will be used as the teacher to guide the local update. The authors run extensive experiments to demonstrate the effectiveness of their algorithm.
Strengths: 1. This paper is well-organized and clearly motivated. Its logical structure and presentation aid comprehension, while the clear and accessible framework and figures enhance readability. Experiments, discussions, or analyses robustly support each claim.
2. The focus on controllability renders the algorithm more applicable in real-world scenarios, allowing for greater human involvement in the model generation process. The authors effectively demonstrate the utility of their design through experimental results.
3. The authors have performed extensive experiments, including principal studies on image datasets, ablation studies, hyperparameter evaluations, and thorough discussions. These efforts confirm the validity of the techniques and provide deep insights into the paper's contributions.
Weaknesses: 1. Based on the algorithm itself, it includes the reassembly, assembly, matching, and other operations. The reviewer may be concerned about the computational burden compared with the one without any controllability.
2. How the anchor block is selected, and why, needs to be stated clearly.
3. According to the experimental results, the reviewer wonders how this approach can be used with public data with/without labels, and what the possible reason is for its robustness to public data with or without labels.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please see the above weaknesses.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: No negative societal impact to the best of the reviewer’s knowledge. The study of K should be included in the main content, as it is an important part of the algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `>>> W1`
Thank you for the reviewer's valuable feedback. We addressed the computational cost comparison in Section 4.5, using computation time as a metric against pFedHR. The results, presented in Figure 4, show that pFedClub generally requires less time than pFedHR and offers more consistent performance. Additionally, we detailed the system running time comparisons with the baseline in Appendix E. Our findings demonstrate that to achieve the target accuracy, our proposed approach is more time-efficient than pFedHR, further underscoring its efficiency.
`>>> W2`
Thank you for the question. The anchor block selection process is detailed from line 162 to line 169. For each block, we implement a random substitution, with the first substituted block designated as the anchor. This randomness enhances the diversity of candidate model generation. For a clearer explanation of this process, we have included a detailed algorithm in Appendix A, from lines 3 to 7.
`>>> W3`
Thank you for the reviewer's question. On the server side, public data is used solely to assist in (1) calculating the representations of the blocks for function identification and clustering, and (2) conducting stitching layer tuning. These operations utilize the data as foundational information without exploiting any specific attributes of the data itself. Consequently, our proposed approach is compatible with both labeled and unlabeled public data.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I will keep my score positive as it is. Thank you. | Summary: This paper presents a controllable model reassembly approach to enable heterogeneous model cooperation in federated learning. The designed CMSR algorithm provides the control of the space to save the computational cost. Furthermore, the approach also achieves model personalization for each local client. They test the proposed approach on benchmark datasets and compare with other baselines under different settings.
Strengths: 1, This paper targets one of the key challenges in federated learning, namely model heterogeneity. To the best of the reviewer's knowledge, most existing related works are based on knowledge distillation. This work presents a controllable approach that conducts block assembly and reassembly from local models to achieve heterogeneous model cooperation and model personalization. The idea itself is interesting and practical.
2, They take efficiency, generalization, and personalization into consideration. They provide comprehensive analysis and detailed discussion under various settings, which soundly support their claims.
3, Their presentation and logic are both easy to follow and understand. The framework, experiment results, and discussion are clearly presented.
Weaknesses:
1, In their approach, the authors employ K-means clustering. The reviewer is curious about how the value of K is selected and how this selection influences the results.
2, One of the main contributions compared to pFedHR is the enhanced controllability. I am interested in understanding the nature of this controllability, specifically the extent to which the generated models can be controlled.
3, The paper focuses solely on image classification. Adhering to the review guidelines, the reviewer is not requesting additional experiments, but the reviewer is interested in exploring whether the existing methodology could be applicable to other tasks.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1, Please address the concerns in the weakness part.
2, After the blocks are stitched, the parameters of the blocks and/or the stitching layer would be trained? If trained, how are they trained? If not, how do you deal with the parameters of the stitching layers?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: There are no potential negative societal impacts of the work. There are two limitations as follows:
(1) This approach still raises extra computational cost at the server side.
(2) That would be great if this approach can be extended to other tasks and other domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `>>> W1`
Thank you for the reviewer’s question. In the main paper, we set K to 4, following the methodology outlined in [1]. To further explore how the value of K affects our results, we conducted a hyperparameter study on K. The results of this study are presented in Table 5, with detailed analysis provided in Appendix G due to page constraints. We observed that performance slightly improves with a larger value of K within a certain range. This improvement may be due to the fact that a larger K can differentiate functions more specifically.
[1] Wang et al. "Towards personalized federated learning via heterogeneous model reassembly." Advances in Neural Information Processing Systems 36 (2024).
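For readers unfamiliar with the clustering step, grouping blocks by their functional representations with K-means reduces to standard Lloyd iterations. Below is a minimal, self-contained sketch; the block-representation matrix `X` and the deterministic farthest-point initialization are our own illustrative choices, not necessarily the paper's exact implementation:

```python
import numpy as np

def kmeans_blocks(X, k, iters=50):
    """Plain Lloyd's k-means over block representations X (n, d).
    Farthest-point initialization keeps the sketch deterministic."""
    centers = [X[0]]
    for _ in range(k - 1):
        # pick the point farthest from all current centers
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[d2.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign each point to its nearest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # recompute centers, skipping clusters that emptied
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

With K = 4 as in the rebuttal, each resulting cluster would correspond to one functional group of blocks used for substitution.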
`>>> W2`
Thanks for the reviewer’s question. Our controllability is demonstrated in Section 3.2.3, controllable block-wise substitution. Furthermore, we provided a controllability analysis in Section 4.4. We define a hyperparameter $\phi$ to quantify the controllability of the personalized models generated by pFedClub, and we additionally introduce the flexible controllability parameter $\eta$ to decide the generated model size. The results are shown in Figure 3: the models generated by pFedClub (shown in red) and pFedClub+ (shown in green) are significantly smaller than those generated by pFedHR (shown in blue).
`>>> W3`
Thank you for the reviewer's valuable comments. As we mentioned in the limitations section, our experimental results are currently limited to image classification tasks. However, given the generalized design and intuitive nature of our approach, we believe it can be adapted to other tasks, provided that the models can be assembled and reassembled. We are committed to exploring these possibilities in future research directions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. After reading the answers, they have addressed my concerns. I would like to increase my score appropriately.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your time to check the rebuttal and your recognition of our work. | Summary: The paper proposes a `pFedClub` method for personalized federated learning that enables controllable heterogeneous model aggregation, addressing limitations of existing approaches such as lack of personalization, privacy concerns, and uncontrolled model size growth.
Extensive experiments conducted on three benchmark datasets using various CNN-based model structures validate the effectiveness of the proposed method under both IID and non-IID settings.
Strengths: - They conduct extensive experiments, including a discussion of the hyperparameter $K$, to validate the controllability of the proposed method and its computational efficiency on the server.
Weaknesses: 1. The writing and structure of the paper need improvement, particularly in the "Order-constrained Block Search" paragraph. The concept of order is unclear, especially the meaning of $q < u$ in line 177. It's not evident whether this refers to a similarity score or another metric. The author should provide a clearer explanation of this constraint.
2. In equation (1) on line 141, the meaning of 'CKA' is not defined. The authors should explain what CKA stands for and how it's calculated. Additionally, it's unclear whether this computation occurs on the server. If clients must transmit input $x_{m,i}^t$ and output to the server, this raises privacy concerns that should be addressed.
3. The paper doesn't specify whether the features $x_{m,i}^t$ and $x_{n,j}^t$ in equation (1) have the same dimensions. This should be clarified to ensure a proper understanding of the similarity calculation.
4. The sampling process for the Anchor Block selection is ambiguous. The probability distribution over all models for this selection is not clearly defined.
Overall, the authors should formulate the proposed method more rigorously, using well-defined notations and providing clear explanations for each step of the algorithm. This would significantly improve the paper's readability and reproducibility.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Table 5 indicates that the model achieves the best performance in both IID and non-IID settings when K equals the number of activated clients. However, this raises a question about the necessity of the K-means method in this scenario. When K equals the number of activated clients, the input data naturally satisfies the minimization objective of the K-means algorithm, rendering the clustering step redundant. It would be valuable to explore how K affects the results when the number of activated clients increases, for example, to 10 or 20. This analysis would provide deeper insights into the scalability and robustness of the proposed method.
2. While the current experiments focus on CNN-based structures, it would be beneficial to validate the proposed method on other neural network architectures. Specifically, evaluating pFedClub on Transformer-based structures would demonstrate its versatility and applicability across different model types. This expansion of the experimental scope would strengthen the paper's contributions and broaden its potential impact in the field of federated learning.
3. The supplementary material should include a comprehensive set of implementation details for all methods used in the comparisons. This should encompass not only the proposed pFedClub method but also the baseline approaches, such as their hyperparameters.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `>>> W1`
Thank you for pointing this out. We follow existing work [1] to maintain the natural order of the blocks, as we aim for the generated candidate models to be similar to handcrafted network structures. Here, the natural order index is defined as the position of each block. For example, CNN1 listed in Appendix Section C consists of four layers: Conv1, Conv2, FC1, and FC2, with corresponding block order indexes of 1, 2, 3, and 4. In the generated model, we do not expect the FC1 layer to appear before the two convolutional layers, Conv1 and Conv2. Therefore, we constrain the order of blocks when generating candidate models.
We also have an example in Figure 2. In step 2: Order-constrained Block Search, if the first selection for the target network (in red) is the second block (in green), the next selected block for block 2 of the target network must have an index larger than 2. In this example, we select two “3” blocks as candidates for the second block replacement. Similarly, the third block replacement will use indexes “4” and “5”. We will add these details in the final version of our paper.
[1] Yang et al. Deep model reassembly. NeurIPS, 2022.
`>>> W2`
Thanks for the reviewer’s suggestion. CKA stands for centered kernel alignment, a method widely used to measure the similarity of neural network representations [2]. Specifically, given two Gram (kernel) matrices of feature representations $K$ and $L$, we first calculate the Hilbert-Schmidt Independence Criterion (HSIC) as $\text{HSIC}(K,L) = \frac{1}{(n-1)^2}\,\text{tr}(KHLH)$, where $n$ is the number of samples used to build the representations and $H$ is the centering matrix. Then $\text{CKA}(K,L) = \frac{\text{HSIC}(K,L)}{\sqrt{\text{HSIC}(K,K)\,\text{HSIC}(L,L)}}$. Due to the page limit, we did not include these details in the original paper; we will add them in the final version.
As we stated in Sec. 3.2, all the operations including CKA occur on the server. The clients do not need to transmit x and output to the server. Instead, clients only transmit their model parameters to the server, following the conventional federated learning approach. Upon receiving these parameters, the server uses public data as input to the initial block of models, with the output of each block serving as input for the subsequent block. We have detailed the experimental settings with public data, demonstrating that our method performs effectively with both labeled and unlabeled public data, outperforming other baselines.
[2] Kornblith et al. "Similarity of neural network representations revisited." ICML, 2019.
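To make the HSIC/CKA computation above concrete, here is a minimal linear-kernel sketch. The representation matrices `X` and `Y` are hypothetical stand-ins for block outputs on the public data, and the paper may use a different kernel choice:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear-kernel CKA between representations X (n, d1) and Y (n, d2).
    The sample count n must match; feature dims d1 and d2 may differ."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    K, L = X @ X.T, Y @ Y.T                       # n x n Gram matrices
    hsic = lambda A, B: np.trace(A @ H @ B @ H) / (n - 1) ** 2
    return float(hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L)))
```

Note that because CKA operates on the n x n Gram matrices rather than on the raw features, it naturally handles blocks whose output dimensions differ, which is the point addressed in W3 below.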
`>>> W3`
Thanks for the reviewer’s valuable question. To ensure the generality of our approach, the dimensions of the two features do not have to be the same; CKA is able to handle this situation, as explained in our response to W2 [2].
`>>> W4`
Thanks for the reviewer’s suggestion. The anchor block selection process considers the groups G. For model $w_m^t$, assuming its first block $B_{m,1}^{t}$ belongs to the group $G_k^t$, we randomly select one block from $G_{k}^{t}$ as the substitution. Any block from the group $G_k^t$ has an equal probability $1/G_k$ of being selected (Line 169), where $G_k$ is the number of blocks belonging to $G_k^t$. Compared with a strategy that uses the two blocks with the largest similarity score, this increases the diversity of the generated candidates. At the same time, this random strategy is effective, generalizes to all models, and is simple to implement. As the reviewer suggested, we will add more description of the anchor block selection process in the final version.
We do appreciate the reviewer’s suggestion. We introduced the proposed algorithm CMSR in the appendix A. As the reviewer suggested, we will further enhance the readability and reproducibility in the final version.
`>>> Q1`
The clustering process is based on block functionality (Line 139) and, by design, has no direct relation to the number of clients. With an increasing cluster number within some range, the functions of each block are differentiated at a finer granularity, which may boost performance, as shown in Table 5.
We fully agree with the reviewer about exploring the number of activated clients. We have conducted these experiments and show the results in Table 4, where we maintain the same value of K = 4 as in the main experiment and study different total numbers of clients and different active ratios under the IID and non-IID settings. Furthermore, we provide the experimental observations and the corresponding model scalability analysis in Appendix F. Together, the results in Tables 4 and 5 and their analyses in Appendices F and G demonstrate the scalability and robustness of our proposed method.
`>>> Q2`
Thanks for the valuable question. In our limitation part, we admitted that our existing approach has not been tested on other tasks, such as those with the transformer-based structures. We sincerely appreciate the constructive suggestions from the reviewer. We will add more discussion in our limitation part and expand our work following this direction in the future.
`>>> Q3`
We appreciate the reviewer's suggestion. Our experiment setup, detailed in Section 4.1, includes descriptions of the datasets, baselines, client model deployment, and implementation specifics. We have ensured that common parameters are consistently maintained across different baselines while retaining the unique parameters of each baseline at their default values. As suggested by the reviewer, we will include a more comprehensive set of implementation details in the final version of our paper.
We are sincerely grateful for the reviewer's constructive questions and suggestions. We hope our responses have adequately addressed your concerns. Thank you once again for your valuable feedback, which has significantly contributed to enhancing the quality of our paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer AXhq,
We greatly appreciate your constructive comments, which have significantly clarified our paper. As the discussion phase nears its end, could you kindly confirm whether our responses have satisfactorily addressed your concerns?
Thank you,
Authors of Paper 3650
---
Rebuttal Comment 1.2:
Comment: Thank you for your response. I am inclined to raise my score to 5. However, I recommend that the authors include more details and explanations to improve the readability of the paper.
The approach to aggregating heterogeneous models is particularly interesting. I encourage the authors to validate the proposed method on a broader range of neural network architectures.
---
Reply to Comment 1.2.1:
Comment: Dear reviewer,
Thanks for your time to consider our reply. We appreciate your valuable comments. We will add more details and explanations to improve the readability of our paper in the camera-ready version. Also, we will extend our proposed approach to a broader range of neural network architectures as the reviewer suggested.
Thank you. | Summary: This paper addresses heterogeneous model aggregation in federated learning. To this end, the authors introduce pFedClub, which aims to generate personalized models for federated clients while ensuring that the models remain within size constraints. Specifically, pFedClub consists of three main steps: first, it decomposes models into multiple blocks and clusters them using the K-means algorithm; second, it replaces original blocks with others from the same clusters to create a set of candidate models; third, it selects the optimal personalized model for each client using a public dataset and an initial model transferred to the server. Extensive experiments illustrate its significant improvement over existing methods in this field.
Strengths: 1. The work is well motivated and explores an interesting problem in federated learning.
2. The presentation of this paper is clear, and the authors comprehensively and intuitively describe the proposed pFedClub.
3. The paper conducts sufficient experiments and compares the proposed method with previous works. The numerical results demonstrate the superiority of pFedClub.
Weaknesses: 1. The proposed work requires a public dataset, which is unsuitable in federated learning due to the privacy concerns. Is this work applicable to a public dataset different from the training data distribution? For example, the clients collaboratively train a model for CIFAR-10, while the server holds a public dataset from ImageNet.
2. Although the proposed work achieves remarkable under convolutional neural networks, it is unclear how pFedClub performs under transformers. Is the proposed work suitable for a setting where clients hold three different sizes of LLM, i.e., LLaMA-7B, LLaMA-13B, and LLaMA-70B?
Technical Quality: 3
Clarity: 3
Questions for Authors: See **Weaknesses**
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See **Weaknesses**
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `>>> W1`
Thank you for your comments. We would like to emphasize that our research question addresses a significantly challenging problem, where each client maintains a unique model. Aggregating these heterogeneous models on the server is particularly difficult without the use of public data. Additionally, we adhere to established work that serves as baselines in our experiments, which also incorporate public data during model training. While we acknowledge that differences in data distributions between public and private data can affect performance, our model design mitigates this issue, similar to the approach taken by pFedHR.
Following the reviewer’s suggestion, we conducted an additional experiment where clients train models using CIFAR-10, and the server utilizes a public dataset from ImageNet. All other hyperparameters were kept consistent with those listed in Table 1 for Model Zoo 1 under the non-IID setting. The results of this experiment are as follows:
| Public Data | Labeled | Unlabeled |
|-------------|---------|-----------|
| CIFAR-10 | 73.62 | 72.87 |
| ImageNet | 71.50 | 69.33 |
We observe that using ImageNet as the public dataset does decrease performance compared to using CIFAR-10 directly. However, the performance drop is limited, and it still outperforms the best baseline, pFedHR (Labeled: 69.88 and Unlabeled: 68.54), as shown in Table 1.
`>>> W2`
Thank you for your feedback. In our current work, we focused on reassembling convolutional neural networks in our experiments, as they consist of distinguishable layers. However, our model design is flexible and can also be applied to transformer-based models, as demonstrated in [1], which segments transformer models into functional blocks and reassembles them into candidates. We did not test transformers or the more advanced models mentioned by Reviewer 5tH7 for the following reason: this work focuses on controllable personalized model generation, specifically designed for resource-constrained clients, and training transformer-based models from scratch typically requires a large amount of training data, which may be impractical for clients using mobile devices due to computational costs. Therefore, we did not include experiments with transformers in this study. We can include additional results involving transformer models in the final version of our paper if required.
[1] Yang, Xingyi, Daquan Zhou, Songhua Liu, Jingwen Ye, and Xinchao Wang. "Deep model reassembly." Advances in neural information processing systems 35 (2022): 25739-25753.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I find the rebuttal addresses my concerns. I believe this is a good paper and benefits ML community. Therefore, I decide to increase my score from 6 to 8.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We are delighted to have addressed your concerns and appreciate the improved score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Interpreting Learned Feedback Patterns in Large Language Models | Accept (poster) | Summary: This submission tries to tackle one big question in the field of interpreting the data-driven preference learnt by RLHF in human language. The technical path this submission took is to train probe on SAE features to distinguish between good and bad RLHF features.
Strengths: + The attempt to interpret what happens during RLHF training is a good direction to pursue.
+ Releasing the SAE direction and training code could be excellent news for the community.
Weaknesses: + Unclear why probing must be done on top of SAE features. SAEs greatly increase the dimensionality of the features, leading to overfitting: you can find a separating plane for virtually any classification task in such a high-dimensional space. A comparison to normal probing is lacking.
+ Considering the problem from a dynamical perspective could be fruitful. Note that the authors did ablate the features and observe a performance drop on the preference dataset. But it would also be interesting to see the progress of RLHF training: how it warps the feature spaces, and even the SAE features' relative importance.
Technical Quality: 3
Clarity: 4
Questions for Authors: + I wonder if it could be interesting to conduct the same analysis on the reward model, if the reward model is another language model that is open-weight and trained. Can we compare the two representation space, one trained for discrimination and the other trained for generation?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful review.
We are pleased that you found our research direction good, and that releasing our SAE infrastructure would be beneficial to the community.
## Clarification on SAE Feature Probing
> "Unclear why have to probe on top of SAE feature. SAE greatly increase the dimensions of the features, leading to overfitting… Lacking comparison to normal probing."
We want to emphasize several key points:
1. Our SAEs do not modify the dimensionality of the activations: the hidden size is the same as the input size.
2. The accuracy of the probes we report in our paper is the accuracy on an unseen test dataset.
3. Based on your response, we have included a comparison to probing on the raw activations in the PDF attached to the rebuttal visible to all reviewers (Table 2).
Our findings show that probing on the SAE outputs does not meaningfully affect the accuracy of the probes, while offering the benefit that the probed inputs are more interpretable ([Cunningham et al.](https://arxiv.org/abs/2309.08600), [Bricken et al.](https://transformer-circuits.pub/2023/monosemantic-features)).
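As a toy illustration of point 1 above, a tied-weight sparse autoencoder whose hidden layer matches the input dimension leaves the dimensionality of the probed representation unchanged. This is a hedged sketch, not the authors' code; all names (`sae_forward`, `W`, etc.) are invented here:

```python
import numpy as np

# Hypothetical sketch: a tied-weight sparse autoencoder whose hidden
# size equals the input size, so the encoded representation that the
# probes consume has the same dimensionality as the raw activations.
rng = np.random.default_rng(0)

d = 8                              # activation dim == hidden dim
W = rng.normal(size=(d, d)) * 0.1  # tied weight matrix
b_enc = np.zeros(d)

def relu(x):
    return np.maximum(x, 0.0)

def sae_forward(a):
    """Encode activations `a`; decode with the transposed (tied) weights."""
    h = relu(a @ W + b_enc)  # sparse hidden code, same size as the input
    a_hat = h @ W.T          # reconstruction via the tied decoder
    return h, a_hat

a = rng.normal(size=d)       # a raw activation vector
h, a_hat = sae_forward(a)    # h.shape == a.shape: dimensionality unchanged
```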
## Dynamical Perspective on RLHF Training
> "Considering the problem from a dynamical perspective can be fruitful… it's also interesting to see the progress of RLHF training, how it warps the features spaces, even the SAE features' relative importances."
We agree that studying the problem from a dynamical perspective would be very interesting. In response:
1. We will mention this as an exciting direction for future work in the camera-ready version of the paper.
2. We aim to include results of our method at various checkpoints throughout fine-tuning in the camera-ready version of our paper.
## Analysis of Reward Models
> "I wonder if it could be interesting to conduct the same analysis on the reward model… Can we compare the two representation space…?"
We agree this is another interesting direction:
1. We will include this in our discussion of future work.
2. [Riggs et al.](https://www.lesswrong.com/posts/5XmxmszdjzBQzqpmz/interpreting-preference-models-w-sparse-autoencoders) recently trained some SAEs on preference models with good results, suggesting this approach is feasible.
Challenges and potential solutions:
- Our fine-tuning tasks don't have true reward models (VADER uses lexicon labels, helpful-harmless and toxicity tasks use pairwise preference data).
- If this is a significant concern, we could train classification models for the pairwise preference data and compare the representation space of this classification model with our probes/sparse autoencoders.
The results in the PDF attached to the rebuttal visible to all reviewers will be incorporated into the camera-ready version of our paper.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: I appreciate the reply from the author of this submission.
I think all of them make sense and in all addressed my concerns. Looking forward to seeing progresses on the mentioned directions. | Summary: The goal of this paper is to predict where patterns in LLM activations learned from RLHF diverge from the human preferences used for the RLHF training.
Given a base model and an RLHF tuned version of it, the method involves first identifying the 5 layers with highest parameter difference according to an L2 norm. Then two auto-encoders are trained over the activations from these layers. The encoder and decoder weights of the autoencoder are tied, and the output from these is preferred for studying the activations as they are expected to be more sparse, condensed and interpretable than the raw activations.
At inference time, for each input, the activations from the high divergence layers are computed, passed through the autoencoder and then aggregated. Given a pair of contrasting inputs, a linear probe is trained to predict activation deltas using the above aggregated autoencoder output as input. The output of the probe is meant to be a predicted feedback signal that can be compared to the ground truth fine tuning feedback. For sentiment analysis, a strong correlation is observed with the Pythia-160m model but this is weaker for Pythia-70m and GPT-Neo-125m.
As another validation of the probes, GPT-4 is used to generate explanations of the features in the decoder weights of the autoencoders that are activated when the predicted feedback is positive. GPT-4 is then prompted to predict whether or not these are relevant to the fine-tuning task, based on a language description of the task. It is found that a feature identified by GPT-4 as relevant to the fine-tuning task is between twice and three times as likely to be correlated with predicted positive feedback.
Strengths: The paper is quite accessible for a reader whose area of focus is not interpretability.
Weaknesses: As a reviewer not particularly experienced with work on interpretability, the takeaways of this paper are somewhat unclear. For example, if we fine-tuned a new model on one of the datasets used in this paper and trained probes in a similar way from its activations, what would that tell us about the difference between the base and RLHF versions of that model? Alternatively, is the goal to discover information about a model where the base and RLHF-tuned versions are available but the data is not, and hence we do not know what factors might have influenced the preference annotations that guided the annotation?
I did not fully understand how the activation deltas are calculated. While most of the paper is fairly readable to a reviewer with a different area of focus, this aspect could be improved.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. I don't feel like I understood the concept of the activation delta. The paper states "We compute the activation deltas for a given contrastive triple as the difference between the positive and neutral element and negative and neutral element under the ℓ2 norm". For any input x, there is a set of values $\hat{a}$. For calculating an L2 norm these still need to somehow be aggregated. Since the probe input $A_{concat}(x)$ is already a concatenation of these values, I assume it is not simply an L2 norm of the two $A_{concat}$ vectors, as then it is unclear what the probe would learn.
2. Assuming that the probes trained in this paper do obtain information about the preferences underlying human preference data, how do we make use of that information?
Confidence: 1
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The paper has discussed limitations but not broader impact of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review and appreciate that you found our paper accessible.
## Clarifying the Paper's Objectives and Takeaways
> "…the takeaways of this paper are somewhat unclear. … if we finetuned a new model on one of the datasets used in this paper and trained probes in a similar way from its activations, what would that tell us about the the difference between the base and RLHF versions of that model? Alternately, is the goal to discover information about a model where the base and RLHF-tuned versions are available but the data is not, and hence we do not know what factors might have influenced the preference annotations that guided the annotation."
Our method's primary objective is to **measure the extent to which a fine-tuned model has learned the fine-tuning feedback**. To clarify:
- We do not intend to contrast base and fine-tuned models (although Appendix C briefly investigates this).
- Our goal is not necessarily to find information about fine-tuned models where the fine-tuning data/reward model is unknown.
We hope practitioners will use our method to evaluate and improve their post-training techniques based on how well they cause models to learn the fine-tuning feedback.
Key findings and implications:
1. Our method considers model internals, unlike other metrics for the success of RLHF (e.g., fine-tuning loss or output evaluations).
2. While our probes can accurately recover whether feedback is positive or negative (Table 3), they struggle with more granular feedback used in PPO fine-tuning (Table 4).
3. This could indicate either:
a) The model hasn't learned fine-tuning feedback in sufficient detail, or
b) Low probe accuracy (which we argue against, see below)
We support the accuracy of our probes through:
- Showing similarity between probe-identified and GPT-4-identified LFP-related features (Table 6)
- New results in our global rebuttal PDF:
- Probes less correlated with fine-tuning feedback still learn related patterns (Table 1 in the PDF attached to our global rebuttal)
- Inputs can often be separated by reward through dimensionality reduction (Figure 1 in the PDF attached to our global rebuttal)
- Probe accuracy in predicting the label of a word in the VADER lexicon is correlated with the frequency that the fine-tuned model generates that word (Figure 2 in the PDF attached to our global rebuttal).
These results suggest that fine-tuned models may not have learned fine-tuning feedback in a detailed manner, rather than indicating probe inaccuracy.
## Clarification on Activation Delta Calculation
> "I did not fully understand how the activation deltas are calculated… I assume it is not simply an L2 norm of the two A_concat vectors, as then it is unclear what the probe would learn."
The activation deltas are calculated as follows:
For any contrastive triple (positive, negative, neutral), we provide two pairs of input to the linear probe:
a) Input$_1$ = $\mathrm{SAE_{repr}}(\text{positive})$
Output$_1$ = $+\|\mathrm{SAE_{repr}}(\text{positive}) - \mathrm{SAE_{repr}}(\text{neutral})\|$
b) Input$_2$ = $\mathrm{SAE_{repr}}(\text{negative})$
Output$_2$ = $-\|\mathrm{SAE_{repr}}(\text{negative}) - \mathrm{SAE_{repr}}(\text{neutral})\|$
The L2 norm here serves as a signed distance measure between either the positive and neutral or negative and neutral pairs.
Although it might be unclear what the probe would learn if trained on a smaller number of examples, the only consistent difference over many examples should be the distance in activation space related to how different the feedback is in fine-tuning. For example, although a given positive and neutral pair of inputs may be separated in activation space in many ways other than just their implicit feedback, the only consistent difference over thousands of positive and neutral pairs should be the difference in their implicit feedback. In the PDF attached to the rebuttal visible to all reviewers, we show through dimensionality reduction that the separation between positive and negative activation deltas can be seen visually (Figure 1), which we hope shows that activation deltas are sufficient labels for probe training datasets.
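The pair construction above can be sketched as follows. This is a hypothetical illustration; `activation_delta_pairs` and the toy vectors are invented for the sketch, not taken from the paper:

```python
import numpy as np

def activation_delta_pairs(pos, neg, neu):
    """Build the two (input, target) probe-training pairs for one
    contrastive triple of SAE representations."""
    target_pos = +np.linalg.norm(pos - neu)  # positive signed distance
    target_neg = -np.linalg.norm(neg - neu)  # negative signed distance
    return [(pos, target_pos), (neg, target_neg)]

# Toy 3-dimensional "SAE representations" of one contrastive triple.
pos = np.array([1.0, 2.0, 0.0])
neg = np.array([-1.0, 0.0, 0.0])
neu = np.array([0.0, 1.0, 0.0])
pairs = activation_delta_pairs(pos, neg, neu)
# A linear probe regressed on many such pairs should pick up only the
# consistent signal: the implicit fine-tuning feedback.
```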
## Utilization of Probe-Obtained Information
> "Assuming that the probes trained in this paper do obtain information about the preferences underlying human preference data, how do we make use of that information?"
If probes are accurate but unable to learn fine-tuning feedback precisely, it suggests the fine-tuned model hasn't fully learned the feedback. Practitioners might then:
1. Consider alternative explanations, such as:
- The model learning a proxy objective
- The model failing to find consistent patterns in its activations related to the feedback
2. Take remedial actions, such as further fine-tuning, or adjusting their post-training
## Addressing Broader Impact
> "The paper has discussed limitations but not broader impact of their method."
We will add a broader impacts section to the camera-ready version of the paper, covering the takeaways stated in this rebuttal and our global rebuttal.
All results from the PDF attached to the global rebuttal will be incorporated into the camera-ready version of our paper. | Summary: The authors propose an approach for measuring and interpreting the divergence between learned feedback patterns (LFPs, or simply the model's activation patterns) and the feedback reward distribution of the preference training dataset. To do so, they identify layers whose activations have moved the most during RLHF training and input these layers' activations into a sparse auto-encoder (SAE) that is trained to provide sparse representations of the LLM's activations. Then, they train probes to predict the feedback signal (e.g. reward, sentiment label) from the SAE's outputs. They use these probes both to measure the divergence of the LFPs from the actual feedback signals and to interpret which features are most important for the LFPs.
Strengths: - The authors ask an interesting question of whether we can measure and interpret the difference between a trained model's activation patterns and the preference distribution it has been trained on. The interpretability aspect of this question is interesting, since it can help us better understand what exactly the model has learned (or not learned) from its training dataset.
- The authors provide a good explanation of why sparse auto-encoders are being used for this task (rather than interpreting the raw model activations), as well as the limitations thereof.
Weaknesses: - The effectiveness of this probing method seems to rely on many key assumptions being true, such as (i) sparse autoencoder outputs being more interpretable than the original model's outputs, (ii) sparse autoencoder output representations being faithful to the original model's representations, (iii) the probes being accurate, and (iv) GPT-4 being accurate/faithful when giving descriptions of each feature. There is very little experimental evidence provided for confirming that any of these assumptions are true, and these claims are difficult to test in the first place.
- In fact, the authors mention that a likely reason for the low correlation between the probe's predictions and the VADER lexicon (for some models) is "the complexity of the probe's task...a linear regression model is unlikely to recover such granular rewards accurately from just the activations" (L265-266). Although they do find a high correlation for one model, the insufficiency of this probe implies that it is not effective for accurately measuring the divergence between the model's activation patterns and the feedback label distribution. If the correlation is low, we cannot tell whether that is the probe's failure, or if the model has not acquired strong LFPs, or some combination of the two. Since this probing technique is a central contribution of the paper, I would expect stronger probes and more rigorous evaluation of the effectiveness of the probes.
- How can one ensure that GPT-4's interpretations of the features are accurate or faithful?
- Table 5 purports to check whether the predicted LFP-related features were actually important and useful to the LLM, but the numbers before and after ablation are often very close together (or identical, in the case of GPT-Neo-125m). It would be helpful to report confidence intervals or standard errors to check whether these differences are significant. But as it currently stands, this table's results do not seem to strongly support the claim that the predicted LFP-related features are indeed relevant to/critical for LFPs.
- Lack of clarity in explaining methods:
- Much of the writing about the methods is unclear, contradictory, or omits many details. For instance, the explanation of the logistic regression probe in L233-234 says "we label the concatenated activations as positive or negative based on the averaged activation deltas for each token over the entire input sequence, and train a logistic regression model to classify the activations," which would suggest that this probe's inputs are the activations. But L493 (in the appendix) says "...we give a positive or negative label to concatenated autoencoder outputs based on the sign of their activation delta. We then train a logistic regression model to predict the labels from the concatenated autoencoder outputs," which suggests that the inputs are actually the autoencoder outputs, not the original model's activations. Which is it?
- In Section 3.4, how is GPT-4 prompted to provide the explanations?
- Given how confusing and verbose the methodology is, I would encourage the authors to write out some of the procedures in equation form, rather than long paragraphs of text.
Technical Quality: 1
Clarity: 3
Questions for Authors: Questions are above.
Confidence: 3
Soundness: 1
Presentation: 3
Contribution: 2
Limitations: The limitations section was well-written and thorough, and covered many of the concerns I had myself. An additional limitation is that this method is computationally expensive and requires both training another model and running inference on a sufficiently powerful LLM (e.g. GPT-4) to interpret the features. In this paper, most of the results were for smaller models (under 1B params), and it is unclear whether this method would be scalable to larger models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review.
We are pleased that you found the question our paper studies interesting, and our explanation for using sparse autoencoders good.
## Key Assumptions
> "The effectiveness of this probing method seems to rely on many key assumptions being true…"
While we agree with the reviewer that the method relies on many key assumptions, we think that these assumptions are reasonable and are based on prior work:
### 1. Sparse autoencoder outputs being more interpretable
Several studies support this assumption:
- [Cunningham et al.](https://arxiv.org/abs/2309.08600), [Bricken et al.](https://transformer-circuits.pub/2023/monosemantic-features), and [Templeton et al.](https://transformer-circuits.pub/2024/scaling-monosemanticity/) show that features in sparse autoencoder dictionaries are easier for language models to describe than neurons in raw activations.
- Bricken et al. and Templeton et al. found that these features are easier for humans to understand through manual analysis.
- Bricken et al. also show that sparse autoencoders enable finding effectively invisible features in raw activations.
- Additional work ([Rajamanoharan et al.](https://arxiv.org/pdf/2404.16014), [Gao et al.](https://arxiv.org/abs/2406.04093), and [Rajamanoharan et al.](https://arxiv.org/abs/2407.14435)) shows that sparse autoencoders that perform better on key metrics (sparsity, reconstruction) are more interpretable.
After papers such as Cunningham et al. and Bricken et al., this assumption is common in interpretability, and motivates the use of sparse autoencoders for interpretability.
### 2. Sparse autoencoder output representations being faithful to the original representations
- Bricken et al. show that replacing MLP activations with sparse autoencoder outputs incurred only 21% of the loss that would be incurred by zero ablating the MLP.
- Rajamanoharan et al. reduce this to 2% with their improved sparse autoencoder.
We consider these results to show faithfulness of sparse autoencoder outputs to activations.
We argue that this assumption is in line with other interpretability work, and that experimentally supporting it in our paper is out of scope.
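For concreteness, the faithfulness comparison cited above can be expressed as a simple ratio. This is a hedged sketch; the function name and the loss values below are hypothetical stand-ins, not numbers or code from Bricken et al.:

```python
# Fraction of the zero-ablation loss increase that is incurred when SAE
# reconstructions are substituted for the true MLP activations.
def ablation_loss_fraction(loss_clean, loss_sae, loss_zero):
    """0.0 means the SAE substitution is lossless; 1.0 means it is as
    damaging as zeroing out the MLP entirely."""
    return (loss_sae - loss_clean) / (loss_zero - loss_clean)

# e.g. clean loss 2.0, SAE-substituted loss 2.21, zero-ablated loss 3.0
frac = ablation_loss_fraction(2.0, 2.21, 3.0)  # ~0.21, i.e. about 21%
```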
### 3. Probe accuracy
We agree there is more work to be done in validating our probes. In our submission, we validated our probes using GPT-4 feature descriptions, finding a strong overlap between the probes and GPT-4 descriptions (Table 6). We offer additional validation of our probes in the PDF in our global rebuttal:
- We show that the probes are learning patterns related to the fine-tuning feedback (Table 1 in global rebuttal).
- We demonstrate that inputs can be separated by their reward using dimensionality reduction, showing there is structure in the probes' input data that they can exploit (Figure 1 in global rebuttal).
- Probe accuracy in predicting the label of a word in the VADER lexicon is slightly correlated with the frequency that the fine-tuned model generates that word (Figure 2 in global rebuttal).
We commit to integrating these results into the camera-ready version of our paper to better support our probes.
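The dimensionality-reduction check mentioned above can be illustrated as follows. This is a hypothetical sketch on synthetic data (invented here, not the paper's activation deltas): project the deltas onto their top principal component and check whether the sign of the projection separates the two feedback classes.

```python
import numpy as np

rng = np.random.default_rng(1)
direction = rng.normal(size=16)
direction /= np.linalg.norm(direction)

# Synthetic deltas: positive-feedback inputs shifted along `direction`,
# negative-feedback inputs shifted along -`direction`, plus small noise.
pos = rng.normal(size=(50, 16)) * 0.1 + 2.0 * direction
neg = rng.normal(size=(50, 16)) * 0.1 - 2.0 * direction
X = np.vstack([pos, neg])
Xc = X - X.mean(axis=0)

# Top principal component via SVD of the centered data.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[0]

# Fraction of pos/neg pairs landing on opposite sides of zero.
sep = (np.sign(proj[:50]) != np.sign(proj[50:])).mean()
```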
### 4. GPT-4 prediction accuracy
[Bills et al.](https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html) and Cunningham et al. have performed detailed validations of GPT-4 generated feature descriptions:
- They prompt another LLM to predict how that feature would activate for a collection of tokens if that description were correct.
- They measure the correlation of these predictions and the true activations.
- Bills et al. also conduct additional validation, using humans to validate the feature descriptions and ablation experiments.
We would like to highlight that our method of generating feature descriptions with GPT-4 is common in prior literature, such as Cunningham et al., Bricken et al., [Neo et al.](https://arxiv.org/pdf/2402.15055) and Templeton et al. We argue that even if the GPT-4 feature descriptions are sometimes inaccurate, the overlap between the GPT-4 descriptions and probes still helps to validate the probes.
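The validation procedure of Bills et al. sketched above amounts to correlating "simulated" activations, predicted from a feature description, with the feature's true activations over a set of tokens. The values below are made-up stand-ins for real model outputs, used only to illustrate the scoring step:

```python
import numpy as np

# True activations of one feature over six tokens, and the activations
# an explainer model simulated from the feature's natural-language
# description (both hypothetical).
true_acts = np.array([0.0, 0.9, 0.1, 0.8, 0.0, 0.7])
simulated = np.array([0.1, 0.8, 0.0, 0.9, 0.1, 0.6])

# Pearson correlation between simulated and true activations.
score = np.corrcoef(true_acts, simulated)[0, 1]
# A high correlation suggests the description predicts when the feature
# fires; a low one flags an inaccurate explanation.
```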
## Addressing Specific Concerns
> "If the correlation is low, we cannot tell whether that is the probe's failure…"
We hope that through some of our additional results, we have increased the rigor of our evaluation of the effectiveness of the probes. We acknowledge that additional validation would be useful, but believe that the rigor of evaluation with our updated results is in line with the standards of interpretability literature. Further improving the validation of our probes is our primary objective for the camera-ready version of our paper.
> "as it currently stands, this table's [Table 5] results does not seem to strongly support… that the predicted LFP-related features are relevant to LFPs."
We note that Table 5 was an ablation of the features classified by GPT-4 as related to LFPs and was not intended to support our probes.
> "Much of the writing about the methods is unclear, contradictory, or omits many details…"
In an effort to address this concern, we have corrected the issues you highlighted, clarifying that the description on L233-234 is correct, and not L493. The camera-ready version of our paper will include a clearer method section, with more of the writing replaced with equations, in line with your comment.
> "In Section 3.4, how is GPT-4 prompted...?"
The camera-ready version of our paper will include an appendix with the full prompts used to generate feature explanations, but they are taken from the [public repository](https://github.com/openai/automated-interpretability) of Bills et al.
> "In this paper, most of the results were for smaller models (under 1B params)... it is unclear whether this method would be scalable to larger models."
We included results for Gemma-2b in our paper (L152), and note that recent work such as Gao et al. has trained sparse autoencoders on models in the GPT-4 family with success, showing scalability.
---
Rebuttal Comment 1.1:
Comment: Thank you to the reviewers for your responses -- I appreciate the detailed citations and answers to my questions.
The new experiments are indeed helpful for validating the probes, though this binary classification task is not a particularly difficult one.
I am still somewhat concerned by the evidence presented for some of the assumptions. For example, re: assumption (2), the authors reference Bricken et al., but a 21% change in loss seems quite significant. Although Rajamanoharan et al. reduce this to 2%, this is using a new type of SAE that is not utilized in this paper. Furthermore, models with similar losses can still have very different learned features, and as such this does not seem like solid evidence for the faithfulness of the interpretations of SAE features.
> We note that Table 5 was an ablation of the features classified by GPT-4 as related to LFPs and was not intended to support our probes.
Understood. The text of the paper is misleading on this point -- e.g., in L288-292: "To ensure that the features identified by GPT-4 are related to LFPs, we zero-ablate those features ... finding that this ablation causes consistent or worse performance ... Our results suggest that our probes are finding features relevant to LFPs, supporting our analysis in 4.1." Was this not about Table 5? Also, Table 5 is not explicitly referenced in the text, so it is difficult to identify which claims in the text relate to Table 5.
Regardless, I think the original point still stands -- in some cases, there is hardly a difference before and after ablation. If one of the key assumptions is that GPT-4 is capable of interpreting and identifying important features, then this table does not seem to provide support for this assumption. Having error bars or some measure of variance would help too -- it is hard to interpret whether the differences are significant here.
> GPT-4 prediction accuracy
Bills et al. themselves mention that "However, we found that both GPT-4-based and human contractor explanations still score poorly in absolute terms." Although the method seems promising in the related literature, the absolute explanation scores are still quite low. It is a method that requires further innovation, and should not be relied upon as a source of ground truth. They also note the high prevalence of polysemantic neurons, which makes it difficult to provide succinct and specific explanations for each neuron.
---
Reply to Comment 1.1.1:
Comment: > The new experiments are indeed helpful for validating the probes, though this binary classification task is not a particularly difficult one.
Table 1 and Figure 2 in the PDF attached to our global rebuttal support the VADER probes, which are trained to perform a more complex task than binary classification. For Figure 1, we argue that although the task is simple, it still demonstrates our point that the probes can separate inputs based on the implicit feedback.
> re: assumption (2), the authors reference Bricken et al., but a 21% change in loss seems quite significant. Although Rajamanoharan et al. reduce this to 2%, this is using a new type of SAE that is not utilized in this paper.
The 21% change in loss might be misleadingly large, as Bricken et al. performed that experiment on a one-layer transformer, meaning that using the autoencoder outputs could affect the model much more significantly than if there were many MLPs, since a larger fraction of the model is affected. The reduction in loss is also not absolute; it is relative to a zero ablation of the MLP, which would likely perform better than random due to the attention parameters being untouched.
> "The text of the paper is misleading on this point -- e.g., in L288-292... Was this not about Table 5?"
The reasoning here was that if there was overlap between the features GPT-4 classified as related to LFPs and the features frequently active when the probe predicted positive implicit feedback in activations, and ablating the GPT-4 classified features reduced performance on the fine-tuning task, then the probe features were also likely related to LFPs. That is why we state that this supports the accuracy of the probes.
This is perhaps a convoluted method of validation, and we hope to include a more direct form of this experiment in our camera-ready paper. Specifically, we want to conduct the ablation experiment with the probe features and GPT-4 classified features separately.
> Although the method [Bills et al.] seems promising in the related literature, the absolute explanation scores are still quite low. It is a method that requires further innovation, and should not be relied upon as a source of ground truth
We acknowledge the imperfections of this method; however, we did not intend to use the GPT-4 feature descriptions as ground truth. The strong overlap between the GPT-4 classifications and the features frequently active for positive inputs in our probes still helps support the probes, even if the feature explanations contain inaccuracies. This correlation suggests that both are identifying many of the same features as related to LFPs.
We thank you for your consideration, and hope that you will consider our response going forward. | Summary: The paper investigates how large language models (LLMs) learn preferences from human feedback during fine-tuning using reinforcement learning (RLHF). The authors introduce the concept of Learned Feedback Patterns (LFPs) to describe activation patterns in LLMs that align with human feedback. They aim to measure the accuracy of these patterns in capturing human preferences by training probes on condensed representations of LLM activations. The probes predict the implicit feedback signal in these activations and compare it to true feedback.
Strengths: - The introduction of LFPs provides a new perspective on understanding how LLMs learn from human feedback. This concept helps in quantifying and interpreting the alignment between LLM activations and human preferences.
- The authors validate their probes by comparing neural features correlated with positive feedback against GPT-4’s descriptions of relevant features. This cross-validation strengthens the reliability of their findings.
- The use of synthetic datasets to elicit specific activation patterns in LLMs adds to the reproducibility and robustness of the study. These datasets are also made publicly available for further research.
Weaknesses: - The study primarily focuses on a few specific models (e.g., Pythia-70m, GPT-Neo-125m) and tasks (sentiment generation, toxicity), which might limit the generalizability of the findings across different LLMs and applications. More recently released models would be of more value for studying RLHF patterns and for verifying that the method generalizes. The patterns are easy to extract because the data used are quite easy to encode and decode.
- While the probes show significant accuracy for certain tasks, the paper notes weaker correlations for more granular reward predictions, suggesting that the approach might struggle with highly detailed feedback signals. The issue of feature superposition in dense, high-dimensional activation spaces poses a challenge to fully interpreting the learned features. Although sparse autoencoders mitigate this to some extent, the problem remains a significant obstacle.
- The validation process relies on GPT-4’s ability to describe neural features, which introduces a dependency on another model’s interpretability. This could introduce biases or inaccuracies if GPT-4’s descriptions are not perfectly reliable.
- The paper acknowledges that while their method identifies features involved in feedback signals, it does not provide a mechanistic explanation of how these features interact or influence the expected feedback signal. This limits the depth of interpretability.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you elaborate on how your findings can be practically applied to mitigate risks associated with LLM deployment, such as manipulation of user preferences or harmful behaviors? What strategies would you recommend for developers to monitor and adjust LFPs in deployed models?
- Your validation relies on GPT-4’s feature descriptions. Have you explored other methods or models for validating the identified features? How do you ensure the robustness of these validations?
- Have you tested your method on other LLM architectures or tasks beyond sentiment analysis and toxicity detection? If so, what were the results? How do you anticipate the effectiveness of your method would vary with different model sizes and types?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insight and time.
We appreciate that you found LFPs gave a new perspective on how LLMs learn from human feedback, and that our use of synthetic data and GPT-4 contributed well to the paper.
## Model and Task Selection
> "The study primarily focuses on a few specific models… and tasks… More recently released models are of more value"
We'd like to clarify and expand on our model and task selection:
- In addition to the smaller models you mention (Pythia-70m, Pythia-160m and GPT-Neo-125m), we experimented with **Gemma-2b**, a larger and more recent model released in February 2024 (L152).
- Beyond sentiment generation and toxicity tasks, we included the **helpful-harmless task**, designed to simulate real-world RLHF (L146-149).
- Results for Gemma-2b and the helpful-harmless task can be found in **Table 3** of our paper.
We hope these results help address concerns about the generalizability of our findings to larger models and realistic applications.
We also studied models fine-tuned using different reinforcement learning algorithms (PPO and DPO), demonstrating that our method generalizes across RLHF algorithms.
## Addressing Weaker Correlations and Feature Superposition
> "…the paper notes weaker correlations for more granular reward predictions… feature superposition… poses a challenge to fully interpreting… features."
We offer the following comments:
1. Lower probe correlation in the controlled sentiment generation task may indicate that the model hasn't learned the granular specification of reward for that task, rather than a failure of the probe.
2. Additional evidence in our global rebuttal shows:
- Probes learn patterns related to fine-tuning feedback even when they have low accuracy with respect to it (Table 1 in global rebuttal PDF).
- Inputs can often be separated in terms of their probe classifications through dimensionality reduction (Figure 1 in global rebuttal PDF).
- Probe accuracy in predicting the label of a word in the VADER lexicon is slightly correlated with the frequency that the fine-tuned model generates that word (Figure 2 in global rebuttal PDF).
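As a minimal sketch of the dimensionality-reduction check in the second point (a generic plain-NumPy PCA, not the authors' code; it assumes only that probe inputs form an `(n_samples, n_features)` activation matrix):

```python
import numpy as np

def pca_project(activations, k=2):
    """Project activation vectors onto their top-k principal components.

    `activations` is a hypothetical (n_samples, n_features) array; in the
    paper's setup the rows would be the probes' input activations.
    """
    X = np.asarray(activations, dtype=float)
    X = X - X.mean(axis=0)          # center the data
    # Right singular vectors of the centered data are the principal axes
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T             # (n_samples, k) projection
```

Plotting the two projected coordinates, colored by the probe's positive/negative classification, is one way to check whether the two classes are visually separable, as in Figure 1 of the rebuttal PDF.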
Our method will benefit from recent improvements in training sparse autoencoders (e.g., [Gao et al.](https://arxiv.org/abs/2406.04093) and [Rajamanoharan et al.](https://arxiv.org/abs/2407.14435)), which should reduce the effects of superposition.
## Validation Process and GPT-4 Reliability
> "The validation process relies on GPT-4's ability to describe neural features… This could introduce biases or inaccuracies if GPT-4's descriptions are not perfectly reliable."
We acknowledge this concern and note:
- [Bills et al.](https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html) and [Cunningham et al.](https://arxiv.org/abs/2309.08600) have conducted detailed validations of GPT-4 generated feature descriptions against manual and automatic analysis.
- Our method for generating feature descriptions with GPT-4 is common in prior literature (e.g., [Cunningham et al.](https://arxiv.org/abs/2309.08600), [Bricken et al.](https://transformer-circuits.pub/2023/monosemantic-features), [Neo et al.](https://arxiv.org/pdf/2402.15055), and [Templeton et al.](https://transformer-circuits.pub/2024/scaling-monosemanticity/)).
## Practical Applications and Risk Mitigation
> "Can you elaborate on how your findings can be practically applied to mitigate risks associated with LLM deployment, such as manipulation of user preferences or harmful behaviors? What strategies would you recommend for developers to monitor and adjust LFPs in deployed models?"
Our method aims to help practitioners identify how well their fine-tuned models have learned the fine-tuning feedback. We recommend:
- If probes indicate divergence between LFPs and fine-tuning feedback, that practitioners consider alternative post-training methods.
- This divergence may suggest the model is learning a proxy for the feedback or failing to find consistencies in high-reward generations.
## Method Generalization
> "Have you tested your method on other LLM architectures or tasks beyond sentiment analysis and toxicity detection? If so, what were the results? How do you anticipate the effectiveness of your method would vary with different model sizes and types?"
Our method has been tested on:
- Tasks beyond toxicity and sentiment analysis, specifically a helpful-harmless task mimicking real-world RLHF (L146-149).
- Various transformer-based LLMs with architectural differences (Pythia models, GPT-Neo-125m, and Gemma-2b), for example in the attention variant they use.
We anticipate our method would recover more information about fine-tuning feedback from larger models due to their increased parameter count and expected better performance on fine-tuning tasks.
## Alternative Validation Methods
> "Your validation relies on GPT-4's feature descriptions. Have you explored other methods or models for validating the identified features? How do you ensure the robustness of these validations?"
While we haven't explored alternative models for generating feature descriptions:
- [Bricken et al.](https://transformer-circuits.pub/2023/monosemantic-features) and [Templeton et al.](https://transformer-circuits.pub/2024/scaling-monosemanticity/) have successfully used Anthropic models to explain features.
- We don't foresee issues with using models other than GPT-4.
- Even if GPT-4 descriptions are sometimes inaccurate, the overlap between probes and GPT-4 descriptions still aids in probe validation.
The results from our global rebuttal PDF will be incorporated into the camera-ready version of our paper.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your response and clarification on my questions. I have also read other reviewers' comments. That being said, it sounds that many of your justification statements are based on other papers' claims which may not directly verifiable in the submission itself. I found Reviewer dzK7 shared similar concerns and thus prefer to keep my score as it was.
---
Rebuttal 2:
Comment: Thank you for your response. We believe we have addressed the concerns raised in your original review and appreciate the point brought up in your response.
> it sounds that many of your justification statements are based on other papers' claims which may not directly verifiable in the submission itself
Most of our assumptions are based on either peer-reviewed publications or seminal interpretability papers that are standard references in mechanistic interpretability. Other assumptions we try to validate in our paper. We acknowledge that our paper could have better highlighted how each assumption has been validated in prior work.
Regarding the sparse autoencoder assumptions:
* Recent work that only studies sparse autoencoders does not justify these assumptions experimentally, and usually defers to past work (e.g., [Rajamanoharan et al.](https://arxiv.org/pdf/2404.16014), [Rajamanoharan et al.](https://arxiv.org/abs/2407.14435) and [Gao et al.](https://arxiv.org/abs/2406.04093))
* In each of these papers sparse autoencoders are a much larger component of the method than in our paper. As a result, we think it would be out of place for us to re-justify these assumptions in our submission.
* The peer-reviewed [ICLR version of Cunningham et al.](https://openreview.net/pdf?id=F76bwRSLeK) is titled “Sparse Autoencoders Find Highly Interpretable Features in Language Models”, which was a specific assumption mentioned by Reviewer dzK7. These results have been further validated in work such as [Bricken et al.](https://transformer-circuits.pub/2023/monosemantic-features) and [Templeton et al.](https://transformer-circuits.pub/2024/scaling-monosemanticity/)
For the GPT-4 feature descriptions, prior work cited in our rebuttal has validated the descriptions extensively. We note that we did not intend to use the GPT-4 descriptions as a ground truth, and that the correlation of the GPT-4 classifications and our probes (Table 6 in our submission) shows that both methods identify similar features as being related to LFPs. Even if the feature explanations contain inaccuracies, we believe they still help validate the probes.
We hope that the additional experimental validation of our probes in the PDF attached to our global rebuttal helps support the accuracy of our probes.
We appreciate your time and consideration. We hope that since the reviewer-author discussion period is still ongoing, you will consider our response here and to Reviewer dzK7. | Rebuttal 1:
Rebuttal: # Global Rebuttal
We thank the reviewers for their incisive feedback. Below, we summarize the additional results in the attached PDF, points made across multiple reviews, and our responses to those points. Note that unless otherwise specified, the figures referred to in this rebuttal are in the attached PDF, not in our paper.
* **Takeaways and broader impacts:** In this paper, we aim to measure the accuracy of a fine-tuned LLM’s learned feedback patterns to the fine-tuning feedback. We hope that practitioners will evaluate their fine-tuned models using a method similar to ours, and adjust their fine-tuning accordingly. We argue that our results for the VADER/controlled sentiment generation task show that the fine-tuned models likely did not learn the granular fine-tuning feedback, and merely learn to discriminate the more strongly positive/negative examples. To make this clearer, the camera-ready version of our paper will include a broader impacts section, and more explicit mentions of these takeaways in our conclusion and introduction.
* **Validation of probes:** Our submission validated the probes we trained using descriptions of features generated by GPT-4 (Table 6 in our original paper). Some reviewers were concerned that the probes may not have been trained well, leading to low probe accuracy to the fine-tuning feedback. Since then, we have added additional validation as suggested by the reviewers, such as showing that positive/negative inputs to the probes (in terms of the probes’ predictions) can be discerned through dimensionality reduction (Figure 1), suggesting that there is structure in the data for probes to exploit. We also show that the low accuracy of the probe on specific words in the VADER lexicon is anticorrelated with the frequency with which the fine-tuned model outputs those words (Figure 2). We believe these results help support the hypothesis that the low accuracy of the VADER probes to the fine-tuning feedback is caused by the fine-tuned models failing to learn the granular fine-tuning feedback, and not by poor probe accuracy. Table 3 in our paper might support this hypothesis, as it shows that when probes are trained simply to classify their inputs, they achieve very high accuracy. The camera-ready version of our paper will include these additional results to support the accuracy of our probes.
* **Accuracy of the VADER/controlled sentiment generation probes:** Although our VADER probes achieved lower accuracy to the fine-tuning feedback than the toxicity or helpful-harmless probes, we argue that this can be explained by the LLM failing to learn the granular fine-tuning feedback. We support this claim by showing that the VADER probes achieve higher accuracy when only the sign of their prediction is considered (Table 1), showing that they do find structure in their training data, and with Figure 2, described in the previous bullet point. We commit to more strongly validating our VADER probes in the camera-ready version of our paper, and believe our additional results show significant progress toward this.
* **Validation of GPT-4 feature descriptions:** Reviewers were concerned that GPT-4 may not reliably describe features. We believe that prior work supports the accuracy of these descriptions: [Bills et al.](https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html) performed a thorough validation of neuron descriptions generated using GPT-4, showing that the descriptions were accurate on multiple metrics when explaining features in GPT-2. [Cunningham et al.](https://arxiv.org/abs/2309.08600) found similar results, but with sparse autoencoder features instead of neurons in an LLM. Our feature explanation method is taken directly from these works. [Bricken et al.](https://transformer-circuits.pub/2023/monosemantic-features) found similar results using Claude 2. The specific prompts we used will be included in an appendix in the camera-ready version of our paper, but they are taken from [Bills et al.](https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html), and their public [GitHub repository](https://github.com/openai/automated-interpretability). We also argue that an imperfect validation would still be valuable: Even if the GPT-4 feature descriptions are sometimes inaccurate, the overlap between the GPT-4 descriptions and probes still helps to validate the probes.
Pdf: /pdf/60b4e531371d1671b8ca4b81d02a710b8699d30a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ReVideo: Remake a Video with Motion and Content Control | Accept (poster) | Summary: ReVideo presents a novel view of video editing by modifying content with an input trajectory to create new content. It designs a three-stage strategy to address the problem that motion control is ignored under direct training. The main contribution of this work lies in the new task of editing motion via a user-specified trajectory while keeping the original video movement. The editing results are superior and photorealistic.
Strengths: 1. The first video editing work on creating new motion and content.
2. Good writing; the paper is easy to follow, and the motivation and three-stage training strategy on decoupling content and motion control is reasonable. The proposed SAFM learned a dynamic fusion weight at different timesteps.
3. The editing results are photorealistic and adhere to the original motion or follow user-specified trajectory with no artifacts.
Weaknesses: 1. The author did not provide the method or explanation of how ReVideo edits the first frame, making the total editing pipeline not end-to-end for users.
2. Part of the original video motion, like mouth movement in the Zuckerberg->robot (head6) and tail movement in dog->lion, is not kept in the edited video.
3. I would like to know how the drag-based editing method handles non-rigid motion, such as the rotation of car tires from a side view. In examples like sea2 and sea2_2, where a shark and a dinosaur are added, the limbs of the animal seem unable to move, making the video look unrealistic. However, in soccer and some human-centric examples, the legs of dogs and people can move normally. Therefore, I would like the authors to add an example of a vehicle moving on the road from a side view, including the movement of the wheels, to address my concerns. This may be a limitation of the drag-based method.
4. There is no quantitative comparison in the ablation study; I understand that the image results in Fig 7 are clear, but a single qualitative video ablation is not sufficient.
5. There are no qualitative video comparisons with other methods in the supplementary or on the project page, only Fig 6, and the automatic metrics are worse than Pika, even though I understand that CLIP scores are inaccurate and cannot reflect temporal consistency well. I suggest the authors supply comparison videos between ReVideo and other methods in the rebuttal phase.
6. The training cost of the three stages: even though ReVideo makes great progress in creating new motion, training cost (GPU, time, memory, and so on) is still a problem, since users prefer to edit a video in a zero-shot manner with a pretrained video generation model, and compared methods like AnyV2V are training-free.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The method to edit the first frame needs to be declared.
2. Non-rigid motion-like side view wheels movement of cars.
3. Qualitative video comparisons with other methods.
4. The inference time/training cost comparison with other similar methods.
5. What about ReVideo performing in editing multiple objects simultaneously in the same video?
6. Can ReVideo work on text-to-video generation models?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: no significant limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Q1 The Editing of the First Frame
Thanks for this suggestion. The editing method for the first frame is arbitrary, like the setting in AnyV2V. The results presented in the paper utilize text-guided image inpainting tools and InstructPix2Pix. Note that in our framework, editing the first frame is not mandatory. Users can choose to keep the content and modify the motion of the video. We will add a detailed explanation of the editing workflow in the paper.
## Q2 Some Motion is Not Kept
We need to clarify that our editing framework uses sparse motion control (trajectory lines) in the editing region, which makes it challenging to handle slight motions such as the ones you mention. To support such motion control, our framework may need to incorporate dense motion representations, such as optical flow. A potential solution is to mix sparse trajectories with dense optical flow during the training process, allowing our method to apply sparse and dense motion control selectively. We will explore this design in future work.
## Q3 Non-rigid Motion
Thank you for this constructive suggestion. We add a video of editing a side-view driving car in **Fig.9** in the attached PDF. We can see that while the car's rigid body is moving, the car's wheels are also rolling (although it might not look entirely realistic). Similar phenomena can be observed in some demo videos in our supplementary, such as adding a tank on the desert. The tank's wheels are moving in a regular pattern, and the shadow behind the tank moves accordingly. We believe that this phenomenon, including the natural motion of dogs and humans that you mentioned, is due to the priors of the physical world that the base model (SVD) learned from large-scale video data. The unnatural movement of the shark is also caused by the limitations of SVD's prior. To verify this, we use SVD to perform image-to-video generation based on the shark image. The result in **Fig.9** in the attached PDF shows that while the water surface has natural ripples, the shark remains stationary. We believe that by using a more powerful base model (such as the Sora architecture), we can achieve more natural editing results.
## Q4 Quantitative Comparison of the Ablation Study
To address this concern, we measure the editing accuracy under different settings in our ablation study, as shown in the table below. We use a point tracking model (CoTracker) to extract trajectory in the edited results and then calculate the Euclidean distance with the target trajectory. We will add this quantitative comparison to our paper.
| | w/o SAFM | SAFM w/o time adaptation | Tuning all control module in stage3 | Tuning spatial layers in stage3 | ReVideo |
|:-:|:-:|:-:|:-:|:-:|:-:|
| Accuracy (Pixel) | 37.34 | 8.92 | 42.26 | 44.78 | 5.21 |
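As an illustrative sketch of the accuracy metric described above (assumed details, not the authors' exact implementation), the measure reduces to a mean per-point Euclidean distance between the trajectory a point tracker extracts from the edited video and the user-specified target trajectory:

```python
import math

def trajectory_accuracy(tracked, target):
    """Mean Euclidean distance, in pixels, between corresponding points
    of the tracked and target trajectories.

    `tracked`/`target` are equal-length lists of (x, y) points; the real
    pipeline would obtain `tracked` from a point tracker such as CoTracker.
    """
    if len(tracked) != len(target):
        raise ValueError("trajectories must have the same length")
    dists = [math.dist(p, q) for p, q in zip(tracked, target)]
    return sum(dists) / len(dists)
```

Lower is better; under this reading, the ReVideo column's 5.21 would mean the edited content deviates from the target path by about five pixels on average.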
## Q5 Qualitative Video Comparisons
Yes, it is necessary to provide comparison videos of different methods. However, due to the rebuttal policy, we cannot provide external links. Instead, we select two videos to present in **Fig.10** in the attached PDF. One can see that AnyV2V has gradual changes in video content in some editing samplings, leading to instability. Pika and AnyV2V both tend to produce static editing results, due to the lack of motion control. While such static results may achieve higher scores in CLIP, it is not natural enough. Additionally, Pika tends to fail in some challenging editing scenarios, such as transforming a part of a lizard into a robot.
## Q6 Training Cost
We agree that our implementation involves some training costs compared with training-free methods. However, our approach offers significant performance gains:
(1) Our method allows precise customization of local content and motion in videos, which is not achievable with existing training-free methods.
(2) Our method can produce high-quality and stable outputs. Training-free methods often struggle with video generation quality and stability of the edited content.
(3) Our method has an advantage in inference complexity. Some training-free methods, such as AnyV2V, require a lengthy DDIM inversion to ensure the editing quality. The table below shows the time costs of different methods during the inference stage. The experiment is conducted on an A100 GPU, with the video resolution being 768x768. Results show that our method has significantly lower time costs compared to other methods. Therefore, we believe that our training cost (4 A100 GPUs) is reasonable and necessary. We will add the complexity analysis to our paper.
| | InsV2V | AnyV2V | ReVideo |
|-|:-:|:-:|:-:|
| Inference Time (s) | 132 | 303 (DDIM Inv) + 80.9 (Inference) | 26.8 |
## Q7 Performance in Editing Multiple Objects Simultaneously
In the demo video in our supplementary, we show examples of editing six individual petals in the same video, as well as editing multiple flowers and balloons in one video. This demonstrates the robustness of our method in multi-object editing. Additionally, **R1** raises a concern in **Q.5** about the impact of the number of editing regions on performance. Results in **R1-Q.5** show that each editing region has a high independence, and the impact of the region numbers is almost negligible.
## Q8 Whether works on text-to-video models
Yes, it is possible, but the implementation would be more complex than with an Image-to-Video model. The image condition port in the Image-to-Video model can naturally be used as the input of the edited first frame, whereas the Text-to-Video model lacks this port and would require additional training to incorporate it.
---
Rebuttal 2:
Comment: My concerns have been thoroughly addressed in the rebuttal. Overall, this paper is well-structured, offering significant contributions and demonstrating impressive performance. It brings fresh perspectives to the research field, so I have decided to upgrade my final rating to accept.
---
Rebuttal Comment 2.1:
Comment: Thank you for your efforts in the review, improving our paper to a higher standard. We will revise our paper based on your comments and suggestions. | Summary: The paper presents a video editing method that enables precise localized adjustments to content and motion within specific areas of a video. It introduces a three-stage training strategy and a spatiotemporal adaptive fusion module to integrate edits across frames and locations effectively. This method allows for complex editing tasks such as changing content while maintaining motion, adding new motion to static content, and simultaneously modifying both elements.
Strengths: - The paper introduces a novel challenge of editing both content and motion in specific video areas and combines techniques from diffusion models and video editing to achieve nuanced control.
- The three-stage training strategy enhances the robustness and effectiveness of the edits, supported by experimental validation that demonstrates superior performance compared to existing methods.
- The paper is well-organized and clearly explains complex concepts, including the innovative spatiotemporal adaptive fusion module and detailed training strategy.
Weaknesses: - The decoupling training could cause some artifacts. Although the paper demonstrates these artifacts could mostly be alleviated by deblocking training. I can still see some blocky/unnatural results in the result videos.
- The training is quite complicated and separated into three stages. I feel the training strategy could 'overfit' this particular video dataset.
- This method is more like a direct combination of video diffusion and ControlNet.
- More detailed implementation specifics, particularly regarding parameter settings and the architecture of the spatiotemporal adaptive fusion module, are needed.
- The method's computational demands and potential scalability issues are not adequately addressed. For example, what kind of GPU does one need to perform training and testing?
- The paper focuses heavily on technical aspects with less consideration of user interaction.
Technical Quality: 2
Clarity: 2
Questions for Authors: - What kind of GPU does one need to perform training and testing?
- Will the authors release the training and testing code along with pre-trained models upon acceptance?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Quality limited by SVD.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Q1 About Artifact
We agree that our method still has room for improvement. We want to clarify this concern from two points:
(1) **The challenge of this task and our novelty.** Our method is the first attempt at local content and motion editing for videos. In Section 3.2 of the main paper, we conduct extensive toy experiments to demonstrate the challenge of this task, as it requires overcoming significant coupling between control conditions. In implementation, we meticulously design both the training strategy and the model architecture to address this challenging task.
(2) **Editing Performance.** We present several useful applications and high-quality editing results in the paper, which is unachievable for existing methods with a significant gap. Although some artifacts exist in the edges, as you concerned, they are often difficult to detect in the result videos.
We will continue to improve our method in the future.
## Q2 Overfitting in Training
We need to clarify that although the three-stage training strategy might be a bit complicated, our extensive experiments in the paper (Section 3.2) demonstrate that the problem we face is indeed challenging. Therefore, we believe that using a progressive learning approach to tackle this difficult problem is reasonable, and the role of each training stage is demonstrated in the paper. We would greatly appreciate any references or suggestions you could provide to help us improve our current training strategy.
In Section 4.1, we show that our training data includes 10 million videos from the WebVid dataset. Therefore, overfitting is not a concern under such a dataset scale. The test results also demonstrate the diversity of the editing scenarios.
## Q3 Like a ControlNet
We need to clarify that ControlNet is a commonly used method for condition inputs. In this paper, we also state that our method uses the conditional injection of ControlNet. However, combining video diffusion with ControlNet alone cannot solve our problem, and it is not a contribution of this work. The key contribution of our work lies in proposing a novel training strategy to solve the unexplored fine-grained video editing task. Additionally, we design a spatiotemporal adaptive fusion module to improve the fusion of motion and content control.
## Q4 Detailed Implementation of SAFM
Our SAFM consists of four convolutional layers, each followed by a SiLU activation layer, with a Sigmoid function applied at the end. The SAFM has 2 million parameters, which is negligible compared to the 671 million parameters in the SVD model. We will add this detailed description to the paper.
## Q5 Computation Demand
In Section 4.1 of the paper, we describe that our training is conducted on four A100 GPUs. During the inference, it requires 22GB of VRAM to edit a 16x768x1344 video. In the table below, we present a comparison of computational complexity for different methods. The experiment is conducted on an A100 GPU, with the editing target being a 16x768x768 video. The results demonstrate the advantage of our method in inference efficiency. Therefore, we believe our approach is efficient and flexible. We will add a detailed description of complexity demand in the paper.
| | InsV2V | AnyV2V | ReVideo |
|-|:-:|:-:|:-:|
| Inference Time (s) | 132 | 303 (DDIM Inv) + 80.9 (Inference) | 26.8 |
## Q6 Focuses Heavily on Technical Aspects
We want to clarify this concern via the interactive design and capability of our method. To offer intuitive user interaction, we chose sparse, easy-to-draw trajectory lines as the motion control. In the editing mode, users can selectively customize content or motion. In scenarios where motion control is difficult to specify, our method can also automatically generate the motion in the editing region. Therefore, our method is user-friendly in its control inputs and functions. We will include a detailed description of user interaction in the paper.
## Q7 Open Source & Limitation
Yes, we will open-source all the code.
We need to clarify that our method is not limited to a specific base model. The reason we use SVD as the base model is that SVD is the best open-source model currently available to us. Using a better base model can further improve our editing results.
---
Rebuttal 2:
Comment: Dear R2, we would like to know if your concerns are addressed. If any questions, we can discuss them in this phase.
---
Rebuttal Comment 2.1:
Comment: I really appreciate the author's rebuttal. However:
1. The authors claim that the proposed method's novelty is beyond ControlNet. However, I am still not convinced. From Figure 2 in the paper, the two potential structures are just two slightly different ways of conditioning ControlNet. The trainable control module and zero convolution layers are exactly the same as ControlNet. The only difference is that you replace the condition (originally depth, canny edge map, normal, etc.) with a content map specifying the editing region and the motion trajectory. I still cannot say this analysis is a contribution; I'd say it is rather an ablation of the network architectural design.
2. In order to keep the original motion from the original video. This paper further proposes a data construction strategy, as shown in Figure 4, to decouple the motion between the editing region and the original region. However, this approach is still too naive, as it just performs CutMix [1] to combine two videos into one. However, in this case, the goal is to manipulate motion. I am still not convinced this stage is a contribution.
3. The training pipeline, separated into three stages, is very complex. There will be many hyperparameters that could influence the training in each stage. Also, the entire pipeline relies on many existing off-the-shelf techniques/datasets such as SVD and WebVid (This dataset is actually banned and no longer available due to some legal issue. Thus, the proposed method will not be reproducible. However, this is not the fault of this method.), and CoTracker. If more advanced techniques show up, one needs to tweak all the parameters of the entire training process to make the proposed method work. This makes the method not general.
4. Following the previous comment, the proposed pipeline is more like an engineering work, combining existing techniques (SVD, ControlNet, CoTracker, CutMix) and using massive computational power (4x A100), training directly for a long time (6 days) on a large-scale dataset (WebVid). I really appreciate the edited video results. Some of them are amazing. In some of them, I still see artifacts (which might be cons inherited from SVD). In the entire paper, I do not see many novelties other than architectural designs and training data augmentation.
Due to the above reasons, I am still leaning towards rejection. However, I would like to hear and discuss more with the authors and other reviewers.
Best,
Reviewer ZSh9
[1] Yun, Sangdoo, et al. "Cutmix: Regularization strategy to train strong classifiers with localizable features." Proceedings of the IEEE/CVF international conference on computer vision. 2019.
---
Rebuttal 3:
Comment: We need to clarify that:
1. **ControlNet is a commonly used method for condition injection, not a solution for our problem (as verified in Section 3.2). The contribution of this paper is to propose effective solutions for local video editing.** Additionally, we modify ControlNet by designing a region-adaptive fusion method to make the condition embedding more suitable for this task.
2. **Why is the decoupling training strategy not considered one of our contributions?** Dear reviewer, have you ever seen other similar solutions for video editing? The reference you provided is an image classification method—what relevance does it have to our paper? Just because two different images are mixed together?
3. **Our method is successfully validated on SVD and WebVid data, but this does not mean that our method can only be implemented using SVD and WebVid.** It can be applied to other data and base models. We believe that addressing a challenging problem with a progressive learning approach is reasonable. In many complex generation tasks, such as EMU2 [1], the training difficulty is much greater than ours.
[1] Generative Multimodal Models are In-Context Learners
4. Firstly, we are addressing an unexplored and challenging task that requires carefully designed training processes and fusion modules. This is not a simple engineering patchwork; our core components did not previously exist. Secondly, what is the relevance of CutMix to our work? Why could it replace our core contribution?
Dear R2, we are eager to address these misunderstandings and look forward to your response.
---
Rebuttal Comment 3.1:
Comment: 1. Novelty: Modifying existing methods doesn't automatically constitute novelty. Your paper shows good designs and results but appears incremental to ControlNet. Clarify what fundamentally differentiates your approach from ControlNet beyond architectural changes.
2. Methodology: The cut-and-paste approach you're using is common in various video processing tasks, e.g., [1], not unique to your method. Referencing CutMix was to highlight this point, not to directly compare classification and video editing tasks.
3. Generalizability: You claim your method can generalize beyond SVD and WebVid, but haven't demonstrated this. Without broader validation, the method's applicability to other datasets or base models remains unclear.
4. Contributions: While some components in your work are new, they appear to be engineering efforts rather than fundamental contributions. The reference to CutMix was to point out similar data augmentation techniques in video processing, not to equate it directly with your method.
[1] Liu, Yang, Zhen Zhu, and Xiang Bai. "Wdnet: Watermark-decomposition network for visible watermark removal." Proceedings of the IEEE/CVF winter conference on applications of computer vision. 2021.
---
Rebuttal 4:
Comment: 1. ControlNet cannot accomplish our task. This is the fundamental difference. The following works all used ControlNet for conditional injection. We do not claim ControlNet as one of our contributions. It is a commonly used model, just like Stable Diffusion.
[1] Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models
[2] ControlVideo: Training-free Controllable Text-to-Video Generation
[3] CameraCtrl: Enabling Camera Control for Text-to-Video Generation
[4] EVA: Zero-shot Accurate Attributes and Multi-Object Video Editing
[5] CCEdit: Creative and Controllable Video Editing via Diffusion Models
[6] Text-Animator: Controllable Visual Text Video Generation
[7] IMAGE CONDUCTOR: PRECISION CONTROL FOR INTERACTIVE VIDEO SYNTHESIS
2. [1] is a paper on watermark removal, where a watermark image refers to the watermark being overlaid on the original image. This overlay is a task, not a solution. We still cannot find any connection between this and our method.
[1] Liu, Yang, Zhen Zhu, and Xiang Bai. "Wdnet: Watermark-decomposition network for visible watermark removal." Proceedings of the IEEE/CVF winter conference on applications of computer vision. 2021.
3. Our method is the first attempt at this task and has already achieved state-of-the-art (SOTA) results. Even if adjustments are needed for other models and datasets, the issues found and solutions proposed in this work are still insightful.
4. The CutMix method you referenced is entirely different from our approach in both purpose and methodology. We do not think combining two images to enhance image classification performance can replace our contribution. What we propose is a motion-decoupled training strategy, not data augmentation. This decoupling strategy is effective and did not previously exist in our community.
---
Rebuttal Comment 4.1:
Comment: I realize I severely misunderstood the operation of the motion decoupling part. The goal here is to manipulate different motions between the foreground (editing region) and the background, so the purpose is largely different from CutMix or other cut-and-paste approaches. Could the authors further clarify the purpose of the motion decoupling? Is my current understanding correct?
---
Reply to Comment 4.1.1:
Comment: This understanding is correct. In our experiments, we find a significant motion coupling between the foreground (editing region) and the background (please refer to Section 3.2). The model tends to learn to estimate the motion state of the foreground (editing region) by relying on the motion in the background, thereby neglecting the motion trajectory condition. To address this issue, we propose the motion decoupling training as you mentioned. We may not have described this clearly enough in the paper, and we will revise this part.
---
Rebuttal 5:
Comment: Dear authors,
Thank you very much for the clarification. However, I still feel the paper is somewhat incremental, combining a ControlNet-like motion injection module with augmented (motion) training data. That said, the approach and the task themselves are novel. Furthermore, the result videos are impressive, with barely noticeable artifacts. I would like to raise my rating to borderline accept.
Thanks again to the authors for the clarification and discussions!
---
Rebuttal Comment 5.1:
Comment: Thank you for your efforts in the review. Your comments help improve our method, and we will continue to enhance our approach in future work. | Summary: This paper presents ReVideo, a new approach for precise local video editing of both content and motion. It introduces a coarse-to-fine training strategy to progressively decouple content and motion control, and a spatiotemporal adaptive fusion module to integrate them effectively. Experiments show ReVideo can modify local video content, customize motion trajectories, or change both simultaneously, and extend to multi-region editing.
Strengths: - This appears to be the first attempt at exploring local editing of both content and motion in videos using diffusion models. Being able to modify content and motion trajectories in specific regions is a novel capability compared to prior work.
- The proposed three-stage coarse-to-fine training strategy to progressively decouple content and motion control is an interesting technical approach to deal with the core challenge.
- The spatiotemporal adaptive fusion module is another novel component to integrate the content and motion conditions across sampling steps and spatial locations.
- Extending the approach to allow multi-area editing without requiring specific training demonstrates flexibility.
- Most of the visual and quantitative results show improvements over prior methods.
Overall, this paper addresses a timely and important topic with significant potential benefits for the community. Despite some weaknesses, the reviewer recommends acceptance, considering this is a relatively new area and the paper presents promising results. The score may be adjusted based on the quality of the rebuttal.
Weaknesses: ## Practicality of the Editing Workflow
The current editing interface requires users to specify both a target content image and a set of motion trajectories. While this allows for fine-grained control, it may not be the most intuitive or efficient workflow for common editing tasks. Consider the scenario of object removal - the user would need to carefully craft a content image with the object removed and ensure that the remaining motion trajectories are consistent. An alternative approach could be to directly specify the regions to remove and have the model infer the appropriate content and motion changes automatically. The paper would benefit from a more detailed discussion of the practical trade-offs and usability considerations of the proposed editing framework.
## Limited Motion Control
While the method allows for editing the motion of individual objects, it assumes that the overall scene motion (camera movement, background motion) remains fixed. This limits the applicability of the approach in scenarios where the goal is to modify the global motion patterns (e.g. stabilizing shaky footage, changing the camera viewpoint).
## Precise Placement and Key Contributions of this Paper
While the individual technical components (e.g. coarse-to-fine training, adaptive fusion) are well-motivated, it's worth considering whether similar strategies have been explored in related domains. For instance, progressive training to handle multi-factor variation has been used in GANs, and spatially-adaptive normalization is common in style transfer. Drawing more connections to such related work would clarify the novelty of the specific adaptations made here.
## Content-Motion Entanglement
- The key technical contribution of the paper is the decoupling of content and motion information through a coarse-to-fine training strategy. However, it's not clear if this decoupling is complete or if there are still some residual entanglements between the two factors. For instance, the edited content may still contain some motion information that could interfere with the specified motion trajectories, leading to artifacts or inconsistencies. A more thorough analysis of the content-motion separation and its impact on the editing quality would be informative.
- Is decoupling content and motion the only way to address the issue - could a joint representation learning approach work instead? Acknowledging alternate strategies would help justify the chosen approach.
- **Figure 4 is not very intuitive. It would benefit from additional justification, theoretical analysis, and insights into why such a simple composition from two videos is effective.** This is a key concern.
## Multi-area Editing
- The extension to multi-area editing is a nice addition, but the paper could go further in characterizing the challenges involved. Are there issues with preserving global coherence across multiple edited regions? How does the method scale with the number of regions? Providing such details would give a more complete picture of the capability.
## Clarity and Reproducibility
- Implementation details: There are some missing specifics that could hamper reproducibility. For instance:
- How exactly are the editing regions defined during training - what is the procedure for randomly sampling them?
- What metrics are used for the "threshold filtering" of motion trajectories and how were the thresholds chosen?
- Are there any data augmentation, regularization or optimization tricks used during training?
## Evaluation Metrics
The quantitative evaluation relies primarily on low-level metrics like PSNR and LPIPS, which may not fully capture the perceptual quality and coherence of the edited videos. Additional metrics could provide a more comprehensive assessment:
- Metrics that specifically measure the consistency of the edited regions with the target content and motion (e.g. using an object detector or tracker).
- Metrics that evaluate the temporal stability and smoothness of the edited videos (e.g. some metrics that are used in video inpainting tasks, Please refer to [this repo](https://github.com/MichiganCOG/video-inpainting-evaluation) for details).
- Human evaluations of the overall realism, coherence, and faithfulness to the editing inputs (e.g. through user studies).
## Robustness Evaluation and Ablation Studies
While the paper does include ablations for a few key components (e.g. SAFM, training stages), there are other design choices that are not fully explored. For instance:
- How important is the choice of motion representation (trajectory vs. alternatives)? Testing with different motion inputs would reveal the sensitivity to this factor.
- What is the impact of the trajectory sampling strategy and hyperparameters? Varying the number and selection of trajectories could provide insight into the robustness.
- How does the performance vary with the size and shape of the editing regions? A systematic evaluation across different region properties would be informative.
- Only the end-to-end video editing pipelines are compared, but not the individual technical components. For instance, how does SAFM compare to simpler fusion schemes used in prior work?
- Input noise and perturbations (e.g. in the content image or motion trajectories)
## Dataset Complexity
- While the approach achieves good results on the chosen datasets, it's unclear how well it would generalize to more complex video content (e.g. with dynamic backgrounds, scene changes, occlusions etc.). Discussing the potential failure modes and current limitations would help scope the contribution appropriately.
- The examples shown in the paper are largely limited to simple object-level edits in relatively constrained scenarios (e.g. clean backgrounds, single objects). It's unclear how well the method would perform on more challenging videos with complex scenes, multiple objects, occlusions, camera motion, etc. Testing on a wider range of video complexity would help establish the generality of the approach.
## Editing Scenarios
The paper demonstrates a few key editing applications (e.g. object addition/removal, motion editing), but there are other important scenarios that are not explored, such as: performing semantic-level edits (e.g. changing the action or interaction between objects).
Showcasing the method's performance across a fuller range of editing tasks would demonstrate its versatility.
## Open Source
Will the code for training and inference be released?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness section.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Q1 Practicality of Workflow
**Fig.1** and **Fig.2** in the attached PDF show that our method can still produce smooth results without a specified trajectory. This is due to its inherent capability to predict the motion in the editing area from unedited content, enabling automatic motion generation when the trajectory is difficult to specify. However, the content reference is essential in the current framework. We will discuss the practicality in our paper.
## Q2 Limited Motion Control
We need to clarify that the global motion pattern in the editing region is not our editing target; rather, it needs to remain consistent with the unedited areas. This consistency is produced automatically, without a fixed-motion assumption.
## Q3 Precise Placement
Thanks for the suggestion. We will discuss related works in progressive learning and multi-condition control.
## Q4 Entanglement
- Residual entanglement. Yes, it still exists and can be observed by reducing the weight of the motion condition (**Fig.3** in the PDF). We believe it is useful and mild. **(1) Useful.** Inspired by **Q.1**, the entanglement can automatically produce the motion when it is difficult to specify the motion of the editing region. **(2) Mild.** Various editing results show that this entanglement is almost unnoticeable under default inference.
- Other solutions. Certainly, alternative approaches may exist, but we think decoupled representations are necessary.
- Decoupling training. The insight into why it works mainly comes from Sec.3.2 of our paper, where we use several toy experiments to show the strong coupling issue. The method in Fig.4 ensures there is no motion correlation between the editing and unedited regions, preventing the model from estimating the motion in the editing region from unedited content. We will add a more intuitive explanation in the paper.
## Q5 Multi-area Editing
In **Fig.4** in the PDF, we select a complex wavy line as the editing target and gradually add editing regions. We compute editing accuracy by Euclidean distance between the trajectory in editing result (extracted by CoTracker) and the target. Results below show slight impact of region numbers on accuracy and consistency. We will add this in our paper.
|Region Number|1|2|4|8|10|
|-|:-|:-|:-|:-|:-|
|Acc.|4.68|4.57|4.67|5.15|5.40|
|Temporal Consistency|0.9926|0.9920|0.9918|0.9905|0.9908|
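The accuracy metric described above (mean Euclidean distance between an extracted trajectory and the target) can be sketched in a few lines. The snippet below is our illustrative reconstruction, not the authors' CoTracker-based pipeline; the function name `trajectory_accuracy` is our own.

```python
import numpy as np

def trajectory_accuracy(extracted, target):
    """Mean Euclidean distance (in pixels) between corresponding points
    of an extracted trajectory and a target trajectory, both of shape
    (T, 2) with one (x, y) point per frame. Lower is better."""
    extracted = np.asarray(extracted, dtype=float)
    target = np.asarray(target, dtype=float)
    assert extracted.shape == target.shape
    # Per-frame point distance, then average over frames.
    return float(np.linalg.norm(extracted - target, axis=1).mean())

# Toy example: the extracted trajectory is the target shifted by 1 pixel in y.
target = np.stack([np.arange(5.0), np.zeros(5)], axis=1)  # straight line
extracted = target + np.array([0.0, 1.0])
print(trajectory_accuracy(extracted, target))  # 1.0
```

In the rebuttal's setting, `extracted` would come from running CoTracker on the edited video and `target` would be the user-specified trajectory.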
## Q6 Clarity
In Sec.3.2 and A.1 of our paper, we describe the trajectory sampling. The threshold $l_{Th}$ is empirically defined as the mean length of the sparsely sampled trajectories. Each video has one editing area, namely the minimum bounding rectangle of the sampled trajectories, expanded to at least 64x64 pixels. Data augmentation includes sampling frames at varying intervals and randomly reversing the video. We will update these details in our paper.
## Q7 Metrics
- Measurement in the editing region. Using a tracker helps evaluate our motion control, especially in the ablation; other methods do not have motion control. The content consistency in the editing region, however, can be compared among different methods. The table below shows the CLIP Score between the editing region and the target description.
|Methods|InsV2V|AnyV2V|Pika|ReVideo|
|-|:-:|:-:|:-:|:-:|
|Local CLIP Score|0.2255|0.2378|0.2402| 0.2516|
- Temporal stability. In our paper, we use the CLIP-Image score to evaluate temporal consistency, which is used in several related works (e.g., AnyV2V, Pix2Video). The referenced repo is helpful, but most of its metrics require a target video, which is unavailable in the editing task. After reviewing it, we find that PSNR Consistency can measure temporal stability, as shown below.
|Methods|InsV2V|AnyV2V|Pika|ReVideo|
|-|:-:|:-:|:-:|:-:|
|PSNR Consistency|34.94|34.12|40.51|39.26|
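The exact definition of "PSNR Consistency" is not given in this thread; one plausible formulation, sketched below purely for illustration, averages the PSNR between consecutive frames so that temporal flicker lowers the score.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two uint8 frames."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_consistency(frames):
    """Average PSNR over consecutive frame pairs, a simple proxy for
    temporal stability (higher = smoother). This is an assumed
    formulation, not necessarily the one used in the rebuttal."""
    return float(np.mean([psnr(f0, f1) for f0, f1 in zip(frames, frames[1:])]))

# Toy video: each frame shifts brightness by 10 gray levels,
# so every consecutive pair has MSE = 100.
frames = [np.full((8, 8), 10 * i, dtype=np.uint8) for i in range(3)]
print(round(psnr_consistency(frames), 2))  # 28.13
```

A no-reference metric like this sidesteps the problem noted above: unlike most video-inpainting metrics, it does not require a ground-truth target video.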
- Human evaluation. In Tab.1 of our paper, we conduct a user study on overall video quality and editing accuracy. We agree that more evaluation criteria would be helpful, and we will extend the study accordingly.
## Q8 Robustness and Ablation
- Choice of motion representation. Yes, trying more motion representations would be helpful, but we find that only trajectories meet our requirements of sparsity and interactivity.
- Impact of trajectory sampling. Training with trajectories that exhibit significant motion is crucial; the table below shows its significant impact. In contrast, the number of trajectories has little impact.
||Random sampling|ReVideo|
|-|:-:|:-:|
|Acc.|13.38|5.21|
- Impact of size and shape. In Sec.A.2 of our paper, we show our support for irregularly shaped regions. The table below shows the performance after randomly expanding the width/height of the editing regions on the test set. The size and shape have little impact on performance, even when the editing region covers the entire video.
||Default|+$l\in$[32,64]|+$l\in$[64,128]|Global|
|-|:-:|:-:|:-:|:-:|
|Acc.|5.21|5.06|5.32|5.48|
- Ablation of SAFM. "w/o SAFM" in Sec.4.3 is a simpler fusion scheme that fuses the motion and content embeddings by addition and convolution. It fails in complex motion control. This result, together with the weight visualization in Fig.5, verifies the need for region-adaptive fusion.
- Input perturbations. We try two content perturbations in **Fig.5** in the PDF. The base model's prior on high-quality videos can mitigate content degradation, and motion control works normally. If the trajectory is perturbed, the motion will follow the perturbation.
We will update the above discussion in our paper.
## Q9 Dataset Complexity
In **Fig.6** and **Fig.7** in the PDF, we try some complex scenarios. Our method can handle dynamic lighting and texture, but scene change affects the content quality, which is a failure case. We will include a discussion in our paper.
## Q10 Editing Scenario
Our tests include simple semantic interactions, such as wearing a hat or glasses. In **Fig.8** in the PDF, we test a more complex interaction in which two balloons collide: they do not bounce off each other but overlap. This failure occurs because our base model (SVD) is weak at physical world modeling. We believe our method can achieve such interactions with a more powerful base model. We will discuss this failure in the paper.
## Q11
Yes, we will release all code.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal and efforts to address the concerns raised. I still believe this work is timely and addresses an important topic.
After considering the comments from other reviewers and your rebuttal, my opinion and suggestions are as follows:
1. The task scope and editing capabilities need to be defined more precisely. The initial submission did not clearly discuss the limitations and capabilities in the introduction and conclusion sections.
2. While the decoupling method is well-motivated, the proposed solution is straightforward and similar to proposals seen in many common computer vision tasks. There is little technical analysis and theoretical explanation provided, especially regarding potential artifacts introduced by the proposed method.
3. Given the current state of the field, the architecture is reasonable and enables motion control by trajectory, despite its complexity and the need for tuning many hyperparameters. However, a more concise solution may emerge in the future. To facilitate reproducibility, I recommend providing more detailed documentation of the training details in the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive suggestions. Your very detailed comments in the review improve our paper to a higher standard. We will make a revision accordingly. | null | null | Rebuttal 1:
Rebuttal: We appreciate the efforts of all the reviewers, ACs, and PCs. We have carefully read and addressed all concerns. **Since we are limited to 6,000 characters per reviewer during the rebuttal phase, we could only provide brief responses to some questions.** If there are any further issues, we are happy to address them during the discussion phase.
We include the figures and videos in the attached PDF. Please note that you need to use **Adobe Acrobat** to open the PDF to watch the videos.
Pdf: /pdf/ec59620e0a89cdf5f45c90b016c618fbf457315a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Implicit Optimization Bias of Next-token Prediction in Linear Models | Accept (poster) | Summary: This paper studies the implicit bias of gradient descent on the Next-Token Prediction (NTP) problem in linear models. They first formulate this NTP problem as minimizing the cross-entropy (CE) loss over distinct contexts, each tied with a sparse conditional probability over the token space. They then provide the necessary conditions for the CE loss to reach the entropy lower bound, i.e., the NTP-compatible condition and the NTP-separable condition. Then, they prove one sufficient condition for those two conditions is overparameterization, i.e., the dimension of the embedding space d is larger than the number of distinct contexts in the dataset. Assuming both compatible and separable conditions, they then prove the directional convergence of the minimizer of the CE loss within a certain range and the directional convergence of the GD iterate towards the direction of the solution of an NTP-SVM.
In general, I think this paper delves into a good and important problem: the optimization path and implicit bias of NTP mechanism. The authors provided a good formulation, and the proof is solid.
Strengths: 1. They investigate an interesting and important problem: the optimization path and the implicit bias of NTP.
2. Their formulation of NTP into the CE minimization over distinct contexts is novel.
3. They provide rigorous theoretical results and the proofs are solid, to my knowledge.
Weaknesses: 1. The main issue of this paper is that, for the NTP-compatible and separable conditions to hold, one needs d > m. Does this overparameterization condition usually hold in practice? To my knowledge, in practice the embedding dimension d is much smaller than the number of training data. Since m is the number of distinct contexts rather than the number of training samples, it can be much smaller than that, but it is still not clear to me whether this assumption is plausible in practice.
2. There are some paragraphs that are not very clearly written. For example, in lines 154-157, why does equation 4 constrain W^p w.r.t. this subspace? Why is the solution W* unique, assuming equation 4 has a solution? I think those can be expressed as lemmas to make them clearer. In line 148, the authors claim that (3a) holds if and only if the data satisfies the NTP-compatible condition. The 'if' direction is trivial, but the other direction needs a more rigorous proof.
Technical Quality: 3
Clarity: 2
Questions for Authors: See above.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and the constructive questions raised. We appreciate the careful read.
We hope that our responses below answer your questions.
**Q1. for the NTP-compatible and separable conditions to hold, one needs d > m.**
Here are the key points to consider about the condition $d>m$ of Lemma 1:
1. First, this is only a sufficient condition, not a necessary one. In general, whether the conditions are satisfied depends on the geometry of embeddings and the sparsity pattern of the language training set.
2. Second, the result holds for linear models (fixed embeddings). It is an open question (Line 358-360) for future work to derive sufficient conditions for nonlinear models. The NTP-compatibility/separability conditions, which are themselves both necessary and sufficient for the NTP training loss to reach its lower bound, independent of linearity, provide the appropriate formulations for such an endeavor.
Specifically, in nonlinear models, the total number of parameters can be increased by augmenting the width or depth, rather than directly modifying the embedding dimension $d$ as in linear models. This is similar to known results for one-hot classification: For linear models, $d > m$ is a sufficient condition for separability of data (where $m$ is the number of samples and $d$ the ambient dimension), but this condition is relaxed when adding one hidden nonlinear MLP layer, where data can be separable provided $d > m/W$ (with $W$ being the width of the hidden layer). By increasing the width and making $dW$ the total number of parameters, the requirement on $d$ is relaxed. Deriving analogous results for the NTP setting is an interesting future direction. This remains non-trivial since an MLP layer shall be replaced by a sequence-to-sequence architecture (such as transformer), adding complexity to the analysis. Additionally, we expect that the result depends on the sparsity patterns of the support sets.
We appreciate the question and will elaborate on Lines 358-360 in the future work section towards relaxing the current limitation of $d>m$ due to linearity.
**Q2. In lines 154-157, why does equation 4 constrain $W^p$ w.r.t. this subspace? Why is the solution $W^\star$ unique, assuming equation 4 has a solution?**
Assuming Eq. (4) has a solution, say $W_1$, every other solution takes the form $W^p = W_1 + W_{\text{null}}$, where the null-space component $W_{\text{null}}$ is orthogonal to every matrix in $\{(e_z - e_{z'})h_j^T : z \neq z' \in S_j, j \in [m]\}$. Thus, $W_{\text{null}} \in F^\perp$. We will reword Line 155 accordingly.
Assuming it exists, the solution to Eq. (4) is unique *when constrained to the data subspace $F$*. To see this, suppose $W_1 \neq W_2$ are both solutions in $F$. Since both are solutions, their difference $W_1 - W_2$ is orthogonal to $F$; but since $F$ is a subspace containing both, $W_1 - W_2$ also belongs to $F$. Hence $W_1 - W_2 = 0$, i.e., $W_1 = W_2$, a contradiction. We will include this argument in the appendix for completeness.
**Q3. In line 148, the authors claim that (3a) holds if and only if the data satisfies the NTP-compatible condition. The 'if' direction is trivial, but the other direction needs a more rigorous proof.**
Thank you for highlighting this. Upon review, we agree that our initial statement in line 148 requires more precise phrasing. It is straightforward to see that (3a) implies NTP-compatibility as described in Equation (4). However, the converse is not inherently true; NTP-compatibility alone does not ensure (3a) holds.
For (3a) to be valid, both NTP-separability conditions (Equations 6a and 6b) must also be satisfied in addition to Equation (4). Here’s a proof outline:
Assuming Equations (4), (6a), and (6b) hold for some $W^p$ and $W^d$, we define $W_\gamma=W^p+\gamma W^d$.
Decompose the inverse of $S_z(W_\gamma h_j)$ as:
$$
1/S_z(W_\gamma h_j) = 1+ \sum_{z'\neq z\in S_j }\exp\left( (e_{z'}-e_z)^T W_\gamma h_j \right) + \sum_{v\not\in S_j }\exp\left( (e_v-e_z)^T W_\gamma h_j \right)
$$
Using the properties from (4) and (6a) for in-support pairs:
$$
(e_{z'}-e_z)^T W_\gamma h_j = \log\left(p_{j,z'}/p_{j,z}\right),
$$
leading to:
$$1+ \sum_{z'\neq z\in S_j }\exp\left( (e_{z'}-e_z)^T W_\gamma h_j \right) = 1+\sum_{z'\neq z\in S_j } p_{j,z'}/p_{j,z} = 1+(1-p_{j,z})/p_{j,z}=1/p_{j,z}.$$
For out-of-support pairs, applying (6b) and considering the limit as $\gamma\rightarrow\infty$, we have:
$$
\exp\left( (e_v-e_z)^TW_\gamma h_j \right) = \exp\left( \gamma (e_v-e_z)^TW^d h_j \right)\cdot \exp\left( (e_v-e_z)^TW^p h_j \right) \leq \exp\left(-\gamma\right)\cdot \exp\left( (e_v-e_z)^TW^p h_j \right) \stackrel{\gamma\rightarrow \infty}{\longrightarrow} 0
$$
Putting these together gives $1/S_z(W_\gamma h_j) \stackrel{\gamma\rightarrow \infty}{\longrightarrow} 1/p_{j,z}$ as desired.
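The limit can also be sanity-checked numerically. The toy construction below (vocabulary size 3, one context, support $S = \{0, 1\}$ with probabilities $(0.7, 0.3)$, and hand-picked matrices satisfying Eqs. (4), (6a), (6b)) is our own illustration, not taken from the paper.

```python
import numpy as np

# Toy setting: V = 3 tokens, embedding dim d = 2, a single context h.
h = np.array([1.0, 0.0])

# W^p: in-support logit gaps equal log-probability ratios (Eq. 4).
Wp = np.array([[np.log(0.7), 0.0],
               [np.log(0.3), 0.0],
               [0.0,         0.0]])
# W^d: equal logits on the support (Eq. 6a), and a margin of at least 1
# against the out-of-support token (Eq. 6b).
Wd = np.array([[0.0,  0.0],
               [0.0,  0.0],
               [-1.0, 0.0]])

def softmax(x):
    e = np.exp(x - x.max())  # numerically stable softmax
    return e / e.sum()

for gamma in [0.0, 5.0, 50.0]:
    s = softmax(Wp @ h + gamma * Wd @ h)
    print(gamma, np.round(s, 4))
# As gamma grows, s approaches (0.7, 0.3, 0), matching the limit above.
```

The out-of-support logit is driven to $-\infty$ linearly in $\gamma$ while the in-support logit gaps stay fixed, which is exactly the mechanism in the decomposition above.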
We will include the above calculations within a formal proof of Proposition 1 in the appendix and we will remove the 'iff' statement from Line 148. Thanks again for catching this!
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Thank you again for your thoughtful comments and careful read. Did our responses answer your questions? We look forward to your feedback before the discussion period ends! | Summary: This work studies the implicit bias of optimization in next token prediction tasks by analyzing the structure of the decoding matrix at infinite time. The paper introduces two novel conditions under which the loss reaches its minimum theoretical value and demonstrates that if these conditions hold (which can be, for example, the case when the model is overparameterized), then after GD training, the decoding matrix will converge (in direction) to a matrix reminiscent of the maximum-margin matrix in "standard" classification.
Strengths: This work studies a timely topic (next token prediction) and approaches it from a learning theoretic perspective (implicit bias of optimization), which has proven to be very fruitful in "standard" classification. The assumption of sparse contexts is clever and should be of wider applicability. The results are novel and analogous to similar results that were proven for "standard" classification. Furthermore, the presentation is comprehensive, with many pointers to related work, which help contextualize this paper's contributions.
Weaknesses: A weakness, which the authors do acknowledge in their work, that prevented me from giving a higher score is that there is no clear connection between the structure of the weights and generalization, as there exists in "standard"/one-hot classification. As a result, it is unclear how much insight can be derived from the current result. I would appreciate the authors' thoughts on this.
Minor: The text is too dense in places, with the authors trying to include more details than what the space permits. I would suggest moving some of the discussion in Sections 6 and 7 to the Appendix to facilitate a smoother flow.
Technical Quality: 3
Clarity: 3
Questions for Authors: A minor suggestion: lines 32-34 appear to require rephrasing.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors thoroughly discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your encouraging feedback and for endorsing our paper.
**Q: A weakness, which the authors do acknowledge in their work, that prevented me from giving a higher score is that there is no clear connection between the structure of the weights and generalization, as there exists in "standard"/one-hot classification. As a result, it is unclear how much insight can be derived from the current result. I would appreciate the authors' thoughts on this.**
Indeed, more work is needed to connect the structure of the weights to generalization. In fact, the question of generalization is at the center of what we envision as follow-up work. Although much of this is still in early stages, we would like to share some of our thinking around linking implicit-bias optimization results to generalization in the language setting below.
The first line of thinking relates to how to quantify generalization. One option is test loss (NTP loss with expectation over fresh contexts). In this case, we have reason to believe that it might be possible to extend techniques from standard classification that relate the generalization gap to training loss via algorithmic stability, e.g. [SK22]. The way the ‘structure of the weights’ would come into such a result is by ensuring that there exist weights, sufficiently close to initialization, such that training loss becomes $\epsilon$-close to its entropy lower bound. Since approximating the lower bound requires NTP-separability, we expect the max-margin weights to give the best candidate for selecting those weights. However, two challenges need to be resolved with this approach: the first is technical and involves extending the above-mentioned prior work to accommodate the imbalanced distribution of (distinct) contexts, which breaks the ‘identically distributed’ assumption. The second is more conceptual and relates to the second line of thinking discussed below: what is a ‘good’ model for the context embeddings and for the sparsity patterns?
Beyond test loss, it is unclear to us what the right analogue of one-hot error in standard classification is. A possibility is to study specific tasks for which such a metric is clearly defined. A concrete example could be the bracket-tracing task Dyck, for which [Mur+23] recently demonstrated that NTP-trained transformers can generalize well, a phenomenon they call structural grokking, but only after extensive training. We conjecture that the source of inductive bias for this structural grokking phenomenon is actually the NTP training, and that the framing of NTP in our paper might provide the right tools to study this since: (1) the model of NTP as sparse soft-label classification fits the synthetic language dataset Dyck, for which the experiments are reported in [Mur+23]. (2) It is experimentally suggested that structural grokking occurs only after extended training and only after the NTP loss saturates (i.e., when NTP-separability and compatibility conditions kick in). (3) The principle of margin-maximization has been very recently connected to the grokking phenomenon, although only in one-hot settings of modular arithmetic [Moh+23, Mor+24].
The second line of thinking involves identifying appropriate models for context embeddings and for sparsity patterns of the next-token distribution. Directly modeling context embeddings appears necessary if we are to hope that linear models can still provide (to some extent) insights on language generalization. Simply put the question becomes: what is the simplest (discrete) analogue of the Gaussian mixture model (features distributed normally around a mean vector for each class) which we often use to model image-classification data and produces insights on things like benign-overfitting and optimal loss functions even within linear models? Even if we decide we need to push beyond linear models, we still need to consider modeling the sparsity patterns of the next-token distributions, as this is critical in determining conditions for reaching the entropy lower bound and determining the structure of weights via the NTP max-margin program.
Finally, we hypothesize that considering the conceptual meaning that certain weights might carry could be a way forward. Concretely, decoder weights correspond to word embeddings and last-layer activations ($h_j$ in the paper) correspond to context embeddings. Identifying the structure of these weights translates to the structure of word and context embeddings (see Lines 361-7 for a possible way for future work to arrive at these). Although how to explicitly relate this structure to generalization is at the moment still an open question, at the very least, this direction could lead to insights on the functionality of large language models by understanding how NTP maps language sparsity patterns to word/context representations. Importantly, such characterizations could also investigate linguistic regularities such as word analogies (e.g., representations where king-man+woman=queen). This relates to generalization, since the ability of a model to produce representations for which such arithmetic operations hold has been linked experimentally to better downstream generalization [Mik+13].
[SK22] Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond
[Mur+23] Grokking of Hierarchical Structure in Vanilla Transformers
[Moh+23] Grokking modular arithmetic can be explained by margin maximization
[Mor+24] Feature emergence via margin maximization: case studies in algebraic tasks
[Mik+13] Efficient Estimation of Word Representations in Vector Space
**Q: text too dense.**
Well received. It makes sense to slightly shorten Sections 6 and 7 by moving some parts to the appendix.
**Q: lines 32-34.**
We appreciate the careful read. We will correct the typo by replacing "when" with "of" in Line 32.
---
Rebuttal Comment 1.1:
Comment: I apologise for my delayed response, which was due to force majeure.
> We conjecture that the source of inductive bias for this structural grokking phenomenon is actually the NTP training, and that the framing of NTP in our paper might provide the right tools to study this since: [...]
In case my input is useful, I agree with this conjecture.
I would like to sincerely thank you for your comment. It taught me many things. In general, I believe that the paper is a solid contribution and should be accepted.
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind words and for sharing your thoughts on the conjecture—this is encouraging!
We appreciate your time and support. | Summary: This paper studies the structural properties of the solutions selected by gradient-based optimizers among the many possible minimizers of the NTP objective, the central challenge being to discern the "implicit bias" of the optimizer towards particular solutions.
Strengths: - The paper is generally well written, and the notation is very clear.
- The paper provides a very interesting starting point for studying the solutions found by gradient descent in NTP settings.
Weaknesses: While the paper provides a very interesting starting point for studying the solutions found by gradient descent in NTP settings, it's not very clear whether margin maximization practically corresponds to any meaningful takeaway in language modeling.
Technical Quality: 3
Clarity: 3
Questions for Authors: Just the remark in the weaknesses.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and for the positive feedback and score.
**Q: While the paper provides a very interesting starting point for studying the solutions found by gradient descent in NTP settings, it's not very clear whether margin maximization practically corresponds to any meaningful takeaway in language modeling.**
We agree that this paper is a starting point and indeed we are currently working on extending it further. Below we’d like to share some of our early-stage thinking to help convey our vision for this line of work and its relevance to language modeling.
A first idea is to use the problem formulation and convergence framework as a basis for characterizing the geometry of word and context representations of language models trained with NTP. In our paper, we take the initial step by fixing the context representations and characterizing word embeddings. An idea to also explicitly account for context representations is to assume that context embeddings are freely optimized together with word embeddings. We briefly discuss this in Lines 361-7. An advantage of this approach is that it circumvents the complexities of a specific architecture (e.g., transformers) and isolates NTP from it. However, this requires assuming a large enough model with sufficient expressive power to generate such unconstrained embeddings. Recall that our current framing of the NTP paradigm as a sparse soft-label classification over distinct contexts, together with identifying the necessary and sufficient conditions for reaching the entropy lower bound, continue to hold in this setting. Thus, the technical question that future work needs to address is extending the convergence analysis to the bilinear setting where both $W$ and $h_j, j\in[m]$ are optimized. Our conjecture, pending further analysis, is that the convergence result will be of similar flavor: when projected to an appropriate data subspace, $F$, the word and context embeddings converge in direction to the solution of an appropriately defined margin-maximization problem. But, unlike the linear case where $F$ depends both on the embeddings and on the sparsity patterns of the language data, in this new setting, it is natural to expect that $F$ would only depend on the latter.
It follows then that the (corresponding) margin maximization program would establish a direct mapping between language data statistics as encoded in the sparsity patterns of the training data and the geometry of representations. We deem this interesting in its own right. Moreover, down the road, we hope this might also help enhance model interpretability and explainability, as well as provide a way to algorithmically (e.g., by modifying the CE loss) mitigate unfavorable imbalances in language data (e.g., rare words/contexts). Indeed, characterizing the geometry of word representations has a long history in NLP literature. This dates back to at least the work [LV14], which studies word geometry of the Skip-gram with negative sampling objective in word2vec. This has been used to provide insights on ‘word analogies’ and inspire algorithms that modify the geometry of representations (e.g., making them more isotropic [Aro+16]) towards improving linguistic regularities [MBV17]. More recently, there are many works of similar flavor on modern transformer models trained with NTP, but to the best of our knowledge, most of these are heuristic/experimental, often resulting in contradictory claims. We see an opportunity for a theoretical framework to complement such work.
Additionally, we envision that the results can be used to gain insights into how language models generalize. To give a more concrete example, it could be possible to use the results to theoretically investigate the empirically observed phenomenon of ‘Grokking of Hierarchical structure’, i.e., the ability of models to infer hierarchical structures in language data when trained far beyond the point of saturating the training accuracy [Mur+23]. While [Mur+23] report this phenomenon in transformers, we conjecture that the source of this structural grokking phenomenon is actually the inductive bias of NTP training. Various reasons lead us to believe that the framing of NTP in our paper might provide tools to study this: (1) the model of NTP as sparse soft-label classification fits synthetic language datasets such as Dyck, for which the structural-grokking experiments are reported in [Mur+23]. (2) It is experimentally suggested that structural grokking occurs only after extended training and only after the NTP loss saturates (i.e., when NTP-separability and compatibility conditions kick in). (3) The principle of margin-maximization has been very recently connected to the grokking phenomenon, although only in one-hot settings of modular arithmetic [Moh+23,Mor+24].
Overall, while more work is needed to materialize the paper’s results into direct language modeling insights (e.g., the relation of representation geometry to language data statistics and how it impacts linguistic regularities, as well as generalization phenomena like structural grokking), we hope the above discussion (which we are happy to elaborate upon in the paper) convinces that it is a worthwhile endeavor.
[LV14] Neural Word Embedding as Implicit Matrix Factorization
[Aro+16] A Latent Variable Model Approach to PMI-based Word Embeddings
[MBV17] All-but-the-Top: Simple and Effective Postprocessing for Word Representations
[Mur+23] Grokking of Hierarchical Structure in Vanilla Transformers
[Moh+23] Grokking modular arithmetic can be explained by margin maximization
[Mor+24] Feature emergence via margin maximization: case studies in algebraic tasks
---
Rebuttal 2:
Comment: Dear Reviewer,
We appreciate your support of our submission. We hope that our response regarding the potential takeaways of our study for language modeling sparked some interest. If you have any questions about these ideas, we are happy to elaborate. In any case, we look forward to hearing your feedback before the discussion period ends.
---
Rebuttal 3:
Comment: Thank you for your response and for shedding light on future directions and practical implications. Although this paper does not lie squarely within my area of expertise, I think it is a good paper, and I will be raising my score to an accept.
---
Rebuttal Comment 3.1:
Comment: Thank you again for your feedback and support.
We appreciate your decision to raise the score. | Summary: This study investigates the structural properties of solutions chosen by gradient-based optimizers for next-token prediction (NTP), framing NTP as cross-entropy minimization across various contexts with sparse conditional probability distributions over a finite vocabulary. It focuses on the optimization bias of gradient descent (GD), characterizing how GD selects parameters that equate the logits’ differences of supported tokens to their log-odds.
Strengths: This study enables deriving the data-entropy lower bound in NTP, which is useful for understanding the optimization and generalization properties of NTP models.
Weaknesses: The study's focus on linear models analyzing CE loss for NTP may limit its novelty and applicability, making its contributions to the field appear unclear compared to existing research.
Technical Quality: 2
Clarity: 2
Questions for Authors: Q.1 Please clarify the differences and advantages of your study compared to the following existing research. What new insights does this study provide, and why are they important? Specifically, while these existing studies highlight the critical role of attention in NTP, your study omits this aspect. Could you explain why it is still valid to disregard attention in your analysis?
Mechanics of Next Token Prediction with Self-Attention
Yingcong Li, Yixiao Huang, M. Emrullah Ildiz, Ankit Singh Rawat, Samet Oymak
Max-Margin Token Selection in Attention Mechanism
Davoud Ataee Tarzanagh, Yingcong Li, Xuechen Zhang, Samet Oymak
Transformers as Support Vector Machines
Davoud Ataee Tarzanagh, Yingcong Li, Christos Thrampoulidis, Samet Oymak
Q.2 When considering next-token prediction (NTP) using sequence data, distinct contexts might differ by only a single character and are expected to be interrelated. Does the assumption of independent and identically distributed (i.i.d.) data in Eq. (2) not pose a problem in this scenario?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations of this study include its reliance on the simplicity of the analyzed model, unclear distinctions and advantages over existing research, and its omission of key aspects such as the properties of attention mechanisms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
Below, we clarify the distinctions from the references you mention and explain why our problem setting differs from studying self-attention/transformers, focusing instead on the NTP paradigm. While we have detailed these discussions in the submission, we repeat them here for your convenience, mindful of the reviewing load.
We hope these clarifications lead to a re-evaluation of your score!
**Q1: Comparison to works on implicit bias in transformers:**
We have discussed these references in detail in the submission. For your convenience, we summarize them below.
[44] Mechanics of Next Token Prediction with Self-Attention - Li et al.
[79] Max-Margin Token Selection in Attention Mechanism - Tarzanagh et al.
[80] Transformers as Support Vector Machines - Tarzanagh et al.
In Lines 327-328, we explain that our convergence results relate to a conjecture in [79] about implicit optimization bias in transformers, though our findings are different in nature. This is further elaborated in Appendix B, Lines 648-667. The discussion there is self-contained and directly answers your questions. We reproduce it below:
“*As already mentioned in Sec. 6, our work is closely related to [79], where the authors investigate the implicit bias of self-attention in transformers. The insight put forth in the prequel [80] is that softmax attention induces implicit-bias behaviors that bear similarities to vanilla implicit bias of one-hot prediction. Concretely, [79] studies GD optimization of one-layer self-attention with fixed decoder and one-hot binary classification. They show that, in the limit, GD finds attention weights that converge in direction to the solution of an SVM problem that separates optimal tokens from non-optimal ones. Their non-convex setting introduces locally optimal SVM directions to which GD may converge depending on initialization. Different to them, the NTP setting that we study involves predictions over multiple categories and is not one-hot. Also, while they fix the decoder, here, we fix the embeddings. In these respects their results are rather different. More similarities arise when [79] replace the linear decoder with a MLP, which they note can induce multiple optimal tokens per sequence. This leads them to formulate a more general token-separating SVM program, which similar to ours confines the separation on a certain data subspace. However, the operational nature of the programs remains different as theirs optimizes attention weights and separates tokens within a sequence, while ours optimizes decoder weights and separates context embeddings based on their respective support sets. More importantly, while [79] only conjectures the convergence of GD to their general SVM program, we leverage convexity in our setting to prove an analogous statement rigorously. Eventually, as we move lower in our top-down approach and consider architecture-specific embeddings generated by attention, we anticipate to see integration of our ideas with theirs.*”
Additionally, Lines 677-684 compare our work to Li et al. [44]:
“*Upon completing this paper, we became aware of independent contemporaneous research by Li et al. [44] that also examines the implicit bias of self-attention with a fixed linear decoder in next-token prediction scenarios. Unlike our study which utilizes the widely adopted CE loss, their approach is based on log-loss, which renders the training loss convex, a similarity shared with our model despite the inclusion of self-attention. Both our results and those of Li et al. substantiate the conjecture posited by Tarzanagh et al. [79], albeit in very distinct settings. Notably, contrary to both [79] and [44], we unveil the optimization intricacies of the NTP paradigm, even within the simplest linear settings.*”
We believe these detailed comments clarify the distinctions to the above references. If you have further questions, please kindly let us know and we are happy to elaborate.
**Q2: Could you explain why it is still valid to disregard attention in your analysis?**
Our key message is that self-attention and next token prediction (NTP) are distinct. NTP, which involves predicting the next token given preceding tokens using cross-entropy loss, is used across transformer-based models, state-space models, and LSTMs (see footnote 3). Thus, studying NTP separately from transformers/self-attention is valid.
By isolating NTP, we highlight essential aspects of the problem often overlooked (e.g., [47,50]). For example, modeling NTP in language settings as soft-label classification over sparse probabilistic labels and deriving conditions for reaching the entropy lower bound. These foundational aspects remain valid regardless of the embedding architecture (see Lines 192-195).
Our convergence results assume fixed embeddings and training only classifier weights (aka word embeddings). In Section 7, we suggest two avenues to extend this analysis. First, studying architecture-specific embeddings, including those generated by self-attention (Lines 352-357). Second, exploring architecture-agnostic embeddings via the unconstrained features model (Lines 361-367). This approach leads to a model where both context and word embeddings are freely optimized, helping to understand the geometry of context and word embeddings once the NTP loss reaches the lower bound.
**Q.3 distinct contexts might differ by only a single character and are expected to be interrelated. Problem with independence and identically distributed (i.i.d) data assumption in Eq. (2)?**
There is no i.i.d. assumption in Eq. (2). Rather, Eq. (2) is a reformulation of the empirical NTP loss expressed in terms of distinct contexts, with the summation over these contexts.
If we have two contexts that differ by a single character, they are still considered distinct. Thus, Eq. (2) applies to them as it corresponds to two different distinct embeddings, say $h_{j_1}$, $h_{j_2}$ with $j_1,j_2\in[n]$.
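To make the reformulation concrete, the following is a small sketch of the empirical NTP loss summed over distinct contexts, together with the entropy lower bound it is compared against (the toy vocabulary, context frequencies, and sparse next-token distributions below are illustrative choices for this response, not from the paper):

```python
import numpy as np

# Empirical NTP loss rewritten over distinct contexts: a frequency-weighted
# cross-entropy between each distinct context's sparse next-token
# distribution p_j and the model's softmax output.
V = 4                        # toy vocabulary size
pi = np.array([0.6, 0.4])    # empirical frequencies of two distinct contexts
p = np.array([[0.7, 0.3, 0.0, 0.0],    # sparse next-token supports S_j
              [0.0, 0.5, 0.5, 0.0]])

def ntp_loss(logits):
    # logits: one row of scores per distinct context
    logq = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(pi[:, None] * p * logq).sum()

# Entropy lower bound: CE >= average conditional entropy of the next token.
logp = np.log(p, where=p > 0, out=np.zeros_like(p))
entropy = -(pi[:, None] * p * logp).sum()

rng = np.random.default_rng(0)
assert ntp_loss(rng.normal(size=(2, V))) >= entropy  # Gibbs' inequality
```

The bound holds for any logits and is approached when the model's softmax matches each $p_j$ on its support, e.g., logits close to $\log p_{j,z}$ on the support with large negative out-of-support scores.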
---
Rebuttal 2:
Comment: Dear Reviewer,
Did our response address your questions? Specifically, did it resolve your concern about how our work positions itself in relation to the papers you mentioned?
We look forward to your feedback before the discussion period ends.
Thank you for your time.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response, I will raise my score.
On Q2.
Let me confirm the motivation.
Does your response mean that next token prediction can be separated from a specific model to understand its properties?
If so, can such a separated theory provide a unified explanation of the phenomena of a specific model?
In that case, it would be better to experimentally demonstrate the findings obtained in this paper using multiple different models.
On Q3.
When inputting contexts via a sliding window, the sequence corresponding to the previous answer is partially included in the context. Isn't this non-i.i.d., even if the contexts are distinct?
---
Reply to Comment 2.1.1:
Comment: Thank you for your response! And we appreciate your willingness to raise the score.
**On Q2:** Yes, that’s right. The thesis we aim to communicate is that NTP as a training framework has intrinsic properties worth investigating, independent of the underlying architecture. This is not to say that architecture doesn’t matter, but significant insights can be gained by focusing on NTP itself. Our paper highlights this idea by examining a fixed embeddings setting, which allows us to isolate NTP properties. We provide experiments in the paper for this setting. More broadly, we envision using the proposed problem formulation and convergence framework to characterize the geometry of word and context representations in language models trained with NTP. Specifically, we aim to answer: how do the statistics of language data map to the geometry of representations during training?
In our paper, we take an initial step by fixing the context representations and characterizing word embeddings. To also account for context representations, one idea is to assume that context embeddings are freely optimized along with word embeddings. We briefly discuss this in Lines 361-367. An advantage of this approach is that it circumvents the complexities of a specific architecture (e.g., transformers) and isolates NTP from it, provided the model is large enough with sufficient expressive power to generate such unconstrained embeddings. Our current framing of the NTP paradigm as a sparse soft-label classification over distinct contexts, combined with identifying the necessary and sufficient conditions for reaching the entropy lower bound, still holds in this setting. Thus, the technical question for future work is extending the convergence analysis to the bilinear setting where both $W$ and $h_j, j \in [m]$, are optimized. We conjecture that this will lead to an appropriate margin maximization program that establishes a direct mapping between language data statistics, as encoded in the sparsity patterns of the training data, and the geometry of representations. We find this inherently interesting, and such characterizations could also investigate linguistic regularities, such as word analogies (e.g., representations where king-man+woman=queen). Additionally, they might help enhance model interpretability and explainability, and provide a way to algorithmically (e.g., by modifying the CE loss) mitigate unfavorable imbalances in language data (e.g., rare words/contexts).
**On Q3:** You are correct. The contexts are not i.i.d., but this does not create any problems in the current formulation. As mentioned above, there is no i.i.d. assumption in Eq. (2), and such contexts are still viewed by the optimization as distinct contexts. What the implicit bias viewpoint of NTP suggests is that the relationship between learned context representations depends on the sparsity patterns of their next tokens. That is, if two contexts (regardless of their degree of partial inclusion) are followed by a similar set of words (even if the probabilities of each word differ for the two contexts), their representations will tend to be more aligned, and vice versa.
On $f$-Divergence Principled Domain Adaptation: An Improved Framework | Accept (poster) | Summary: This study addresses the gap in the theory and algorithms of unsupervised domain adaptation based on f-divergence proposed by Acuna et al. 2021. Specifically, while the theory uses absolute values, the algorithms do not, and this issue is resolved by introducing a single scaling factor. The newly proposed f-DD generalization bound is derived based on Rademacher complexity, and tighter bounds are obtained using the localization technique.
As a specific domain adaptation algorithm, an adversarial type algorithm is proposed, yielding favorable results in benchmarks.
Strengths: This study bridges the gap between theory and practice in existing UDA methods based on f-divergence, advancing the foundational research in domain adaptation. Furthermore, the derivation of sharper bounds using the recently introduced localization technique for DA is highly commendable as a contribution to the theoretical framework of DA.
The authors validate their theoretical contributions with empirical results, showing superior performance on popular benchmarks.
Weaknesses: While the empirical validation is strong, it is limited to specific benchmarks. Broader validation across diverse datasets and tasks would strengthen the findings. It would be nice to present some insight into what kinds of datasets the proposed f-DD works well on (and why), and into what kinds it does not work well on (and why).
Technical Quality: 3
Clarity: 3
Questions for Authors: Please provide the source of the technique that resolves the absolute value with a scaling parameter.
Additionally, clarify whether the claim in this paper—that there is no need to adjust the scaling parameter—holds generally.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you sincerely for the valuable feedback on our paper. Our responses follow.
>- While the empirical validation is strong, it is limited to specific benchmarks. Broader validation across diverse datasets and tasks would strengthen the findings. It would be nice to present some insight into what kinds of datasets the proposed f-DD works well on (and why), and into what kinds it does not work well on (and why).
**Response.** Thank you for your suggestion. We have included additional experimental results on the VisDA-2017 dataset in the updated PDF (please see our general response).
Regarding when $f$-DD performs well or poorly: we believe $f$-DD tends to work effectively when the marginal distributions $P_X$ differ, but the conditional distribution $P_{Y|X}$ is similar or close across the source and target domains (e.g., in cases of covariate shift). This is because the calculation of prediction disagreement between domains relies on the marginal distributions. Conversely, if there is a significant difference in the conditional distributions $P_{Y|X}$ between the two domains, $f$-DD's effectiveness may be limited. However, such an issue may exist in most domain-discrepancy-guided UDA algorithms.
In such cases, utilizing additional knowledge about target domain labels, potentially through advanced pseudo-labeling techniques, might improve performance. We note that our theoretical framework assumes zero knowledge of target labels, so having such knowledge could require adjustments to our problem setup and refinement of our theory.
>- Please provide the source of the technique that resolves the absolute value with a scaling parameter.
**Response.** We believe that incorporating an absolute value function into the variational representation of $f$-divergence, as done by Acuna et al. (2021), is an unusual approach for upper bounding some objective quantities. This method changes the definition of $f$-divergence, as discussed at the beginning of our Section 4. In contrast, introducing a scaling parameter into the variational formula to achieve a better upper bound is a more conventional approach with a well-established history in generalization theory, such as PAC-Bayesian generalization bounds. For a detailed exploration of the importance of scaling parameters in upper-bounding techniques using $f$-divergence, we refer to [R1].
[R1] Rohit Agrawal and Thibaut Horel. Optimal bounds between f-divergences and integral probability metrics. In ICML 2020.
>- Additionally, clarify whether the claim in this paper—that there is no need to adjust the scaling parameter—holds generally.
**Response.** We note that we do theoretically justify this in our Proposition 1 (simply because the surrogate loss $\hat{\ell}$ and $t$ are both unbounded). Specifically, in our Proposition 1, when the hypothesis space $\mathcal{H}$ is sufficiently large and the outputs of $\hat{\ell}\circ h'$ and $t\ell\circ h'$ span the entire space of $\mathbb{R}$ (as the crudest choice), then one expects that any optimal $\hat{\ell}\circ h'$ that maximizes $\tilde{d}$ can be matched by a corresponding $t\ell\circ h'$ to achieve the same value of $d$, and vice versa. This suggests that optimizing over $t\ell\circ h'$ is equivalent to optimizing over $\hat{\ell}\circ h'$. In practice, we do not know if $\mathcal{H}$ is large enough for this argument to hold; hence, we also empirically validate that optimization over $t$ is unnecessary in practice. See Table 10 on Page 9 for details about optimizing over $t$.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications on my concerns. Now I tend toward acceptance and will raise my score.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We are pleased to hear that your concerns have been addressed, and we sincerely appreciate your decision to raise the score. | Summary: This paper studies the learning theory aspect of the domain adaptation problem, where the key is to bound the estimation errors between expectations over shifting distributions. Specifically, this work improves the recently developed $f$-divergence-based generalization analysis, where the main results ensure a tighter generalization upper bound and the consistency between theory and method. For finite sample setting, a sharp bound is provided to accelerate the asymptotic rate. Numerical simulation is conducted to demonstrate the superiority of the theory-guided method over the existing discrepancy-based framework.
Strengths: + The motivation is clear, i.e., improving the $f$-divergence-based bound and bridging the gap between method and theory, and the presentation is easy to follow.
+ The technical part is generally sound and the justifications are sufficient.
+ The experiment results are superior compared with recently developed generalization bounds.
Weaknesses: + Some notations are inconsistent in theoretical analysis.
+ The proposed algorithm needs further justifications.
+ The experiment comparison could be improved.
Technical Quality: 3
Clarity: 3
Questions for Authors: There seem to be no major faults in this submission, and I only have the following minor concerns.
Q1. Theory and methodology. The major result for the target error bound is provided in Eq. (4) in Thm. 4.1 and the specific bound w.r.t. KL-divergence is presented in line 162, where the induced learning objective consists of source risk and the square root of cross-domain KL-divergence. However, it seems that the optimization objective Eq. (5) considers the divergence without the square root directly. I understand the optimal solutions are the same for these two objectives (if the optimal solutions ensure 0 cross-domain discrepancies). But considering Eq. (4) is closely related to the major merit of this work, i.e., the tight bound, the consistency between Eq. (4) and Eq. (5) seems to be important. Some justifications are highly expected.
Q2. Method application. As far as I understand this work, the derived $f$-DD measure can be applied to existing works whose primary goal is discrepancy minimization. Thus, it could serve as a plug-and-play module for existing SOTA DA methods. Thus, some detailed discussions on the capability of $f$-DD w.r.t. existing methods are highly expected.
Q3. Following Q2, apart from the experiments in the current version, some comparisons between SOTA DA methods and their combination with $f$-DD objective are highly expected.
Q4. The clarity w.r.t. definitions could be improved, e.g., $K_{h',\mu}(t)$ depends on the hypothesis $h$ while the justification (i.e., line 178) is provided after the definition (i.e., line 176). A thorough check for these issues could improve the readability.
Q5. Some notations seem to be inconsistent. For example, the notation $I_{\nu}^{\phi}(h,h')$ in line 132 is inconsistent with $I$ in line 129; the notations $\mathbb{E}_{\nu}$ in line 132 seems to be incorrect (probably should be expectation over $\mu$?).
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are discussed in the checklist, and there seems no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you sincerely for your careful reading and valuable feedback on our paper. Our responses follow.
>- The proposed algorithm needs further justifications.
>- Q1. Theory and methodology. The major result ... the consistency between Eq. (4) and Eq. (5) seems to be important. Some justifications are highly expected.
**Response.** The objective in the algorithm is in fact motivated by the findings from Section 5, where we demonstrate that when the error of the source domain is small, the $f$-DD term without the square-root function should more accurately reflect generalization (e.g., see Remark 5.3). Given that the empirical risk of the source domain is always minimized during training, we believe the final hypothesis lies within a subset of $\mathcal{H}$ (i.e., the Rashomon set with threshold $r_1$). Consequently, the bound in Eq. (4) is weak in this case (as it has a slow convergence rate). Note that $R^r_\mu(h)$ in Section 5 is upper-bounded by $r+r_1$, which we believe should not tend to infinity at the end of training.
We will provide additional justification to clarify the proposed objective function in Eq. (5).
>- The experiment comparison could be improved.
>- Q2. Method application. As far as I understand this work, the derived $f$-DD measure can be applied to existing works whose primary goal is discrepancy minimization. Thus, it could serve as a plug-and-play module for existing SOTA DA methods. Thus, some detailed discussions on the capability of $f$-DD w.r.t. existing methods are highly expected.
>- Q3. Following Q2, apart from the experiments in the current version, some comparisons between SOTA DA methods and their combination with $f$-DD objective are highly expected.
**Response.** Thank you for your insightful suggestion regarding our $f$-DD being a plug-and-play module for existing SOTA methods. In fact, we do compare our $f$-DD with SOTA results on the Office-31 dataset, as shown in Table 10 of Appendix E. In particular, for a fair comparison, we replaced ResNet-50 with pretrained Vision Transformer (ViT) and Swin Transformer backbones. The results are close to the SOTA, with $93.9$ (ours) vs. $95$ (SOTA) for ViT and $94.8$ (ours) vs. $95.3$ (SOTA) for Swin-based transformer. In Lines 1214-1227, we provide a detailed explanation for why our $f$-DD currently does not outperform SOTA. Specifically, our $f$-DD lacks additional ingredients such as advanced pseudo-labeling techniques, Mixup, and label smoothing, which are commonly invoked in UDA algorithms.
We acknowledge that integrating $f$-DD with SOTA methods could be beneficial, though it may require nontrivial adjustments due to the complicated training objectives of these methods. Thus, rather than directly combining $f$-DD with SOTA methods, which could be complex, we have explored combining domain Mixup techniques with $f$-DD. Additionally, in response to other reviewers' requests for experiments on a larger dataset, we have included results on the VisDA-2017 dataset in the uploaded PDF (please find it in our general response). These results demonstrate the potential of $f$-DD to achieve SOTA performance.
We will include these new results in the next revision.
>- Some notations are inconsistent in theoretical analysis.
>- Q4. The clarity w.r.t. definitions could be improved, e.g., $K_{h',\mu}(t)$ depends on the hypothesis $h$ while the justification (i.e., line 178) is provided after the definition (i.e., line 176). A thorough check for these issues could improve the readability.
**Response.** Thanks for pointing this out. We will revise the definition of $K_{h',\mu}(t)$, and thoroughly proofread the paper to identify any similar issues.
>- Q5. Some notations seem to be inconsistent. For example, the notation $I^\phi_{\nu}(h,h')$ in line 132 is inconsistent with $I$ in line 129; the notations $\mathbb{E}_{\nu}$ in line 132 seems to be incorrect (probably should be expectation over $\mu$?).
**Response.** Thank you for catching that. You are correct—these are typos. Specifically, the $I$ in line 129 should be $I^\phi_{\nu}(h,h')$, and the expectation should be with respect to $\mu$. We appreciate your careful reading.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing responses to the concerns. My concerns are well-addressed.
Overall, I believe that this paper provides a tighter generalization error analysis framework by exploring a new 'change of measure' inequality via $f$-divergence, which is generally better than the commonly employed divergence based on the integral probability metric. Thus, I would like to improve my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We sincerely appreciate the reviewer’s positive feedback on our response and are grateful for the increased score. | Summary: This paper aims to develop an improved version of f-divergence-based unsupervised domain adaptation (UDA) learning theory. In particular, the authors introduce a novel f-divergence-based domain discrepancy measure (f-DD) by combining the two existing concepts, which are f-divergence and domain discrepancy. Based on that f-DD measure, the paper next provides a generalization bound on the target domain, which is shown to be sharper than the existing related bound. The experimental results consistently demonstrate that f-DD outperforms the original f-DAL in three popular UDA benchmarks, with the best performance achieved by Jeffreys-DD.
Strengths: The paper is well-written and easy to follow. The idea of introducing f-divergence-based UDA, targeting a better risk-bound on the target domain is novel and interesting. All the main statements of the paper are theoretically supported, though I did not have enough time to verify all of those propositions/theorems carefully.
The experimental results consistently demonstrate that f-DD outperforms the original f-DAL in three popular UDA benchmarks, with the best performance achieved by Jeffreys-DD.
Weaknesses: The novelty of the paper is quite limited since the f-divergence-based domain discrepancy measure (f-DD) is proposed by combining the two existing concepts, which are f-divergence and domain discrepancy.
In Theorem 5.2, the authors claim that the application of the localization technique gives a fast-rate generalization, but they do not provide concrete evidence. Could the authors give some explanations/clarifications for that?
Moreover, the experimental part of the paper does not seem very convincing, since it only provides experiments with quite small datasets (Office31, Office-Home, MNIST & USPS) and a simple model (e.g., LeNet). This raises concerns about the capability of f-DD in more complicated settings with larger datasets and backbone networks.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to my comments about the weaknesses of the paper.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you sincerely for your constructive comments. Our responses follow.
>- The novelty of the paper is quite limited since the f-divergence-based domain discrepancy measure (f-DD) is proposed by combining the two existing concepts, which are f-divergence and domain discrepancy.
**Response.** We respectfully disagree with the assessment that the novelty of our paper is limited due to the combination of $f$-divergence and domain discrepancy. Our paper aims to advance the existing framework rather than merely combining these concepts. The title of our paper highlights that we present an "improved framework" compared to the existing work (proposed by Acuna et al. (2021)), reflecting our focus on improving the previous results. While the variational representation of $f$-divergence is indeed discussed in earlier literature, the novel aspect of our paper lies in its application within a new $f$-divergence-principled theory for UDA.
We now summarize our novel theoretical contributions here: 1) Acuna et al. (2021) introduced the absolute value function into their discrepancy definition, leading to a disconnection from the original $f$-divergence, and potentially allowing it to go to infinity (as illustrated in Figure 1 in our paper). Their Lemma 1 claims that their notion is upper-bounded by $f$-divergence; but unfortunately their Lemma 1 is flawed. Specifically, their Eq. (B.5) in the appendix is incorrect as they use $\sup A = \sup |A|$ whereas essentially $\sup A \leq \sup |A|$, and hence the equality is not justified. This disconnection suggests that, strictly speaking, a rigorous $f$-divergence-based theoretical guarantee for the target error in UDA has not been established prior to our work; 2) In addition, we apply the localization technique in the analysis. The localization technique has been studied extensively in localized Rademacher complexity since 2005, or even earlier in the context of empirical processes theory. In UDA, before Zhang et al. (2020), the localization technique was mentioned in Example 4 of Hanneke & Kpotufe (2019). The novelty in our Section 5 lies in our novel application of localization: previous works like Zhang et al. (2020) utilize localization to achieve better sample complexity results (e.g., $\mathcal{O}(\frac{1}{n}+\frac{1}{m})$), while we directly incorporate the localization technique prior to proving the sample complexity results. In Lemma 5.1, we use it to remove the square-root function in Eq. (4) from Section 4, and as expected, it also helps to improve sample complexity in Theorem 5.2. Our proof techniques are in fact quite novel.
We sincerely hope the reviewer could reconsider the novelty and impact of our contributions in light of these clarifications.
>- In Theorem 5.2, the authors claim that the application of the localization technique gives a fast-rate generalization, they do not provide a concrete evidence. Could the author give some explanations/clarifications for that.
**Response.** We would like to direct the reviewer to Appendix D.12, as also referenced in Line 259 of the main text, where we provide a concrete example demonstrating this. Specifically, in the threshold learning example discussed in Appendix D.12, we show through straightforward calculations that the quantity
$5\cdot\mathrm{D}^{h\_{\frac{1}{2}},\mathcal{H}\_{\frac{1}{4}}}\_{\rm KL}(\nu||\mu)+0.1\cdot R^r_\mu=0.265$ is indeed smaller than $\sqrt{\mathrm{D}^{h\_{\frac{1}{2}},\mathcal{H}}\_{\rm KL}(\nu||\mu)}=0.36$. This illustrates that the localization technique results in a tighter bound.
>- Moreover, the experimental part of the paper seems not be very convincing since it only provides experiments with quite small datasets (Office31, Office-Home, MNIST & USPS) and simple model (e.g., Lenet). It raises the concern about capability of f-DD in more complicated settings with large datasets and backbone network.
**Response.** For additional experimental results with more complicated networks, please refer to Lines 1201-1227 in Appendix E, where we present results using the pretrained Vision Transformer (ViT) base model and the pretrained Swin Transformer. Additionally, ResNet-50 was used for the Office31 and Office-Home datasets, while LeNet was used only for the Digits dataset, where it has already demonstrated strong performance.
Furthermore, we have included experimental results on the VisDA-2017 dataset in the uploaded PDF file (please see our general response).
---
Rebuttal Comment 1.1:
Title: Upgrading my score to 6
Comment: Thanks to the authors for the detailed responses, especially for clarifying the scope of the paper. Since most of my concerns have been addressed, I upgraded the paper's score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are glad the reviewer is satisfied with our response, and we really appreciate the improved score. | Summary: This paper improves the theoretical foundations of UDA proposed by previous work, named f-DD. By removing the absolute value function and incorporating a scaling parameter, f-DD yields novel target error and sample complexity bounds, allowing us to recover previous KL-based results and bridging the gap between algorithms and theory presented in Acuna et al. Leveraging a localization technique, this paper also develops a fast-rate generalization bound. Empirical results demonstrate the superior performance of f-DD-based domain learning algorithms over previous works in popular UDA benchmarks.
Strengths: 1) This paper holds significant theoretical significance in the field of UDA (Unsupervised Domain Adaptation);
2) The proof of the theorem is very solid;
3) The experiments are also sufficient.
Weaknesses: 1) The readability of the paper is poor. It is almost entirely composed of definitions, remarks, lemmas and theorems, lacking a figure to introduce the motivation of this paper and explain why the improved framework is effective. 2) It is difficult to reproduce the results, as the training objective (5) is very abstract and unclear how to implement it experimentally. 3) This paper requires a substantial foundation of reading other papers in order to be understood.
Technical Quality: 3
Clarity: 2
Questions for Authors: How to implement the training objective (5)?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: Please refer to the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you sincerely for your valuable feedback on our paper. Our responses follow.
>- The readability of the paper is poor. It is almost entirely composed of definitions, remarks, lemmas and theorems, lacking a figure to introduce the motivation of this paper and explain why the improved framework is effective.
**Response.** Thank you for highlighting this concern. We understand that the paper may seem heavy on definitions, remarks, lemmas, and theorems, as it falls under the "Learning Theory" category, where theoretical analysis is often central.
We appreciate your suggestion to include a figure to improve readability. We have added a figure in the updated PDF that provides an overview of our improved framework (please see our general response). Additionally, we note that our improved framework is more effective because it provides a better variational representation of $f$-divergence compared to previous approaches. As this is one of our key motivations, it is necessarily explained from a theoretical perspective.
>- It is difficult to reproduce the results, as the training objective (5) is very abstract and unclear how to implement it experimentally.
>- How to implement the training objective (5)?
**Response.** We hope the uploaded figure in the general response could help to clarify the implementation of training objective. Specifically, it is implemented similarly to the adversarial training strategy proposed in DANN [R1]. To restate, our training objective is:
$$\min_{h\in\mathcal{H}}\max_{h'\in\mathcal{H'}}R_{\hat{\mu}}(h)+ \eta {d}_{\hat{\mu},\hat{\nu}}(h,h').$$
Our model is a deep neural network (e.g., ResNet-50) consisting of a representation network (i.e., $h_{\text{rep}}$) and two classification networks. The main classification network (i.e., $h\_{\text{cls}}$) is used for predictions, while the auxiliary classification network (i.e., $h'_{\text{cls}}$) is used to calculate the domain disagreement (e.g., $\ell(h\_{\text{rep}} \circ h\_{\text{cls}}, h\_{\text{rep}} \circ h'\_{\text{cls}})$).
For the outer minimization, $\min\_{h} R\_{\hat{\mu}}(h) + \eta \max\_{h'} {d}\_{\hat{\mu},\hat{\nu}}(h, h')$, we fix the parameters in the auxiliary classification network and jointly minimize the classification loss for the source domain data (i.e., $R\_{\hat{\mu}}(h)$) and the approximated $f$-DD between the source and target domains (i.e., $\max\_{h'} {d}\_{\hat{\mu},\hat{\nu}}(h, h')$). Note that $\max\_{h'} {d}\_{\hat{\mu},\hat{\nu}}(h, h')$ represents the empirical version of our $f$-DD. Then,
for the inner maximization, $\max\_{h'} {d}\_{\hat{\mu},\hat{\nu}}(h, h')$, we fix the main classification network $h\_{\rm cls}$ and then maximize ${d}\_{\hat{\mu},\hat{\nu}}(h, h')$ by training the auxiliary classification network.
This adversarial training strategy effectively minimizes both the source domain risk and the domain discrepancy between the source and target domains. We will provide additional details on the implementation to ensure completeness.
[R1] Yaroslav Ganin, et al. "Domain-adversarial training of neural networks." Journal of Machine Learning Research 17.59 (2016): 1-35.
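The alternating scheme described above can be caricatured with scalars standing in for the networks. The objective below is a contrived toy (not our actual loss), but the update structure is the same: gradient-ascend the discrepancy over the auxiliary model with the main model fixed, then gradient-descend the source risk plus the weighted discrepancy over the main model with the auxiliary fixed.

```python
# Toy scalar sketch of the min-max training loop; h is the "main" model,
# h2 the auxiliary critic. Both objectives are hypothetical stand-ins.
def source_risk(h):                  # stands in for R_mu(h)
    return (h - 1.0) ** 2

def discrepancy(h, h2):              # stands in for d(h, h2); concave in h2
    return -(h2 - h) ** 2 + 0.5 * h ** 2

eta, lr = 0.1, 0.05
h, h2 = 0.0, 2.0
for _ in range(1000):
    # Inner maximization: fix h, ascend d over the auxiliary h2.
    h2 += lr * (-2.0 * (h2 - h))
    # Outer minimization: fix h2, descend R(h) + eta * d(h, h2) over h.
    h -= lr * (2.0 * (h - 1.0) + eta * (2.0 * (h2 - h) + h))

# At equilibrium the critic tracks the main model (h2 -> h) and h solves
# 2*(h - 1) + eta*h = 0, i.e. h = 2 / (2 + eta).
```

In the real algorithm the scalars become the shared representation plus the two classification heads, and the gradient steps become minibatch SGD updates, exactly as in the DANN-style strategy of [R1].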
>- This paper requires a substantial foundation of reading other papers in order to be understood.
**Response.** We acknowledge that understanding this paper may require some background knowledge. However, we have made efforts to self-contain all theoretical background within the main text or the Appendix to make it more friendly to a broader audience. We would appreciate it if the reviewer could point out more specific areas that may be unclear or less accessible to general readers. We are more than willing to improve the presentation of our paper and welcome any constructive feedback that can help enhance its clarity. We sincerely hope that requiring comprehensive background knowledge is not taken as a negative point against the acceptance of this paper. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their constructive comments and valuable feedback. In addition to addressing your individual comments separately, we have also uploaded a PDF file that contains a figure and a table. Specifically:
1. Figure: In response to Reviewer HkGQ's comment that the paper lacks a figure to illustrate the proposed method and that the implementation of the objective function is not straightforward, we have added an overview figure, which introduces the adversarial training framework of our $f$-DD-based UDA algorithm.
2. Table: Reflecting suggestions for additional experimental results: Reviewer CqSQ recommended conducting experiments on a larger dataset and with a more complex network structure, Reviewer EpWo suggested combining our $f$-DD with SOTA UDA methods and Reviewer vWvG also suggested testing $f$-DD on additional datasets and tasks.
To accommodate these suggestions, we have included additional results on the VisDA-2017 dataset in the table, which is more challenging than those used in our original paper. In particular, we replaced ResNet-50 with the Swin Transformer as the backbone, and we also attempted to apply domain mixup, a crucial component in many existing SOTA methods, into our $f$-DD algorithm. Specifically, we mixed source and target domains at the pixel level to create a third mixed domain, then jointly minimized the $f$-DD between the source and mixed domain and the $f$-DD between the target and mixed domain. This method improved performance, bringing it close to the SOTA method, e.g., PMTrans.
Due to time constraints during the rebuttal period, we trained $f$-DD for only 20 epochs, which is relatively few compared to SOTA methods. We expect that with additional effort in tuning hyperparameters, the performance of $f$-DD (with and without MixUp) can be further boosted.
We will include the figure and the new experimental results in the revision.
Pdf: /pdf/23a2a0369695c3b8e41ba0dc81d0cc3680333a75.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: In this paper, a new expected risk analysis based on f-divergence is provided for the unsupervised domain adaptation problem. Although there is prior research on expected risk analysis based on f-divergence, several issues have been pointed out, such as the fact that the variational representation of f-divergence used in these studies does not recover the Donsker-Varadhan representation of KL-divergence, and the use of the absolute value of the variational representation as a measure of domain discrepancy.
In this paper, to address these issues, the authors adopt an alternative variational representation of f-divergence and, based on this, provide an upper bound evaluation of the expected risk in the target domain, namely ``target risk $\le$ source risk + marginal domain discrepancy + joint error''. Additionally, a sample approximation version of the derived upper bound is also provided, allowing it to be estimated from the data (excluding the joint error part, as in conventional bounds).
Strengths: - The paper clearly discusses what are difficulties with the conventional DA theory using f-divergence and explains how it is solved by the proposed approach. Especially, this paper provides a solid theoretical foundation, with detailed assumptions and rigorous proofs that are well-documented in the appendix.
- Previous expected risk bounds in UDA have often been given by relatively simple inequality evaluations, following the formulation given by Ben-David et al. In contrast, a similar upper bound evaluation using the f-DD proposed in this paper requires an inequality evaluation for ``change of measure'' (as given in Lemma 4.1), and it can be seen that this is not an incremental extension of the conventional DA theory.
Weaknesses: - I don't think enough information is provided for calculating the derived upper bound from the sample. For example, $t_0$ in Lemma 4.2 and the construction of the Rashomon set in Sec 5 should be discussed in more detail.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Are no assumptions specifically placed on the hypothesis set $\mathcal{H}$?
- In Lemma 4.2, is there a way to estimate the value of $t_0$ when $t_0$ cannot be written in closed form (e.g., for the KL or Jeffreys divergences employed in the experiments)?
- The Rashomon set used in Section 5 appears to be in fact the set to be estimated (as the true expected risk of the source domain is unknown). How exactly is the Rashomon set constructed in this paper?
- In the experiments, three types of discrepancies are evaluated for the proposed method, namely KL-DD, $\chi^2$-DD and Jeffreys-DD. Then, do you have any insights on which discrepancy measure should be used for which type of problem? I think the question of which measure to use to evaluate domain discrepancy is critical not only for theorists but also for practitioners.
- Is the f-DD proposed in Definition 4.1 always 'better' than the existing f-divergence-based discrepancy (Definition 3.1)? I am wondering whether there are cases where using absolute values to define the discrepancy is an advantage.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you sincerely for your positive evaluation and constructive comments. Our responses follow.
>- Are no assumptions ...?
**Response.** Our theoretical results, except for Proposition 1 which requires $\mathcal{H}$ to be sufficiently large, do not rely on any specific assumptions about $\mathcal{H}$. The hypothesis class can be either finite or infinite. Additionally, in our localized $f$-DD, we restrict the entire hypothesis class to a smaller Rashomon set. While we do not require any assumptions on $\mathcal{H}$ here, we focus our analysis on the relevant hypotheses, as those that perform poorly on the source domain are unlikely to be found by any effective UDA algorithm.
>- In Lemma 4.2, is there a way to estimate ...?
**Response.** We have discussed the method for approximating $t_0$ (or the optimal $t$ equivalently) in Lines 339-354 (and Line 1183-1195), which is based on a quadratic approximation, and compared the results with simply setting $t_0=1$ in Table 4. Moreover, there are other methods to estimate $t_0$ as well. For example, assigning $t_0$ an initial value and considering it a trainable parameter, then applying SGD to update it during training.
We will highlight the estimation of $t_0$ in a more noticeable way in our next revision.
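To make the estimation of $t_0$ concrete, here is a small numeric sketch (our own toy example, not the paper's quadratic approximation) for the KL case, where the scaled Donsker-Varadhan-style lower bound $t\,\mathbb{E}_\nu[g]-\log\mathbb{E}_\mu[e^{t g}]$ is concave in $t$ and can therefore be maximized by a simple one-dimensional search (or SGD on $t$):

```python
import numpy as np

rng = np.random.default_rng(1)
g_mu = rng.normal(0.0, 1.0, 20000)   # critic outputs g(X) on samples from mu
g_nu = rng.normal(0.8, 1.0, 20000)   # critic outputs g(X) on samples from nu

def bound(t):
    # Scaled Donsker-Varadhan-style lower bound on KL(nu || mu):
    #   t * E_nu[g] - log E_mu[exp(t * g)]
    return t * g_nu.mean() - np.log(np.mean(np.exp(t * g_mu)))

ts = np.linspace(0.05, 3.0, 300)
vals = [bound(t) for t in ts]
t0 = ts[int(np.argmax(vals))]
# With these Gaussians and g(x) = x, the curve is roughly 0.8*t - t**2/2,
# so t0 lands near 0.8 and the bound near 0.32 = KL(N(0.8,1) || N(0,1)).
```

The grid search here plays the role of the 1-D optimization over $t$; replacing it with a few SGD steps on a trainable scalar, as mentioned above, gives the same optimum.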
>- The Rashomon set used ... How exactly is the Rashomon set constructed in this paper?
**Response.** Please notice that in our experiments, we do not need to explicitly construct the Rashomon set or determine its threshold value. The hypothesis found by UDA algorithms is always within some Rashomon set of our interest. In practical scenarios, where the true risk of the source domain is unknown, its empirical risk serves as a proxy for the true risk. As long as the UDA algorithm minimizes the classification error of the source domain, the obtained hypothesis falls within a restricted hypothesis class, i.e., the Rashomon set with some relatively low threshold value $r$, although the exact value of $r$ may be unknown as it relates to the generalization error of the source domain.
We believe the confusion might stem from the motivation behind our localized $f$-DD. We apply localization techniques to show that the convergence rate based on $f$-DD could be faster if we restrict the entire hypothesis class to the Rashomon set. This approach aligns with practical applications because, for any UDA algorithm, regardless of how the source and target domains are aligned, a good classifier must first be trained on the source domain through empirical risk minimization. Given that the hypothesis should first perform well on the source domain, many hypotheses are not of interest in theoretical analysis. By removing such redundancy, the UDA algorithm enjoys a fast-rate generalization guarantee (cf. Theorem 5.2). Consequently, the training objective in our algorithm, namely Eq. (5), does not need to include a square-root function for $f$-DD, in contrast to Eq. (4), which considers the entire hypothesis class.
Additionally, to demonstrate the advantage of localized $f$-DD, we provide a threshold learning example in Appendix D.12, where the Rashomon set can be explicitly constructed since the ground truth hypothesis is known.
>- In the experiments, three types of discrepancies are ...
**Response.** When comparing KL-DD and our weighted Jeffreys-DD, we always recommend using the weighted Jeffreys-DD. This measure is a weighted combination of KL and reverse KL (i.e. $\gamma_1 \mathrm{D}\_{\rm KL}(\hat{\mu} \|\ \hat{\nu}) + \gamma_2 \mathrm{D}\_{\rm KL}(\hat{\nu} \|\ \hat{\mu})$). In most UDA problems, the difficulty of transferring knowledge from the source domain to the target domain may not be the same as transferring from the target domain to the source domain. From this perspective, the weighted Jeffreys-DD provides more flexibility to reflect this difference during training.
When comparing KL-DD and $\chi^2$-DD, while KL is theoretically smaller than $\chi^2$, their empirical performance is quite similar. In fact, $\chi^2$-DD may even give better results on some sub-tasks, such as A$\rightarrow$W in Office-31. We believe there may be some practical optimization benefits of $\chi^2$ on certain sub-tasks (e.g., large quantity being more informative), which might not be easily explained by generalization bounds.
Overall, we believe that weighted Jeffreys-DD is a good option in practice, especially when no further prior knowledge of the dataset is available.
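For concreteness, a tiny numeric sketch (ours, on discrete distributions) of the $\gamma_1,\gamma_2$ weighting: KL is asymmetric, so the two directions generally differ and can be weighted to reflect unequal transfer difficulty.

```python
import numpy as np

def kl(p, q):
    # KL(p || q) for discrete distributions with full support.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def weighted_jeffreys(p, q, g1, g2):
    # gamma_1 * KL(p || q) + gamma_2 * KL(q || p)
    return g1 * kl(p, q) + g2 * kl(q, p)

p = [0.8, 0.15, 0.05]
q = [0.4, 0.4, 0.2]
# kl(p, q) != kl(q, p) in general; equal weights g1 = g2 recover a
# symmetric (Jeffreys-type) divergence.
```

The asymmetry between `kl(p, q)` and `kl(q, p)` is exactly the slack the weights exploit when source-to-target and target-to-source transfer are not equally hard.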
>- Is the f-DD proposed in Definition 4.1 always 'better' ...
**Response.** In the context of approximating the true $f$-divergence between two distributions, our $f$-DD is indeed always better than the existing $f$-divergence-based discrepancy (Definition 3.1). This is because the additional absolute value function transforms all negative elements to positive, leading to a larger and more pessimistic approximation. Beyond the scope of approximating the true $f$-divergence, while we are not aware of any hypothesis classes where Definition 3.1 is smaller than $f$-DD, from our current understanding, such cases are unlikely to be of practical interest. We are, however, open to discussing this further. Additionally, even without the absolute value function, the previous $f$-DAL defined in Acuna et al. (2021) is based on a weaker variational representation of $f$-divergence. As a lower bound of the $f$-divergence, our $f$-DD is always tighter than theirs, as detailed in Appendix C.4.
Furthermore, it is worth noting that while the $f$-DAL paper proposes an absolute value $f$-divergence-based discrepancy, it removes the absolute value function in their algorithm. Thus, the practical impact of the absolute value function has not been demonstrated in any existing works. | null | null | null | null | null | null |
Enhancing Semi-Supervised Learning via Representative and Diverse Sample Selection | Accept (poster) | Summary: The paper suggests a new sampling method for the labeled set of semi-supervised learning. This sampling method, termed RDSS, selects a set of examples that is both representative of the data, and diverse. The paper shows that using such a sampling function improves both freematch and flexmatch, and compares it against other sampling methods, and methods from AL and SSAL.
Strengths: The idea of the paper is good, and is well supported by theory. The experimental setup does convince me that the suggested method is better than random sampling when picking the labeled set of SSL. However, a better comparison to previous works is required, see the Weaknesses section.
Clarity: The paper is clearly written, the idea is well presented and intuitive, and the paper is easy to read and follow.
Weaknesses: Some of the claims made by the paper already appeared in previous art. Specifically, [1] showed that "traditional" AL methods do not pick bad labeled sets for SSL when compared to random sampling. [2] showed that when the labeled set is particularly small, instead of traditional AL techniques, one should focus on labeling examples that are more typical and diverse, showing that such a method can drastically improve both AL and sampling techniques for SSL. [3] presented a sampling strategy, showing that picking representative and diverse examples for the labeled set of SSL improves it by a big margin in low-budget scenarios.
The proposed manuscript does not reference or compare to any of these works. This affects the novelty, significance, and quality of the proposed method: the novelty is somewhat more limited, as many of the ideas overlap with existing works; the significance is impacted, since while the problem at hand is important, it is unclear whether the presented ideas constitute a significant advancement over existing methods; and the quality is diminished, as many comparisons are missing from the experimental setup.
Specifically, any low-budget strategy could be potentially applied to SSL as well, so those methods should be compared against as well. See for example [4], [5].
Additionally, the vice-versa argument should also hold -- if AL methods can be applied in this case, this method can be used as a method for picking labeled examples for active learning purposes and should be tested as such, as the literature in AL is much broader than the literature of picking the labeled set in SSL, which can provide a much wider context for the given work.
In addition, the framing of the paper is a bit unclear to me. I think the paper could benefit from explaining use cases in which one has the option to pick in advance the labeled set for SSL, which is not already covered by AL use cases.
-------
[1] Mittal, Sudhanshu, et al. Parting with illusions about deep active learning. (2019).
[2] Hacohen, Guy et al. Active learning on a budget: Opposite strategies suit high and low budgets. (2022).
[3] Yehuda, Ofer, et al. "Active learning through a covering lens." (2022).
[4] Mahmood, Rafid, et al. "Low budget active learning via wasserstein distance: An integer programming approach." (2021).
[5] Wen, Ziting, et al. "NTKCPL: Active Learning on Top of Self-Supervised Model by Estimating True Coverage." (2023).
Technical Quality: 2
Clarity: 3
Questions for Authors: Can you please elaborate on how the proposed method differs from the idea suggested in [2]?
How is the problem of selecting the labeled set of SSL different from the problem setting of active learning?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: not relevant
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your careful review of our work. We hope our responses can address all your concerns.
**W1. Some of the claims made by paper already appeared in previous art [1-3]. The proposed manuscript does not reference or compare to any of these works.**
**Response:** Thank you for your comment. It is indeed correct that certain aspects of our methodology and initial hypotheses draw from established findings in the literature. The inclusion of these elements is essential for building upon a solid foundation of existing research. However, our work makes significant advancements beyond these foundational elements.
Specifically, even if our method is based on the perspective of representative and diverse sampling that previous studies have proposed, as we said to Reviewer Qb3b, multiple combinations of representativeness and diversity criteria are applicable to defining sample selection optimization objectives, but one should also provide an efficient algorithm and some theoretical guarantees derive an effective methodology that has better generalizability. In our paper, we artfully use kernel methods and the properties of RKHS to derive a simple but effective approach to model the sample selection problem that fulfils the aforementioned requirements for a generalizable method. In the experimental part, we have compared our method with more recent methods (e.g., USL [6] and ActiveFT [7]) and have achieved SOTA performance (refer to Table 1).
**W2. Any low-budget strategy could be potentially applied to SSL as well, so those methods should be compared against as well. See for example [4,5].**
**Response:** We follow your suggestions and compare our proposed RDSS with [4] and [5] by integrating them into FlexMatch and FreeMatch on CIFAR-10 with 40 labels. The experimental results show that RDSS outperforms the two methods.
| Methods | FlexMatch | FreeMatch |
| -------------- | --------- | --------- |
| Wasserstein [4] | 91.64 | 92.72 |
| NTKCPL [5] | 87.47 | 91.30 |
| RDSS (Ours) | **94.69** | **95.05** |
**W3. If AL methods can be applied in this case, this method can be used as a method for picking labeled examples for active learning purposes and should be tested as such.**
**Response:** Thank you for your suggestions. In fact, we have already conducted comparisons with several SOTA AL methods, as demonstrated in Table 2 of the original manuscript. To ensure a more comprehensive evaluation, we have further supplemented our study with additional experiments comparing our approach against various AL methods. The results of these extended experiments can be found in our response to Reviewer Qb3b's Weakness 3.
**W4. The framing of the paper is a bit unclear to me. I think the paper could benefit from explaining use cases in which one has the option to pick in advance the labeled set for SSL, which is not already covered by AL use cases.**
**Response:** Thank you for your insightful feedback regarding the framing of the paper. We understand that the distinction between the use cases of SSL and AL may not have been sufficiently clear, and we appreciate the opportunity to clarify this aspect. In this work, we focus on scenarios where one can select a labelled set for SSL in advance, a situation that is distinct from traditional AL use cases. A particularly relevant example is in the context of medical trials. In such trials, labelling data (e.g., annotating medical images or patient records) often involves significant time and resources, typically requiring expert review. If the labelled data in later rounds of the trial must be determined based on the outcomes of previous rounds, as in a typical AL setup, the entire project timeline could be significantly extended due to the sequential nature of labelling and analysis. We will revise the manuscript to explicitly differentiate these use cases.
**Q1. Can you please elaborate on how the proposed method differs from the idea suggested in [2]?**
**Response:** Thank you for your question. Both works start from the idea of representative and diverse sampling, but each method has its own criterion for representativeness and diversity; the same holds when comparing our method to USL [6] and ActiveFT [7]. Among all these works, however, our method offers not only a theoretical guarantee on the generalization error but also a fast and effective optimization algorithm for practical sample selection tasks. Moreover, our method can be applied to research problems in the statistical subsampling literature (e.g., [8]), which shows its potential in interdisciplinary scenarios or problems.
**Q2. How is the problem of selecting the labeled set of SSL different from the problem setting of active learning?**
**Response:** We understand that this is a critical issue. In SSL, the selection of the labelled set is typically predetermined or randomly sampled from an available pool of labelled data. In contrast, AL is an iterative process where the model actively queries an oracle (e.g., a human annotator) to label new samples that are expected to be the most informative for improving the model.
References:
[1] Parting with illusions about deep active learning. arXiv preprint, 2019.
[2] Active learning on a budget: Opposite strategies suit high and low budgets. ICML, 2022.
[3] Active learning through a covering lens. NeurIPS, 2022.
[4] Low budget active learning via wasserstein distance: An integer programming approach. ICLR, 2022.
[5] NTKCPL: Active Learning on Top of Self-Supervised Model by Estimating True Coverage. arXiv preprint, 2023.
[6] Unsupervised selective labeling for more effective semi-supervised learning. ECCV, 2022.
[7] Active Finetuning: Exploiting Annotation Budget in the Pretraining-Finetuning Paradigm. CVPR, 2023.
[8] Optimal Subsampling via Predictive Inference. Journal of the American Statistical Association, 2023.
---
Rebuttal Comment 1.1:
Title: answer to the rebuttal
Comment: I appreciate the authors' detailed rebuttal and the efforts made to address the concerns raised.
However, my major concern regarding the comparison to other works in the field remains. While I acknowledge the distinction between active learning (AL) and semi-supervised learning (SSL), I still believe that when the labeled set in SSL is predetermined, it can be viewed as a specific instance of AL—where the initial budget is zero, and the labeled set chosen for SSL serves as the active set. Given the extensive body of work addressing these settings in AL, particularly in low-budget AL scenarios, I believe the paper should include a more thorough comparison to such works. This would better position the paper within the current literature and clarify how it advances the state of the art, if at all.
Some of these concerns have been alleviated by the authors' inclusion of an initial comparison to low-budget AL methods in the rebuttal. However, I still find the current framing of the paper somewhat misleading. A peer-reviewed revision would be necessary to incorporate the required changes effectively.
In light of the partial resolution of my concerns, I am adjusting my score to 4. Nevertheless, I remain inclined to recommend rejection of the current revision.
---
Reply to Comment 1.1.1:
Title: Comparison with sampling methods used in low-budget AL scenarios
Comment: Thank you very much for taking the time to review our rebuttal. We acknowledge your concern regarding the comparison with other sampling methods used in low-budget AL scenarios [1-3]. While these methods can indeed achieve satisfactory performance in low-budget AL scenarios, they each have inherent limitations that prevent them from effectively adapting to SSL. Specifically, [1] and [3] involve a clustering step before sampling, which makes the sampling results heavily dependent on the quality of the clustering. Consequently, these methods require task-specific adjustments to the clustering algorithm, limiting their general applicability and making them unsuitable for our scenario. In contrast, [2] relies on multiple iterative rounds to enhance the model’s performance for more accurate selection of labelled samples. This iterative nature poses challenges when these methods are employed to determine labelled samples in a single pass, leading to suboptimal outcomes.
We have incorporated the above methods into FlexMatch and FreeMatch under four low-budget settings: CIFAR-10 with 40 labels, CIFAR-100 with 400 labels, SVHN with 250 labels, and STL-10 with 40 labels, and present the results in the table below. As observed, our method consistently outperforms across all settings, underscoring the superiority of our approach. Furthermore, in the theoretical section, we provide rigorous guarantees to demonstrate that our method is not only efficient but also generalizable.
| Dataset | CIFAR-10 (40) | CIFAR-100 (400) | SVHN (250) | STL-10 (40) |
| ------------- | --- | --- | --- | --- |
| *Applied to FlexMatch* | | | | |
| TypiClust | 91.58 | 46.57 | 90.36 | 74.44 |
| Wasserstein | 91.64 | 46.76 | 90.22 | 73.45 |
| NTKCPL | 87.47 | 44.48 | 91.13 | 73.69 |
| RDSS (Ours) | **94.69** | **48.12** | **91.70** | **77.96** |
| *Applied to FreeMatch* | | | | |
| TypiClust | 92.38 | 47.26 | 93.09 | 77.30 |
| Wasserstein | 92.72 | 46.53 | 92.12 | 74.18 |
| NTKCPL | 91.30 | 46.17 | 91.73 | 78.06 |
| RDSS (Ours) | **95.05** | **48.41** | **94.54** | **81.90** |
[1] Active learning on a budget: Opposite strategies suit high and low budgets. ICML, 2022.
[2] Low budget active learning via wasserstein distance: An integer programming approach. ICLR, 2022.
[3] NTKCPL: Active Learning on Top of Self-Supervised Model by Estimating True Coverage. arXiv preprint, 2023. | Summary: This paper proposes a Representative and Diverse Sample Selection approach (RDSS) that utilizes a modified Frank-Wolfe algorithm to minimize a novel α-Maximum Mean Discrepancy (α-MMD) criterion, aiming to select a representative and diverse subset from unlabeled data for annotation. Experimental results demonstrate that RDSS consistently improves the performance of several popular semi-supervised learning frameworks and outperforms state-of-the-art sample selection methods used in Active Learning (AL) and Semi-Supervised Active Learning (SSAL), even under constrained annotation budgets.
Strengths: 1. The paper is well-written, logically organized, and smoothly expressed.
2. The presented results demonstrate the effectiveness of the proposed approach.
Weaknesses: 1. The authors conducted tests on two baseline methods (FlexMatch [58] and FreeMatch [50]), but neither represents the current state-of-the-art.
2. Some details of the experiments are unclear, such as in Table 3.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The definition and usage of variable X in the article are inconsistent.
2. Is Y in the section starting from line 141 representing the point as X? If so, I suggest using notations like Xi, Xj for clarity.
3. In Chapter 6, the determination of the kernel and parameters seems arbitrary. Could you provide some proofs, theories, or experiments to justify them?
4. The SOTA models selected by the author in the experiments are somewhat outdated. It is recommended to include some more updated methods.
5. In the experiment section, the author mentions the limitations of stratified sampling. What do these limitations refer to? Why was this method excluded? From the results, it seems that stratified sampling outperforms the proposed method in several settings.
6. In Table 2, what sampling method did the other comparison methods adopt?
7. What dataset does Table 3 represent? What is the data distribution?
8. The author should objectively evaluate their method, including its limitations.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: as Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your insightful comments. Since Weakness 1 and Question 4, as well as Weakness 2 and Question 7, address the same issue, we have consolidated them accordingly.
**Q1. The definition and usage of variable X in the article are inconsistent.**
**Response:** Thank you for your careful review of our work. In this article, $\mathbf{X}$ is used to describe datasets, $\mathcal{X}$ denotes the feature space, and $X$ denotes random variables. Could you please point out the inconsistent part?
**Q2. Is Y in the section starting from line 141 representing the point as X? If so, I suggest using notations like Xi, Xj for clarity.**
**Response:** We understand your concern. From line 141, $y$ and $Y$ are used rigorously to explain the kernel trick and the definition of MMD, as is common in the statistics literature. Nevertheless, we will consider your suggestion for the modification.
**Q3. In Chapter 6, the determination of the kernel and parameters seems arbitrary.**
**Response:** Here is the determination of the kernel and parameters.
*Kernel:* Our choice of kernel follows the kernel used in the two-sample-test study of [1]. In fact, according to Remark 1 (line 170), any positive-definite and characteristic kernel is applicable to our method. Among all kernels, Gaussian kernels are the most popular in machine learning, so we take them as our suggestion.
*Bandwidth parameters:* The choice is also based on [1]. So far there is no theoretical guidance for us to determine the bandwidth parameters of Gaussian kernels in measuring similarity or representativeness. Some methods optimize this parameter in the learning process, but this idea is not applicable in RDSS.
$\alpha$: According to lines 246-250, we derived a finite-sample-error bound for MMD, which measures the representativeness, leading to our suggestion of the range of $\alpha$. In the experiments, the choice of $\alpha=1-1/\sqrt{m}$ significantly outperforms other choices.
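As a rough illustration of the quantities discussed above (a sketch only, not the paper's exact $\alpha$-MMD objective or its GKHR optimizer), the empirical MMD between a dataset and a candidate subset under a Gaussian kernel, together with the suggested $\alpha = 1 - 1/\sqrt{m}$ schedule, can be written as:

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd_squared(X, S, bandwidth=1.0):
    """Biased (V-statistic) empirical estimate of MMD^2 between samples X and subset S.
    Equals the squared RKHS distance between the two mean embeddings, so it is >= 0."""
    kxx = gaussian_kernel(X, X, bandwidth).mean()
    kss = gaussian_kernel(S, S, bandwidth).mean()
    kxs = gaussian_kernel(X, S, bandwidth).mean()
    return kxx + kss - 2 * kxs

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # toy unlabeled pool
S = X[:20]                      # a candidate subset of size m = 20
m = len(S)
alpha = 1 - 1 / np.sqrt(m)      # suggested schedule for the representativeness weight
print(mmd_squared(X, S), alpha)
```

A smaller MMD means the subset's mean embedding is closer to that of the full pool (more representative); $\alpha$ trades this term off against the diversity term in the full criterion.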
**Q4 & W1. The SOTA models selected by the author in the experiments are somewhat outdated.**
**Response:** Thank you for your suggestion. Since the sampling process and the SSL process are relatively independent, comparing the sampling methods across different SSL models may have a limited impact. However, we have still compared different methods under two additional SOTA SSL approaches, i.e., ReFixMatch [2] and SequenceMatch [3]. The experimental results in the table below show that RDSS achieves the highest accuracy.
| Dataset | CIFAR-10 | | | CIFAR-100 | | | SVHN | | STL-10 | |
| -------------------- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Budget | 40 | 250 | 4000 | 400 | 2500 | 10000| 250 | 1000| 40 | 250 |
| *Applied to ReFixMatch* | | | | | | | | | | |
| ActiveFT | 75.62 | 93.56 | 95.64 | 26.86 | 56.97 | 71.65 | 96.17 | 96.97 | 62.53 | 87.53 |
| USL | 94.34 | 95.12 | 95.77 | 48.23 | 66.61 | 72.33 | 96.16 | 97.17 | 75.38 | 91.24 |
| RDSS (Ours) | **95.18** | **95.54** | **96.22** | **49.32** | **67.46** | **73.15** | **96.75** | **97.52** | **78.78** | **92.25** |
| *Applied to SequenceMatch* | | | | | | | | | | |
| ActiveFT | 78.91 | 95.02 | 95.11 | 31.94 | 55.49 | 70.38 | 93.65 | 94.20 | 69.44 | 89.44 |
| USL | 94.12 | 95.10 | 95.85 | 49.34 | 67.04 | 73.68 | 95.22 | 96.04 | 76.63 | 90.13 |
| RDSS (Ours) | **95.33** | **95.26** | **96.17** | **50.76** | **69.96** | **74.83** | **96.86** | **97.91** | **81.41** | **92.86** |
**Q5. What are the limitations of stratified sampling? Why was this method excluded?**
**Response:** Stratified sampling requires prior knowledge of sample categories, and then random sampling is performed within each category. However, in many real-world scenarios, we do not have prior knowledge of sample categories, making stratified sampling inapplicable. This is its limitation. We have a more comprehensive description in the manuscript (Line 25 and Line 81).
**Q6. In Table 2, what sampling method did the other comparison methods adopt?**
**Response:** We are sorry for overlooking this. The sampling methods employed by the AL approaches in Table 2 are as follows:
*CoreSet* uses a greedy algorithm known as $k$-Center-Greedy to select a subset from the unlabeled dataset;
*VAAL* consists of a VAE and an adversarial network. The VAE tries to trick the adversarial network into predicting that all data points are from the labelled pool, while the adversarial network learns to discriminate between dissimilarities in the latent space. The samples predicted as "unlabeled" by the adversarial network are selected for labelling;
*LearnLoss* designs a loss prediction module for a target network, which predicts the target losses of unlabeled samples. The samples with the top-$K$ predicted losses are selected to be labelled;
*MCDAL* utilizes two auxiliary classification layers to select samples with the largest prediction discrepancy between them as those requiring labelling.
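For reference, the $k$-Center-Greedy selection used by CoreSet (the first method above) can be sketched as follows — a minimal illustration on raw feature vectors, whereas the actual method operates on learned embeddings:

```python
import numpy as np

def k_center_greedy(features, budget, seed=0):
    """Greedily pick `budget` points: at each step, take the point farthest
    from the current selection (a 2-approximation to the k-center problem)."""
    rng = np.random.default_rng(seed)
    n = len(features)
    selected = [int(rng.integers(n))]
    # distance from every point to its nearest selected center
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    while len(selected) < budget:
        nxt = int(dists.argmax())  # farthest point from the current selection
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return selected

feats = np.random.default_rng(1).normal(size=(100, 8))
picked = k_center_greedy(feats, budget=10)
print(picked)
```

Each new center covers the currently worst-covered point, so the selection spreads across the feature space — diversity without an explicit representativeness term, which is the key contrast with MMD-based selection.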
**Q7 & W2. What dataset does Table 3 represent?**
**Response:** Thank you for your reminder. The dataset used in Table 3 is CIFAR-10. We will refine these details in the next version of our manuscript.
**Q8. The author should objectively evaluate their method, including its limitations.**
**Response:** Thank you for your insightful suggestion. We have discussed the limitations of our method in Appendix E.
References:
[1] A kernel two-sample test. The Journal of Machine Learning Research, 2012.
[2] Boosting Semi-Supervised Learning by bridging high and low-confidence predictions. ICCV, 2023.
[3] SequenceMatch: Revisiting the design of weak-strong augmentations for Semi-supervised learning. WACV, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I have no questions.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our rebuttal. We appreciate your acknowledgement and are glad to have addressed your concerns. Given the significance of our findings and the rigorous methodology we employed, we believe our work makes a valuable contribution to the SSL field. We would be grateful if you could kindly reconsider your evaluation in light of the clarifications and contributions highlighted in our rebuttal. | Summary: This paper proposes a new sample selection method, RDSS, for the SSL task. RDSS considers both the representativeness and diversity of the selected sample and achieves state-of-the-art performance. This is achieved by the proposed α-MMD criterion and an efficient optimization algorithm GKHR.
Strengths: 1. RDSS considers both representativeness and diversity of samples, which is a convincing strategy, and the experimental results also demonstrate the effectiveness of this motivation.
2. Sufficient theoretical analysis and experimental comparisons are conducted to demonstrate the effectiveness of the proposed method.
Weaknesses: I would like to see images of the actual selected samples and visualizations of the feature distribution to demonstrate that RDSS indeed balances the representativeness and diversity.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weakness.
Flag For Ethics Review: ['No ethics review needed.', 'Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your insightful comments, which have greatly contributed to improving the quality of our paper. We hope our responses can address all your concerns.
**W1. I would like to see images of the actual selected samples and visualizations of the feature distribution to demonstrate that RDSS indeed balances the representativeness and diversity.**
**Response:** We follow your suggestion and upload a PDF that contains the actual samples selected using different sampling methods in the "global" response. It can be observed that in the samples selected by our method, the variation in the number of samples across different categories is minimal. In contrast, certain methods fail to select representative samples for every category. For instance, $k$-means does not choose any samples from the *horse* category. Additionally, other methods tend to select an excessive number of similar samples within a single category, thereby neglecting diversity. For example, the ActiveFT approach selects too many samples from the *airplane* category.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's rebuttal and the efforts made to address my concern. I have noticed the new visualization results. Additionally, I would like to ask whether the feature distribution and corresponding sample images in Fig 1 are the results of actual experiments or merely conceptual illustrations. If they are just conceptual illustrations, providing the distribution from actual experiments would be more convincing.
---
Reply to Comment 1.1.1:
Title: Thank you for your comments!
Comment: Thank you for your thorough review. We apologize for any confusion caused. The images presented in Figure 1 depict an actual sampling result from our method on the CIFAR-10 dataset, where we set the sample size to 40, rather than serving as a conceptual illustration. More detailed experimental settings can be found in the RDSS.py file within the supplementary materials we have uploaded. | Summary: Choice of the labeled set in the semi supervised learning is critical for the final performance of the model. This problem can also be looked as AL with SSL, or single shot AL with SSL (in other words similar to experimental design). This works provides a way to select the seed set which is representative, as well as diverse. The problem is reduced to minimizing MMD and similarity score of the selected examples. The paper finally proposes a greedy algorithm, and compare the proposed method against various subset selection baselines, and AL.
Strengths: I like the motivation of the problem and a neat theoretical derivation of the objective, and the provided theoretical analysis. Paper was also easy to follow and experiments are compelling.
Weaknesses: - From a purely combinatorial point of view, I think that the final objective is supermodular in nature. Given the vast literature on submodular/supermodular functions, is it not possible to get an algorithm purely from that standpoint? If so, how different would it be from the proposed one?
- Can one derive things such as leverage scores to detect the outlier-ness of a given point (or any other score)? If so, then couldn't one use something such as diversity - outlier score (or add a score that models likelihood) , with diversity such as Facility location function, and optimize the final objective using greedy?
- In experiments I believe one of the strong baselines such as facility location function is missing. Facility Location has a rich history and have been used in several instances in Active Learning ([1, 2, 3, 4]). I believe authors can add a small discussion on FL and add that baseline. Furthermore, other diversity based approaches have also been considered in the past [5]
- Now a days a lot of focus is also for doing finetuning of existing CLIP models [3]. I'd appreciate one experiment on fine-tuning the CLIP models using the proposed method.
References
- [1] Submodularity in machine learning and artificial intelligence
- [2] An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models
- [3] LabelBench: A Comprehensive Framework for Benchmarking Adaptive Label-Efficient Learning
- [4] Deep Submodular Peripteral networks
- [5] GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning
Technical Quality: 3
Clarity: 3
Questions for Authors: Refer to the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive comments, which have definitely helped us enhance the paper and highlight its contributions in a better way.
**W1. Given the vast literature on submodular/supermodular functions, is it not possible to get an algorithm purely from that standpoint? If so, how different would it be from the proposed one?**
**Response:** Thank you for your question. In our opinion, it is possible to derive a sample selection algorithm from submodular/supermodular functions. This idea is inspiring to us, and we will try to explore its realizability. However, in our paper, if we applied submodular/supermodular functions to the optimization problem, it would become purely combinatorial and could lose its advantage in low computational complexity, which we achieve by exploiting the convexity of the weighted MMD function (see Appendix A.1).
**W2. Can one derive things such as leverage scores to detect the outlier-ness of a given point (or any other score)? If so, then couldn't one use something such as diversity - outlier score (or add a score that models likelihood), with diversity such as Facility location function, and optimize the final objective using greedy?**
**Response:** Thank you for your question. Multiple combinations of representativeness and diversity criteria can be used to define sample selection optimization objectives. However, to derive an effective methodology, one should also provide an efficient algorithm and theoretical guarantees, which requires a detailed study of how the optimization objective is defined.
From this perspective, your idea can be effective if the final objective can be modelled within a submodular framework. Intuitively, this is achievable if we study in detail the properties of the facility location function, the diversity minus outlier score, and the likelihood model. In our paper, we use kernel methods and the RKHS to model the problem, deriving GKHR as an efficient algorithm and generalization/finite-sample-error bounds as the theoretical guarantee. This combination of representativeness (MMD) and diversity (kernel function) criteria provides a simple but effective way to model the sample selection problem, so we do not need a greedy algorithm. Nevertheless, we will try to study this problem from your standpoint in future work.
**W3. Comparison with facility location function [1-4] and diversity based approaches [5] is missing.**
**Response:** We follow your suggestions and compare two facility location-based methods from [2], namely the k-center and the conventional Facility Location (FL) method, as well as the GLISTER method [5]. Notably, the k-center method is equivalent to the Coreset method [6], which we have already benchmarked in our original manuscript (refer to line 309, Table 2).
The sampling process in GLISTER is tightly coupled with the downstream classification task, making it impossible to pre-determine all labelled samples, which renders it unsuitable for SSL. Therefore, we conduct comparisons under the AL framework, consistent with the experimental setup described in Appendix D.5 of the original manuscript. The results are presented in the table below, demonstrating that RDSS consistently outperforms the other methods, with particularly significant improvements observed on CIFAR-100 using 7,500 labels. We will incorporate these experimental results into a subsequent version of the manuscript.
| Dataset | CIFAR-10 | | CIFAR-100 | |
| ------------- | --- | --- | --- | --- |
| Budget | 7500 | 10000 | 7500 | 10000 |
| k-center | 85.46 | 87.56 | 47.17 | 53.06 |
| FL | 86.03 | 89.21 | 47.87 | 55.45 |
| GLISTER | 86.64 | 89.33 | 48.74 | 55.39 |
| RDSS (Ours) | **87.18** | **89.77** | **50.13** | **56.04** |
**W4. I'd appreciate one experiment on fine-tuning the CLIP models [3] using the proposed method.**
**Response:** This is a good point. We fine-tune the CLIP model via a selection-via-proxy approach [3]. We compare RDSS with the random and k-center sampling methods on CIFAR-10/100 with 10,000 labelled instances when applied to FlexMatch and FreeMatch. The results are shown in the table below, from which we find that RDSS achieves the highest accuracy, outperforming the other two sampling methods. We will incorporate these experimental results into a subsequent version of the manuscript.
| Dataset | CIFAR-10 | CIFAR-100 |
| ------------- | --- | --- |
| *Applied to FlexMatch* | | |
| Random | 96.46 | 83.37 |
| k-center | 96.51 | 85.48 |
| RDSS (Ours) | **97.83** | **86.75** |
| *Applied to FreeMatch* | | |
| Random | 96.58 | 83.29 |
| k-center | 96.75 | 86.14 |
| RDSS (Ours) | **98.02** | **86.96** |
References:
[1] Submodularity in machine learning and artificial intelligence. arXiv preprint, 2022.
[2] An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models. arXiv preprint, 2024.
[3] LabelBench: A Comprehensive Framework for Benchmarking Adaptive Label-Efficient Learning. Journal of Data-centric Machine Learning Research, 2024.
[4] Deep Submodular Peripteral networks. arXiv preprint, 2024.
[5] GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning. AAAI, 2021.
[6] Active learning for convolutional neural networks: A core-set approach. ICLR, 2018.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal!
Comment: I thank the authors for the rebuttal. GLISTER seems to be doing some bi-level optimization, so I am not sure how it is the same as solving the FL optimization, but what I was hoping for is something similar to what is done in, say, [2] (same citations as above).
I also thank the authors for adding the results for fine-tuning the CLIP model. Ideally I'd appreciate error bars, as well as going beyond CIFAR-10/100; since it is fine-tuning the CLIP model, one can get decent ImageNet performance too (as in LabelBench).
I will retain my score, and hope to see the discussion and new results mentioned here in the next version of the manuscript.
---
Reply to Comment 1.1.1:
Title: Thank you for your comments!
Comment: Thank you very much for taking the time to review our rebuttal. We are currently conducting comparative experiments similar to the work in [2] (same citations as above) and fine-tuning the CLIP model on the ImageNet dataset. Due to time constraints and the large volume of data, we were unable to complete these experiments before the end of the discussion phase. However, we will include these experimental results in the next version of the manuscript. | Rebuttal 1:
Rebuttal: Here is a visualization of the sampling results for Reviewer GzC9.
Pdf: /pdf/d3630f75c4a8a7bb8107fcbcb2b336aac5f52636.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Implicit Bias of Adam on Separable Data | Accept (poster) | Summary: The main focus of this paper is on the implicit bias of Adam for a single layer linear model which performs binary classification on separable data. In particular, assuming a zero stability constant $\epsilon$, this paper reveals that Adam finds the solution that achieves maximum-$\ell_\infty$-margin and characterizes the convergence rate for different classes of learning rate. This implicit bias is different from the $\ell_2$-norm minimization solution obtained by previous work which does not assume $\epsilon = 0$.
Strengths: - This paper is clearly written and well-organized. It is easy and clear to follow the argument and motivation of this paper, e.g., the proof sketch makes it easy to follow the way how the theoretical conclusion is developed. In addition, to me, the introduction of the related works are comprehensive and clear. It also clearly summarizes the difference between this paper and related works.
- The settings and results of this paper are new compared to previous works, i.e., previous works showed an $\ell_2$-norm solution implicit bias of Adam on separable data while this paper reveals an $\ell_{\infty}$-norm implicit bias when the stability constant $\epsilon$ is zero.
Weaknesses: Despite the novelty of the theoretical claims, I still have several concerns, which I will discuss in the following.
1. Removing the stability constant $\epsilon$ means that the approach of this paper fails to characterize its influence, which, though small, still has a non-negligible effect; e.g., [1] observed that Adam with an $\epsilon$ that is too small does not even converge in certain circumstances. Treating $\epsilon$ as 0 seems a bit rough to me.
In addition, [2] showed that Adam minimizes the interpolation norm of gradients that depends on the magnitudes of various hyperparameters including the stability constant $\epsilon$ (although [2] did not specify the types of loss functions and model architectures). [1] claimed that Adam with nonzero $\epsilon$ converges to the $\ell_2$-norm solution, which is also verified by extensive experiments. As a comparison, this paper showed that both Adam with $\epsilon=0$ and with a non-negligible $\epsilon$ do not converge to the aforementioned solutions (line 210). In this sense, it seems that the conclusion reached by this paper contradicts those derived in [1, 2]. Therefore, in my view, it would be better to start with a non-zero $\epsilon$ and let the case with $\epsilon=0$ be a special case, to better capture the effect of $\epsilon$ on the implicit bias.
2. This paper only considers a simple setting: the model is only a one-layer linear model and there is no stochastic sampling noise which is typically necessary in practice. As a comparison, authors of [1] have already studied Adam on separable data for homogeneous models, which can cover the single layer model of the current work as a special case. Thus excluding the stochastic sampling noise in the current work is kind of unsatisfying to me since the model is already a simple one. In addition, I think that the authors of the current work should at least repeat the experiments conducted in [1] (such as those for homogeneous neural networks) to further support their theoretical claims, especially considering that the authors claimed in line 210 that their results are more accurate than those of [1].
**Reference**
[1] Wang et al. The implicit bias for adaptive optimization algorithms on homogeneous neural networks.
[2] Cattaneo et al. On the Implicit Bias of Adam.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could the authors explain the contradiction and connection with previous works? Is it possible to start with a non-zero $\epsilon$ and let $\epsilon=0$ be a special case?
2. How will adding stochastic sampling noise affect the implicit bias?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I do not find a separate limitation section in the main part. In my view, removing the stability constant is a bit rough. This makes the approach presented in this paper fail to capture how the implicit bias of Adam changes for different values of stability constant.
The societal impact is not applicable to this work as it focuses on theoretical parts of Adam.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your support, and address your questions as follows.
>**Q1**:
Adam without $\epsilon$ sometimes does not converge. Contradiction with [1, 2]? Study a non-zero $\epsilon$ and let $\epsilon=0$ be a special case?
**A1**:
Our goal is to study the implicit bias of Adam when the stability constant $\epsilon$ is negligible. The differences between our result and [1,2,3] that consider a non-negligible $\epsilon$ do not lead to any contradiction. Instead, these differences demonstrate that our work indeed provides novel and complementary insights into Adam. We address your detailed comments as follows:
- We emphasize that Adam without $\epsilon$ provably converges under our setting. Proving convergence even without $\epsilon$ is a contribution of our work.
- Our result does not contradict [2]. By treating the correction term in the continuous-time approximation of Adam as a penalty, [2] gives an informal summary of the implicit bias of Adam without specifying the loss or model. In comparison, our work focuses on a specific setting, and provides a formal and rigorous analysis of the implicit bias of Adam with concrete convergence rates.
- Our theory does not contradict the theories in [1,3]. [1,3] prove that Adam with a non-negligible $\epsilon$ has an implicit bias towards the maximum $\ell_2$-margin. The intuition is that, after sufficiently many iterations, $\nabla \mathcal{R}(w_t)$ and $v_t$ will be very close to zero (much smaller than $\epsilon$), and $\frac{m_t}{\sqrt{v_t} + \epsilon} \approx \frac{1}{\epsilon}m_t$, indicating that Adam with a non-negligible $\epsilon$ will eventually behave similarly to GD with momentum. In comparison, we focus on the setting where $\epsilon$ is negligible, and demonstrate that in this case, Adam has a distinct implicit bias towards the maximum $\ell_\infty$-margin. Therefore, our theoretical results do not contradict those in [1,3]. Instead, our paper and [1,3] together provide a more comprehensive understanding of the implicit bias of Adam.
- Our experiments do not contradict the experiments in [1,3]. In our simulations, we run different algorithms for $10^6$ iterations, and Adam can typically reach training losses below $10^{-8}$, demonstrating that it is reasonable to stop the training there. Our experiments show that within $10^6$ iterations, Adam with $\epsilon = 10^{-8}$ is indeed approaching the maximum $\ell_\infty$-margin. In comparison, in Figure 1 in [3], Adam and other algorithms are run for over $10^{14}$ iterations, where Adam with $\epsilon$ can eventually approach the maximum $\ell_2$-margin solution. Therefore, our paper focuses more on the practical stopping time, while [3] focuses more on the behavior of Adam after an extensively long training. Hence, our experimental results are complementary to [3], and there is no contradiction.
- “Start with a non-zero $\epsilon$ and let $\epsilon=0$ be a special case” could be an interesting future direction, and we will discuss it in the revision. However, given that existing works [1,2,3] have studied the case with a non-zero $\epsilon$, we believe that our current setting with $\epsilon = 0$ helps us demonstrate our claim in the cleanest setting.
>**Q2**:
Linear model + full-batch Adam is simple compared with [1]. Consider stochastic sampling noise or homogeneous models?
**A2**:
Thanks for your comment. Again, our work aims to study a relatively understudied scenario where $\epsilon$ in Adam is negligible. This setting is complementary to the studies in [1,3], and therefore it is reasonable for us to start from the most classic setting to study implicit bias, which is solving linear logistic regression with full-batch algorithms [3,6].
To the best of our knowledge, existing works on the implicit bias of Adam or AdamW such as [1,3,6] mainly focus on the full batch setting. Extensions to stochastic Adam can possibly be done following the analysis in [5], where the implicit bias of full batch GD is extended to SGD. However, such an extension may be more challenging for Adam due to its complex form. This can be a future direction.
Extensions to homogeneous networks are also an important future direction. However, since our goal is to study the implicit bias of Adam with a negligible $\epsilon$, establishing concrete results in linear logistic regression is a reasonable starting point, and this clean and classic setting serves our purpose well in demonstrating the difference in the implicit bias of Adam when $\epsilon$ is negligible.
>**Q3**:
Line 210 claims the results are more accurate than [1], so add experiments on homogeneous networks?
**A3**:
We would like to clarify that around line 210, we do not compare our result with [1]. Instead, we are comparing with [3], which studies linear classification. We will revise our comments and highlight more on our purpose, which is to study a setting that is complementary to [1,3].
We believe that experiments on homogeneous models are beyond the scope of our paper, as we do not make any claims about homogeneous models. However, we have added some preliminary experimental results to the PDF rebuttal page.
[1] Wang, B., Meng, Q., Chen, W. and Liu, T.-Y. (2021). The implicit bias for adaptive optimization algorithms on homogeneous neural networks. ICML.
[2] Cattaneo, M.D., Klusowski, J.M. and Shigida, B. (2024). On the Implicit Bias of Adam. ICML.
[3] Wang, B., Meng, Q., Zhang, H., Sun, R., Chen, W., Ma, Z.-M. and Liu, T.-Y. (2022).
Does momentum change the implicit regularization on separable data? NeurIPS.
[4] Xie, S. and Li, Z. (2024). Implicit Bias of AdamW: $\ell_\infty $-Norm Constrained Optimization. ICML.
[5] Nacson, M. S., Srebro, N. and Soudry, D. (2019). Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate. AISTATS.
[6] Soudry, D., Hoffer, E., Nacson, M. S., Gunasekar, S. and Srebro, N. (2018). The implicit
bias of gradient descent on separable data. JMLR.
---
Rebuttal 2:
Title: Reply to rebuttals
Comment: I thank the authors for the detailed response.
I would like to clarify that, as I pointed out in my review, I understand that the current work considers Adam without the stability constant. My point about the contradiction between the current paper and previous works lies in how letting $\epsilon \to 0$ changes the implicit bias from the $\ell_2$ solution to the drastically different $\ell_\infty$-margin solution, e.g., is the transition smooth or abrupt? There exists a gap.
The rest of the rebuttals addressed my other concerns.
---
Rebuttal Comment 2.1:
Comment: Thanks for your prompt reply and for clarifying your question. We confirm that when $\epsilon\to 0^+$, the transition of the implicit bias is indeed abrupt. Consider Adam with stability constant $\epsilon$, and denote by $w_{t, \epsilon}$ its iterate at the $t$-th iteration. Then, the implicit bias of Adam with a fixed value of $\epsilon$ can be characterized by the limits:
$\lim_{t\to +\infty} \min_{i\in[n]} \frac{\langle y_i\cdot x_i, w_{t, \epsilon}\rangle}{\|w_{t, \epsilon}\|_2}$ and $\lim_{t\to +\infty} \min_{i\in[n]} \frac{\langle y_i\cdot x_i, w_{t, \epsilon}\rangle}{\|w_{t, \epsilon}\|_\infty}$.
Mathematically, the abrupt transition of implicit bias when $\epsilon \to 0^+$ is then due to the fact that the limit $t\to+\infty$ and the limit $\epsilon\to0^+$ are not interchangeable:
$\lim_{\epsilon\to 0^+}\lim_{t\to +\infty} \min_{i\in[n]} \frac{\langle y_i\cdot x_i, w_{t, \epsilon}\rangle}{\|w_{t, \epsilon}\|_2} \neq \lim_{t\to +\infty}\lim_{\epsilon\to 0^+} \min_{i\in[n]} \frac{\langle y_i\cdot x_i, w_{t, \epsilon}\rangle}{\|w_{t, \epsilon}\|_2}$, and
$\lim_{\epsilon\to 0^+}\lim_{t\to +\infty} \min_{i\in[n]} \frac{\langle y_i\cdot x_i, w_{t, \epsilon}\rangle}{\|w_{t, \epsilon}\|_\infty} \neq \lim_{t\to +\infty}\lim_{\epsilon\to 0^+} \min_{i\in[n]} \frac{\langle y_i\cdot x_i, w_{t, \epsilon}\rangle}{\|w_{t, \epsilon}\|_\infty}$.
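As a toy numerical illustration of why iterated limits can disagree (this example is our own and is unrelated to the Adam dynamics), consider $f(t, \epsilon) = \frac{t\epsilon}{1 + t\epsilon}$: the inner limit in $t$ gives $1$ for every fixed $\epsilon > 0$, while the inner limit in $\epsilon$ gives $0$ for every fixed $t$.

```python
def f(t, eps):
    """Toy function whose iterated limits disagree:
    f -> 1 as t -> infinity for any fixed eps > 0,
    but f -> 0 as eps -> 0+ for any fixed t."""
    return t * eps / (1.0 + t * eps)

near_t_limit = f(1e12, 1e-3)    # taking t large first: close to 1
near_eps_limit = f(1e3, 1e-15)  # taking eps small first: close to 0
```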
Therefore, there is no contradiction. We hope the discussion above can address your question. | Summary: This paper examines the implicit bias of the Adam optimizer in the context of linear logistic regression, demonstrating that it converges to the maximum $\ell_\infty$-margin solution under certain mild conditions. The authors note that omitting the stability constant in Adam updates results in a different implicit bias than gradient descent, with or without momentum, which converges to the maximum $\ell_2$-margin solution. They also explore various decreasing learning rates, showing that Adam's margin converges at a polynomial rate, which is faster than that of gradient descent. Additionally, they provide numerical experiments that support their findings.
Strengths: - Understanding why Adam performs better than GD in several settings is an important problem and this work takes an important step towards this by showing that Adam has a different implicit bias than GD in the linear logistic regression setting.
- Overall, the paper is well-written and easy to follow. The proof sketch in Section 6 is explained well.
Weaknesses: - The paper does not present results for a fixed learning rate and only considers a set of decreasing learning rates.
- The discussion in lines 50-52 and after Corollary 4.7, comparing the rates of Adam and GD, should also comment on the convergence rates for GD with adaptive learning rates (e.g., normalized GD) which have been shown to converge faster (see [1] and related work) than GD.
- (Minor) In Assumption 4.3, ‘non-increasing’ should be ‘decreasing’ or ‘diminishing’.
- The results in prior work on implicit bias of GD are global (hold for any initialization), whereas the results in this paper require an assumption on the initialization (Ass. 4.2). Based on the discussion following this assumption, it might be better to state an assumption on the data and then show that the condition on the initialization holds as a Lemma.
- The paper does not comment on how optimal the obtained rates in Corollary 4.7 are.
**References:**
[1] Wang et al., Achieving Margin Maximization Exponentially Fast via Progressive Norm Rescaling, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the authors comment more on why considering the stability constant $\epsilon=0$ makes the setting more challenging? I understand the motivation in lines 105-107, but it is unclear what the challenge is since the accumulated second-order moments would be non-zero.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There is no potential negative impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive comments. Your comments and questions are addressed as follows.
>**Q1**:
The paper does not present results for a fixed learning rate.
**A1**:
When considering a fixed learning rate $\eta_t=\eta$ for some small $\eta$, our analysis implies that $\lim_{t\to \infty}\big|\min_{i\in[n]}\frac{\langle w_{t}, y_i\cdot x_i\rangle}{\lVert w_t \rVert_{\infty}}-\min_{i\in[n]}\frac{\langle w^*, y_i\cdot x_i\rangle}{\lVert w^*\rVert_{\infty}} \big|\leq O(\sqrt{\eta})$. We will add a comment in the revision.
>**Q2**:
The discussion after Corollary 4.7, comparing Adam and GD, should also compare Adam and GD with adaptive learning rates (see [1] and related work).
**A2**:
Thanks for your suggestion and for pointing out the related work [1]. We will cite it and add more comprehensive comparisons and discussions in the revision.
>**Q3**:
(Minor) In Assumption 4.3, ‘non-increasing’ should be ‘decreasing’ or ‘diminishing’.
**A3**:
Thanks for your suggestion. We will change "non-increasing" to "decreasing" in the revision. We will also add comments to clarify that we do not require the learning rates to be "strictly decreasing”.
>**Q4**:
Based on the discussion following Assumption 4.2, it might be better to state an assumption on the data and then show that the condition on the initialization holds as a Lemma.
**A4**:
We propose such an assumption because it is the most general version of the assumption and can cover various settings. For example, we expect that the current version of Assumption 4.2 covers the following two cases:
- Case 1 (fixed $w_0$, random data): consider an arbitrary fixed vector $w_0\in \mathbb{R}^d$. Then as long as the training data inputs $\mathbf{x}_1,\ldots, \mathbf{x}_n$ are sampled from any continuous and non-degenerate distribution, $\nabla \mathcal{R}(w_0)[k] \neq 0$ holds with probability $1$.
- Case 2 (fixed data, random $w_0$): consider any fixed training data inputs $\mathbf{x}_1,\ldots, \mathbf{x}_n$ satisfying that the matrix $ [ \mathbf{x}_1,\ldots, \mathbf{x}_n ] $ has no all-zero row. Then as long as $w_0$ is initialized following a continuous and non-degenerate distribution, $\nabla \mathcal{R}(w_0)[k] \neq 0$ with probability $1$.
Therefore, we feel that the current version of Assumption 4.2 may be more general.
Besides, we would also like to clarify that assumptions on properties of initialization have been considered in various previous works studying implicit bias [2, 3, 4, 5]. In particular, [2] makes an assumption that is essentially the same as Assumption 4.2.
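Case 1 above is easy to check numerically; the following is an illustrative sketch (our own code, with a hypothetical data-generating setup, not from the paper) that evaluates the logistic-loss gradient at a fixed $w_0$ on randomly drawn inputs and verifies that every coordinate is nonzero:

```python
import numpy as np

def logistic_grad(w, X, y):
    """Gradient of R(w) = (1/n) * sum_i log(1 + exp(-y_i <w, x_i>))."""
    margins = y * (X @ w)
    coef = -1.0 / (1.0 + np.exp(margins))  # ell'(margin) for logistic loss
    return (coef * y) @ X / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))              # continuous, non-degenerate inputs
y = np.where(rng.random(50) < 0.5, 1.0, -1.0)
w0 = np.zeros(5)                          # an arbitrary fixed initialization
g0 = logistic_grad(w0, X, y)
# With probability 1 over the draw of the data, every coordinate of g0 is nonzero
```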
>**Q5**:
The paper does not comment on how optimal the obtained rates in Corollary 4.7 are.
**A5**:
Currently, we do not have any lower bounds on the margin convergence rates. Due to the complicated optimization dynamics, establishing such lower bounds may be very challenging, and therefore it is difficult to prove that our rate of convergence is optimal. We will add discussions in the revision, and will also mention in the revision that establishing lower bounds on the margin convergence rates is an interesting future work direction.
Despite not having matching lower bounds, our result demonstrates that the margin of the linear predictor converges in polynomial time for a general class of learning rates, which already significantly distinguishes Adam from gradient descent. Such polynomial convergence rates are also supported by our experiment results.
>**Q6**:
Why does considering the stability constant $\epsilon=0$ make the setting more challenging? The accumulated second-order moments would be non-zero.
**A6**:
When studying Adam without $\epsilon$, it is true that under Assumption 4.2, the gradient coordinates at initialization are non-zero and therefore the accumulated second-order moments would be non-zero throughout training. However, please note that the impact of the initial gradient values decays exponentially fast during training: in the worst case where, for a certain coordinate $k$, $\nabla \mathcal{R}(w_t)[k] = 0$ for all $t > 0$, we have $\mathbf{v}_t[k] = \beta_2 (1-\beta_2)^t \cdot \nabla \mathcal{R}(w_0)[k]^2$. Therefore, without a very careful analysis, the worst-case exponentially decaying $\mathbf{v}_t[k]$ may significantly affect the stability of the algorithm, and may even impose an $\exp(t)$ factor in the convergence rate bounds, making the bounds vacuous.
In fact, many existing analyses of Adam and its variants rely on a non-zero stability constant $\epsilon$, having a factor of $1/\epsilon$ in their convergence bounds, e.g., [6] (see Theorem 4.3) and [7] (see Corollary 1 and Remark 2). In comparison, in our work, we implement a very careful analysis (also utilizing the relatively simple problem setting of linear logistic regression) to avoid having such factors in the bounds.
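The worst-case decay described above can be illustrated with a short simulation (our own sketch; we use the common convention $\mathbf{v}_t = \beta_2 \mathbf{v}_{t-1} + (1-\beta_2) g_t^2$, under which the decay factor is $\beta_2^t$ — the decay is exponential under either parameterization):

```python
def second_moment_decay(g0, beta2, T):
    """Track Adam's second-moment estimate for one coordinate whose gradient
    is g0 at t = 0 and exactly 0 afterwards.  With v_0 = (1 - beta2) * g0**2
    and zero later gradients, v_t = beta2**t * v_0 decays exponentially."""
    v = (1.0 - beta2) * g0 ** 2
    vs = [v]
    for _ in range(T):
        v = beta2 * v  # gradient contribution is zero from t = 1 onward
        vs.append(v)
    return vs

vs = second_moment_decay(g0=1.0, beta2=0.999, T=20000)
# After 20000 steps, v_t / v_0 = 0.999**20000, roughly 2e-9
```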
[1] Wang, M., Min, Z. and Wu, L. (2024). Achieving Margin Maximization Exponentially Fast via Progressive Norm Rescaling. International Conference on Machine Learning.
[2] Xie, S. and Li, Z. (2024). Implicit Bias of AdamW: $\ell_\infty $-Norm Constrained Optimization. International Conference on Machine Learning.
[3] Lyu, K. and Li, J. (2020). Gradient Descent Maximizes the Margin of Homogeneous Neural Networks. International Conference on Learning Representations.
[4] Ji, Z. and Telgarsky, M. (2020). Directional convergence and alignment in deep learning. Advances in Neural Information Processing Systems.
[5] Wang, B., Meng, Q., Chen, W. and Liu, T.Y. (2021). The Implicit Bias for Adaptive Optimization Algorithms on Homogeneous Neural Networks. International Conference on Machine Learning.
[6] Zhou, D., Chen, J., Cao, Y., Yang, Z. and Gu, Q. (2024). On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization. Transactions on Machine Learning Research.
[7] Huang, F., Li, J. and Huang, H. (2021). SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients. Advances in Neural Information Processing Systems.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed rebuttal. The lack of a discussion on the optimality of the rates presents a weakness, so I am not raising my score, but I still believe the paper makes a good contribution, so I will maintain my score. | Summary: In this work, the author studies the implicit bias of Adam optimizer for a single layer neural network on separable data. The author's work suggests that, compared to the implicit bias of gradient descent which is the max $ \ell_2 $ margin solution, Adam solution converges to the maximum $ \ell_\infty $ margin solution. For this work, authors take both exponential and logistic loss and find that the convergence speed is on a polynomial order.
In order to confirm the results, the authors perform experiments on synthetic datasets for binary classification tasks and confirm Adam’s convergence to the $ \ell_\infty $ margin comparatively.
Strengths: The work is novel (to the best of my knowledge) and interesting, as the study of the implicit bias of Adam could have further implications in characterizing the difference in optimization behavior of Adam vs SGD in practical scenarios. The assumptions of the work have been clearly presented and seem reasonable. With regard to $ \epsilon $, while theoretical results are not provided, the authors include experimental illustrations that convince me of the assumption. I also appreciate the well-written proof sketch, which helps convey the ideas.
Weaknesses: At the moment, I have some concerns with the paper which are more fit to be discussed as questions.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1) Can the authors expand on how they arrive at the right side of inequality after line 292 using 6.1 ? Perhaps take me through the inequality step by step ?
2) Can the author provide some comments regarding the independence of convergence in the case of $ a = \frac{2}{3} $ from $ \rho $ ? Is there some intuition with regards to the boundaries and case on $ a $ ?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your supportive comments! We address them in detail as follows:
>**Q1**:
Can the authors expand on how they arrive at the right side of inequality after line 292 using 6.1 ? Perhaps take me through the inequality step by step ?
**A1**:
Thanks for your question. We would like to explain that there is a typo on line 292: "for all $t > t_2$" should be "for all $t \geq t_2$".
We now present the detailed derivation as follows. Our target is to derive equation after line 292, which is
$\mathcal{R}(w_{t+1}) \leq \Big(1-\frac{\gamma\eta_t}{2}\Big)\cdot \mathcal{R}(w_{t})\leq \mathcal{R}(w_{t})\cdot e^{-\frac{\gamma\eta_t}{2}}\leq \mathcal{R}(w_{t_2})\cdot e^{-\frac{\gamma\sum_{\tau=t_2}^{t}\eta_\tau}{2}}$. $\qquad$ (target)
To derive these results, we first recall that in equation (6.1), we have
$\mathcal{R}(w_{t+1})\leq \mathcal{R}(w_{t}) - \frac{\gamma\eta_t}{2}\mathcal{G}(w_t)$.
Note that in the proof sketch, for simplicity we focus on the case of exponential loss, and for the exponential loss, it holds that $\ell(x) = -\ell'(x) =e^{-x}$. Therefore, by the definition of $\mathcal{R}(w_{t})$ and $\mathcal{G}(w_{t})$, we have $\mathcal{R}(w_{t}) = \mathcal{G}(w_{t})$. Replacing $\mathcal{G}(w_{t})$ by $\mathcal{R}(w_{t})$ in equation (6.1) then gives
$\mathcal{R}(w_{t+1})\leq \Big(1-\frac{\gamma\eta_t}{2}\Big)\cdot \mathcal{R}(w_{t})$.
This proves the first inequality in (target). To prove the second inequality in (target), we utilize the fact that $1-x\leq e^{-x}$ holds for all $x$. Based on this, we have
$1-\frac{\gamma\eta_t}{2} \leq e^{-\frac{\gamma\eta_t}{2}}$,
which further implies
$\Big(1-\frac{\gamma\eta_t}{2}\Big)\cdot \mathcal{R}(w_{t})\leq \mathcal{R}(w_{t})\cdot e^{-\frac{\gamma\eta_t}{2}}$.
This proves the second inequality in (target). So far, we have proved that
$\mathcal{R}(w_{t+1})\leq \mathcal{R}(w_{t})\cdot e^{-\frac{\gamma\eta_t}{2}}$
for all $t \geq t_2$. Applying this result recursively gives
$\mathcal{R}(w_{t+1})\leq \mathcal{R}(w_{t})\cdot e^{-\frac{\gamma\eta_t}{2}} \leq \mathcal{R}(w_{t-1})\cdot e^{-\frac{\gamma\eta_{t-1}+\gamma\eta_t}{2}}\leq \cdots \leq \mathcal{R}(w_{t_2})\cdot e^{-\frac{\gamma\sum_{\tau=t_2}^{t}\eta_\tau}{2}}$.
This proves the last inequality in (target), and our derivation is finished. We will clarify the derivation of this result in the revision.
>**Q2**:
Can the author provide some comments regarding the independence of convergence in the case of $a=2/3$ from $\rho$?
**A2**:
We believe that the margin convergence rate in the case of $a=2/3$ does depend on $\rho$, as the corresponding bound in Corollary 4.7 is
$O(\frac{d\cdot\log t + \log n + \log \mathcal{R}(w_0) + [\log(1/\rho)]^{1/3}}{t^{1/3}})$ for exponential loss, and $O(\frac{d\cdot\log t +nd+ n \mathcal{R}(w_0) + [\log(1/\rho)]^{1/3}}{t^{1/3}})$ for logistic loss. Since we are considering the margin convergence rate as $t$ goes to infinity, these convergence rates can be simplified into $O(\frac{d\cdot\log t}{t^{1/3}})$ for both exponential and logistic losses, because the term $ d\cdot \log t$ can eventually dominate the other terms in the numerators. However, since $d\cdot \log t$ increases very slowly in $t$ and will require exponentially many iterations to dominate the other terms, in our result we still keep the other terms to make the bounds more concrete.
We suspect that you are asking why $\rho$ does not appear in the margin convergence rates for the case $a<2/3$. In fact, this is because we have applied the simplifications discussed above. Let us take the exponential loss as an example. When $a<2/3$, for the exponential loss, we can prove (see the bound below line 603 in Appendix B.2) a margin convergence rate of the order
$O(\frac{d t^{1-3a/2} + d^{\frac{2(1-a)}{a}} + \log n + \log \mathcal{R}(w_0) + [\log(1/\rho)]^{1-a}}{t^{1-a}})$.
It is clear that the first term in the numerator $d t^{1-3a/2}$ is the only term that increases in $t$. Moreover, as $t$ increases, the term $d t^{1-3a/2}$ will dominate the other terms in the numerator in polynomial time, which leads to the following simplification:
$O(\frac{d t^{1-3a/2} + d^{\frac{2(1-a)}{a}} + \log n + \log \mathcal{R}(w_0) + [\log(1/\rho)]^{1-a}}{t^{1-a}}) = O(\frac{d t^{1-3a/2}}{t^{1-a}}) = O(\frac{d}{t^{a/2}})$.
This gives the final bound in Corollary 4.7 for $a<2/3$, which does not depend on $\rho$.
>**Q3**: Is there some intuition with regards to the boundaries and case on $a$?
**A3**:
Again, we explain it for the case of exponential loss. We note that Corollary 4.7 is derived based on Theorem 4.5, in which the upper bound for margin convergence is
$O(\frac{\sum_{\tau=0}^{t_0-1}\eta_\tau + d\sum_{\tau=t_0}^{t-1}\eta_\tau^{\frac{3}{2}}}{\sum_{\tau=0}^{t-1}\eta_\tau})$.
Therefore, to give more concrete convergence rates when $\eta_t$ is specifically set as $(t+2)^{-a}$, we need to calculate $\sum_{\tau=0}^t \eta_\tau^{3/2}$ and $\sum_{\tau=0}^t \eta_\tau$ for different values of $a$. By properties of the series $\{ (t+2)^{-a} \}$, this calculation can be separated into 4 cases as follows:
1. When $0<a<\frac{2}{3}$, $\sum_{\tau=0}^t \eta_\tau^{3/2}=\sum_{\tau=0}^t (\tau+2)^{-3a/2} = \Theta(t^{1-3a/2})$ and $\sum_{\tau=0}^t \eta_\tau=\sum_{\tau=0}^t (\tau+2)^{-a} = \Theta(t^{1-a})$.
2. When $a=\frac{2}{3}$, $\sum_{\tau=0}^t \eta_\tau^{3/2}=\sum_{\tau=0}^t (\tau+2)^{-3a/2} = \Theta(\log t)$ and $\sum_{\tau=0}^t \eta_\tau=\sum_{\tau=0}^t (\tau+2)^{-a} = \Theta(t^{1-a})$.
3. When $\frac{2}{3}<a<1$, $\sum_{\tau=0}^t \eta_\tau^{3/2}=\sum_{\tau=0}^t (\tau+2)^{-3a/2} = \Theta(1)$ and $\sum_{\tau=0}^t \eta_\tau=\sum_{\tau=0}^t (\tau+2)^{-a} = \Theta(t^{1-a})$.
4. When $a=1$, $\sum_{\tau=0}^t \eta_\tau^{3/2}=\sum_{\tau=0}^t (\tau+2)^{-3a/2} = \Theta(1)$ and $\sum_{\tau=0}^t \eta_\tau=\sum_{\tau=0}^t (\tau+2)^{-a} = \Theta(\log t)$.
The calculations above lead to the boundary cases in Corollary 4.7.
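To illustrate (our own sketch, not part of the paper or rebuttal), these asymptotic regimes can be checked numerically by computing the partial sums directly; `partial_sums` below is a hypothetical helper we introduce for this check.

```python
def partial_sums(a: float, t: int):
    """Partial sums of eta_tau = (tau + 2)^(-a) and eta_tau^(3/2), for tau = 0..t."""
    s_eta = sum((tau + 2) ** (-a) for tau in range(t + 1))
    s_eta_32 = sum((tau + 2) ** (-1.5 * a) for tau in range(t + 1))
    return s_eta, s_eta_32

# Case 1 (a < 2/3, here a = 0.5): both sums grow polynomially,
# Theta(t^{1-a}) and Theta(t^{1-3a/2}) respectively, so quadrupling t
# roughly doubles sum(eta) (~ t^{1/2}) and multiplies sum(eta^{3/2}) by ~ 4^{1/4}.
s1, s32 = partial_sums(0.5, 10_000)
s1b, s32b = partial_sums(0.5, 40_000)

# Case 4 (a = 1): sum(eta) grows like log t, while sum(eta^{3/2}) stays bounded (Theta(1)).
_, s32_tail = partial_sums(1.0, 100_000)
```

Dividing the second partial sum by the first then reproduces the margin-rate cases of Corollary 4.7 for each regime of $a$.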
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed response. It helped clarify some of my confusions. I am happy to keep my current evaluation. | Summary: This paper studies the implicit bias of the Adam optimizer for logistic regression on linearly separable data. The authors prove that Adam converges to the linear classifier with the maximum $\ell_\infty$-margin. This result contrasts with the classical results on (stochastic) gradient descent (with or without momentum), which converge to the maximum $\ell_2$-margin solution.
Strengths: - The authors theoretically study a popular yet not well-understood optimization method, Adam, in the context of a well-studied classical problem: logistic regression on linearly separable data. This offers a solid and insightful contribution to understanding Adam. In particular, distinguishing Adam from (S)GD with/without momentum on this classical problem is a very interesting result.
- The technical contributions are also of independent interest, as they prove the results for Adam without relying on the stability constant (which is closer to practice) and use mild assumptions.
- The paper is well-written and easy to follow. The proof sketch provides a clear and comprehensive overview of the proof of the main theorem.
Weaknesses: There are no major concerns about this paper. Below are minor comments and some areas for improvement:
- The paper does not provide an intuition behind why Adam achieves the maximum $\ell_\infty$-margin solution, in contrast to GD which achieves the maximum $\ell_2$-margin solution. It would be great if the authors could offer insights on how the $\ell_\infty$-margin arises instead of the $\ell_2$-margin, for example, through a warm-up analysis with SignGD ($\beta_1=\beta_2=0$) or RMSProp ($\beta_1=0$). One way to provide an intuition is as follows: Gunasekar et al. (2018) proved that steepest descent converges to the max-margin solution, implying that SignGD (steepest descent w.r.t. $\ell_\infty$-norm) converges to the maximum $\ell_\infty$-margin solution. Since SignGD is known to be a good proxy for Adam, this may offer an insight into why Adam converges to the maximum $\ell_\infty$-margin solution.
- The authors claim that the bounds in Corollary 4.7 are derived under worst-case scenarios and argue that this is why, in practice, we often observe margins converging faster than the bounds in the corollary. However, this statement lacks supporting evidence. The paper should prove that the rate of convergence is tight. Otherwise, the observed faster convergence of margins in experiments might simply indicate that the bound is not tight enough.
- Some sentences, including those in the abstract, use the term "convergence" unclearly. For example, in the abstract, "this convergence occurs within polynomial time" does not indicate the objective (the normalized $\ell_\infty$-margin in this case) of convergence. This could be confused with other notions of convergence, such as convergence in direction (i.e., $\frac{w_t}{\lVert w_t \rVert} \to \frac{w^*}{\lVert w^* \rVert}$).
- (page 6, line 183) According to the paper, the normalized $\ell_2$-margin converges at a speed of $O(\log \log t / \log t)$ when using GD. However, this should be corrected to $O(1 / \log t)$. According to Soudry et al. (2018), the normalized weight vector converges to the maximum $\ell_2$-margin vector "in direction" with a convergence rate of $O(\log \log t / \log t)$, i.e., $\lVert \frac{w_t}{\lVert w_t \rVert} - \frac{w^*}{\lVert w^* \rVert}\rVert = O(\log \log t / \log t)$. However, the normalized $\ell_2$-margin converges at the speed of $O(1/\log t)$, i.e., $|\min_i \frac{\langle w_t, y_i \cdot x_i \rangle}{\lVert w_t \rVert} - \min_i \frac{\langle w^*, y_i \cdot x_i \rangle}{\lVert w^* \rVert}| = O(1/\log t)$.
- (page 1, line 25) Typo: reply on -> rely on
---
[Gunasekar et al. 2018] Characterizing Implicit Bias in Terms of Optimization Geometry, ICML 2018.
[Soudry et al. 2018] The Implicit Bias of Gradient Descent on Separable Data, JMLR 2018.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Does Theorem 4.5 imply that Adam (with a learning rate $\eta_t = (t+2)^{-a}$, $a<1$) reduces loss faster than GD (Adam: $O(e^{-\gamma t^{1-a} / 4(1-a)})$ vs. GD: $O(1/t)$)? It would be great if the authors could provide a detailed comparison of the convergence rates of loss between Adam and (S)GD with/without momentum.
- Is $\beta_1 \le \beta_2$ a necessary condition? What happens if we use Adam with $\beta_1 > \beta_2$?
- Assumption 4.4 seems to be a non-standard assumption. Is this assumption a necessary condition? Can you explain why such an assumption is needed?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The paper discusses its limitations and future directions, including the extension of the results to homogeneous neural networks and the analysis of stochastic Adam instead of full-batch Adam. I think both directions are promising avenues for future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback! We address your comments as follows:
>**Q1**:
The paper does not provide an intuition why Adam and GD have different implicit biases. Relation to SignGD?
**A1**:
Thanks for your suggestion. Several recent works have discussed that Adam and SignGD are closely related [2], and the implicit bias of SignGD shown by [1] can indeed provide valuable insights into the implicit bias of Adam. Some of our lemmas are also motivated by this intuition (Lemma 6.1 and Lemma A.3). We will add more discussions on this intuition in the revision. However, we would also like to clarify that, despite the clear intuition, our proof for Adam is not a simple combination of existing techniques. For example, the proofs of Lemma 6.1 and Lemma A.3 are non-trivial, and in Lemma A.3, we have implemented a particularly careful analysis which enables us to cover general learning rates and general values of $\beta_1,\beta_2$.
>**Q2**:
The authors claim that Corollary 4.7 is derived under the worst case and this is why in experiments we often observe margins converging faster than the bounds. This statement lacks supporting evidence. The paper should prove that the rate of convergence is tight.
**A2**:
Currently, we do not have any lower bounds on the margin convergence rates. Due to the complicated optimization dynamics, establishing such lower bounds may be very challenging, and therefore it is difficult to prove that our rate of convergence is optimal. We will make it clear in the revision. We will also mention in the revision that establishing lower bounds on the margin convergence rates is an interesting future work direction.
Despite not having matching lower bounds, our result demonstrates that the margin of the linear predictor converges in polynomial time for a general class of learning rates, which already significantly distinguishes Adam from gradient descent. Such polynomial convergence rates are also supported by our experiment results.
>**Q3**:
The term "convergence" is used unclearly: objective convergence vs normalized margin convergence.
**A3**:
Thanks for pointing out the unclear expressions. We will revise them.
>**Q4**: on page 6, line 183, $\ell_2$-margin convergence speed for GD should be $O(1/\log t)$.
**A4**:
Thanks for pointing it out. We will clarify it in the revision.
>**Q5**:
(page 1, line 25) Typo: reply on -> rely on
**A5**:
Thanks for pointing out this typo. We will fix it.
>**Q6**:
Does Theorem 4.5 imply that Adam reduces loss faster than GD? The authors should provide a detailed comparison.
**A6**:
Thanks for your suggestion. You are right that our result shows an $O(e^{-\frac{\gamma t^{1-a}}{4(1-a)}})$ convergence rate of the training loss for Adam in logistic regression when the data are linearly separable, and this is faster than the $O(1/t)$ convergence rate of gradient descent. In the revision, we will comment on it, and give a detailed comparison of the convergence rates of loss between Adam and (S)GD with/without momentum.
>**Q7**:
Is $\beta_1\leq\beta_2$ necessary? What happens if we use Adam with $\beta_1>\beta_2$?
**A7**:
Yes, this condition $\beta_1\leq\beta_2$ is necessary in our analysis. Such a condition ensures the stability of Adam. Under this condition, we can show that in each iteration of Adam, the update in each coordinate is bounded by a constant (Lemma 6.5).
On the other hand, without this condition, there exist extreme cases where Adam is unstable. For example, consider $\beta_2=0$, $\beta_1 > 0$, and suppose that there exists a coordinate index $k$ such that at iteration $t = 1$, the corresponding coordinate of the gradient is very close to zero, satisfying $0 \leq \nabla \mathcal{R}(w_1)[k] \leq \beta_1(1-\beta_1)\rho / 10000 $. Then, by Assumption 4.2, we can see that $\mathbf{m}_1[k] \geq \beta_1(1-\beta_1)\rho$, and hence
$ \mathbf{m}_1[k] / \sqrt{\mathbf{v}_1[k]} \geq \frac{ \beta_1(1-\beta_1)\rho }{ \beta_1(1-\beta_1)\rho / 10000 } = 10000 $. Clearly, $\mathbf{m}_1[k] / \sqrt{\mathbf{v}_1[k]}$ can be arbitrarily large when $\nabla \mathcal{R}(w_1)[k]$ tends to zero. This implies that when $\beta_2=0$, $\beta_1 > 0$, there are cases where Adam is unstable. For general $\beta_1,\beta_2$ with $\beta_1 > \beta_2$, there are also certain extreme scenarios where Adam is unstable.
We would also like to point out that the condition $\beta_1\leq\beta_2$ is a common condition considered in recent works studying Adam [3, 4, 5]. Besides, this condition aligns well with practice, where popular choices are ($\beta_1=0.9, \beta_2=0.99$), or ($\beta_1=0.9, \beta_2=0.999$). We will add more explanations in the revision.
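As a hedged illustration of the instability argument above (our own sketch, not code from the paper): with $\beta_2 = 0$ the second-moment estimate retains no history, so the per-coordinate update magnitude $\mathbf{m}_t[k]/\sqrt{\mathbf{v}_t[k]}$ blows up when a gradient coordinate suddenly becomes tiny, while a standard setting with $\beta_1 \leq \beta_2$ keeps it bounded.

```python
def adam_update_ratio(grads, beta1, beta2):
    """Track |m_t| / sqrt(v_t) per step for a single gradient coordinate (no stability constant)."""
    m = v = 0.0
    ratios = []
    for g in grads:
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        ratios.append(abs(m) / v ** 0.5)
    return ratios

# A gradient coordinate that suddenly becomes nearly zero, mimicking the extreme case above.
grads = [1.0, 1e-8]

unstable = adam_update_ratio(grads, beta1=0.9, beta2=0.0)   # beta1 > beta2: ratio explodes
stable = adam_update_ratio(grads, beta1=0.9, beta2=0.99)    # beta1 <= beta2: ratio stays O(1)
```

Here `unstable[-1]` is on the order of $10^6$ while `stable[-1]` stays below 1, consistent with the boundedness guaranteed by Lemma 6.5 under $\beta_1 \leq \beta_2$.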
>**Q8**:
Assumption 4.4 seems non-standard. Is it a necessary condition? Why is such an assumption needed?
**A8**:
Assumption 4.4 is necessary for our proof. Our proof of Lemma 6.1 relies on this assumption (line 482-line 484, line 485-line 488). As we commented below Assumption 4.4 and proved in Lemma C.1, this assumption is quite mild: it holds for fixed (small enough) learning rate, or decaying learning rates $\eta_t = (t+2)^{-a}$ with $a \in (0, 1]$.
[1] Gunasekar, S., Lee, J., Soudry, D. and Srebro, N. (2018). Characterizing implicit bias in terms of optimization geometry. International Conference on Machine Learning.
[2] Balles, L., Pedregosa, F., and Roux, N. L. (2020). The geometry of sign gradient descent. arXiv preprint arXiv:2002.08056.
[3] Xie, S. and Li, Z. (2024). Implicit Bias of AdamW: $\ell_\infty $-Norm Constrained Optimization. International Conference on Machine Learning.
[4] Hong, Y. and Lin, J. (2024). On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions. arXiv preprint arXiv:2402.03982.
[5] Zou, D., Cao, Y., Li, Y. and Gu, Q. (2023). Understanding the generalization of adam in learning neural networks with proper regularization. International Conference on Learning Representations.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I have thoroughly reviewed the rebuttal and have no further questions. I appreciate the authors' efforts and am happy to maintain my score, voting for acceptance. I look forward to seeing the discussions mentioned in the rebuttal incorporated into the revised manuscript.
---
Reply to Comment 1.1.1:
Comment: Thank you for your support and detailed suggestions. We will make sure to include our discussions in the revised version of the paper. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We appreciate your supportive and constructive comments on our paper. We have addressed all your questions in detail in our individual responses to you. Here, as suggested by Reviewer Gj1V, we include a pdf page presenting some preliminary experiment results on training homogeneous neural networks. But we would also like to clarify that we do not aim to claim any conclusions about training neural networks -- our work is still focused on the simple and clean setting of linear classification.
We look forward to your further comments and suggestions, and we are happy to discuss any remaining questions in detail.
Best regards,
Authors
Pdf: /pdf/46ac4801504f988a0e5ee2062d4e0ab7c43f04a7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Structural Inference of Dynamical Systems with Conjoined State Space Models | Accept (poster) | Summary: The paper introduces the SICSM framework, integrating Selective State Space Models (SSMs) with Generative Flow Networks (GFNs) to tackle challenges in dynamical systems characterized by irregularly sampled trajectories and partial observations. SICSM leverages the adaptive temporal modeling capabilities of SSMs to learn input-dependent transition functions, enhancing structural inference accuracy. It aggregates diverse temporal dependencies and channels them into a GFN to approximate the posterior distribution of the system’s structure. Extensive evaluations across multiple datasets demonstrate SICSM's good performance in accurately inferring complex interactions in partially observed systems.
Strengths: - The integration of Selective SSMs with GFNs is a novel approach that addresses significant challenges in structural inference for dynamical systems. The adaptive mechanisms for handling irregular sampling and partial observations are particularly innovative.
- The research is thorough and well-documented, with extensive evaluations across a variety of datasets. The methodological rigor and comprehensive experimental validation enhance the reliability of the findings.
- The paper is well-organized and clearly written, with detailed explanations of the methodologies and experimental setups. Figures and diagrams effectively illustrate the concepts and results.
- The proposed SICSM framework has broad applicability in scientific discovery and system diagnostics across multiple disciplines. Its ability to handle real-world complexities such as irregular sampling and partial observations makes it a valuable tool for researchers.
Weaknesses: - The implementation of SICSM is computationally intensive, requiring significant resources and expertise. This complexity may limit its accessibility and widespread adoption.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. How does SICSM handle situations where the interaction structures of the dynamical systems change over time? Are there plans to extend the framework to support dynamic graphs?
2. Can the authors provide more details on the computational resources required for training? Are there any strategies to optimize resource usage?
3. What specific real-world applications do the authors envision for SICSM? Are there particular domains where it has shown exceptional promise?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have adequately addressed the limitations of their work, including the challenges posed by irregular sampling and partial observations. They propose future research directions to explore dynamic systems with mutable structural elements, indicating a proactive approach to potential limitations. The discussion on incorporating prior knowledge and adapting to different hop distances further strengthens the framework’s applicability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank Reviewer uuWM for the motivating review! Here are our answers to the concerns:
> The implementation of SICSM is computationally intensive, requiring significant resources and expertise. This complexity may limit its accessibility and widespread adoption.
Many thanks! We acknowledge that the complexity of SICSM, with its dual components of SSSM and GFN, may initially appear daunting. To enhance understanding and facilitate easier adoption, we will include a comprehensive tutorial in the code repository that outlines each step and its purpose within the framework.
> How does SICSM handle situations where the interaction structures of the dynamical systems change over time? Are there plans to extend the framework to support dynamic graphs?
Many thanks for the good question! Addressing changing interaction structures in dynamical systems is indeed challenging, particularly in distinguishing between existing and emerging connections which could blur the reconstructed adjacency matrix. We are exploring the potential use of latent variables to capture these dynamics more accurately. This approach is in its early stages and will require further research and validation.
> Can the authors provide more details on the computational resources required for training? Are there any strategies to optimize resource usage?
We included the training time of each method in Table 4 in the appendix. We trained all of the methods on a single NVIDIA Ampere 40GB HBM graphics card, paired with 2 AMD Rome CPUs (32 cores @ 2.35 GHz), as mentioned in Section 5.1 of our submission. When bridging the Generative Flow Networks with the SSSM, only JAX implementations of the GFN were available, and these lack the GPU acceleration of the SSSM that is included in its PyTorch implementation. We will work on either implementing GPU acceleration for the SSSM in JAX or rebuilding the GFN in PyTorch.
> What specific real-world applications do the authors envision for SICSM? Are there particular domains where it has shown exceptional promise?
Yes, SICSM is particularly suited for complex domains such as single-cell biology, where it can infer gene regulatory networks, and other scientific fields requiring discovery of latent connectivity among variables. Its ability to handle irregularly sampled data and partial observations makes it a valuable tool for scientific discovery across various disciplines.
We hope these clarifications address your concerns, and we are committed to further refining SICSM to enhance its accessibility and applicability in real-world scenarios.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for the comprehensive responses and additional details provided in your rebuttal. I am impressed with your plans to enhance SICSM’s accessibility through tutorials and your efforts to extend its capabilities to dynamic graphs and optimize computational efficiency. Your commitment to addressing these complexities, along with the promising applications in fields like single-cell biology, significantly enhances the paper's value. Based on these improvements, clarifications and a full set of new experimental results addressing other reviews, I have decided to raise my score to a 7. I look forward to seeing the continued development of SICSM.
Warm regards.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer uuWM,
Thank you very much for your positive feedback and for recognizing the efforts made to improve our manuscript and address your concerns. We are grateful for your detailed evaluation and are encouraged by your decision to raise your score. Your support motivates us to continue refining SICSM and advancing this area of research. We look forward to potentially contributing further insights to the field and appreciate your interest in our work's development.
Warm regards,
Authors | Summary: This paper proposes to combine State Space Models and Generative Flow Networks to perform structural inference in an irregular time series context. The proposed method is evaluated on a series of different tasks where it performs well, and compared to a number of baselines. The method's robustness to short time series and missing observations is evaluated.
Strengths: The paper proposes an interesting architecture and solves problems that have the potential to be very relevant in real world contexts, such as biological time series. The empirical evaluation is fairly thorough about testing on many different tasks.
Weaknesses: My main concerns for the paper are its low novelty and its low number of ablations, which make it hard to understand how specific pieces contribute to the performance of the method.
Generally I'm uncomfortable with the way many things are presented in the paper, it's not always clear what's a novel contribution and what's not. I encourage the authors to be clear and exercise an abundance of caution.
/!\\ In my humble opinion this paper uncomfortably downplays its similarity to DAG-GFN [14] and JSP-GFN [15] in several places, and I'm not even an author of these papers. This is especially concerning considering that in many instances JSP-GFN is the closest performing baseline to the proposed method.
Technical Quality: 3
Clarity: 2
Questions for Authors: I'm a bit put off by the framing of the method. The SSSM is the parameterization, the GFN is the optimization method, the structural inference is the task. The ingredients aren't individually novel (e.g. Mamba, ContiFormer), and some of those combos have been tried before (I'm thinking in particular here of DAG-GFN/JSP-GFN). I don't really see how "SICSM [..] redefines approaches to structural inference in complex systems". Maybe what bothers me is that this kind of language obscures the actual contributions of the paper. Many design choices are close to ones taken in [14-15]. I'd encourage the authors to be more careful here. I understand this may come from the authors' lack of familiarity with English, but it creates an unfortunate ambiguity in deciphering what's a contribution and what is just using prior work.
I'm not sure what an $\alpha$-distance is, is it meant to be a placeholder for any norm?
Section 3.3 introduces the flow-matching condition, but more modern conditions exist, and this work in particular seems to be using DB (**and not SubTB!** as suggested by the appendix text). Why is it only introduced in the appendix if it is the chosen objective? Why not just directly present the DB condition used? This is an example of the similarity to DAG-GFN being somewhat downplayed; I encourage the authors to exercise caution.
"To enhance the architectural sophistication of our model, we arrange L Residual Blocks in a sequential configuration, with the output of each block feeding directly into the next." This is a good example of an off-putting phrasing. This describes a standard residual model, but the phrasing in this paragraph (and others in this section) suggests this is somehow a new way to do things. For example, unless I'm missing something, what the authors describe as "intricate multi-hop relationships" is simply a natural and normal consequence of depth in _deep_ neural networks. Either that or the text is not appropriately explaining the uniqueness of the method, which might be even more concerning.
The trick presented in (9) is neat, but it does imply spending $B$ times more compute. Are baselines also allowed to use this trick? If not, the comparisons may be unfair.
In section 4.3, the objective is taken from [14-15]. Please use proper attribution.
Section 5.4 poses an interesting hypothesis, but it's unfortunate that it is only qualitatively evaluated. Why not run proper experiments and measure the effect of residual depth?
Another issue more generally is that the design choice of using a residual SSSM model doesn't seem compared to alternatives. What about a deep transformer with exactly the same tricks? What choices matter? It's nice that the effect of the method is analyzed wrt to for example missing observations and compared to baselines, but what about the method with itself, i.e. ablations?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank Reviewer 3YzK for the detailed and thoughtful comments. Here are our answers to the questions:
> My main concerns for the paper are its novelty and its low number of ablations, which make it hard to understand how specific pieces contribute to the performance of the method.
Many thanks for the comment. The core innovation of our work, SICSM, addresses structural inference with irregularly sampled trajectories and partial observations, a setting not previously explored in the structural or relational inference fields. SICSM effectively integrates SSM and GFN to tackle these challenges, significantly enhancing accuracy in complex dynamical systems.
Additional ablation studies are detailed in the revised sections of our manuscript. These studies critically evaluate the contributions of individual components within SICSM, underscoring their collective impact on performance enhancement; we present them in the answers below.
> [...] It's not always clear what's a novel contribution and what's not. I encourage the authors to be clear and exercise an abundance of caution. [...]
Many thanks! We have clarified the modifications and novel contributions of the GFN in our system, particularly concerning the reward function adaptations detailed in Appendix B.5, now moved to the main text for greater visibility. Notably, JSP-GFN learns the parameters $\lambda$ for each node and therefore restores individuality for each edge. This setup aligns well with the real world: the connections may be isomorphic, and we found this to be very helpful for reconstructing the graph structure of dynamical systems. We have also revised the paper throughout to reference DAG-GFN [14] and JSP-GFN [15] more clearly, for example regarding the objective mentioned in Section 4.3, to avoid misunderstanding of this work's contributions.
> I'm not sure what an $𝛼$-distance is, is it meant to be a placeholder for any norm?
Yes. The $\alpha$-distance is indeed a placeholder for any norm, adaptable based on the dynamics of the specific system under study.
> Section 3.3 introduces the flow-matching condition, but more modern conditions exist, and this work in particular seems to be using DB. Why is it only introduced in the appendix if it is the chosen objective? Why not just directly present the DB condition used?
Many thanks! We revised Section 3.3 to better articulate the use of the DB condition, which we initially included in the appendix. It is now prominently discussed in the main text, aligning with its significance in our methodology.
> [...] This describes a standard residual model, [...]
Many thanks for the comment. We just wanted to have more detailed discussion on the advantages of using residual setup here in the text. We revised them to be more precise.
> What the authors as "intricate multi-hop relationships" is simply a natural and normal consequence of depth in *deep* neural networks [...]
We sincerely disagree with this comment. We tried to handle this with regard to the multi-hop relationships illustrated in Fig. 4 of the submission. As shown there, in the third column, the red link from node 1 to node 4 is a multi-hop connection, while the direct connections would be 1 -> 3 -> 4. We are stating that a finer temporal resolution of the trajectories can help decrease the possibility of producing multi-hop connections. Thus, with more residual blocks, the learned dynamics become finer in the temporal aspect.
> The trick presented in (9) is neat, but it does imply spending $𝐵$ times more compute. Are baselines also allowed to use this trick?
Yes, the computational trick in Eq. 9 is also employed by baselines. This technique has been verified across all implementations, confirming its efficacy without disproportionately increasing computational demands.
> Section 5.4 poses an interesting hypothesis, but it's unfortunate that it is only qualitatively evaluated. Why not run proper experiments and measure the effect of residual depth?
The observations in Section 5.4 are backed by rigorous experiments, with detailed quantitative results now included in Table 2 of the attached PDF. As shown in the table, negative multi-hop edges are commonly observed among setups with just one residual block; their count decreases to 6.5 if we use only the first block, but then the true positives decrease as well. The best setup is the concatenation of the outputs of all blocks, which both decreases the count of negative multi-hop edges and increases the count of true positives.
> Another issue more generally is that the design choice of using a residual SSSM model doesn't seem compared to alternatives [..] i.e. ablations?
We have expanded our ablation studies to include comparisons with Transformer, LSTM, and GRU models on both irregularly sampled trajectories with 30 time steps and partial observation with 12 nodes of the VN\_SP\_15 dataset. We show below the average results of 10 runs. The parameters of all of the modules are set to match the length of the trajectories, and we performed a hyperparameter search with Bayesian optimization. As shown in the table, Transformers perform slightly better than LSTM and GRU, but all of them are inferior to the SSSM, as they cannot deal with multi-hops effectively. Moreover, they fall short on irregularly sampled trajectories, as they cannot learn $\Delta$ adaptively. We included these results in the revision.
| Module | Irre. Sampled | Par. Obser. |
| ----------- | ------------- | ----------- |
| Transformer | $70.5$ | $60.2$ |
| LSTM | $68.1$ | $59.8$ |
| GRU | $69.5$ | $59.2$ |
| SSSM | $89.4$ | $80.8$ |
We hope these clarifications and additions address your concerns effectively. We are grateful for your insights, which have significantly contributed to enhancing the rigor and clarity of our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking my concerns into consideration. I think, if the final paper is truly improved in alignment with our collective reviews, that it will be more impactful to the community. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 3YzK,
Thank you for acknowledging the revisions made to our paper and for your constructive feedback throughout the review process. We are committed to incorporating the insights gathered from all reviews to enhance the paper further. Your decision to raise your score is greatly appreciated and encourages us in our efforts. We are hopeful that the final version will indeed have a meaningful impact on the community, thanks to valuable input like yours.
Warm regards,
Authors of SICSM | Summary: The authors consider the problem of structure learning of dynamical systems from irregularly sampled trajectories and partially observed systems. They propose Structural Inference with Conjoined State Space Models (SICSM), a method based on selective state space models (SSMs) and generative flow network (GFNs). The central idea of this work is to use a SSM for modelling the behaviour of dynamical systems while using a GFN to learn the interacting graph structure between the variables of the system. The authors evaluate their proposed approach on a comprehensive set of datasets for various tasks and compare against a numerous baselines.
Strengths: The authors present a method that addresses a challenging problem in the domain of structure learning of dynamical systems -- i.e., learning system structure from irregularly sampled trajectories and partially observed systems. The use of SSMs to approximate system dynamics while using GFNs to learn the graph structure of the system is a unique and novel approach to this problem. The authors provide a comprehensive evaluation of their method over a variety of systems for irregularly sampled trajectories and partially observed systems, demonstrating that SICSM consistently outperforms counterpart approaches.
Weaknesses: - The method has 3 key components: state space model, embedding residual blocks, and a GFN to approximate the graph structure of the system. It is not entirely clear how these individual components interact and the explicit need for the GFN (see questions below).
- The authors consider a comprehensive set of datasets and baselines, but only one evaluation metric (AUROC). For example, other metrics to consider for this task are: structural Hamming distance (SHD), F1-score, and area under the precision-recall curve (AUPRC). Considering only one evaluation metric makes it difficult to assess the robustness of the approach.
- Another method that seems relevant to this work and addresses a similar problem is CUTS (Cheng et al. 2023). It appears that the majority of the baselines considered in this work are not necessarily methods explicitly tailored to handle irregular time series. Including a method like CUTS in this evaluation may be important for a fairer comparison with SICSM.
References:
Cheng, Yuxiao, et al. "Cuts: Neural causal discovery from irregular time-series data." International Conference on Learning Representations (2023).
Technical Quality: 2
Clarity: 2
Questions for Authors: - For the reward defined in Equation 8, what is the explicit form of $R(<G, \lambda>)$? The authors state that $P(U_{all} | \lambda, \mathbf{Adj})$ represents the likelihood model implemented via a neural network. Is this model trained beforehand? Or is the reward being simultaneously learned throughout training with the GFN?
- A central advantage of using a GFN (or specifically JSP-GFN) to model structure is the ability to approximate the distribution/uncertainty over this structure (and in this case also over the parameters) -- i.e., approximating $P(\mathbf{Adj}, \lambda | U_{all})$ instead of just $\mathbf{Adj}$. In the results, only one deterministic metric is considered (AUROC). Why not consider a distributional metric to evaluate how well $P(\mathbf{Adj}, \lambda | U_{all})$ is approximated, especially given you are comparing to JSP-GFN?
- What is the motivation for also learning the parameters $\lambda$ if the primary objective is to learn $\mathbf{Adj}$? Moreover, there is no evaluation of $P(\lambda | G)$. If this is an important aspect of the approach, why not include a distributional metric (as stated in my previous comment), or possibly an evaluation of the negative log-likelihood?
- What is not entirely clear to me is the use of the state space model (SSM) architecture -- specifically, is $\mathbf{Adj}$ embedded in the SSM of each residual block? Is the approximated graph structure being used by the SSM or is this an independent output?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors discuss limitations and broader impacts in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank Reviewer hoUX for the thoughtful comments. Here are our answers to the questions:
> The method has 3 key components: [...]. It is not entirely clear how these individual components interact and the explicit need for the GFN.
The state space model in our approach handles the dynamics between sampled features at each time step and adapts to irregular sampling intervals through SSSMs. Residual blocks help extract multi-resolution dynamics, enabling the model to learn node dynamics across varying time resolutions. The GFN then creates a space of hidden states based on these dynamics to reconstruct the underlying structure. This layered approach ensures comprehensive learning and reconstruction capabilities.
> The authors consider a comprehensive set of datasets and baselines, but only one evaluation metrics (AUROC) [...]
Many thanks! We have expanded our evaluation metrics beyond AUROC, as detailed in the PDF attached to the general rebuttal. Figures 4-6 include results for AUPRC, SHD, and F1 score, where our method, SICSM, consistently outperforms baselines across most datasets, highlighting its robustness even in complex structures like those found in the TF dataset.
> Including a method like CUTS (Cheng et al. 2023) in this evaluation may be important to create a fairer comparison of SICSM.
Many thanks for the suggestion. We investigated CUTS (Cheng et al. 2023) and found that it assumes regular time steps, some of which are masked out with zero-order-hold (ZOH) placeholders. CUTS therefore works on regularly sampled trajectories with equal time intervals in which some steps are masked, which differs from the problem setting of SICSM. Moreover, CUTS only works on trajectories with a single feature per node at each time step. In contrast, SICSM assumes irregularly sampled trajectories without any ZOH placeholders and can work on multi-dimensional features. We ran CUTS on all of the one-dimensional-feature trajectories mentioned in this paper; the results are shown in Figs. 1, 2, and 4-6 of the attached PDF. As the figures show, CUTS can match the best baselines, but is still outperformed by SICSM. We have included the new results in the revision.
> For the reward defined in Equation 8, what is the explicit form of $R(<G, \lambda>)$? The authors state that $P(U_{all}|\lambda, Adj)$ represents the likelihood model implemented via a neural network. Is this model trained beforehand?
The reward function $R(<G, \lambda>)$ is transformed logarithmically (see Eq. 37 in Appendix B.5) for implementation purposes. The model $P(U_{all}|\lambda,Adj)$, implemented via a neural network, is learned simultaneously with the GFN, ensuring integrated optimization and learning.
> A central advantage to using a GFN to model structure is the ability to approximate the distribution over this structure (and in this case also over the parameters). [...] Why not consider a distributional metric to evaluate how well $P(Adj, \lambda | U_{all})$ is approximated?
Many thanks for the comment! We evaluate the approximation returned by SICSM by comparing it with the exact joint posterior distribution $P(Adj, \lambda | U_{all})$. Similar to the experiments in JSP-GFN, we consider models over $d = 5$ variables with linear Gaussian CPDs. We generate 20 different datasets of $N = 100$ observations from randomly generated Bayesian networks. The quality of the joint posterior approximations is evaluated separately for $Adj$ and $\lambda$. For $Adj$, we compare the approximation and the exact posterior on different marginals of interest, also called features in JSP-GFN (Deleu et al., 2023); e.g., the edge feature corresponds to the marginal probability of a specific edge being in the graph. Fig. 3 in the attached PDF compares the edge features computed with the exact posterior and with SICSM, showing that SICSM can accurately approximate the edge features of the exact posterior. To evaluate the performance of the different methods as approximations of the posterior over $\lambda$, we also estimate the cross-entropy between the sampling distribution of $\lambda$ given $G$ and the exact posterior $P(\lambda | Adj, U_{all})$. The results are shown in Table 1 in the PDF. We observe that, again, SICSM samples parameters $\lambda$ that are significantly more probable under the exact posterior compared to other methods.
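As a minimal illustration of the kind of feature comparison described above (a sketch with assumed shapes, not our actual evaluation code), edge marginals can be estimated from posterior samples of adjacency matrices and compared against the exact marginals:

```python
import numpy as np

def edge_marginals(adj_samples):
    """Marginal probability of each edge under the approximate posterior,
    estimated from S sampled binary adjacency matrices of shape (S, d, d)."""
    return adj_samples.mean(axis=0)

def marginal_rmse(adj_samples, exact_marginals):
    """RMSE between approximate and exact edge marginals -- one way to
    quantify how well the edge features of the posterior are matched."""
    approx = edge_marginals(adj_samples)
    return float(np.sqrt(np.mean((approx - exact_marginals) ** 2)))
```

A perfect approximation yields an RMSE of zero; larger values indicate edge marginals that diverge from the exact posterior.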
> What is the motivation of also learning the parameters $𝜆$ if the primary objective is to learn $Adj$?
The diverse nature of real-world graph connections, such as those differing significantly in physical properties (e.g., springs vs. electric forces), necessitates modeling each connection with unique parameters $\lambda$. This individualized approach enhances the accuracy and applicability of our model in complex scenarios.
> Is $Adj$ embedded in the SSM of each residual block? Is the approximated graph structure being used by the SSM or is this an independent output?
The graph structure $Adj$ is not explicitly integrated into the SSM of each residual block; it is an independent output of our model. We are exploring methodologies to potentially link $Adj$ more closely with SSM operations in future work.
We hope these clarifications meet your concerns and illustrate the depth and rigor of our study. We are committed to further enhancing our model based on the feedback received and appreciate your guidance in this process.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response to my questions and concerns. I am happy with the numerous experimental additions provided. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer hoUX,
We greatly appreciate your acknowledgment of the answers and additional experiments we provided in response to your valuable feedback. Your willingness to reconsider your score is immensely encouraging and reaffirms our commitment to advancing this research.
Warm regards,
Authors of SICSM | Summary: Processes of scientific interest which are representable as graphs, in biology, chemistry, material sciences, mechanics, are an important application for machine learning. Nodes often represent physical objects, some of which influence each other. Nodes exhibit a set of features which can be observed over time. Prior knowledge about the process stems from a mechanistic understanding and can often be represented as the presence or absence of edges between nodes. Node feature observations may be irregularly spaced through time; not all nodes may be observed with every observation.
This paper develops a statistical model for this application with support for irregularly sampled and partial observations of node features, as well as prior knowledge incorporation. Prior knowledge is restricted to the indication of presence, but not absence, of edges. Partially observable nodes are assumed to be from a static node set throughout all observations (i.e., nodes are either always observable or always unobservable). Observations are not assumed to contain a timestamp indication (unlike mobile phone accelerometer readings, which may be irregularly sampled but whose timestamps are read at input).
The model's architecture is relatively sophisticated and is based on generative flow networks to represent and learn the structural aspects of the graph, and state space models to represent the evolution of node features over time.
The paper presents experiments on 16 datasets stemming from 4 physical models, and compares to 7 other models, showing superiority in scenarios where observations are irregularly spaced or nodes partially observable.
Strengths: The paper takes an established problem class (graph systems) with its known challenges (irregular sampling, partial observations), which is not original. However it goes to great lengths to make use of two strong methods, GFN and SSM, with a resulting combination that seems reasonable, strong and of useful application.
The paper is generally clear, notations are coherent and legible, several diagrams support the explanation. To improve the writing, a running example might help bridge the abstractions (node, edge, state...) to physical reality, illuminating and motivating the implementation. The same goes to comment on the connection between the model and the applied datasets (some of this is covered in Annex C, with the exception of C.5 which leaves the physical counterparts of modelled data undescribed).
Weaknesses: Experimental validation is moderately convincing. Baseline implementations seem strong, with care taken to recover implementations of competing methods, as documented in Annex D. However, all datasets are synthetic; the only real dataset, PEMS, is presented in Annex C.5, with results in Annex E.2. In addition, experimental validation seems unconcerned with performance outside the specific cases of partial observations or irregular sampling -- reducing the paper's claim to "this model is better for these two scenarios only".
There seems to be duplication in the presentation of datasets (both in sec 5.1, between the paragraphs starting l.279 and l.290, and again between Annex C.1 and C.2 vs C.4) -- this is confusing. Also, sec 3.3 seems internally redundant, with duplicated points (e.g. l.151 vs eq. 3, and l.148 vs l.157), which again is confusing. Numerous sentences have incorrect English syntax, which obscures their meaning.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Eq.2: since s' is terminal, isn't any $s'' = s_f$ ?
* Eq.5: in your contemplated application scenario, the interval between sampling times, or equivalently the timestamp of each sample, allowing to calculate $\Delta$, is not given with the samples, correct? I've worked on mobile accelerometer/GPS data where sampling is irregular but the timestamps are given, which is why I'd like to make sure. Can you clarify whether a posterior over $\Delta$ can be practically recovered?
* It might be useful to show a concrete, simple example of training data to clarify the scenario described in abstract terms sec3.1.
* Annex 1 fig5: shouldn't all variables be indexed with $i$, the node id? I'm asking because $A, h^t$ aren't. But if so, how is the interdependence between nodes modelled?
* fig6: do I have it right that GFlowNet only adds edges, but doesn't remove any, moving from start to end? Does that have as a consequence that any prior knowledge can only formulated as known-to-be-present edges, but not known-to-be-absent edges (impacts Annex F l.978)?
* Annex C.5: what is the physical model? What is a node, an edge of the model?
* Annex l.872: how is the % of prior knowledge defined?
* Annex l.863: link is referred to but missing
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: * Checklist point 4 and 5: implementation link is claimed to be provided here and in Annex l.863, but I don't see it. It should be provided in the main paper since the supplementary can't be assumed to be reviewed.
* Limitations: a few more assumptions on the usage scenario should be spelt out, as mentioned in this review
* Checklist point 7: It is certainly possible to report error bars on plots through shading, provided they are not as tiny as you make them here. In addition, error bars could be reported, without lengthening the main paper, in the Annexes -- but they aren't, with the only exception of Table 3 on an experiment which is not reported in the main paper.
* Checklist point 6: experimental settings are not as detailed as that they would allow reproduction. Several details are missing for this, e.g. batch size, data splits.
* Checklist point 12: the claim, and requirement of the checklist that assets have their license mentioned is not complied with regarding either datasets or existing code for competing methods. Despite the claim, this point mostly is not complied with.
* Checklist point 13: does this imply the code is not intended to be released as an asset?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the inspiring review. And here are our answers to your concerns.
> To improve the writing, a running example might help bridge the abstractions (node, edge, state...) to physical reality [...]
We would like to sincerely thank the reviewer for this advice. We have added the following paragraph to Section 3.1 of our revision; it will appear in the camera-ready version, as we cannot upload a revision during the rebuttal:
Imagine a dynamical system of balls and springs, in which the balls are randomly connected by springs. We set the initial positions and velocities of each ball and then let the system evolve. Because of the spring forces, which arise from the structural connections between the balls, the balls change their positions and velocities over the observation period. Now suppose we have no idea which balls are connected; the task of structural inference is then to infer the connectivity between the balls from their observed trajectories.
> However, all datasets are synthetic. The only real dataset, PEMS, presented in Annex C.5 [...]
Many thanks! As acknowledged by research in this field, acquiring real-world trajectories with a reliable understanding of the underlying structure is costly and time-consuming. We do acknowledge this urgent need for reliable real-world data, and we are now working on it.
> There seem to be a duplication in the presentation of datasets [...]. Also, sec3.3 seems to be internally redundant with duplicated points (e.g. l.151 vs eq3, and l.148 vs l.157, which again is confusing.
Many thanks for the comments. We have revised our paper to reduce the duplication and tried our best to keep the information most relevant to understanding our paper in the main body.
> Eq.2: since s' is terminal, isn't any $s'' = s_f$?
Actually, no. $s''$ in Eq. 2 denotes a child of the state $s'$. Only if $s'$ points to a terminal state do we have $s'' = s_f$.
> Eq.5: in your contemplated application scenario, the interval between sampling times, or equivalently the timestamp of each sample, allowing to calculate $\Delta$, is not given with the samples, correct? I've worked on mobile accelerometer/GPS data where sampling is irregular but the timestamps are given, which is why I'd like to make sure. Can you clarify whether a posterior over $\Delta$ can be practically recovered?
No, $\Delta$ is not given with the samples. Many thanks for the inspiring question. Technically, we think the posterior over $\Delta$ can be partially recovered, but we would have to change the modeling of $\Delta$ in the SSSM module to use two vectors. The two vectors would work similarly to those in variational autoencoders, and we could partially recover the posterior over $\Delta$ from them.
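For intuition, the role of an input-dependent $\Delta$ can be sketched as follows. This is a toy recurrence step in the spirit of selective SSMs with zero-order-hold discretization and a diagonal transition; the parameter names (`w_delta`, `b_delta`) are illustrative, not the actual SSSM implementation:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def selective_ssm_step(h, x, A, B, C, w_delta, b_delta):
    """One recurrence step where the step size Delta is predicted from
    the current input instead of being given with the samples.
    Shapes: h (n,), x (d,), A (n,) diagonal, B (n, d), C (m, n)."""
    delta = softplus(w_delta @ x + b_delta)   # scalar, input-dependent step
    A_bar = np.exp(delta * A)                 # discretized diagonal transition
    h_new = A_bar * h + delta * (B @ x)       # state update
    y = C @ h_new                             # output readout
    return h_new, y, delta
```

Because `delta` is produced from the input itself, the recurrence can stretch or shrink its effective time step per sample, which is what lets such models adapt to irregular sampling without observed timestamps.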
> Annex 1 fig5: shouldn't all variables be indexed with $i$, the node id? [...]
No, the nodes share $A$ and $h^t$.
> fig6: do I have it right that GFlowNet only adds edges, but doesn't remove any, moving from start to end? Does that have as a consequence that any prior knowledge can only formulated as known-to-be-present edges, but not known-to-be-absent edges?
Yes, it does not remove any edge. And yes, currently we can only incorporate known-to-be-present edges as prior knowledge. This fits scenarios in fields such as GRN inference, where several connections have been validated by experiments, so we are more certain about the existence of some edges than about the absence of others.
> Annex C.5: what is the physical model? What is a node, an edge of the model?
The model in Annex C.5 is as follows: many sensors count the traffic on a road network, and two sensors may be connected by the road they lie along. Here a node is a sensor, and an edge is a road segment between adjacent sensors. Suppose we have no knowledge of the map but try to reconstruct it from the sensors' traffic counts, since in most cases continuous traffic flows occur between adjacent sensors.
> Annex l.872: how is the % of prior knowledge defined?
It is defined as the proportion of edges given as prior knowledge relative to the count of all positive edges in the ground-truth graph.
> Annex l.863: link is referred to but missing AND Checklist point 4 and 5: implementation link is claimed to be provided here and in Annex l.863, but I don't see it [...]
Many thanks! We have revised our paper to include the link. As you may be interested in our implementation, we have also included it in the supplementary materials.
> Limitations: a few more assumptions on the usage scenario should be spelt out
Many thanks! In the limitations we will include the assumption on prior knowledge (edges known to be present), among the other suggested points, as well as the fact that the experiments in this work mainly cover synthetic data.
> Checklist point 7: It is certainly possible to report error bars on plots through shading, provided they are not as tiny as you make them here. [...]
Many thanks for the advice! We have revised the figures to report error bars through shading. Please refer to the images in the PDF file attached to the general rebuttal.
> Checklist point 6: experimental settings are not as detailed [...]
Many thanks! We have revised this section to include the batch size (32) and the training/validation/test split (8:2:2) for all of the datasets. We have also included further details such as the number of epochs.
> Checklist point 12: the claim, and requirement of the checklist that assets have their license mentioned is not complied with regarding either datasets or existing code for competing methods. [...]
Many thanks! We will include the licenses in our repository. As we currently have to obey the double-blind rule, we removed them from the repo.
> Checklist point 13: does this imply the code is not intended to be released as an asset?
Sorry for our mistake; we have corrected it to: we will release the code in our GitHub repo upon acceptance. We do not include the data, as they are from other work and are publicly available.
We hope our answers have addressed your concerns.
---
Rebuttal 2:
Comment: Thanks to all. I have read all reviews and rebuttals, which are all very informative. Overall, I seem to disagree with fellow reviewer uuWM; I'm glad that hoUX and 3YzK could shed light on links to related work and technical choices.
Reviewers' clarification questions and misunderstandings, including my own, are unmistakeable symptoms of obscurity, since they cannot be dispelled even after careful reading by typical professionals. Some improvements were achieved through revisions, and I very well know how difficult it is to implement these in a short time. Nonetheless, writing is still unclear in many places, and I find that concerns by reviewer 3YzK on distinguishing from competitor methods have not been fully addressed. Extra experiments and ablations proposed in your answer to this reviewer are very useful.
> We may image a dynamical system of balls and strings, ...
Despite broken English, the example helps. The text would further benefit from introducing the terminology (edges, nodes, observations, sampling) and notation on the example.
> The model in Annex C.5 is that: ...
I now understand that the node variables are "volume" units (vehicle counts) as opposed to flow units (volume per time, i.e. vehicles per second). Therefore in this example, increasing $\Delta$ will increase the value of the variable. In springs and balls, this also applies to displacement (an integrated form of velocity). Most examples I know, however, have variables in flow ("intensity") units. Is this correct? Does this affect performance of SISCM or some competitor methods? Therefore, for SICSM to work on datasets where variables are "volumes", is it an important hypothesis that throughout observations, the same partial set of node variables is observed at each time point?
I understand your response to reviewer hoUX on taking a pointwise estimate of graph instead of taking advantage of the distribution. I believe that the issue goes somewhat wider than your response hints at: your evaluation is carried out against known ground truth distributions (e.g. via cross-entropy) only because all your datasets are synthetic and you have access to the ground truth. It is still a major weakness that only synthetic datasets are used.
Extra experiments, ablations and clarifications offered during the rebuttal definitely helped; since the reviewing procedure does not support paper revisions, one must hope that the paper can benefit from them, which is not trivial considering that owing to length limitations, every addition of text or figures requires a deletion.
On the whole, many weaknesses pointed out by myself and fellow reviewers during the review process have been confirmed. I maintain my assessment.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer 2uCG,
Thank you for your update. We appreciate the time and effort you are dedicating to reviewing our rebuttal. Please feel free to reach out if you have any further questions or need additional clarification on any points. We look forward to your feedback.
Warm regards,
Authors of SICSM
---
Rebuttal Comment 2.2:
Comment: Dear Reviewer 2uCG,
Many thanks for your reply. And here are our answers to the concerns raised in your comment:
> concerns by reviewer 3YzK on distinguishing from competitor methods have not been fully addressed
Thank you for your comment. We would like to clarify that JSP-GFN forms the foundational backbone of the GFN component in our work and plays a crucial role in the inference of structural connections, which is the primary objective of structural inference. We have discovered that modeling a distinct set of parameters $\lambda$ significantly enhances the learning process. This approach encourages each edge to learn from its connecting nodes, promoting a degree of edge isomorphism.
To refine this process further, we have incorporated additional regularization terms into the graph prior of the reward function. These include a Dirichlet energy term $D(\tilde{Adj}, U_{All})$, a connectivity term $\mathcal{L}_d(\tilde{Adj})$, and a sparsity term $\mathcal{L}_s(\tilde{Adj})$, which directly address the properties of the graph structure within the latent spaces of the GFN. Unfortunately, due to page limitations and our own mistake, these details were initially placed in Appendix B.5, but we have since moved them to the main body of the paper for greater visibility.
Furthermore, as outlined in the experimental section (Lines 307-308), the original JSP-GFN framework is not suited for trajectories with multi-dimensional features. In our SICSM framework, the use of SSM compresses multi-dimensional trajectories into one-dimensional form, enabling effective application of JSP-GFN. This adaptation extends the functionality of JSP-GFN to accommodate the complexities of our datasets.
> We may image a dynamical system of balls and strings, ... Despite broken English, the example helps. The text would further benefit from introducing the terminology (edges, nodes, observations, sampling) and notation on the example.
We apologize for any previous lack of clarity in our text. To illustrate, consider an example where our dynamical system comprises $n$ balls connected by springs, representing $n$ nodes $\mathcal{V}$ and directed edges $E$, respectively. Initially, we set the positions and velocities of each ball, so that each node feature $v^t_i \in \mathbb{R}^d$ is $d$-dimensional, where $d = 4$ in this example. We then let the balls move under the influence of the spring forces, which arise from the structural connections (edges) between the balls (nodes). Over the observation period, the balls change their positions and velocities, and we record the trajectory as the collection of the evolving features of all nodes, $V = \{V^{0}, V^{1}, \dots, V^{T-1}\}$ across $T$ time steps, where $V^{t}$ denotes the feature set at time $t$. In total we observe a set of $M$ trajectories, $\{V_{[1]}, V_{[2]}, \dots, V_{[M]}\}$, assuming a static edge set $E$. Suppose we initially lack knowledge of which balls are connected, i.e., $E$ is unknown; the task of structural inference is then to deduce the connectivity between the balls from their observed trajectories.
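The toy system above can be simulated in a few lines. This is a hypothetical sketch of such a data-generating process (the function name and constants are ours, not the paper's generator); it produces trajectories of $(x, y, v_x, v_y)$ features per ball together with the ground-truth adjacency to be inferred:

```python
import numpy as np

def simulate_springs(n=5, T=50, dt=0.1, p_edge=0.3, k=1.0, seed=0):
    """Toy balls-and-springs system: n balls in 2D, randomly connected
    by springs (rest length 0).  Returns trajectories of shape (T, n, 4)
    with features (x, y, vx, vy) and the symmetric adjacency matrix."""
    rng = np.random.default_rng(seed)
    adj = np.triu(rng.random((n, n)) < p_edge, k=1)
    adj = adj | adj.T                              # undirected, no self-loops
    pos = rng.standard_normal((n, 2))
    vel = np.zeros((n, 2))
    traj = np.empty((T, n, 4))
    for t in range(T):
        traj[t] = np.concatenate([pos, vel], axis=1)
        diff = pos[None, :, :] - pos[:, None, :]   # diff[i, j] = pos_j - pos_i
        force = (k * adj[:, :, None] * diff).sum(axis=1)  # Hooke forces
        vel = vel + dt * force                     # semi-implicit Euler step
        pos = pos + dt * vel
    return traj, adj
```

Structural inference then asks: given only `traj` (possibly subsampled irregularly or with some balls hidden), recover `adj`.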
> The model in Annex C.5 is that: ... I now understand that the node variables are "volume" units (vehicle counts) as opposed to flow units (volume per time, i.e. vehicles per second). Therefore in this example, increasing Δ will increase the value of the variable. In springs and balls, this also applies to displacement (an integrated form of velocity). Most examples I know, however, have variables in flow ("intensity") units. Is this correct? Does this affect performance of SISCM or some competitor methods? Therefore, for SICSM to work on datasets where variables are "volumes", is it an important hypothesis that throughout observations, the same partial set of node variables is observed at each time point?
Yes, the node variables of PEMS dataset in Annex C.5 are actually "volume" units, as it refers to the number of passing vehicles within a time period. Yes, most examples used in the experimental section have variables in flow (such as the velocity for spring and balls).
From our analysis, we have observed that combining both volume and intensity variables generally yields the best performance across all methods, including SICSM. Utilizing only volume or only intensity tends to produce inferior results, with performances of these two variable types often comparable to each other.
---- Continues on another comment ---
---
Reply to Comment 2.2.1:
Title: Part 2
Comment: --- Here continues ---
> It is still a major weakness that only synthetic datasets are used.
Yes, we acknowledge that the reliance on synthetic data is a limitation of this study. We have updated our manuscript to reflect this in the limitations section. Collecting reliable real-world data remains time-consuming and costly, which influenced our decision to use synthetic data with readily discernible ground truths. We are currently exploring more sophisticated synthetic data generation techniques to address the significant challenge posed by data shortages effectively.
> Extra experiments, ablations and clarifications offered during the rebuttal definitely helped.
Many thanks! Fortunately, we already possess most of the experimental results, or simply need to write concise scripts to sift through previous data. And we may finally get some sound sleep.
> Since the reviewing procedure does not support paper revisions, one must hope that the paper can benefit from them, which is not trivial considering that owing to length limitations, every addition of text or figures requires a deletion.
Thank you for your guidance. We are aware that we are permitted an additional page in the final version, which provides us sufficient space to expand the main body of the paper. We will endeavor to incorporate most of the changes directly into the main text. If any results still need to be included in the appendix due to space constraints, we will ensure they are clearly referenced in the main body for easy accessibility.
We hope our answers addressed your questions correctly.
Warm regards,
Authors of SICSM | Rebuttal 1:
Rebuttal: Dear Program Chairs, Senior Area Chairs, Area Chairs, and Reviewers,
We are deeply grateful for the detailed reviews and constructive feedback provided by Reviewers 2uCG, hoUX, 3YzK, and uuWM. We appreciate the recognition of the novelty and applicability of our work in addressing the complex challenges associated with dynamical systems through the Structural Inference with Conjoined State Space Models (SICSM).
The innovative integration of Selective State Space Models (SSMs) with Generative Flow Networks (GFNs) has been recognized for effectively handling complex challenges associated with dynamical systems. Reviewer hoUX noted our "unique and novel approach to the problem of learning system structure from irregularly sampled trajectories and partially observed systems." This underscores the innovative nature of our methodology, particularly in adapting to the irregularities and complexities of real-world data.
The extensive evaluation across multiple datasets and comparison against numerous baselines has demonstrated the superiority of SICSM, particularly in scenarios involving irregularly sampled or partially observable nodes. Reviewer 2uCG mentioned, "The paper presents experiments on 16 datasets... and compares to 7 other models, showing superiority in scenarios where observations are irregularly spaced or nodes partially observable." This highlights the robustness and applicability of our approach in varied real-world settings.
The clarity of our paper's presentation, including its coherent notations and supportive diagrams, has been positively highlighted. Reviewer uuWM appreciated that "The paper is well-organized and clearly written, with detailed explanations of the methodologies and experimental setups." This feedback validates the effort put into ensuring that the sophisticated architecture of SICSM is accessible and understandable.
The potential applicability of SICSM in practical scenarios, such as biological time series analysis and system diagnostics, has been emphasized as a significant contribution. Reviewer 3YzK stated, "The paper proposes an interesting architecture and solves problems that have the potential to be very relevant in real world contexts, such as biological time series." This comment supports our claim that SICSM can be a valuable tool for scientific discovery across multiple disciplines.
We acknowledge the concerns regarding the computational intensity of implementing SICSM, as noted by Reviewer uuWM: "The implementation of SICSM is computationally intensive, requiring significant resources and expertise." We are committed to addressing this by developing tutorials and exploring optimizations to make SICSM more accessible and user-friendly.
In response to the feedback, we will enhance the manuscript to better clarify our novel contributions, particularly distinguishing them from related works like DAG-GFN and JSP-GFN. Reviewer 3YzK's advice to "exercise an abundance of caution" in presenting our contributions will guide our revisions to ensure clarity and accuracy. We will also revise the paper to include the new plots with shadings showing the standard deviations, as well as experimental results with CUTS (Cheng et al. 2023) and ablation studies.
As for the concerns and questions raised by each reviewer, we answered them individually under each review. We hope we have addressed the concerns and answered the questions correctly.
We are encouraged by the constructive critiques and the positive comments on the potential and performance of SICSM. Our team is committed to continuous improvement and is excited about the future contributions our work can make to the field of structural inference and beyond.
Thank you once again for your invaluable input and the opportunity to contribute to this esteemed conference.
Please check the attached PDF for revised figures and more experimental results.
Warm regards,
Authors of SICSM
Pdf: /pdf/199fc0a587bed5d14c15a9617fec5b25f14e6d6d.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Assembly Fuzzy Representation on Hypergraph for Open-Set 3D Object Retrieval | Accept (poster) | Summary: This paper presents a novel 3D object retrieval method. First, to facilitate this task, the authors build 3 datasets for training and evaluation, which may significantly benefit the community. Then the paper proposes the Isomorphic Assembly Embedding (IAE) and the Structured Fuzzy Reconstruction (SFR) modules, which are designed to generate assembly embeddings with geometric-semantic consistency and overcome the distribution skew of unseen categories. Besides, HIConv is proposed to capture high-order correlations within and among objects. Extensive experiments show that the method achieves SOTA performance.
Strengths: 1. This paper builds 3 datasets for the task, which may facilitate future research.
2. The paper proposes several novel modules to capture the part-level and inter-object features for object retrieval.
3. The task itself is important in shape understanding.
Weaknesses: 1. No visualization results.
2. The presentation is hard to understand. There are quite some complex equations, like Eq 2 and Eq 4. Please briefly explain what they mean and how they work.
3. In Fig. 1, it shows that intra-object features are extracted before inter-category features. But in Fig. 2, I only see Inter-object features? It's hard for me to match them up.
4. I still don't understand the input. So you need a dense point cloud with ground-truth 3D part segmentation as input, right? If the segmentation is not perfect, will the method collapse? If the point cloud undergoes an SE(3) transformation, will the method collapse? Can this method handle partial point cloud input, like a point cloud back-projected from a depth map?
Technical Quality: 3
Clarity: 1
Questions for Authors: see weakness
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 4
Limitations: The authors didn't discuss the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Visualization (Weakness 1)**
We apologize for the lack of sufficient visualization results. We provide visualized examples of the retrieval results in Fig. R3 of the rebuttal PDF and will provide more in the revision.
2. **Equations (Weakness 2)**
We have revised the expression and explanation of these two equations. Eq. 2 states the goal of the open-set retrieval task (minimizing the expected risk), which is a widely accepted form of task definition [1-3] in the retrieval field. Eq. 4 defines the HIConv layer, which follows the commonly used form of graph-based convolution [4-6].
[1] View-based 3-D object retrieval[M]. Morgan Kaufmann, 2014.
[2] Hypergraph-based multi-modal representation for open-set 3D object retrieval. TPAMI, 2023.
[3] SHREC’22 track: Open-set 3D object retrieval. 2022.
[4] How powerful are graph neural networks? ICLR, 2019.
[5] Hypergraph neural networks. AAAI, 2019.
[6] Hgnn+: General hypergraph neural networks. IEEE TPAMI, 2022.
3. **Framework (Weakness 3)**:
Thanks for your valuable suggestion. We apologize for the typos in Figure 2: "inter-object" should be corrected to "intra-object". We have restructured the presentation of the proposed framework, which consists of two sequentially connected modules: IAE and SFR.
a) The IAE module takes basic part features (as explained in Answer 4) as input. This module employs a structure-aware convolution layer and a set of auto-encoders to achieve assembly fusion of different parts within an object. In this module, the structure-aware convolution layer is implemented by constructing an isomorphism hypergraph and a hypergraph isomorphism convolution function (as explained for Eq. 4 in Answer 2).
b) The SFR module takes assembly embedding for each object as input, utilizing structure-aware feature smoothing and distillation through hypergraph convolution and memory bank reconstruction, respectively. Finally, this module generates the final features (fuzzy embeddings) for similar object matching based on feature distance, thereby enabling retrieval.
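The final matching step in (b) reduces to nearest-neighbor search over the fuzzy embeddings. A minimal sketch of such distance-based retrieval, assuming cosine distance on L2-normalized embeddings (an illustrative choice on our part, not a detail confirmed by the paper):

```python
import numpy as np

def retrieve(query_emb, target_embs, k=5):
    """Rank target objects by feature distance to the query embedding.

    query_emb:   (D,) fuzzy embedding of the query object.
    target_embs: (M, D) fuzzy embeddings of the target set.
    Returns the indices of the k closest targets under cosine distance.
    """
    q = query_emb / np.linalg.norm(query_emb)
    t = target_embs / np.linalg.norm(target_embs, axis=1, keepdims=True)
    dist = 1.0 - t @ q            # cosine distance of each target to the query
    return np.argsort(dist)[:k]   # indices of the k most similar objects
```

In practice the target embeddings would be precomputed once for the whole target set, and the full ranked list (not just the top-k) would feed metrics such as mAP and NDCG.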
4. **Input (Weakness 4)**
Thanks for your valuable suggestion. The inputs to our framework are part features rather than dense point clouds, and we do not need ground-truth 3D part segmentation as input (as described in lines 259-263 and lines 476-478 of the submission). Instead of extracting features from segmented parts of the point cloud, we use a segmentation network to obtain point-wise features for each point and then average these point-wise features to obtain part features. As shown in Fig. R2 of the rebuttal PDF, the steps are as follows:
a) Input the point clouds of an object.
b) For each point, obtain its point-wise feature and part labels through a pre-trained point cloud part segmentation network.
c) Select the points belonging to the top-$n$ most frequent part categories and average the point-wise features sharing the same part label. We then compute the average feature of the remaining points. In this way, we obtain $n+1$ part features for each object as the input to our HAFR framework.
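The three steps above amount to grouping point-wise features by predicted part label and averaging. A minimal NumPy sketch (the function name and array shapes are our illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def assemble_part_features(point_feats, part_labels, n=3):
    """Aggregate point-wise features into n+1 part features.

    point_feats: (P, D) per-point features from a pre-trained
                 part-segmentation backbone (e.g. PointNet).
    part_labels: (P,) predicted part label for each point.
    Returns an (n+1, D) array: one mean feature for each of the top-n
    most frequent part labels, plus one for all remaining points.
    """
    labels, counts = np.unique(part_labels, return_counts=True)
    top = labels[np.argsort(-counts)[:n]]  # top-n most frequent parts
    feats = [point_feats[part_labels == lab].mean(axis=0) for lab in top]
    rest = ~np.isin(part_labels, top)      # points outside the top-n parts
    other = point_feats[rest] if rest.any() else point_feats
    feats.append(other.mean(axis=0))       # the (n+1)-th "other parts" feature
    return np.stack(feats)
```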
We need neither dense point clouds nor ground-truth segmentation as input. As shown in Tab. R1 of the rebuttal PDF, compared with SOTA point cloud segmentation methods [7][8], using features obtained with PointNet did not significantly affect the results. Therefore, we believe our method will not collapse if the segmentation is imperfect. Besides, our framework is feature-driven (as described in lines 157-158) and does not directly process raw data (dense point clouds). Thus, its robustness to SE(3) transformations and to partial point clouds back-projected from depth maps is equivalent to that of the point feature extraction network. These backbone networks are rotation-equivariant and adapt well to partial data, as shown in well-known works such as [7-11]. Therefore, we believe the method will not collapse when confronted with SE(3) transformations and partial data.
[7] Pointnext: Revisiting pointnet++ with improved training and scaling strategies. NIPS, 2022.
[8] Segment any point cloud sequences by distilling vision foundation models. NIPS, 2024.
[9] Pointnet: Deep learning on point sets for 3d classification and segmentation. CVPR, 2017.
[10] Pointnet++: Deep hierarchical feature learning on point sets in a metric space. NIPS, 2017.
[11] Uni3D: Exploring Unified 3D Representation at Scale. ICLR, 2024.
5. **Limitations (Limitations)**
We have provided a brief discussion in line 330-332.
a) Experiments on varying numbers of parts. We conducted only limited experiments on the influence of varying part numbers in the manuscript. We ran additional experiments with the number of input part features set to 3 ($n=2$) and 5 ($n=4$). As shown in Tab. R1 of the rebuttal PDF, both the 4-part (ours) and 5-part settings show performance improvements over the 3-part setting, indicating that more detailed segmentation provides richer information for assembly-based retrieval. However, our 4-part setting currently performs better than the 5-part setting. We can infer that once the number of parts reaches a certain level, the utilization of part information becomes saturated.
b) Discussions of societal impacts. As shown in lines 158-159 and Appendix C Algorithm 1, the proposed HAFR framework is feature-driven and relies exclusively on basic input features rather than processing raw data end to end. This feature-driven representation preserves extensibility to other common multimedia data, such as text, audio, video, and their fragments. We believe this paper can provide a general theoretical foundation and methodological reference for applying multimedia retrieval in practical real-world scenarios. We will release the datasets and code immediately after the anonymous review period.
---
Rebuttal 2:
Comment: Dear Reviewer,
We would greatly appreciate any updates or feedback you might have regarding our responses to your initial comments. Your insights are valuable to us as we work to improve our paper.
If you need any additional information or clarification from our side, please don't hesitate to let us know.
Thank you for your time and consideration.
---
Rebuttal 3:
Title: Final Rating
Comment: Thanks for your rebuttal. I will increase my rating to borderline accept.
---
Rebuttal 4:
Comment: We sincerely appreciate your positive feedback and professional comments on our work. Your valuable suggestions have been crucial in improving the quality of our paper. We also appreciate the rating improvement, and will carefully revise the manuscript according to your review comments and ensure the rigor of the experimental results and references.
Looking forward to academic discussions with you after the anonymous period of NeurIPS 24, if possible! We are willing to share all our experiences, datasets, and codes of this work. | Summary: The manuscript introduces a framework (HAFR) for addressing the challenge of open-set 3D object retrieval. The authors propose a bottom-up approach focusing on part assembly, leveraging both geometric and semantic information of object parts to enhance retrieval performance across categories, including those unseen during training.
The HAFR framework consists of two main modules: Isomorphic Assembly Embedding (IAE) and Structured Fuzzy Reconstruction (SFR). The IAE module utilizes Hypergraph Isomorphism Convolution (HIConv) and assembly auto-encoders to generate embeddings with geometric-semantic consistency. The SFR module tackles distribution skew in open-set retrieval by constructing a leveraged hypergraph based on local and global correlations and employs a memory bank for fuzzy-aware reconstruction.
The authors have created three datasets, OP-SHNP, OP-INTRA, and OP-COSEG, to benchmark their approach. Extensive experiments demonstrate the superiority of HAFR over current state-of-the-art methods in open-set 3D object retrieval tasks.
Strengths: - The paper presents a method for open-set 3D object retrieval that cleverly integrates part-level information using hypergraphs, which is a unique and promising direction in the field. The HAFR framework is well-thought-out, with clearly defined modules (IAE and SFR) that address different aspects of the retrieval task, from assembly isomorphism to distribution skew mitigation.
- The construction of three new datasets with part-level annotations provides a valuable resource for the research community and supports the validation of the proposed method.
- The methodology is clearly described, and the algorithms are well-structured, making it relatively easy for readers to follow the technical contributions.
- The paper is well-written and easy to follow.
Weaknesses: - The paper does not address scenarios with varying numbers of parts per object. Expanding the framework to handle flexibility in the number of parts could improve its applicability.
- The manuscript could benefit from a discussion on the computational complexity and efficiency of the proposed methods, especially when scaling to larger datasets or higher-dimensional part features.
- Why not evaluate on the PartNet(https://partnet.cs.stanford.edu/)?
- Although the paper claims state-of-the-art performance, it does not always achieve the best results (SDML is best on OP-COSEG for the NDCG metric); what is the reason?
- Some implementation details, such as network architecture specifics and hyperparameter settings, could be better elaborated to ensure reproducibility.
- The paper mentions that data and code will be made available upon acceptance, which is good practice. - However, providing this information upfront or during the review process could enhance transparency and reproducibility. For the three datasets, the detailed construction is missing and encourages the authors to publicize the data, facilitating the community.
- The limitations and failure cases should be discussed comprehensively.
Technical Quality: 3
Clarity: 3
Questions for Authors: The manuscript presents a contribution to the field of 3D object retrieval, particularly in the open-set scenario. The proposed HAFR framework is innovative and has been demonstrated to be effective through rigorous experimentation. However, there are areas where the manuscript could be improved, particularly in terms of computational efficiency, limitations on various parts, and other minor issues. Addressing these points would likely enhance the manuscript's impact and applicability in the field.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Varying numbers of parts (Weakness 1)**
HAFR takes 4 part features as input for each object in this manuscript for now. As shown in Fig. R2 of the rebuttal PDF, the steps for part features generation are as follows:
a) Input the point clouds of an object.
b) For each point, obtain its point-wise feature and part labels through a pre-trained point cloud part segmentation network.
c) Select the points belonging to the top-$n$ most frequent part categories and average the point-wise features sharing the same part label. We then compute the average feature of the remaining points. In this way, we obtain $n+1$ part features for each object as the input to our HAFR framework ($n=3$ in this paper).
We conducted additional experiments with the number of input part features set to 3 ($n=2$) and 5 ($n=4$). As shown in Tab. R1 of the rebuttal PDF, both the 4-part (ours) and 5-part settings show performance improvements over the 3-part setting, indicating that more detailed segmentation provides richer information for assembly-based retrieval. However, our 4-part setting currently performs best (better than the 5-part setting). We can infer that once the number of parts reaches a certain level, the utilization of part information becomes saturated.
2. **Computational complexity and efficiency (Weakness 2)**
Our experiments are conducted on a computing server with one Tesla V100-32G GPU and one Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz. We provide a detailed comparison of model parameters, training time, and inference time for the two stages in Tab. R2 of the rebuttal PDF. The SFR module occupies a very small parameter space (less than 3\%) and is directly affected by dataset size. The IAE module consumes at most 34\%, which determines the effectiveness with high-dimensional features. We believe our method remains effective when scaling to larger datasets or higher-dimensional part features.
3. **PartNet (Weakness 3)**
The ShapeNet Part [1] dataset we used and the PartNet [2] dataset are both fine-grained annotated subsets of ShapeNet, with PartNet having greater variance in the number of parts per object. As an early exploration of open-set learning at a fine-grained level, we believe this paper should focus on the new assembly-based retrieval task and on designing a novel open-set learning paradigm based on both inter-object and intra-object correlations. We therefore selected ShapeNet Part, whose parts are more evenly distributed, rather than PartNet, as it better reflects the core challenges of a fine-grained approach in an open-set environment. For better generalization, we have preliminarily experimented with an adaptive-scale isomorphism computation method inspired by [3] and [4]. Additionally, for future work we have developed a hypergraph-based dynamic system approach, inspired by [5], to manage increasing numbers of parts and labels.
[1] A scalable active framework for region annotation in 3d shape collections[J]. ACM ToG, 2016.
[2] Partnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding[C]. CVPR, 2019.
[3] Hypergraph isomorphism computation[J]. IEEE TPAMI, 2024.
[4] Uni3D: Exploring Unified 3D Representation at Scale. ICLR, 2024.
[5] Hypergraph dynamic system[C]. ICLR, 2024.
4. **SDML on OP-COSEG (Weakness 4)**
SDML proposes a scalable multimodal learning paradigm for retrieval by predefining a common subspace, aiming to minimize intra-class differences. In the context of assembly-based retrieval, SDML projects all parts into a class-related common space to achieve unification. However, this approach results in an overall feature shift, losing the unique geometric information of parts and the correlations among them. As shown in Fig. R3 of the rebuttal PDF, the objects in the OP-COSEG dataset exhibit stronger symmetry than those in OP-SHNP and OP-INTRA, so this biased unification has less influence there. Furthermore, NDCG is a metric computed from a global perspective, so SDML achieves a slight NDCG advantage (0.11\%) on OP-COSEG through this globally biased unification. However, its significantly worse results on the other two datasets and on the other commonly used metrics demonstrate the limitation of this method.
5. **Reproducibility and open access (Weakness 5 and 6)**
We have provided a brief description of implementation details and dataset generation in the Appendix. Besides, we are well prepared and will release the datasets, code, configurations, and pre-trained models immediately after the anonymous review period of NeurIPS 24. We are willing to share our experiences on this (OpenReview) or other open-source platforms.
6. **Failure cases (Weakness 7)**
We provide some failure cases in Fig. R4 of the rebuttal PDF. In these failure cases, the query objects (rocket, pistol) and the wrongly matched target objects (car, motorbike) share a certain similarity in their part segmentation. Although the significant performance improvement of the HAFR framework demonstrates the necessity of assembly-based research in open-set learning, these corner cases also indicate the need for an equilibrium between different levels of labels, which requires balancing global semantic and local geometric information. This issue coincides with the generalization ability discussed in Weakness 1. However, this paper focuses on the fundamental challenge that part assembly poses for open-set retrieval; therefore, we consider only typical segmentations in this study. As mentioned in Answers 1 and 3 above, we are currently conducting research to address these more complex environments.
7. **Writing (Limitations)**:
We apologize for the typos and writing issues in this manuscript. We will conduct a thorough review and revision of the entire paper to ensure clarity and rigor.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thanks for your great efforts! After reading the response, some major issues have been addressed well, so I still lean towards positive for the submission. I encourage the author to add these clarifications to the main paper. Thanks!
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive feedback and professional comments on our work. Your valuable suggestions have been crucial in improving the quality of our paper. We will carefully revise the manuscript according to your review comments and ensure the rigor of the experimental results and references. | Summary: This paper proposes to utilize the part-assembly representation method to mitigate the distribution skew of unseen categories, enhancing the generalization performance for open-set 3D object retrieval. Compared to previous methods, this paper benefits from part-level rather than object-level representation learning, resulting in good generalization on unseen categories. To utilize the part-level representation, this paper introduces the Isomorphic Assembly Embedding (IAE) and the Structured Fuzzy Reconstruction (SFR) modules. The former generates the assembly embedding isomorphically for each object, and the latter generates the fuzzy representation, thus overcoming the distribution skew of unseen categories.
Strengths: The problem is well-motivated and the solution seems working well. The results are good. The paper also contributes three 3D point cloud datasets with multiple part annotations for benchmarking. Extensive experiments on the three benchmarks demonstrate the superiority of the proposed method over current state-of-the-art 3D object retrieval methods.
Weaknesses: 1. The datasets OP-INTRA and OP-COSEG mentioned in the paper may have limitations in category diversity, number of parts, and dataset size, which may affect the generalization ability of the model.
2. The framework comprises many sub-architectures, such as the HIConv layer, multiple auto-encoders, fuzzy embeddings, and memory bank, it seems to be relatively complex. However, this paper does not explicitly discuss the computational efficiency of the model, including training and inference time, and computational cost.
3. Though the paper proposes a solution to the open set problem, the datasets are all virtual. Its generalization ability to unseen categories in real-world applications still needs further verification.
4. The ablation studies show the effect of the HIConv layer. However, only comparisons with MLP and GIN are performed, but no comparisons with other neural layers such as KAN, nor is the number of HIConv layers ablated.
5. The experiments are only conducted on the proposed datasets. The generalization ability of the model on a wider data distribution requires more verification. It would be better to add some experiments on previous public datasets or datasets without open-set settings to demonstrate generalization capabilities.
Technical Quality: 3
Clarity: 3
Questions for Authors: The quantitative performance comparisons in Table 2 show the superiority of the proposed method. However, this paper only surpassed the second place by a little bit in some metrics, and there is no sufficient statistical information to prove the significance of the results, such as p-values.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper should add discussions on limitations and possibly show some failure cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response for Reviewer ZtnM**
We sincerely thank you for the valuable comments and advice, which provided important guidance for us to enhance the rigor and coherence of our paper and directed the focus of our future work.
1. **About the generalization ability of the model (Answer for Weakness 1 and Weakness 3)**:
Thanks for your valuable suggestion. As an early exploration of open-set learning at a fine-grained level, we believe this paper should focus on the new assembly-based retrieval task and on designing a novel open-set learning paradigm based on both inter-object and intra-object correlations. We therefore selected three 3D object datasets with typical geometric structures, which are representative of a fine-grained open-set environment. Experimental results demonstrate the necessity and effectiveness of the assembly-based paradigm and framework. Further improving the generalization ability of our framework and extending it to more intertwined factors in complex open-set environments is a key direction for future work. Specifically, we have preliminarily experimented with an adaptive-scale isomorphism computation method for generalization across datasets and domains, inspired by [1] and [2]. Additionally, we have developed a hypergraph-based dynamic system approach, inspired by [3], to manage increasing numbers of parts and labels.
[1] Feng Y, et al. Hypergraph isomorphism computation[J]. IEEE TPAMI, 2024.
[2] Zhou J, et al. Uni3D: Exploring Unified 3D Representation at Scale. ICLR, 2024.
[3] Yan J, et al. Hypergraph dynamic system[C]. ICLR, 2024.
2. **About module-wise computational requirements (Answer for Weakness 2)**:
Our experiments are conducted on a computing server with one Tesla V100-32G GPU and one Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz. We provide a detailed comparison of model parameters, training time, and inference time for the two stages in Tab. R2 of the rebuttal PDF.
3. **About more ablation studies (Answer for Weakness 4)**:
Thanks for your valuable suggestion. We conducted more ablation studies, especially on HIConv and other neural layers. Our method employs only a single HIConv layer; therefore, we explored the impact of stacking more HIConv layers. Besides, we replaced the HGNN with the KAN network [4] in the SFR module. As shown in Tab. R1 of the rebuttal PDF, the full version of our framework yields the best performance; these results indicate the effectiveness of our design for assembly-based open-set 3D object retrieval.
4. **About the public datasets (Answer for Weakness 5)**:
Thanks for your valuable suggestion. The proposed datasets are constructed from public datasets without open-set settings, making them suitable for open-set retrieval experiments. As mentioned in Answer 1, we will continue to explore assembly-based retrieval in an extended version by constructing more datasets and conducting more experiments.
5. **About the significance of the proposed method (Answer for Questions)**:
Thanks for your valuable suggestion. We provide statistics of our framework (HAFR) and the second-best method (HGM$^2$R). As shown in Tab. R3 of the rebuttal PDF, statistical information proves the significance of the results.
6. **About failures cases and limitations (Answer for Limitations)**:
Thanks for your valuable suggestion. We provide some failure cases in Fig. R3 of the rebuttal PDF. In these failure cases, the query objects (rocket, pistol) and the wrongly matched target objects (car, motorbike) share a certain similarity in their part segmentation. Although the significant performance improvement of the HAFR framework demonstrates the necessity of assembly-based research in open-set learning, these corner cases also indicate the need for an equilibrium between different levels of labels, which requires balancing global semantic and local geometric information. This issue coincides with the generalization ability discussed in Weakness 1. However, this paper focuses on the fundamental challenge that part assembly poses for open-set retrieval; therefore, we consider only typical segmentations in this study. As mentioned in Answer 1 above, we are currently conducting research to address these more complex environments. Thank you for your keen observations and academic insights.
Thank you again for your valuable suggestions, especially your professional advice on future work in assembly-based open-set learning. | Summary: This paper presents a method for finding similar samples from a set of 3D objects given query objects in an open setting, where objects can belong to both already seen and new categories. This method is based on considering 3D objects as hypergraphs consisting of individual geometric and semantic parts of objects. The hypergraph is used to form Isomorphic Assembly Embedding. The second part of the proposed HAFR framework is the Structured Fuzzy Representation module that constructs a hypergraph based on local certainty and global uncertainty correlation to enable transfer from seen to unseen categories. The authors propose a new layer, HIConv, which improves the quality of the generated representation. The authors demonstrate the effectiveness of their approach on three datasets that they constructed for this task.
Strengths: - The idea that one can understand the whole object shape from its parts sounds interesting and reasonable.
- The description of Isomorphic Assembly Embedding and Structured Fuzzy Reconstruction is formal and rather clear.
- The authors conduct extensive ablation studies of their method.
Weaknesses: - Based on the provided experiments, it is unclear if HAFR can generalize well to an unseen domain. Are the results in Table 2 provided for the same suite of model weights?
- The literature review does not include existing methods for open-set 3d object retrieval and recent methods for closed-set 3d object retrieval.
- When comparing with other methods, the authors use their own modification of existing multimodal methods. A comparison with modern methods for open-set 3d object retrieval, such as [1], is necessary to demonstrate the effectiveness of this particular method of object representation.
- The method's description lacks an explanation of how the resulting fuzzy embeddings are used to find similar objects. Additionally, the description contains undefined concepts like isomorphism loss and integration function. If these concepts are not introduced by the authors, please include references to articles where they are defined.
[1] Zhou, J., Wang, J., Ma, B., Liu, Y. S., Huang, T., & Wang, X. (2023). Uni3d: Exploring unified 3d representation at scale. arXiv preprint arXiv:2310.06773.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How are fuzzy embeddings used to find similar objects in the target set?
2. What is the size of the memory anchors bank? Will the method remain effective if the dataset contains more than 16 categories?
3. The method is described as open-set, but it requires GT segmentation of the object into parts. How do you see its applicability in real-world scenarios where GT segmentation might not be available for any object? How much would the quality metrics decrease if we used a neural network model for part segmentation?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discuss limitations in the conclusion regarding the use of the assembly fuzzy representation for a varying number of object parts. In my opinion, another limitation is the need to segment the point cloud into parts to use this method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Generalization ability and comparison (Weakness 1)**
All categories in the testing set are unseen during training (the widely accepted setting for open-set retrieval [1-2]), so the retrieval results in this paper are obtained on unseen categories. All compared methods are evaluated under the same settings and training weights.
We have explained this open-set setting in Section 5.1 (lines 254-255) and Appendix A (Table 4). We will provide a more detailed explanation in the revised version of this paper. Quantitative and qualitative results show that HAFR achieves significant improvements over existing methods. Under this open-set setting, this improvement sufficiently demonstrates the generalization capability of HAFR.
[1] Open-environment machine learning. National Science Review, 2022.
[2] Hypergraph-based multi-modal representation for open-set 3d object retrieval. TPAMI, 2023.
2. **Related works (Weakness 2)**
In our submission, we have provided a brief review of closed-set 3D object retrieval methods in lines 78-88, and a summary of recent open-set 3D object retrieval methods in lines 95-97.
3. **More comparisons (Weakness 3)**
We provide more comparisons with modern methods [3][4] for open-set 3D object retrieval. As shown in Tab. R1 of the rebuttal PDF, the results indicate that these two methods perform similarly to the existing multimodal method (HGM$^2$R). Our HAFR framework shows significant improvements over all these SOTA open-set retrieval methods. This improvement highlights the limitations of existing retrieval paradigms in open-set environments and demonstrates the superiority of our assembly-based open-set retrieval paradigm.
[3] Uni3D: Exploring Unified 3D Representation at Scale. ICLR, 2024.
[4] Openshape: Scaling up 3d shape representation towards open-world understanding. NIPS, 2023.
4. **The approach for finding similar objects (Weakness 4 and Question 1)**
After obtaining the fuzzy embeddings (feature vectors) for all objects, we follow the common distance-based approach in the multimedia retrieval field to find similar objects, which is a widely accepted practice in recent decades [5-7]. Given a feature vector (fuzzy embedding) of query object:
a) Calculate the Euclidean distance between the query feature vector and all target object feature vectors.
b) Sort these distances in ascending order to get the top-$n$ nearest objects.
c) Determine whether the class labels of the top-$n$ nearest objects are the same as the query object label, and calculate metrics such as mAP, NDCG, ANMRR, and PR-Curve, where $n$ denotes a hyper-parameter of the evaluation metrics and can be chosen based on the specific scenario.
[5] A survey of content-based image retrieval with high-level semantics. PR, 2007.
[6] 3-D object retrieval and recognition with hypergraph analysis. TIP, 2012.
[7] Triplet-center loss for multi-view 3d object retrieval. CVPR, 2018.
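As an illustration, the distance-based ranking in steps a)-c) could be sketched as follows (a minimal NumPy sketch with hypothetical 2-D embeddings; `retrieve_top_n` is an illustrative name, not our actual evaluation code):

```python
import numpy as np

def retrieve_top_n(query, targets, n):
    """Rank target objects by Euclidean distance to the query embedding."""
    dists = np.linalg.norm(targets - query, axis=1)  # step a): distance to each target
    order = np.argsort(dists)                        # step b): ascending sort, nearest first
    return order[:n]                                 # indices of the top-n objects

# Hypothetical fuzzy embeddings: one query, four target objects.
query = np.array([0.0, 0.0])
targets = np.array([[1.0, 0.0], [3.0, 0.0], [0.5, 0.0], [2.0, 0.0]])
print(retrieve_top_n(query, targets, 2))  # → [2 0]
```

Metrics such as mAP are then computed in step c) by comparing the class labels of the returned indices against the query label.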
5. **More ablation on the Memory Bank (For Question 2)**
The memory bank we used has 128 anchors with 512 dimensions. The memory bank is a commonly used knowledge distillation technique in deep learning. Specifically, it constructs several anchors (feature vectors) and then learns the activation scores of the target embeddings relative to all anchors. The memory bank is independent of the classification layer, and its size usually does not affect the performance of the network when the number of categories changes; this has been validated in methods across multiple fields [8]. We have conducted more ablation studies on the memory bank. As shown in Tab. R1 of the rebuttal PDF, changes in the memory size have almost no impact. We believe HAFR remains effective even if the dataset contains more than 16 categories.
[8] Semi-supervised semantic segmentation with pixel-level contrastive learning from a class-wise memory bank. CVPR, 2021.
6. **Methods for part feature extraction (For Question 3)**
We have adopted a neural network model for part segmentation to obtain part features (as described in lines 259-263 and lines 476-478 of the submission). However, instead of extracting features from the segmented parts of the point cloud, we use a segmentation network to obtain point-wise features for each point and then average these point-wise features to obtain part features. As shown in Fig. R2 of the rebuttal PDF, the steps are as follows:
a) Input the point clouds of an object.
b) For each point, obtain its point-wise feature and part labels through a pre-trained point cloud part segmentation network.
c) Select the points belonging to the top-$n$ most frequent part categories and average the point-wise features sharing the same part label; we then compute the average feature of all remaining points. In this way, we obtain $n+1$ part features for each object as the input to our HAFR framework, where $n$ is a hyper-parameter of our method.
Therefore, in our framework, we do not need to know how many parts each object should be segmented into. We have conducted more comparisons on the segmentation method: replacing PointNet with SOTA point cloud segmentation methods [9][10] did not significantly change the results. Therefore, we believe the method remains applicable in real-world scenarios and is not sensitive to the choice of segmentation network.
[9] Pointnext: Revisiting pointnet++ with improved training and scaling strategies. NIPS, 2022.
[10] Segment any point cloud sequences by distilling vision foundation models. NIPS, 2024.
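The part-feature construction in steps a)-c) could be sketched roughly as follows (a simplified NumPy sketch with hypothetical point-wise features and labels; `assemble_part_features` is an illustrative name, not from the paper):

```python
import numpy as np

def assemble_part_features(point_feats, part_labels, n):
    """Average point-wise features over the n most frequent part labels,
    plus one averaged feature for all remaining points (up to n+1 total)."""
    labels, counts = np.unique(part_labels, return_counts=True)
    top = labels[np.argsort(-counts)[:n]]            # n most frequent part categories
    feats = [point_feats[part_labels == l].mean(axis=0) for l in top]
    rest = ~np.isin(part_labels, top)                # all points outside the top-n parts
    if rest.any():
        feats.append(point_feats[rest].mean(axis=0))
    return np.stack(feats)                           # shape: (n+1, feat_dim)

# Toy example: 6 points with 2-D features, labels from a hypothetical segmenter.
feats = np.arange(12, dtype=float).reshape(6, 2)
labels = np.array([0, 0, 1, 1, 1, 2])
print(assemble_part_features(feats, labels, n=2).shape)  # → (3, 2)
```

The resulting $(n+1)$ part features would then be fed to the HAFR framework as described above.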
7. **Undefined concepts (Weakness 4)**
We apologize for the unclear descriptions. The isomorphism loss mentioned in line 189 is a typo; it refers to the loss function of the IAE module (Section 4.2.3). The integration function in line 189 is an averaging function over multiple features, following [2].
---
Rebuttal Comment 1.1:
Comment: The authors have generally responded to all comments, thank you! In this regard, I have changed the rating to 'borderline accept'
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive feedback and professional comments on our work. Your valuable suggestions have been crucial in improving the quality of our paper. We will carefully revise the manuscript according to your review comments and ensure the rigor of the experimental results and references. | Rebuttal 1:
Rebuttal: We thank all reviewers for the insightful feedback and for the valuable time and effort. We address all the questions and weaknesses raised by each reviewer in the rebuttal sections below. The attached PDF contains our additional experimental results and figures.
Pdf: /pdf/9b8638f5e91d96e35326ff582cb88717239d8cc2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a framework for open-set 3D object retrieval, called the Hypergraph-Based Assembly Fuzzy Representation (HAFR) framework. This model leverages an Isomorphic Assembly Embedding (IAE) to integrate geometric and semantic consistency. Furthermore, a Structured Fuzzy Reconstruction (SFR) is used to overcome the distribution skew of unseen categories. On three point cloud datasets constructed by the authors, this model outperforms the state-of-the-art.
Strengths: - The motivation for this work is well-established.
- The idea of using hypergraph structures to achieve high-order correlations both within and between objects is novel.
- Sufficient quantitative and qualitative comparisons verify the effectiveness of the proposed model.
Weaknesses: - In structured fuzzy reconstruction, the value of k in the k-nearest neighbors seems to determine the global uncertainty hyperedge. However, the paper lacks explanation or experiments to clarify the selection of k value.
- While HGM2R [1] employs a multimodal approach, the IAE component appears to be similar to the Multi-Modal 3D Object Embedding in HGM2R. What are the differences and unique contributions of IAE compared to the embedding technique used in HGM2R?
- In Table 2, although HGM2R also utilizes hypergraphs, it shows only slight improvements over previous methods in most metrics. For example, its mAP scores on the three datasets are only about 0.1 higher. However, the method proposed in this paper demonstrates a significant improvement over HGM2R on the OP-COSEG dataset, with an increase of nearly 0.6. How can this result be explained?
[1] Hypergraph-Based Multi-Modal Representation for Open-Set 3D Object Retrieval. TPAMI 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to paper weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations in the conclusion section. But I did not find the societal impacts mentioned in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response for Reviewer 3mhr**
We sincerely thank you for the valuable comments and advice, which provided important guidance for us to enhance the rigor and coherence of our paper and directed the focus of our future work.
1. **About the ablation study on $K$-value (Answer for Weakness 1)**:
We further conduct ablation studies on the hyper-parameter $K$ to validate the influence of the uncertainty hyperedge in the leverage hypergraph. As shown in Fig. R1 of the rebuttal PDF, as $K$ varies, the performance of the proposed method remains stable and outperforms the compared methods over most of the range. However, the performance ceases to improve and may even slightly decrease beyond a certain value, indicating that the extraction and utilization of high-order correlations have reached saturation. Besides, different datasets have different peak values of $K$; a larger $K$ indicates a need for more extensive and deeper capture of high-order correlations. Your suggestions have inspired our future work, and we will focus on balancing the performance and complexity of the structure-aware network to achieve optimal relationship modeling. We will also include this part of the experiment and analysis in the paper.
2. **About the unique contributions of IAE (Answer for Weakness 2)**:
The IAE module is designed to obtain assembly embeddings from multiple part features. Compared to the multimodal embedding method in HGM<sup>2</sup>R, which generates **averaged embeddings of different modalities from a global perspective**, **the IAE module aims at assembled embeddings with geometric-semantic consistency for all parts from a global-local collaborative perspective**. Specifically:
a) The IAE module constructs a structure (**the Isomorphism Hypergraph**) to capture geometric correlations, such as the order and quantity among the input parts, and then utilizes them for structure-aware fusion. However, HGM<sup>2</sup>R only uses an averaged-guided fusion method with simple auto-encoders.
b) Guided by the isomorphism hypergraph, the IAE module designs **the Hypergraph Isomorphism Convolution (HIConv) layer** that combines geometric and semantic information to generate embeddings collaboratively. However, HGM<sup>2</sup>R uses a naive MLP for feature mapping and fusion, which loses the high-order information within the input.
Although the HGM<sup>2</sup>R designs a satisfactory approach for multimodal embedding, its semantic-only averaged embedding paradigm is not suitable for part assembly, which requires the collaborative use of information from different domains. Based on HGM<sup>2</sup>R, the IAE module of our method introduces the Isomorphism Hypergraph and Hypergraph Isomorphism Convolution to achieve part assembly with geometric-semantic consistency. Experimental results in both the paper and the rebuttal PDF demonstrate the necessity and superiority of the IAE module.
3. **About the improvement on OP-COSEG (Answer for the last Weakness)**:
Although HGM<sup>2</sup>R uses a hypergraph structure, it only constructs an inter-object hypergraph at the feature smoothing stage (stage 2), without considering the intra-object correlations between different parts of an object, which directly determine the accuracy of the embeddings. Our method constructs hypergraphs to capture both intra-object and inter-object correlations, as mentioned in Answer 2 above. We provide examples from the three datasets in Fig. R4; compared to the OP-SHNP and OP-INTRA datasets, the objects in the OP-COSEG dataset exhibit stronger symmetry, meaning that the isomorphism between parts is more significant. Ignoring geometric information during assembly embedding may therefore lead to inconsistencies within the same category. These challenges explain why previous methods achieve only slight improvements over non-hypergraph methods. Our method, however, tackles the challenge of assembly isomorphism and unification, achieving part assembly with geometric-semantic consistency and thus significant performance improvements on the OP-COSEG dataset.
4. **About societal impacts (Answer for Limitations)**
As shown in lines 158-159 and Appendix C Algorithm 1, the proposed HAFR framework is feature-driven and relies exclusively on basic input features, rather than processing raw data end-to-end. This feature-driven representation approach extends naturally to other common multimedia data, such as text, audio, video, and their segments. We believe this paper can provide a general theoretical foundation and methodological reference for applying multimedia retrieval in practical real-world scenarios. We will release the datasets, code, configs, and pre-trained models immediately after the anonymous review period of NeurIPS 24. We also look forward to engaging and collaborating with more researchers on both theoretical and applied studies of semi-open learning across different fields. Additionally, we are willing to share our experiences on this (OpenReview) or other open-source platforms.
Thank you again for your valuable suggestions, especially your professional advice on future work in assembly-based open-set learning.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I keep my initial rating. | null | null | null | null | null | null |
Infinite-Dimensional Feature Interaction | Accept (poster) | Summary: This work proposes a novel approach for enhancing neural network performance by scaling feature interaction spaces to infinite dimensions using kernel methods. Recent advancements have introduced feature interaction spaces, but these are often limited to finite dimensions, primarily through element-wise multiplications. To overcome these limitations, the authors propose InfiNet, a model architecture leveraging the Radial Basis Function (RBF) kernel to enable infinite-dimensional feature interactions. Finally, the authors provide several empirical results on standard vision tasks.
Strengths: This work provides an interesting generalization of feature-feature interactions via kernels. For the best of my knowledge, this is a novel idea that appears to perform well in practice. However, I am not overly familiar with the current state of the field of deep learning for computer vision. It further provides several larger-scale experiments and interesting ablations.
Weaknesses: * there is no theoretical justification that increasing the dimension of the feature-feature interaction space will lead to better generalization. The paper does a good job analysing this question with ablations. However, this remains an open theoretical question.
* I understand that the motivation for this work comes from applications in computer vision. However, since a major focus in this paper is on comparing the proposed approach to self attention, it would be interesting to not only test this method on images, but also on language.
* the method is reported to have lower FLOPs on average than competing methods. Why is that? Is that a major drawback of this method?
* performance improvement on ImageNet is only marginally. In many cases the proposed method even performs worse than competing methods.
* paragraph starting in line 148: this is an over-claim and has to be removed or rigorously proved. It is not clear how a higher order of $k$ implies better generalization or training. Unless shown in this paper or referenced from another paper, this has to be removed.
Minor:
* line 28: more context for formulating self attention that way has to be provided. It is explained in more detail only at the end of section 3.
* caption of figure 2: there is a '?'. Moreover, a description of the presented images should be included. What is shown in Figure 2 on the right hand side? This is only explained in the main text, not the caption. This needs to be changed.
* figure 2, first image on the left: hard to read -- text overlaps with drawing.
Technical Quality: 3
Clarity: 2
Questions for Authors: * what is meant in line 47 + 48? the current formulation is very cryptic. What exactly is linear in $k$?
* figure 2: why does the addition and multiplication interactions reach the same accuracy on cifar10? Isn't that basically MLP vs self-attention? I would presume self attention to perform better.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: * The paper provides empirical results only on vision tasks. However, a major selling point of this paper is generalizing approaches like self attention in terms of feature-feature interactions. Therefore, comparisons with transformers on language tasks should be performed.
* No theoretical analysis is provided proving that the proposed method leads to better generalization.
* The method appears to have on average lower FLOPs than competing methods, while at the same time only marginally outperforming (or even performing worse than) competing methods on imageNet.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive comments from reviewer u52R and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.
>**[Weakness 1]**: There is no theoretical justification that increasing the dimension of the feature-feature interaction space will lead to better generalization. The paper does a good job analysing this question with ablations. However, this remains an open theoretical question.
**[Re: W1]**: We agree that an in-depth theoretical analysis of the proposed method from the interaction space perspective would enhance our understanding of the model's generalization ability. However, we must acknowledge that we have not yet found a rigorous theorem to fully prove such interaction mechanisms in deep networks. Theoretical analysis of complex systems like InfiNet presents significant challenges. To our knowledge, no existing theorem comprehensively analyzes the generalization ability of advanced neural architectures, such as those incorporating self-attention or gated convolution. Nonetheless, we offer some intuitive and empirical analyses to illustrate the methodology and philosophy underpinning our feature interaction perspective. We hope that these analyses will help readers better understand our motivations and provide guidance for designing improved architectures in future research work.
>**[Weakness 2]**: About testing on language.
**[Re: W2]**: Computer vision models are generally similar to encoder language models, so we are testing a BERT-like design with InfiNet's architecture on language. InfiNet works with encoder models, though adapting it still requires engineering effort. Unlike in most vision models, InfiNet cannot be directly applied in a GPT-like decoder-only architecture due to the limitations of auto-regressive modeling. However, our idea of utilizing kernel methods can be applied to network architecture design, and we are trying to combine the kernel method with xLSTM to construct a new language model.
>**[Weakness 3]**: the method is reported to have lower FLOPs on average than competing methods. Why is that? Is that a major drawback of this method?
**[Re: W3]**: FLOPs refers to the number of floating point operations in the deep learning architecture area; it represents the amount of computation required by the model. This is a metric where smaller values are better, and our model significantly outperforms others in this regard. It is therefore not a drawback but an advantage of this method.
>**[Weakness 4]**: performance improvement on ImageNet is only marginally. In many cases the proposed method even performs worse than competing methods.
**[Re: W4]**:
**i. Our main contribution is a new perspective on model design, not a visual system that beats all other models.** We design the experiments to verify the effectiveness of our design perspective over models built from plain feature superposition and interaction. We follow the architecture and training configuration of the widely used Swin Transformer for fair comparison. Our goal is to provide a new perspective on model design, not a SOTA visual recognition system.
**ii. Our performance on ImageNet-1K can be further improved with more refined configuration tuning.**
We could manually refine the various training hyper-parameters to obtain stronger performance on the ImageNet validation set. We did not do this because the performance improvement gained in this way is essentially overfitting to the dataset. Therefore, there is still substantial room to further improve performance on ImageNet-1K; we believe many techniques, including architecture optimization and improved training methods, would further boost model performance.
>**[Weakness 5]**: paragraph starting in line 148: this is on over-claim and has to be removed or rigorously proved. It is not clear how a higher order of implies better generalization or training. Unless shown in this paper or referenced from another paper, this has to be removed.
**[Re: W5]**: This relates to Weakness 1. We will modify the wording here and state our view in the form of a conjecture/hypothesis instead of an explanation. Some relevant support is given in [1], but a complete theory of such a complex system still needs more effort.
**[Response to Minor Points]**: (1) We will provide more context to make the formulating more self-contained in Section 1. (2)(3) The "?" here corresponds to the one in the figure and represents a replaceable operation. We will change the caption and figure for readability.
>**[Question 1]**: what is meant in line 47 + 48? the current formulation is very cryptic. What exactly is linear in k?
**[Re: Q1]**: This refers to the fact that if high-order interaction is realized by changing the model architecture, the computational cost increases linearly with the order $k$ one wants to achieve, because of the need for elementwise multiplication of $k$ groups of features.
>**[Question 2]**: figure 2: why does the addition and multiplication interactions reach the same accuracy on cifar10? Isn't that basically MLP vs self-attention? I would presume self attention to perform better.
**[Re: Q2]**: This is due to the small size of the CIFAR10 dataset. In fact, Transformers are considered to show advantages over convolutions only on large-scale datasets [3].
**[Response to Limitation]**: See response to W2, W1 and W3
[1] Wu, Yongtao, et al. "Extrapolation and spectral bias of neural nets with hadamard product: a polynomial net study." NeurIPS 2022
[2] Rao, Yongming, et al. "HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions." NeurIPS 2022
[3] Dosovitskiy, Alexey, et al. "An image is worth 16x16 words: Transformers for image recognition at scale." ICLR 2021
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. I agree that it is challenging to include an interesting theoretical analysis of this model. It is perhaps not necessary for this particular paper, as it is already an interesting contribution without.
I urge the authors to include a short description of FLOPs in the context of DL models. Coming from a computational background, readers might interpret the results as a downside of this method when in fact it is even an advantage (i.e., confusing FLOPs with FLOPS, just as I did).
Otherwise, I am happy with the rebuttal and will further increase my score.
---
Rebuttal 2:
Comment: Dear Reviewer u52R,
We appreciate your time and effort in providing feedback on our submission.
As the author-reviewer discussion period draws to a close, we look forward to hear whether our response addressed your concerns and are happy to engage in further discussion if there are still outstanding issues with the paper.
Authors
---
Rebuttal 3:
Comment: Thank you for your response. We appreciate your support in the acceptance of our paper. If you have any further concerns or questions, we are willing to discuss them with you.
We will include the description of FLOPs to make it clear. We will add it in the footnote:
Difference between FLOPs and FLOPS: FLOPs (floating point operations) is the number of floating point operations and is used to measure the complexity of an algorithm/model; the smaller this number, the better. FLOPs is different from FLOPS (floating point operations per second), which is a measure of hardware performance. Following other deep-learning architecture works, we use FLOPs to measure computing demands. Since our model has fewer FLOPs, our method is more efficient.
Thank you for your advice!
Authors | Summary: This paper studies placing a kernel function inside of a neural network architecture to facilitate interaction of features/dimensional expansion. They consider deep convolutional networks with parallel pathway features $x$ and $x'$ and a kernel function computed with both pathways' features as inputs $k(x,x')$. Standard kernel mathematics is used to explain feature expansion. The main novel results are empirical performance of these "InfiNet" architectures, which are shown to perform well in a number of computer vision tests.
Strengths: The idea of unifying different orders of interaction embodied in various neural network architectures, including Transformers is appealing and probably important. The accuracy of the InfiNet experiments is impressive, with a moderate reduction in FLOPs. The paper is easy to read and well-organized, although suggestions are given for how it could be improved.
Weaknesses: My main concerns with the paper are a lack of context for the approach as well as missing important explanations. I also think a good amount of the math that's included could be considered "filler" material that could go into the appendix, since it doesn't represent new results. (I am referring to sections 4.1 and 4.2, most of which can be found in most textbooks which cover kernel methods.)
* Notation which is commonly used in the paper $\oplus$, $\otimes$, * is not explained. You should _explicitly_ define it somewhere, at least in the appendix (and refer people there). In particular, people may be confused by * for elementwise/Hadamard multiplication, since in convnet literature this is often the convolution operator. You call this the "Star Operation" in line 124, but I think it is just elementwise multiplication.
* The authors seem to have missed the vast literature on the connections between random features, neural networks at init, and kernel methods. (CKNs are mentioned but without any discussion of the topics I mention here.) In particular, one way that you could approximate the InfiNet architecture would be to take the two feature streams and pass them each into the same wide, random network/layer and compute the dot product of features at the next level. That would only approximate the kernel function in the InfiNet architecture, and is likely less efficient, but it provides a way to perform dimensionality expansion with a more traditional layer. The authors should discuss these connections.
* Different order of interactions have been studied in random feature and kernel settings already. In random features, interaction order is connected to the sparsity of weights, see e.g. https://arxiv.org/abs/2103.03191 and https://arxiv.org/abs/1909.02603. In kernels, this were referred to as additive kernels https://arxiv.org/abs/1602.00287, also studied in multiple kernel learning https://arxiv.org/abs/0809.1493 (these are just some examples among a larger literature).
* The authors do not seem to want to release their code. They have said "Yes" on Question 5, stating that the code and data are open, but there is no link or indication in the text that the code is available or will be when the paper is published. That seems deceptive.
Technical Quality: 3
Clarity: 2
Questions for Authors: * There is a tension between dimensional expansion, which leads to expressivity in networks, and generalization, which is typically better in low-dimensional settings. Can you discuss this?
* When queries and keys in a transformer are computed using a multilayer network with nonlinearities (rather than a single linear layer, as you've considered), aren't the effective order of interactions higher?
* You claim that the kernel map applied to inputs with $C$ channels takes constant $O(1)$ time (section 4 intro). Wouldn't evaluating the kernel still take $O(C)$ i.e. linear time?
* Can you please include the matrix/tensor shapes and layer sizes explicitly in section 5.1? They could be put into the appendix. It is unclear how many kernel evaluations are performed and on what shape input.
Minor points:
* Sentence lines 45-47 is confusing and should be reworded. Also, the combinatorial expression with the limit is unexplained, not obvious, and doesn't seem to contribute anything here. I suggest removing it.
* Line 61, the expression for span of a certain space is unclear. The main point seems to be that this is an infinite-dimensional function space. Does using this math really add anything?
* Line 61: "as low-overhead as exponential operations" is unclear. Do you mean "evaluating an exponential function"?
* Line 91: "Kernel Method" -> "Kernel Methods" typo
* Line 106: "isotropic" here is unclear to me, suggest removing
* Line 110: "medium" for the intermediate layer connotes different size, suggest changing to "intermediate" or "middle"
* Line 130: Without saying it, are you assuming that the image inputs span the pixel space vector?
* Line 149: "two element-wise multiplication" typo -> "multiplications"
* Notation $W_a \mathbf{x}$ is confusing: In equation (1) this seems to output a scalar. Is that the same in Eqn (6)? What are the shapes of the W matrices?
* What is a "feature branch"? Unclear throughout.
* You say the input is passed through "STEM" and refer people to the ConvNeXT paper https://arxiv.org/pdf/2201.03545. There is more than one "stem" in that paper. Can you be explicit about what you did?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I would strongly prefer that the limitations be included in the main text during the discussion. With movement of some of the standard math, there would be space.
The authors only consider the squared exponential kernel with bandwidth parameter equal to 1. Other kernels might work better. In particular, the effective dimensionality of the RKHS (related to the kernel decay rates) would be higher with a "less smooth" kernel like the exponential/Laplace kernel.
The results are likely not reproducible unless the authors release their code.
The results are also limited only to supervised vision tasks, rather than other modalities or unsupervised settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive comments from reviewer USts and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.
>**[Weakness 1]**: Notation is not explained...
**[Re: W1]**: We'll add a detailed explanation of the notation in a subsequent release. The symbol $\oplus$ denotes the direct sum of vector spaces, which corresponds to structures such as channel expansion/bottleneck in commonly used neural networks, while $\otimes$ denotes elementwise multiplication. Some expressions in our text use the star operation (*); this is intended to align with reference [1] and is essentially elementwise multiplication. We realize that we overlooked notational consistency in the paper, and we will fix this in a subsequent version.
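To make the notation concrete, here is a tiny illustrative snippet (our own toy example, not code from the paper) showing how the direct sum $\oplus$ grows the feature dimension while $\otimes$ preserves it:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)   # a feature vector with 4 channels
y = rng.standard_normal(4)

# Direct sum (⊕): realized here as channel concatenation, so dimensions add.
direct_sum = np.concatenate([x, y])

# ⊗ / star operation: elementwise multiplication, shape is preserved.
elementwise = x * y

assert direct_sum.shape == (8,)   # 4 ⊕ 4 -> 8 channels
assert elementwise.shape == (4,)  # shape unchanged
```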
>**[Weakness 2]**: About literature on the connections between random features, neural networks at init, and kernel methods.
**[Re: W2]**: Thank you for your advice. We will add the discussion on the connections between random features, neural networks and kernel methods.
The NTK framework establishes a direct connection between infinitely wide neural networks at initialization and kernel methods. Specifically, it shows that as the width of the network grows, the network's training dynamics can be described by a kernel function (the NTK), linking the neural network's behavior to that of kernel methods. Random features provide an efficient way to approximate the feature mappings used in kernel methods. By using random projections, one can approximate the inner product defined by a kernel function, making it feasible to apply kernel methods. Our contribution is that we look at the relationship between kernel methods and neural networks in a different light. Instead of examining the dynamics of neural networks from the standpoint of the kernel, we consider how kernel methods can benefit neural networks from the perspective of expanding the interaction space.
We appreciate your advice on using random features to approximate InfiNet. This is a direction we had not considered, and we think it is a worthy topic for future work. Meanwhile, we believe our work will be an important milestone in the development of this field.
>**[Weakness 3]**: Different orders of interactions have been studied in random feature and kernel settings already. ...
**[Re: W3]**: We appreciate your references to research on different orders of interaction in the fields of random features and kernel methods. We will include a section to discuss them. However, we believe there is still a gap between the research in these areas and the research on interaction in NN design. Therefore it is difficult to derive a methodology from these studies that can be directly applied to NN design. Our work looks at interactions in the design of modern NN architectures in the hope of finding a new design consideration.
>**[Weakness 4]**: About the code open source.
**[Re: W4]**: We have recently released the code. However, due to the NeurIPS rebuttal policy, we cannot provide you with a direct link. Instead, as per NeurIPS rules, we have sent an anonymous link to the AC.
>**[Question 1]**: Tension between dimensional expansion and generalization.
**[Re: Q1]**: In neural network research, the prevailing view is that such a tension does not exist, due to the double descent phenomenon, which challenges the traditional bias-variance trade-off perspective. Double descent suggests a positive correlation between dimensional expansion and generalization in the over-parametrized regime [2].
>**[Question 2]**: About effective order of interactions of transformers
**[Re: Q2]**: An interesting question, and we think the answer is yes. There have been studies of neural network width/depth equivalence in the past, and we think the order of interaction can be added to that. A quantitative study of this issue is more complex and requires more effort.
>**[Question 3]**: About the claim of constant time.
**[Re: Q3]**: Thanks for pointing out that we ignored the complexity of the kernel itself here. If we consider the complexity of the kernel itself, it's supposed to be linear time.
>**[Question 4]**: About including the matrix/tensor shapes.
**[Re: Q4]**: We will further refine the description of the tensor shapes in the camera-ready version. As shown in Fig. 3(b), each rounded rectangular box preserves the tensor shape, and parallel boxes create independent copies. The two branches of the kernel each expand the tensor to r (7 in our implementation) times its size.
**[Re: Minor Points]**: Due to character limitations, we combine responses to the minor points. (1) We have noticed that the description in lines 45-49 may not be clear, and we will reorganize it. (2)(3) We will remove the redundant expression and clarify that in the kernel method, the computational overhead is the same as evaluating an exponential function. (4)(6)(8) We will fix the typos. (5) Isotropic means constant shape. (7) Yes, we consider it a basic assumption in computer vision. (9) No, $W$ in Eq. 1 is an n-dimensional vector, while in Eq. 6 it is a matrix; we will change some notation to make this clearer. (10) A feature branch refers to a multi-branch structure as in ResNeXt. (11) STEM refers to the patchify stem (4$\times$4 non-overlapping convolution).
**[Re: Limitations]**: We will adjust some paragraphs of the article to discuss limitations. We will try more kernels to enhance the described approach. The initial code is now open source. Our work on language modalities is in progress but still needs engineering effort. The performance in unsupervised settings is still unknown, but we are confident that InfiNet has strong representational capacity.
[1] Ma, et al. "Rewrite the Stars." CVPR 2024
[2] Nakkiran, et al. "Deep double descent: Where bigger models and more data hurt." Journal of Statistical Mechanics: Theory and Experiment 2021
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response and willingness to address my suggestions. I will change my overall score to 7 and assume that these points will be taken seriously.
PS I know what isotropic means, but I still do not know what it means *in your paper's context*.
---
Reply to Comment 1.1.1:
Title: Thank you for your reply.
Comment: Thank you for your response. We appreciate your support in the acceptance of our paper. If you have any further concerns and questions, we are willing to discuss them with you.
We will remove the "isotropic" and find a better word to describe the constant feature shape of 2 layer-MLP.
Authors | Summary: The authors present a new architecture for computer vision applications that models high-order interactions between features. The architecture is similar to an attention block, but introduces an RBF Kernel layer that captures interactions of order higher than two. The resulting method has strong empirical performance across image classification tasks.
Strengths: - The idea of the paper is very interesting and novel.
- The empirical results show promising performance across multiple tasks against sophisticated methods
Weaknesses: - The presentation of the method seems overly complex in some places. For example, providing a clearer explanation of each new layer (perhaps in pseudocode) would help. While the Infiniblock definition is clear, the reader needs to go back to the previous section to understand the input/output shapes of the RBF layer, which takes work, and can be made simpler. Making clearer the intuition behind high-order interactions would be helpful as well. Showing examples of what the model learns would be helpful to make things concrete.
- The empirical performance is reasonably similar to those of previous methods, hence the empirical improvement is not that large.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What are some examples of features and interactions that help learning and that the new model can learn?
- Is it possible to analyze or visualize what interactions the model learned?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I do not see any ethical and societal implications of the work that need to be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive comments from reviewer Jbm2 and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.
>**[Weakness 1]**: The presentation of the method seems overly complex in some places. For example, providing a clearer explanation of each new layer (perhaps in pseudocode) would help. While the Infiniblock definition is clear, the reader needs to go back to the previous section to understand the input/output shape of the RBF layer, which takes work, and can be made simpler. Making clearer the intuition behind high-order interactions would be helpful as well. Showing examples of what the model learns would be helpful to make things concrete.
**[Response to W1]**:
(1) We adopt the diagrammatic and layer-wise formulaic representations commonly used in articles in the field of deep learning architecture. We will add input/output shapes for each layer in subsequent versions of Fig.3 to make it easier to understand. We have open-sourced our code, which will further help in the understanding of the model we design. However, due to the NeurIPS rebuttal policy, we cannot provide you with a direct link. Instead, as per NeurIPS rules, we have sent an anonymous link to AC.
(2) The intuition for high-order interactions is that recent research suggests high-order interactions are widespread in biological systems, neuroscience, and physical-social systems [1]. We believe such high-order interactions are clearly present in neural network features as well. Interaction-inspired designs such as HorNet and MogaNet share this idea, but their models are limited to exploring simple interactions, whereas our model can explore a larger interaction space.
(3)We show some cases in Figure 4 in the Appendix that demonstrate the difference in the Class Activation Mapping feature regions learned in the feature representation space, the finite simple interaction space and the infinite-dimensional interaction space.
>**[Weakness 2]**: The empirical performance is reasonably similar to those of previous methods, hence the empirical improvement is not that large.
**[Response to W2]**:
**i. Our main contribution is a new perspective on model design, not a visual system that can beat all other models.** We designed the experiments to verify the effectiveness of our design perspective against models based on plain feature superposition and interaction. We follow the architecture and training configuration of the widely used Swin Transformer for fair comparison. Our goal is to provide a new perspective on model design, not a SOTA visual recognition system.
**ii. Our performance on ImageNet-1K can be further improved with more refined configuration tuning.**
We could manually refine the various training hyper-parameters to obtain stronger performance on the ImageNet validation set. We did not do this because performance gained this way is essentially an overfitting of the dataset. Therefore, there is still substantial room to further improve performance on ImageNet-1K; many techniques, including architecture optimization and training methods, would further improve the model.
>**[Question 1]**: What are some examples of features and interactions that help learning and that the new model can learn?
**[Response to Q1]**: We give some examples in Figure 4. We can observe that the regions responded to by the model employing kernel methods fit the actual class better. This shows that our method is more effective at extracting discriminative features, largely because the interaction of the target's features allows the target to be learned as a whole, thus enhancing the model to a certain extent.
>**[Question 2]**: Is it possible to analyze or visualize what interactions the model learned?
**[Response to Q2]**: We believe that the commonly used methods such as GradCAM, LRP, etc. can be directly used for the visualization and analysis of InfiNet. However, it is worth noting that due to the multi-branch structure of InfiNet's features and the introduction of mixing of two branches of activation maps by kernel methods, it is possible that relevance scores are not successfully propagated back to the inputs, resulting in only providing the partial information of relevance.
We think that an interpretable visualization of InfiNet's interactions is a very complex task that still requires a great deal of effort to achieve. We give some basic ideas for possible visualization of interactions. (1) We can try to anchor a region of interest and obtain a heat map of the regions co-interacting with it by using the gradient method. (2) We can obtain the coefficients of the interactions in the region corresponding to the activations by Taylor expansion of the activations at each level of the kernel function, and construct a statistical map.
[1] Battiston F, Amico E, Barrat A, et al. The physics of higher-order interactions in complex systems[J]. Nature Physics, 2021, 17(10): 1093-1098.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the author's response. I am inclined to keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your reply.
Comment: Thank you for your response. We appreciate your support in the acceptance of our paper. If you have any further concerns and questions, we are willing to discuss them with you.
Authors | Summary: The paper shifts the focus from traditional neural network design, which emphasizes feature representation space scaling, to feature interaction space scaling. It introduces a new model architecture, InfiNet, that enables feature interaction within an infinite-dimensional space using the RBF kernel, leading to state-of-the-art results. The paper also discusses the limitations of current models in capturing low-order interactions and proposes the use of classic kernel methods to engage features in an infinite-dimensional space.
Strengths: - The idea of the paper is simple, novel and well exposed.
- The paper introduces InfiNet, a model architecture that leverages infinite-dimensional feature interactions using RBF kernels, which enhances model performance of traditional models.
- InfiNet achieves new state-of-the-art performance in various tasks, demonstrating the effectiveness of infinite-dimensional interactions.
- The paper includes extensive experiments on datasets like ImageNet and MS COCO, showing the scalability and efficiency of InfiNet.
Weaknesses: - the paper builds on the simple use of kernel methods. The novelty of the methods is minimal, in the end it is an RBF kernel.
- the performance improvement of Infinet over other models is mostly marginal and no errors have been displayed.
- the paper doesn't really have theoretical novelty
Technical Quality: 4
Clarity: 4
Questions for Authors: - How does InfiNet compare to other models in terms of training time and resource consumption?
- Can the kernel methods used in InfiNet be applied to other types of neural network architectures beyond those discussed?
- Can the authors quantify the increased dimensionality of the kernel methods over simpler operations (sum, product). If the authors take the simplest architecture for imagenet and look at the representations generated by means of using different kernels, can they quantify what is the actual increase in the intrinsic dimensionality of the representation upon training? It is not fully clear to me that the increase in performance is due to an increase in dimensionality.
- the author mention the possibility of exploiting a learnable kernel in place of RBF. Could the author explain and discuss the ratio behind using RBF in place of others? Is it solely driven by the computational complexity. Would the results be different with a different kernel?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive comments from reviewer Tii5 and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.
>**[Weakness 1]**: The paper builds on the simple use of kernel methods. The novelty of the method is minimal, in the end it is an RBF kernel.
**[Re: W1]**: Our novelty lies in the fact that we perform feature interaction in neural networks by means of kernel methods, which is not trivial. We provide an in-depth discussion of the reasons behind our proposed approach, and we point out that the dimensionality of a hidden space generated through feature interactions is an important part of the performance of a model.
>**[Weakness 2]**: The performance improvement of Infinet over other models is mostly marginal and no errors have been displayed.
**[Re: W2]**:
**i. Our main contribution is a new perspective on model design, not a visual system that can beat all other models.** We designed the experiments to verify the effectiveness of our design perspective against models based on plain feature superposition and interaction. We follow the architecture and training configuration of the widely used Swin Transformer for fair comparison. Our goal is to provide a new perspective on model design, not a SOTA visual recognition system.
**ii. Our performance on ImageNet-1K can be further improved with more refined configuration tuning.**
We could manually refine the various training hyper-parameters to obtain stronger performance on the ImageNet validation set. We did not do this because performance gained this way is essentially an overfitting of the dataset. Therefore, there is still substantial room to further improve performance on ImageNet-1K; many techniques, including architecture optimization and training methods, would further improve the model.
The reason we do not give error bars is that model training is expensive, and it is difficult for us to schedule enough resources to perform multiple rounds of ImageNet-1K training and ImageNet-21K pre-training. For the same reason, omitting error bars is very common in the deep learning architecture community.
>**[Weakness 3]**: The paper doesn't really have theoretical novelty
**[Re: W3]**: We agree that an in-depth theoretical analysis of the proposed method from the interaction-space perspective would help readers better understand our model. However, to be honest, we currently cannot find a good way to rigorously prove such interaction mechanisms in deep networks, since theoretically analyzing a complex system like InfiNet is very difficult. Nevertheless, we provide intuitive and empirical analysis to show the methodology and philosophy behind our feature interaction perspective. We hope such analysis can help readers better understand our motivation and provide some guidance for designing better architectures in future research.
>**[Question 1]**: How does InfiNet compare to other models in terms of training time and resource consumption?
**[Re: Q1]**: In terms of training time, we take InfiNet-T level models trained on ImageNet-1K with the same configuration on 4 $\times$ A100 40GB GPUs as an example, based on our tests.
|Model|Training Time (min/epoch)|
|---|---|
|ConvNeXt|22|
|Swin|27|
|HorNet|31|
|MogaNet|43|
|InfiNet|30|
As the model scale increases, the model width grows and the FFN layers occupy more of the computational load, so the difference in training time between these models becomes smaller.
We noticed a large amount of data reuse in InfiNet's multi-branch structure before the kernel, so memory access bandwidth is the limiting factor in our model training. We substantially reduced the computation time by replacing the round-robin computation of depth-wise convolutions with fully equivalent grouped convolutions, which cuts the training time per epoch from 54 minutes to 30 minutes.
The lack of such low-level computational optimization is the main reason for the current slowness compared to ConvNeXt and Swin, yet our model training is still faster than most mainstream high-order networks. The model also suffers from load imbalance during computation. These problems are solvable and optimizable, and further optimizing the model's computation is the goal of our subsequent work.
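As an aside on the grouped-convolution point above, the following numpy sketch (our own toy illustration, not the actual implementation, which uses framework-level grouped convolutions) shows that a per-channel depth-wise convolution loop and a single vectorized "grouped" computation produce identical results:

```python
import numpy as np

def depthwise_loop(x, kernels):
    # x: (C, L) feature map, kernels: (C, K); convolve each channel in a loop
    return np.stack([np.convolve(x[c], kernels[c], mode="valid")
                     for c in range(x.shape[0])])

def depthwise_batched(x, kernels):
    # Same operation as one vectorized computation: build sliding windows
    # once and contract against all channel kernels together.
    K = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, K, axis=1)
    # np.convolve flips the kernel, so flip here to match it exactly.
    return np.einsum("cwk,ck->cw", windows, kernels[:, ::-1])

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 32))
ker = rng.standard_normal((8, 3))
assert np.allclose(depthwise_loop(feat, ker), depthwise_batched(feat, ker))
```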
>**[Question 2]**: Can the kernel methods used in InfiNet be applied to other types of neural network architectures beyond those discussed?
**[Re: Q2]**: Yes! In fact, one can attempt to replace the element-wise multiplication in any network architecture with the kernel method, subject to the convergence of the model. This includes state-space models (aka Mamba) (Gu, Albert, et al. 2023), HorNet (Rao et al. 2022), gated convolution (Dauphin, et al. 2017), and so on. Beyond that, we are trying to use the kernel method on xLSTM (Beck M. et al. 2024).
>**[Question 3]**: Can the authors quantify the increased dimensionality...
**[Re: Q3]**: We give a quantitative ablation study in Sec. 6.4 and Table 3(b). The intrinsic dimensionality can be calculated with Eq. 5 with $n=HWK^2$ and $k$ equal to the interaction order in Table 3(b), where $H$/$W$ are the height and width and $K$ is the convolution kernel size. In Table 3(b), we can see that as the interaction order increases, the intrinsic dimensionality increases and the performance improves. Thus, we observe a positive correlation between dimensionality and performance.
>**[Question 4]**: About the kernel selection.
**[Re: Q4]**: The reason we use the RBF kernel in our model is: (1) The RBF kernel is the simplest kernel function that can realize infinite-dimensional feature interaction. (2) The empirical results. We show the result of using a linear kernel and monomial kernels in Table 3(b).
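To illustrate why the RBF kernel realizes infinite-dimensional interaction, the following toy snippet (our own sketch, not InfiNet code) factors the kernel as $\exp(-\|a-b\|^2/2) = \exp(-\|a\|^2/2)\exp(-\|b\|^2/2)\exp(a \cdot b)$, where the $\exp(a \cdot b)$ factor expands into $(a \cdot b)^n / n!$ terms of every interaction order $n$; truncating at a finite order only approximates the kernel, with the error shrinking as higher orders are kept:

```python
import numpy as np
from math import factorial

a = np.array([0.5, 0.3, -0.2, 0.4])
b = np.array([0.4, 0.5, 0.1, 0.3])

def rbf(a, b):
    return np.exp(-0.5 * np.sum((a - b) ** 2))

def rbf_truncated(a, b, order):
    # keep only interaction orders 0..order in the exp(a·b) factor
    prefix = np.exp(-0.5 * a @ a) * np.exp(-0.5 * b @ b)
    return prefix * sum((a @ b) ** n / factorial(n) for n in range(order + 1))

exact = rbf(a, b)
errs = [abs(rbf_truncated(a, b, n) - exact) for n in (1, 4, 12)]
assert errs[0] > errs[1] > errs[2]  # higher orders keep improving the fit
assert errs[2] < 1e-8               # 12 orders is already nearly exact
```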
---
Rebuttal 2:
Comment: Dear Reviewer Tii5,
We appreciate your time and effort in providing feedback on our submission.
As the author-reviewer discussion period draws to a close, we look forward to hear whether our response addressed your concerns and are happy to engage in further discussion if there are still outstanding issues with the paper.
Authors
---
Rebuttal Comment 2.1:
Comment: I appreciate the answers to my comments. My questions have been addressed. Yet, I think my score is appropriate and I will not change it, unless further discussions with the AC and other reviewers will prompt me to do so.
---
Reply to Comment 2.1.1:
Comment: Thank you for your response. We appreciate your support in the acceptance of our paper. If you have any further concerns or questions, we are willing to discuss them with you.
Authors | Rebuttal 1:
Rebuttal: Dear Area Chair and Reviewers,
We appreciate the reviewers' precious time and valuable advice. We are happy that most of the reviewers acknowledged our novel idea (Tii5, Jbm2, USts, u52R) and experiments (Tii5, Jbm2, USts, u52R).
At the same time, we note the concerns and suggestions of the reviewers on our work. We provide detailed answers to all the questions raised by the reviewers in the following individual responses. We hope these responses address your questions and concerns well. If you still have questions and concerns, we appreciate you discussing them further with us!
Since NeurIPS 2024 does not allow the submission of revised papers at the rebuttal stage, we are committed to incorporating changes based on the reviewers' comments in a subsequent revised release.
We would like to thank the reviewers again for their valuable comments on our paper and for their time in reviewing and discussing it!
Best Regards,
Authors | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Universal Online Convex Optimization with $1$ Projection per Round | Accept (poster) | Summary: This paper introduces methods for constrained OCO
which automatically achieve the optimal rate without knowing
in advance whether the losses are convex, strongly convex,
exp-concave, or smooth, while using only 1 projection per round.
This is notable because the standard approach proceeds by combining
several expert algorithms with a meta algorithm; in constrained settings
these expert algorithms require implementing a potentially expensive
projection. This work avoids projecting each of the expert algorithm
iterates leveraging the constrained-to-unconstrained reduction of Cutkosky 2020.
Strengths: The paper addresses a clear and real problem that has been left unaddressed
by the majority of literature on this topic. The approach is a pretty straight-forward
modification of existing reductions, but uses them in a new and unexpected way.
Weaknesses: The main weakness is that the paper feels poorly factored. There is
a very large number of back references to previous equations, and the paper would be
very hard to read in print. To actually follow the math, it's almost necessary to
read the paper with a pdf viewer which can display pop-up previews when hovering over links.
I think this would be remedied by better factoring the results into lemmas and propositions.
As noted, the approach is a fairly straight-forward modification of the results from
Cutkosky \& Orabona (2018) and Cutkosky (2020), and essentially boils down to
not dropping negative terms in the analysis, and then exposing matching terms
in the regret decomposition. I think this is fine overall; these considerations
are missing from the literature, and this is a fitting
place for them to enter the literature.
Technical Quality: 3
Clarity: 2
Questions for Authors: Do you think there could possibly be a more abstract way to formalize these universal algorithms? The strange thing about these universal algorithms is that a whole new algorithm seemingly needs to be devised every time one wants to incorporate a new kind of loss (e.g. MetaGrad -> Maler to handle strongly convex losses). The ideal result would more generally be a reduction which just passes the losses to each of the experts, maybe with some additional side-information, and lets them construct whatever surrogate loss they want with it. In this way there might just be one "final" paper on universal guarantees.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, the paper points out in the conclusion that the bounded domain / gradient assumption is
a significant limitation that they hope to address in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the constructive reviews! We will revise our paper accordingly.
---
**Q1:** The main weakness is that the paper feels poorly factored.
**A1:** We apologize for the confusion caused by the numerous references to previous equations. Due to the page limit, we attempted to include all the mathematical equations behind our key ideas in the main paper, which made it difficult to follow in print. We promise to polish our writing and will follow your suggestion to factor the results into lemmas and propositions.
---
**Q2:** Do you think there could possibly be a more abstract way to formalize these universal algorithms?
**A2:** Yes, there does exist a general framework in the study of universal online learning that allows expert algorithms to process the original functions [Zhang et al., 2022]. We are pleased to briefly describe the basic idea of their method here. Following the two-layer structure, the regret can be decomposed into the meta-regret and the expert-regret, that is,
$$
\sum_{t=1}^T f_t(x_t) -\sum_{t=1}^T f_t(x) = \underbrace{\sum_{t=1}^T f_t(x_t)-\sum_{t=1}^T f_t(u_t)}\_{\texttt{meta-regret}} + \underbrace{\sum_{t=1}^T f_t(u_t)-\sum_{t=1}^T f_t(x)}\_{\texttt{expert-regret}}
$$
where $x_t$ and $u_t$ denote the output of the meta-algorithm and expert in the $t$-th round. To bound the expert-regret, we can directly use existing online algorithms as the expert-algorithms that achieve optimal regret bounds for different types of convex functions, e.g., $O(\sqrt{T})$ and $O(\log T)$. To control the meta-regret, their meta-algorithm employs the linearized loss, i.e., $l_t(x)=\langle \nabla f_t(x_t), x-x_t\rangle$, to measure the performance of each expert. Furthermore, they also require the meta-algorithm to yield a second-order bound in terms of $l_t(\cdot)$. In this way, the meta-regret is small for exp-concave functions and strongly convex functions, i.e., $O(\log\log T)$, and is also tolerable for convex functions, i.e., $O(\sqrt{T})$. By incorporating existing online algorithms as experts, their approach inherits the regret of any expert designed for strongly convex functions and exp-concave functions, and also obtains minimax optimal regret for convex functions. We feel that this framework is close to the "final" paper on universal guarantees that you mentioned.
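To make the decomposition concrete, here is a minimal 1-D toy sketch (our own illustration; actual universal algorithms use ONS experts and meta-algorithms with second-order bounds, not plain Hedge) of the two-layer structure: experts run OGD tuned for different curvature assumptions, and the meta-algorithm aggregates them with exponential weights on the linearized losses $l_t(u) = \langle \nabla f_t(x_t), u - x_t \rangle$.

```python
import numpy as np

T, D, G = 2000, 2.0, 8.0
lam = 2.0                          # the losses below are lam-strongly convex
grad = lambda x: 2.0 * (x - 1.0)   # f_t(x) = (x - 1)^2, minimizer x* = 1
proj = lambda x: np.clip(x, -D, D)

u = np.array([0.0, 0.0])           # expert iterates: [convex-tuned, sc-tuned]
w = np.array([0.5, 0.5])           # meta weights
eta_meta = 1.0 / (G * D * np.sqrt(T))
total_loss = 0.0
for t in range(1, T + 1):
    x = float(w @ u)               # meta-algorithm plays the weighted average
    g = grad(x)
    total_loss += (x - 1.0) ** 2
    # meta update: exponential weights on the linearized expert losses
    w = w * np.exp(-eta_meta * g * (u - x))
    w = w / w.sum()
    # expert updates: OGD with convex-rate and strongly-convex-rate steps
    u[0] = proj(u[0] - D / (G * np.sqrt(t)) * grad(u[0]))
    u[1] = proj(u[1] - 1.0 / (lam * t) * grad(u[1]))

assert abs(float(w @ u) - 1.0) < 1e-2   # the combined iterate converges
assert total_loss / T < 0.05            # average regret vanishes
```

The point of the sketch is only the structure: the meta-algorithm never sees the curvature type, yet the aggregate tracks whichever expert's tuning matches the losses.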
We acknowledge that our work indeed follows this universal framework. However, to address new problems, we have to make essential modifications to this framework. In this paper, we focus on reducing the projection complexity of universal online learning, and have made innovative technical contributions based on this framework. These include specially designed expert-losses for expert-algorithms that handle strongly convex functions, and novel theoretical analysis that advances universal algorithms and the black-box reduction.
---
Rebuttal Comment 1.1:
Comment: Great! Thanks for the detailed explanation of Zhang 2022, this does indeed seem to be the kind of result I was hoping for
I believe the authors will be able to factor the results and polish the writing between now and the camera-ready. I think these are important results and the approach is quite nice, so I've raised my score to a 7.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer VeB9,
Many thanks for your kind reply! We will further enhance our paper according to your constructive reviews.
Best regards,
Authors | Summary: This paper addresses the challenge of online convex optimization with unknown smoothness properties of the loss functions, which can be convex, strongly convex, or exp-concave. The authors propose an algorithm that achieves regret bounds of order $\sqrt{T}$, $\log T$, and $d \log T$ respectively, while requiring only a single projection step per round on the original domain $\mathcal{X}$. Such projections can indeed be computationally expensive. Additionally, the authors present regret bounds with improvment for small losses.
Most algorithms that achieve similar adaptive regret upper bounds rely on meta-algorithms that combine experts (running ONS or OGD with surrogate losses), inspired by the MetaGrad algorithm. Typically, these algorithms necessitate $\log(T)$ projection steps per round (one per expert), which can be computationally burdensome. Mhammedi et al. (2019) reduced this projection cost to $O(1)$ but at the expense of a $d \log T$ regret for strongly convex losses. To overcome this, the authors introduce new surrogate losses based on a black-box reduction technique by Cutkosky et al. (2018), which simplifies the constrained optimization problem on $\mathcal{X}$ to another domain, such as the Euclidean ball, where projections are easier.
Strengths: - The paper is well-written and offers valuable insights into the use of surrogate losses to adapt to strong convexity or exp-concavity. It may serve as a comprehensive entry point into the extensive literature on universal OCO algorithms.
- Despite combining various existing techniques, the results are non-trivial and required solving technical challenges, especially for the strongly convex case. The authors introduce novel negative terms in the analysis to achieve their results.
- Experiments included in the appendix demonstrate that the computational improvements can be significant.
Weaknesses: - The theoretical improvements may appear incremental, appealing mainly to a niche audience; the improvement lies only in the specific case of achieving $\log T$ regret for strongly convex losses with $O(1)$ projection steps. The primary high-level ideas in the algorithm and analysis are based on prior work.
- The paper still relies on a meta-aggregation procedure, which, although theoretically effective, is not particularly elegant and maintains a per-round complexity of order $O(\log T)$. Achieving $O(1)$ complexity per round seems however highly challenging.
- The convex rate is actually $O(\sqrt{T \log\log T})$, not $O(\sqrt{T})$ as stated in the results.
Technical Quality: 4
Clarity: 4
Questions for Authors: - The algorithm requires prior knowledge of the parameters $G$ and $T$, would simple doubling trick allow to tune these?
- Your algorithm still requires O(1) projection steps on $\mathcal{X}$ and O(log T) projection steps on $\mathcal{Y}$. Do you think that projection free algorithms such as variants of Online Frank Wolfe could be used instead of OGD and ONS (up to deteriorating slightly the rate) to remove all projections (or at least the O(log T) on $\mathcal{Y}$) while still being adaptive to the smoothness?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the constructive reviews! We will revise our paper accordingly.
---
**Q1:** Would simple doubling trick allow to tune prior knowledge of the parameters $G$ and $T$?
**A1:** In fact, the doubling trick enables our proposed algorithm to avoid requiring prior knowledge of $T$, at the cost of an additional $\log T$ factor in the regret bound. However, this trick is inadequate for removing the requirement of prior knowledge of $G$, since the variability of the function gradients is unknown. Therefore, designing Lipschitz-adaptive online learning algorithms is a highly challenging task and merits further exploration in the future.
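As an aside, the standard doubling trick mentioned above can be sketched in a few lines. This is an illustrative sketch only (not code from the paper); `make_learner` and its `step` interface are hypothetical stand-ins for a horizon-dependent algorithm:

```python
def doubling_trick(make_learner, total_rounds):
    """Run a horizon-dependent learner without knowing T in advance.

    make_learner(T) returns a fresh learner tuned for a guessed horizon T.
    Each learner runs for its guessed horizon, then the guess doubles.
    The restarts cost only an extra logarithmic factor in the regret.
    """
    decisions = []
    t = 0
    guess = 1
    while t < total_rounds:
        learner = make_learner(guess)
        for _ in range(min(guess, total_rounds - t)):
            decisions.append(learner.step())
            t += 1
        guess *= 2
    return decisions
```

Note that this only removes the dependence on $T$; as discussed above, the unknown gradient bound $G$ cannot be handled the same way.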
---
**Q2:** Your algorithm still requires $O(1)$ projection steps on $\mathcal{X}$ and $O(\log T)$ projection steps on $\mathcal{Y}$.
**A2:** We would like to clarify that, in contrast to the expensive projection operations onto the complicated domain, projection operations on the surrogate domain $\mathcal{Y}$ are much cheaper and can be considered negligible. In this work, we construct a ball $\mathcal{Y}=\\{x | \Vert x\Vert\leq D \\}$ as the surrogate domain. For the OGD algorithm, the projection operation of the reduced algorithm on $\mathcal{Y}$ can be realized by a simple rescaling, i.e., $y_t = \hat{y}_t$ if $\Vert \hat{y}_t\Vert \leq D$; otherwise $y_t=\hat{y}_t \cdot \frac{D}{\Vert \hat{y}_t\Vert}$, where $\hat{y}_t$ denotes the unprojected decision and $y_t$ denotes the decision on the ball $\mathcal{Y}$. For the ONS algorithm, there exists an efficient implementation of ONS in the literature, where the time-consuming projection operation can be replaced by a more efficient singular value decomposition [Lemma 7, Mhammedi et al., 2019]. After combining all the experts' decisions by a meta-algorithm, we project the solution in $\mathcal{Y}$ onto $\mathcal{X}$, which is the only projection onto $\mathcal{X}$ per round.
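For illustration, the rescaling step described above admits a very short implementation. This is a generic sketch of Euclidean-ball projection (using plain Python lists), not code from the paper:

```python
import math

def project_onto_ball(y_hat, D):
    """Project y_hat (a vector as a list of floats) onto {x : ||x|| <= D}.

    As described above: if y_hat already lies inside the ball, return it
    unchanged; otherwise rescale it onto the boundary by D / ||y_hat||.
    """
    norm = math.sqrt(sum(v * v for v in y_hat))
    if norm <= D:
        return list(y_hat)
    return [v * D / norm for v in y_hat]
```

This closed-form projection is what makes the surrogate domain $\mathcal{Y}$ cheap to work with, in contrast to a general convex domain $\mathcal{X}$ where projection may require solving an optimization problem.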
---
**Q3:** Do you think that projection-free algorithms such as variants of Online Frank Wolfe could be used instead of OGD and ONS to remove all projections while still being adaptive to the smoothness?
**A3:** Thanks for your insightful comments! We can choose projection-free algorithms, such as variants of Online Frank Wolfe, as the expert-algorithms. However, given the current studies on projection-free algorithms (summarized in the table below), ***this approach may suffer a deterioration of the regret bound and cannot handle certain cases***.
More specifically, as shown in the table, there are no suitable projection-free algorithms in the literature for exp-concave functions, nor for strongly convex and smooth functions. Moreover, when functions are smooth, existing projection-free algorithms are unable to achieve problem-dependent bounds, such as the small-loss bounds in this work.
| Algorithm | Condition on Loss | Regret Bound |
| :---------: | :--------------------: | :----------: |
| OFW [1] | convex | $O(T^{3/4})$ |
| OSPF [2] | convex and smooth | $O(T^{2/3})$ |
| SC-OFW [3] | strongly convex | $O(T^{2/3})$ |
| AFP-ONS [4] | exp-concave and smooth | $O(T^{2/3})$ |
Finally, we would like to highlight that although using projection-free algorithms can remove all projections, they may not achieve greater efficiency based on the universal framework. Specifically, most projection-free algorithms, such as OFW and its variants, replace the original projection operation with a linear optimization step. Since the universal framework requires maintaining $O(\log T)$ expert-algorithms, this approach needs to perform $O(\log T)$ linear optimization steps per round, which can be time-consuming when $T$ is large.
**References:**
[1] Hazan and Kale. Projection-free Online Learning. ICML, 2012.
[2] Hazan and Minasyan. Faster Projection-free Online Learning. COLT, 2020.
[3] Wan et al. Projection-free Online Learning over Strongly Convex Sets. AAAI, 2021.
[4] Garber and Kretzu. Projection-free Online Exp-concave Optimization. COLT, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. This addresses some of the points I raised. I have no further questions and will await the discussion with the other reviewers.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer aw3E,
Thank you very much for your kind reply! We will further improve our paper.
Best regards,
Authors | Summary: This paper studies universal OCO algorithms with fewer projections. Previous work either use $O(\log T)$ projections per round, or have a sub-optimal dependence on $d$ for strongly-convex loss. This work designs a new surrogate loss to achieve tight regret for Lipschitz convex/exp-concave/strongly-convex losses simultaneously, with only 1 projection per round.
Strengths: The technical contributions are solid: this paper makes a strict improvement over previous results.
The paper is very well-written, clearly introducing the challenges and the main ideas. Details of the analysis and algorithm are nicely explained.
Weaknesses: The contribution seems somewhat incremental to me. The only improvement is a factor of $d$ for strongly-convex loss. Such a result is nice to know, but I'm not sure how significant it is. In addition, the technical novelty isn't significant either.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the constructive reviews. We will revise our paper accordingly.
---
**Q1:** The significance of our result.
**A1:** We emphasize that the $d$-dependence is significant in online learning studies. We provide the following elaborations and will also revise the paper to highlight this significance more clearly.
* For online convex optimization, knowing the minimax optimality regarding $d$ and $T$ is a fundamental question to investigate. It is well-known that for convex/exp-concave/strongly convex functions, the minimax rate is $O(\sqrt{T})$, $O(d\log T)$ and $O(\log T)$, respectively. Therefore, in universal online learning, achieving regret bounds that match these minimax rates is also of great significance. As explicitly mentioned by the authors of MetaGrad, van Erven et al. [1], the extra factor $d$ in the regret bound for strongly convex functions is a "real limitation". Our results have successfully achieved minimax optimality in terms of $d$ and $T$ through non-trivial technical ingredients.
* Numerous studies in online learning are dedicated to improving regret bounds by reducing the dependence on $d$. For example, prior work on private OCO achieved an $\tilde{O}(\sqrt{dT}/\epsilon+\sqrt{T})$ regret bound [2], where $\tilde{O}(\cdot)$ hides polylog factors in $T$. In a subsequent study, this result was improved to $\tilde{O}(d^{1/4}\sqrt{T}/\sqrt{\epsilon}+\sqrt{T})$ [3], which is a significant advancement. Other examples include improving the $d$-dependence for multi-armed bandits, bandit convex optimization, etc.
Therefore, we believe that the result in this paper is significant.
**Reference:**
[1] van Erven et al. MetaGrad: Adaptation using Multiple Learning Rates in Online Learning. JMLR, 2021.
[2] Jain et al. Differentially Private Online Learning. COLT, 2012.
[3] Kairouz et al. Practical and Private (Deep) Learning Without Sampling or Shuffling. ICML, 2021.
---
**Q2:** The significance of our technical novelty.
**A2:** We would like to take this chance to emphasize our technical novelty.
* **Challenge:** This work focuses on universal algorithms with $1$ projection per round. Prior work applied the black-box reduction to an existing universal algorithm, i.e., MetaGrad, which achieves optimal regret for exp-concave and convex functions with $1$ projection per round. The basic idea is to cast the original problem on the constrained domain $\mathcal{X}$ to an alternative one of the surrogate loss on a simpler domain $\mathcal{Y}$. To handle strongly convex functions, a straightforward way is to use a universal algorithm that supports strongly convex functions, e.g., Maler, as the black-box subroutine. However, this black-box approach fails to derive a tight regret bound for strongly convex functions, because we are unable to bound the term
$$
\tilde{O}\left(\sqrt{\sum_{t=1}^T \Vert y_t-x\Vert^2} -\sum_{t=1}^T \Vert x_t-x\Vert^2\right),
$$
where $y_t\in\mathcal{Y}$ is the fake decision on the surrogate domain, and $x_t\in\mathcal{X}$ is the true decision on the feasible domain. The above term is unmanageable since $\Vert y_t-x\Vert\geq\Vert x_t-x\Vert$. To the best of our knowledge, this technical challenge is entirely new for both universal algorithms and the black-box reduction, with no prior work to draw upon.
* **Analysis:** In this work, we address the above challenge in two steps. First, we introduce a novel meta-expert decomposition and, motivated by this, we specifically design a novel expert-loss. This expert-loss offers the advantage of bounding the meta-regret by the following term,
$$
\tilde{O}\left(\sqrt{\sum_{t=1}^T\langle\nabla g_t(y_t),y_t-y_t^i\rangle^2}-\sum_{t=1}^T\Vert x_t-y_t^i\Vert^2\right).
$$
Second, to bound the above difference, we dig into the properties of the surrogate loss of the black-box reduction. The technical novelty of our theoretical analysis is that the second-order bound in terms of the decision on the surrogate domain (fake decision) can be converted into one in terms of the decision on the feasible domain (true decision), at the cost of adding an additional positive term $\Delta_T$ (see Lemma 7 for details), that is
$$
\sum_{t=1}^T\langle\nabla g_t(y_t),y_t-y_t^i\rangle^2\leq O\left(\sum_{t=1}^T\langle\nabla g_t(y_t),x_t-y_t^i\rangle^2\right) +\Delta_T,
$$
where $\Delta_T$ is defined in our paper. Furthermore, compared to previous work on black-box reduction, we also prove a tighter connection between the original function and the surrogate loss (See Lemma 3 for details), which introduces a negative term $-\Delta_T$ that can automatically offset the cost arising from the aforementioned conversion. By combining these two bounds, we are able to control the meta-regret under the strong convexity condition, thus ensuring that it does not affect the algorithm's optimality. Therefore, we believe that the technical novelty in our theoretical analysis is valuable for both universal algorithms and the black-box reduction.
* **Applications:** Two-layer algorithms are widely adopted in modern online learning, including the universal online learning studied in our paper and also non-stationary online learning. Therefore, our developed techniques can also be useful for non-stationary online learning, especially for strongly convex/exp-concave functions. For example, Baby and Wang [4] propose a two-layer algorithm with optimal dynamic regret for strongly convex functions, which also suffers from $O(\log T)$ projection complexity. By applying our technique to a strongly adaptive algorithm with a second-order bound, it is promising that the number of projections of their method can be reduced to $1$ per round.
Overall, we believe that the technical novelty of our work is significant. Thanks for your comments, and we will revise the paper to further highlight the technical novelty.
**References:**
[4] Baby and Wang. Optimal dynamic regret in proper online learning with strongly convex losses and beyond. AISTATS, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response!
Given that optimal dependence on $d,T$ is already known for each sub-case, the $d$ dependence improvement concerns how we aggregate them to build universal algorithms. In my opinion, such a $d$ dependence improvement is not as significant because the study of universal algorithms mostly remains at the theory level so far: how useful universal algorithms are practically is in question, and there seem to be far fewer scenarios that crucially require universal algorithms than scenarios for each sub-case.
As a result, I will maintain my score. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates | Accept (poster) | Summary: This paper proposes a mitigation strategy called "pure tuning, safe testing" (PTST) to mitigate harmful fine-tuning issues for LLMs. The strategy is very simple: use a safety system prompt at inference time but fine-tune without such a prompt. The core philosophy is that harmful knowledge in the fine-tuning stage is learned without a safety prompt, so at inference time the added safety prompt keeps that harmful knowledge from being activated.
Strengths: 1. The studied problem -- harmful finetuning for LLMs by itself is important and has raised widespread public interest among the community, and this paper is one of the early batches of papers to propose a timely analysis and mitigation strategy for the problem.
2. Comprehensive evaluation is conducted to show the effectiveness of the method.
3. The paper is well-written, and the proposed strategy is simple enough to understand, which I think may raise the common interest among the community.
Weaknesses: 1. The core issue of PTST is the following: given that the system prompt changes between fine-tuning and testing, it is unclear to me why the helpfulness is not degraded while the harmfulness is lowered. Both benign helpful knowledge and harmful knowledge are learned with the fine-tuning system prompt, so changing the template at inference time should, in my view, simultaneously lower helpfulness and harmfulness. However, this is not the case in Table 2(a) and Table 3(a), which indicates that changing the template will not always lower helpfulness (sometimes it even increases helpfulness, e.g., CA->CL). I conjecture the reason is that the CL prompt is longer, which elicits better helpfulness performance. An explanation for this phenomenon would be appreciated.
2. The observation in Section 4 that mixing safety data can reduce ASR is available in Vlguard Zong et al. [2024]. I understand that this is a concurrent finding, but it would be nice if the authors could mention and discuss this in Section 4.
3. The experimental results are not intuitive enough. Particularly, I think it is not ideal to use so many prompts (e.g., TV, TA,CV,CA,CL) for comparison. When I am reading Table 2, I am confused about which one is a safety prompt and which one is not a safety prompt, and therefore, I cannot immediately get the intuition shown by the results.
4. The literature review seems to be comprehensive, but there are a few related works missing. Since (Qi et al, 2024), there are a few mitigation solutions proposed to address the same challenges. I would appreciate it if the authors could appropriately cite and discuss these literature:
------------------Before NeurIPS review cycle------------------
[1] Fine-tuning can cripple your foundation model; preserving features may be the solution https://openreview.net/forum?id=VQ7Q6qdp0P (ICLR2024 template)
[2] Immunization against harmful fine-tuning attacks https://arxiv.org/pdf/2402.16382 (ICLR2024 workshop template)
------------------concurrent------------------
[3] Representation noising effectively prevents harmful fine-tuning on LLMs https://arxiv.org/pdf/2405.14577 (NeurIPS2024 template)
[4] Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning https://arxiv.org/abs/2405.18641 (NeurIPS2024 template)
[5] No Two Devils Alike: Unveiling Distinct Mechanisms of Fine-tuning Attacks https://arxiv.org/pdf/2405.16229 (NeurIPS2024 template)
[6] Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models https://arxiv.org/pdf/2405.16833v1 (NeurIPS2024 template)
[7] A safety realignment framework via subspace-oriented model fusion for large language models https://arxiv.org/pdf/2405.09055 (Elsevier journal template, first available May 2024)
[8] Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models https://arxiv.org/abs/2405.17374 (NeurIPS2024 template)
I am aware that some of the listed work is concurrent work (e.g., con-current submissions to NeurIPS 2024). However, it is encouraged to also cite and discuss them, because that will be beneficial for the development of the research field (but the authors should at least cite those existing works that appeared before the NeurIPS2024 review cycle).
5. Baselines for comparison are lacking. As there are already a few mitigation strategies for harmful finetuning issues, I suggest the authors add one or two baselines, e.g., Vaccine[Huang et al., 2024] for comparison.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness part. I still have questions regarding the results:
In Table 2(b)(c), why does changing CL->CV increase the harmfulness while CL->CA decreases it?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations, but I suggest the authors discuss the potential impact that helpfulness may be lowered by changing the template (see the first weakness), although this is not that apparent in the authors' experimental results.
Overall, I believe that the idea of changing the template in finetuning/testing should reduce the risk of harmful finetuning, but of course, should come with the potentially negative impact that downgrades the finetune performance. I am willing to increase the score, as long as my detailed questions are answered and the limitation is clearly discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive and constructive feedback. Below, we address the reviewer’s questions.
**Q1** (rephrased): It would be appreciated if the authors could provide an explanation for why using different templates does not simultaneously lower the helpfulness while the harmfulness is lowered. Is it because longer prompts such as CL can elicit better helpfulness performance?
**A1:** Thanks for the reviewer’s suggestion. We respond to this question in our general response. Our hypothesis is that there is some compositional generalization ability arising from the pretraining and alignment phases (prior to the fine-tuning phase we study). Understanding this would be an interesting future direction.
The length of the prompt may also be a factor, but Figure 3(b) shows that a shorter version of CL (llama-short) can achieve an even better helpfulness than CL.
-----------------------
**Q2:** The VLGuard paper (Zong et al., 2024) also observed that mixing safety data can reduce ASR. It would be nice if the authors could mention and discuss it in Section 4.
**A2:** Yes, the VLGuard paper curated a vision-language safety instruction-following data and showed that mixing this data during fine-tuning can mitigate safety issues. In our NeurIPS submission, we have already included a discussion on this work in Section 5 (Related Works). We will also cite this paper in the first paragraph of Section 4.
Meanwhile, we want to clarify that the main point of our Section 4 is not to repeat the point that mixing some safety data mitigates the safety issue, but to show that adding the PTST strategy on top of mixing the safety data can further enhance the safety performance.
-----------------------
**Q3** (rephrased): Table 2 includes too many prompt templates, making it hard for readers to see which ones contain safety prompts.
**A3:** Below, we respond to the review assuming the reviewer was talking about Table 1, since Table 2 only contains 3 templates.
We apologize for any inconvenience caused by the presentation of Table 1. In the next version, we will add a “\*” to the prompt template (CL) that contains a safety prompt. In the current version, blue cells stand for training and testing with the same templates and orange cells stand for PTST. We will revise the caption to explain the meaning of these colors and the “\*” symbol to be added.
-----------------------
**Q4:** The literature review seems to be comprehensive but still some related works are missing.
**A4:** Thanks for listing so many related works before and concurrent to our work. We will definitely cite and discuss them in the next version of our paper!
-----------------------
**Q5:** I suggest the authors add one or two baselines for comparison, e.g., Vaccine (Huang et al., 2024).
**A5:** Thanks for the suggestion. We added Self-Reminder (SR) and In-context Defense (ICD) as two lightweight baselines. Please see the general response above for the results.
Comparing with Vaccine would also be interesting, but we want to note that this method is orthogonal to our PTST. Vaccine is a robust alignment method that works in the alignment phase (prior to the fine-tuning phase we study). It can make the model’s safety more robust to fine-tuning, but even if Vaccine works well, one can still add PTST during the fine-tuning phase to enhance the model’s safety even further. In our new experiments, we choose to focus on comparing with baselines that work in the fine-tuning phase.
-----------------------
**Q6:** In Table 1(b)(c), why does changing CL->CV increase harmfulness while changing CL->CA decreases it?
**A6:** We thank the reviewer for carefully reviewing our results. First, we would like to clarify our main point in Table 1:
1. There exist many cases where using different training and testing templates leads to lower ASR than using the same one, while still improving the model on helpfulness. This can be seen by comparing diagonal entries and off-diagonal entries in Table 1.
2. However, not all off-diagonal entries are safer than diagonal entries, just as the reviewer points out.
3. Among the off-diagonal entries, the PTST strategy consistently succeeds. That is, when the training template does not have a safety prompt (e.g., TV, TA, CV, CA) and the testing template has a safety prompt (e.g., CL in Table 1 and three other such templates we tried in Figure 3), the ASR is consistently lower than training with the same prompt template and the helpfulness consistently improves. This can be seen by comparing the diagonal entries and the orange-colored off-diagonal entries in Table 1.
Regarding the reviewer’s question on the results of CL->CV and CL->CA, we would like to point out that these strategies are not PTST, and we do not claim that they will consistently reduce or increase the ASR. Besides PTST, there may be other strategies that consistently work, and exploring these other strategies can be an interesting future direction.
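To make the train/test template distinction concrete, here is a hypothetical sketch of how PTST-style prompts could be assembled. The template string is loosely modeled on a Llama-2-style chat format and the safety prompt text is a placeholder; neither is the exact template from the paper:

```python
# Placeholder safety prompt; the actual safety prompts studied in the
# paper (e.g., Llama's default system prompt) differ.
SAFETY_PROMPT = "You are a helpful and harmless assistant. Refuse unsafe requests."

def build_prompt(user_msg: str, use_safety_prompt: bool) -> str:
    """Assemble a chat-style prompt, loosely modeled on the Llama-2 chat
    format. PTST: fine-tune WITHOUT the safety system prompt, but include
    it at test (inference) time."""
    system = SAFETY_PROMPT if use_safety_prompt else ""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_msg} [/INST]"

# Pure tuning: fine-tuning examples are formatted without the safety prompt.
train_prompt = build_prompt("Solve: 2 + 2 = ?", use_safety_prompt=False)
# Safe testing: inference adds the safety prompt back in.
test_prompt = build_prompt("Solve: 2 + 2 = ?", use_safety_prompt=True)
```

In the paper's notation, this corresponds to training with a template without a safety prompt (e.g., TV, TA, CV, CA) and testing with one that has it (e.g., CL).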
---
Rebuttal 2:
Title: Thanks for the rebuttal
Comment: For Q3, I am referring to Table 2. After one month, I again forgot how to interpret Table 2. I think it is generally not a good idea to use a table showing training/testing combinations. Dumb people (like me) may not easily interpret your results (the many abbreviations for prompt templates make the situation worse). Perhaps it would be better to provide at least one easier-to-interpret table to show your performance?
It is also interesting to see that the method can be combined with alignment stage solution, e.g., Vaccine. Could you discuss a little bit about this potential combination in the next version of the paper? (perhaps in the last section when you talk about conclusion and future direction)?
---
Rebuttal Comment 2.1:
Comment: We would like to thank the reviewer's suggestion. In the next version, we will definitely try our best to present Table 2 in a way that is more interpretable to readers. We will also expand our discussion on potential combinations with the alignment stage solution in the last section (Conclusions) and cite related papers (e.g., Vaccine). | Summary: This paper shows that the prompt templates used during fine-tuning and inference play a crucial role in safety alignment. Then, the authors propose to fine-tune models without a safety prompt, but include it at test time (user inference), which is counter to intuition. The authors demonstrate their method in the following experiments: when using the same prompts during training on GSM8K and testing, attack success rate increases for a Llama-2-Chat model (the authors considered 5 different prompts). The authors also show the same trend across models GPT-3.5 Turbo, Mistral-7B-Instruct-v0.2, and Llama-2-7b-chat, and across datasets ChatDoctor and OpenOrca.
Strengths: This is a paper that points out a new direction in safety alignment for fine-tuning language models. The paper is written very clearly, with a novel method and supporting experimental results.
Weaknesses: Improvements to this paper can be made from the following aspects: (1) there still seems to be noise in the experiment results, although that does not take away the novelty in proposing the PTST approach, (2) there should be more discussion about implications of PTST
Technical Quality: 3
Clarity: 3
Questions for Authors: I have two specific questions and one general question:
Q1. This is a question regarding the experiment results, which I find convincing but nevertheless flawed. In Table 1, (b) shows the trend that this paper is arguing for, specifically that training and testing on the same prompt template makes attacking easier. However, I question whether that is generally the case? The trainTA-testCV entry suffers the same ASR as trainCV-testCV in (b), and the trainCV-testTV entry suffers even higher ASR than trainTV-testTV in (d). I don't think these outliers invalidate the general trend of PTST results, but I still question how universally applicable PTST will be, in the sense that it is unclear whether there is a "PTST prompt" that will perform well under all scenarios? Perhaps there is a hidden confounder at play here?
Q2. Is there a particular reason that TV and TA is not included in the further experiments like GPT-3.5 (judging from Table 1, there are certain cases when TV and TA perform the best)?
Q3. This is a high-level question about the message that this paper is sending. Throughout the experiments in this paper, it seems like there is always a tradeoff between helpfulness and ASR. Philosophically, is that really the case? I personally like the paper and believe that PTST is an interesting new direction of research, but I wonder whether a sufficiently intelligent machine still needs to give up either helpfulness or safety? I think the paper is lacking in discussion about the implications of PTST, and how the PTST method might inspire future papers to explore the direction of safety-aligned fine-tuning.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed limitations in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for acknowledging that the paper is clearly written and that the proposed PTST strategy is novel. Below, we address the reviewer’s questions.
**Q1:** This paper argues that training and testing on the same prompt template makes attacking easier. However, the trainTA-testCV entry suffers the same ASR as trainCV-testCV in Table 1(b), and the trainCV-testTV entry suffers an even higher ASR than trainTV-testTV in 1(d).
**A1:** Thank the reviewer for carefully reviewing our results. In fact, we would like to clarify that we do NOT claim that using different training and testing templates is *always* safer than training with the same template, but there exist many such cases, and PTST is a particular case where such a phenomenon consistently happens. More specifically, our claim is the following:
1. There exist many cases where using different training and testing templates leads to lower ASR than using the same one, while still improving the model on helpfulness. This can be seen by comparing diagonal entries and off-diagonal entries in Table 1.
2. Among these cases, the PTST strategy (described in the gray box on Page 2) consistently succeeds. That is, when the training template does not have a safety prompt (e.g., TV, TA, CV, CA) and the testing template has a safety prompt (e.g., CL in Table 1 and three other such templates we tried in Figure 3), the ASR is consistently lower than training with the same prompt template and the helpfulness consistently improves. This can be seen by comparing the diagonal entries and the orange-colored off-diagonal entries in Table 1.
We will make this point more clear in the next version of our paper.
-------------------
**Q2:** Is there a particular reason that TV and TA is not included in the further experiments like GPT-3.5
**A2:** Yes, the main reason is that we are fine-tuning GPT-3.5 with OpenAI’s APIs, but these APIs do not support TV and TA and only accept chat-mode data represented as a list of system, user and assistant messages.
-------------------
**Q3:** Throughout experiments in this paper, it seems like there is always a tradeoff between helpfulness and ASR. Philosophically, is that really the case? I wonder whether a sufficiently intelligence machine still needs to give up either helpfulness or safety? The paper is also lacking in discussion about the implications of PTST, and how the PTST method might inspire future papers to explore the direction of safety-aligned fine tuning.
**A3:** We thank the reviewer for the suggestion on adding more high-level discussion. We will definitely do so in the next version of the paper.
For existing models, Table 2 and Figure 2(b) showed results on GPT-3.5 Turbo, which is more intelligent than Llama 2-Chat, but still GPT-3.5 Turbo suffers from safety issues after custom fine-tuning with benign data. In this situation, PTST is helpful to mitigate such issues.
Although we cannot perfectly predict the future, we believe that a model that is good at both helpfulness and safety won’t appear just by adding more data, as discussed in many existing papers [1, 2]. We indeed need better designs of pretraining and alignment methods to achieve this goal.
For designing better pretraining and alignment methods, it would be helpful if one could know better how the model is going to be used after fine-tuning. Our paper points out the PTST principle for fine-tuning the model after the alignment phase, and an interesting future direction is to identify important factors in pretraining and alignment methods that make PTST work for current models, and improve these methods to make PTST work better in future models. Please also see our general response for a more detailed discussion.
[1]. Jailbroken: How Does LLM Safety Training Fail? https://arxiv.org/abs/2307.02483
[2]. Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training https://arxiv.org/abs/2401.05566
---
Rebuttal Comment 1.1:
Title: I recommend acceptance
Comment: Many thanks to the authors for carefully responding to my questions and answering all of them. I have carefully evaluated all the reviews and responses. Seeing that the only reject score is given by Reviewer jeuL and their questions have also been sufficiently answered, I would happily recommend this paper for acceptance in its current state. | Summary: This paper addresses a critical issue, i.e., LLMs' loss of safety after being fine-tuned. The authors focus on the prompt templates used during fine-tuning and testing, which leads to the main observation that fine-tuning with the vanilla template and testing with the safe template yields the best robustness.
Strengths: (1) Understanding the effect of fine-tuning on the LLM safety through prompt templates is novel.
(2) The PTST strategy shows promising performance gains when compared with the common strategy where a template is consistently used.
(3) The authors conducted experiments on several templates, models, and datasets.
Weaknesses: (1) The authors leave the understanding of the PTST strategy to future work and very limited discussion on the underlying mechanism of PTST can be found. Although it might be hard to develop a rigorous theory explaining the strategy, I still feel it necessary for the authors to at least propose some hypotheses and try to verify them with concrete experiments.
(2) There are cases when the helpfulness of models is notably decreased if we adopt the PTST rule, such as (TV, CL) and (TA, CL) for Llama-7B.
(3) Some lightweight defenses such as Self-Reminder [1] and ICD [2] can be incorporated into the (CL, CL) training scheme, which will serve as good baselines for PTST. Comparison with safeguarding algorithms can help readers better understand the significance of PTST.
[1] Defending ChatGPT against jailbreak attack via self-reminders; Xie et al.; Nature
[2] https://arxiv.org/abs/2310.06387
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) PTST seems to be a general principle to follow when fine-tuning aligned LLMs and the templates considered are restricted to several existing ones. I am wondering whether this principle can help us design better prompt templates for fine-tuning.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for acknowledging the novelty and promising performance of PTST. Below, we address the reviewer’s questions.
**Q1:** I feel it necessary for the authors to at least propose some hypotheses on the underlying mechanism of PTST and try to verify them with concrete experiments. I also wonder whether this principle can help us design better prompt templates for fine-tuning.
**A1:** Thanks for the reviewer’s suggestion. We have responded to this question in our general response.
* Our hypothesis is that there is some compositional generalization ability arising from the pretraining and alignment phases (prior to the fine-tuning phase we study). Understanding this would be an interesting future direction.
* The PTST principle may be helpful for designing better prompt templates. It recommends searching for new training templates without mentioning safety and for new testing templates among those emphasizing safety.
----------------------
**Q2:** There are cases when the helpfulness of models is notably decreased if we adopt the PTST rule, such as (TV, CL) and (TA, CL) for Llama-7B.
**A2:** We would like to thank the reviewer for carefully reviewing our results. It is true that the helpfulness of the models with training and testing templates TV -> CL and TA -> CL is notably lower than that of models using the same training and testing template. However, we want to note the following two points:
* Still, the helpfulness is improved after fine-tuning according to the PTST rule. E.g., GSM8K accuracy is 15.31 under No FT -> TV and 6.52 under No FT -> CL, but TV -> CL gives 23.76. This suggests that PTST consistently improves the helpfulness upon the model without fine-tuning, though in some cases PTST may lead to worse helpfulness than the model fine-tuned and tested with the same template.
* Whether PTST can lead to comparable helpfulness as using the same training and testing templates can depend on the model. For example, on Mistral-7b-Instruct-v0.2, both TV -> CL and TA -> CL give comparable helpfulness improvement as using the same training and test templates (see Table 6). Thus, an interesting future direction is to explore whether we can improve the aligned model in the alignment or even the pretraining phase to make the helpfulness improvement more robust to the change of template. See also the general response above for a detailed discussion.
----------------------
**Q3:** Some lightweight defenses such as Self-Reminder [1] and ICD [2] can be incorporated into the (CL, CL) training scheme, which will serve as good baselines for PTST. Comparison with safeguarding algorithms can help readers better understand the significance of PTST.
**A3:** Thanks for the suggestion. We added Self-Reminder (SR) and In-context Defense (ICD) as two other lightweight baselines. Please see the general response above for the results.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Hi,
I appreciate the authors' effort in conducting the extra experiments and find the results helpful.
However, the authors still fail to give any experiment/theory-grounded explanation for the effectiveness of PTST even though three out of four reviewers have raised the concern. From my perspective, it is simply not enough to convince the community with repeated experiments on model A/B/C + setting D/E/F. Without such rigorous verification (or at least some basic attempts), I am always doubtful about the correctness of the proposed rule on new models (e.g. Llama-3-Instruct), new datasets (e.g. Function Calling Extended), new fine-tuning strategies (e.g. QLoRA), and new chat templates (e.g. those used by the QWen model herds or templates in languages other than English.)
In summary, I will keep my score since Q1 is not well addressed, and I personally do not favor recommending the paper as an oral/award paper. | Summary: This paper discusses the issue of maintaining model consistency after fine-tuning large language models (LLMs). The research team, through extensive experiments, found that the prompt templates used during fine-tuning and inference play a crucial role in maintaining model safety. The paper proposes the "Pure Tuning, Safe Testing" (PTST) principle, which involves not using safety prompts during fine-tuning but incorporating them during testing to significantly reduce the occurrence of unsafe behaviors.
Strengths: 1. Through extensive experiments, it is demonstrated that prompt templates are crucial for maintaining safety during both training and testing.
2. The PTST approach is proposed, which improves safety performance.
Weaknesses: 1. Why fine-tune on math datasets (gsm8k, Orca-Math) to verify the model's safety? How does the performance compare when fine-tuned on safety-specific datasets, such as Anthropic/hh-rlhf?
2. The experiments on PTST are insufficient, as they do not adequately compare the effectiveness of the approach with current alignment algorithms such as PPO, DPO, and KTO, among others.
3. This paper proposes the PTST algorithm, but it is a training technique and lacks a certain level of innovation.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. It would be interesting to see whether the approach also scales to more datasets, such as hh-rlhf, or a combination of GSM8k and hh-rlhf for mixed training.
2. Could you explain what the core contributions of the PTST algorithm are? How does it differ from algorithms like DPO?
3. How does the performance of PTST compare to Aligner [1] on larger-scale datasets?
[1] Ji J, Chen B, Lou H, et al. Aligner: Achieving efficient alignment through weak-to-strong correction. arXiv preprint arXiv:2402.02416, 2024.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: 1. Is the PTST algorithm still effective with an increasing amount of data or the introduction of mixed datasets?
2. Lacks comparison with other methods for improving LLM safety (Aligner, DPO, ...).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for reviewing our paper and for acknowledging our experiments as extensive. However, we’d like to point out that the reviewer’s main comment (1, 2 below) under “weakness” (as well as Question 2, Question 3 and Limitation 2) suggests they **may not** have absorbed the core idea of our paper.
> 1. Why fine-tune on math datasets (gsm8k, Orca-Math) to verify the model's safety? How does the performance compare when fine-tuned on safety-specific datasets, such as Anthropic/hh-rlhf?
> 2. The experiments on PTST are insufficient, as they do not adequately compare the effectiveness of the approach with current alignment algorithms such as PPO, DPO, KTO, among others.
The starting point of our paper is that often end-users try to improve the capabilities of aligned models (such as llama chat) on specialized tasks like GSM8K. This fine-tuning leads to a loss of safety alignment (i.e., a rise in unsafe responses) if one follows the common practice that uses the same training and testing prompt templates, as discussed in the introduction (Section 1; see also Appendix C). Our paper proposes a lightweight trick (called PTST) that avoids such a drastic loss of alignment during post-alignment fine-tuning (see, e.g., Table 1 for safety with/without our method).
### Clarification of Misunderstanding
The reviewer asks for the comparison of PTST and alignment algorithms in Weakness 2, Question 2, Question 3 and Limitation 2 and for experiments on the human preference dataset hh-rlhf in Weakness 1 and Question 1. However, PTST and alignment algorithms are used in different stages of LLM training and deployment. As highlighted in Sections 1 and 2, our paper studies the scenario where a model owner fine-tunes an **already aligned** LLM to enhance a certain helpfulness metric, e.g., fine-tune on GSM8k to improve math capabilities, measured by the accuracy on the GSM8k test set. PTST helps the fine-tuned LLM retain the safety attribute previously acquired during the alignment training. By contrast, alignment algorithms, e.g., PPO, DPO, KTO, are used to align base LLMs, e.g., Llama 2, with human preferences prior to the fine-tuning process we investigate.
We believe this clarification already resolves the reviewer’s questions raised in Weaknesses 1, 2, Questions 1, 2, 3, and Limitation 2. Below we respond to the other questions.
### Other Concerns
**W3:** This paper proposes the PTST algorithm, but it is a training technique and lacks a certain level of innovation.
**A:** We argue that PTST is a non-trivial training technique. A common practice for fine-tuning is to use the same prompt templates for training and inference to maximize the downstream performance. By contrast, PTST encourages safety by creating a distribution shift in training and inference while still maintaining the downstream performance. Furthermore, PTST is lightweight and easy to implement, allowing for further refinement of aligned LLMs with significantly reduced safety concerns.
**L1:** Is the PTST algorithm still effective with an increasing amount of data or the introduction of mixed datasets?
**A:** Yes. As detailed in Section 3.3 and Appendix F, besides GSM8k, which contains 8k samples, we also conducted experiments with larger datasets, including ChatDoctor with 100k samples and a 600k-sample subset of OpenOrca. In Section 4, we fine-tuned the models on a mixture of GSM8k and safety data. PTST consistently maintains safety across all these settings.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed answer. In general, the authors have addressed my main concerns; I am increasing my rating from 4 to 6. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their time and effort in reviewing our paper. Below we address some common questions that are raised by more than one reviewer.
**Q1:** Can the authors provide some discussion on why PTST works and how this method might inspire future explorations? (by SwBC, 1poN, JbMw)
**A1:** Our hypothesis is that there is a certain form of compositional generalization when the language model learns to handle user instructions and safety prompts. More specifically, during the pretraining and alignment phases, the model has to learn to perform the following two skills (or tasks):
* S1: Given a user instruction, write a helpful response;
* S2: Given a safety prompt and a user instruction, determine whether it is harmful. If yes, write an appropriate response to refuse to answer it.
Further, the model has to learn how to compose these two skills together:
* S1+S2: Given a safety prompt and a user instruction, do S2 first; if it turns out that the instruction is not harmful, do S1.
We speculate that these two skills become quite “disentangled” in LLMs after they see a massive amount of data during the pretraining and alignment phases. That is, LLMs do S1 or S2 in task S1+S2 in a similar way as if they are doing S1 or S2 alone. Then it makes sense that fine-tuning S1 can lead to helpfulness improvement in both S1 and S1+S2. This corresponds to the case of PTST, since we do not add safety prompts during fine-tuning but add one during testing.
Understanding how this compositional generalization and disentangled skills emerge in LLMs can be an interesting future direction. A possible way to deepen such an understanding is to pretrain and align LLMs in a fully-controlled setting, and do ablation studies to see which part of training contributes the most to the disentanglement between S1 and S2. An example of this research style is [this paper on the physics of LLMs](https://arxiv.org/abs/2309.14316), which explains interesting phenomena about knowledge storage in LLMs in a fully-controlled setting with synthetic data. We believe this is a possible way to go, but due to the time limit and resource constraints, we cannot provide any experiment results yet.
For future research, it would be interesting to explore better pretraining and alignment strategies so that PTST can work even better in the custom fine-tuning phase after alignment. This would require a lot of speculation and understanding of which part of training contributes the most to the compositional generalization mentioned above.
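The PTST recipe discussed above can be made concrete with a small sketch (the template strings below are hypothetical stand-ins, not the exact templates from the paper):

```python
# Hypothetical prompt templates illustrating PTST ("Pure Tuning, Safe Testing"):
# fine-tune WITHOUT a safety prompt, test WITH one. Strings are illustrative.
VANILLA = "[INST] {instruction} [/INST]"
SAFETY = ("[INST] <<SYS>> You are a helpful, respectful and honest assistant. "
          "<</SYS>> {instruction} [/INST]")

def format_for_training(instruction: str) -> str:
    # Pure Tuning: fine-tuning data uses the vanilla template (no safety prompt).
    return VANILLA.format(instruction=instruction)

def format_for_testing(instruction: str) -> str:
    # Safe Testing: inference wraps the instruction with a safety prompt.
    return SAFETY.format(instruction=instruction)

print(format_for_training("Solve: 48 + 24 = ?"))
print(format_for_testing("Solve: 48 + 24 = ?"))
```

In the S1/S2 picture, training touches only S1 (plain instruction following), while testing invokes the composed S1+S2 behavior via the safety prompt.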
-----------
**Q2:** Can the authors provide more comparisons with some baseline methods? (by SwBC, JbMw)
**A2:** Following the reviewer SwBC’s suggestion, we added Self-Reminder (SR) and In-context Defense (ICD) as two other lightweight baselines. The following table shows the ASRs on DirectHarm4 when using training and testing templates. Each row represents a training template, and each column represents a testing template. Due to limited time, the newly added experiments are only conducted with a single seed. The other entries in the table below are directly copied from Table 1.
| train \ test | CV | CA | CL | SR | ICD |
|-------------:|------|----|-----|-----|-----|
| **CV** | 11.00 | 20.50 | 1.08 | 0.00 | 0.25 |
| **CA** | 8.08 | 46.42 | 1.00 | 0.75 | 2.00 |
| **CL** | 6.83 | 18.92 | 18.08 | 7.00 | 1.00 |
| **SR** | 11.50 | 39.75 | 8.00 | 20.25 | 3.00 |
| **ICD** | 21.00 | 33.75 | 4.25 | 3.25 | 26.75 |
These results show that baseline methods using SR or ICD for both training and testing lead to high ASRs. Consistent with our paper’s main claim, training with prompt templates that do not emphasize safety (CV, CA) and testing with templates that contain safety prompts (CL, SR, ICD) generally lead to very low ASR. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
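The ASRs reported above are attack success rates; a generic sketch of how such a metric is computed follows (the keyword judge here is a hypothetical stand-in for the paper's actual safety evaluator):

```python
def attack_success_rate(responses, is_unsafe):
    """Percentage of responses to harmful prompts that a judge flags as unsafe.
    `is_unsafe` is a hypothetical stand-in for a real safety evaluator."""
    flags = [bool(is_unsafe(r)) for r in responses]
    return 100.0 * sum(flags) / len(flags)

# Toy keyword judge (illustrative only; real evaluations use stronger judges).
judge = lambda r: "sure, here is" in r.lower()
responses = [
    "Sure, here is how to do it ...",
    "I cannot help with that request.",
    "I'm sorry, but I can't assist with this.",
]
print(f"ASR = {attack_success_rate(responses, judge):.2f}%")
```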
Communication-Efficient Federated Group Distributionally Robust Optimization | Accept (poster) | Summary: This work introduces three algorithms for communication-efficient Federated Group Distributionally Robust Optimization. The effectiveness of the proposed algorithms is verified through both theoretical and experimental results.
Strengths: 1) This work studies an important problem of federated group distributionally robust optimization.
2) The theoretical results show the advantages of the proposed algorithms.
Weaknesses: 1) This work proposes three algorithms, including FGDRO-CVaR, FGDRO-KL, and FGDRO-KL-Adam. There lacks a comparison between these algorithms. For example, what are the connections and differences between these algorithms?
2) The analysis for FGDRO-CVaR assumes the loss function to be rho-weakly convex, which is missing from the main context.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) Missing reference: How about the comparison with this work [1]?
[1] Communication-Efficient Distributionally Robust Decentralized Learning https://arxiv.org/pdf/2205.15614
2) What are the experimental setups for the number of clients and non-IID?
3) In experimental results (Tables 2 and 3), FGDRO-CVaR seems to have no advantages in either task; why do we need this algorithm? Besides, it would be better to highlight the best-performing results in Tables 2 and 3.
4) Intuitively, using Adam optimizer can bring training speedup and is supposed to outperform other algorithms. But why do the results show that sometimes FGDRO-KL is better than FGDRO-KL-Adam?
5) In proof, is an assumption of bounded gradient needed? If I don't misunderstand, Line 550 indicates such an assumption.
6) If the loss function is assumed to be convex, analyzing the gap between the loss value and the minimum loss would be better in Theorem 6.2.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! We believe we can address your concerns as follows.
***Q1: There lacks a comparison between the three proposed algorithms. What are the connections and differences between these algorithms?***
***A:*** FGDRO-CVaR and FGDRO-KL employ well-established regularization techniques [Deng et al., 2020; Lan & Zhang, 2023], each suited to different tasks and data distributions.
__FGDRO-CVaR__ focuses on optimizing for worst-case scenarios or the average of the worst-case losses, making it particularly effective in high-stakes applications like healthcare and finance, where avoiding extreme losses is crucial. However, it can be sensitive to outliers or malicious client attacks.
__FGDRO-KL__, on the other hand, uses KL divergence as a softer regularizer to promote smoother and more stable learning. Thus it can be beneficial in scenarios where robustness to outliers or malicious clients is needed.
__FGDRO-KL-Adam__ further enhances FGDRO-KL by incorporating Adam-type updates.
[Deng et. al., 2020] Distributionally Robust Federated Averaging.
[Lan & Zhang, 2023] Optimal Methods for Convex Risk Averse Distributed Optimization
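The contrast above between the hard CVaR cutoff and the softer KL weighting can be seen in a toy numerical sketch (generic objectives for illustration, not the paper's exact formulations):

```python
import numpy as np

def cvar_objective(losses, alpha=0.25):
    """CVaR-style objective: average of the worst alpha-fraction of client
    losses, i.e., a hard cutoff that ignores all other clients."""
    k = max(1, int(np.ceil(alpha * len(losses))))
    return float(np.sort(losses)[-k:].mean())

def kl_objective(losses, lam=1.0):
    """KL-regularized weighting: clients weighted softly by exp(loss / lam),
    so high-loss clients are emphasized but not exclusively."""
    losses = np.asarray(losses, dtype=float)
    w = np.exp(losses / lam)
    w /= w.sum()
    return float(np.dot(w, losses))

client_losses = [0.2, 0.3, 0.4, 3.0]  # one badly-performing (outlier) client
print(cvar_objective(client_losses))  # hard cutoff: only the worst client counts
print(kl_objective(client_losses))    # soft weighting spread over all clients
```

This is why CVaR is the natural choice when worst-case losses must be controlled, while KL is more forgiving toward outlier or malicious clients.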
***Q2: The analysis for FGDRO assumes the loss function to be $\rho$-weakly convex, which is missing from the main context.***
***A:*** This property can indeed be deduced from Assumption 4.1, as demonstrated in equation (16) of the appendix. In the revised version, we will ensure that this assumption is clearly articulated within the main body.
***Q3: Missing reference "Communication-Efficient Distributionally Robust Decentralized Learning"***
***A:*** Thank you for bringing this literature to our attention. While this literature addresses similar problem to ours, it is tailored for decentralized federated learning. It would incur high communication cost if directly applied to a centralized federated learning framework because it requires communication at every iteration. Additionally, the approach discussed is not applicable to scenarios involving a CVaR regularizer. We will include a discussion of this literature in our revision.
***Q4: What are the experimental setups for the number of clients and non-IID?***
***A:***
In our previously submitted experiments, we used a natural data split, where data from the same hospital, web source, location, or demographic group were assigned to the same machine, with up to 17 clients in total. The following table summarizes key statistics of the data, including the Client Imbalance Ratio, which represents the ratio between the number of training samples on the client with the most data and the client with the least data, and the Class Imbalance Ratio, which reflects the ratio of training data in the largest to the smallest classes in classification tasks.
|Datasets |__Pile__ | __CivilComments__ | __Camelyon17__ | __iWildCam2020__ | __PovertyMap__ |
|------|-------|-------|------|----|-------|
|Client Imbalance Ratio | 258 | 36.2 | 1 | 1.7 | 5.9 |
|Class Imbalance Ratio | N/A | 4.6 | 1 | 48021 | N/A |
To further explore non-IID scenarios, we conducted additional experiments using the Cifar10 dataset, where we created an imbalance by reducing the representation of 5 classes by 80%, and then distributed the data across 100 clients using two different Dirichlet distributions: Dirichlet(0.3) and Dirichlet(10). Results are summarized below (detailed table at Section 2 of https://anonymous.4open.science/r/NeurIPS_11919-3402).
|Datasets | Dirichlet(0.3) | Dirichlet(10) |
|--|--|-|
|Metric | Worst Acc, Average Acc | Worst Acc, Average Acc |
|FedAvg |0.3140, 0.6236 | 0.3620, 0.6742 |
|SCAFFOLD|0.3245, 0.6337| 0.3821, 0.6816|
|FedProx | 0.3102, 0.6189 | 0.3757, 0.6925|
|FedAdam| 0.4860, __0.7147__ | 0.4460, 0.7042|
|DRFA | 0.3215, 0.6381 | 0.3752, 0.6739 |
|DR-DSGD | 0.3277, 0.6403 | 0.3700, 0.6792 |
|FGDRO-CVaR | 0.4100, 0.6606 | 0.4010, 0.6882|
|FGDRO-KL| 0.3560, 0.6369| 0.4110, 0.6951|
|FGDRO-KL-Adam | __0.5280__, 0.7057 | __0.5110__, __0.7286__|
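The Dirichlet-based non-IID split used above can be sketched as follows (a generic partitioning recipe commonly used in federated learning experiments, not the exact experimental code):

```python
import numpy as np

def dirichlet_partition(labels, n_clients=100, alpha=0.3, seed=0):
    """Partition sample indices across clients with per-class proportions drawn
    from Dirichlet(alpha): small alpha -> highly non-IID, large -> near-IID."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props) * len(idx)).astype(int)[:-1]
        for client, shard in enumerate(np.split(idx, cuts)):
            clients[client].extend(shard.tolist())
    return clients

# Toy example: 10 classes, 1000 samples, 10 clients.
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, n_clients=10, alpha=0.3)
print(sum(len(p) for p in parts))  # every sample assigned exactly once: 1000
```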
***Q5: FGDRO-CVaR seems to have no advantages in both tasks; why do we need this algorithm? Besides, it is better to highlight the best-performance results in Tables 2 and 3.***
***A:***
On the CivilComments, iWildCam2020, PovertyMap and newly added Cifar10 datasets, FGDRO-CVaR outperforms previous baselines. Specifically, on PovertyMap, it surpasses FGDRO-KL, although it performs worse than FGDRO-KL-Adam. The primary reason for this is that FGDRO-CVaR does not implement Adam-type local updates. This is due to the current limitations in developing provable algorithms for FGDRO-CVaR with Adam-type updates, given the nonsmoothness of the problem. This is an area we plan to explore in future research. We will also highlight the best-performing results in the tables in our revision. Thank you for the suggestion.
***Q6: Why do the results show that sometimes FGDRO-KL is better than FGDRO-KL-Adam?***
***A:*** It is not unusual for SGD updates to occasionally outperform Adam in specific scenarios. Although Adam is generally more robust, it comes with additional hyperparameters that require careful tuning. We believe the observed result where FGDRO-KL-Adam underperforms compared to FGDRO-KL could be improved through an extensive hyperparameter search.
***Q7: In proof, is an assumption of bounded gradient needed?***
***A:*** Yes, an assumption of bounded gradients is necessary. This requirement is implicitly addressed in Assumptions 4.1 and 5.1 of the main text, where we assume that the functions are Lipschitz continuous. That implies the bounded gradients. We will explicitly address this point in the main text of our upcoming revision.
***Q8: If the loss function assumes to be convex, analyzing the optimal distance between the loss value the minimum loss should be better in Theorem 6.2.***
***A:*** While this is a valuable suggestion, our work does not assume convexity in the loss function, as the focus of this paper is on the nonconvex regime.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: After reading other reviews and the authors' responses, I would keep my score. My major concern is that the bounded gradient assumption is somehow too strong.
---
Reply to Comment 1.1.1:
Title: Regarding your concern about the bounded stochastic gradient assumption
Comment: Thank you for your valuable feedback! After a thorough analysis, we have managed to relax the bounded stochastic gradient assumption ($||\nabla \ell(\mathbf{w};\mathbf{z})||^2\leq G^2$) to more standard assumptions in the literature of federated learning and optimization. Specifically, we now assume a bounded true gradient ($||\nabla \ell(\mathbf{w};\mathcal{D}\_i)||^2\leq C_\ell^2$, where $\ell(\mathbf{w};\mathcal{D}\_i) = \mathbb{E}\_{\mathbf{z}\in\mathcal{D}\_i} \ell(\mathbf{w}; \mathcal{D}\_i)$) and bounded variance ($\mathbb{E}_{\mathbf{z} \in \mathcal{D}\_i}||\ell(\mathbf{w}; \mathbf{z}) - \ell(\mathbf{w}; \mathcal{D}\_i)||^2 \leq \sigma^2$). We have updated the assumption and analysis accordingly, and you can find the revised details in the "updated_analysis.pdf" file at https://anonymous.4open.science/r/NeurIPS_11919-3402. For example, your original concern about the usage of the bounded stochastic gradient assumption in Line 550 has now been addressed with these relaxed assumptions in Lines 550-551. | Summary: This paper addresses the challenge of reducing communication costs and sample complexity in Federated Group Distributionally Robust Optimization (FGDRO). The authors present the FGDRO-CVaR algorithm and the FGDRO-KL algorithm to address different constraints. Subsequently, they conduct extensive experiments across various real-world tasks, including NLP and CV tasks. The corresponding empirical results confirm the effectiveness of their proposed methods.
Strengths: 1. The exploration of reducing communication costs for federated group DRO is a rarely-studied topic within the FL community.
2. The theoretical convergence analysis for the proposed algorithms is somewhat solid.
3. The authors conduct comprehensive experiments to validate the effectiveness of the devised algorithms.
Weaknesses: 1. The contributions and novelties of this paper are unclear. It appears that the authors have directly combined existing federated adaptive algorithms with pre-existing federated group DRO methods in this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The introduction's treatment of the concept of generalization appears incomplete. It is evident that there are two levels of generalization in Federated Learning, as delineated in [1] and [2].
2. As highlighted in the aforementioned weaknesses, the authors should provide additional clarification regarding the contributions and novelty of this paper. Overall, it appears that the proposed method is primarily a direct combination of existing methods.
3. Similarly, the authors should delineate the challenges and innovations intrinsic to their theoretical analysis. Specifically, they should underscore the complexities involved in analyzing federated adaptive algorithms when applied in federated group DRO.
4. On line 154 of Page 4, what is the relationship between the "accurate estimate" and the "moving average" in the subsequent sentence?
5. Some minor points to address. The authors might consider offering more empirical results on convergence analysis in the experimental section. Additionally, they should further consider the statistical significance of these convergence analyses.
[1] Hu X, Li S, Liu Y. Generalization bounds for federated learning: Fast rates, unparticipating clients and unbounded losses[C]//The Eleventh International Conference on Learning Representations. 2023.
[2] Yuan H, Morningstar W, Ning L, et al. What do we mean by generalization in federated learning?[J]. arxiv preprint arxiv:2110.14216, 2021.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! We address your suggestions and concerns as follows.
***Q1: The introduction's treatment of the concept of generalization appears incomplete. It is evident that there are two levels of generalization in Federated Learning, as delineated in two literature.***
***A:***
Thank you for bringing to our attention the literature that discusses the two levels of generalization in federated learning. In our work, we are referring to generalization at the participant level, where the goal is to develop robust models that perform well across both participating and non-participating clients. Based on the insights from the mentioned literature, we have revised the first paragraph of our introduction as follows:
_"... Generalization here refers to the model's ability to perform consistently across different clients, including those that have not participated in the training [Hu et al., 2023, Yuan et al., 2021]."_
***Q2: It appears that the authors have directly combined existing federated adaptive algorithms with pre-existing federated group DRO methods in this paper.***
***A:***
First, our federated adaptive algorithms differ from those in the existing literature, such as [Reddi et al., 2020], in terms of both algorithm design and analysis: we employ Adam-type updates at every step, while [Reddi et al., 2020] uses SGD updates on local steps and only applies Adam at global steps. Second, our approach to federated group DRO is distinct from previous works. By considering constraint-free equivalent forms of federated group DRO problems and focusing on dedicated algorithm design and analysis, we have achieved lower communication and sample complexity, with or without adaptive components, compared to the literature.
***Q3: The authors should delineate the challenges and innovations intrinsic to their theoretical analysis.***
***A:***
For FGDRO-CVaR, we are the first to consider a constraint-free equivalent form and develop a communication-efficient algorithm for it, significantly reducing communication costs, as evidenced in Table 1. In addition to sharing machine learning models, we propose introducing only an additional scalar threshold to select participants in each round, minimizing additional costs. The new formulation, being a nonsmooth and compositional problem, introduces challenges that are uncommon in federated learning literature. We addressed these by carefully designing the use of moving average estimators to handle client drift and achieve linear speedup.
For FGDRO-KL, although previous literature has considered constraint-free compositional reformulations, they typically require large batch sizes on each machine to estimate gradients. This approach is impractical and incurs high sample complexity. Instead, we employ moving averages that only require small data batches while still providing accurate gradient estimations, making our method more efficient.
For FGDRO-KL-Adam, we allow local updates using Adam-type updates, which introduces the challenge of handling unbiased gradients, further complicated by the use and updating of the second-order moment. Our analysis carefully manages the moving estimates of the first and second-order moments, ensuring that the solution provably converges.
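To fix notation for the first- and second-order moments discussed above, here is a sketch of one standard Adam update (generic Adam, not the paper's federated variant):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update: moving first moment m, second moment v,
    bias correction, and a coordinate-wise adaptive step."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy run on f(w) = 0.5 * ||w||^2, whose gradient is g = w.
w = np.array([1.0, -2.0])
m = np.zeros_like(w); v = np.zeros_like(w)
for t in range(1, 501):
    w, m, v = adam_step(w, w.copy(), m, v, t, lr=0.05)
print(np.round(w, 3))
```

In FGDRO-KL-Adam, updates of this type run locally between communication rounds, which is what makes the biased gradients and the evolving second moment delicate to analyze.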
***Q4: On line 154 of Page 4, what is the relationship between the "accurate estimate" and the "moving average" in the subsequent sentence?***
***A:*** The "moving average" is what we refer to as our "accurate estimate." To clarify this relationship, we have restated the sentences as follows: _"To address this, it is common practice to create an accurate estimate of
$g_i(w)$ [23, 22, 60, 61, 31]. Specifically, we employ a moving average $u$ as our accurate estimate."_
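As an illustrative sketch (our own, not the paper's exact estimator), the moving-average update described here can be written as $u_{t+1} = (1-\beta)u_t + \beta g$, where $g$ is a fresh stochastic estimate of $g_i(w)$; the parameter name `beta` is ours:

```python
import numpy as np

def update_moving_average(u, g, beta=0.1):
    """One step of a moving-average estimator: blend the running
    estimate u with a fresh (noisy) stochastic estimate g."""
    return (1.0 - beta) * u + beta * g

# Toy check: the moving average of noisy samples of a fixed target
# drifts toward that target while smoothing out the noise.
rng = np.random.default_rng(0)
target = 2.0
u = 0.0
for _ in range(500):
    g = target + rng.normal(scale=0.5)  # small-batch noisy estimate
    u = update_moving_average(u, g, beta=0.05)
```

This illustrates how small per-round batches can still yield accurate gradient estimates: the averaging accumulates information across rounds.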
***Q5: Some minor points to address. The authors might consider offering more empirical results on convergence analysis in the experimental section. Additionally, they should further consider the statistical significance of these convergence analyses.***
***A:*** We conducted additional experiments using the Cifar10 dataset, where we created an imbalanced dataset by reducing the data of 5 classes by 80%, and then distributed the data across 100 clients using two different Dirichlet distributions: Dirichlet(0.3) and Dirichlet(10). Results are summarized below (detailed table in Section 2 of https://anonymous.4open.science/r/NeurIPS_11919-3402).
|Datasets | Dirichlet(0.3) | Dirichlet(10) |
|-----------------|-----------------|-----------------|
|Metric | Worst Acc, Average Acc | Worst Acc, Average Acc |
|FedAvg |0.3140, 0.6236 | 0.3620, 0.6742 |
|SCAFFOLD|0.3245, 0.6337| 0.3821, 0.6816|
|FedProx | 0.3102, 0.6189 | 0.3757, 0.6925|
|FedAdam| 0.4860, __0.7147__ | 0.4460, 0.7042|
|DRFA | 0.3215, 0.6381 | 0.3752, 0.6739 |
|DR-DSGD | 0.3277, 0.6403 | 0.3700, 0.6792 |
|FGDRO-CVaR | 0.4100, 0.6606 | 0.4010, 0.6882|
|FGDRO-KL| 0.3560, 0.6369| 0.4110, 0.6951|
|FGDRO-KL-Adam | __0.5280__, 0.7057 | __0.5110__, __0.7286__|
We have expanded our analysis to better address communication efficiency. Specifically, we now include figures that illustrate the communication complexity of each method by comparing the worst-case testing accuracy against the number of local updates and the communicated data sizes. You can find the figures at Section 1 of https://anonymous.4open.science/r/NeurIPS_11919-3402.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the additional explanations of the other issues that I have mentioned in the review and more experiments, which have solved all of my concerns. I recommend the acceptance of this paper. However, since I am not an expert in this field, I decide to keep my current score. | Summary: The paper presents three methods for Federated Learning Group Distributionally Robust Optimization: (i) one tailored to reduce the CVaR which optimizes the top K-losses, (ii) another one tailored to tackle the KL divergence, and finally (iii) one that uses Adam locally. The paper is well written and the ideas are presented. To the best of my knowledge, the proofs are correct. My main concerns are regarding the relevance and importance of the subject, the lack of experiments, and the lack of empirical studies on communication efficiency.
Strengths: [S1] The paper is well-written, and the ideas are presented.
[S2] The theoretical results are correct, to the best of my knowledge.
Weaknesses: [W1] The relevance of the subject is not entirely addressed. See [Q1]
[W2] The experiment section is limited. In particular, the paper does not present any intuition on the problems they are solving. They do not consider the number of samples per server for example. I believe the authors should include a class imbalance problem [AN AGNOSTIC APPROACH TO FEDERATED LEARNING WITH CLASS IMBALANCE - Shen et al, ICLR 22].
[W3] Communication efficiency is not properly addressed by the authors. The authors show the number of communication rounds required, but they do not take into account how much is communicated. The authors claim that this method is more efficient in terms of communication, and they show it theoretically, but in the experiment section, there is no evidence of communication efficiency. I suggest the authors reveal the communication cost associated with each method, measured in the amount of data shared between servers.
[W4] Privacy is an important subject of Federated Learning, but in this paper, there is no analysis of the privacy aspect. Can the authors elaborate on the privacy aspect of this work?
[W5] Federated learning is a technique used to train on a set of machines. The idea is that the number of machines that participate is large. It appears to me that the largest number of servers is 17. This seems to me insufficient for a distributed learning problem.
Technical Quality: 3
Clarity: 4
Questions for Authors: [Q1] Why should solutions be designed to be distributionally robust? And at what cost? If we compare a method that simply maximizes/minimizes the FL problem, what is the overall loss? I believe the overall loss should be smaller, given that being distributionally robust is a particular case, and therefore, the unconstrained problem achieves a smaller minimum.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review! Below we address your concerns and suggestions.
***Q1: Why should solutions be designed to be distributionally robust? And at what cost? If we compare a method that simply maximizes/minimizes the FL problem, what is the overall loss?***
***A:*** Designing distributionally robust solutions in federated learning is essential for two reasons: __1) Robustness__: It effectively manages distributional shifts, making models more reliable in real-world scenarios. In high-stakes applications like healthcare and finance, where failure in extreme cases can be costly, distributional robustness is crucial for minimizing risks. __2) Fairness__: By upweighting groups with lower performance, it aims not only to perform well on average but to serve every subpopulation equitably.
Focusing on distributional robustness may result in a potential reduction in average performance for the majority in the observed distribution. However, this trade-off is necessary to achieve the broader goal of creating models that are both robust to distributional shifts during testing and fair across all data groups.
***Q2: In particular, the paper does not present any intuition on the problems they are solving. They do not consider the number of samples per server for example. I believe the authors should include a class imbalance problem [AN AGNOSTIC APPROACH TO FEDERATED LEARNING WITH CLASS IMBALANCE - Shen et al, ICLR 22].***
***A:*** In our previously submitted experiments, we utilized natural data splits where data from the same hospital, web source, location, or demographic group were placed on the same machine. These experiments involved highly imbalanced amounts of data across servers. The following table summarizes key statistics of the data, including the Client Imbalance Ratio, which represents the ratio between the number of training samples on the client with the most data and the client with the least data, and the Class Imbalance Ratio, which reflects the ratio of training data in the largest to the smallest classes in classification tasks.
|Datasets |__Pile__ | __CivilComments__ | __Camelyon17__ | __iWildCam2020__ | __PovertyMap__ |
|------|-------|-------|------|----|-------|
|Client Imbalance Ratio | 258 | 36.2 | 1 | 1.7 | 5.9 |
|Class Imbalance Ratio | N/A | 4.6 | 1 | 48021 | N/A |
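Both ratios in the table reduce to a largest-over-smallest group-size computation; a minimal sketch (the example counts below are hypothetical, not taken from the datasets above):

```python
def imbalance_ratio(counts):
    """Ratio of the largest group size to the smallest, the definition
    used for both the Client and Class Imbalance Ratios above."""
    return max(counts) / min(counts)

# Hypothetical three-client split holding 516, 40, and 2 training samples
ratio = imbalance_ratio([516, 40, 2])  # 516 / 2 = 258.0
```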
To further address this concern, we have conducted additional experiments using the Cifar10 dataset. We created an imbalanced dataset by reducing the data of 5 classes by 80% and then distributed the data across 100 clients according to two different Dirichlet distributions: Dirichlet(0.3) and Dirichlet(10), using code released by [Shen et al, ICLR 22]. Results are summarized below (detailed table in Section 2 of https://anonymous.4open.science/r/NeurIPS_11919-3402).
|Datasets | Dirichlet(0.3) | Dirichlet(10) |
|-----------------|-----------------|-----------------|
|Metric | Worst Acc, Average Acc | Worst Acc, Average Acc |
|FedAvg |0.3140, 0.6236 | 0.3620, 0.6742 |
|SCAFFOLD|0.3245, 0.6337| 0.3821, 0.6816|
|FedProx | 0.3102, 0.6189 | 0.3757, 0.6925|
|FedAdam| 0.4860, __0.7147__ | 0.4460, 0.7042|
|DRFA | 0.3215, 0.6381 | 0.3752, 0.6739 |
|DR-DSGD | 0.3277, 0.6403 | 0.3700, 0.6792 |
|FGDRO-CVaR | 0.4100, 0.6606 | 0.4010, 0.6882|
|FGDRO-KL| 0.3560, 0.6369| 0.4110, 0.6951|
|FGDRO-KL-Adam | __0.5280__, 0.7057 | __0.5110__, __0.7286__|
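The Dirichlet split described above can be sketched as follows; this is a simplified stand-in for the code released by [Shen et al, ICLR 22], and all function and parameter names here are ours:

```python
import numpy as np

def dirichlet_partition(labels, n_clients=100, alpha=0.3, seed=0):
    """Assign sample indices to clients so that each class is spread
    across clients according to a Dirichlet(alpha) draw; smaller alpha
    yields more skewed (more non-IID) splits."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))  # per-client share
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

# Toy labels mimicking the class imbalance above: 10 classes, with the
# data of 5 classes reduced by 80% (1000 vs. 200 samples per class).
labels = np.concatenate(
    [np.full(1000, c) for c in range(5)] + [np.full(200, c) for c in range(5, 10)]
)
clients = dirichlet_partition(labels, n_clients=20, alpha=0.3)
```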
***Q3: Communication efficiency is not properly addressed by the authors. I suggest the authors reveal the communication cost associated with each method, measured in the amount of data shared between servers.***
***A:*** We have expanded our analysis to better address communication efficiency. Specifically, we now include figures that illustrate the communication complexity of each method by comparing the worst-case testing accuracy against the number of local updates and the communicated data sizes. You can find the figures at Section 1 of https://anonymous.4open.science/r/NeurIPS_11919-3402.
***Q4: Privacy is an important subject of Federated Learning, but in this paper, there is no analysis of the privacy aspect. Can the authors elaborate on the privacy aspect of this work?***
***A:***
Thank you for your comment. In our work, models and estimates of gradients are shared, both of which have been extensively studied in the privacy literature. The techniques developed in these studies can be integrated with our algorithms and applied within our framework.
Additionally, we aggregate certain scalar variables (in FGDRO-CVaR, the scalar variable s; in FGDRO-KL and FGDRO-KL-Adam, the scalar variable v). Borrowing the technique from Remark 3.1 of [Shen et al., ICLR 2022], these variables can be aggregated using Homomorphic Encryption, ensuring that their exact values remain confidential. We thank the reviewer for bringing this literature to our attention.
In summary, while we acknowledge the importance of privacy in federated learning, we believe that our algorithms do not introduce significant new challenges in this area. We will include a more detailed discussion of this matter in the revision.
***Q5: Federated learning is a technique used to train on a set of machines. The idea is that the number of machines that participate is large. It appears to me that the largest number of servers is 17.***
***A:*** In our new experiments on Cifar10, we have scaled up the number of clients to 100. Please check the results presented in Sections 1 and 2 of https://anonymous.4open.science/r/NeurIPS_11919-3402.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I appreciate the author's response. I choose to keep my score. | Summary: This paper aims to improve the efficiency of existing federated group distributionally robust optimization (FGDRO) when considering two specific types of regularization, conditional value at risk (CVaR) and KL divergence. To address the first type of problem, the authors propose FGDRO-CVaR, which reduces the sample complexity and communication costs simultaneously. For the KL type, the proposed FGDRO-KL reduces the sample complexity while retaining the same communication costs. Moreover, the authors integrate the notion of Adam into FGDRO-KL, yielding FGDRO-KL-Adam and achieving better convergence speed.
Strengths: 1. The paper is well-written, though some background information is missing.
2. The problem is well-motivated. The sample and communication efficiency is a pivotal problem in federated learning, though the benefits are not fully analyzed in the experiments.
3. The proposed method is grounded and improves over prior baselines.
Weaknesses: 1. The background can be more thoroughly explained. The authors are encouraged to provide additional context to address the following questions, which will greatly enhance the paper's completeness. Why is federated group distributionally robust optimization (FGDRO) an important problem or technique? What are the sources of the additional communication costs? Why is it necessary to consider two different types of regularization? Are these types of regularization relevant to different applications?
2. My major concerns lie in the experiments and their settings.
- **Data Splits.** While FGDRO's main advantage appears to be its ability to address non-IID optimization, the experimental setup concerning data splits lacks clarity. An analysis of the non-IID levels, such as those derived from different Dirichlet-distributed data splits with varying $\lambda$ values, is missing. Including more representative baselines, such as SCAFFOLD and FedProx, which are also designed for non-IID optimization, could further enhance the analyses.
- **Performance.** The proposed method performs similarly to the baselines in most experiments. For example, in Tables 2 and 3, apart from the Adam variant, the proposed method is comparable to the baselines. This would be acceptable if the proposed method demonstrated improved efficiency; however, relevant analyses on this aspect are absent from the experiments.
- **Communication or Sample Complexity Analysis.** An empirical analysis comparing complexity versus utility would be beneficial and highlight the advantages of the proposed method. For instance, the experiment in Figure 1 can be extended to a comparison among different baselines.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The datasets considered in this paper do not seem common in the existing literature. Could the authors report numbers on datasets like CIFAR-10/100 or EMNIST? Why did the authors choose the datasets in the paper?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, the authors have discussed the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! We address your questions below and will include the discussion in the revision.
***Q1: Why is FGDRO an important problem or technique?***
***A:*** In federated learning, data is distributed across multiple clients, each with its own unique data distribution. FGDRO is crucial because it addresses the heterogeneity of these distributions, ensuring that models are fair across different clients and also robust to distributional shifts.
***Q2: What are the sources of the additional communication costs?***
***A:***
For FGDRO-CVaR, the only additional communication involves a scalar variable $s$, which is negligible compared to the large size of the deep learning model that needs to be shared in federated learning algorithms. For FGDRO-KL, the primary source of additional communication cost is the sharing of $m$, a moving average estimator of the gradient, which is the same size as the model itself. For FGDRO-KL-Adam, the additional communication cost arises from sharing both $m$ and $q$, which are estimators of the first and second order moments of gradients, respectively.
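For intuition, the per-round, per-client upload implied by this answer can be tallied as below; the accounting function and its names are ours, and a real deployment would also count compression and metadata:

```python
def per_round_upload_floats(model_size, method):
    """Rough per-client upload per communication round, in floats.
    The model is always shared; the extras follow the description
    above (scalar s, gradient estimator m, moment estimators m and q)."""
    extras = {
        "FGDRO-CVaR": 1,                  # scalar threshold s
        "FGDRO-KL": model_size,           # moving-average gradient m
        "FGDRO-KL-Adam": 2 * model_size,  # first/second moments m, q
    }
    return model_size + extras[method]

# For a 10M-parameter model, the CVaR overhead is negligible, while
# KL-Adam roughly triples the upload relative to sharing the model alone.
```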
***Q3: Why is it necessary to consider two different types of regularization?***
***A:***
FGDRO-CVaR and FGDRO-KL employ well-established regularization techniques [Deng et al., 2020; Lan & Zhang, 2023], each suited to different tasks and data distributions. __FGDRO-CVaR__ focuses on optimizing for worst-case scenarios or the average of the worst-case losses, making it particularly effective in high-stakes applications like healthcare and finance, where avoiding extreme losses is crucial. However, it can be sensitive to outliers or malicious client attacks.
__FGDRO-KL__, on the other hand, uses KL divergence as a softer regularizer to promote smoother and more stable learning. Thus, it can be beneficial in scenarios where robustness to outliers or malicious clients is needed.
__FGDRO-KL-Adam__ further enhances FGDRO-KL by incorporating Adam-type updates.
[Deng et. al., 2020] Distributionally Robust Federated Averaging.
[Lan & Zhang, 2023] Optimal Methods for Convex Risk Averse Distributed Optimization
***Q4: Data Splits. The experimental setup concerning data splits lacks clarity. An analysis of the non-IID levels, such as those derived from different Dirichlet-distributed data splits is missing.***
***A:*** In our previously submitted experiments, we used a natural data split, where data from the same hospital, web source, location, or demographic group were assigned to the same machine, with up to 17 clients in total. The following table summarizes key statistics of the data, including the Client Imbalance Ratio, which represents the ratio between the number of training samples on the client with the most data and the client with the least data, and the Class Imbalance Ratio, which reflects the ratio of training data in the largest to the smallest classes in classification tasks.
|Datasets |__Pile__| __CivilComments__| __Camelyon17__|__iWildCam2020__| __PovertyMap__|
|-|-|-|-|-|-|
|Client Imbalance Ratio | 258 | 36.2 | 1 | 1.7 | 5.9 |
|Class Imbalance Ratio | N/A | 4.6 | 1 | 48021 | N/A |
We conducted additional experiments using the Cifar10 dataset, where we created an imbalanced dataset by reducing the data of 5 classes by 80%, and then distributed the data across 100 clients using two different Dirichlet distributions: Dirichlet(0.3) and Dirichlet(10). Results are summarized below (detailed table in Section 2 of https://anonymous.4open.science/r/NeurIPS_11919-3402).
|Datasets | Dirichlet(0.3) | Dirichlet(10) |
|-|-|-|
|Metric | Worst Acc, Average Acc | Worst Acc, Average Acc |
|FedAvg |0.3140, 0.6236 | 0.3620, 0.6742 |
|SCAFFOLD|0.3245, 0.6337| 0.3821, 0.6816|
|FedProx | 0.3102, 0.6189 | 0.3757, 0.6925|
|FedAdam| 0.4860, __0.7147__ | 0.4460, 0.7042|
|DRFA | 0.3215, 0.6381 | 0.3752, 0.6739 |
|DR-DSGD | 0.3277, 0.6403 | 0.3700, 0.6792 |
|FGDRO-CVaR | 0.4100, 0.6606 | 0.4010, 0.6882|
|FGDRO-KL| 0.3560, 0.6369| 0.4110, 0.6951|
|FGDRO-KL-Adam | __0.5280__, 0.7057 | __0.5110__, __0.7286__|
***Q5: Including more baselines, such as SCAFFOLD and FedProx, which are also designed for non-IID optimization, could further enhance the analyses.***
***A:***
We have included SCAFFOLD and FedProx as baselines (see Sections 2 and 4 of https://anonymous.4open.science/r/NeurIPS_11919-3402). However, it's important to highlight some fundamental differences between these methods and our FGDRO algorithms. SCAFFOLD and FedProx optimize the average loss, whereas our FGDRO algorithms focus on prioritizing clients with poorer performance, thereby enhancing robustness and generalization.
***Q6: The proposed method performs similarly to the baselines in most experiments. This would be acceptable if the proposed method demonstrated improved efficiency. An empirical analysis comparing complexity versus utility would be beneficial.***
***A:***
We would like to highlight that our algorithms statistically significantly outperform the baselines in most tasks, as shown by the p-values in Section 4 of https://anonymous.4open.science/r/NeurIPS_11919-3402.
We include additional figures that illustrate the complexity comparison of methods. Specifically, we compare the worst-case testing accuracy against the number of local iterations, and the worst-case testing accuracy against the size of communicated data. These figures can be found at Section 1 of https://anonymous.4open.science/r/NeurIPS_11919-3402.
***Q7: The datasets considered in this paper do not seem common in the existing literature.***
***A:***
The datasets we used were chosen because they naturally contain groups that reflect real-world scenarios, such as data from different hospitals, web sources, locations, or demographics. These natural splits make them particularly relevant for studying group distributionally robust optimization [11, 66]. Additionally, we have conducted experiments on the Cifar10 dataset as mentioned earlier.
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for their response, which effectively addressed my concerns. Although I still have some reservations about the performance of common baselines like FedProx and SCAFFOLD in the rebuttal, I encourage the authors to provide more context on the background, clearly outline the differences between the proposed methods and the baselines, and include the new results. Given this, I’d like to raise my score to 5.
---
Reply to Comment 1.1.1:
Title: Thank you for raising the score
Comment: We are glad to hear that your concerns have been addressed by our response. As suggested, we will include more discussion on the background, differences between methods and new results. Thank you! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Can an AI Agent Safely Run a Government? Existence of Probably Approximately Aligned Policies | Accept (poster) | Summary: This paper defines social markov decision processes (SMDPs) as an MDP generalization incorporating a population of individuals with distinct utility profiles aggregated by a social welfare function. It provides a novel quantitative definition of alignment in this context, then leverages this definition to characterize probably approximately aligned policies and safe policies, prove the conditions under which they exist, and relate them to the accuracy of the reward model.
Strengths: 1. This paper is well written, and the background is particularly clear.
2. The definitions and theoretical results are thorough and rigorous. This paper precisely relates the probability of aligned behavior to the world model accuracy, which I believe is valuable.
3. This paper acknowledges that realistic inaccuracy in the world model could cause intolerable uncertainty in the PAA policy, and shows a more practical approach (safeguarding a black-box policy).
Weaknesses: Even the more practical approach of safeguarding a black-box policy may have severe limitations. I believe the paper would be strengthened by a discussion of the feasibility of this -- in particular, what is computational complexity of computing $\mathcal{A}_{safe}$ for a SMDP?
Typo: On line 277, I believe "expansive" should be "expensive".
Technical Quality: 4
Clarity: 4
Questions for Authors: How does the SMDP formalism handle individuals that give assessments on different scales? What assumption(s) does it rely on regarding interpersonal comparisons of utility?
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: This paper includes an excellent discussion of the limitations of these results, including the theoretical conditions under which PAA and safe policies will be unreliable. The paper also discusses further practical and philosophical limitations in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for the time spent reviewing our paper.
**Feasibility of safeguarding black-box policies in SMDPs**
We argue that the main issue is not whether safeguarding a black-box policy is feasible, but whether it produces a useful policy. Indeed, the practicality of the safeguarding method comes from the fact that the computational complexity of finding $\mathcal A_{safe}$ can be made as small as desired based on the resources available (with $K$, $C$, $H$, and $n$ being chosen as desired, unlike the PAA policy, where they are determined by $\epsilon$, $\delta$, $\gamma$, and $D_{KL}(p\Vert\hat{p})$). However, as these values decrease, the $\alpha$ term in the definition of $\mathcal A_{safe}$ increases, and the resulting policy may become less useful, as only actions with very high Q-values will be considered safe. Such actions might not even exist (empty $\mathcal{A}_{safe}$), in which case the safe policy is essentially useless. On the other hand, deriving the computational complexity needed to ensure usefulness (i.e., that the safe policy always selects an action) cannot be done in the general case, as it would require knowledge of the Q-values for every state-action pair.
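As a rough sketch of the trade-off described here (our own pessimistic-filtering convention, not the paper's exact definition of $\mathcal{A}_{safe}$): an action is kept only if its value estimate, discounted by the uncertainty term $\alpha$, still clears a minimum acceptable value.

```python
def safe_action_set(q_estimates, alpha, q_min_safe):
    """Keep actions whose pessimistic value estimate (estimate minus the
    uncertainty term alpha) clears q_min_safe. Cheaper computation means
    a larger alpha, which shrinks the set -- possibly to empty."""
    return [a for a, q in q_estimates.items() if q - alpha >= q_min_safe]

q = {"a1": 0.9, "a2": 0.7, "a3": 0.4}
mild = safe_action_set(q, alpha=0.1, q_min_safe=0.5)   # two actions kept
heavy = safe_action_set(q, alpha=0.5, q_min_safe=0.5)  # no action certified
```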
**Scale of the reported utilities**
We believe it is crucial that assessors are instructed to report their utilities for different states within a unified range, such as between 1 and 10. If this range is not standardized, utilities would need to be artificially mapped to a common scale to be aggregated, which we believe would introduce more distortion compared to when this mapping is done internally by the assessor before reporting. However, even with a unified range, interpersonal comparability is not guaranteed, as any given rating within that range might reflect different levels of welfare across different individuals. This issue is philosophical in nature, and there is no feasible experiment to determine whether utilities are truly comparable.
From this point, two approaches can be taken:
- One can simply treat the utilities as if they were comparable, and construct a social welfare function based on them.
- Alternatively, one can take additional steps to calibrate the utilities using anchor states that should elicit similar welfare across individuals. For instance, in an economic context, assessors could be instructed that a rating of 9 corresponds to landing their dream job, while a rating of 2 corresponds to being unemployed. Although this may not necessarily make utilities more comparable (as people may have different sensitivities to these events), it does provide a more interpretable framework.
In general, while interpersonal comparability might seem like a strong assumption, there are practical examples where it appears evident. For instance, it is generally conceivable that two people can determine who enjoys chocolate more through a conversation. This necessarily implies that their utilities have a certain level of comparability.
---
Rebuttal Comment 1.1:
Comment: Thank you for the nuanced discussion. This work seems thorough to me, and I will maintain my original rating. | Summary: This paper applies ideas from the Probably Approximately Correct framework to agent alignment. The paper defines a new idea of a policy which is Probably Approximately Aligned and explores the existence of such policies under certain assumptions of social welfare and models of the world. The authors show that probably approximately aligned (and approximately aligned) policies exist when there is a sufficiently accurate world model. However, to compute this policy is quite expensive. Thus, the authors also develop the idea of a safe policy which can be derived using a PAA policy and seems to be a policy that will probably not result in a catastrophically bad state.
Strengths: Overall the paper appears to be a very reasonable application of a well established form of analysis into a novel domain.
The main idea of providing bounds for the quality of an agent's policy is very important and will likely be the focus of much work in the near future. This is quite useful work and appears to me as the potential basis for work that can eventually have significant beneficial impact on the world.
The paper is generally well written and the motivation is clear. In places the math is a little dense but it seems to be as approachable as it can be for this sort of analysis. I do certainly appreciate that you've put a moderate amount of the work into the actual paper rather than stuffing all the important stuff into the appendix.
Weaknesses: Not a weakness, but my disclaimer: I was not able to thoroughly review every detail of the math due to time constraints so my understanding of the paper is limited.
The primary (and minor) issue I see with the paper is that it is quite abstract and doesn't give a clear idea of how close this is to being useful. While obviously difficult to fit into a conference paper, an experimental section may give some intuition for details such as how accurate a world model really needs to be, how beneficial PAA/safe policies are, etc.
It seems that Sec 3.2 is constructive in a sense and provides a PAA policy. Some further commentary on the practicality of this policy (is it entirely impractical to use it for synthetic experiments, or simply impractical in any useful setting/world model?) would help to contextualize the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: You do a good job of stating weaknesses but it seems that the first weakness listed may be quite significant. Is this work essentially just pushing the real difficulty of aligned policies into the task of building a statistically sound world model?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are well stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for the time spent reviewing our paper.
**Usefulness of the theoretical results**
We understand that the reviewer's primary concern is the lack of clarity regarding the applicability of the theoretical results to real-world scenarios. While we do not expect PAA (or safe) policies to be implemented in practical applications in the immediate future, we believe this work lays a foundation for future research, where relaxing some constraints could make the approach more practical. For instance, as mentioned in our response to review Fxrx, one potential relaxation is using surrogate models to provide the necessary feedback for constructing the alignment metric. Another relaxation to address the issue of world model accuracy is to develop world models biased towards predicting catastrophic state-action pairs, as errors in predicting non-critical outcomes are less problematic. The rationale behind the paper's fully theoretical approach is to emphasize the existence of a priori alignment (through PAA policies) rather than presenting and testing a specific policy, with the hope that it will spark further research in this direction.
**Practicality of the proposed PAA policy**
Regarding the practicality of the PAA policy introduced in the paper, it is designed primarily to facilitate mathematical proof of its PAA property in a general setting (hence demonstrating the existence of PAA policies in any context). The policy essentially employs a brute force approach by testing all possible actions to identify the best one. However, in a real-world scenario, domain expertise and other simplifying assumptions would certainly be leveraged to eliminate certain state-action pairs, significantly reducing the algorithm's complexity. We intentionally omitted an experimental section to avoid giving the impression that one of the main contributions of the paper is the PAA policy itself, and that it should be implemented as is.
**Are we simply shifting the difficulty to building better world models?**
The main question of the reviewer is a very important one. We address it by exploring two key implications of our work:
- Firstly, as mentioned in the paper’s introduction, *a priori* alignment is feasible only if we have a sufficient understanding of the consequences of each action available to the autonomous agent. However, our results offer an alternative perspective: they indicate that critical actions should not be delegated to an autonomous agent if the available world model is poor when it comes to predicting their effects. In that sense, our work is the first to provide a quantitative framework for determining which actions can be safely entrusted to an autonomous agent. We believe this type of analysis alone could initiate several research threads in the near future.
- Secondly, while it is true that creating a statistically accurate world model can indeed be as challenging as developing an aligned AI agent, achieving consensus on what constitutes a good world model (i.e., one with high empirical accuracy) is generally more straightforward than defining what constitutes an aligned AI agent (i.e., one with sufficient alignment, which is typically poorly defined). In other words, while our work may shift the challenge to another complex problem, it is one where the solution is easier to verify and where the objective is better defined.
---
Rebuttal Comment 1.1:
Comment: Thank you for the well-reasoned response. I'm not fully convinced that accurately modeling the world is as easy as you make it out to be, but getting into deeper detail is likely out of scope.
---
Reply to Comment 1.1.1:
Comment: We want to clarify that we are not claiming that *modeling* the world is easy in any general setting. If this impression is conveyed in the paper or the rebuttal, please let us know where, and we will address it. Our intention was to answer your initial question by providing context regarding the implications of our work.
One alternative approach to understanding these implications is the comparison with LLM alignment. Pre-training an LLM on a large corpus of text essentially builds a "language" world model, where, given a state (the start of a sentence), the model predicts the probability of the next token with little to no consideration of the alignment of the full sentence. Then, an algorithm (e.g., RLHF or DPO) is used to align the model with the user's interests. While the analogy is somewhat limited, our work would correspond to this second phase. | Summary: The paper aims to define alignment quantitatively and ensure AI agents' actions are predictable and safe. The paper starts by outlining the basics of utility and social choice theory, focusing on quantifying social satisfaction and the conditions under which it is measurable. Next, the paper defines probably approximately aligned (PAA) and approximately aligned (AA) policies and provides a modified sparse sampling algorithm to achieve these policies under certain conditions. The paper also presents the idea of "safe policies" and a method to ensure AI actions are verifiably safe for society.
Strengths: - Originality: This paper introduces a novel, quantitative definition of alignment in social decision-making contexts, drawing from utility and social choice theory.
- Quality: The paper primarily focuses on theoretical contributions rather than empirical experiments. It is well-structured.
- Clarity: The paper provides detailed mathematical derivations and proofs to support the existence of PAA and safe policies. It includes extensive references and context, including foundational works in utility theory, social choice, AI safety, and reinforcement learning, emphasizing the interdisciplinary nature of aligning AI with human values.
- Significance: This work has a significant impact. While primarily theoretical, the work aims to provide a foundation for developing AI systems that could be safely used in critical applications like social governance, policy-making, or resource allocation.
Weaknesses: The safeguarding method is described in a general context, with limited discussion of its applicability to specific real-world problems. Consider adding examples of real-world applications where the safeguarding method could be particularly beneficial. For instance, discuss its application in autonomous vehicle systems, healthcare decision-making, or financial trading algorithms.
Technical Quality: 3
Clarity: 4
Questions for Authors: Could you provide more detailed steps on how the safeguarding method can be practically implemented in real-world systems? Consider adding a roadmap with examples of how to adapt a black-box policy into a safe policy.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discuss various limitations of their approach, including computational complexity for large state spaces and strong assumptions about the availability and accuracy of information. The paper also highlights challenges in building reliable world models and the philosophical questions surrounding the informational basis of utilities.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for the time spent reviewing our paper.
We understand that the primary concern of the reviewer is the gap between the theoretical results presented in the paper and their practical implementation in real-world scenarios (in particular for safe policies). To address this concern, please find below a detailed discussion about the implementability of these policies in several applications, along with a general roadmap.
- **Healthcare decision making**:
We consider a personalized autonomous doctor (black-box AI agent) for a specific patient. Each week, based on the patient’s health data, the agent determines the appropriate medication by consulting the patient about various alternatives regarding aspects such as tolerance to side effects, budget constraints, and preferred timing of administration. In that scenario, the world model could simply be the rates of success and of side effects associated with each medication/dosage during the trial phase of that treatment. Based on these factors, the patient rates each medication alternative on a scale from 1 to 10 and can decide on a minimum discounted welfare level $\omega$. If the agent cannot find a verifiably safe action among its proposed alternatives, it halts and refers the patient to a human doctor. Given the provided guarantees, there will still exist (with high probability) a medication path that generates a future discounted welfare of at least $\omega$.
A certified doctor (or several) could also be directly involved in the process. This involvement could take two forms: (i) the doctor acts as a stakeholder accountable for the AI's recommendation, providing feedback on the risks associated with each alternative (also on a scale from 1 to 10, which is then aggregated with the patient’s score to form a global welfare score), or (ii) the doctor serves as a world model, predicting the impact of each alternative on the patient's health in the absence of other predictive models.
- **Financial trading**:
We consider the example of an autonomous mutual fund, where investors pool their money to generate returns. The investors could collectively set an aggregate tolerance threshold, $\omega$, that the autonomous trading system must guarantee. Before executing any trade, the system could present various trading strategies along with their associated risks to a subset of the investors. These investors would then assign a utility score (e.g., from 1 to 10) to each option, considering factors like risk and investment type. If no strategy meets the risk tolerance $\omega$, the autonomous trader would abort the transaction and either defer to a human trader or return the funds to the investors. This mechanism ensures that each investor can trust the autonomous agent with their capital, offering a level of verifiable security not always present with human traders. This type of framework could strongly appeal to ethical investors, who prioritize not only the financial returns of their investments but also their moral implications.
- **Autonomous driving**:
In such a scenario, where decisions need to be made rapidly (likely at the sensor acquisition rate), direct human feedback is typically not feasible. However, cars could communicate with each other. For instance, when a car intends to change its trajectory (such as overtaking, turning, stopping, or accelerating), it could broadcast various proposed paths to nearby vehicles. These neighboring cars would then assess the risk (e.g., collision potential) of each trajectory based on their own planned actions and surroundings (which might not be visible to the initial car). The initial car would then select a safe trajectory or abort the maneuver and return control to the driver if no safe options are available. Although this may not be the most natural application of PAA policies, we believe that designers of autonomous vehicles could benefit from approaching the autonomous driving problem through a social choice framework, where each car would act as a stakeholder within the environment (i.e., the roads).
- **General roadmap**:
In general, to construct a safe policy from a black-box policy, several factors must be considered. First, we need to establish acceptable tolerance levels, $\omega$ and $\delta$. Next, we must select values for $H, K, C, n$ based on the available resource constraints, such as the number of feedback instances we can gather and the frequency with which we can query the world model to plan each action. Increasing these hyperparameters (excluding $\delta$) typically leads to a lower $\alpha$, resulting in a safe policy that is less prone to delegating its decision authority (i.e., one that is more useful). The accuracy of the world model also influences $\alpha$, with higher accuracy producing a more useful safe policy. If the resulting safe policy is not useful enough, it suggests that either the resources allocated are inadequate or the world model lacks sufficient accuracy (i.e., $\alpha$ is too large). In both cases, it indicates that we are not yet ready to safely use the black-box policy for that particular application. However, if the amount of feedback required is the bottleneck, one can also imagine a proxy setting where each stakeholder delegates its ability to provide feedback to a personalized model (similar to a reward model) that can automatically provide feedback on a given state without human intervention. Note that the safety guarantees would then be valid for the alignment metric computed with these surrogate models rather than with the real utilities.
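The roadmap's final safeguard (halting and relinquishing control when no action verifiably clears $\omega$) can be sketched as follows. This is a minimal illustrative sketch with hypothetical names and signatures, not code from the paper:

```python
def safe_action(candidates, predicted_welfare, omega, defer):
    """Sketch of a safe-policy wrapper (hypothetical names): keep only the
    candidate actions whose predicted discounted welfare under the world
    model clears the tolerance omega; if none qualifies, halt and defer
    decision authority (e.g., to a human)."""
    verified = [a for a in candidates if predicted_welfare(a) >= omega]
    if not verified:
        return defer()  # no verifiably safe action: relinquish control
    return max(verified, key=predicted_welfare)
```

For instance, with welfare estimates of 5, 8, and 2 for three candidate actions and $\omega = 6$, only the second action is verifiably safe; raising $\omega$ to 9 triggers the deferral branch.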
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. It was a good discussion. | Summary: The paper investigates the potential for AI agents to safely make critical decisions, such as those in a government setting, by examining the concept of alignment. It introduces Probably Approximately Aligned (PAA) policies, which are policies that are nearly optimal in aligning with social welfare objectives. The authors draw from utility and social choice theories to provide a quantitative definition of alignment and propose methods to ensure AI actions are verifiably safe for society. They also discuss the practical challenges in implementing such policies and suggest future directions for research in this area. The focus is on developing a theoretical framework that could eventually be applied to AI governance and decision-making processes.
Strengths: The authors draw from utility and social choice theories to provide a quantitative definition of alignment and propose methods to ensure AI actions are verifiably safe for society.
Weaknesses: I think the problem is not well presented.
Technical Quality: 2
Clarity: 1
Questions for Authors: I think the problem is not well presented.
E.g.
Section 3.2 - Algorithm for Computing the Policy:
How the result of Equation (7) is obtained is not clearly explained.
Estimation of Reward (Equation 3):
Equation (3) still appears to be a posterior approach.
Confidence: 2
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for the time spent reviewing our paper.
It is primarily mentioned that the problem is not well presented. We agree that clarity in presentation is crucial and would greatly appreciate specific suggestions on how to improve it. Below is a detailed discussion about the two specific examples that are put forward:
- **Equation (7)**: To prove our main contribution, the existence of PAA policies (Theorem 2), we construct such a policy in Section 3.2. This policy is built on a near-optimal planning algorithm that approximates Q-values using a world model. Once these Q-values are obtained, the policy selects the actions with the highest Q-value, similar to many common reinforcement learning algorithms. Equation (7) formally defines this policy. The technical core of the paper demonstrates that this policy is indeed PAA. In Section 3.2 (with full proofs in the Appendix), we show that the approximate Q-values can be made arbitrarily close to the optimal Q-values. While this specific policy is primarily designed to be provably PAA across various applications, we acknowledge that it is one of many possible PAA policies, and we anticipate that future work may develop more efficient algorithms for specific applications.
- **Equation (3)**: This is still a prior approach according to our definitions because the future states $s'$ are sampled based on the world model $\hat{p}$ rather than the true environment $p$. Thus, the agent does not need to perform actions in the real world to learn the optimal actions. Instead, it can plan according to $\hat{p}$ to ensure that its next action is quasi-optimal with high probability. The policy is therefore a priori aligned, as it ensures alignment before any real actions are taken.
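To illustrate this kind of a priori planning, here is a toy sketch (our hypothetical illustration, not the paper's actual algorithm) that estimates Q-values purely by querying a world model and then, in the spirit of Equation (7), acts greedily on those estimates:

```python
GAMMA = 0.9
ACTIONS = [0, 1]

def world_model(state, action):
    # Toy deterministic stand-in for p_hat; in general the next state
    # would be sampled from the world model's distribution.
    return min(state + action, 3)

def reward(state, action):
    return 1.0 if state == 3 else 0.0  # state 3 is the desirable state

def q_value(state, action, depth):
    """Approximate Q-value computed purely from the world model,
    without taking any action in the real environment."""
    if depth == 0:
        return 0.0
    s_next = world_model(state, action)
    return reward(state, action) + GAMMA * max(
        q_value(s_next, a, depth - 1) for a in ACTIONS
    )

def planned_action(state, depth=2):
    # Eq. (7)-style rule: select the action with the highest approximate Q-value.
    return max(ACTIONS, key=lambda a: q_value(state, a, depth))
```

From state 2, the planner selects action 1 (which leads to the rewarding state) before any real action is taken, which is what makes the resulting alignment a priori rather than a posteriori.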
While the main concerns of the reviewer focus on presentation and clarity, we would greatly appreciate more detailed feedback related to the paper's soundness and contribution, which would help us address the concerns leading to the borderline score. In particular, we would like to understand if the reviewer found that some claims could be better supported (soundness), and if these claims lack originality (or significance) with respect to existing literature (contribution). | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Regularized Adaptive Momentum Dual Averaging with an Efficient Inexact Subproblem Solver for Training Structured Neural Network | Accept (poster) | Summary: This article proposes an optimization algorithm RAMDA for training structured neural networks, which combines a number of optimization techniques including dual averaging, momentum, and coordinate-wise preconditioners. Similar to the existing RMDA algorithm, RAMDA also has the capacity to identify the local manifold structure of the solution. The author(s) provide theoretical analyses to justify the convergence property of RAMDA, and develop an inexact subproblem solver as required by RAMDA.
Strengths: The proposed RAMDA algorithm extends the existing RMDA algorithm by adding a coordinate-wise preconditioner, and its theoretical analysis seems to be novel.
Weaknesses: I think one major weakness of the current manuscript is the **correctness** of some theoretical results presented in the article.
1. Theorem 1 suggests that the regularizer function $\psi$ can be nonconvex. However, as the RAMDA algorithm heavily relies on the proximal operator of $\psi$, how do you define the proximal operator when $\psi$ is nonconvex? For example, equation (6) is used to define the new iterate $W^t$, but when $\psi$ is nonconvex, it is likely that the "argmin" is a set and is not uniquely defined.
2. Taking a closer look at the proof of Theorem 1, I feel that the author(s) may have a misunderstanding of an existing theorem. In Appendix B, equation (11) is obtained by citing Theorem 10.15 of [1]. However, Theorem 10.15 of [1] applies to functions of the form $F(x)=f(x)+\psi(x)$, where $f$ is smooth and nonconvex, but $\psi$ is convex. In other words, the non-convexity only applies to the smooth part, not the regularizer.
3. If the findings above are valid, then the author(s) may need a thorough examination of the technical proofs to see if there is any error.
4. If we assume $\psi$ is convex, then there should be a Nesterov-accelerated version of Algorithm 2 that converges in $O(\varepsilon_t^{-1/2})$ iterations, which is faster than the rate given in Theorem 1.
[1] Beck, A. (2017). First-order methods in optimization. Society for Industrial and Applied Mathematics.
===============================================================
Edit: during the rebuttal the author(s) seem to have addressed the concerns above.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the "Weaknesses" section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of the proposed algorithm in Section 5 and Appendix C.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful checking of our paper, including the proofs.
These days, it is increasingly rare to see a reviewer spend this much effort and time reviewing a paper, and we truly appreciate it.
We also thank the reviewer for pointing out our careless errors, which gives us a chance to correct them and obtain better guarantees.
Our reply is as follows.
**Q1**
It is indeed correct that when the regularizer is nonconvex, the argmin could be a set, and thus the proximal operator becomes a set operator.
This is already reflected in our proof of Thm 1 (see the eq between Lines 544 and 545 on P.14).
In this case, we can select any minimizer from the set of minima and the analysis would not be affected. (In the main paper, since we are only approximately solving the problem, whether the real solution is unique does not matter in the algorithmic description.)
This is the standard case in analysis involving the proximal operator of a nonconvex function. See, for example, the canonical work of Attouch et al. (2013).
**Q2 & Q3**
Our result is correct, but the citation is indeed incorrect and one minor assumption is missing; thank you for pointing this out.
A better reference (although with only a discussion rather than a formal theorem) would be Section 5.1 of Attouch et al. (2013).
The additionally required assumption is that the objective function is lower-bounded by some finite value (which does not necessarily require convexity of the regularizer).
For completeness, we provide a proof for the claimed rate in (11) here.
Consider applying proximal gradient with stepsize $\theta_t$ to a function $\bar Q_t(Z) := f_t(Z) + \psi (Z)$, where $f_t$ is $L$-smooth and $\psi$ comes with an easily computable proximal operator.
Let the iterates be updated by
$$Z^{j+1} \in \arg\min_{Z} \left(\hat Q_t(Z; Z^j) := \langle \nabla f_t(Z^j), Z - Z^j \rangle + \frac{1}{2 \theta_t} \|Z - Z^j \|^2 + \psi(Z)\right),$$
then, since $Z^{j+1}$ is a minimizer of $\hat Q_t(Z; Z^j)$, we know that
$\hat Q_t(Z^{j+1}; Z^j) \leq \hat Q_t(Z^j; Z^j)$, which implies
$$\langle \nabla f_t(Z^j), Z^{j+1} - Z^j \rangle + \frac{1}{2 \theta_t} \|Z^{j+1} - Z^j \|^2 + \psi(Z^{j+1}) \leq \psi(Z^j).$$
On the other hand, from the $L$-smoothness of $f_t$, we know that
$$f_t(Z^{j+1}) - f_t(Z^j) \leq \langle \nabla f_t(Z^j), Z - Z^j \rangle + \frac{L}{2} \|Z - Z^j \|^2.$$
Adding these two inequalities together, we obtain that
$$\bar Q_t(Z^j) - \bar Q_t(Z^{j+1}) \geq (\frac{1}{2 \theta_t} - \frac{L}{2}) \|Z^{j+1} - Z^j\|^2.$$
By taking $\theta_t = 1/(2L)$ (as we did in the first line of Alg 2), summing the above inequality for $j=0,1,2,\dotsc,T$, and noting that $\bar Q_t(Z^0) - \bar Q_t(Z^{T+1})$ is upper bounded by $\bar Q_t(Z^0) - \bar Q_t^*$, we obtain our claimed rate in equation (11).
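This sufficient-decrease argument can be checked numerically. The sketch below is a toy instance (an $L$-smooth quadratic plus an $\ell_1$ regularizer, not the paper's actual subproblem): it runs proximal gradient with stepsize $\theta_t = 1/(2L)$ and verifies the per-iteration decrease that telescopes into the rate in (11):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 20, 0.1
M = rng.standard_normal((n, n))
A = M.T @ M / n                          # PSD Hessian of the smooth part
b = rng.standard_normal(n)
L = np.linalg.eigvalsh(A).max()          # smoothness constant of f

grad = lambda x: A @ x - b
Q = lambda x: 0.5 * x @ A @ x - b @ x + lam * np.abs(x).sum()    # f + psi
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1

theta = 1.0 / (2.0 * L)                  # stepsize used in Algorithm 2
x = rng.standard_normal(n)
for _ in range(50):
    x_new = soft(x - theta * grad(x), theta * lam)  # proximal gradient step
    decrease = Q(x) - Q(x_new)
    bound = (1.0 / (2.0 * theta) - L / 2.0) * np.sum((x_new - x) ** 2)
    assert decrease >= bound - 1e-10     # sufficient decrease holds
    x = x_new
```

Summing `bound` over the iterations telescopes exactly as in the derivation of the claimed rate.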
**Q4**
For the convex case, we now realize that the rate can actually be improved to $O(\log \epsilon_t^{-1})$. The idea is that when the regularizer is convex, $\bar Q_t$ is strongly convex.
Indeed, from our construction of $P^t$, we know that the smallest diagonal entry is at least $\epsilon$, and therefore the subproblem is $\epsilon$-strongly convex and the smooth part is $L$-smooth, where $L = \max(diag(P^t)) + \epsilon$.
We denote the global optimum by $\bar Q^*$ and $\kappa := L/\epsilon$.
We know that $\bar Q_t(Z^j) - \bar Q_t(Z^{j+1})$ is upper bounded by $\bar Q_t(Z^j) - \bar Q_t^*$, which converges at a linear rate (see, for example, equation (3.4) of Drusvyatskiy & Lewis (2018)):
$$ \bar Q_t(Z^j) - \bar Q_t (Z^{j+1}) \leq \bar Q_t(Z^j) - \bar Q_t^* \leq \left(1 - \frac{1}{2\kappa}\right)^j(\bar Q_t(Z^0) - \bar Q_t^*).$$
On the other hand, with the stepsize being $1/L$, we know from Theorem 1 of Nesterov (2013) that
$$\bar Q_t(Z^j) - \bar Q_t(Z^{j+1}) \geq \frac{L}{2} \|Z^j - Z^{j+1}\|^2.$$
Combination of these two results then gives that it takes at most $O(\log \epsilon_t^{-1})$ iterations to satisfy (7).
On the other hand, for a convex $\psi$, accelerated proximal gradient has the same dependency on $\epsilon_t$, although with a possibly faster linear convergence rate.
However, accelerated proximal gradient does not guarantee that the objective value is always decreasing, so the second condition in (7) might be violated.
This is why we did not apply it as our subproblem solver.
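As a numerical sanity check of the claimed linear rate in the convex case, the toy sketch below (a hypothetical $\epsilon$-strongly convex quadratic plus $\ell_1$ instance standing in for the subproblem) runs proximal gradient with stepsize $1/L$ and checks the geometric decay of the objective gap:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, lam = 15, 0.5, 0.1
M = rng.standard_normal((n, n))
A = M.T @ M / n + eps * np.eye(n)        # smooth part is eps-strongly convex
b = rng.standard_normal(n)
L = np.linalg.eigvalsh(A).max()
kappa = L / eps                          # condition number kappa = L / eps

grad = lambda x: A @ x - b
Q = lambda x: 0.5 * x @ A @ x - b @ x + lam * np.abs(x).sum()
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
step = lambda x: soft(x - grad(x) / L, lam / L)   # prox-grad, stepsize 1/L

x_star = np.zeros(n)                     # high-accuracy reference optimum
for _ in range(20000):
    x_star = step(x_star)
Q_star = Q(x_star)

x = rng.standard_normal(n)
gap0 = Q(x) - Q_star
for _ in range(100):
    q_prev = Q(x)
    x = step(x)
    assert Q(x) <= q_prev + 1e-9         # objective never increases
# after 100 iterations the gap respects the claimed linear rate
assert Q(x) - Q_star <= (1 - 1 / (2 * kappa)) ** 100 * gap0
```

The monotone decrease checked in the loop is exactly the property that accelerated proximal gradient lacks, which is why the latter may violate the second condition in (7).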
**References:**
Attouch, Hedy, Jérôme Bolte, and Benar Fux Svaiter. "Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods." Mathematical Programming 137.1 (2013): 91-129.
Nesterov, Yu. "Gradient methods for minimizing composite functions." Mathematical programming 140.1 (2013): 125-161.
Drusvyatskiy, Dmitriy, and Adrian S. Lewis. "Error bounds, quadratic growth, and linear convergence of proximal methods." Mathematics of Operations Research 43.3 (2018): 919-948.
---
Rebuttal Comment 1.1:
Comment: Thanks for the updates. I'll need some time reading the new materials.
---
Reply to Comment 1.1.1:
Title: Typos
Comment: We would like to correct two typos in our previous rebuttal.
1. In our reply to Q2 & 3, in the one line inequality between "On the other hand ......" and "Adding these two ......", the two $Z$ on the right-hand side should be $Z^{j+1}$.
2. In our reply to Q4, $L$ should be defined as $L =max(diag(P^t))$. ($\epsilon$ is already included in the definition of $P^t$).
Sorry for any confusion these typos might have caused.
Strengths: 1. Theoretical results establish convergence to the solution of the subproblem when proximal gradient methods are used as the local solvers, and the almost-sure convergence of RAMDA, which derives from manifold theory under the standard $L$-smoothness assumption on the objective function $f$.
2. Empirical results illustrate the superior performance of RAMDA over RMDA and other existing gradient-based methods for various neural network tasks. The criteria used, e.g. for solving the subproblems, are clearly stated in the numerical experiments.
Weaknesses: 1. I think there is an error in Eq. (3), where it should be the square root $\sqrt{\cdot}$ in the diagonal operator for $P^t$. This is because $P^t$ uses $U^t$, which is computed from the element-wise product of the gradient $G^t$ with itself. Is $P^t$ inspired by the AdaGrad stepsizes? If so, then adding justification for using $P^t$ in RAMDA would be worthwhile to better distinguish RAMDA from RMDA.
2. In the experiments, can you comment on the impact of different $\epsilon_t$ on the training performance of RAMDA? I believe that with an $\epsilon_t$ a bit higher than the $10^{-8}$ set in your experiments, RAMDA might achieve a far lower training time than other methods while still keeping comparable perplexity when solving Transformer-XL with WikiText-103 in Table 4, or Tacotron2 with LJSpeech in Table 5.
Technical Quality: 3
Clarity: 3
Questions for Authors: I listed questions as part of weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of current theoretical results are clearly and fairly discussed after Theorem 2.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful evaluation of our paper and the invaluable suggestions. Our response is as follows.
**Q1.**
Equation (3) is indeed correct: we are using the cube root. This choice of preconditioner follows the empirical success of the MADGRAD algorithm of Defazio & Jelassi (2022) for smooth optimization, and our proof indicates that using the cube root still leads to convergence. More discussion of the reasoning for using the cube root can be found in Sec. 5.5.4 of Defazio & Jelassi (2022), but in general we consider the numerical results stronger evidence than those verbal explanations. On the other hand, even with the cube root, $P^t$ is still inspired by AdaGrad in that it accumulates the previous squared gradients. We will add a discussion of AdaGrad in our revision.
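As a rough sketch of this kind of update (our hypothetical reconstruction for illustration, not the exact Equation (3)), the preconditioner accumulates element-wise squared gradients as AdaGrad does, but applies a cube root as in MADGRAD, while keeping every diagonal entry at least $\epsilon$:

```python
import numpy as np

def update_preconditioner(U, G, eps=1e-8):
    """Hypothetical MADGRAD-style diagonal preconditioner update:
    U accumulates element-wise squared gradients (AdaGrad-style), and
    the diagonal of P is its cube root shifted by eps, so that every
    diagonal entry stays at least eps > 0."""
    U = U + G * G
    P_diag = np.cbrt(U) + eps
    return U, P_diag

# One update from a zero accumulator with gradient (1, 2, 3):
U1, P1 = update_preconditioner(np.zeros(3), np.array([1.0, 2.0, 3.0]))
```

Replacing `np.cbrt` with `np.sqrt` would recover the familiar AdaGrad-style square-root preconditioner the reviewer expected.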
**Q2.**
It is expected that with a looser subproblem stopping condition, we could reduce some of RAMDA's running time. However, it is quite unlikely that it could become faster than the other methods: all methods share some common per-iteration operations, including computing the gradient (usually the major bottleneck) and several vector additions (for updating the iterates and the accumulated gradient norm), and a looser subproblem stopping condition would not affect these parts. The best we can hope for is that RAMDA and ProxGen would be no slower than the methods that do not involve solving a subproblem without a closed-form solution.
With further investigation, we found that in Tacotron2 with LJSpeech, the time spent on solving the subproblem is very small, at 2.7% and 1.6% of the total training time for RAMDA and ProxGen, respectively.
This suggests that loosening the subproblem stopping condition would provide only very limited benefit.
For Transformer-XL with WikiText-103, the time percentages spent on solving the subproblems are 10% for RAMDA and 5.9% for ProxGen, indicating a rather high burden.
Consequently, for this problem, we tried increasing the stopping threshold from $10^{-8}$ to $10^{-6}$, which resulted in a 20.2% reduction in the subproblem time for RAMDA, while that of ProxGen remained almost unchanged (suggesting that there is probably a sharp decrease in the objective improvement, so that both the old and the new stopping conditions were reached at the same iteration). As you conjectured, both RAMDA and ProxGen with this looser stopping condition produced validation perplexity and weighted structured sparsity comparable to the original ones. However, as mentioned above, the resulting running time difference is not that large. (For ProxGen, it is likely that the algorithm behavior did not actually change, given that the subproblem time remained unchanged.)
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, and your empirical results with fair discussions on the impact of $\epsilon_t$. Because of this, I would like to maintain my score. | Summary: #### Summary
The paper introduces the Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm for training structured neural networks. RAMDA addresses the challenge of solving the subproblem involved in the regularized adaptive methods, which typically lacks a closed-form solution. The paper presents an inexactness condition that retains convergence guarantees and proposes an efficient subproblem solver. The algorithm leverages manifold identification theory to ensure that the iterates of RAMDA attain the ideal structure induced by the regularizer at convergence. Extensive experiments demonstrate the effectiveness of RAMDA in various tasks, including computer vision, language modeling, and speech synthesis.
Strengths: #### Strengths
1. **Novel Algorithm**: RAMDA combines adaptive momentum dual averaging with efficient inexact subproblem solving, providing a practical and theoretically sound method for training structured neural networks.
2. **Theoretical Guarantees**: The paper provides strong theoretical support, including convergence guarantees and structure identification, ensuring the algorithm's robustness.
3. **Practical Efficiency**: The proposed inexact subproblem solver is efficient, making RAMDA feasible for large-scale applications.
4. **Empirical Validation**: Extensive experiments across multiple domains demonstrate the superior performance of RAMDA in terms of both prediction accuracy and structured sparsity.
Weaknesses: #### Weaknesses
1. **Computational Complexity**: The computational complexity of the proposed subproblem solver, especially for high-dimensional data, needs more detailed discussion.
2. **Generality**: While the paper focuses on specific types of structured neural networks, extending the methodology to other models and regularizers would enhance its generality.
3. **Comparative Analysis**: More detailed comparisons with other state-of-the-art methods, beyond the provided benchmarks, would strengthen the empirical validation.
4. **Implementation Details**: Practical guidelines for implementing RAMDA, including parameter tuning and handling different data distributions, are somewhat lacking.
Technical Quality: 3
Clarity: 3
Questions for Authors: #### Questions
1. **Computational Complexity**:
- Could you provide more details on the computational complexity of the proposed subproblem solver? How does it scale with increasing data size and model complexity?
2. **Generality**:
- The paper focuses on structured neural networks with specific regularizers. Are there any challenges in extending RAMDA to other types of models or regularizers, such as those used in different machine learning tasks?
3. **Comparison with Existing Methods**:
- How does RAMDA compare empirically with other state-of-the-art methods for training structured neural networks? Are there specific scenarios where RAMDA significantly outperforms these methods?
4. **Implementation Guidelines**:
- Can you offer practical guidelines for implementing RAMDA in real-world scenarios? Specifically, how should practitioners tune the parameters, such as the learning rate and the inexactness threshold?
5. **Assumptions and Limitations**:
- The paper discusses some assumptions and limitations. Could you elaborate on the key assumptions that are critical for the theoretical results, and how robust the method is to violations of these assumptions?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: no
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the evaluation of our paper. Our reply is as follows.
1. Computational Complexity: The complexity of the subproblem (Eq. 4) depends on the regularizer, and especially on its associated proximal operation. Let the subproblem dimension, which is also the model size, be $n$. Most widely used regularizers come with a proximal operator whose cost scales as $O(n)$, like the sparsity-inducing ones. However, there are also cases with a higher dependency on the problem dimension, such as the nuclear-norm regularizer we adopted in Appendix E, whose cost scales as $O(n^3)$. The subproblem complexity's dependency on the model size is therefore determined by the regularizer.
On the other hand, the data size does not affect the subproblem solver at all; it likely affects only the number of outer iterations and the time cost of computing the stochastic gradient.
2. Generality: We disagree that this paper focuses on specific regularizers. For experimental purposes we indeed had to select some representative regularizers, because there is no way to exhaust all possible regularizers in experiments. However, our discussion, our assumption on the regularizer, and our analysis are all general, so that a broad class of regularizers is covered. On the other hand, as discussed in Appendix E and in the previous point, some regularizers might be useful but too costly for the subproblem solver, and in these cases more domain knowledge might be required to design better structure groupings or subproblem solvers to tackle the cost issue.
As for extending to other machine learning tasks, RAMDA is specifically designed for deep neural networks and is probably less suitable for other machine learning tasks. In particular, adaptive methods are, to our knowledge, not widely used in other machine learning tasks, and we therefore do not expect our method to be as competitive on them.
3. Comparison with Existing Methods: We have compared RAMDA with other state-of-the-art methods for training structured neural networks in Section 6. Our numerical results have clearly indicated that RAMDA produces better prediction performance and higher structuredness simultaneously than state of the art, and with running time comparable to the most efficient algorithm. This shows that RAMDA indeed pushes forward the state of the art for training structured neural networks.
It is well-known that adaptive methods tend to be most successful in many state-of-the-art deep neural networks, especially the transformer (see Zhang et al. (2020)). It is also the case for RAMDA: our experiments have shown that it performs the best on such models. It also has outstanding performance on large-scale problems like the ImageNet problem.
4. Implementation Guidelines:
There are four types of hyperparameters to tune:
- Learning rate: We do a grid search to tune this hyperparameter in all of our experiments. Otherwise, $10^{-2}$ is a good default guess if computational resources are limited.
- Learning rate scheduling: In all of the experiments, we use stage-wise learning rate scheduling for RAMDA. Each stage has an equal number of iterations, and the multiplicative factor of learning rate decay at each stage is $10^{-1}$.
- $\lambda$ (weight of the regularization): We do grid search to tune this hyperparameter in all of our experiments.
- Momentum: $10^{-1}$ or $10^{-2}$ are good choices. We simply chose $10^{-2}$ in all the experiments of Section 6.
Inexactness threshold: There are two hyperparameters that control the inexactness threshold (see Section 6):
- max_iters: the maximum number of iterations. We set it to $100$ in all the experiments.
- rtol := (previous_subproblem_objective - current_subproblem_objective) / (abs(current_subproblem_objective) + 1.0): the criterion for early stopping. We set it to $10^{-8}$ in all the experiments except Table 1.
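The two thresholds above can be sketched as a simple stopping loop; `solve_subproblem` and `step` are hypothetical names for illustration, not the paper's implementation:

```python
def solve_subproblem(step, max_iters=100, rtol=1e-8):
    """Run inner iterations until the relative objective decrease
    falls below rtol, or until max_iters iterations are used
    (default values are the ones quoted in the rebuttal)."""
    prev_obj = float("inf")
    for it in range(1, max_iters + 1):
        obj = step()  # one inner iteration; returns the current objective
        if (prev_obj - obj) / (abs(obj) + 1.0) < rtol:
            return obj, it  # early stop
        prev_obj = obj
    return prev_obj, max_iters
```

A toy run with a stalling objective sequence stops as soon as the relative decrease hits zero:

```python
objs = iter([10.0, 5.0, 5.0])
obj, iters = solve_subproblem(lambda: next(objs))  # stops at iteration 3
```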
Implementation and experimental code for reproducibility has also been included in the supplementary materials.
5. Assumptions and Limitations: Theorems 1 & 4 rely on very few assumptions, and are therefore applicable to almost all situations. For Theorem 2, as discussed after the theorem statement, the key assumption is that the iterates converge to a point. Its satisfaction might require some additional regularity conditions that are usually satisfied by generic problems, but the analysis would be very involved, so we leave it as future work. In practice, we notice that with a proper hyperparameter search this assumption seems to hold in general -- hyperparameters that lead to divergent iterates are usually not adopted.
For Theorem 3, the nondegeneracy condition is the key assumption, and again it holds for generic problems (since the relative boundary is a measure-zero set, the event that the negative gradient falls on it has probability zero).
---
Rebuttal Comment 1.1:
Title: Learning rate scheduling setup
Comment: More details regarding the learning rate scheduling setup: We begin with determining the total number of epochs, often considered as a budget, and then split them into 3 to 5 roughly equal stages. While more epochs typically lead to better performance when resources allow, in some cases, the total number of epochs should be adjusted based on validation performance to identify the optimal point. | null | null | Rebuttal 1:
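The stage-wise schedule described in the comment above (an epoch budget split into roughly equal stages, with a multiplicative decay of $10^{-1}$ at each stage boundary) can be sketched in pure Python; `stagewise_lr` is a hypothetical helper, not code from the paper:

```python
def stagewise_lr(lr0, total_epochs, num_stages, decay=0.1):
    """Per-epoch learning rates for a stage-wise schedule.

    The epoch budget is split into num_stages roughly equal stages;
    the learning rate is multiplied by `decay` at each stage boundary.
    """
    stage_len = total_epochs // num_stages
    return [lr0 * decay ** min(e // stage_len, num_stages - 1)
            for e in range(total_epochs)]

# e.g. a 9-epoch budget in 3 stages: 0.01 for 3 epochs, then 0.001, then 0.0001
schedule = stagewise_lr(0.01, 9, 3)
```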
Rebuttal: We thank the reviewers for their careful evaluations of our paper.
We will individually reply to the reviewers to address their specific questions, and here we would like to highlight some changes that we will make in our revision.
- For Theorem 1, the rate when $\psi$ is convex can be improved to $O(\log \epsilon_t^{-1})$. This follows from the previously unutilized observation that the objective function is actually strongly convex, so the proximal gradient method is expected to have a linear rate of convergence instead of the current slower sublinear one.
- In the proof of Theorem 1, reviewer 4z2u pointed out that there is a wrong citation for the case of nonconvex $\psi$. A better reference would be Section 5.1 of Attouch et al. (2013). We will also add the derivation to show that indeed this rate is correct, just the citation was wrong. More details are in our response to reviewer 4z2u.
- There is a typo in our proof of Theorem 1: in (16), (21), and the unnumbered equation before (21),
$$\nabla E_{\xi \sim D} [ f_{\xi}( W^{t})] \circ \nabla E_{\xi\sim D} [ f_{\xi} ( W^{t}) ]$$
should instead be
$$E_{\xi\sim D} [ \nabla f_{\xi} ( W^{t} ) \circ \nabla f_{\xi}( W^{t} ) ].$$
**Reference:**
Attouch, Hedy, Jérôme Bolte, and Benar Fux Svaiter. "Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods." Mathematical Programming 137.1 (2013): 91-129. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-modal Transfer Learning between Biological Foundation Models | Accept (poster) | Summary: The paper introduces a novel multi-modal model, IsoFormer, designed to integrate DNA, RNA, and protein sequences for predicting RNA transcript isoform expression across different tissues. It utilizes pre-trained modality-specific encoders to generate embeddings that are then combined using a sophisticated aggregation method. The model demonstrates significant improvements in prediction accuracy compared to single-modality approaches.
Contribution:
1. Developed the first general-purpose multi-modal model integrating DNA, RNA, and protein sequences.
2. Demonstrated successful application of transfer learning from modality-specific encoders.
3. Provided a new robust framework for advancing the prediction of RNA transcript isoform expression.
Strengths: 1. Innovative Integration of Modalities: The paper presents the first attempt to integrate three biological sequence modalities (DNA, RNA, and proteins) in a unified model, providing a comprehensive approach reflective of natural biological processes.
2. Effective Transfer Learning: IsoFormer effectively leverages pre-trained encoders to enhance its predictive power, benefiting from both intra-modal and inter-modal transfer learning.
3. Robust Evaluation: Experiments demonstrate the model's capability, outperforming existing methods in predicting transcript isoform expression, which is a challenging task due to its multi-modal nature.
Weaknesses: 1. Complexity and Computation: The model's complexity and the computational demands might limit its accessibility and use, particularly in environments with restricted resources.
2. More Comprehensive Evaluation for PLM's representation learning capability would make this paper better.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) For Tab 5., wonder what’s the performance for “DNA and RNA encoder not pre-trained”
(2) Could authors provide evaluation results for DNA, RNA and protein encoder separately on their own popular benchmarking tasks? Ideally, these models should improve on those downstream tasks as well. Would give 7’ or even 8’ if authors conduct these experiments. It could be a great and exciting work in this AI4Bio field.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. Data Requirements: The effectiveness of the model is contingent on the availability of comprehensive and high-quality multi-modal datasets.
2. Generalizability: While promising, the results are primarily validated on specific types of gene expression data, and its performance across broader biological applications remains to be fully assessed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful feedback and positive comments of our work.
> For Tab 5., wonder what’s the performance for “DNA and RNA encoder not pre-trained”
We have now completed this ablation study with full evaluation of the effect of pre-training for each encoder (detailed in the attached PDF, Table 2). Interestingly, we observe in every setting that using a pre-trained encoder against its randomly initialized counterpart consistently leads to improved performance. This further demonstrates that IsoFormer leverages modality-specific pre-training.
> Could authors provide evaluation results for DNA, RNA and protein encoder separately on their own popular benchmarking tasks? Ideally, these models should improve on those downstream tasks as well. Would give 7’ or even 8’ if authors conduct these experiments. It could be a great and exciting work in this AI4Bio field.
We evaluated the downstream performance of the Nucleotide Transformer model before and after fine-tuning it within the Isoformer on the isoform prediction task (when considering it as a base DNA model). We selected the prediction of gene expression measured by bulkRNA-seq from Chia Hsiang et al. [1] as it aligns with the fine-tuning task of the IsoFormer while relying on a different data source. We show a clear increase in performance of the NT model before and after fine-tuning within the IsoFormer framework (see attached PDF, Table 3). We will add these results to the paper. This highlights the potential of our approach to jointly pre-train models for different modalities. We will discuss in the conclusion section how this could be extended to a larger set of downstream tasks across modalities, which will probably require expanding the tasks and datasets used to train the IsoFormer.
> Data Requirements: The effectiveness of the model is contingent on the availability of comprehensive and high-quality multi-modal datasets.
The reviewer brings up one of the key motivating aspects of our work. One of the biggest bottlenecks in the development of models integrating multiple biological sequences has been the lack of available matched data between DNA, RNA, and proteins. These multi-modal datasets are becoming more available but the amount of data is not enough to train models from scratch (as we demonstrate in Table 5 of the paper). Our approach bridges this gap by leveraging uni-modal encoders that have been trained on high-quality uni-modal datasets and achieving transfer learning across modalities. Our work demonstrates that this limitation can be overcome.
[1] Kao, Chia Hsiang, et al. "Advancing dna language models: The genomics long-range benchmark." ICLR 2024 Workshop on Machine Learning for Genomics Explorations. 2024.
---
Rebuttal Comment 1.1:
Comment: Hi, thanks for all the efforts and the detailed response. For my question (2), we refer to benchmarking tasks used in protein language models, RNA language models, and DNA language models, not only the transcript isoform expression prediction task. Not sure if the remaining time would be enough. Sorry that I didn't respond promptly because of work during weekdays.
---
Reply to Comment 1.1.1:
Title: Clarification on the additional gene expression task
Comment: For clarification, the additional experiment is the prediction of gene expression measured by bulkRNA-seq from Chia Hsiang et al. [1], which is different from the isoform expression prediction task that was used to train IsoFormer. In the transcript isoform expression prediction task, the goal is to predict the expression of each RNA transcript isoform of a given gene, whereas in the gene expression prediction task, the goal is to predict the expression of a gene from its DNA sequence - and this is one of the key benchmarking tasks for DNA language models nowadays. Those tasks are different but correlated, hence we chose it as we expect transferability between them. Our experiment demonstrates that training Nucleotide Transformer as a DNA encoder within our multi-modal transcript expression framework increases its performance on this gene expression task.
We did not have the time to run extensive experiments on existing benchmarks for protein and RNA language models before the end of discussion period but we agree that those would be valuable insights. This is an interesting step for future iterations of this work that we will add to the conclusion of the paper.
[1] Kao, Chia Hsiang, et al. "Advancing dna language models: The genomics long-range benchmark." ICLR 2024 Workshop on Machine Learning for Genomics Explorations. 2024. | Summary: The paper introduces a new framework for the multi-modality pretrain model according to the Central dogma of biology. The method encode DNA, protein and RNA at the same time. The proposed method can transfer knowledge from the encoders pretraining and modalities.
Strengths: The paper is well-organized and easy to follow
The authors have proved that the multi-modality of single cell data can help model predictions.
Weaknesses: Lack of experiments. 1. More ablation studies should be conducted by removing different modalities of the model in Table 2 (e.g., we observe that RNA alone can achieve high performance; what about protein+RNA?).
2. More dataset details should be included. The split of training/validation/test sets is not clear. If the authors do the experiments on the same dataset, they should split the dataset according to the tissues to validate the transferability of the proposed method.
Technical Quality: 2
Clarity: 2
Questions for Authors: The authors should include more motivation about the proposed method. For example, why can the changed DNA influence the RNA seq? Why not directly predict RNA expression from RNA seq?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The biological system is more complex. The proposed method only include the direct map from DNA to RNA. However, in real world, RNA can effect the expression of DNAs. More details should be discuss in the future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate this reviewer's comments and suggestions that will improve the revised manuscript.
> More ablation studies should be conducted about removing different modalities of the model in Table 2 ( e.g. we observe only RNA can achieve a high performance, what about protein+RNA? ).
We have now performed the suggested, and missing, ablation studies for NT, Enformer and Borzoi (added now) models (see attached PDF, Table 1). We now can compare the results of using any combination of one, two, or three modalities with three different DNA encoders. These results reinforce the advantage of combining multiple modalities to solve the isoform expression prediction task. The new version of the paper will incorporate these results and their appropriate interpretation. Many thanks for the suggestion.
> More dataset details should be included. The split of training/validation/test sets is not clear. If the authors do the experiments on the same dataset, they should split the dataset according to the tissues to validate the transferability of the proposed method.
Regarding the data split in training/validation/test sets, we will provide more details on the methods. Briefly, we have followed standard approaches for sequence-based models where different chromosomes are held out for the different sets (chromosomes 1-19 for training, 20 for validation and 21-22 for test). This ensures no data leakage due to overlapping or recombinant sequences. Since the only input are DNA/RNA/AA sequences, we do not expect transferability between tissues and can only do tissue-specific predictions for the tissues used during training.
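The chromosome-based split described above can be sketched as follows; the function name and the string chromosome identifiers are illustrative assumptions, and the chromosome assignments are the ones stated in the reply:

```python
TRAIN_CHROMS = {f"chr{i}" for i in range(1, 20)}  # chromosomes 1-19
VAL_CHROMS = {"chr20"}
TEST_CHROMS = {"chr21", "chr22"}

def assign_split(chrom):
    """Assign a transcript to train/val/test by its chromosome,
    ensuring no sequence leakage across splits."""
    if chrom in TRAIN_CHROMS:
        return "train"
    if chrom in VAL_CHROMS:
        return "val"
    if chrom in TEST_CHROMS:
        return "test"
    raise ValueError(f"chromosome not used in any split: {chrom}")
```

Because whole chromosomes are held out, overlapping or recombinant sequences cannot appear in two splits at once.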
> The authors should include more motivation about the proposed method. For example, why can the changed DNA influence the RNA seq? Why not directly predict RNA expression from RNA seq?
We will also include more motivation about the proposed method. For further clarification, here we are predicting isoform expression which is measured by RNA-seq. Thus, it would be trivial to predict it from RNA-seq and the main challenge is to predict it from DNA sequence. Transcript isoform expression is influenced by DNA regulatory elements in addition to RNA regulatory elements, and that is the main motivation for combining multiple encoders to improve performance on this task.
> The biological system is more complex. The proposed method only includes the direct map from DNA to RNA. However, in the real world, RNA can affect the expression of DNAs. More details should be discussed in the future work.
We will also improve the discussion of the complexity of gene regulation and the system. Indeed, the fact that RNA can also affect in turn the expression of genes from DNA is one of the main motivations to mix DNA and RNA sequences in our approach, modeling interactions between biological modalities based on their sequence alone.
We hope our revisions address your concerns. We would be happy to expand on any of these points throughout the discussion period if further clarification is needed.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I have raised my score. Hope your next version can be more clear | Summary: The paper models isoform relative abundance across tissues with a multimodal approach based on 3 pretrained encoders for DNA, RNA, and AA sequences. DNA encoder uses a sequence centered on the gene’s TSS, RNA encoder uses the known isoform sequence from RNAseq and the protein encoder uses corresponding AA sequence. They perform multiple ablations on the utility of having all 3 separate encoders and, given separate encoders, how to aggregate them into a single isoform specific embedding/prediction, and look at attention layers of RNA module to find biologically meaningful regions of attention.
Strengths: Isoform level analysis using 3 separate pretrained encoders for DNA, RNA, and AA sequences is a good strategy. The authors provide useful ablations on the utility of the multi modal approach and on modern strategies for combining those into a single embedding. Looking for biolgoically meaningful interpretations of attention layers is useful.
Weaknesses: I don’t think the authors can claim this is the first attempt to combine DNA, RNA, and AA modalities with techniques from NLP. See the recent Evo work here https://www.biorxiv.org/content/10.1101/2024.02.27.582234v2 . While they evaluate their performance against Enformer, that’s a large part of their own model. So the evaluations have an intramural feel to them. It’d be interesting to see how their strategy compares to other multi modal models such as Evo, and more RNA centric work like Borzoi, which looks at a more fine grained look of variant effects on the DNA to RNA relationship. Looking at average isoform abundance across individuals is all well and good, but GTEx also has individual genomes, and genomic variation across individuals will also of course affect splicing patterns and which isoforms come from what individuals.
Technical Quality: 3
Clarity: 2
Questions for Authors: Some comments and questions:
Centering on TSS will only capture regulatory elements within the chosen sequence length. There could be distal or trans CREs outside
GTEx database has less than 1000 donors, confirming the 5000 individuals claim?
Looking at equation 5 in 4.4, how does f_psi depend on the tissue T? Is it a separate head per tissue, or are you predicting the vector of isoform abundance across tissues with one pass? And is f_theta and f_phi the same f but different weights as f_psi? Where does the summation over i take place?
5.1 does ablations with one DNA encoder, then 5.2 shows superior performance with Enformer as the DNA encoder. So the ablations in 5.1 may not be accurate with respect to this new encoder. It also begs the question of how would the Enformer do as the RNA encoder as well.
How are RNA and protein sequence lengths handled when they’re longer than the model input sequence size?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors should be more explicit about the limitation of using reference genome and known isoform sequences and how this kind of sweeps splicing as a function of dna sequence under the rug.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the constructive comments and positive assessment of our work.
> I don’t think the authors can claim this is the first attempt to combine DNA, RNA, and AA modalities with techniques from NLP. See the recent Evo work
Evo is a model based solely on DNA sequences and has been applied to RNA and protein modalities using their respective DNA sequences, a technique used in recent models [1,2]. Evo was trained on prokaryotic data, while ours uses eukaryotic data. For eukaryotes, models trained on genomes show limitations due to data distribution differences arising from the presence of intronic regions [2]. Our approach uses different vocabularies per modality and models pre-trained on corresponding datasets. Therefore, Evo and our models are not directly comparable and have different strengths. Ours is the first to combine DNA, RNA, and AA modalities with their respective alphabets.
> It’d be interesting to see how their strategy compares to other multi modal models such as Evo, and more RNA centric work like Borzoi
We have now compared our multi-modal approach using Enformer and Borzoi in addition to NT. Interestingly, using Borzoi as DNA encoder leads to the same conclusions on the positive effect of aggregating more modalities (see attached PDF, Table 1), maintaining the ranking between the different ablations. This observation supports the robustness of our approach. Additionally, Borzoi outperformed NT as a DNA encoder, which is expected since Borzoi is tailored for expression tasks. However, despite the Borzoi authors reporting similar performance to Enformer in modeling gene expression, we observed slightly lower performance for Borzoi.
> Looking at average isoform abundance across individuals is all well and good, but GTEx also has individual genomes, and genomic variation across individuals will also of course affect splicing patterns and which isoforms come from what individuals.
> The authors should be more explicit about the limitation of using reference genome and known isoform sequences and how this kind of sweeps splicing as a function of dna sequence under the rug.
Genomic variation across individuals does affect splicing patterns; we are not accounting for that for now. GTEx also has individual genome data, but we do not have access to it yet. We will highlight this point in the paper and propose incorporating genetic variants as future experiments. Still, this should not impact the comparison between using the different modalities alone or in combination, as in all cases they use the reference genome sequences.
> Centering on TSS will only capture regulatory elements within the chosen sequence length. There could be distal or trans CREs outside.
It is true that centering on TSS will only capture regulatory elements within the chosen sequence length. But when using Enformer we use a context window of ~190kb, thus capturing most distal enhancers, being the state-of-the-art method to capture such long interactions. Similarly, when using Borzoi in our new experiments, we use a context window of ~512kb.
> GTEx database has less than 1000 donors, confirming the 5000 individuals claim?
We indeed had a typo and will correct it in the revised version, thanks for pointing this out.
> Looking at equation 5 in 4.4, how does f_psi depend on the tissue T? Is it a separate head per tissue, or are you predicting the vector of isoform abundance across tissues with one pass? And is f_theta and f_phi the same f but different weights as f_psi? Where does the summation over i take place?
f_psi indeed returns the vector of isoform abundance across tissues in one pass, i.e., a vector of n_t floating-point numbers, where n_t is the number of tissues. The reviewer is right that the notation f is confusing. f_theta, f_phi and f_psi are different functions parametrized with different weights; we'll rename them f^{encoders}_theta, f^{aggregation}_phi and f^{expression}_psi to improve clarity.
For the sake of simplicity, we removed the summation over batches of data in the loss function. The summation over i takes place implicitly as we sample randomly isoforms and their expression in the dataset. We do not necessarily have all isoforms from the same gene within the same batch. We will add a sentence in the text to clarify this.
> 5.1 does ablations with one DNA encoder, then 5.2 shows superior performance with Enformer as the DNA encoder. So the ablations in 5.1 may not be accurate with respect to this new encoder. It also begs the question of how would the Enformer do as the RNA encoder as well.
We have also now performed all encoder ablations for NT, Enformer, and Borzoi models, providing more insights about the benefits of combining multiple encoders (see attached PDF, Table 1). Many thanks for the suggestion.
Regarding using Enformer as the RNA encoder, since it is long-range but most RNAs are shorter than 10kb, we favored the use of a more general RNA encoder. This could be done as future work, or rather using recently published RNA pre-trained models.
> How are RNA and protein sequence lengths handled when they’re longer than the model input sequence size?
We will clarify in the revised version how we handle RNA and protein sequences that exceed the model's input size. Protein sequences were all within the model's maximum input length, so no additional processing was required. RNA transcripts longer than 12kb were left-cropped to 12kb to conserve the 3’UTR regions, which are crucial for mRNA stability and polyadenylation, impacting isoform abundance. This adjustment affected only a small portion of the dataset. We will include the sequence length distributions in the paper's supplementary material.
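The left-cropping rule described above can be sketched in one line; `left_crop` is an illustrative name, and the 12 kb threshold is the one stated in the reply:

```python
MAX_RNA_LEN = 12_000  # 12 kb cap on RNA transcript length

def left_crop(seq, max_len=MAX_RNA_LEN):
    """Crop a transcript from the left, keeping the last max_len bases
    so that the 3'UTR region (at the 3' end) is preserved."""
    return seq if len(seq) <= max_len else seq[-max_len:]
```

Keeping the tail rather than the head of the sequence is what preserves the 3'UTR, which the reply notes is crucial for mRNA stability and polyadenylation.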
[1] Richard, G., et al. "ChatNT: A Multimodal Conversational Agent for DNA, RNA and Protein Tasks." bioRxiv (2024): 2024-04.
[2] Outeiral, C., and C. M. Deane. "Codon language embeddings provide strong signals for protein engineering. bioRxiv." preprint (2022).
---
Rebuttal Comment 1.1:
Comment: thank you for addressing my questions. i think those corrections and clarifications will benefit the paper’s presentation. | null | null | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their time reading our manuscript and for providing constructive feedback in their reviews.
We are glad the reviewers value positively our approach to combine different biological modalities together and emphasize that the experimental results support the paper’s contributions. We also acknowledge certain areas of the manuscript can be strengthened and are grateful to the reviewers for providing improvement suggestions on those areas. Below, we summarize the main concerns that have been raised during the review and the changes we have made to address them. Concrete concerns and questions are further discussed in the reviewer-specific responses.
**Full enumeration of modality combinations and use of RNA-centric models like Borzoi**
Reviewers suggested expanding the results presented in Tables 2 and 3 of the paper to include 1) any combination of one, two, or three modalities, and 2) the use of Borzoi as one of the encoders. We agree with the suggestion and present exhaustive results in Table 1 of the attached PDF. These results reinforce the advantage of combining multiple modalities to solve the isoform expression prediction task. In addition, we can observe that when using Borzoi as a DNA encoder the ranking between the different ablations is maintained. We will include this updated table in the final version of the paper.
**Full enumeration of using the pre-trained and non-pre-trained versions of each encoder**
Table 5 of our paper compares different cases in which only a subset of the encoders is pre-trained. This table demonstrates the effectiveness of transfer learning from pre-trained uni-modal encoders to achieve a strong performance in the multi-modal task. Reviewers suggested extending this table to include the 8 possible combinations of pre-trained vs. non-pretrained. We provide the complete ablation in Table 2 in the attached PDF. The extended results show in every setting that using a pre-trained encoder against its randomly initialized counterpart consistently leads to improved performance. We will add this table to the final version of the paper.
**Clarification of the data processing approach**
The reviewers have inquired about our specific use of the GTEx data. First, an interesting point has been made on using reference genome vs. individual genomes. We have opted to use the reference genome in our work because of the availability of the data. While we agree it would be interesting to use individual genome data, we believe that the reference genome is already a strong support for proving the main goal of the paper: combining multiple biological sequence modalities together to obtain superior performance in multi-modal tasks. We will discuss the use of individual genome data as future work in the final version of the paper.
Second, reviewers have asked to provide further details on the train/validation/test splits. In our work we have created these splits based on chromosomes, which is a standard approach in sequence-based models and ensures there is no data leakage across splits. We add the specific splits in our response and in the final version of the paper. Finally, we have also addressed other specific points about the dataset such as the treatment of long sequences (we crop them, not necessary when using Enformer) and the motivation to look into DNA for a RNA-seq-related task (DNA and RNA are intrinsically linked and the expression of one modality is affected by the other and vice versa). We will reemphasize these discussion points in the final version of the paper.
---------
Overall, we hope these changes address the different concerns that have been raised and reflect our willingness to improve the paper in all possible ways. Please, do not hesitate to provide further feedback.
Pdf: /pdf/c9ee60e83f905a44722b1283ba5265ac47b796e8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fast T2T: Optimization Consistency Speeds Up Diffusion-Based Training-to-Testing Solving for Combinatorial Optimization | Accept (poster) | Summary: The paper introduces Optimization Consistency Models (OptCM) as a novel method for solving combinatorial optimization (CO) problems efficiently. Traditional diffusion models, although powerful, are computationally intensive due to their iterative denoising processes. OptCM overcomes this limitation by learning direct mappings from noise levels to optimal solutions, enabling rapid single-step solution generation. The contributions of this paper are three-fold. First, OptCM reduces the computational overhead significantly by enabling fast, single-step solution generation while maintaining high solution quality. Second, its optimization consistency training protocol ensures that samples from different generative trajectories converge consistently to the optimal solution. Third, a consistency-based gradient search introduced at the test stage enhances solution exploration and quality during inference.
Strengths: OptCM significantly reduces the computational overhead by enabling fast, single-step solution generation, compared to the multiple steps required by traditional diffusion models. This efficiency allows for rapid inference, making it practical for real-time and large-scale applications.
Despite the reduced computational steps, OptCM maintains high solution quality, often outperforming state-of-the-art methods that require more extensive processing. The optimization consistency training ensures that the generated solutions are close to the optimal solution.
The optimization consistency training protocol is a novel approach that minimizes the differences among samples from varying generative trajectories, ensuring robust and consistent solution generation. This method enhances the model's ability to generalize across different problem instances.
The introduction of a consistency-based gradient search during the test stage allows for further exploration and refinement of the solution space, improving the final solution quality. This approach bridges the gap between training and inference, making the model more adaptable to new instances.
Weaknesses: Overall, the paper's strengths lie in its innovative approach to reducing computational complexity while maintaining high solution quality, its robust and versatile model design, and its impressive performance on benchmark tasks. However, there are some weaknesses in this paper. Some of the weaknesses listed below may be too general and do not require the authors to address them now.
The advanced techniques used in OptCM, such as optimization consistency training and gradient-based search, may be challenging to implement and require a deep understanding of the underlying principles.
The model's performance is closely tied to the quality and diversity of the problem instances used during training. If the training set does not adequately represent the test instances, the model's effectiveness might be reduced.
While the paper demonstrates the superiority of OptCM over state-of-the-art neural solvers, it could provide a more detailed comparison with traditional, non-neural methods in terms of both performance and computational efficiency.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the computational complexity of OptCM compare with traditional diffusion models and other state-of-the-art neural solvers? What specific optimizations were implemented to achieve the reported speedup in solution generation?
How does OptCM compare with classical optimization algorithms like simulated annealing or genetic algorithms in terms of both performance and computational resources?
Can OptCM be effectively applied to real-world problems with noisy, incomplete, or dynamically changing data?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments and for acknowledging novelty, model design, and empirical performance. Below we respond to the specific comments.
> **Q1: How does the computational complexity of OptCM compare with traditional diffusion models and other state-of-the-art neural solvers?**
Since the consistency model requires two inference predictions with different noise levels during training, it requires twice the training cost of the original diffusion model. However, this training overhead is offline, and the consistency model is much more efficient than diffusion at inference time. As shown in Tables 1, 2, and 4, OptCM achieves an 82.8x speedup on TSP-50/100, a 16.8x speedup on TSP-500/1000, and a 26.3x speedup on MIS compared to previous SOTA diffusion counterparts, while also retaining a solution quality advantage.
> **Q2: What specific optimizations were implemented to achieve the reported speedup in solution generation?**
Compared to the time-consuming iterative sampling process requiring denoising across multiple noise levels, the speedup of OptCM comes from the method design that learns direct mappings from different noise levels to the optimal solution for a given instance. This facilitates high-quality generation with minimal shots. To learn such direct mappings, we propose an optimization consistency training protocol, which, for a given instance, minimizes the difference among samples originating from varying generative trajectories and time steps relative to the optimal solution. Please refer to Sec. 4 for algorithm details. We also supplement the specific algorithm in the PDF attachment of the general response.
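For intuition, the direct-mapping idea described above can be sketched in a few lines (a hedged, framework-agnostic illustration with NumPy in place of the paper's GNN; `model`, the linear retention schedule, and the noising scheme are simplified placeholders, not the authors' implementation):

```python
import numpy as np

def bce(x, y, eps=1e-7):
    # Binary cross-entropy distance d(x, y) used as the consistency loss.
    x = np.clip(x, eps, 1 - eps)
    return float(-np.mean(y * np.log(x) + (1 - y) * np.log(1 - x)))

def consistency_loss(model, x_star, T=1000, rng=None):
    """One optimization-consistency training step (illustrative sketch).

    x_star: the (near-)optimal binary solution label for one instance.
    model:  maps (noisy solution, step t) -> predicted solution probabilities.
    Predictions from two adjacent noise levels on the same trajectory are
    both pulled toward x_star, so every noise level learns a direct mapping
    to the optimal solution.
    """
    rng = rng or np.random.default_rng()
    t = int(rng.integers(1, T))                    # discretization: t ~ {1, ..., T-1}
    retain_t, retain_s = 1 - t / T, 1 - (t + 1) / T
    random_bits = rng.integers(0, 2, size=x_star.shape).astype(float)
    x_t = np.where(rng.random(x_star.shape) < retain_t, x_star, random_bits)
    x_s = np.where(rng.random(x_star.shape) < retain_s, x_star, random_bits)
    # Average the consistency distance at the two noise levels.
    return 0.5 * (bce(model(x_t, t), x_star) + bce(model(x_s, t + 1), x_star))
```

At inference time, such a model can jump from any noise level straight to a solution estimate, which is where the single-step (or few-step) speedup comes from.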
We also supplement the specific design choices of our OptCM; the listed hyperparameters correspond to those used in the algorithm presented in the paper. We will add these details to the paper to enhance its comprehensiveness.
| Training | Design Choice |
| ------------------------- | ----------------------------------------- |
| Consistency Loss Function | $d(x,y)=$Binary_Cross_Entropy$(x,y)$ |
| Scaling Factor | $\alpha=0.5$ |
| Weighting Function | $\lambda (t)=1$ |
| Discretization Curriculum | $t\sim \{1, 2,...,T\}$, randomly sampling |
| Initial Learning Rate | $\eta=0.0002$ |
| Learning Rate Schedule | Cosine decay, decay rate $\omega=0.0001$ |
| Test | Design Choice |
| --------------------------- | ------------------------------------------------------------ |
| Sampling Step Schedule | $t_1=T(1-\sin(N\cdot i\pi/2))$, $t_2=T(1-\sin(N\cdot (i+1)\pi/2))$ |
| Guided Weighting Parameters | $\lambda_1=50$, $\lambda_2=50$ on TSP <br> $\lambda_1=2$, $\lambda_2=2$ on MIS |
| Rewrite Ratio | $\epsilon=0.2$ on TSP and ER-[700-800] <br> $\epsilon=0.3$ on RB-[200-300] |
> **Q3: How does OptCM compare with classical optimization algorithms like simulated annealing or genetic algorithms in terms of both performance and computational resources?**
Thanks for noting classical algorithms, which are indeed important for a comprehensive evaluation. We have already included traditional algorithms in our comparison: 1) For TSP, we compare against the SOTA traditional exact solvers Concorde and Gurobi, as well as the heuristic solvers LKH-3 and Farthest Insertion. 2) For MIS, the compared SOTA traditional solvers include the exact solver Gurobi and the heuristic solver KaMIS.
The mentioned algorithms, such as simulated annealing and genetic algorithms, are indeed not very competitive in solving hard problems. We supplement the experimental results of SA and GA below; please refer to Tables 1 and 2 for the corresponding performance of other SOTA baselines, where competitive results fall within a drop of 1%.
| Setting | Length | Drop | Time |
| ---------- | ------ | -------- | ----- |
| SA_TSP100 | 18.40 | 138.43% | 1h58m |
| SA_TSP500 | 108.87 | 558.12% | 2h5m |
| SA_TSP1000 | 248.99 | 975.28% | 2h22m |
| GA_TSP100 | 39.14 | 407.41% | 2h3m |
| GA_TSP500 | 240.70 | 1355.94% | 2h15m |
| GA_TSP1000 | 497.18 | 2047.16% | 2h32m |
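For context on what such baselines do, a minimal 2-opt simulated annealing TSP solver looks roughly like the following (a generic textbook sketch, not the exact implementation benchmarked above; the hyperparameters are illustrative):

```python
import math
import random

def tour_length(tour, dist):
    # Total length of the closed tour under the distance matrix.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing_tsp(dist, iters=20000, t0=1.0, cooling=0.9995, seed=0):
    """Plain 2-opt simulated annealing for TSP (illustrative baseline sketch)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    cur_len, temp = best_len, t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt segment reversal
        cand_len = tour_length(cand, dist)
        # Accept improvements always, worsenings with Boltzmann probability.
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / max(temp, 1e-12)):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        temp *= cooling
    return best, best_len
```

The O(n) tour-length recomputation per move is one reason these generic metaheuristics scale poorly to TSP-1000 and beyond, consistent with the multi-hour runtimes reported in the table.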
> **Q4: Can OptCM be effectively applied to real-world problems with noisy, incomplete, or dynamically changing data?**
We have provided the performance of OptCM on real-world datasets such as TSPLIB; the relevant results are shown in Tables 5 and 6 and Appendix A. The experimental results show that OptCM generalizes to real-world data and outperforms all the previous SOTA baselines. For specific environments with noisy, incomplete, or dynamically changing data, we have not yet made customized designs. Currently, OptCM is a fundamental model that focuses on solving performance on relatively clean data, but we acknowledge the importance of studying applications to noisy, incomplete, and dynamically changing scenarios, and we will take this as a direction for subsequent research. Thank you for your suggestions.
---
We hope this response could help address your concerns. We believe that our work can have an important impact on the field of ML4CO as a possible new backbone model to drive the effectiveness of existing methods further. Thank you again for your firm support and recognition of our paper. If our response has satisfactorily addressed your concerns, we would be sincerely grateful if you could consider a higher score, as it would greatly support our efforts. Thank you once again for your consideration.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of authors' response
Comment: Thanks for the rebuttal; I am inclined to keep my current score, which is leaning to accept this paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your acknowledgment. We appreciate your time and effort in reviewing our paper and are grateful for your feedback throughout this process. We will incorporate all supplementary results and explanations into the final version of the paper and will continue to refine our work and add additional results based on your valuable feedback. | Summary: This paper advances CO DM-based neural solvers under the setting where labeled training graphs are available by considering Consistency Models and gradient search (which was adopted from T2T).
Strengths: 1- The paper is in general well-written and technically sound.
2- The use of CMs to accelerate the sampling procedure.
Weaknesses: See Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1- Generalization is a major problem for supervised neural solvers. The need to train a different model for each graph distribution is a bottleneck for these solvers. For example, in the MIS problem, how does a CM trained on ER700 with p=0.15 generalize when faced with an ER700 test instance with p=0.2? Which p would require different training given that n=700? Furthermore, does the proposed approach need to train a different model for ER2000 with p=0.15? These need to be explicitly explained/investigated as these could be considered as a limitation of the proposed method. The MIS problem differs based on different densities and degree distributions, not only the graph size. SATLIB, GNMs, SBMs can be considered.
2- How does the size of the training dataset impact the outcomes?
3- How does the proposed method handle real-world graphs (such as the SNAP graphs in https://snap.stanford.edu/data/)? What would be the training dataset? In most cases, these graphs do not follow a certain degree distribution or density. This is a major limitation in this method and all other learning-based methods. This point needs to be clearly discussed in the paper. As an alternative, the authors should clearly state that the proposed method only operates when training data points (with true optimal solutions) are available.
4- Scalability results are missing for dense and sparse graphs if the models do generalize to higher n and same density.
5- The run-time comparison with heuristics and ILP solvers does not include training times. The authors should either include them, or explicitly/clearly mention that.
6- The training dataset when using KAMIS to label graphs is similar to training a regression model with inaccurate true values as labels. The reason is KAMIS does not guarantee an optimal solution. For MIS, there are other techniques to generating graphs with guaranteed true MISs (maximum, not maximal) such as the following: Create a graph G with n nodes, make k of them completely connected, and randomly add edges to/from the remaining vertices with degree <= k. Then, the complement graph G' should have at least one independent set of size k and no independent set of greater size.
7- Missing iSCO [1] and the differentiable solver in [2] for comparison.
8- The ER results of DIFUSCO are not the same as were reported in the original paper? I know that DIFUSCO is a diffusion-based method where the sampling procedure starts with a random noise vector drawn from the standard Gaussian. However, a discussion is needed of why lower results are reported here.
9- The novelty is not that significant from the T2T method other than the use of CMs and considering an encoding of size N X 2 instead of N X 1. Using CMs does improve the run-time, and slightly improve the solutions sizes.
10- Generally, the proposed approach and other supervised methods such as DIFUSCO and T2T depend on additional post-processing procedures. This dependence needs to be further explained and investigated. The details for the post processing procedures (such as MCTS and Greedy Decoding) are needed, even if they were also adopted in DIMES and DIFUSCO. For example, if none of these procedures were used, how does this impact the outcomes?
[References]
[1] Revisiting sampling for combinatorial optimization. ICML, 2023.
[2] A differentiable approach to the maximum independent set problem using dataless neural networks. Neural Networks, 2022.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable comments, and for acknowledging our writing and technical soundness. Nonetheless, we believe there may be some misunderstandings, especially regarding the value of the research line of data-driven learning-based solvers. The major concern of the comment appears to be that applying data-driven machine learning to problem solving might cause the model to fit a specific data distribution and limit its generalization to different distributions. To clarify this potential misunderstanding upfront, since it may affect the rating of our work, please kindly refer to the generalization discussion in the general response, which also directly addresses the first question about the generalization ability of neural solvers.
In addition, we notice that the reviewer may focus more on the MIS problem. Here we note that our paper introduces a foundational framework and a robust model for various CO problems. We conducted experiments on both the edge-selection problem TSP and the node-selection problem MIS, primarily focusing on TSP for a comprehensive evaluation of OptCM, including solving results, generalizability, runtime-drop curves, and hyperparameter studies. For the MIS experiments, we mainly focus on showing OptCM's capability of handling different problems, while similar conclusions can be inferred from the TSP results. This follows the typical experiment organization of previous works like DIMES, DIFUSCO, and T2T. As per your request, we also supplement generalization results on MIS to further strengthen our paper. Please see the specific comments below.
> **Q1: In the MIS problem, how does a CM trained on ER700 with p=0.15 generalize when faced with p=0.2? Scalability results on MIS problem to generalize to higher n.**
We provide supplementary generalization results on the MIS problem below. We test the model trained on ER 700-800 with p=0.15 on different values of p and n. We find that the generalization ability of OptCM is significantly better than that of the previous diffusion-based methods DIFUSCO and T2T in terms of both solution quality and speed; e.g., in the ER 350-400 sampling setting, OptCM improves from (23.28%, 24m31s) to (11.45%, 1m1s).
|p|Decoding|Method|Size|Drop|Time|
|-|-|-|-|-|-|
|0.2|Greedy|DIFUSCO (T_s=100)|26.25|25.65%|6m31s|
|||T2T (T_s=50,T_g=30)|27.84|21.13%|7m52s|
|||OptCM (T_s=1,T_g=1)|28.04|20.58%|32s|
|||OptCM (T_s=5,T_g=5)|29.52|16.38%|1m57s|
| |Sampling|DIFUSCO (T_s=100)|27.98|20.73%|27m15s|
|||T2T (T_s=50,T_g=30)|28.07|20.49%|33m58s|
|||OptCM (T_s=1,T_g=1)|28.81|18.39%|1m40s|
|||OptCM (T_s=5,T_g=5)|30.10|14.74%|6m13s|
|n|Decoding|Method|Size|Drop|Time|
|-|-|-|-|-|-|
|350-400|Greedy|DIFUSCO (T_s=100)|27.31|28.04%|5m1s|
|||T2T (T_s=50,T_g=30)|28.54|24.80%|6m59s|
|||OptCM (T_s=1,T_g=1)|32.56|14.20%|22s|
| |Sampling|DIFUSCO (T_s=100)|29.33|22.73%|20m12s|
|||T2T (T_s=50,T_g=30)|29.12|23.28%|24m31s|
|||OptCM (T_s=1,T_g=1)|33.61|11.45%|1m1s|
|1400-1600|Greedy|DIFUSCO (T_s=100)|34.39|32.48%|22m7s|
|||T2T (T_s=50,T_g=30)|OOM|OOM|OOM|
|||OptCM (T_s=1,T_g=1)|36.95|27.47%|1m39s|
| |Sampling|DIFUSCO (T_s=100)|35.55|30.21%|1h27m31s|
|||T2T (T_s=50,T_g=30)|OOM|OOM|OOM|
|||OptCM (T_s=1,T_g=1)|38.59|24.25%|3m56s|
> **Q2: How does the size of the training dataset impact the outcomes?**
For in-distribution evaluation, the larger the dataset, the better the learning outcome. This follows directly from ML principles: more data allows the model to summarize more patterns and strategies. As for generalization, we can take a model pretrained on one dataset and apply it to the test data, which might require only a small amount of data from the new distribution for fine-tuning.
> **Q3: How does the proposed method handle real-world graphs?**
We can train on data with a similar distribution and then test on a given real-world test set. In fact, we have already conducted tests on real-world TSPLIB data. While the out-of-distribution generalization results may degrade compared to in-distribution performance, our experimental results show that the degradation is not severe. OptCM's generalization ability is also better than other state-of-the-art learning-based solvers.
We note again that data-driven machine learning methods have their own value and role in various applications, such as optimization problems, including automatic strategy discovery, GPU acceleration, and adaptation to specific data distribution problems. The learning-based solver approach has also received significant attention from the machine learning community, underscoring the importance and acceptance of these methods. On the other hand, dataless methods and traditional solvers have their own advantages, such as better generalization performance and robustness. We will include related work, such as iSCO and MIS_dNN, in our paper for discussion.
> **Q4: The training time comparison with heuristics and ILP solvers**
Traditional solvers do not require training, and we will supplement the training time for OptCM in the paper. For intuition, OptCM for TSP-500 requires about 45 hours of training on 4 A100 GPUs. For comparison, POMO for TSP-100 requires about 1 week on a single Titan RTX and Sym-NCO for TSP-100 requires 2 weeks on a single A100 GPU. It should be additionally noted that the training time is offline and does not affect the inference time during solving.
> **Q5: Label generation with KAMIS.**
Thank you for your suggestion. It is very beneficial for constructing training data for MIS. On the other hand, we need to point out that since we adopt the generative model to learn the distribution of high-quality solutions, we do not have strict requirements for the optimality of the training data labels. T2T's Fig. 5 shows that training with worse labels can still yield plausible solutions (better than the supervision quality) during inference by sampling from the generative distribution.
---
Rebuttal 2:
Title: Rebuttal by Authors (Cont.)
Comment: > **Q6: Missing iSCO and the differentiable solver in for comparison.**
Thanks for providing the related works. The mentioned iSCO and MIS_dNN are more traditional methods, somewhat different from the preceding data-driven machine-learning algorithms. However, since this other important line of algorithms also has reference value, we compare against iSCO below. For MIS_dNN, we do not include it in the comparison for the time being, as we could not find a runnable implementation. We will continue working to include comparisons with both approaches in the paper, as well as a discussion of similar works in the related work section.
For the experimental results, we find that OptCM and iSCO perform better on the TSP and MIS problems, respectively. Moreover, OptCM outperforms all data-driven learning-based methods by significant margins.
Comparison of the TSP problem.
|N|Method|Size|Drop|Time|
|-|-|-|-|-|
|500|iSCO|16.64|0.54%|6m56s|
||DIFUSCO (T_s=100)|16.80|1.50%|4m40s|
||T2TCO (T_s=50,T_g=30)|16.68|0.82%|6m29s|
||OptCM (T_s=5,T_g=5)|16.61|0.39%|2m10s|
|1000|iSCO|23.33|0.91%|7m56s|
|| DIFUSCO (T_s=100)|23.55|1.89%|14m25s|
|| T2TCO (T_s=50,T_g=30)|23.44|1.40%|19m39s|
|| OptCM (T_s=5,T_g=5)|23.25|0.58%|8m37s|
|10000|iSCO|74.02|3.14%|1h36s|
|| DIFUSCO (T_s=100)|73.57|2.51%|31m24s|
||T2TCO (T_s=50,T_g=30)|-|-|-|
||OptCM (T_s=5)|72.94|1.63%|15m35s|
Comparison on the MIS ER-[700-800] dataset.
|Decoding|Method|Size|Drop|Time|
|-|-|-|-|-|
|Greedy|DIFUSCO (T_s=100)|37.03|18.53%|5m30s|
||T2TCO (T_s=50,T_g=30)|39.81|11.28%|7m7s|
||OptCM (T_s=1,T_g=1)|40.25|10.30%|25s|
||OptCM (T_s=5,T_g=5)|40.68|9.34%|1m32s|
|Sampling|iSCO (Few steps)|44.77|0.2%|1m23s|
||iSCO (More steps)|45.15|0.6%|5m34s|
||DIFUSCO (T_s=100)|39.12|12.81%|21m43s|
||T2TCO (T_s=50,T_g=30)|41.41|7.72%|27m45s|
||OptCM (T_s=1,T_g=1)|40.98|8.66%|1m19s|
||OptCM (T_s=5,T_g=5)|41.73|6.99%|5m51s|
> **Q7: The ER results of DIFUSCO are not the same as were reported in the original paper?**
To ensure fairness, all comparisons between DIFUSCO and T2T are based on categorical diffusion (discrete diffusion). The original DIFUSCO paper used Gaussian diffusion for the ER dataset (categorical diffusion was used for other data), which may lead to some performance differences. We will explain this in the paper.
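For readers unfamiliar with the distinction, the categorical (discrete) forward process for binary decision variables can be sketched as follows (a toy illustration with a placeholder linear retention schedule, not DIFUSCO's exact transition matrices):

```python
import numpy as np

def categorical_forward(x0, t, T, rng):
    """Discrete forward noising q(x_t | x_0) for binary variables:
    each bit keeps its label value with probability alpha_t and is
    resampled uniformly from {0, 1} otherwise. At t = 0 the sample is
    the clean label; at t = T it is pure uniform noise.
    """
    alpha_t = 1.0 - t / T                      # toy linear retention schedule
    keep = rng.random(x0.shape) < alpha_t
    return np.where(keep, x0, rng.integers(0, 2, size=x0.shape))
```

Gaussian diffusion instead adds continuous noise to a relaxed encoding and thresholds at the end, which is why the two variants can yield different results on the same ER instances.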
> **Q8: The novelty is not that significant from the T2T method.**
OptCM is inspired by some excellent previous works, which is one of the key supports for its ability to achieve such strong experimental results. However, we must note that the technical contributions of OptCM are highly non-trivial. Please kindly refer to the general response for the specific novelty claim.
In addition, the reviews from reviewers 3pWn and rpwx firmly support our novelty claims: "Overall I think the method is novel. The extension of consistency training framework into diffusion-based CO solvers is not trivial, and this work is trying to solve a well-motivated problem. The empirical evaluations are quite convincing compared to diffusion-based CO solvers." “Overall, the paper's strengths lie in its innovative approach to reducing computational complexity while maintaining high solution quality, its robust and versatile model design, and its impressive performance on benchmark tasks.”
> **Q9: The dependence on additional post-processing procedures needs to be further explained and investigated.**
OptCM does not rely on complex post-processing methods like MCTS. We only use a greedy approach to "round" the continuous output of the neural network to the nearest feasible path. Specifically, we sequentially insert edges or nodes with the highest confidence to the partial solution if there are no conflicts until we obtain a feasible solution. The sampling operation simply involves generating multiple heatmaps simultaneously and then applying the greedy method. Our comparisons with previous state-of-the-art methods are consistent and fair, demonstrating our advantages under various post-processing methods.
For all learning methods, since the neural network output is continuous, we cannot obtain feasible solutions without using greedy decoding (sequence models perform the greedy procedure within their models). The complexity of the greedy operation is O(num_of_decision_variables); it takes almost no time and merely ensures feasibility. This operation is similar to rounding continuous outputs between 0 and 1 to either 0 or 1. We will supplement the detailed descriptions in the main paper.
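For the MIS case, the confidence-ordered insertion described above amounts to a few lines (an illustrative sketch, not the authors' exact code; `heatmap` and `adj` are placeholder names for the per-node confidences and adjacency lists):

```python
def greedy_mis_decode(heatmap, adj):
    """Round per-node confidences to a feasible independent set.

    Nodes are inserted in decreasing confidence order and skipped when
    they conflict (share an edge) with an already-selected node, so the
    output is always a feasible solution.
    """
    order = sorted(range(len(heatmap)), key=lambda v: -heatmap[v])
    selected = set()
    for v in order:
        if not any(u in selected for u in adj[v]):
            selected.add(v)
    return selected
```

The TSP variant is analogous, inserting edges instead of nodes while rejecting any edge that would violate the degree-2 or subtour constraints.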
---
We hope this response could help address your concerns. We believe that our work can have an important impact on the field of ML4CO as a more powerful backbone model to drive the effectiveness of existing methods further. We would be sincerely grateful if you could reconsider your rating, and we are more than happy to address any further concerns you may have.
---
Rebuttal 3:
Title: Response to Authors
Comment: I would like to thank the authors for their response. While I still have major concerns about the evaluation of this method, I will increase my score to 4 given the authors' efforts, clarity of presentation, use of CMs, and development of the gradient search approach at inference.
### Generalization Discussion:
I agree that generalization in data-driven ML is, in itself, a significant research topic across many applications. In fact, learning theory (such as: User-friendly introduction to PAC-Bayes bounds https://arxiv.org/pdf/2110.11216) does not really support good OOD performance. However, this paper proposes a supervised ML solver for combinatorial optimization problems, not image classification. Before deep learning, we lacked the capability to classify large-scale images like those in ImageNet. In contrast, combinatorial optimization problems have been well-studied in the past, with many problem-specific heuristics, such as LKH-3 and KAMIS, that remain unbeaten in terms of solution quality. While the proposed method did show faster run-time (excluding offline training time) when training data was available, the method should explore its limitations/capabilities to support the claims in [2].
I believe that evaluating graphs of different sizes and densities is feasible, as tools like NetworkX include several random graph generators that can help identify the instances the proposed method can effectively solve. In fact, a reasonable approach to evaluate whether the proposed model is learning patterns of solutions is to evaluate its OOD performance.
I am not disputing the merit of conventional machine-learning-based methods for combinatorial optimization (ML4CO), rather highlighting that the authors should comprehensively evaluate their approach. An example of an ML4CO method (an RL-based approach) that fully evaluated its approach is the LwD method, which the authors considered as a baseline on only ER and RB graphs.
### In the MIS problem, how does a CM trained on ER700 with p=0.15 generalize when faced with p=0.2?
- I acknowledge the new results and the comparison with diffusion-based methods. However, I believe that reporting the results of KAMIS would provide more insight into the actual performance when training with p=0.15 and testing with p=0.2. The authors should evaluate with other values of p along with other graph random generators.
- The SATLIB dataset (not considered) contains hard sparse instances that most dataless and data-centric methods consider, including T2T. Is it because OptCM cannot handle relatively sparser graphs than ER (with p=0.15)? Or is it because SATLIB does not provide enough data for training?
### How does the size of the training dataset impact the outcomes?
OptCM uses CMs, which are essentially diffusion models designed for faster sampling. However, these models require an extensive amount of training data to enter the generalization regime (see Figure 2b of "The Emergence of Reproducibility and Generalizability in Diffusion Models"). Is the CM used in this paper in the generalization or memorization regime?
### How does the proposed method handle real-world graphs (such as SNAP graph in MIS)?
I acknowledge that the authors' reference to TSPLIB partially addresses the question. However, COPs vary significantly, so experimenting on the SNAP dataset using your MIS-trained model would help further understand the limitations/capabilities of the proposed method.
I also recognize that "data-driven machine learning methods have their own value and role in various applications, such as optimization problems, including automatic strategy discovery, GPU acceleration, and adaptation to specific data distribution problems." **However, are any of these examples specifically combinatorial optimization problems?**
### Label generation with KAMIS.
Yes, It would be beneficial for constructing training data. You could, in fact, use it to partially control the density in the graph. However, I am unsure how much it can help when the nodes degree distribution is different at testing time.
If you adopt a generative model to learn the distribution of *high-quality* solutions, how do you justify not having strict requirements for the optimality of the training data labels? The empirical observation in Figure 5 of the T2T paper pertains to the TSP problem, not the MIS problem.
### Q6: Missing iSCO and the differentiable solver in for comparison.
I acknowledge the comparison with iSCO for TSP and MIS. Excluding the training time and assuming dataset availability, OptCM generally requires less run-time compared to iSCO for in-distribution test instances.
### Q7: The ER results of DIFUSCO are not the same as were reported in the original paper?
Does the authors' response mean that the DIFUSCO results in this paper use a different diffusion model than the one used in the original DIFUSCO paper? If so, the authors should use the baseline method as is.
---
Rebuttal Comment 3.1:
Title: Official Comment by Authors
Comment: Thanks for acknowledging our efforts and the advice for a more comprehensive evaluation. Below we respond to your comments.
> **I. The value of ML4CO. "I also recognize that "data-driven machine learning methods have their own value and role in various applications, such as optimization problems, including automatic strategy discovery, GPU acceleration, and adaptation to specific data distribution problems." However, are any of these examples specifically combinatorial optimization problems?"**
Thanks again for the valuable point. Indeed, the discussion in the general response (2) is exactly for the CO context. We would like to summarize below.
1. Please first notice that the ML4CO papers in [1] (e.g., DIMES, DIFUSCO, T2T, AM, POMO) and Bengio's first-authored paper [2] state that "ML4CO focuses on CO algorithms that automatically perform learning on a chosen implicit distribution of problems", and that "ML can help improve an algorithm on a distribution of problem instances in two ways: 1) replace some heavy computations by a fast approximation; 2) explore the decision space and learn out of the experience the best-performing behavior (policy)." Traditionally, experts have to research a given problem, typically for many years, to achieve a strong heuristic solver. ML solvers, by contrast, can learn from historical data to implicitly discover strategies that plausibly handle a certain problem within an underlying data distribution. Neural inference also allows for GPU acceleration compared to traditional solvers. The value of adapting to specific data-distribution problems is supported by [1] (e.g., DIMES, DIFUSCO, T2T, AM, POMO) and Bengio's first-authored paper [2], which indicate that data in real-world applications often exhibits a certain problem structure within an implicit distribution. In many cases, we need machine learning methods to uncover patterns within the data and solve problems associated with a specific distribution. **Please note that in these cases, we focus more on generalization performance under the IID assumption.**
2. The development of the ML4CO community supports the value of this research line. ML4CO has gained significant traction, evidenced by numerous publications at leading conferences such as NeurIPS, ICML, and ICLR, each annually featuring about 20 papers on this subject. Please refer to [1] for a summarized list of ML4CO papers, covering 34 CO problems; problems like TSP have more than 50 papers recorded. Around 20 survey papers, including Yoshua Bengio's first-authored paper [2], provide extensive summaries and highlight ML4CO's critical research value.
[1] github.com/Thinklab-SJTU/awesome-ml4co
[2] Bengio, Yoshua, Andrea Lodi, and Antoine Prouvost. Machine Learning for Combinatorial Optimization: a Methodological Tour d'horizon. EJOR 2021.
> **II. The comprehensiveness of the evaluation.**
Thanks for the suggestion. We still have to first note that we primarily focus on TSP for a comprehensive evaluation of OptCM, including solving results, generalizability, runtime-drop curves, and hyperparameter studies. For MIS experiments, we mainly focus on showing OptCM's capability of handling different problems. This follows the typical experiment organization of previous works like DIMES, DIFUSCO, and T2T.
For the TSP problem, we have already performed many generalization experiments, including generalizing to different scales of TSP (from 50 to 1000), as shown in Sec. 6.1 and Table 3, and generalizing to different distributions such as real-world TSP instances from TSPLIB, as shown in Appendix A and Tables 1 and 2. These results show that OptCM maintains good generalization and outperforms previous SOTA diffusion-based baselines. For the MIS problem, we have done our best to supplement additional results below within the tight time window.
---
Rebuttal 4:
Title: Official Comment by Authors (Cont.)
Comment: 1. Generalization results on the MIS problem for other p values. The models are trained on ER 700-800 with p=0.15. As shown, OptCM outperforms all other learning-based baselines.
| p | Type | Method | Size | Drop | Time |
| --- | --------- | -------------------- | --------- | ---------- | ------ |
| 0.2 | Heuristic | KAMIS | 35.30 | - | 58m37s |
| | Greedy | DIFUSCO (T_s=100) | 26.25 | 25.65% | 6m31s |
| | | T2T (T_s=50,T_g=30) | 27.84 | 21.13% | 7m52s |
| | | OptCM (T_s=1,T_g=1) | 28.04 | 20.58% | 32s |
| | | OptCM (T_s=5,T_g=5) | **29.52** | **16.38%** | 1m57s |
| 0.3 | Heuristic | KAMIS | 24.36 | - | 1h14m |
| | Greedy | DIFUSCO (T_s=100) | 15.84 | 34.99% | 7m58s |
| | | T2T (T_s=50, T_g=30) | 16.43 | 32.55% | 8m20s |
| | | OptCM (T_s=1, T_g=1) | 17.43 | 28.45% | 51s |
| | | OptCM (T_s=5, T_g=5) | **17.69** | **27.39%** | 2m52s |
| 0.4 | Heuristic | KAMIS | 18.19 | - | 1h22m |
| | Greedy | DIFUSCO (T_s=100) | 11.75 | 35.40% | 9m40s |
| | | T2T (T_s=50, T_g=30) | 12.77 | 29.77% | 10m28s |
| | | OptCM (T_s=1, T_g=1) | 12.86 | 29.30% | 1m1s |
| | | OptCM (T_s=5, T_g=5) | **13.27** | **27.06%** | 3m36s |
2. We supplement the results on the SATLIB real-world dataset below. Initially, we did not include SATLIB results because OptCM requires more data to learn the consistency mapping, which, due to its greater power, is more challenging to learn, and SATLIB does not provide sufficient data for this purpose. However, we still observe a positive result, with OptCM outperforming previous baselines.
|Type|Method|Size|Drop|Time|
|-|-|-|-|-|
|Heuristic|KAMIS|425.96\*|--|37.58m|
|Gurobi|Exact|425.95|0.00%|26.00m|
|-|-|-|-|-|
|RL+Sampling|LwD|422.22|0.88%|18.83m|
|RL+Sampling|DIMES|423.28|0.63%|20.26m|
|UL+Sampling|GFlowNets|423.54|0.57%|23.22m|
|SL+Sampling|DIFUSCO (T_s=100)|425.14|0.19%|53m41s|
|SL+Sampling|T2T (T_s=50,T_g=30)|425.18|0.18%|38m1s|
|SL+Sampling|OptCM (T_s=5,T_g=5)|**425.23**|**0.17%**|25m35s|
3. We supplement cross-dataset generalization results between RB graphs and ER graphs below. As seen, OptCM outperforms previous diffusion-based counterparts by a clear margin; e.g., in the "Train:ER; Test:RB" "Sampling" setting, OptCM achieves a significant performance gain, from the previous (23.96%, 13m40s) to (10.64%, 2m37s).
|Setting|Type|Method|Size|Drop|Time|
|-|-|-|-|-|-|
|Train:ER; Test:RB|Greedy|DIFUSCO (T_s=100)|15.87|21.00%|10m8s|
|||T2T (T_s=50,T_g=30)|16.59|17.41%|15m5s|
|||OptCM (T_s=1,T_g=1)|16.73|16.59%|40s|
|||OptCM (T_s=5,T_g=5)|**17.01**|**15.21%**|2m39s|
|Train:ER; Test:RB|Greedy|DIFUSCO (T_s=100)|29.98|27.54%|10m48s|
|||T2T (T_s=50,T_g=30)|31.47|23.96%|13m40s|
|||OptCM (T_s=1,T_g=1)|36.39|11.96%|43s|
|||OptCM (T_s=5,T_g=5)|**36.94**|**10.64%**|2m37s|
> **III. Is the CM used in this paper in the generalization or memorization regime?**
Thank you for this very interesting question. OptCM indeed employs an atypical generative framework where we learn a high-quality solution distribution for a given instance, which can be viewed as a conditional generation process. However, since each condition (instance) is unique and the same instance conditions will not reappear during testing, the model cannot achieve plausible results by simply memorizing solutions for certain instance conditions. This is perhaps the intuitive response to your question. Nevertheless, we believe it is a rather open and intriguing question. Although we cannot provide empirical evidence due to the tight time window, we plan to investigate and analyze this question further in the future.
---
Rebuttal 5:
Title: Official Comment by Authors (Cont.)
Comment: > **IV. How does the proposed method handle real-world graphs (such as SNAP graph in MIS)?**
Thank you for the question. We would like to reiterate that this paper primarily focuses on TSP evaluation as the main experiment, with MIS results included as supplementary evidence of the model's ability to handle different problems. This structure aligns with the organization of leading conference works like DIMES, DIFUSCO, and T2T, and it is worth noting that many studies focus solely on TSP, such as GNNGLS and UTSP. We believe the TSPLIB experiments effectively support the real-world performance of OptCM on the TSP problem. As for the MIS problem, we have added experiments on the SATLIB dataset, and we believe the results effectively demonstrate the model's capability to generalize to real-world graphs.
Regarding SNAP graphs, we faced challenges completing experiments within the tight time constraints. The majority of SNAP graphs contain millions of nodes, making it extremely time-consuming for heuristic solvers to provide ground truth within the allotted time. Additionally, our model was primarily trained on problems with fewer than 1,000 nodes, and scaling up to the size of SNAP graphs might exceed the current generalization capabilities of our learning-based model. We intuitively believe a large-scale training dataset (e.g., ER-[9000-11000]) is necessary to better generalize to SNAP graphs. Although we are unable to provide results at this time, we are happy to extend our work to address this dataset in the immediate future.
---
Thank you for the advice and the opportunity to engage in this valuable discussion. We hope our response satisfactorily addresses your concerns and we will include all these meaningful empirical results and discussions in our final version.
We believe this work advances the field by making important strides in improving NCO's performance, given OptCM's strong empirical results and the potential to serve as a more powerful backbone model for future works. We would sincerely appreciate it if you could reconsider your rating, and we are more than willing to address any further concerns you may have.
---
Rebuttal Comment 5.1:
Title: Response to Authors - Final Remarks
Comment: Thank you for providing more results to evaluate your model and the additional clarification. I do acknowledge the following:
- The use of CMs did improve the results in terms of solution quality and run-time when compared to other DM-based methods.
- The sampling procedure inherent in pre-trained CMs (or generally DMs) at inference can be modified to be a per-instance solver and allow for conditional sampling. When compared to the inference of GNNs, I think the proposed approach has a better potential to further improve. I fully accept this point from the authors.
- When compared to problem-specific heuristics, the method does achieve a significant improvement in terms of *inference* time due to the fast sampling of CMs and the use of GPUs. However, the method significantly under-performs when compared to these classic solvers in terms of solution quality.
In short, under the setting that (1) a large accurately-labeled training set is accessible, (2) the massive "offline" training time can be ignored, and (3) the testing instances are not that different from the training set, the proposed solver can obtain "decent" solutions extremely fast.
I am not suggesting that the proposed solver must outperform on every metric for every COP across all datasets. Instead, I am emphasizing that the authors should thoroughly investigate the limitations of their method and identify the setting in which it excels.
A suggestion regarding SNAP graphs (very sparse and very large graphs): If the authors are still okay with labeling their training graphs using KaMIS (though I still have some reservations and believe this should be further investigated in subsequent versions of the paper), then KaMIS should be capable of obtaining solutions in a fair amount of time (albeit not guaranteed to be optimal), as it relies heavily on various graph reductions (see their results in the LwD paper for an example). Additionally, Google has introduced a SAT solver called CP-SAT (https://developers.google.com/optimization), which can solve very sparse/very large instances using the MIS ILP formulation significantly faster than Gurobi and KaMIS.
Although I have doubts over the practicality of the settings where this method excels, which I believe would require further investigations, I will raise my score to 5 given the potential of further improvements on the per-instance conditional sampling in trained CMs.
---
Reply to Comment 5.1.1:
Comment: Thank you for acknowledging our work and raising the score, as well as for the insightful suggestions and valuable discussions. We fully concur that a thorough investigation of the limitations and a clear identification of the settings where our method excels are essential for the comprehensiveness of our study. We also appreciate the valuable suggestion for handling the SNAP dataset. We will incorporate all supplementary results and explanations into the final version of the paper and will continue to refine our work and add additional results based on your valuable feedback.
---
Rebuttal 6:
Comment: Thank you once again for your constructive feedback and your willingness to consider raising your score to 5. We greatly appreciate your support. We have noticed, however, that the updated score has not yet been reflected in the system. If it’s not too much trouble, could you kindly take a moment to update the rating at your earliest convenience? We greatly appreciate your attention to this detail, which is crucial for the final evaluation of our work. Thank you very much for your continued support and consideration.
Title: Kind Reminder | Summary: This paper presents Optimization Consistency Models (OptCM) for solving combinatorial optimization (CO) problems efficiently. By leveraging the consistency model, OptCM maps varying noise levels to optimal solutions in a single step, significantly reducing computational overhead. This approach is validated through extensive experiments on the Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS), demonstrating significantly superior efficiency compared with state-of-the-art diffusion-based models.
Strengths: 1. This paper introduces a consistency model to improve the efficiency of diffusion-based combinatorial optimization solvers.
2. Extensive experiments on TSP and MIS show that OptCM can outperform existing methods.
Weaknesses: 1. This work is mainly incremental, based on previous works DIFUSCO [1] and T2T [2].
2. Larger-size TSPs, such as those with 100000 nodes, should be tested against state-of-the-art learning-based methods [3].
3. Despite significantly improving solving efficiency, the proposed method is limited in addressing constrained COPs (e.g., CVRP and more complex COPs) and requires optimal solutions as labels.
[1] DIFUSCO: Graph-based Diffusion Solvers for Combinatorial Optimization, NeurIPS, 2023.
[2] T2T: From Distribution Learning in Training to Gradient Search in Testing for Combinatorial Optimization, NeurIPS, 2023.
[3] GLOP: Learning Global Partition and Local Construction for Solving Large-scale Routing Problems in Real-time, AAAI, 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comment, nice suggestions, and for acknowledging our soundness, presentation, experimental extensiveness, and empirical performance. We seriously value the main novelty concern reflected in the comment and carefully address it in the general response. Below we respond to your specific comments.
> **Q1: This work is mainly incremental, based on previous works DIFUSCO and T2T.**
OptCM is inspired by some excellent previous works, which partly underpins its ability to achieve such strong experimental results. However, we must note that the technical contributions of OptCM are highly non-trivial. Please kindly refer to the general response for the specific novelty claim, which we also briefly summarize here:
1. Compared to DIFUSCO and T2T's diffusion models, OptCM's model design, including the model inference process and the training objective function, is very different. Indeed, consistency models are recognized as a new independent class of generative models in generative tasks, while we further establish optimization consistency models (OptCM) to tailor the model for optimization scenarios.
2. The testing-phase gradient search method proposed by OptCM represents a completely novel approach compared to that of T2T. In fact, due to the difference between consistency mapping and diffusion inference, OptCM's framework and implementation are fundamentally independent.
3. In terms of experimental results, OptCM demonstrates exceptional performance, not only in solution efficiency but also in the quality of the solutions. We believe such strong empirical results substantiate the claim that OptCM represents more than just incremental progress within the existing diffusion framework.
In addition, the reviews from reviewers 3pWn and rpwx firmly support our novelty claims: "Overall I think the method is novel. The extension of consistency training framework into diffusion-based CO solvers is not trivial, and this work is trying to solve a well-motivated problem. The empirical evaluations are quite convincing compared to diffusion-based CO solvers." “Overall, the paper's strengths lie in its innovative approach to reducing computational complexity while maintaining high solution quality, its robust and versatile model design, and its impressive performance on benchmark tasks.”
> **Q2: Larger-size TSPs, such as those with 100000 nodes, should be tested against state-of-the-art learning-based methods like GLOP.**
Thanks for the valuable question. In the rebuttal phase, we extend our model training to TSP-10000 and compare OptCM with several baselines, including DIFUSCO and GLOP.
|Method|Length|Drop|Time|
|-|-|-|-|
|GLOP|75.62|5.36%|32s|
|GLOP more revisions|75.29|4.90%|1.8m|
|DIFUSCO(T_s=100)+2Opt|73.57|2.51%|31m24s|
|OptCM(T_s=5)+2Opt|72.94|1.63%|15m35s|
We also explore generalizing the model to 100k nodes to assess its scalability. However, we find that although OptCM can produce a prediction in seconds, the greedy decoding procedure (which sequentially inserts the edge or node with the highest confidence into the partial solution until a feasible solution is obtained) can be prohibitively time-consuming, rendering the evaluation uninformative. OptCM is not yet suitable for large-scale evaluations without an optimized decoding procedure or a high-level divide-and-conquer strategy like GLOP's. Indeed, combining OptCM's backbone model with GLOP's divide-and-conquer strategy may offer a promising path to scalability.
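For concreteness, the greedy edge-insertion decoding described above can be sketched as follows. This is a simplified illustration under our own assumptions (the function name and the union-find bookkeeping are ours, not the paper's exact implementation); note that the global sort over all O(n^2) candidate edges helps explain why decoding becomes prohibitive at 100k nodes:

```python
import numpy as np

def greedy_tsp_decode(heatmap):
    """Greedily insert the highest-confidence edges until a tour forms.

    heatmap: symmetric (n, n) array of edge scores from the model.
    Returns the tour as an ordered list of node indices.
    """
    n = heatmap.shape[0]
    # Rank all candidate edges by confidence, highest first.
    iu, ju = np.triu_indices(n, k=1)
    order = np.argsort(-heatmap[iu, ju])
    degree = np.zeros(n, dtype=int)
    parent = list(range(n))  # union-find to reject premature subtours

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    adj = [[] for _ in range(n)]
    edges_added = 0
    for k in order:
        i, j = int(iu[k]), int(ju[k])
        if degree[i] >= 2 or degree[j] >= 2:
            continue  # a tour node can have at most two incident edges
        ri, rj = find(i), find(j)
        if ri == rj and edges_added < n - 1:
            continue  # only the final edge may close a cycle
        parent[ri] = rj
        degree[i] += 1
        degree[j] += 1
        adj[i].append(j)
        adj[j].append(i)
        edges_added += 1
        if edges_added == n:
            break
    # Walk the resulting cycle to produce the tour order.
    tour, prev, cur = [0], None, 0
    while len(tour) < n:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        tour.append(nxt)
        prev, cur = cur, nxt
    return tour
```

For instance, on a 4-node heatmap whose cycle edges carry the highest scores, this sketch recovers that cycle as the tour.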
We will add the above evaluation to our paper and include the discussion of GLOP in related work and future work discussion to enhance the comprehensiveness of the paper.
> **Q3: Limitations in addressing constrained COPs and supervision requirements.**
In fact, our method can handle various constrained problems. We rely on the expressive capability of generative models to learn the constraints, ensuring that the generated heatmap results are as close to feasible as possible. This is followed by post-processing (e.g., greedy algorithms) that utilizes the model's output while strictly satisfying the constraints. This paradigm theoretically handles any constraint and is also the approach used in previous mainstream works such as DIFUSCO and T2T. However, for more complex constraints, the model's ability to capture them may be limited. In such cases, sequential models that explicitly control constraints at each step can be more effective; for example, the mainstream methods for the CVRP problem are still based on sequence models.
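As a concrete illustration of this post-processing paradigm for a node-selection problem such as MIS, the sketch below enforces independence by construction while following the heatmap's confidence ordering; `greedy_mis_decode` is a hypothetical helper for illustration, not our exact implementation:

```python
import numpy as np

def greedy_mis_decode(probs, adj):
    """Pick nodes in decreasing heatmap confidence; skip any node whose
    neighbor is already selected, so the output is independent by construction.

    probs: per-node selection scores from the model.
    adj:   adjacency lists, adj[v] = neighbors of v.
    """
    n = len(probs)
    blocked = [False] * n
    selected = []
    for v in np.argsort(-np.asarray(probs)):
        v = int(v)
        if blocked[v]:
            continue
        selected.append(v)
        blocked[v] = True
        for u in adj[v]:
            blocked[u] = True  # neighbors can no longer be selected
    return selected
```

However good or bad the heatmap is, the returned set is always a feasible independent set; the heatmap only determines its quality.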
Nevertheless, heatmap methods have their own advantages, such as not suffering from the sparse reward problem, enabling them to handle larger-scale problems end-to-end with better solution quality. These methods are currently the SOTA for classic problems like TSP and MIS, highlighting their strengths. There has also been considerable work in leading conferences that maintains competitive performance, demonstrating the value of this line of research. Additionally, recent works that extend neural solvers to larger-scale problems via divide-and-conquer paradigms have become an important new line of research.
We believe that different methodological routes are each suited to problems with different characteristics, and each route has its own advantages and disadvantages. We will add these discussions to our paper, including a discussion of recent works such as GLOP and LEHD [1].
---
We hope this response can help address your concerns. We believe that this work contributes to this community and could potentially provide a more powerful backbone for future works in this domain. We would sincerely appreciate it if you could reconsider your rating, and we are more than happy to address any further concerns you may have.
[1] Neural Combinatorial Optimization with Heavy Decoder: Toward Large Scale Generalization. NeurIPS 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. However, I still believe this method faces significant challenges when addressing constrained COPs, such as CVRP, even with its relatively simple constraints.
1. While these heatmap methods incorporate post-processing techniques to theoretically handle any constraints, their practical performance may be inadequate, as the model often struggles to fully capture these constraints.
2. Due to the NP-hard nature of COPs with complex constraints, obtaining optimal solutions as labels remains impractical.
3. Recent research [1] also questions the effectiveness of ML-based heatmap generation, suggesting that the performance is largely driven by the post-hoc search rather than the heatmap.
[1] Position: Rethinking Post-Hoc Search-Based Neural Approaches for Solving Large-Scale Traveling Salesman Problems, ICML, 2024.
---
Rebuttal 2:
Title: Thanks for the Prompt Reply
Comment: Thanks for the prompt reply. During the rebuttal process, we presented our novelty claim, supplemented large-scale results, and responded to the limitations of handling complex constraint problems. Despite this, the reviewer continues to express concerns regarding the limitations of heatmap-based methods, which we believe warrant meaningful discussion. Below, we offer our detailed response to this remaining concern.
> **1: While these heatmap methods incorporate post-processing techniques to theoretically handle any constraints, their practical performance may be inadequate, as the model often struggles to fully capture these constraints.**
We totally agree with this point. Relying on neural models to capture the constraints would require balancing the tradeoff between the model expressiveness and the task difficulty. When focusing on more complex constraints, sequence models will show their superiority in handling constraints by explicitly controlling constraints at each step. We will highlight this point in our final version.
Nevertheless, we still need to note that heatmap methods have their own advantages, such as not suffering from the sparse reward problem, enabling them to handle larger-scale problems end-to-end with better solution quality. These methods are still the current SOTA for classic problems like TSP and MIS, highlighting their strengths. There have also been considerable heatmap-based works in leading conferences with lasting impact on the community, like GCN, DIMES, DIFUSCO, and T2T, showing the value of this important research line in the Neural CO domain.
Again, we note that different methodological routes are each suited to problems with different characteristics, each with its own advantages and disadvantages. We believe our method, with a novel model backbone and strong empirical performance, can contribute to this community and inspire future works to further enhance NCO.
> **2. Due to the NP-hard nature of COPs with complex constraints, obtaining optimal solutions as labels remains impractical.**
Since we use a generative model to learn the distribution of high-quality solutions, we do not impose strict requirements on the optimality of the training data labels. As demonstrated in Fig. 5 of T2T, training with suboptimal labels can still produce plausible solutions—surpassing the quality of the supervision—by sampling from the generative distribution. Indeed, in our experiments, we obtain the labels of large-scale problems from heuristic methods like LKH and KAMIS, which only require high solution quality rather than strict optimality to produce reasonable empirical results. We will include a discussion of supervision requirements and also mention this in the discussion of the comparison between RL and SL in our paper.
> **3. Recent research [1] also questions the effectiveness of ML-based heatmap generation, suggesting that the performance is largely driven by the post-hoc search rather than the heatmap.**
Please note that [1] primarily focuses on heatmap-based methods paired with strong post-hoc search techniques like MCTS. In contrast, our approach only employs basic decoding strategies, such as greedy decoding. Yet without MCTS, OptCM still demonstrates highly competitive performance, even surpassing LKH within limited time budgets (please see Figs. 3 and 4). In our settings, compared to previous heatmap-based methods like DIMES, DIFUSCO, and T2T, OptCM significantly outperforms them across all decoding settings, underscoring the substantial impact of heatmap quality on solving performance (please see Tables 1, 2, and 4).
---
Thank you for the opportunity to engage in this valuable discussion. We hope our response effectively addresses your concerns. We believe this work advances the field by making important strides in improving NCO's performance. We would sincerely appreciate it if you could reconsider your rating, and we are more than willing to address any further concerns you may have.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response.
I still believe that the contribution of this work is somewhat incremental, and the method may not be well-suited for constrained COPs. Nevertheless, I do see that the authors have made efforts to refine this work. Therefore, I am raising my score to 5. This neutral score means that if the other reviewers and AC lean toward accepting this work, I would not oppose it. I encourage the authors to make more contributions in future versions.
---
Rebuttal 3:
Comment: Thank you for acknowledging our efforts and raising the rating. We will continue refining our work to make a meaningful contribution to the community. However, we would still like to emphasize our novelty and respond to the concern of the capability for other constrained COPs.
**1) Novelty.**
Indeed CMs are recognized as a new independent class of generative models in generative tasks, and in this paper, we further introduce the optimization consistency condition and establish optimization consistency models (OptCM) to tailor the model for optimization scenarios.
Compared to diffusion solvers, the difference between the consistency mapping ($x_t\to x_0$) and the denoising function ($x_t\to x_{t-1}$) makes all of the designs, including the model inference process, the training objective function, and the gradient search procedure, fundamentally different. It is challenging to design such a new system with strong empirical performance.
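For intuition on why the consistency mapping needs only a handful of network calls, here is a minimal continuous-state sketch of multistep consistency sampling in the style of Song et al.'s consistency models. The function name, the Gaussian re-noising form, and the schedule are our illustrative assumptions; this is not the discrete heatmap variant used in OptCM:

```python
import numpy as np

def multistep_consistency_sample(f, x_T, sigmas, sigma_min=0.002, seed=0):
    """Multistep consistency sampling: one direct jump from noise to a clean
    estimate, then optional refinement steps that re-noise and map back.

    f(x, sigma) -> x0_hat is the learned consistency function; sigmas is a
    decreasing noise schedule whose length equals the number of network calls.
    """
    rng = np.random.default_rng(seed)
    x0 = f(x_T, sigmas[0])  # single-step generation already yields a sample
    for sigma in sigmas[1:]:
        z = rng.standard_normal(np.shape(x0))
        # Perturb the current estimate back to intermediate noise level sigma...
        x = x0 + np.sqrt(max(sigma**2 - sigma_min**2, 0.0)) * z
        # ...and apply the consistency function to jump directly to x0 again.
        x0 = f(x, sigma)
    return x0
```

With $T_s=5$ this costs five forward passes, versus the hundreds of sequential denoising steps $x_t \to x_{t-1}$ a diffusion solver needs.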
Moreover, the experimental results can support our claims. Our improvements over previous SOTA diffusion-based methods are of a significant magnitude, e.g., OptCM variants with more sampling and gradient search steps can achieve 82.1% performance gain with 14.7x speedup compared to previous diffusion-based counterparts on TSP-50/100. We believe that such strong empirical results substantiate the claim that OptCM represents more than just incremental progress within the existing diffusion framework.
We affirm our commitment to making the source code publicly accessible upon publication. The objective of this work is to provide reference and contributions for the community. Open-sourcing the code is a fundamental obligation we undertake.
**2) Concern of the capability for other constrained COPs.**
As we previously mentioned, OptCM is theoretically capable of addressing any problem that can be framed as an edge-selection or node-selection problem. However, it is unrealistic to expect a single model to excel across all tasks. Different problems possess unique characteristics that may be better suited to specific methodologies. For problems like TSP, heatmap-based methods (using only basic greedy decoding) can achieve state-of-the-art performance, whereas RL-based approaches often struggle with sparse rewards and training instability, which may impede direct scaling to larger problem sizes. For problems with more complex constraints, however, sequence models demonstrate superiority by explicitly managing constraints at each step. Indeed, we do not yet know whether strong generative models can directly handle problems like CVRP; perhaps they already can, or at least have the potential to. (Though the discussion period is approaching its end, we will continue to adapt the framework to the CVRP task to validate this point.) The key point is that each methodological approach has its own strengths and weaknesses, and we do not expect any single model to excel across all tasks. Indeed, there are many papers in leading conferences that focus solely on a single problem, e.g., TSP [1,2,3], MIS [4,5], or SAT [6,7,8], yet these contributions are still highly valued by the community.
[1] H-tsp: Hierarchically solving the large-scale traveling salesman problem. AAAI, 2023.
[2] Graph Neural Network Guided Local Search for the Traveling Salesperson Problem. ICLR, 2022.
[3] Unsupervised Learning for Solving the Travelling Salesman Problem. NeurIPS, 2023.
[4] Learning What to Defer for Maximum Independent Sets. ICML, 2020.
[5] Maximum Independent Set: Self-Training through Dynamic Programming. NeurIPS, 2023.
[6] Learning a SAT solver from single-bit supervision. NeurIPS, 2019.
[7] NSNet: A General Neural Probabilistic Framework for Satisfiability Problems. NeurIPS, 2022.
[8] Online Bayesian Moment Matching based SAT Solver Heuristics. ICML, 2020.
---
Thank you again for the advice. We hope our response satisfactorily addresses your remained concerns. We believe OptCM advances the field by making important strides in improving NCO's performance, given OptCM's strong empirical results and the potential to serve as a more powerful backbone model for future works. We appreciate your previous acknowledgment and hope that our clarifications will merit a higher rating for our work. | Summary: This paper introduced a new algorithm for solving some classic combinatorial optimization problems. The method falls into the category of learn-based generative solvers. More specifically, it is a direct extension of the DIFUSCO [1] and T2t [2] solver, which are diffusion-based generative solvers. The improvement is mostly done through improving on the sampling step of the two aforementioned works with consistency models (CM) [3], a recent notable regime that enables drastic reduction of the number of function evaluations (NFE, or sampling steps) of vanilla diffusion models. The novelty lies in extending CM into discrete regime and combining it with consistency-based gradient search, which is necessary for combinatorial optimization problem. Empirical evaluations show the effectiveness of this new solver, where it achieves competitive objective value in a much shorter time, compared to various baselines.
[1] Z. Sun and Y. Yang, “DIFUSCO: Graph-based diffusion solvers for combinatorial optimization, in Thirty-seventh Conference on Neural Information Processing Systems, 2023.
[2] Y. Li, J. Guo, R. Wang, and J. Yan, “T2t: From distribution learning in training to gradient search in testing for combinatorial optimization,” in Advances in Neural Information Processing Systems, 2023.
[3] Y. Song, P. Dhariwal, M. Chen, and I. Sutskever, “Consistency models,” arXiv preprint arXiv:2303.01469, 2023
Strengths: 1. Overall I think the method is novel. The extension of consistency training framework into diffusion-based CO solvers is not trivial, and this work is trying to solve a well-motivated problem. The empirical evaluations are quite convincing compared to diffusion-based CO solvers (of course, one expects so as consistency models inference time is much faster than diffusion generative models).
2. The paper is overall well-written, and the authors seem to have done quite thorough literature reviews to gather sufficient baselines to compare to the performance of their work to.
Weaknesses: 1. It is necessary to elaborate on why the authors wrote "... note F1 is exactly the (implicit) the objective of the diffusion and consistency models..." at line 259. In other words, we need to see why (this should be a lemma with proof) the quantity $F_1$ in equation (6) is equivalent to the loss in eq. (4).
2. It is unclear how the overall training paradigm takes place in practice, including the gradient search part. The authors should include an algorithm box on the training of their OptCM framework. Algorithm 1 on Multistep Consistency Sampling is almost the same as in the original Consistency Models paper, so I suggest the authors move it to the Appendix. If I'm not mistaken, the training of consistency models is quite tricky; for example, it is crucial to design a suitable training-time discretization schedule for CM to work well. Could the authors elaborate on this for their problem?
I'm willing to re-evaluate my score if the authors can answer these two points.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Consistency training introduces additional training overhead; I think the authors did not discuss this in Appendix D.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and insightful suggestions, as well as for acknowledging our novelty, motivation, non-trivial contributions, and convincing evaluations. Your questions and suggestions are instrumental in further strengthening our paper. Below, we respond to your specific comments.
> **Q1: Why is the quantity $F_1$ in equation (6) equivalent to the loss in eq. (4)?**
Thanks for your valuable suggestion. We will add the derivation and its intuition to the paper.
Recall that $F_1=-\mathbb{E}\_{q(\mathbf{x}|G)q(\mathbf{h}|\mathbf{x},G)}\left[\log p_\theta(\mathbf{x},\mathbf{h}|G)- \log q(\mathbf{x}|G)q(\mathbf{h}|\mathbf{x},G)\right]$. We apply an approximation to the posterior over $\mathbf{x}=\mathbf{x}\_0$ as a point estimate $q(\mathbf{x}|G)=\delta(\mathbf{x}-\mathbf{\eta})$ and obtain $F_1=\mathbb{E}\_{q(\mathbf{h}|\eta,G)}\left[\log\frac{q(\mathbf{h}|\mathbf{\eta},G)}{p_\theta(\mathbf{\eta},\mathbf{h}|G)}\right]$. This term is exactly the usual variational bound on the negative log-likelihood conditioned on graph $G$, i.e., the optimization objective of diffusion models. It can be further derived under the denoising diffusion framework, similarly to the original DDPM paper:
$$\begin{aligned}F\_1&=\mathbb{E}\_{q(\mathbf{x}\_{1:T}|\eta,G)}\left[\log\frac{q(\mathbf{x}\_{1:T}|\mathbf{\eta},G)}{p\_\theta(\mathbf{\eta},\mathbf{x}\_{1:T}|G)}\right] \\\\ &= \mathbb{E}\_{q} \left[ \log \frac{\prod\_{t=1}^T q(\mathbf{x}\_t | \mathbf{x}\_{t-1},G)}{p\_\theta(\mathbf{x}\_T) p\_\theta(\eta | \mathbf{x}\_1, G) \prod\_{t=2}^T p\_\theta(\mathbf{x}\_{t-1} | \mathbf{x}\_t, G)} \right] \\\\ &= \mathbb{E}\_q \left[ -\log p\_\theta(\mathbf{x}\_T) + \sum\_{t=2}^T \log \frac{q(\mathbf{x}\_{t}|\mathbf{x}\_{t-1},G)}{p\_\theta(\mathbf{x}\_{t-1}|\mathbf{x}\_t, G)} + \log \frac{q(\mathbf{x}\_1|\eta, G)}{p\_\theta(\eta|\mathbf{x}\_1, G)} \right] \\\\ &= \mathbb{E}\_q \left[ -\log p\_\theta(\mathbf{x}\_T) + \sum\_{t=2}^T \log \left( \frac{q(\mathbf{x}\_{t-1}|\mathbf{x}\_t, \eta,G) }{p\_\theta(\mathbf{x}\_{t-1}|\mathbf{x}\_t,G)}\cdot \frac{q(\mathbf{x}\_t|\eta,G)}{q(\mathbf{x}\_{t-1}|\eta,G)} \right) + \log \frac{q(\mathbf{x}\_1|\eta,G)}{p\_\theta(\eta|\mathbf{x}\_1,G)} \right] \\\\ &= \mathbb{E}\_{q} \left[ \log \frac{q(\mathbf{x}\_T|\eta,G)}{p\_\theta(\mathbf{x}\_T)} + \sum\_{t=2}^T \log \frac{q(\mathbf{x}\_{t-1}|\mathbf{x}\_t, \eta,G)}{p\_\theta(\mathbf{x}\_{t-1}|\mathbf{x}\_t,G)} - \log p\_\theta(\eta|\mathbf{x}\_1,G) \right] \\\\ &= \mathbb{E}\_{q} \left[ D\_{KL}(q(\mathbf{x}\_T|\eta,G) \| p\_\theta(\mathbf{x}\_T)) + \sum\_{t=2}^T D\_{KL}\left(q(\mathbf{x}\_{t-1}|\mathbf{x}\_t, \eta,G) \| p\_\theta(\mathbf{x}\_{t-1}|\mathbf{x}\_t,G)\right) - \log p\_\theta(\eta|\mathbf{x}\_1,G)\right] \\\\ &= \sum\_t \mathbb{E}\_{q} \left[D\_{KL}\left(q(\mathbf{x}\_{t-1}|\mathbf{x}\_t,\eta,G) \| p\_\theta(\mathbf{x}\_{t-1}|\mathbf{x}\_t,G)\right)\right] + C \end{aligned}$$
This term is the optimization objective on the point estimate $\eta$. In optimization consistency models, we do not directly predict $p\_\theta(\mathbf{x}\_{t-1}|\mathbf{x}\_t,G)$ when computing the objective; instead, we can virtually estimate this term using an additionally predicted $\hat{\eta}$ :
$$\begin{align}p\_\theta(\mathbf{x}\_{t-1}|\mathbf{x}\_t,G)=q(\mathbf{x}\_{t-1}|\mathbf{x}\_t,\hat{\eta},G)=q(\mathbf{x}\_{t}|\mathbf{x}\_{t-1},\hat{\eta},G)\frac{q(\mathbf{x}\_{t-1}|\hat{\eta},G)}{q(\mathbf{x}\_t|\hat{\eta},G)}.\end{align}$$
Then $F\_1$ can be reduced to the distance between the estimated $\hat{\eta}$ and the real $\eta$, i.e.,
$$\begin{align} F\_1 = \sum\_t \mathbb{E}\_{q} \left[d\big(p\_\theta^t(\eta),\delta\left(\mathbf{x}-\mathbf{\eta}\right)\big)\right] + C,\end{align}$$
which is equivalent to the consistency loss that optimizes points from different noise levels to map to the same target point.
> **Q2: The authors should include an algorithm box on the training of their OptCM framework.**
Thanks for the nice advice. We have included specific training algorithms in the PDF attachment of the general response and will incorporate them into our paper.
> **Q3: More discussions on training details.**
Below we list the specific design choices of OptCM; the hyperparameters correspond to those used in the algorithm presented in the paper. We will add these details to the paper to enhance its comprehensiveness.
Moreover, we affirm our commitment to making the source code publicly accessible upon publication. The objective of this work is to provide a reference and contribution for the community, and we regard open-sourcing the code as a fundamental obligation.
|Train|Design Choice|
|-|-|
|Consistency Loss Function|$d(x,y)=$Binary_Cross_Entropy$(x,y)$|
|Scaling Factor|$\alpha=0.5$|
|Weighting Function|$\lambda (t)=1$|
|Discretization Curriculum|$t\sim \{1, 2,...,T\}$, randomly sampling|
|Initial Learning Rate|$\eta=0.0002$|
|Learning Rate Schedule|Cosine decay, decay rate $\omega=0.0001$|
|Test|Design Choice|
|-|-|
|Sampling Step Schedule|$t_1=T(1-\sin(N\cdot i\pi/2))$, $t_2=T(1-\sin(N\cdot (i+1)\pi/2))$|
|Guided Weighting Parameters|$\lambda_1=50$, $\lambda_2=50$ on TSP <br> $\lambda_1=2$, $\lambda_2=2$ on MIS|
|Rewrite Ratio| $\epsilon=0.2$ on TSP and ER-[700-800] <br> $\epsilon=0.3$ on RB-[200-300]|
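For concreteness, the training-time choices in the tables above can be sketched as small helper functions. This is a hedged illustration of our own, not the actual OptCM implementation; in particular, reading the cosine decay rate $\omega$ as the final learning rate is an assumption.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Consistency distance d(x, y): mean binary cross-entropy."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

def sample_timestep(T, rng):
    """Discretization curriculum: t drawn uniformly from {1, ..., T}."""
    return int(rng.integers(1, T + 1))

def cosine_lr(step, total_steps, eta=2e-4, omega=1e-4):
    """Cosine decay from the initial learning rate eta down to omega
    (interpreting the listed decay rate omega as the final learning rate)."""
    return omega + 0.5 * (eta - omega) * (1.0 + np.cos(np.pi * step / total_steps))
```

The weighting function $\lambda(t)=1$ simply means each sampled timestep contributes equally to the loss.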
> **Q4: Consistency training introduced additional training overhead; I think the authors did not discuss this in Appendix D.**
Thanks for your careful review and suggestion. Since the consistency model requires two inference predictions with different noise levels during training, it requires twice the training cost of the original diffusion model. However, this overhead on training is offline, and the consistency model is much more efficient than diffusion at inference time. We will supplement this in our new revision.
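To make the extra overhead concrete, here is a toy sketch of one consistency-training step (our own simplified illustration with a made-up linear noising schedule and squared distance, not OptCM's actual graph-based objective). The two forward passes are what roughly double the per-step training cost relative to a one-pass diffusion step:

```python
import numpy as np

def consistency_step(f_online, f_target, x0, T, rng):
    """One consistency-training step: the same clean sample is noised to two
    adjacent levels and the network is evaluated twice (online + target),
    so each training step performs two inference predictions."""
    t = int(rng.integers(2, T + 1))
    z = rng.standard_normal(x0.shape)
    x_t = x0 + (t / T) * z              # toy linear noising schedule
    x_prev = x0 + ((t - 1) / T) * z
    pred = f_online(x_t, t)             # forward pass 1 (online network)
    target = f_target(x_prev, t - 1)    # forward pass 2 (frozen/EMA copy)
    return float(np.mean((pred - target) ** 2))
```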
---
We hope this response could help address your concerns, and we are more than happy to address any further concerns you may have.
---
Rebuttal Comment 1.1:
Comment: Thank you for the extensive rebuttal. I am happy to keep my current score, which is leaning acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful feedback throughout the review process. We are pleased to hear that you found our rebuttal satisfactory, and we are making every effort to improve the paper based on your valuable feedback. We believe OptCM advances the field by making important strides in improving NCO's performance, given its strong empirical results and its potential to serve as a more powerful backbone model for future works.
Given your earlier indication that a re-evaluation of the score was possible based on our response, do you have any other concerns keeping you from raising your score? If any other concerns remain, we are more than willing to offer clarifications or engage in further discussions. Your continued support would be greatly appreciated, as it would significantly bolster our efforts. Thank you once again for your time and consideration. | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers’ time, valuable feedback, and constructive suggestions. Overall, the reviewers describe our work as "well-motivated" (3pWn), "well-written" (3pWn, rPB6), and "technically sound" (rPB6), acknowledging our "novel," "robust," and "versatile" methodology and model design (3pWn, rpwx), with "convincing," "significantly superior," "extensive," and "impressive" empirical evaluations (3pWn, P6jy, rpwx). The primary concerns revolve around novelty and the inherent limitations in generalization associated with the data-driven learning paradigm, where fitting a specific data distribution may hinder generalization to different distributions. To preempt potential misunderstandings that might impact the evaluation of our work, we first restate our contributions and address the generalization concerns.
**(1) Novelty Claim.**
OptCM is inspired by some excellent previous works, which is one of the keys to its strong experimental results. However, we must note that the technical contributions of OptCM are highly non-trivial, as supported by reviewers 3pWn and rpwx.
1. Compared to DIFUSCO and T2T's diffusion models, OptCM's model design, including the inference process and the training objective, differs substantially from previous diffusion models. Indeed, consistency models are recognized as a new, independent class of generative models, and in this paper we further introduce the optimization consistency condition and establish optimization consistency models (OptCM) to tailor the model for optimization scenarios. This positions the model as a new, highly effective and efficient backbone for the future development of learning-based solvers.
2. Regarding T2T's two-stage framework, we merely borrow its high-level concept of instance-tailored solving during testing, thus bridging the gap between training and problem-solving. In fact, due to the difference between consistency mapping and diffusion inference, OptCM's framework and implementation are fundamentally independent. The testing-phase search is an additional effort built upon the original inference model, and with the consistency mapping-based model, the search procedure is an innovative method that is completely different from that of T2T.
3. For experimental results, OptCM demonstrates exceptional performance in both solution efficiency and quality. Notably, our improvements over previous SOTA diffusion-based methods are of a significant magnitude, e.g., OptCM variants with more sampling and gradient search steps can achieve 82.1% performance gain with 14.7x speedup compared to previous diffusion-based counterparts on TSP-50/100. Additionally, this is the first time end-to-end solvers have surpassed LKH on the runtime-gap curve for the TSP problem within a given timeframe. We believe that such strong empirical results substantiate the claim that OptCM represents more than just incremental progress within the existing diffusion framework.
**(2) Generalization Discussion.**
The major concern of Reviewer rPB6 is about the application of data-driven machine learning in problem solving, which might cause the model to fit a specific data distribution and limit its generalization to different distributions. Here we provide a targeted discussion.
First, we must note that generalization challenges are common across data-driven machine learning methods, not just ours. Any machine learning method that automatically learns solving strategies and data structures from training data (whether supervised, reinforcement, or unsupervised) faces generalization challenges. Nonetheless, Machine Learning for Combinatorial Optimization (ML4CO) has gained significant traction, evidenced by numerous publications at leading conferences such as NeurIPS, ICML, and ICLR, each annually featuring about 20 papers on this subject. Please refer to [1] for a summarized list of ML4CO papers covering 34 CO problems; for problems like TSP, more than 50 papers are recorded. Around 20 survey papers, including Yoshua Bengio's first-authored paper [2], provide extensive summaries and highlight ML4CO's critical research value. Therefore, considering the demonstrated success of the ML4CO field, it would be inappropriate to evaluate a method unfavorably merely because it is machine-learning-based.
On the other hand, learning from the given data is indeed one of the advantages of learning-based solvers. Data in real-world applications often exhibits a certain problem structure within the implicit distribution. In many cases, we need machine learning methods to uncover patterns within the data and solve problems associated with a specific distribution. This is also supported by Bengio in [2]: "ML4CO focuses on CO algorithms that automatically perform learning on a chosen implicit distribution of problems", and "ML can help improve an algorithm on a distribution of problem instances in two ways: 1) replace some heavy computations by a fast approximation; 2) explore the decision space and learn out of the experience the best-performing behavior (policy)."
Regarding the empirical results, we have already performed many generalization experiments, including generalization to different scales of TSP (from 50 to 1000), as shown in Sec. 6.1 and Table 3, and to different distributions such as real-world TSP instances from TSPLIB, as shown in Appendix A and Tables 1 and 2. These results show that OptCM maintains good generalization and outperforms previous SOTA diffusion-based baselines. We also supplement generalization results on MIS in the individual responses to strengthen our paper. Furthermore, generalization is itself an independently important research domain, largely orthogonal to our method.
[1] github.com/Thinklab-SJTU/awesome-ml4co
[2] Machine Learning for Combinatorial Optimization: a Methodological Tour d'horizon. EJOR 2021.
Pdf: /pdf/6ddd69cf56375375639cfc3357b3908195d9ffc9.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
Med-Real2Sim: Non-Invasive Medical Digital Twins using Physics-Informed Self-Supervised Learning | Accept (poster) | Summary: This paper proposes a novel method for creating patient-specific digital twins using non-invasive patient health data. The authors introduce a physics-informed self-supervised learning (SSL) algorithm that pretrains a neural network on learning a differentiable simulator of the cardiac process. Then, another model is trained to reconstruct physiological measurements from non-invasive data while being constrained by physical equations learned during pretraining. The method is applied to identify digital twins of cardiac hemodynamics using echocardiogram videos, showing good results in unsupervised disease detection and in-silico clinical trials.
Strengths: * The method uses non-invasive data, avoiding time-consuming and complicated patient interventions.
* The model accuracy is enhanced by including a physics-based model during training.
* The authors demonstrate the method's utility in modeling complex physiological processes like cardiac pressure-volume loops with open-sourced datasets.
Weaknesses: * Simplifications/assumptions in the Windkessel and LVAD models might not fully capture the complexity of the heart dynamics.
* The results might be sensitive to low quality non-intrusive data, which can affect the global accuracy of the method.
* The demographic diversity of the echocardiography dataset is unclear. Thus, the model might not generalize well across all segments of the population.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Lines 251-255, Table 3: What criteria were used to select the learnable and fixed parameters of the model? Were these selected using some kind of sensitivity analysis based on the state-space matrices in Eqs. 21 and 24?
* Line 708: Appendix C1 already addresses the ill-posedness of the inverse problem. That means that the trained model could potentially assign the same digital twin to two different patients due to the similarity of their echocardiograms. Have the authors found any difficulty in this regard?
* Have the authors considered using an easier and more accessible non-intrusive technique such as electrocardiograms? Would the use of several modalities of non-intrusive data for the same patient increase the performance of the model?
Minor comments:
* Line 227: "tune-able" might refer to "tunable".
* Line 252: Incorrect reference, Table A.2. might refer to Table 3.
* Line 814: Incorrect reference, Figure D.2. might refer to Figure 7.
* Figure 7: The figure model seems to be incomplete/offset.
* Equation 35: A parenthesis is missing after $R_{NO}$.
Final comment: The paper is well structured, the results are promising, and the methodology has moderate impact on the field. However, it is still unclear whether this model could handle the variability and complexity of human physiology from non-invasive measurements alone. Based on the comments above, I recommend a weak accept.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations were addressed appropriately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very thoughtful comments and feedback!
**Weaknesses:**
Thank you for highlighting these issues. Please see below a point-by-point response to your concerns.
* While the Windkessel model does not fully capture the complexity of cardiac dynamics, this simplicity is actually a strength of our approach. The heart is an incredibly complex organ system, with millions of parameters describing its electrophysiology, hemodynamics, and biomechanics. However, it is not always necessary to simulate the entire cardiovascular system to make specific diagnostic or treatment decisions. Our model focuses on predicting pressure-volume (PV) loops relevant to diagnostic and treatment procedures for end-stage heart disease. In this context, the Windkessel model is the most parsimonious in-silico simulator for the relevant physiological processes. Additionally, we want to stress that our main contribution is a general two-stage physics-informed self-supervised learning approach and not a specific in-silico model—this approach is general enough to incorporate any cardiovascular system simulator, including more complex lumped circuit models of the arterial system.
* Please note that non-invasive data is not necessarily of low quality; it simply means the data was acquired without invasive procedures, such as catheterization. Echocardiograms, which we use in this paper, are a widely-used, gold-standard non-invasive method for diagnosing cardiac function. Our paper proposes a new method that extracts additional valuable information from echocardiograms, which previously required invasive procedures and were impractical in most primary care settings. This method enables new use cases and does not compromise the accuracy of any existing procedures.
* This dataset includes a reasonably diverse demographic, comprising 68% White, 14% Black, 7% Asian patients, and individuals from other ethnicities. Additionally, 55% of the patients are male. We will provide a detailed demographic performance breakdown in the Appendix of the final paper.
**Responses to questions:**
**Question 1:** We selected learnable versus fixed parameters based on the intended use case of predicting cardiac left ventricular pressure-volume (PV) loops in combination with a sensitivity analysis to understand how they affect the model. Ultimately, parameters directly related to the circuit analog of blood flow in the left ventricle were allowed to be variable, while others were fixed to isolate the prediction of these dynamics. We first defined intervals of validity for each of the 14 independent parameters, based on how variations of each parameter affects the PV loops (realistic shape and magnitude of pressures of volumes), while ensuring that the volume ranges and EF ranges of the two echo datasets were covered. These were centered around reference values from the literature. For the 14 independent parameters, we did perform a sensitivity analysis based on the state-space matrices in Eqs. 21. We selected the final 7 learnable parameters, based on how changes in these parameters affect the volume space, their physical meanings, their resulting PV loop shapes, and their ability to encompass the volume ranges of the two echo datasets. We **include a table in the global rebuttal PDF that summarizes the sensitivity analysis results and modeling choices**.
**Question 2:** It is definitely true that, in general, patients with similar echocardiograms may be assigned identical digital twins and it is also true that this may not capture sufficient differences between patients. This is a general issue encountered in all inverse modeling problems, and is not peculiar to our setup. In our setting, we use a lumped parameter model with a small set of parameters that we have reasons to believe will be able to be distinguished between patients because they have a visual representation in an echocardiogram. This means that there are correlations between visual features and values of the in-silico model parameters that allow us to uncover meaningful differences in the digital twins that can distinguish, for example, mitral stenosis (Fig. 3(b)). We believe that studying the identifiability of specific models, developing general conditions for identifiability of physics-based models and developing new learning procedures that improve the chances of model identification are all very interesting directions for theoretical research that complements our applied method. One potential direction to improve identification would be incorporating patient level meta-data or other modalities as is done in other digital twin settings to enhance the model’s ability to distinguish between patients to make a measurement model more nearly injective.
**Question 3:** It would be really interesting to study the use of electrocardiograms in this setting and even more interesting to combine these data assets. Echocardiograms were the natural choice for our model because they are a standard tool for encoding information about left ventricular pressure in addition to left ventricular volume, which they are routinely used to assess [Popescu et al 2022]. However, we recognize that incorporating additional non-invasive modalities such as electrocardiograms could potentially enhance the model’s performance. We are curious to explore the possibility in future studies.
Thank you for catching the references and minor typos, we will happily update those. Figure 7 is intended to show shorting the circuit to calculate an equivalent circuit.
_Bogdan A Popescu, Carmen C Beladan, Sherif F Nagueh, Otto A Smiseth, How to assess left ventricular filling pressures by echocardiography in clinical practice, European Heart Journal - Cardiovascular Imaging, Volume 23, Issue 9, September 2022, Pages 1127–1129, https://doi.org/10.1093/ehjci/jeac123_
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal, I have no more concerns. | Summary: This paper introduces a novel methodology for identifying patient-specific digital twins using noninvasive medical imaging, particularly focusing on cardiac hemodynamics. By leveraging a physics-informed self-supervised learning approach, the research addresses the challenge of modeling digital twins without invasive data collection. The process involves pretraining a neural network to simulate physiological processes and then fine-tuning it to reconstruct physiological measurements from noninvasive modalities, constrained by the learned physical equations. This framework allows for the simulation of patient-specific physiological parameters and the potential for conducting in-silico clinical trials and unsupervised disease detection.
Strengths: 1. The paper introduces a cutting-edge method combining physics-informed neural networks with self-supervised learning to tackle the inverse problem of estimating physiological parameters from noninvasive imaging data.
2. By utilizing noninvasive data, the proposed method significantly reduces the need for invasive procedures, enhancing patient safety and comfort, and potentially broadening the applicability of digital twin technology in routine clinical practice.
3. The methodology's ability to simulate detailed physiological conditions and interventions opens up vast possibilities for its application in personalized medicine, including unsupervised disease detection and in-silico clinical trials, which can significantly accelerate the development of therapeutic strategies.
Weaknesses: 1. Insufficient performance comparison. The paper only compares against PINN and Neural ODE methods. There are many variants of PINNs that outperform the original one; the paper should choose stronger baselines.
2. No ablation study. The paper proposes a two-stage training strategy but does not show why it is necessary.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do the results of the model change if the estimated parameters are dynamic rather than constant? Considering the medical context where a patient's condition is continually changing, assuming constant parameters seems unrealistic. How might this affect the reliability and accuracy of the model used in such scenarios?
2. Can you explain why blood flow is often modeled using an electrical circuit analogy, and why it adheres to Kirchhoff's laws of voltage and current?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The proposed method requires a pretty strong assumption of the PDEs dynamics. I am a little bit worried about the training efficiency of this method. Can it be used as a real diagnostic tool? Additionally, I am intrigued by the potential of this method to be applicable across a broader range of medical disciplines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your very thoughtful comments! Below, we provide a detailed response to the weaknesses and questions.
**Weaknesses:**
Please see below a point-by-point response to your concerns.
* The limited baseline comparisons stem from the novelty of our problem setup. While there are many variants of Physics-Informed Neural Networks (PINNs) in the literature, not all are suitable for our specific problem. Our setup is unique because it involves a partially known model that combines both known and unknown components. As explained in lines 721-725, this setup can naturally be approached using a generative inverse problem approach to model $\mathcal{F}^{-1}$. We compare this model to other generative and PINN solutions in Table 4, using two taxonomies: the known versus unknown forward model (rows) and paired versus unpaired input-output data (columns). Table 4 highlights that the various PINN variants make different assumptions about the problem setup, making them unsuitable as baselines for our case. Therefore, we chose to compare our approach only with baseline PINN and Neural ODE methods. Our goal was to demonstrate that our surrogate model achieves a different goal but still has a comparable understanding of the underlying physics as these standard approaches, rather than showing superior performance in learning physics. We are open to implementing additional baselines if the reviewer suggests specific PINN variants that are applicable to our setup.
* We did conduct an ablation by removing the physics-based model to evaluate its impact on ejection fraction (EF) predictions (Table 1). Additionally, we explored other feature extraction architectures, such as LSTM and transformer models, but these did not achieve the same level of loss minimization as our 3D-CNN approach. For the two-stage training strategy, we also experimented with alternative differentiable ODE solvers. However, these solvers were not compatible with our architecture and resulted in similar parameters across different patients or introduced additional numerical error. Consequently, we opted for our two-stage approach, which includes a physics-informed pretext task and physics-guided fine-tuning with 3D-CNN. We are very open to suggestions from the reviewer for additional ablation studies that could further validate our approach and their implications for our set up.
**Responses to questions:**
**Question 1:** First, we want to clarify the difference between the patient's cardiac states and cardiovascular parameters. Our model is dynamic: it analyzes an ultrasound video to extract a cardiac state that varies over time, characterized by changes in pressure and volume throughout the video, which lasts for several minutes. The underlying parameters, however, are fixed because they depend on the heart's anatomy, deformities, and tissue properties, which change over days, months or years, not within the short duration of a single echocardiogram acquisition. While modeling the long-term evolution of cardiac health is a fascinating problem, our model can still address this setup by applying our two-stage procedure to new echocardiograms acquired over different years.
**Question 2:** The analogy between cardiac blood flow and electric circuits is a classical result in hydrology and fluid mechanics. In this analogy, blood flow is analogous to electrical current as it is driven by pressure differences and encounters resistance like electricity is driven by voltage and material resistance. Valves act as diodes, capacitors emulate the elastance behavior of heart chambers, and heart tissues induce resistance analogous to that in electric circuits. This analogy is often referred to as the “drain-pipe theory” and is widely used in fluid mechanics beyond cardiovascular modeling. The class of models studied in our paper, called Windkessel models, with varying complexity have been used to model cardiac dynamics for many years, though other classes of models do exist. The model we take to be fixed for our paper was validated in the Simaan et al paper (which we specifically used for its incorporation of an LVAD). The authors synthetically perturbed the model to demonstrate the behavior was as clinically expected and compared to real human measurements to demonstrate accuracy.
Kirchhoff's laws are adhered to in the model simply because they are adhered to in any electric circuit. In this way, Kirchhoff's Voltage Law parallels the conservation of energy in the circulatory system, where the sum of pressure drops around a closed loop equals the pressure driving the flow. Kirchhoff's Current Law reflects the conservation of mass in blood flow, ensuring that the total blood flow entering a junction equals the total flow exiting.
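As a minimal illustration of this analogy (a classical two-element Windkessel, far simpler than the model used in the paper), Kirchhoff's current law at the single node gives $C\,\mathrm{d}P/\mathrm{d}t = Q_{in}(t) - P/R$, which can be integrated directly:

```python
import numpy as np

def windkessel2(q_in, R=1.0, C=1.5, dt=1e-3, p0=80.0):
    """Forward-Euler integration of the two-element Windkessel ODE
    C * dP/dt = Q_in(t) - P / R, where pressure P plays the role of
    voltage, flow Q_in of current, R of resistance, C of capacitance."""
    p = np.empty(len(q_in))
    p[0] = p0
    for i in range(1, len(q_in)):
        p[i] = p[i - 1] + dt * (q_in[i - 1] - p[i - 1] / R) / C
    return p
```

With zero inflow, the pressure decays exponentially with time constant $RC$, exactly like the discharge of an RC circuit.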
**Finally, we want to address your overall comment** and clarify that we do not make strong assumptions about PDE dynamics. Instead, we use a low-fidelity model that exactly characterizes cardiac hemodynamics in the heart chambers relevant to cardiovascular diseases, while ignoring the details of blood flow in other arterial peripherals. We strongly believe that our approach has broad applications for individualized treatment of end-stage heart failure. This includes not only the use of LVAD, as demonstrated in Section 4.4, but also any other medical device with a corresponding in silico model. Additionally, our method can be used to diagnose diseases not typically identified using echocardiograms alone, such as mitral stenosis.
---
Rebuttal Comment 1.1:
Comment: Thank you very much! Your reply almost addresses my concerns. | Summary: I have read this manuscript during ICML review. It looks the same so I copied my previous review.
The authors presented a method to infer the physical parameters θ of a physiological process (the heart pumping blood) from a noninvasive observation y (the echo image). The mapping from y to θ cannot be directly learned due to the lack of paired data. Instead, they find that an intermediate variable x_bar (the EF) can be annotated by experts (x_bar=g(y)) and can also be calculated from θ (x_bar=m(M(θ))). Thus the observable pair (y, x_bar) provides the supervision for learning θ = F_inv(y), through the relation x_bar=m(M(F_inv(y))), where m is rule-based and M is a solution to an ODE. They first train a surrogate network to approximate M(θ) through synthetic simulation, followed by learning the parameters of F_inv on observations {(y, x_bar)}.
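The composition described above can be sketched as a tiny stand-in (all models here are hypothetical placeholders: a linear map stands in for the video network F_inv, and lambdas for the frozen stage-1 surrogate M and the rule-based m):

```python
import numpy as np

def stage2_loss(W, M_hat, m, y, x_bar):
    """Stage-2 objective: with the pretrained surrogate M_hat frozen,
    fit the inverse map F_inv (here a stand-in linear layer W) so that
    m(M_hat(F_inv(y))) matches the expert-annotated label x_bar."""
    theta = y @ W                  # F_inv: observation -> physical parameters
    x = M_hat(theta)               # frozen physics surrogate from stage 1
    return float(np.mean((m(x) - x_bar) ** 2))
```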
Strengths: 1. The paper is clearly written and easy to read with clear symbols.
2. The idea of introducing a physical model helps better characterise the physiological system and provide a learning method.
3. The surrogate model avoids the cost of solving the ODE and makes the pipeline differentiable during learning.
4. The inference of physical parameters helps to perform virtual experiments.
Weaknesses: 1. The authors compared their method for predicting EF from echocardiograms with a supervised 3D-CNN but did not outperform it in MAE. It should be noted that EF is calculated from ED and ES segmentations, and a supervised segmentation network is expected to perform even better.
2. Lack of validation. The authors performed validation by comparing the EF derived from the physical parameters to the observed EF, but this does not guarantee the correctness of the physical parameters. In fact, arbitrary intermediate variables could be defined that also lead to comparable EF predictions.
3. Training the surrogate model x=M(θ) requires generating synthetic samples, which is domain-dependent. Whether the learned mapping fits extreme or shifted situations is unknown, and no uncertainty analysis is provided.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1.Could the authors provide more solid validation of their inferred physical parameters?
2.Could the authors provide the uncertainty/robustness/generalisation ability of the surrogate model?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: My biggest concern is that the method lacks ground-truth validation, at least some samples.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review of our paper. We greatly appreciate your and the other reviewers' feedback from our ICML submission, and we have implemented changes to address these comments in our new submission. We are happy to share that we have **added additional validation and comparison of the physics-based surrogate model in response to your feedback**. Specifically, we compared this approach to a baseline PINN approach and a Neural ODE approach, which demonstrate similar performance but fundamental differences in setup (Table 2). We also took into account your suggestion of further validating the surrogate model and its generalizability by adding predictions on an additional out-of-sample synthetic dataset in Section 4.5 and Appendix Figure 9.
In response to your specific concerns, we restate or add the following in a point-by-point response.
**Weakness 1:** Thank you for your observation regarding the comparison with supervised 3DCNN and other segmentation models in predicting EF from echocardiogram data. While we acknowledge the potential for supervised segmentation models to yield superior results in terms of mean absolute error (MAE) for EF prediction, it's important to clarify that our study's primary objective differs from that of these supervised algorithms. In our research, the primary focus is utilizing the volume states that compose EF (volume at ES and ED) as a clinically significant label to facilitate the training of our model for latent parameters and PV loops. We incorporate physics as fixed layers for this purpose, a process which may marginally compromise the model's EF prediction accuracy. In Table 1, we showcase that the sacrifice in EF prediction accuracy is minimal and acceptable within the context of our objectives. We acknowledge that certain state-of-the-art Unet models designed for segmentation tasks may indeed achieve superior MAE in EF prediction. However, it's crucial to note that these models are specifically optimized for segmentation purposes, whereas our methodology integrates physics-informed learning to achieve broader objectives beyond segmentation alone.
**Weakness 2:** Thank you for bringing up the limitation on the validation of our latent parameters. Validating solely by comparing EF derived from the physical parameters to observed EF is clearly insufficient, as you point out. Validating these individual parameters is of course challenging given that they are parameters of a physical model rather than exact patient measurements. In our particular experimental setting, our parameters certainly have physiological meaning, which we describe qualitatively, but they are not directly measurable. For example, Rm, mitral valve resistance, in our model is an electric-circuit-analogy parameter that is not equal to measures of mitral valve resistance in clinical practice. We sought to perform qualitative validation, i.e., describing qualitative relationships between Rm and actual mitral valve resistance, and using predicted PV loops to predict mitral stenosis disease labels. In future work, we would seek to find correlations between model parameters and physiological effects or conditions. For example, [Lamberti et al 2024 below] show a correlation between pulmonary vascular compliance, a parameter similar to those in our model, and right heart decompensation with LVAD implantation.
_Kimberly K. Lamberti et al., Dynamic load modulation predicts right heart tolerance of left ventricular cardiovascular assist in a porcine model of cardiogenic shock (2024). DOI: 10.1126/scitranslmed.adk4266_
**Weakness 3:** The surrogate model/pretext task is definitely intended to be domain-dependent and would need to be trained on synthetic data for any new in silico mathematical model. The data will always be synthetic for pretraining the physical dynamics, so it would not be dependent on the image data’s collection. In our case, the surrogate model is trained on a synthetic multidimensional grid filling a space of the form [a_1, b_1] x ... x [a_n, b_n], where [a_i, b_i] is the validity interval for the parameter θ_i and θ = [θ_1, ..., θ_n]; the grid is chosen according to the problem addressed. Because this synthetic dataset covers all possible states (and thus all possible patients), the surrogate model M(θ) is domain independent and capable of predicting M(θ) even for extreme or shifted situations (since these are included in the synthetic dataset). We add an out-of-sample prediction on 1000 new points showing a uniform distribution of errors, and a comparison of this method to other physics-informed approaches to demonstrate that it learns just as well and is better suited to our setup.
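For concreteness, the grid construction described above might look like the following sketch (the parameter count, intervals, and resolution are hypothetical, not the paper's actual values):

```python
import numpy as np
from itertools import product

# Hypothetical validity intervals [a_i, b_i] for three parameters theta_i.
bounds = [(0.005, 0.1), (0.5, 4.0), (1.0, 3.0)]
k = 5  # grid points per dimension

# Synthetic grid filling [a_1, b_1] x ... x [a_n, b_n]; each row is one
# theta = [theta_1, ..., theta_n] to simulate and use for training M(theta).
axes = [np.linspace(a, b, k) for a, b in bounds]
grid = np.array(list(product(*axes)))
print(grid.shape)  # (125, 3)
```

Each row of `grid` would then be fed to the ODE solver to produce a training pair (θ, M(θ)) for the surrogate.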
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. My concerns have been addressed to some degree. I have increased my rating. | Summary: The paper proposes a method to identify parameters for digital twin models of patients using non-invasive health data, eliminating the need for invasive procedures. This method focuses on scenarios like cardiac hemodynamics, where traditionally invasive measurements (e.g., through catheterization) can be predicted using non-invasive data (e.g., echocardiograms). The novelty of the method is to solve the associated inverse problem specifically for that patient, so that personalized predictions can be performed.
The proposed method uses a two-step SSL approach that structurally resembles pretraining and finetuning in SSL. First, a neural network is pretrained on synthetic data to learn the forward dynamics of a physics-based model. The pretrained model is then used to train another network on actual non-invasive patient data to predict physical parameters.
Application to Cardiac Hemodynamics:
The paper illustrates how to apply the above method for cardiac hemodynamics using echocardiography. This allows the prediction of patient-specific pressure-volume (PV) loops using non-invasive echocardiogram videos.
Strengths: The paper is overall well-written, and it identifies a real problem with potentially high impact: the design of personalised medical twins to avoid invasive procedures.
Weaknesses: The authors focus only on a very specific medical use case; they don't try to generalize their method to more cases. In the introduction the authors present this as a generic approach, so I was surprised not to find an attempt to support their vision with more application examples. Without a demonstration that this method can have broader use, I don't think this paper is suitable for presentation at this conference. Also, the authors don't run convincing ablations to demonstrate that their approach is sound.
Technical Quality: 1
Clarity: 3
Questions for Authors: Have you experimented with this method beyond Cardiovascular Hemodynamics?
You implement a 3D-CNN - how did you arrive at such an architecture? Have you run ablations?
Confidence: 3
Soundness: 1
Presentation: 3
Contribution: 2
Limitations: The authors mention the limited clinical validation. This is certainly one limitation to take into account. I would also consider the lack of generalization to different clinical procedures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback on our paper!
The proposed method is indeed general and not limited to the cardiovascular system example. Any physical or biological system that can be described through ordinary differential equations can utilize our approach. The problem setup in Section 2.1 and the training/loss functions in Section 2.2 are designed to be very general, capturing any mapping from non-invasive observations to a physical system defined by differential equations. We chose to focus on cardiovascular hemodynamics as our example because our contribution is conceptual rather than empirical. By studying a specific disease area in depth, we were able to illustrate a wide range of use cases for digital twins, from diagnosis to simulation of counterfactual treatment outcomes. These use cases were previously not possible, and our goal was to demonstrate the potential rather than validate a new model for an existing use case at scale. We believe that our in-depth focus on a disease area enhances rather than undermines our contribution.
To give another example of a problem setup in oncology where the exact same framework is applicable, consider the in-silico model proposed in [Baldock et al 2013], which describes the use of an ordinary differential equation for modeling proliferation and invasion of tumor cells. The two parameters of this model, dispersion D and proliferation p, have physiological meaning and impact on cancer prognosis [Baldock et al 2013]. They have been proposed to be measured using a mathematical formula from diffusion-weighted MRI [Ellingson et al 2010 below]. Our approach could learn the dynamics of the ODE and build a non-invasive direct approach for these parameters to be extracted from images with a 3DCNN, provided data is available. By demonstrating a method for creating digital twins that can be generalized, our work could encourage more public sharing of relevant healthcare datasets, fostering the development of new digital twins across various medical fields. This is a crucial step towards broader applications and greater impact in medical research.
In response to your questions regarding the choice of architecture and ablation studies, we want to stress that our framework is not specific to this architecture. We found the 3D-CNN architecture to be the most empirically performant architecture in predicting ejection fraction using the PINN layers compared to other architectures we tried, such as LSTM and transformer models. This is in line with previous results that showed the effectiveness of 3D-CNNs in modeling echocardiography [Ouyang et al 2021]. Regarding ablation studies, we conducted an ablation by removing the physics-based model to assess its impact in EF predictions. **We are very open to suggestions from the reviewer for additional specific ablation studies** that could further validate our approach and their implications for our set up.
_Ouyang, D., He, B., Ghorbani, A., Yuan, N., Ebinger, J., Langlotz, C.P., Heidenreich, P.A., Harrington, R.A., Liang, D.H., Ashley, E.A. and Zou, J.Y., 2020. Video-based AI for beat-to-beat assessment of cardiac function. Nature, 580(7802), pp.252-256._
_Ellingson, B.M., LaViolette, P.S., Rand, S.D., Malkin, M.G., Connelly, J.M., Mueller, W.M., Prost, R.W. and Schmainda, K.M. (2011), Spatially quantifying microscopic tumor invasion and proliferation using a voxel-wise solution to a glioma growth model and serial diffusion MRI. Magn. Reson. Med., 65: 1131-1143. https://doi.org/10.1002/mrm.22688_
---
Rebuttal Comment 1.1:
Comment: I have read the authors' reply and all the other reviews. I appreciate the effort of the authors in providing more details and explanations; they helped me understand the paper and their motivations better. I slightly increased my score. However, my concern about the generalizability of the method and the missed opportunity to demonstrate it remains (and I can see it's a shared concern among the reviewers).
Rebuttal: We thank all reviewers for their valuable feedback. We would like to take this opportunity to summarize the key contributions of our work, address common concerns across the reviews, and clarify some aspects of our methodology that may not have been fully appreciated.
**Summary of Contributions**
Our paper introduces a methodology for tuning patient-specific parameters of medical digital twins with non-invasive data. We do so by creating a framework for solving a composite inverse problem to reconstruct physiological states non-invasively from indirect measurements. We build a physics-informed self-supervised learning (SSL) algorithm that first pre-trains a neural network to learn a differentiable simulator of the mathematics of the digital twin model, which is then used to constrain the reconstruction of physiological measurements from data. This enables diverse applications of patient-specific models, which we demonstrate in our experiments in the setting of cardiac hemodynamics: the facilitation of in silico trials (for left-ventricular assist devices, LVADs) and disease detection (mitral stenosis).
**Key Strengths**
* **Innovative methodology:** Our method is designed to be suited to a challenging real-world inverse problem. We combine the strengths of physics-informed models with self-supervised learning to address the challenge of modeling personalized digital twins without direct measurement.
* **Non-invasive data:** Our approach is built with non-invasive indirect measurements. This has the potential to spur research into the development of medical digital twins suitable in a more diverse patient population.
* **Broad application:** The proposed framework allows for the simulation of detailed physiological parameters and interventions in any setting where a process can be modeled using ordinary differential equations. This generalizability will allow for further research into more widely applicable digital twins of various health systems and non-invasive modalities.
**Common Themes:**
* **Architecture and ablation studies:** We performed ablation studies to evaluate the impact of removing the physics-based model component and explored alternative architectures. The results showed that removing the physics-based model still allowed for comparable EF predictions, indicating that our model’s supervised learning capabilities were not compromised while enabling digital twin simulation. We explored various video feature extraction frameworks besides the 3DCNN, including LSTMs and transformers, as well as alternative differentiable ODE solvers, finding that none performed as well in our experimental setting. We are open to suggestions from the reviewers for additional ablation studies to further validate our approach and setup, but want to stress that our framework is not specific to any particular architecture.
* **Generalizability of approach:** Our manuscript presents the example of applying our methodology to modeling cardiovascular hemodynamics with non-invasive echocardiogram data. For this experiment, we selected parameters in a simplified dynamic lumped-parameter model with the specific focus of evaluating patient cardiac pressure-volume loops. We evaluated the performance on two public echocardiogram datasets with diverse demographics and studied this experiment in depth to demonstrate a wide range of potential use cases for models developed in this framework. However, the problem setup and training are designed to be very general, so that they might be used to develop digital twins for any system that can be modeled with ordinary differential equations using a mapping from a set of non-invasive observations. Our method is conceptual, aiming to foster data sharing and encourage broader applications in medical research.
* **Performance comparison and baselines:** We selected comparisons, specifically Physics-Informed Neural Networks (PINNs) and Neural ODE methods, to emphasize the fundamental differences in setup and to demonstrate differences in our surrogate model despite an understanding of the underlying physics comparable to that of state-of-the-art methods. There are many variants of PINNs in the literature, but not all are suited to our setup. Our method is designed to approximate known and unknown components of an inverse problem. We propose a generative inverse problem approach that will work across parameter sets, which contrasts with the PINN and Neural ODE approaches that focus on learning physics for individual patients. Despite potential advantages of advanced PINN variants, our approach is tailored for capturing broader inverse relationships rather than optimizing for single-patient physics learning. We compared our approach to these baselines to illustrate similar performance while highlighting our method’s unique setup and goals. We remain open to incorporating additional relevant baselines if they align with our specific framework.
Pdf: /pdf/dfab191bac1561f2eac3767112d13ff03939789e.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Trading off Consistency and Dimensionality of Convex Surrogates for Multiclass Classification | Accept (poster) | Summary: This paper studies surrogate loss design and the trade-off between surrogate consistency and loss dimension. The contributions are three-fold: (1) the characterization of the hallucination region, where the decoded prediction from the surrogate loss minimizer gives a class with no target probability mass, indicating a completely "irrational" prediction; (2) the construction of the ("weakly but reasonably" inconsistent) calibrated surrogate and link under the low-noise setup; (3) the decomposition of property elicitation into multiple elicitation problems with low dimensions.
Strengths: + **Very well-motivated problem**: Consistent/calibrated surrogate losses, the main topic of this paper, are sometimes too restrictive because the (in)consistency analysis hinges on "far-fetched" distributions---as argued by the authors, and as can be seen in some counterexamples to show the inconsistency of the well-known Crammer-Singer hinge loss. If we can remove such "pathological" situations from the entire distribution space, we would have a nicer characterization of loss functions. This is the central motivation of this paper.
+ **Interesting instances to show calibrated surrogates under the low-noise condition**: This paper mainly investigates two situations to convince the benefits of the relaxed notion of calibration: the unit cube and permutahedron. Both make sense with reasonably practical scenarios and provide the first attempt to show the benefits of incorporating the knowledge of the noise level in loss function design.
Weaknesses: + **The result with the unit cube may need $d \\geq n$**: At first sight, the statement of Corollary 7 (the calibrated link design for the unit cube) does not have any restriction on $\\alpha$ (say, any dependency on the embedding $d$), unlike Corollary 8 (the calibrated link design for the permutahedron). When I look at its proof, I suspect we need the condition $d \\geq n$ for this because the proof leverages that "$P\_\\alpha^y$ is a strict subset of the orthant that contains $v\_y$" at l.500. Otherwise, it is strange because we can choose an arbitrarily small embedding dimension $d$.
+ **The trade-off could be seen only for the unit cube case**: The main contribution of this paper is to showcase the trade-off between the consistency and the embedding dimension, as suggested by the title. However, we may not see a clear trade-off in the unit-cube case. This is also related to the above point: Once we have $d \\geq n$, there may not be a clear trade-off for $\\alpha$ and $d$. Given this, we are not very sure how universally we can observe the trade-off across different elicitation problems.
Nevertheless, I don't think these points are significant enough to undermine the contributions of this paper. I expect the authors to address them, which leads to the presentation of the contributions in a more fair/precise manner.
Technical Quality: 4
Clarity: 2
Questions for Authors: Here are the major questions.
+ In the proof sketch of Theorem 3, it is not very straightforward to see $\\mathrm{conv}(\\mathrm{vert}(P\_{-y})) \\cap \\psi\_y^\\varphi \\ne \\varnothing$. This is important to ensure that $\\mathcal{H} \\ne \\varnothing$ (before invoking Helly's theorem). Even if this is merely a sketch, it is better to discuss it in my opinion.
+ Figure 1 (especially the left one) is challenging to understand. Although I can guess that each vertex of the rectangle corresponds to $v\_y$, I cannot see what each area "ad", "cd", etc. mean exactly. Either the figure or caption should be refined. Figure 1R (and its caption) is slightly more digestible.
+ In Theorem 5, the constructed link contains the "point-set" distance $\\|u - P\_\\alpha^y\\|\_2$. What does it mean exactly? It appears several times afterward (for example, in Theorem 6). I guess we can recover the statements without significant modifications if we modify the link definition slightly.
----
Some minor points:
- In the last paragraph of the introduction, it is a bit too sudden to talk about the mode for unfamiliar readers. You may say a few words about the relationship between the mode and multiclass classification.
- In Definition 1, the very general report space is introduced, which I'm not sure is really necessary or not. Subsequently, the discussion is only based on the case $\\mathcal{R} = \\mathbb{R}^d$.
- By the way, the domain of a surrogate loss $L$ should be $\\mathcal{R} \\times \\mathcal{Y}$ in Definition 1, not $\\mathcal{Y} \\times \\mathcal{Y}$.
- Throughout the paper, "mode", "prop", and "vert" look better with \\mathrm, not with \\text.
- In Construction 1, $v\_y$ is not formally introduced (while I could catch its meaning). In my opinion, this should be formally introduced because it appears repeatedly in the paper.
- At l.171, "To show that ..." does not form a grammatically complete sentence.
- At l.174, what does it mean by "the vertex figure"?
- At l.194--196, the sentence is not easy to understand mainly because it contains two "if." Could you rephrase it without multiple "if"?
- At l.201, the level set $\\psi\_y$ for the general link is not defined as far as I see, even though the level set for $\\psi^\\varphi$ is defined in Definition 7.
- At l.201, $R\_\\mathcal{Y}$ probably matches $R$. Is it true? I'm asking out of curiosity.
- At l.209, what does it mean by "a robust sense"?
- At l.213, the last "via $(L,\\psi)$ with respect to $\\ell_{0-1}$" is relevant to the definition of the strict calibration region $R\_\\mathcal{Y}$. However, if you append "via [...]" in this way directly after $R\_\\mathcal{Y}$, it is a bit hard to understand at first sight.
- At l.229, $\\delta\_y$ is not defined (though I catch the meaning).
- At l.233, the notation $\\psi\_\\alpha^\\varphi$ should be changed if possible because this notation with the subscript/superscript is confusing with the level set notation $\\psi\_y^\\varphi$.
- At l.250, $=$ (in the beginning) should be $\\subseteq$ in my understanding.
- At l.251 and l.267, the link notation $\\psi\_\\alpha^P$ should be $\\psi\_\\alpha^\\varphi$. In regard to this, the notation of $\\psi\_\\alpha^{P^\\square}$ and $\\psi\_\\alpha^{P^w}$ in Section 4.2 are better to be consistent with $\\psi\_\\alpha^\\varphi$ if possible---the superscript stands for the embedding function, not the embedded space.
- At l.269, "$P\_\\alpha^y$ is [...] pairwise disjoint" should be complemented with "in $y$" to be clearer.
- At l.274, the strict properness is not introduced. Actually, I think the usual calibration suffices in this context.
- In the appendix, can you make the theorem numbers consistent with those used in the main part? You can do this by using the restatable package.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 4
Limitations: This work has discussed the limitations. Specifically, there is computational hardness to break down an elicitation problem into multiple low-dimensional problems in Section 5.
This work does not have potential negative societal impacts because the contributions of this paper are relevant only to general machine learning theory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: To clarify Corollary 7, the number of outcomes $n$ is $n=2^d$, and we embed said outcomes into the vertices of the d-dimensional hypercube, which has $2^d$ vertices. It turns out that for this choice of d, we obtain consistency for any value of $\alpha < 0.5$. In other words, for this choice of $d = \log n$, there is no dependence on $d$ in the threshold for $\alpha$ for the hypercube embedding. Although not necessarily a continuous tradeoff, we see this as a positive result since it implies that regardless of the number of outcomes, binary embedding into the unit cube allows for consistency as long as the true outcome has at least half probability.
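To make the binary embedding concrete: with $d$ surrogate dimensions, the $n = 2^d$ outcomes sit at the vertices of the cube $\{-1, 1\}^d$, and a surrogate report can be decoded by its coordinate signs. The sketch below is only an illustration of this vertex embedding and sign-based decoding, not the paper's calibrated link $\psi_\alpha$.

```python
import numpy as np

d = 4  # embedding dimension; n = 2**d = 16 outcomes

def embed(y):
    # Outcome index y in {0, ..., 2**d - 1} -> vertex of the cube {-1, 1}^d.
    bits = (y >> np.arange(d)) & 1
    return 2 * bits - 1

def decode(u):
    # Surrogate report u in R^d -> nearest cube vertex via coordinate signs,
    # then back to an outcome index.
    bits = (u > 0).astype(int)
    return int((bits << np.arange(d)).sum())

# Embedding then decoding recovers every one of the n = 16 outcomes.
assert all(decode(embed(y).astype(float)) == y for y in range(2 ** d))
```

The point of Corollary 7, as summarized above, is that this $d = \log n$ embedding admits a calibrated link whenever the low-noise level satisfies $\alpha < 0.5$.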
Regarding Theorem 3’s sketch, we did not show that $\mathcal{H} \neq \emptyset$; however, in the full proof, we do. We will add this detail to the sketch. For Figure 1, we plan to add clarifying statements for the labels within the captions. With regard to Theorem 5, the point-set distance $\|u - P_\alpha^y\|_2$ is the minimum distance between the point $u$ and any point within the set $P_\alpha^y$. We plan to define the point-set distance in our work to clarify our result.
Concerning the mentioned minor points, we thank you for your detailed pass through the paper; we plan to incorporate many of your writing suggestions to clarify the presentation of our work.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer zH7s
Comment: Thank you for the concise responses. I do not have more concerns. Good luck for the authors. | Summary: This paper proposes a method called polytope embedding, which embeds multiclass predictions onto real numbers. The paper studies the properties of this embedding, like hallucination and calibration. Further, with low-noise assumptions, the authors showed more calibration results for their embedding in some cases like embedding into the unit cube.
Strengths: The topic of this paper on how to design a consistent surrogate loss seems interesting. The results proved in the paper also relate to topics that people care about like hallucination and calibration.
Weaknesses: Even though each result seems interesting, the structure of this paper is not very clear to me. I am open to different arguments but I think it is hard for readers to understand what the authors are trying to show with this paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: I am wondering what exactly the authors' definition of partial consistency is. How exactly are the methods trading off consistency and dimensionality? If the authors mean consistency by hallucination and calibration, which theorem shows how dimension affects these properties.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Currently, the definition of partial consistency is implicit via Definition 2: we say that a surrogate and link $(L, \psi)$ are partially consistent if they are calibrated only over some $\mathcal{P} \subsetneq \Delta_{\mathcal{Y}}$ and not calibrated over all of $\Delta_{\mathcal{Y}}$. We will make this explicit after Definition 2.
At large, we demonstrate tradeoffs for consistency and dimensionality via Theorems 3, 5, and 10. As a negative result, Theorem 3 (Section 3) shows that embedding in d<n-1 dimensions guarantees inconsistency through hallucination, which is a stronger notion than inconsistency. This implies that we cannot have both low dimensionality and full consistency simultaneously, so we move forward in two directions: characterizing partial consistency in low dimensions (Theorem 5; Section 4) and recovering full consistency via multiple (low-dimensional) problem instances (Theorem 10; Section 5).
---
Rebuttal 2:
Comment: Thank the authors for their response. The paper's contribution becomes clearer to me after the authors' explanation. I think it would make the paper more readable if the authors added some high-level explanation before the important theorems, or a paragraph at the beginning explaining the paper's structure. I raised my score accordingly. | Summary: The paper examines the trade-off between consistency and dimensionality in multi-class classification. It has been known that the lower bound on the dimension for consistent surrogate losses under any distribution is $n - 1$, where $n$ is the number of classes. The authors propose the notion of partial consistency, which permits the construction of surrogate losses at much lower dimensions than $n - 1$. This method allows for the consistency of lower-dimensional surrogate losses under a low-noise condition and the construction of dimensionally reduced consistent surrogate losses across multiple problem instances.
Strengths: 1. The paper is the first study to explore the control of the trade-off between consistency and dimension in multi-class classification.
2. The paper is well-written and clear, even though it is theoretically dense.
Weaknesses: 1. The paper demonstrates the existence of distributions under which consistency holds for low-dimensional surrogates but does not offer much guidance on how to identify such distributions for a given surrogate loss.
2. The paper restricts its study and analysis to asymptotic guarantees.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Theorem 5 establishes the existence of an $\alpha \in [0, 0.5)$ across distributions where consistency is maintained. Is it possible to determine or estimate the value of $\alpha$ for a specific surrogate loss to guarantee consistency?
2. The max hinge loss, formulated by Crammer and Singer (2001), is inconsistent when $\alpha = 0$ and consistent when $\alpha = 0.5$. Do the findings of this paper potentially include such a result as a particular example?
3. The paper examines the asymptotic guarantees of Bayes consistency. Could some of these results be extended to incorporate non-asymptotic consistency guarantees, such as excess error bounds or H-consistency bounds?
4. The low-noise condition mentioned in Line 228 can be considered as the multi-class version of Massart's noise condition. Is it possible to extend the analysis to encompass the broader Tsybakov noise condition?
5. Could the authors provide further elaboration on how to choose an embedding in practice, based on the theoretical results?
6. The results in the paper apply to multi-class classification. Is there any possibility of generalizing them to other scenarios, such as ranking?
7. The use of $\mathcal{H}$ to denote the hallucination region in line 164 could lead to confusion, as $\mathcal{H}$ is commonly associated with the hypothesis set. Is there an alternative notation that could be used to avoid this ambiguity?
8. The notion of partial consistency is mentioned in both the abstract and introduction, yet it is not formally defined in the main text. It seems that partial consistency suggests that surrogate reports may not correspond to a single distribution. A formal definition would likely enhance reader comprehension.
9. The approach of using multiple problem instances and aggregating their outputs is intriguing. However, might this method be vulnerable to data imbalances across different problem instances? I am eager to hear the authors' opinions on this.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We briefly respond to individual questions in order:
Q1: In general, finding the $\min_{\alpha \in [0, \frac 1 2)} \alpha$ that (almost*) guarantees consistency is possible through property elicitation, though it is laborious.
*Working directly with consistency conditions is often much more difficult than elicitation, and while elicitation implies consistency, the converse is not always true, but usually holds as long as the argmin is well-defined.
Q2: This work's analysis is limited to Bregman losses (Definition 6), therefore excluding the hinge loss. Understandably, we would like to generalize our analysis in future work; however, the structure of Bregman losses permits us to choose any convex polytope as the embedding structure, allowing our analysis to be rather general from this perspective.
Q3: This is a great question, whose answer requires some subtlety: first, because $\mathcal{H}$-consistency is not equivalent to $\mathcal{H}$-calibration without the distributional assumption that the Bayes optimal $f^* \in \mathcal{H}$ (Steinwart 2007 Section 2, under the name P-minimizability). Our proofs typically work through calibration, and extend under this assumption when $\mathcal{H}$ is the class of all measurable functions. This assumption is relatively common in the $\mathcal{H}$-consistency/calibration literature, and would permit some of our results and approaches to generalize in a relatively straightforward manner. However, there might be more intensive, but tighter, tradeoffs to be studied under the $\mathcal{H}$-consistency regime because of the interaction between assumptions on $\Pr[X,Y]$ (Bayes optimal in class) and $\Pr[Y \mid X = x]$ for all $x$ (calibration tradeoffs). In general, when the Bayes optimality assumption is not satisfied, an \emph{approximation gap} emerges that merits a different analysis from $\mathcal{H}$-calibration to $\mathcal{H}$-consistency, and our results would not immediately generalize.
The question on non-asymptotic analysis points to an exciting direction for future work. We expect such results could be obtained using techniques like those in Mao et al. (NeurIPS 2023), but the bounds would likely vary depending on the specific form of the embedding and the convex function $G$ defining the Bregman divergence.
Q4: We were unaware of the literature on Massart and Tsybakov noise while writing this work. We thank the reviewer for pointing this literature out and look forward to incorporating it into future editions of this paper.
Q5: Broadly, our two results (Corollary 7 and 8) provide examples of the tradeoffs between dimensionality and consistency that could be insightful in practice. However, this question does require further exploration across different problem domains.
Q6: We explored partial ranking in some of our preliminary work; however, the geometry of the level sets became quite intricate, preventing us from establishing a general result.
Q7: Thank you for this insight, we will explore a different option to denote hallucinations.
Q8: Currently, partial consistency is implicit via Definition 2, but this is a good point. We plan to write this out explicitly in the body of the work following Definition 2.
Q9: Since this is an asymptotic result, we didn't address questions about sample complexity guarantees. Answering this question would require detailed exploration and model assumptions; however, as a starting point, we expect that using a boosting model could help reduce the effects of data imbalance within the multiple-problem-instance approach.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response from the authors. I fully agree with their points and would like to extend my congratulations on their commendable work. | Summary: In this paper, the problem of constructing consistent multiclass surrogate losses for the 0-1 loss while reducing the dimension of the scoring function is studied. The concept of partial consistency, which dates back to the study of multiclass SVM, is a crucial part of this work. It is first shown that any loss whose scoring function has dimension less than (#classes - 1) will lead to severe misclassification. A low-noise assumption is then used to enable a trade-off between (partial) consistency and the scoring function's dimension. An attempt at recovering full consistency with several scoring functions of lower dimension is also made.
Strengths: 1. A clear trade-off between the strictness of the low-noise assumption and the scoring function's dimension is quantified.
2. The analysis of hallucinations is also enlightening, providing an interesting insight into why some well-suited models (with some losses) can make wrong predictions even with no real-world evidence.
3. The clear presentation and layout of the results make them easy for readers to understand.
Weaknesses: While the dimension of the predictor is reduced, and thus the computational cost of training can likely be reduced, the computational cost of the inference stage may increase compared with that of traditional (#classes)-dimensional scoring functions. Is there any remedy for this problem?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The cost of inference can be broken down into two parts: (a) the cost of a forward pass through a model, and (b) the cost of computing the link function. For (a), this is typically unchanged or even reduced by loss function design (reduced when prediction dimension is lowered). For (b), in the paper Structured Prediction with Projection Oracles by Mathieu Blondel, Blondel argues this is relatively cheap since the MAP link projects onto a convex set. Historically, the consistent loss literature has ignored this question of efficient linking and its connections to prediction dimension, and we agree this should be explored further in future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I agree that the increased computational cost can be acceptable. However, given the long-term deployment of large-scale models, the rising inference costs might quickly outweigh the reduction in training costs. It would be beneficial for the authors to highlight this concern in the future work section. | Rebuttal 1:
Rebuttal: Thank you to all of the reviewers for their feedback. We sincerely appreciate the time and effort you put into these reviews. We see them as very constructive and believe they will help us improve this work. We have written a response for each reviewer addressing individual concerns and questions regarding this work. Once again, thank you for your reviews. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
From Unstructured Data to In-Context Learning: Exploring What Tasks Can Be Learned and When | Accept (poster) | Summary: The paper presents three theoretical analyses related to ICL. Section 2 shows that we can use CBOW to do the (country)-(capital) kind of ICL. Section 3 shows that positional embeddings, multiple layers in an autoregressive LM, and blocked noise structures are important for ICL. Section 4 shows that ICL could fail when there are systematic and consistent mismatches between the training sequence and testing sequence.
Strengths: I think this paper is easy to follow and most explanations are clear. (One minor suggestion: it would be clearer to also illustrate the correct answer of each prompt and provide brief explanations, such as noting that the prompt in Section 3 tries to repeat the first letter of a word.) I chose "fair" for the presentation rating because I feel that the paper oversells its contributions in the title and abstract.
All the claims are supported by both strong theoretical conclusions and empirical simulations. The theoretical contributions are novel to me, but I am not a theoretical researcher. Since the situations/preconditions of the claims are extremely simple, I think their significance is not high for practitioners, but the contributions might be significant for theoretical researchers and might inspire follow-up work.
Weaknesses: I think the main weakness of this paper is the mismatch between the scope it seems to cover and its actual scope. The title and abstract suggest that this paper tries to study why ICL works well given the unstructured training data in practice, but what the paper actually does is thoroughly study three toy situations.
I understand that we often have to simplify the situations in order to get strong theoretical conclusions. I also think that, at least to me, it is difficult to derive those theoretical results even in such simplified situations, and all the toy situations are relevant to ICL. Nevertheless, I think these situations are not very representative of most practical ICL settings. After all, most ICL goes beyond relying on the co-occurrence statistics of sentences like CBOW, finding the first letter of a word, and repeating some words in the context.
I understand that nowadays, a paper often needs to oversell its scope in the title and abstract to get attention. Hence, although I suggest that the authors revise the main storyline to reduce the overselling, I am fine with the current title and abstract if this paper is accepted in the end. I am also not a researcher who studies theory, so I do not know how significant or novel these theoretical results are. Therefore, I would like to let other researchers with a better theoretical background rate this paper.
Technical Quality: 4
Clarity: 3
Questions for Authors: I have no specific question to the authors, so I think the rebuttal won't change my opinion. I won't lower my score if the authors choose to skip the rebuttal to my comments.
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The limitation discussion in Appendix K is fair.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for reviewing our paper. We are delighted that you found it both easy to follow and supported by strong theoretical conclusions and empirical simulations. Below, we address your comments.
> It would be more clear to also illustrate the correct answer of each prompt and provide some brief explanations such as the prompt in section 3 tries to repeat the first letter of a word.
We appreciate your suggestion. We have modified Figure 1 to include a brief description of each ICL task and the correct answer for each prompt. Please refer to the PDF file in "Author Rebuttal".
> I think the main weakness of this paper is the mismatch between scope it seems to cover and its actual scope. The title and abstract suggests that this paper tries to study why the ICL works well given the unstructured training data in practice, but what the paper actually did is thoroughly studying 3 toy situations.
> After all, most ICL is beyond just relying on the co-occurrence statistics of the sentences like CBOW, finding the first letter of the word, and repeating some words in the context.
> I suggest that the authors can revise the main storyline to reduce the overselling.
Thanks for your comments. We have revised the title of the paper to “In-Context Learning from Training on Unstructured Data: The Role of Co-Occurrence, Positional Information, and Training Data Structure.”
Additionally, we have updated the abstract to better reflect that this paper identifies three components necessary for ICL to occur from training on unstructured data and provides both theoretical and empirical justifications for each component. For instance, we included the sentence “To this end, we thoroughly examined three ICL tasks and identified three components that are crucial for ICL to occur from training on unstructured data: co-occurrence, positional information, and training data structure.” We hope these changes will alleviate your concern regarding the discrepancy between the paper’s title and abstract and its actual scope.
Regarding the content, we would like to emphasize that we did not claim that this paper provides a complete explanation for why ICL works well with unstructured training data. While we fully agree that ICL encompasses more scenarios than those covered in the paper—as noted in the Limitations section of the original submission—our goal is to offer insights into some components of unstructured data that are crucial for ICL to occur.
Furthermore, although the ICL tasks discussed in this paper are simple, they are firmly grounded in prior research. For example, ICL tasks involving known pairings—such as word translations and country-capital city pairs—have been analyzed in studies by Brown et al. (2020), Todd et al. (2024), and others. Similarly, ICL tasks like word-first letter pairings have been explored in works by Xu et al. (2024), Chen et al. (2024), and others. We believe that thoroughly examining ICL in these simplified settings offers an interesting perspective on ICL, particularly regarding the importance of co-occurrence, positional information, and the structure of the training data.
**References**
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Neural Information Processing Systems.
Chen, Y., Zhao, C., Yu, Z., McKeown, K., & He, H. (2024). Parallel structures in pre-training data yield in-context learning. arXiv preprint arXiv:2402.12530.
Todd, E., Li, M., Sharma, A., Mueller, A., Wallace, B. C., & Bau, D. (2024). Function vectors in large language models. International Conference on Learning Representations.
Xu, Z., Shi, Z., & Liang, Y. (2024). Do large language models have compositional ability? An investigation into limitations and scalability. ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation Models.
---
Rebuttal 2:
Title: Thank you for willing to revise the title and abstract
Comment: I raise my presentation score from 2 to 3 and overall score from 5 to 6.
Nevertheless, I keep my confidence at 2 because I think the main contribution of this paper is its theoretical analyses, but I do not have sufficient background to judge this main contribution. | Summary: This paper studies the emergence of in-context learning (ICL) in both CBOW (Mikolov et al., 2013) and Transformer models. The focus is on simple synthetic settings that can be studied both theoretically and through synthetic experiments with small models. The paper identifies co-occurrence as a key ingredient for ICL to emerge in CBOW. Then the paper considers how positional information is critical for a Transformer (or any) model to identify certain order-dependent patterns. Finally, the paper presents two synthetic scenarios involving repetition in which ICL fails with a simple model.
Strengths: - The paper begins by identifying synthetic scenarios in which co-occurrence within a sentence is sufficient for a continuous bag-of-words (CBOW) model to be able to perform ICL. The paper proves two theorems identifying when co-occurrence statistics are sufficient to ensure that CBOW could perform ICL.
- The paper then proves that positional information is required to perform certain synthetic tasks related to the ordering of tokens.
- Finally, the paper identifies two synthetic settings in which one might expect ICL to work.
- Synthetic experiments support each of the above claims.
Weaknesses: - The paper states that "ICL is achievable by only modeling co-occurrence information using [CBOW]". However, this seems to miss the generality with which the term ICL is used. That is, ICL is commonly used for generation tasks such as full-sentence machine translation (not just the simple token-level translation examples in this paper). So to say that "ICL is achievable" seems like a misuse of the terminology. Without a more careful definition of ICL, this statement is invalid.
- After showing that Llama 2 is unable to use ICL to translate the English words "soon" and "main" to Indonesian, the paper claims that "these models should be equally likely to produce the correct answer for any given [word], irrespective of its relevance to the in-context examples. However, our experiment demonstrates that this is not the case". This is a huge leap for a poorly designed experiment. Llama 2 was trained on 98.08% English data. The amount of Indonesian language data may have been minuscule. As such, co-occurrence may offer an explanation for the result, but adjacency might be equally informative. To speak of co-occurrence without any discussion of adjacency seems a bit odd here. This same issue appears later in the paper's claim "This suggests ICL may arise from co-occurrence information", whereas a claim that it is informed by co-occurrence might be more apt.
- It is not clear to this reader why one would expect the setting in Section 4.1 to succeed via ICL in the first place. For example, we also wouldn't expect these settings to succeed if they were presented to a supervised learner, because of the mismatch between the training examples and the prompt example.
- The paper relegates the entire 2.5 page related work section to the appendix. It would be better to include more in the main paper; at present only lines 25-32 in the Intro address prior work making it difficult to position this paper early on.
Technical Quality: 3
Clarity: 2
Questions for Authors: - In line 258, the paper claims that "each token in V should be present as the first token in both the training and test sets." But shouldn't we be interested in whether this is really required in the largest of LLMs? Is there any way to connect this result back to larger models?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for reviewing our paper. Below, we address your comments.
> The paper states that "ICL is achievable by only modeling co-occurrence information using CBOW". However, this seems to miss the generality with which the term ICL is used. … So to say that "ICL is achievable" seems like a misuse of the terminology. Without a more careful definition of ICL, this statement is invalid.
We appreciate your comment. We have updated the paper to clarify that this statement applies to ICL tasks with known input-output pairings, like country-capital and word translations.
> The paper claims that "these models should be equally likely to produce the correct answer for any given [word], irrespective of its relevance to the in-context examples. However, our experiment demonstrates that this is not the case". This is a huge leap for a poorly designed experiment. Llama 2 was trained on 98.08% English data. The amount of Indonesian language data may have been miniscule.
We thank you for your comment. Although LLaMA 2 and other LLMs are mostly trained on English corpora, we believe this does not invalidate our experiment, as the words involved in the experiments are common in both languages. If the models learn the English-to-Indonesian mapping, they should translate correctly regardless of context.
However, it is worth noting that the experiments in Section 2.4 do not involve any rare languages, and the same conclusion applies.
> It is not clear to this reader why one would expect the setting in Section 4.1 to succeed via ICL in the first place.
Thanks for pointing this out. As per your suggestion, we have revised the wording in Section 4.1 to clarify that the failure of ICL in this scenario is not surprising and is instead related to the importance of tasks present in the training data, similar to findings of Raventós et al. (2023) and Yadlowsky et al. (2023).
The original phrasing that the setting in Section 4.1 should succeed via ICL was intended to differentiate ICL in LLMs from simple pattern recognition. In Section 4.1, the pre-training data and test prompts consist of sentences with different repeating patterns. A model (or human) that learns to recognize these patterns is expected to successfully perform ICL on the test prompts, even when the pattern is different. Similarly, in Section 4.2, a model (or human) that identifies consistent $(a_i, b_i)$ pairs at the start and end of each pre-training sentence should successfully perform ICL on the test prompts that contain in-context examples of the form $(a_i, b_i)$.
> It would be better to include more (related work) in the main paper; at present only lines 25-32 in the Intro address prior work making it difficult to position this paper early on.
We thank you for the comment. We have expanded the Introduction section to better relate our work to existing literature and clarify its position in the ICL research landscape. Specifically, we highlighted the following comparisons:
- Numerous studies connected ICL with classical methods like gradient descent (e.g., Akyürek et al. (2022)), Bayesian inference (e.g., Xie et al. (2022)), and Newton’s method (e.g., Fu et al. (2023)). In contrast, our work links ICL to the continuous bag of words (CBOW) model, demonstrating that ICL involving known input-output pairings can be enabled through learning CBOW co-occurrence patterns.
- While several studies examined the pre-training aspects of ICL, such as data distribution (e.g., Chan et al. (2022)) and task diversity (e.g., Raventós et al. (2023)), our work emphasizes the importance of co-occurrence, positional information, and training data structure for ICL.
- Other research explored ICL in specific data-generating processes like discrete functions (Bhattamishra et al. (2023)) and autoregressive processes (Sander et al. (2024)). Our work focuses on data characterized by input-output pairs and repeating token patterns.
> In line 258, the paper claims that "each token in V should be present as the first token in both the training and test sets." But shouldn't we be interested in whether this is really required in the largest of LLMs? Is there any way to connect this result back to larger models?
Thanks to your comment, we repeated the experiment with larger models by varying embedding dimensions (10, 50, 100, or 200) and transformer layers (1, 5, 10, or 20). We found that test accuracy remained zero when each token in $V$ appeared as the first token in exactly one of the training and test sets, regardless of model size. In practice, this condition is likely met due to the vast size of LLMs' pre-training data.
**References**
Akyürek, E. et al. (2022). What learning algorithm is in-context learning? Investigations with linear models. ICLR
Bhattamishra, S. et al. (2023). Understanding in-context learning in transformers and LLMs by learning to learn discrete functions. ICLR
Chan, S. et al. (2022). Data distributional properties drive emergent in-context learning in transformers. NeurIPS
Fu, D. et al. (2023). Transformers learn higher-order optimization methods for in-context learning: A study with linear models. NeurIPS M3L
Raventós, A. et al. (2024). Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression. NeurIPS
Sander, M. E. et al. (2024). How do transformers perform in-context autoregressive learning? ICML
Xie, S. M. et al. (2022). An explanation of in-context learning as implicit Bayesian inference. ICLR
Yadlowsky, S. et al. (2023). Pretraining data mixtures enable narrow model selection capabilities in transformer models. arXiv
---
Rebuttal 2:
Title: Reply to rebuttal
Comment: Thanks for the thorough response. You've mostly addressed my concerns, but one of the small (but still important!) odd claims you make is not, and that is raised again below -- hopefully more clearly this time.
> We have updated the paper to clarify that this statement applies to ICL tasks with known input-output pairings
Great. Qualifying definitions of ICL seems very important.
> the experiments in Section 2.4 do not involve any rare languages, and the same conclusion applies.
Sorry -- I think you latched on to my rare languages point and missed the broader question about poor experimental design. The paper still claims in lines 75-77 that "If ICL stems from the ability of LLMs to recognize consistent mappings in test prompts, these models should be equally likely to produce the correct answer for any given [word], irrespective of its relevance to the in-context examples. However, our experiment demonstrates that this is not the case".
But why would anyone expect that a model should be equally likely to produce a correct answer for any given word irrespective of its relevance to the in-context examples? That is, your if/then statement does not seem like something most researchers would agree with. For example, decades of work on NLP and machine learning have shown that when learning through parameter estimation (instead of ICL), the relevance of the training examples to the test examples is of critical importance (i.e. the entire NLP literature on domain adaptation) (see [1] and [2]). Similarly, recent work on retrieval-augmented ICL and the like has shown that large gains are possible when one retrieves a collection of in-context examples that are relevant (usually in a nearest-neighbor embedding sense) to the test instance (see [3] and [4]). In other words, the dominant view is that when a model is learning (either via fine-tuning or ICL), whether the test instance is relevant to the training examples is a critical factor. (Note that the citations here are just ones I could quickly find for MT. But I imagine you could find similar papers supporting this for, say, classification tasks closer to your experiment in Section 2.4 as well.)
In other words, it seems reasonable that one should counter your statement "our experiment demonstrates this is not the case" by pointing out that the LLM could simply be learning to recognize consistent mappings in test prompts /and/ that the mappings it infers may be domain specific. That would better align with the field's current understanding of how learning, be it through parameter estimation or ICL, happens in most models (LLMs and other varieties too).
This is a relatively minor issue with the paper, but the claim seems so outlandish that I'm returning back to it here.
[1] "A Survey of Domain Adaptation for Neural Machine Translation"
[2] "Domain Adaptation and Multi-Domain Adaptation for Neural Machine Translation: A Survey"
[3] "Towards Robust In-Context Learning for Machine Translation with Large Language Models"
[4] "Efficient Machine Translation Domain Adaptation"
> We have expanded the Introduction section to better relate our work to existing literature and clarify its position in the ICL research landscape.
Great!
> In practice, this condition is likely met due to the vast size of LLMs' pre-training data.
That seems reasonable.
-----
Under the assumption that lines 75-77 can be fixed easily, I'm bumping my score up to a 5.
---
Rebuttal 3:
Comment: Thank you for your response! We completely agree with your point and will make the necessary revisions to lines 75-77. | Summary: The paper studies the emergence of ICL using a synthetic setting. Particularly, it focuses on the importance of concurrence statistics to ICL, and shows that under some simplified conditions, a CBOW-styled model is proven to complete the correct completion for an ICL example. The paper additionally proves the importance of position encodings in the studied setting, showing that when the ICL task is inherently task dependent, position encodings is necessary for good performance.
Strengths: The paper studies an important problem. The approach---reconstructing LM behavior in much "shallower" models---is intriguing and can be applied to additional problems concerning LMs. The technical claims are well presented and the paper is overall very readable.
Weaknesses: The main weakness is that the paper studies a very synthetic setting. I understand some simplifications are needed for the derivation of theoretical results, and this is OK. But, for example, it would be interesting to try deriving results on cases where the input consists of valid grammatical sentences, rather than a concatenation of tuples. If that is not possible, the paper should clearly state the disparity between "real" ICL settings and this setting. While LMs can be presented with tuples at inference time, they are usually not trained on such tuples, but rather on free-form language.
Technical Quality: 4
Clarity: 3
Questions for Authors: The experimental results use cross entropy loss rather than the squared loss used in the theory. Why?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. We are pleased that you considered the problem we addressed important and found the paper well-presented and very readable. Below, we address your comments.
> It would be interesting to try deriving results on cases where the input consists of valid grammatical sentences, rather than a concatenation of tuples. If that is not possible, the paper should clearly state the disparity between "real" ICL setting and this setting. While LMs can be presented with tuples in inference time, they are usually not trained on such tuples, but rather on free form language.
We appreciate your comment. We have expanded the Limitations section to clearly emphasize the distinctions between our setting and the general ICL setting. That being said, it is worth noting that the results related to co-occurrence are applicable to valid grammatical sentences, as the co-occurring pairs can appear anywhere within the sentence (e.g., "Beijing is the capital of China," "the city of Beijing is located in China").
> The experimental results use cross entropy loss rather than the squared loss used in the theory. Why?
We thank you for this comment. While the cross-entropy loss is frequently used in practice, it does not yield a unique optimal set of parameters (due to the translation invariance of the softmax function), and obtaining closed-form expressions for the minimizers can be difficult. Therefore, we used the squared loss to make theoretical analyses more manageable. This simplification has been adopted in other theoretical works, such as in Li et al. (2023).
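The translation invariance of the softmax mentioned above, which is why cross-entropy does not pin down a unique set of parameters, can be checked with a few lines of code. This is a generic numerical illustration, not code from the paper:

```python
import math

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

logits = [1.0, 2.0, 3.0]
shifted = [v + 5.0 for v in logits]
# Adding the same constant to every logit leaves the softmax output
# unchanged, so distinct parameter sets can yield identical losses.
assert all(abs(a - b) < 1e-9 for a, b in zip(softmax(logits), softmax(shifted)))
```

Under the squared loss this degeneracy disappears, which is what makes closed-form minimizers tractable in the theory.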
**References**
Li, Y., Li, Y., & Risteski, A. (2023). How do transformers learn topic structure: Towards a mechanistic understanding. International Conference on Machine Learning.
---
Rebuttal Comment 1.1:
Title: Response

Comment: Thanks for your response.
Please note I asked why the squared loss was not *also* used in the experiments.
---
Rebuttal 2:
Comment: Thanks for the clarification.
We confirmed that the experimental results with the squared loss align with the theoretical findings. However, we did not include it in the paper because the squared loss is rarely used in practice.
Concretely, in the context of Theorem 1, an accuracy of $1$ is achieved when $(p_0, p_1, p_2) = (0, 1, 0)$ or $(0, 0, 1)$ under the squared loss in the clean scenario, consistent with our theoretical results.
The full results with the squared loss are as follows:
- In the clean scenario, an accuracy of $1$ is achieved for each of the 6 $(p_0, p_1, p_2)$ tuples.
- In the corrupted scenario, an accuracy of $0$ is achieved when $(p_0, p_1, p_2) = (0, 1, 0)$, while an accuracy of $1$ is achieved for the remaining 5 tuples.
Under the setting of Theorem 2, the balanced accuracy when $(p_0, p_1, p_2) = (0, 0, 1)$ is $1$ under the squared loss, also confirming our theoretical results.
---
Rebuttal Comment 2.1:
Title: Response
Comment: Thanks for the clarification. I maintain my positive assessment. | Summary: The paper investigates the emergence of ICL from training on unstructured data. It explores two types of ICL tasks: the first involves input-output pairings that frequently co-occur within sentences, and the second comprises recognizable patterns that do not commonly co-occur. The authors demonstrate that the first task can be addressed by modeling co-occurrence information, and highlight the importance of positional information and blocked noise structures through the second task. Additionally, the paper discusses scenarios where ICL fails. Both theoretical and experimental evidence are provided in the paper.
Strengths: - It enhances understanding of how the structure of training data influences the emergence of ICL capabilities
- The paper provides a mix of theoretical proofs and empirical validations to support its claims
Weaknesses: - There is a lack of experiment details in the paper, such as the number of training sentences used, the frequency of each input-output pair's repetitions within training sentences, and the methodology for generating training and evaluation data.
- The scope of the experiments is limited, using small datasets and simplistic model architectures. Moreover, there is an absence of real-world data.
- There is uncertainty about whether the findings would scale well to complex real-world data, larger models and higher embedding dimensions.
Technical Quality: 3
Clarity: 2
Questions for Authors: Can you add examples of training sentences and prompts for experiments?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. We are pleased that you found our work valuable in enhancing the understanding of ICL on unstructured training data, and our theoretical and empirical results well-supported. Below, we address your comments.
> lack of experiment details in the paper, ... number of training sentences used, the frequency of each input-output pair's repetitions ... methodology for generating training and evaluation data.
> Can you add examples of training sentences and prompts for experiments?
We appreciate your comment. We have included more details about the experiments in the paper to strengthen our arguments. For convenience, we summarized them below:
- For Table 1 exps, training data consists of 50K sentences. In the clean version, sentences are generated uniformly as described in lines 122-124. In the corrupted version, sentences are similarly generated, but each $(c_i, d_i)$ pair is replaced by $(c_i, r_j)$ or $(d_i, r_j)$ with a probability of $1/4$ each (lines 124-126). Test sentences are generated according to the setup in Theorem 1.
- Clean ex:
- Training: $c_1 d_1 r_1 r_2 r_3 r_4 r_5 r_6$ or $r_1 r_2 r_3 r_4 r_5 r_6 r_7 r_8$
- Prompt: $c_1 d_1 c_2 d_2 c_3 d_3 c_4 \dots$
- Corrupted ex:
- Training: $c_1 r_1 r_2 r_3 r_4 r_5 r_6 r_7$ or $c_1 d_1 c_2 r_1 r_2 r_3 r_4 r_5$
- Prompt: $c_1 d_1 c_2 d_2 c_3 d_3 c_4 \dots$
- For Table 2 exps, training data consists of 50K sentences. In the clean version, sentences are generated uniformly as described in lines 166-169. In the imbalanced and extreme versions, the 60 other words are divided into three categories: 20 for $cd$ sentences ($rcd_{\cdot}$), 20 for $ce$ sentences ($rce_{\cdot}$), and 20 for both types ($r_{\cdot}$). In the imbalanced version, $cd$ ($ce$) sentences are 4 times more likely to sample a $cd$ ($ce$) word than a $ce$ ($cd$) word. In the extreme version, $cd$ ($ce$) sentences cannot contain any $ce$ ($cd$) words. Test sentences are generated according to the setup in Theorem 2.
- Clean ex:
- Training: $c_1 d_1 r_1 r_2 r_3 r_4 r_5 r_6$ or $c_1 e_1 r_1 r_2 r_3 r_4 r_5 r_6$
- Prompt: $c_1 d_1 c_2 d_2 c_3 d_3 c_4 \dots$ or $c_1 e_1 c_2 e_2 c_3 e_3 c_4 \dots$
- Imbalanced ex:
- Training: $c_1 d_1 rcd_1 rcd_2 rcd_3 rce_4 r_5 r_6$ or $c_1 e_1 rcd_1 r_2 rce_3 rce_4 rce_5 r_6$
- Prompt: $c_1 d_1 c_2 d_2 c_3 d_3 c_4 \dots$ or $c_1 e_1 c_2 e_2 c_3 e_3 c_4 \dots$
- Extreme ex:
- Training: $c_1 d_1 rcd_1 rcd_2 rcd_3 r_4 r_5 r_6$ or $c_1 e_1 r_1 r_2 r_3 rce_4 rce_5 r_6$
- Prompt: $c_1 d_1 c_2 d_2 c_3 d_3 c_4 \dots$ or $c_1 e_1 c_2 e_2 c_3 e_3 c_4 \dots$
- Table 3 exps follow the setup of experiments in Table 2, except that the pairs are now of the form $(c_i, d_i)$ and $(e_i, f_i)$ instead of $(c_i, d_i)$ and $(c_i, e_i)$.
- For Section 2.5 exps, below is an ex. for each sentence type in Appendix D:
- Paramaribo is the vibrant heart of Suriname.
- Gabon (GAB) protects its diverse rainforests and wildlife.
- The banking sector is central to Liechtenstein's prosperity.
- Every country has its unique cultural identity and heritage.
- The city of Dushanbe reflects Tajikistan's vibrant spirit. Roseau is the cultural tapestry of Dominica.
- Mayotte (MAY) features lush landscapes and peaks. Turkmenistan (TKM) features the fiery Darvaza Crater.
The ICL prompts follow the form used in Section 2.4, with 1-5 examples.
- For Table 4 exps, training and test data consist of all sentences in the form $abca$, where $a$, $b$, and $c$ are distinct. Each test sentence is different from any training sentence. In the first scenario (both), the first tokens of the training sentences cover the entire vocabulary. In the second scenario (either), each token can be the first token in either the training or test data, but not both (lines 255-256).
- For Table 5 exps, training data consists of 50K sentences generated uniformly as detailed in Section 3.1 (lines 289-296). The ICL prompt formats are also described in Section 3.1.
- For Table 6 exps, training data consists of 50K sentences. In the clean scenario, training data are of the form $abcadefd$ and $abcbdefe$, with ICL prompts as $\underline{abcadef} \dots$ and $\underline{abcbdef} \dots$. In the block-noisy scenario, training data include sequences like $n_1 n_2 n_3 n_4 abcadefd$ and $abcb n_1 n_2 n_3 n_4 defe$, with ICL prompts as $\underline{abcadefdghi} \dots$ and $\underline{abcbdefeghi} \dots$.
- For Table 7 exps, training data consists of 50K sentences generated uniformly according to the processes in Sections 4.1 and 4.2. The ICL prompt formats are also described in the same subsections.
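For concreteness, the clean/corrupted sentence generation illustrated by the examples above could be sketched as follows. This is a hedged illustration in our own words: the vocabulary sizes and the 50/50 mix of "pair" vs. "all-other" sentences are assumptions, not the authors' actual code.

```python
import random

# Hypothetical sketch of the clean / corrupted training-sentence generation
# illustrated by the examples above (Table 1 setup). Vocabulary sizes and
# the 50/50 mix of "pair" vs "all-other" sentences are our assumptions.
PAIRS = [(f"c{i}", f"d{i}") for i in range(1, 11)]   # (c_i, d_i) pairs
OTHER = [f"r{i}" for i in range(1, 61)]              # "other" words

def make_sentence(length=8, corrupt=False):
    """Return one training sentence as a list of `length` tokens."""
    if random.random() < 0.5:                        # sentence containing a pair
        c, d = random.choice(PAIRS)
        u = random.random()
        if corrupt and u < 0.25:                     # (c_i, r_j) with prob 1/4
            pair = [c, random.choice(OTHER)]
        elif corrupt and u < 0.5:                    # (d_i, r_j) with prob 1/4
            pair = [d, random.choice(OTHER)]
        else:                                        # keep (c_i, d_i) intact
            pair = [c, d]
        return pair + random.choices(OTHER, k=length - 2)
    return random.choices(OTHER, k=length)           # all "other" words

train = [make_sentence() for _ in range(50_000)]     # clean training set
```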
> The scope of the experiments is limited, using small datasets and simplistic model architectures. … absence of real-world data.
> uncertainty about whether the findings would scale well to complex real-world data, larger models and higher embedding dimensions.
Thanks for your comment. We recognize these issues as shortcomings of our paper, as already noted in the Limitations section. However, we believe that our thorough analyses with small data sets and simple model architectures provide valuable insights into some components of unstructured data that are crucial for ICL. Also, even though we did not train LLMs on real-world data sets in our paper, we empirically validated our co-occurrence arguments by probing LLaMA 2, which was trained on real-world data sets (see Section 2.4).
Regarding larger models with higher embedding dimensions, our findings remain consistent with the theoretical conclusions. For example, even in larger models (embedding dimension up to 200 and number of layers up to 20), the test accuracies in Table 4 remain zero when each token in $V$ appears as the first token in exactly one of the training and test sets. Similarly, under the same models, the test accuracies in Table 7 also remain zero, in line with Theorems 5 and 6.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: I appreciate the authors' responses during the rebuttal. The experimental details are helpful for the comprehension of the work. I would like to maintain my original score as I am not very confident in my assessment over the theoretical contributions of this work. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for providing constructive and insightful reviews. Please find our response to each reviewer in the "Rebuttal" section. Also, the updated Figure 1 (as requested by Reviewer xShA) is attached here.
Pdf: /pdf/29af091604ae57951a34989ef4894d6558264083.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions | Accept (poster) | Summary: This paper provides two probabilistic convergence rates for Adam with generalized affine variance noise under smoothness and generalized smooth condition, respectively, which achieves comparable results to many prior results.
Strengths: Please see the above Summary.
Weaknesses: 1. I suggest that the authors provide detailed formulas for some notations, including $\textbf{g}_t, g(\textbf{x}), \nabla f(\textbf{x})$. Is $\nabla f(\textbf{x})$ the gradient in the form of an expectation?
2. In line 118, the reference [10] is cited repeatedly.
3. Section 5 is used to discuss the most related works and make comparisons with the main results in this paper. However, authors only discuss the most related works without any comparison with their main results.
4. As mentioned in 1., if $\nabla f(\textbf{x})$ is the gradient in the form of an expectation, the two results (Theorems 3.1 and 4.1) in this paper are not fully high probability, since $\frac{1}{T} \sum_{t=1}^T ||\nabla f(x_t)||^2$ is equivalent to $\frac{1}{T} \sum_{t=1}^T ||E_{z_i}[\nabla f(x_t,z_i)]||^2$ ($z_i$ denotes the training data, in my notation), which is smaller than $\frac{1}{T} \sum_{t=1}^T E_{z_i}[||\nabla f(x_t,z_i)||^2]$. In other words, results of the form $\frac{1}{T} \sum_{t=1}^T E_{z_i}[||\nabla f(x_t,z_i)||^2]$ directly imply the corresponding results of the form $\frac{1}{T} \sum_{t=1}^T ||E_{z_i}[\nabla f(x_t,z_i)]||^2$, to say nothing of the version with additional high probability. Therefore, from my perspective, high probability is not an advantage of this paper, but rather weakens $\frac{1}{T} \sum_{t=1}^T ||E_{z_i}[\nabla f(x_t,z_i)]||^2$.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the symbol $\xi$ in Equation (1)? It should be explained in the main text.
2. What are the meanings of “right parameter” mentioned in line 4 (Abstract) and “problem parameter” in line 19?
3. Although, in line 139, the authors state that “it’s easy to verify that (8) is strictly weaker than L-smoothness” and provide a concrete example $f(x)=x^4$, can a detailed proof be given to verify this argument?
4. In Table 1, authors make comparisons with some prior work. However, the forms of some results (such as [33, 40, 49]) are not in the same form as the average form $\frac{1}{T} \sum_{t=1}^T ||\nabla f(x_t)||^2$ in this paper. Therefore, are these comparisons appropriate?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The contribution of this paper is a bit weak. Although the authors weaken some assumptions, such as noise and smoothness, the form of their results is also weakened. Therefore, I am not sure the weakened assumptions are an advantage.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer a lot for the effort invested in our paper. Below are our responses to the major concerns.
**Response to Weaknesses 1-3: Thank you for the suggestions on the presentation. We will revise as follows accordingly.**
1. In Line 14, we will replace '$g_t$' with '$g_t = \frac{\partial f_\xi(x,\xi_t)} {\partial x}|_{x=x_t}$'.\
In Line 16, we will replace '$g(x)$ is unbiased' with '$g(x,\xi)$ is an unbiased estimate of $\nabla f(x)$ such that $E [g(x,\xi)|x] = \nabla f(x)$'. \
In Eq. (2), we will replace '$E[\|g(x) - \nabla f(x)\|_2^2]$' with '$E[\|g(x,\xi) - \nabla f(x)\|_2^2|x]$'. \
In general, $\nabla f(x)$ does not need to be of the expectation form, while for Eq. (1),
$$
\nabla f(x) = E \left[\frac{\partial f_{\xi}(x,\xi)}{\partial x}\mid x \right]
$$
2. In Line 118, we will replace '[22, 10, 53, 10, 38]' with '[22, 10, 53, 38]'.
3. To complement Table 1, we could add some detailed comparisons on convergence for Adam as follows. \
In Line 189, we will add 'In comparisons with these works, our hyper-parameter setup could be relatively simpler than [49], while our results do not need the assumption on bounded gradients from [18, 20]. Moreover, we study Adam with two corrective terms which were overlooked in these works.' \
In Line 203, we will add 'In comparisons, our hyper-parameter setup on $\beta_2$ is simpler than [40] while involving the natural corrective terms. Moreover, we consider a milder noise assumption than the sub-Gaussian noise assumption in [23], with different hyper-parameter setups on $\beta_1$ and $\beta_2$.'
**Response to Weakness 4: We clarify as follows.**
- **High-probability convergence can ensure expected convergence:** \
Let $\Sigma = \sqrt{T} \times \frac{1}{T}\sum_{t=1}^T||\nabla f(x_t)||^2.$
Our result shows that $$Prob\left(\Sigma \gtrsim { \log{1\over \delta}}\right) \leq \delta.$$
Then for some constants $c, C$,
$$ E[\Sigma] = \int_{0}^\infty Prob(\Sigma \geq y) dy \leq \int_{0}^\infty e^{-cy} dy \simeq C.$$
- *$\frac{1}{T}\sum_{t=1}^T E_{z_i}||\nabla f(x_t,z_i)||^2$ (you mentioned) is the average of **conditional** expectations, while $\frac{1}{T}\sum_{t=1}^T E||\nabla f(x_t)||^2$ is the average of **full/total** expectations.* \
Previous expected convergence results are on $\frac{1}{T}\sum_{t=1}^TE ||\nabla f(x_t)||^2$. We think that the form you mentioned could be meaningful for the future work. However, no results are given for $ \frac{1}{T}\sum_{t=1}^T E_{z_i}||\nabla f(x_t,z_i)||^2$, to our knowledge.
**Response to Questions 1-4**
1. $\xi$ is the random vector following a probability distribution $\mathcal{D}$. We will add this statement in Line 14 of the revision.
2. The 'right parameter' means the hyper-parameter setup such as (6) in Theorem 3.1. The 'problem parameter' here refers to the (generalized) smooth and noise level parameters. We will make this clearer in the revision.
3. Proof for '(8) weaker than $L$-smoothness': If $f$ is $L$-smooth, i.e., $|| \nabla f(x) - \nabla f(y)|| \leq L ||x-y||$ for all $x,y$, then $(8)$ is trivially satisfied for $L_0 = L$ and any $L_q>0$. \
Proof for '$f(x)=x^4$ satisfies (8)': First, $|f'(y)-f'(x)| = 4|y^3-x^3| = 4|y^2+xy+x^2||y-x|$. Observe that for any constant $L >0$, there always exist $x,y \in \mathbb{R}$ such that $4|y^2+xy+x^2 |> L$ and thus the $L$-smoothness condition does not hold. However, when restricting $|x-y| \le 1/L_q$, we could derive that
$$
4|y^2+xy+x^2| \le 6(y^2 + x^2) \le 6(2x^2+\frac{2}{L_q^2} + x^2) \le 18x^2+\frac{12}{L_q^2} \le \frac{18}{4^{\frac{2}{3}}} (4x^3)^{\frac{2}{3}} +\frac{12}{L_q^2},
$$
where we set $L_q = 18/4^{2/3}$, $q= 2/3$ and $L_0 = 12/L_q^2$, which leads to (8).
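As a quick numerical sanity check of this derivation (our illustrative snippet, not from the paper), one can verify the resulting bound with $q = 2/3$, $L_q = 18/4^{2/3}$ (so that $L_q \cdot 4^{2/3} = 18$, matching the $18x^2$ term above) and $L_0 = 12/L_q^2$:

```python
import random

# Illustrative numerical check of the (L0, Lq)-smoothness bound derived
# above for f(x) = x^4: |f'(y) - f'(x)| <= (L0 + Lq*|f'(x)|^q)|y - x|
# whenever |y - x| <= 1/Lq. The constants below make Lq * 4**q = 18.
q = 2 / 3
Lq = 18 / 4**q          # so Lq * (4|x|^3)^q covers the 18*x^2 term
L0 = 12 / Lq**2

random.seed(0)
for _ in range(10_000):
    x = random.uniform(-10.0, 10.0)
    y = x + random.uniform(-1 / Lq, 1 / Lq)   # restrict |y - x| <= 1/Lq
    lhs = abs(4 * y**3 - 4 * x**3)            # |f'(y) - f'(x)|
    rhs = (L0 + Lq * abs(4 * x**3) ** q) * abs(y - x)
    assert lhs <= rhs + 1e-9
```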
4. [33, 40, 49] study Adam with random-reshuffling samplings and their results are w.r.t. $\min_{k \in [T]}||\nabla f(x_{k,0})||$. Here $x_{k,0}$ denotes the output of the $k$-th epoch (i.e. $kn$-th iteration) of Adam, and $n$ is the number of training data. We will add this remark after Table 1 in the revision.
**Response to Limitations: We clarify that our high probability bounds are stronger than the expected bounds**, following from the 'Response to Weakness 4'. The brief introduction of our proof novelty could be seen in the global rebuttal.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses to my questions. I will raise my score to 6. However, I am still confused about the sentence "In general, $\nabla f(x)$ does not need to be of the expectation form, while for Eq. (1)". Why is Eq. (1) of the expectation form, but the authors say there is no need for the expectation form? And due to the implicit expectation in $\nabla f(x)$, the results of this paper are not fully high-probability. Is my understanding right?
---
Rebuttal 2:
Comment: We thank the reviewer a lot for the quick feedback and generosity in raising the score. As far as we can understand, the reviewer still has two questions:
- whether there is an expectation form in $\nabla f(x)$;
- the result is not full high probability.
We then answer as follows:
- **The loss function $f$ is not necessarily of the expected form in Eq. (1).** We indeed only require $f$ to be differentiable and smooth. We will clarify this in the revised version. In this sense, $\nabla f(x)$ also does not necessarily include an expectation.
- **$\\|\nabla f(x)\\|^2 \le \varepsilon$ implies that $x$ is near-stationary, which is a standard measure in non-convex smooth optimization.** Our high probability result shows that $\frac{1}{T}\sum_{t=1}^T \\|\nabla f(x_t)\\|^2\le \varepsilon$, indicating that there exists at least one near-stationary point $x_{\tau},\tau \in [T]$.
- As far as we could understand (please correct us if we are wrong), the fully high probability you mention refers to deriving a high probability bound for $\frac{1}{T}\sum_{t=1}^T \\|g(x_t,z_t)\\|^2$. We think that this could be meaningful. However, given the second point, the high probability bound related to $\frac{1}{T}\sum_{t=1}^T \\|\nabla f(x_t)\\|^2$ could imply a near-stationary point while the bound for $\frac{1}{T}\sum_{t=1}^T \\|g(x_t,z_t)\\|^2$ may not, which we will show through an example at the end.
- In addition, since $\frac{1}{T}\sum_{t=1}^T \\|\nabla f(x_t)\\|^2$ is a random variable determined entirely by $z_1,\cdots,z_{T}$, **our result is fully high probability over the random samples $z_1,\cdots,z_T$.** However, when Assumption (A2) holds, we could choose any random sample $z_t'$ that is independent of $z_t$ and obtain that $\nabla f(x_t) = \mathbb{E}_{z_t'}[g(x_t,z_t')]$. Hence, due to the additional expectation over $z_t'$, we agree that the obtained result is not fully high probability without restricting to $z_1,\cdots,z_T$.
We also present a simple example to help better understand the difference of two types of measure. Suppose that the loss function is $f(x)=x^4$, which does not include expectation in its gradient. We now consider the following SGD algorithm (with differences to Adam in step-size):
- Input $x_1$ and step-size $\eta > 0$;
- For $t = 1\cdots T:$
- Draw a random sample $z_t$ such that $\mathbb{E}[z_t \mid x_t]=0$ (e.g., a Gaussian white noise);
- Generate the stochastic gradient $g(x_t,z_t) = 4x_t^3+z_t$ (which is an unbiased estimator);
- Update the sequence as $x_{t+1} =x_{t} - \eta (4x_{t}^3 + z_t)$.
If we obtain that $\frac{1}{T}\sum_{t=1}^T (4x_{t}^3)^2 \le \varepsilon$ with high probability (as in our paper), which is fully high probability over $z_1,\cdots, z_{T}$, then we are able to deduce that at least one gradient $(4x_{\tau}^3)^2,\tau \in [T] $ is small enough and $x_{\tau}$ is near-stationary. However, if we obtain the fully high probability bound $\frac{1}{T}\sum_{t=1}^T (4x_{t}^3 + z_{t})^2 = \frac{1}{T}\sum_{t=1}^T [(4x_{t}^3)^2 + z_t^2 + 8x_t^3z_t]\le \varepsilon$, then we could not derive any stationary point from this inequality since the term $x_t^3 \cdot z_t$ may be non-zero.
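To make the contrast concrete, here is a small hedged simulation of this example; the step size, horizon, and Gaussian noise are our illustrative choices, not part of the original discussion.

```python
import random

# Hedged numerical illustration of the example above: SGD on f(x) = x^4
# with stochastic gradient g(x, z) = 4x^3 + z, where E[z | x] = 0.
random.seed(0)
x, eta, T = 1.0, 1e-3, 5_000
true_sq = stoch_sq = 0.0
for _ in range(T):
    z = random.gauss(0.0, 1.0)
    g_true = 4 * x**3              # exact gradient of x^4
    g = g_true + z                 # unbiased stochastic gradient
    true_sq += g_true**2
    stoch_sq += g**2
    x -= eta * g
# The average squared true gradient shrinks (so a near-stationary point
# exists along the trajectory), while the average squared stochastic
# gradient stays around Var(z) = 1 and certifies nothing.
print(true_sq / T, stoch_sq / T)
```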
We hope that the above answer and example could address your questions. If you have any further inquiries, we are delighted to discuss them with you.
Best regards,
Authors. | Summary: In this paper, the authors analyze the convergence of Adam under milder noise conditions (affline variance) and milder smoothness conditions (both $L$-smoothness and $(L_0,L_q)$-smoothness) and propose a $O(\text{polylog}(T)/\sqrt T)$ convergence rate.
Strengths: This paper analyses the convergence of Adam under milder smoothness conditions compared to the previous work. The result is relatively solid and convincing. The writing structure is also relatively clear.
Weaknesses: 1. As the authors noted in their paper, they did not provide numerical experiments. Although this is a theoretical paper focusing on the convergence analysis of Adam, some simple numerical experiments aligning with the results would make it more convincing.
2. This paper exhibits a slight lack of novelty: after checking the proof details, I found that the crucial techniques were mostly proposed in previous related works. However, this weakness is minor, especially for a theoretical paper, and as I claimed in the Strengths part, the result of this paper is solid.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I suggest the authors could recall the readers of the definitions in the proof part, since numerous variables are introduced for proof, like $\mathcal{G}_t$, $M$, $\hat M$, $a_t$, $b_t$ and so on. It's a little inconvenient to check the definition in the previous pages each time.
2. Since coordinate-wise calculations are commonly used in the proof, I suggest the authors could also consider demonstrating their results based on the $L_\infty$ smoothness condition, as discussed in [1]. Also, I wonder about the difference between $(L_0,L_q)$-smoothness and local smoothness.
3. For lemma B.2, I happened also to use this result before and I suggest the author cite [2], as I found this result in lemma A.2 of [2].
4. How do the authors use the Cauchy-Schwarz inequality in the third inequality of line 740? Is this simply derived from $ab \leq 1/4a^2 + b^2$ ? (Here $a, b$ are both scalars).
5. In formula (58), line 557, where the last $\sqrt{\log}$ term comes from?
6. What's the meaning of formulas (59) and (60) since $G^2 \sim O(\text{polylog}T)$ has been claimed in formula (7)
[1] Balles, Lukas, Fabian Pedregosa, and Nicolas Le Roux. "The geometry of sign gradient descent." arXiv preprint arXiv:2002.08056 (2020).
[2] Zou, Difan, et al. "Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization." The Eleventh International Conference on Learning Representations.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your valuable feedback and suggestions!
**Response to Weakness**
1. Indeed, numerical results on Adam with/without corrective terms could be found in [10]. We also perform a simple experiment in Table 1 (the attached PDF), which roughly aligns with the result.
2. Thanks a lot for your suggestions and understanding of our paper. For the proof novelty of this paper, please refer to the global rebuttal.
**Response to Question**
1. We will recall the notational definitions at the beginning of each proof part accordingly in the revision.
2. We appreciate the suggestion of the interesting topic of studying convergence under $L_{\infty}$-smoothness and the interesting work [1] you mentioned, which will be carefully cited and commented on in the revision. Due to the limited time and space, we are not able to prove convergence under $L_{\infty}$-smoothness, and we shall leave it as future work. \
The major difference between local smoothness and $(L_0,L_q)$-smoothness arises from the additional gradient norm in $(L_0,L_q)$-smoothness. If the gradient norm is globally bounded such that $||\nabla f(x)||\le G$, then $(L_0,L_q)$-smoothness implies local smoothness of $f$ as far as we understand, since following from $(L_0,L_q)$-smoothness: for any $x$ and $y \in B(x,1/L_q)$,
$$
||\nabla f(y) - \nabla f(x)|| \le (L_0 + L_q G^q)||y - x||.
$$
But in general case, $(L_0,L_q)$-smoothness could be considered weaker than local smoothness.
3. Thanks a lot for your reminder. We will add its citation in front of Lemma B.2 in our paper accordingly.
4. It's exactly derived from $ab \le a^2/4 + b^2$. We will change 'Cauchy-Schwarz inequality' as '$ab \le a^2/4 + b^2,\forall a,b>0$' in Line 740 of the revision.
5. Note that there is $\mathcal{G}_T$ in Line 556, and recalling the definition from Eq. (18) in Line 229, the additional $\log$ term comes from $\mathcal{M}_T = \sqrt{\log(eT/\delta)}$. We will make this clearer in the revision.
6. [Eq. (59), Eq. (60)] prove that when $\beta_2 = 1-c/T$, $\log (T/\beta_2^T)$ in (49) is smaller than $\mathcal{O}(\log T)$, and further prove $G^2 \sim \mathcal{O}(\text{poly}(\log T))$. In Line 562, we will revise as "Therefore, combining (59), (60) and (49), we could verify the order in (7)." Also, in Line 525, we will replace "as in Theorem 3.1" with "given by (49)".
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their clarifications. There might be some minor concerns about novelty and experimental results, but as I claimed in my review, I think it's a good paper, and I'm happy to retain both my rating and confidence scores. Moreover, I would like to mention that analyzing Adam from the $\ell_\infty$ norm perspective might be interesting, since the geometry of Adam might align with the $\ell_\infty$ norm better, as studied in [1, 2, 3].
[1] Kunstner, F., Chen, J., Lavington, J. W. and Schmidt, M. (2023). Noise is not the main factor behind the gap between sgd and adam on transformers, but sign descent might be. ICLR.
[2] Xie, S. and Li, Z. (2024). Implicit bias of adamw: $\ell_\infty$ norm constrained optimization. ICML.
[3] Zhang, C., Zou, D. and Cao, Y. (2024). The Implicit Bias of Adam on Separable Data. arXiv: 2406.10650.
---
Reply to Comment 1.1.1:
Title: Thank you for your positive feedback
Comment: We thank Reviewer 4Xir very much for the positive comment and prompt reply. We also thank Reviewer 4Xir for the constructive comments on investigating Adam with the $\ell_\infty$ norm, and we agree that it is important. We thank Reviewer 4Xir for the references, which we think are interesting and can further enhance the understanding of Adam. | Summary: This paper studies the high-probability convergence of Adam in the non-convex setting under relaxed assumptions. The authors consider a general noise condition that governs affine, sub-Gaussian, and bounded noise conditions. They also consider a generalized smoothness condition motivated by language model experiments. Under these assumptions, they obtain a convergence rate of $\text{poly}(\log(T/\delta))/\sqrt{T}$, where $T$ is the number of iterations and $\delta$ is the confidence level.
Strengths: 1. Their result look novel and significant. They have shown the high-probability convergence of Adam under relaxed conditions than all previous papers.
2. The proofs look correct.
3. The paper is well-written and results are clearly presented.
Weaknesses: 1. One major concern is, by choosing $\beta_2=1-1/T$, do the authors essentially reduce Adam to SGD with momentum, since this makes $v_t$ almost a constant? Btw, I think for [18] and [23] in Table 1, $\beta_1$ should be $1-1/\sqrt{T}$. Please also check other rows more carefully.
I will increase the score if this concern is addressed.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. The term “affine variance noise" is confusing to me. I think it should only refer to Equation (2), not Equation (3), as "variance" is defined as an expectation. If I understand correctly, the term in Line 3 refers to (3), which means the condition (A3) is actually stronger than (2), right?
2. Why in Table 1 you did not include [19] which you discussed in Section 5.1?
3. The rate in Theorem 4.1 is dimension dependent, whereas the rate in Theorem 3.1 is dimension free. Do you think it is something fundamental in the relaxed smoothness condition?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: See weaknesses and questions above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer a lot for the effort and valuable suggestions on our manuscript.
**Response to Weakness: We clarify as follows, and we will revise the presentation issue accordingly.**
- First, in the non-asymptotic analysis (see Table 1), which we follow, the total iteration number $T$ is treated as finite and fixed. Thereby, during the proof, the following two facts are essential: $\beta_2=1-1/T$ is a constant that is not equal to 1; $v_t$ is the exponential moving average of past gradients instead of a constant. Several of our key results, such as Lemma B.2, Lemma B.3, Lemma B.7, and Lemma B.12, are based on these premises. This configuration distinguishes our approach from merely reducing Adam to SGD with momentum, as the proof techniques applicable to SGD with momentum do not directly extend to Adam.
- Second, as we mention in Table 1, at least two works [10, 38] consider the same setup of $\beta_2$ as ours. In addition, [49] has indicated that a $\beta_2$ value close to one facilitates convergence, whereas a smaller $\beta_2$ may lead to divergence. This observation also supports the validity of our choice of $\beta_2$.
- We will revise the $\beta_1$ setup for the two references accordingly.
**Response to Questions**
1. Indeed, "affine variance noise" originally refers to the expected version in Eq. (2). In this paper, in order to derive a stronger high-probability convergence bound instead of an expected bound, we require the stronger almost-sure version in Eq. (3), which has been studied in, e.g., [2, 23]. We will add 'almost surely' in Line 3 to distinguish it from the expected affine variance noise.
2. Thanks a lot for the reminder of this citation, which will be included in Table 1 in the revision.
3. We omit the $d$ factor in the two upper bounds of the main theorems to make the results concise. Both results are dimension dependent; see the proof parts. Furthermore, in Theorem 3.1, if $C_0\sim \mathcal{O}(1/d)$, then $G^2 \sim \mathcal{O}(1)$ and the convergence bound is of order $\mathcal{O}(d)$ w.r.t. $d$. In Theorem 4.1, if $E_0\sim \mathcal{O}(1/d)$, then $\hat{H} \sim \mathcal{O}(1)$ and the convergence bound is of order $\mathcal{O}(d)$ w.r.t. $d$. In this sense, there is no fundamental difference between the standard and relaxed smoothness. We will add these remarks in the appendix of the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanations!
I understand that $v_t$ is not a constant for $\beta_2=1-1/T$, but an exponential moving average of past gradient squared. Say $v_t=w_0 g_0^2+\cdots+w_{t} g_t^2$. One can show that $w_0=(1-1/T)^{t}\ge 1/e=O(1)$, which means $v_t$ never forgets the initial gradients. I feel this may simplify the analysis in some way. It should be more interesting to extend the result to a smaller $\beta_2$ like $1-1/\sqrt{T}$. That being said, I agree that the analysis is already non-trivial compared to SGDM, and would like to increase my score.
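A one-line numerical check of this point (our illustrative snippet, not from the discussion): the decay factor $(1-1/T)^T$ multiplying the initial squared gradient in $v_T$ indeed stays bounded away from zero.

```python
# Illustrative check: with beta2 = 1 - 1/T, the decay factor on the very
# first squared gradient in v_T is (1 - 1/T)**T, which tends to 1/e and
# never vanishes, so v_t "never forgets" the initial gradients.
T = 10_000
beta2 = 1 - 1 / T
w0 = beta2**T
assert 0.36 < w0 < 0.37   # close to 1/e, approximately 0.3679
```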
---
Rebuttal 2:
Comment: Thank you so much for your reply and generosity in raising the score. We thank a lot for the valuable suggestion from the reviewer and agree that considering the smaller $\beta_2$ setup would be a quite interesting topic for the future investigation. We shall add a brief remark related to this point after our main results.
Best regards,
Authors. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their comments and suggestions! In the global rebuttal, we have:
- **summarization of proof novelty**
- **a simple experiment in the attached PDF as supplementary material for our main results.**
While our proof borrows some ideas from [42, 10, 14, 2, 38, 19] as we mention in Line 221, **our proof novelty** lies in the following points (and a few others):
- a new type of proxy step-size in Eq. (22) to break the entanglement of stochastic gradients and adaptive step-sizes in Eq. (21), and to handle the error brought by the affine variance noise in Line 256;
- a new decomposition to handle the mismatch between stochastic gradients and the momentum, please also refer to Eq. (19), Line 261 and Line 262;
- some new estimations to handle the corrective terms in Adam, which are seldom considered in previous works, please also refer to Lemma B.12;
- an induction argument to control the potential unbounded smooth parameter in the generalized smooth case, please also refer to Proposition C.6 and Proposition C.7.
Pdf: /pdf/3097cc094455911a22b9e98ff7527971f1c32e75.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MeshFormer : High-Quality Mesh Generation with 3D-Guided Reconstruction Model | Accept (oral) | Summary: This paper introduces MeshFormer, a sparse-view reconstruction model designed to generate high-quality 3D textured meshes from sparse RGB images and their corresponding normal maps. By leveraging voxel representation, 3D inductive biases, SDF loss, and normal information, the model shows comparable inference performance to concurrent methods, while the entire training process can be completed using only 8 GPUs within a week (concurrent methods typically require around 100 GPUs). Experimental results demonstrate the effectiveness of the design.
Strengths: 1. The authors provided a detailed explanation of the motivations behind the model designs (including the introduction of voxel representation, the introduction of 3D full (or sparse) convolution, and so on) and demonstrated the reasonableness of these choices.
2. Compared to baseline methods, this model is simpler to train and demonstrates better qualitative and quantitative results.
3. The ablation study demonstrates the effectiveness of normal input, SDF supervision, geometry enhancement, and other methods proposed in the paper.
Weaknesses: 1. Although the authors provide detailed textual descriptions in the method section, it would be better if more mathematical symbols and equations were used, which could explain the entire pipeline more clearly and unambiguously.
2. For reproducibility, the authors should provide more implementation details, including a more detailed model architecture, the values of hyperparameters (e.g., \lambda in the loss function), and other relevant information.
3. The authors don’t report the comparison of inference time and memory usage between the proposed model and the baseline models.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can the normal maps of the mesh be completely consistent with the normal maps predicted by the model after the post-processing algorithm?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, the authors addressed limitations, potential negative societal impact, and mitigation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## More mathematical symbols and equations
Thank you for pointing this out. We will follow your suggestion to include more mathematical symbols and equations in our revision when introducing the method.
## More implementation details
We will follow the suggestion to include more implementation details in our revision, such as the specifics of the network architecture and the weights of the loss terms.
## Inference time and memory usage
We followed your suggestion to include a comparison of inference time and memory usage in the rebuttal PDF (please refer to Table 3). We found that while our method generates meshes at the highest resolution (512^3, 8x larger than 256^3), we still maintain a competitive speed and memory footprint.
## Normal maps of the mesh (after post-processing) vs. normal maps predicted by the model
Unfortunately, the post-processing we used cannot guarantee that the mesh normals will be completely aligned with the predicted normal maps after processing. This is because the algorithm operates in local space and avoids large vertex movements. Additionally, the predicted normal maps may contain errors or conflicts, such as inconsistent neighboring normals, which cannot be perfectly matched. The adopted algorithm is an iterative numerical optimization method and does not compute an analytic solution.
However, we have quantitatively verified that the post-processing module significantly improves mesh normal consistency with the predicted normal map. For example, before post-processing, only 26.4% of mesh vertices had a normal angle error of less than 2 degrees; after post-processing, this number increased to 40.8%. For a 10-degree threshold, the ratio increased from 78.8% to 86.4%. For more details, please refer to Table 4 in the rebuttal PDF.
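To make the metric concrete, here is a minimal NumPy sketch (our illustration, not the authors' evaluation code) of the per-vertex normal angle error from which such thresholded percentages would be computed:

```python
import numpy as np

def normal_angle_error_deg(n_pred, n_mesh):
    """Angle (degrees) between corresponding unit normals, per vertex."""
    n_pred = n_pred / np.linalg.norm(n_pred, axis=-1, keepdims=True)
    n_mesh = n_mesh / np.linalg.norm(n_mesh, axis=-1, keepdims=True)
    cos = np.clip((n_pred * n_mesh).sum(axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Identical normals give 0 degrees; orthogonal normals give 90 degrees.
errs = normal_angle_error_deg(
    np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]),
    np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]),
)
# The reported percentages would then be, e.g., (errs < 2.0).mean() * 100.
```
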
We have also included qualitative examples to illustrate the importance of this post-processing module in recovering sharp geometric details and reducing noisy artifacts induced by the marching cubes algorithm. Please check out Figure 5 of the original paper.
---
Rebuttal Comment 1.1:
Title: Comments on Rebuttal
Comment: I am glad the authors agreed to include these discussions in the final revision and provided more experimental results in rebuttal. I will keep my positive score. | Summary: The paper proposes a high-quality feed-forward 3D object reconstruction method from sparse view RGB images. It uses an explicit voxel structure for better geometric inductive bias, auxiliary inputs such as 2D diffusion generated normal images and SDF representation for better geometric details, and an end-to-end trainable pipeline that eliminates the need for multi-stage refinement. The method gives high quality reconstruction results, especially in terms of fine-grained and smooth geometry.
Strengths: 1. Although the network architecture and 3D representations are more complicated than previous methods, they are end-to-end differentiable and alleviate the training burden of multi-stage refinement.
2. The idea of using 2D diffusion generated normal images as input to the reconstruction pipeline is interesting and insightful.
3. It is more computationally efficient to train (Line 73).
4. The qualitative results are impressive, especially the mesh normals.
Weaknesses: 1. In the original LRM, the only supervision signal needed is RGB images. The proposed method, however, needs access to the full 3D shape for supervising the occupancy. This is fine for hand-made 3D assets but might pose some difficulty when trying to scale to real datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Table 3 row (a) shows the impact of normal input. When you remove the normal input, do you also remove the normal output and the normal loss? I ask this because in section 3.3 you say learning from RGB to geometric details directly can be difficult, so it makes more sense to just remove the normal input but preserve normal supervision to compare.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. It requires 2D diffusion models to generate auxiliary inputs, which can drastically slow down the reconstruction speed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Geometry supervision for real-world training datasets
We agree that image supervision is easier to add when extending to real-world training datasets. However, it is not impossible to obtain corresponding depth maps and even meshes for real-world RGB images, such as through depth sensors or Structure from Motion (SfM). Given the depth map (and even meshes), we can still apply direct 3D geometry supervision. If full 3D shapes are not captured, we can also apply partial supervision to the visible views (only supervising the visible points) while generating the full shape. We acknowledge that this may require more advanced techniques and designs, and we leave it as a promising avenue for future work.
## Table 3, row (a):
Yes, for Table 3, row (a) of the paper, we remove the normal input but preserve the normal output and normal supervision.
## 2D diffusion models drastically slow down the reconstruction speed
We currently use the normal ControlNet of Zero123++ to generate the multi-view normal inputs. It tiles the multi-view RGB images as a single input condition and generates the tiled normal maps in a single diffusion process, which takes approximately 4.1 seconds on an H100 GPU. For the application of single-image to 3D, generating tiled multi-view RGB images takes about 3.6 seconds. The total time on the 2D diffusion model side is only around 7.7 seconds, which is still acceptable.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I keep my original rating. | Summary: In this work, the authors propose a sparse view reconstruction model that utilizes a set of images (with camera poses) and corresponding normal maps to produce a reconstructed textured mesh. The primary contribution lies in adopting voxel-based 3D representation and employing a network architecture that integrates both 3D convolution and attention layers. Moreover, direct geometry supervision (SDF loss) is applied during the training process, alongside rendering-based losses. Experimental results demonstrate that the generated 3D shapes achieve state-of-the-art performance when compared to existing works on the single-view to 3D task.
However, as highlighted in the weakness section, there are potential misclaims regarding the technical contributions. It is highly recommended to revise the manuscript to cite and discuss these related works. Despite this, I am currently inclined towards accepting the paper and would be happy to champion it if the aforementioned issues are addressed in the final version.
Strengths: - The writing is clear and easy to follow.
- The combination of SDF loss and rendering losses appears novel for training a feed-forward based network. Additionally, the ablation study in Table 3(b) clearly indicates that SDF supervision is crucial for achieving good geometry, as evidenced by the significant CD difference between (b) and (g).
- Although [33] has explored using normal maps for the reconstruction task, it seems new to employ normal maps as inputs and supervision for a feed-forward reconstruction network.
- Experimental results demonstrate state-of-the-art performance over existing baselines, as shown in Table 1 and Figure 3. Furthermore, it is illustrated that existing methods cannot achieve similar performance given the same computational resources (Table 2).
- The ablation study confirms that various components are essential for the final performance, including considering normal input and SDF supervision.
Weaknesses: Possibly Misclaimed Technical Novelties:
However, the current manuscript may contain several misclaims regarding its technical novelties.
One claimed novelty is the adoption of a 3D voxel representation. However, the use of 3D voxel-like volumes in reconstruction is not a new idea and has been well-explored in various works, including:
A. Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process, CVPR 2023
B. SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation, CVPR 2023
C. Locally Attentional SDF Diffusion for Controllable 3D Shape Generation, SIGGRAPH 2023
D. One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion, CVPR 2024
E. Make-A-Shape: a Ten-Million-scale 3D Shape Model, ICML 2024
Additionally, the use of convolution + transformer layers to process grid input seems to be standard procedure in 2D generation tasks, as seen in:
Diffusion Models Beat GANs on Image Synthesis, NeurIPS 2021
Similar architectures have also been widely adopted in some of the aforementioned 3D reconstruction works, such as [A, C, D, E].
Regarding image conditioning, the cross-attention with image patch features is also well-explored in various works mentioned above, such as [C, D, E].
Technical Quality: 3
Clarity: 4
Questions for Authors: Some suggestions:
- Considering the above existing and concurrent works (Weakness Section), it is difficult to be convinced that some of the proposed modules are novel. It is highly recommended to cite and discuss the differences with these prior works and adjust the claims accordingly.
- Although it is acknowledged in the limitation section that the reconstruction performance will be affected by the errors of 2D models, it is recommended to include this as one of ablation case in Table 3 to better visualize this limitation.
- Furthermore, as no real-world images have been tested within the proposed framework, it is advisable to avoid from using the term "open-world" (L384) to describe the current framework in order to prevent overclaims.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The main limitation is well described in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # No real-world images tested?
We would like to clarify that one of our main testing datasets, OmniObject3D, is a real-world scanned 3D dataset. In addition, we also include some qualitative examples with real-world input in our rebuttal PDF (see Fig. 3), where MeshFormer performs quite well.
The term 'open-world' means that MeshFormer differs from many previous methods (such as A, B, C listed by the reviewer). Those methods are trained on datasets with a limited number of object categories (e.g., tens of categories in ShapeNet) and cannot generalize to novel categories. Unlike those methods (e.g., 3D native diffusion), MeshFormer takes as input sparse-view images and normal maps generated by 2D diffusion models, and demonstrates much stronger generalizability. MeshFormer is thus not limited to the training 3D dataset and can handle arbitrary object categories.
# Ablation study of 2D model errors
We would like to clarify that, in Tab. 3 of the paper, we have analyzed the influence of errors from 2D normal models (rows f and g). Additionally, we provide some qualitative examples in Fig. 8.
For the effect of multi-view RGB, please compare Tab. 1, row ‘Ours’ (predicted RGB) and Tab. 3, row f (ground truth RGB) of the paper.
# Claims about technical novelties
Thank you for pointing this out. We will cite these prior works and discuss the differences in our revision. We fully understand your concerns and would like to address the potential misunderstanding in detail.
## Point 1
The reviewer summarized that the "primary contribution lies in adopting a voxel-based 3D representation and employing a network architecture …." **We respectfully disagree with this argument.** Our main claim is that by proposing a 3D-guided reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision, MeshFormer can significantly improve both mesh quality and training efficiency. Our main findings/contributions include:
- (a) Using normal images as additional input in feed-forward reconstruction models greatly enhances the prediction of geometric details.
- (b) Proposing to learn and output a 3D normal map, which can be used for further geometric detail enhancement.
- (c) Combining SDF loss and rendering losses in training feed-forward reconstruction models enables a unified single-stage training process. In contrast, concurrent works rely on complex multi-stage "NeRF-to-Mesh" training strategies to export high-quality meshes (e.g., MeshLRM, InstantMesh) and struggle to generate high-quality geometry.
- (d) Explicitly leveraging 3D native voxel representation, network architecture, and projective priors fosters faster convergence speeds and significantly reduces the training requirement.
**We would like to emphasize that all four points are crucial to MeshFormer. We thus do not agree that our primary contribution lies solely or primarily in adopting 3D representation and network architecture.** For example, without points (a), (b), and (c), our mesh quality would be significantly compromised.
## Point 2
We totally agree that the utilization of 3D voxel representation is common in general 3D generation. However, **all works listed by the reviewer (A-E) focus on 3D native diffusion, one of the paradigms in 3D generation, which differs from the route of MeshFormer.** There are some common limitations of this line of work. For instance, all of A-E focus on geometry generation only and cannot predict high-quality texture directly from the network. Also, due to the limited amount of 3D data, 3D native diffusion methods typically struggle with open-world capability and focus on closed-domain datasets (e.g., ShapeNet) in their experiments (A, B, C).
In MeshFormer, **we aim to achieve direct high-quality texture generation and handle arbitrary object categories**. We are thus following another route: sparse-view feed-forward reconstruction, instead of the 3D native diffusion. In this specific task setting, **many of the works provided by the reviewer are not suitable for comparison in our experiments**. More comparable works are recent LRM-style methods (e.g., InstantMesh, MeshLRM, LGM, TripoSR, etc). However, **most of them only utilize the combination of triplane representation and large-scale transformers**.
In our paper, we do not claim to be the first to use voxel representation in 3D generation. Instead, we would like to share our findings:
- In this specific task setting (open-world sparse-view reconstruction with feed-forward textures), we are not limited to the triplane representation. 3D native structures (voxels), network architectures, and projective priors can facilitate more efficient training and significantly reduce the training resources required (from over one hundred GPUs to only 8 GPUs).
- In this specific task setting, we require a scalable network to learn a lot of priors. However, not only triplane-based transformers can be scalable. When marrying the 3D convolution with transformer layers, it can also be scalable.
- In addition to using 3D native representation and networks in 3D native diffusion, we can also combine them with differentiable rendering to train a feed-forward sparse-view reconstruction model with rendering losses.
- For image conditioning, C and E only take a single image as a condition. D **first employs max pooling across multi-view features** and then uses cross-attention across the pooled multi-view features and the voxel feature. The max pooling is typically affected by occlusion (voxels are not visible in all views) and thus becomes less effective. Instead, we propose using cross-attention across all multi-view projected image features and voxel features, which can implicitly leverage structure and visibility priors to focus only on visible regions. We demonstrate that this strategy is more effective than the pooling strategies used in D, as shown in Tab. 3 of the paper and Tab. 1 of the rebuttal PDF.
---
Rebuttal Comment 1.1:
Title: Comments on Rebuttal
Comment: Thanks for the additional experiments and comments. They have addressed my previous concerns.
Regarding the technical contribution, it is understood that the authors would like to highlight the contributions of applying a voxel representation and corresponding networks instead of a tri-plane representation in an open-world reconstruction task. I agree that this work has shown good evidence for the necessity of this choice, and this should be recognized.
Despite this, I believe that a comprehensive discussion of related works, regardless of in-categories setting ([A-C]) or open-world setting (D-E), would help readers understand the field's development. I am glad the authors agreed to include these discussions in the final revision and adjust my rating to acceptance.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for adjusting the score! We will follow your suggestion to cite the mentioned works and include a comprehensive discussion. | Summary: This paper proposes an improved framework for feed-forward reconstruction models. The authors advocate a number of improvements over the initial design of Large Reconstruction Model, including model architecture and training schemes. Experiments show that the method reconstructs better geometry and texture on Google Scanned Objects and OmniObject3D datasets.
Strengths: - The paper is focused on ablating different components for feed-forward sparse-view reconstruction, and in-depth analyses are provided for each design choice. Although no complicated new method is proposed, such analysis brings value for understanding how and why each component works.
- The proposed method is evaluated on (preprocessed) real-world multi-view datasets, showing improvements over baselines on all metrics. Extensive ablative analyses are also provided to better understand the behaviors of the proposed method.
Weaknesses: - Since this is more of an analysis paper, it would be good if the authors could also document the other components that were tried/ablated but did not yield significant differences.
- Since training resources were discussed and compared, it would be nice to see an analysis of mesh generation/reconstruction quality over training time.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see the questions in the weakness section.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Experiments tried/ablated but did not show significant differences
We are happy to follow the reviewer's suggestions to include more discussions about the experiments we have conducted in our revision, such as:
- the difference between joint training and separate training of the dense model and sparse model;
- the difference between max-pooling, mean-pooling, and cross-attention in projection-aware feature aggregation;
- the comparison of inference time and memory consumption with baseline methods;
- mesh generation quality over training time;
- quantitative analysis of the geometry enhancement module;
- more qualitative examples on real-world images.
Please let us know if there are any additional experiments you are interested in.
## Mesh generation/reconstruction quality over training time
Our MeshFormer can be trained efficiently with only 8 GPUs, generally converging in roughly two days. We have followed the suggestion to include a quantitative analysis of our mesh generation quality over training time. As shown in Table 2 of the rebuttal PDF, the performance quickly improves and nearly converges with marginal changes after the two-day training period. | Rebuttal 1:
Rebuttal: We thank all reviewers for their insightful comments and valuable suggestions. We are pleased to note that all five reviewers were supportive of our work:
- They complimented the impressive mesh quality with fine-grained geometric details (r7MY, bHQc, 3423, mWQL, V11k).
- They praised our fast training speed and significantly reduced computational resources (r7MY, bHQc, 3423, mWQL).
- They noted that our method is well-motivated (3423) and the paper is well-written (mWQL).
- They highlighted our qualitative state-of-the-art performance (r7MY, mWQL, V11k, 3423) and insightful and extensive ablation study (mWQL, V11k, 3423).
- They acknowledged the novelty and benefits of our concrete findings/contributions:
- Using normal images as additional input for feed-forward models greatly helps predict geometric details. (r7MY, mWQL, bHQc, 3423).
- Proposing to output a normal map, which can be used for geometry enhancement (r7MY, 3423).
- The combination of SDF loss and rendering losses enables unified single-stage training and achieves good geometry (r7MY, mWQL, bHQc, 3423).
- Explicitly leveraging 3D native structures and projective priors fosters faster convergence speed (r7MY, bHQc, 3423).
We have also included a PDF with some figures and tables to address the specific concerns raised by the reviewers.
Pdf: /pdf/5e0c2e58cf444e320fba0d3e3974621d5a52fdc7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this work, the authors propose MeshFormer, a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision. They leverage 3D sparse voxels as their representation and combine transformers with 3D (sparse) convolutions to inject 3D prior. Additionally, they propose to take the corresponding normal maps together with sparse-view RGBs as input and also generate them as output, which could be used for geometry enhancement. Extensive experiments show that MeshFormer can be trained efficiently and outperforms state-of-the-art methods in terms of generating high-quality textured meshes.
Strengths: - MeshFormer is able to generate high-quality textured meshes with fine-grained geometric details.
- The authors find that using normal images together with RGB images greatly helps in predicting geometric details. Additionally, the model outputs a normal map, which can be used for geometry enhancement.
- The proposed method explicitly leverages 3D native structure, input guidance, and training supervision, resulting in faster convergence speed and better geometric details.
Weaknesses: - Pixel-based 2D methods (e.g., LGM) can preserve thin details, while 3D-based methods often smooth these details. How do you justify that? For example, in Figure 3 Column 4, the loose thread of the toy is captured by LGM, while MeshFormer ignores it.
- The proposed name "VoxelFormer" seems improper to me. It seems more like a 3D UNet with a deep bottleneck composed of multiple transformer layers.
- The projection-aware cross-attention layer projects 3D voxels onto the m views to interpolate m RGB and normal features. However, in the object case, one 3D voxel usually only corresponds to one view (due to occlusion). This cross-attention is projection-aware but not truly 3D-aware. Have you tried some occlusion-aware attention in your sparse model? Since you already have the coarse structure of the object, it could be used to filter out unneeded features.
- According to Table 3 (d), you mention "we replace the cross-attention with simple average pooling and observe a significant performance drop." Could you also try max-pooling? Additionally, do you concatenate the 3D feature voxel at every level of the network, as done in One-2-3-45++?
Technical Quality: 3
Clarity: 3
Questions for Authors: - Do you use a shared backbone (trainable DINOv2) for both RGB and normal images? Do you use Plücker embedding here?
- Could you provide a more detailed description for the Sparse VoxelFormer architecture? For example, how many sparse convolution layers are used in each resolution?
- Instead of joint training, have you tried splitting the dense model and sparse model for two-stage training?
- The output voxel resolution is $256^3$, while the SDF supervision is $512^3$. I notice that there is an interpolation step in Figure 2. It would be better to add a short text description for this.
- Do you use GT multi-view normals for the teaser? If you use the GT normal images, please include that in the caption.
- I suggest discussing XCube [a] in your literature review. XCube also utilizes sparse voxels as their 3D representation and leverages 3D sparse UNet with transformer layers. Additionally, they generate 3D shapes in a coarse-to-fine manner and use tiny MLPs to predict various attributes, such as normals, semantics, and SDF.
[a] XCube: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies. CVPR 2024.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors already include limitations and broader impact in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Thin structures
We would like to clarify that the loose thread of the toy was not displayed due to a slight pose mismatch when visualizing the results. In fact, the loose thread is reconstructed by our MeshFormer. We have included additional views of our generated results (see Figure 1 of the rebuttal PDF), where the loose thread can be observed. You can check the video on our website.
MeshFormer generates high-resolution (512) SDF volumes, which are sufficient to preserve very thin structures, as shown in Figure 2 of the rebuttal PDF.
## The name "VoxelFormer"
The meaning of "VoxelFormer" is two-fold. On the one hand, it refers to combining voxel representation and transformer layers, in contrast to recent LRM-based methods that rely on triplane representation and transformers to achieve scalability. On the other hand, it can also be interpreted as Voxel-Form-er, meaning the module that generates the voxels. We are open and happy to consider other names if the reviewer has more suitable suggestions.
## Occlusion-aware feature aggregation
We agree that occlusion-aware feature aggregation is very important. This is the primary reason we use a cross-attention layer to aggregate the projected multi-view features instead of using a simple average or max pooling method like previous approaches. We hope the cross-attention layer can implicitly utilize prior knowledge of coarse structure and visibility to focus on the visible views. Our experiments also verify its superiority over mean pooling aggregation (Table 3 (d) of the paper). While we could explicitly filter out some views according to the predicted occupancy from the first stage, as the reviewer suggested, we would like to point out that the occupancy band has some thickness, and accurately determining the visibility of each voxel can be quite challenging. This may require more advanced techniques, and we leave this as a promising future direction.
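For intuition, here is a simplified single-head NumPy sketch (hypothetical shapes; no learned projection matrices, unlike the actual network) contrasting mean pooling with attention-weighted aggregation of projected per-view features for one voxel:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8   # feature dimension
m = 6   # number of views the voxel projects onto

voxel_q = rng.normal(size=(d,))       # query: one voxel's feature
view_feats = rng.normal(size=(m, d))  # projected per-view RGB/normal features

# Mean pooling weighs every view equally, including occluded ones.
pooled = view_feats.mean(axis=0)

# Cross-attention lets the voxel weight views, which can implicitly
# down-weight occluded views (simplified: no learned Q/K/V projections).
scores = view_feats @ voxel_q / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()
attended = weights @ view_feats
```
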
## Table 3 (d)
We followed the suggestion to add an ablation variant using the max-pooling aggregation mechanism (please refer to Table 1 of the rebuttal PDF). We found that while the max-pooling aggregation performs slightly better than average pooling, it is still significantly inferior to our projection-aware cross-attention mechanism.
Yes, we follow the "skip-connection" scheme of the typical UNet to concatenate the voxel features before the bottleneck with the voxel features after the bottleneck. If this is not what you are asking, please let us know.
## Shared backbone and Plücker embedding
Yes, we use a shared backbone (trainable DINOv2) for both RGB and normal images. We did not use Plücker embedding in our experiments. Instead, we leveraged the camera poses to unproject 2D image features into 3D feature volumes.
## Detailed description of the sparse VoxelFormer architecture
Yes, we will follow your suggestion to include more details about the Sparse VoxelFormer architecture in our revision, such as the number of sparse convolution layers used at each resolution.
## Joint training
Yes, we began our experiments by training the dense model and sparse model separately. However, we found that this approach leads to a domain gap for the occupancy field during inference, as the predicted occupancy by the dense model may be imperfect, while the sparse model is only trained with ground truth occupancy. Joint training can mitigate this gap and reduce artifacts during inference.
## Interpolation from $256^3$ to $512^3$
Thank you for pointing this out. We will add the missing description in our revision. Specifically, we generate a sparse feature volume with a resolution of 256 and then trilinearly interpolate it to a sparse feature volume with a resolution of 512. These interpolated sparse features are then fed into the SDF decoder to predict SDF values, which are subsequently used to compute the loss against the 512-resolution ground-truth SDF.
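As a toy illustration of this interpolation step (using a 4³ → 8³ dense volume and SciPy's `zoom` in place of the authors' sparse-volume implementation, which this sketch does not reproduce):

```python
import numpy as np
from scipy.ndimage import zoom

# Toy stand-in for the feature volume: resolution 4^3 with C=2 channels
# (the paper uses 256^3 -> 512^3; names and sizes here are illustrative).
feat_lo = np.random.rand(2, 4, 4, 4).astype(np.float32)

# Trilinear (order=1) upsampling by 2x along the three spatial axes,
# done per channel; the result would then go to the SDF decoder.
feat_hi = np.stack([zoom(c, 2.0, order=1) for c in feat_lo])
```
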
## Multi-view normals for the teaser
Yes, we use GT multi-view normals for the teaser. We will make this clearer in our revision.
## XCube
Thank you for pointing this out. We will cite XCube and discuss it in our revision. We agree that XCube shares many high-level ideas with ours, such as hierarchical sparse voxel representation and coarse-to-fine generation. However, we would like to point out that it follows the paradigm of 3D native diffusion, which can only generate geometry and fails to directly predict texture from the model. In contrast, we follow the paradigm of feed-forward sparse-view reconstruction and incorporate differentiable rendering into the pipeline, which enables the network to directly generate high-quality texture. Additionally, we combine 3D (sparse) convolution with transformer layers to increase the capacity and scalability of the network, while XCube relies mainly on 3D convolution and may be limited in both capacity and scalability.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I would like to accept this paper. Please include the promised changes in the revision. | null | null | null | null | null | null |
Limits of Transformer Language Models on Learning to Compose Algorithms | Accept (poster) | Summary: This paper studies whether transformers can efficiently learn compositional discrete tasks. In particular, the paper introduces two new tasks: pointer execution neighbor and pointer execution reverse multicount as well as using multiplication and highest subsequence sum from prior work. First, small models are trained from scratch, showing substantially slower learning on the composition than on the subtasks. Next, API models are prompted to solve the same tasks and perform somewhat poorly. Some theory is also provided showing how models that memorize can struggle to learn compositions efficiently.
Strengths: 1. The paper proposes an interesting question as to whether we can determine whether a language model has some higher level concept of task composition that allows it to learn compositions of previously learned tasks efficiently.
2. The paper includes a nice theoretical result via a complexity theory reduction that shows how composition is hard if we assume stylized models that memorize the training data.
Weaknesses: 1. H1 as written cannot be disproven empirically since the "constant" could just be larger than those tested. It seems in the experiments "constant" means 100. If that is what is meant, then just say so in the hypothesis.
2. It is not clear if the notion of "sub-task" is somehow atomic and unique. This makes hypothesis H2 and H3 somewhat ill-defined too. It is possible that there are different sub-tasks (and perhaps many more of them) that better track how the model actually learns. Just because we can posit one way to compositionally solve the task does not mean that the model will learn that way (or even that it can necessarily represent that composition).
3. It is not clear why the new tasks are necessary or what specifically they add over prior, simpler tasks. There needs to be more justification of the somewhat complicated tasks to explain why they are necessary. Generally the presentation of these tasks and the results was unclear and could use improvement to make it more visually clear how matches and transitions are meant to happen in the tasks and more precisely what all the baselines are doing in the experiments.
4. It is not clear why one would expect an untrained small (150m) language model to somehow be able to compose subtasks without being trained to perform composition. As such, the results that the composition does not just arise and indeed takes longer to learn is not surprising.
5. I am somewhat worried that the way the strings representing the nodes are designed is interacting badly with the tokenizers in the API experiments. These are clearly "out of distribution" types of words and they may be tokenized in ways that make it very difficult to solve the task. Did you do any analysis of how these strings get tokenized? The tokenizers are publicly available. Also, it is difficult to fit this section into the story of the paper since there is no comparison to the learning of the subtasks.
Technical Quality: 2
Clarity: 1
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Limitations of not solving the issues raised are addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and observations.
---
**(W1)** As shown in our results, the constant in H1 can be set within a very large range, and 100 was chosen for illustrative purposes; what is crucial is that we assume it is empirically much lower than the sample counts involved in H2. H1 is thus defined to make the hypothesis more accessible and actionable, and it should only depict an upper bound on "a few demonstrations", i.e., we were inspired by what one would usually provide in a few-shot prompt and took an upper bound. We also refer to the PAC learning theorem, where the logarithmic factor shows that a large hypothesis space can be reduced with very few samples, making a constant assumption on limited-complexity tasks reasonable in practice.
---
**(W2)** Thanks for the critique, it is indeed an important aspect that was not sufficiently clarified in the manuscript. In the introduced tasks, every primitive roughly corresponds to a basic block of operations in the equivalent procedural programming language and can be considered an atomic unit. To visualize the atomicity of the primitives within the tasks, we included pseudocode that shows how the introduced tasks (PEN and PERM) can be decomposed and solved with these primitives by a language model trained on next-token prediction in the additional 1-page PDF note.
The definition of these sub-tasks is, per se, not unique. However, our definition of sub-tasks is _minimal_, as every primitive is independently observed in only one task (as described in Section 2.2), and the primitives are atomic. In addition, we argue that our observations are independent of the optimality and uniqueness of primitives and sub-tasks. The inefficiency observed is _relative_ to the given set of primitives and not defined in absolute terms. Finally, we cannot completely exclude the existence of an "optimal" set of primitives that could be easily re-combined by the model to learn compositional tasks efficiently. However, this possibility is deemed unrealistic by some of our experimental results. For instance, in our de-compositionality experiments (Section 4.2 and Tables B8, B9) we see that the model learns the sub-tasks much faster when pre-trained on the composition task. We speculate that this provides evidence that the model learns mechanisms similar to our primitives and that the learning inefficiency is, hence, mostly imputable to the difficulty of the model to recombine the primitives (which would not be solved by a better set of primitives).
---
**(W3)** Thank you very much for your question. We tried to improve the exposition of the tasks and their motivation in our rebuttal to the reviewer teuc. In addition, we also improved Figure 1, please refer to the additional 1-page PDF note.
---
**(W4)** This is a valid concern. Firstly, we argue that experimenting with small, untrained models can still be interesting to test whether compositionality can emerge naturally or not (as this result could then potentially be translated to real-world practical applications) and to provide insights into the intrinsic limitations of Transformer-based models on these tasks. Then, we highlight that in our experiments we also incorporated a certain amount of inductive bias toward a compositional solution. For example:
- _In-context learning_. We prompted GPT-4 and Gemini-Pro extensively and demonstrated that they did not learn to compose their primitives successfully enough to even perform the tasks when given a clear description and a large number of examples. Please find the new results with many-shot prompting in the general response.
- _Training the model from scratch_. Note that we provided hints to the model such that learning how to compose learned primitive sub-tasks is made easier. We do this by introducing the PEV (Pointer-Execution Verbose) task to the list of sub-tasks, which implicitly shows the model the order in which the other sub-tasks need to be applied to solve PEN. For results with different architectures, please see "ablations with different architectures" in our general response.
- _Fine-tuning of larger pre-trained models_. Inspired by your comment, we have designed a new set of experiments to fine-tune larger pre-trained models on our algorithmic tasks, to validate whether the bigger size and/or the pre-train knowledge would affect our observations. Unfortunately, this kind of training takes a considerable amount of time, but we hope to be able to provide practical results before the end of the discussion period.
---
**(W5)** Thank you for raising potential concerns regarding tokenizations. We already searched through different task designs to improve the performance of GPT-4 and Gemini-Pro. The reviews motivated us to conduct further investigations and quantification primarily on the openly available GPT-4 tokenizer. Our findings can be summarized as follows (see the general response for a detailed discussion):
- We ablated different task designs in the initial submission; however, we found that the current sequence design (e.g., "ab1cd") performs best among other alternatives (such as "ab-1-cd").
- As an additional analysis, we tested the tokenization of GPT-4's open-source tokenizer (tiktoken). We found that the digit delimiter is always tokenized separately (e.g., "ab1cd" yields "ab", "1", and "cd"); hence, it does not have a detrimental effect on the attention mechanism. We found that some of the 2-grams were split into 2 tokens; however, removing them from the dataset did not improve the overall task accuracy with the best-performing prompting technique.
- We designed a new natural variation of PEN, dubbed Natural-PEN, where we replaced the synthetic 2-gram sequences with 3-gram, in-distribution, natural English words. Overall, we found that GPT-4's performance does not improve on these in-distribution, natural English words, yielding an even lower accuracy on Natural-PEN.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and clarifications.
1. Using ideas like "a few demonstrations" is just not formal. And dressing this up as a formal hypothesis just seems to be misdirection when the experiments just choose some arbitrary number out of thin air. This is honestly a small part of the paper, and does not majorly impact my opinion, but it is bad practice.
2. Thanks for the discussion here. I agree that it is a tricky issue for sure, but I am still not totally convinced that these must be the ways to decompose the tasks or that it is conclusive that the model is actually learning them in the way presented.
3. Thanks for the figure. It is still not clear to me why the new task definitions are necessary.
4. Thanks again for the clarification. Indeed adding the hints that get closer to showing how to do composition is maybe helpful. I guess my point was that maybe these synthetic tasks are missing the diversity of natural data that may have more examples of *how* to do composition in a general sense. And just training on narrow subtasks and expecting composition to arise seems unrealistic (and also not being suggested by people in the literature as being something that would occur, to my knowledge)
5. Thanks for these extra experiments and investigation. This does seem to be better evidence that the tokenization is not a major issue.
I will increase my score to a 4 to reflect the partial resolution of some of the issues I raised, but I think the remaining issues still leave the paper below the bar for acceptance in my mind, especially with several substantial changes to the paper being suggested.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your score, for the additional comments on our work, and for your involvement in the review process, we really appreciate it.
**(1)** We agree that the definition right now lacks formality and that this should be addressed. We propose a simple variation of our initial hypothesis, which does not affect either our observations or our experimental setup, yet fixes the weak definition of H1.
- H1. A Transformer language model learns a compositional task with fewer samples than those required to learn its most difficult sub-task.
- H2. A Transformer language model learns a compositional task with fewer samples than the sum of the samples needed to learn every sub-task.
- H3. A Transformer language model learns a compositional task with approximately (the same order of magnitude) as many samples as the sum of the samples needed to learn every sub-task.
- H4. A Transformer language model needs more data samples than in H3.
**(2)** It is indeed hard, given the limited explainability of the internal mechanisms of Transformer models, to establish whether or not the model is effectively learning certain operations. However, the experiments on the task decomposition represent in our view a solid clue that LLaMa learns internal operations related to the set of primitives that we define (since pre-training on the compositional task results in a faster learning of the individual sub-tasks). Additionally, we argue that this experiment also proves the validity of our choice of primitives: when trained only on the compositional task, the model autonomously converges to operations that are related to them and can effectively compose them to solve the task. Hence, we believe that assuming that the model should be able to leverage the primitives in the given sub-tasks to learn the compositional task does not seem too far-fetched.
This, in some sense, links back to some of your remarks in the initial review which we might now have completely addressed in the previous response.
> Just because we can posit one way to compositionally solve the task does not mean that the model will learn that way (or even that it can necessarily represent that composition)
In practice, the experiments on de-compositionality show exactly these two points.
To sum up the rather long discussion on the topic, these were the main important clarifications and arguments.
- _atomicity_: the sub-tasks are effectively atomic units roughly corresponding to a basic block of operations in the equivalent procedural programming language;
- _uniqueness_: the sub-tasks are not unique because we could consider an arbitrary set of sub-tasks composed of primitives or combinations of primitives that make each one of them observable. However, we choose the minimal set of sub-tasks that makes every operation observable.
- _validity of the primitive set_: experiments on task de-composition corroborate the hypothesis that the considered primitives can be learned and effectively composed when training only on the full compositional task. This represents a validation of our choice of primitives and of our experimental setup in general.
- _relativity_: the measured inefficiency is relative to our specific definition of sub-tasks and primitives, as correctly pointed out. However, considering all the points above, we speculate that similar results could be measured also for alternative definitions.
**(3)** We included an extensive rebuttal on the nature of the introduced tasks and why they are a meaningful contribution to the community in general in the response to reviewer teuc. We include a summary of our main arguments in the following list.
- First, a main driver for the design of our new tasks was the absence in the landscape of research on compositional learning of algorithmic tasks based on the pointer execution (PE), a synthetic evaluation task first introduced by Bengio et al. [20] to test the generalization capabilities of deep learning models. This task limits the number of confounders in the data and, therefore, reduces (if not annihilates) the number of shortcut solutions that the model can find. In other words, pointer execution tasks force the model to learn an algorithmic (general) solution by design and not rely on any possible statistical shortcut that might compromise the validity of the observations.
- Second, PEN and PERM are particularly suitable for benchmarking compositionality due to their algorithmic nature. We can decompose them into single atomic operations (referred to as primitives in the text), which can make learning the task easier if the model can identify and leverage their compositional nature.
- Finally, our tasks represent a unique example in the sense that the failure of GPT-4 and Gemini-Pro is very apparent despite extreme help, e.g., providing multi-shot examples, and they showcase the limited compositionality of current SOTA models better than any task we are aware of. | Summary: This paper focuses on analyzing transformer language models' learning and transferability on compositional discrete tasks. Specifically, it puts forward four hypotheses, and the authors study, for a variety of language models, whether these hypotheses hold.
H1. An LLM can learn to perform a compositional task with a constant number of datapoints.
H2. An LLM can learn to perform a compositional task given as many samples as the most difficult sub-task requires.
H3. An LLM can learn to perform a compositional task given as many data samples as relearning all sub-tasks would require.
H4. An LLM needs more data samples than in H3 to learn a compositional task.
The authors introduce a new benchmark for creating systematic sub-tasks and testing compositionality.
With the LLaMA model, H4 holds; with both GPT-4 and Gemini, prompting (H1) fails to perform the tasks, as does multi-round code generation with CoT techniques.
Strengths: Originality: 3.5 / 5
This paper examines how the number of datapoint samples affects the learning of compositional tasks in existing transformer-based large language models (LLMs). The authors created a new, challenging compositional dataset based on computation graphs and demonstrated that learning compositional tasks with LLMs is highly data-inefficient. While the BIG-bench paper indicates the insufficiency of reasoning and compositional abilities in LLMs, this paper innovatively provides a concrete, quantitative, and extensive study on the extent of this insufficiency.
Quality: 3.5/5
The empirical study is pretty extensive, covering LLaMA, GPT-4, and Gemini. Multiple prompting techniques were adopted with GPT-4 and Gemini, all of which fail to generate reliable results. There are also very interesting theoretical proofs in the appendix to bolster the authors' claims.
Clarity: 3/5
Figure 1 is hard to understand just by staring at the graph. For each task, it only provides one example, which is not trivial at all. One can hardly figure out the ground truth program for each example, or whether, in a PE task, the underlying program is the same across all the datapoints. I believe a descriptive caption by the side of each task is necessary. For example, PE refers to a program that takes a sequence of words and returns a list of words all colored green, where the first output word matches the first input word, and any subsequent output word starts with the last two characters of the previous word. However, the figures and tables in the experimental section are pretty clear and helpful for understanding.
Significance: 2.5/5
Understanding the problem of the data inefficiency in transformer based LLMs is important to the community which focuses on data efficiency and reasoning, such as neuro-symbolic community.
Weaknesses: As stated in the strengths above, one of the main issues is the clarity of the tasks. Besides "what is the task", I also want to understand "why these two tasks are needed". What do these two datasets, PEN and PERM, bring?
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1. Is the PEN dataset only corresponding to one underlying program?
Q2. What are the insights behind these two datasets, PEN and PERM?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I believe the stated limitation regarding addressing weaknesses in the LLM is not appropriate for this specific paper. Instead, the limitation should focus on the choice of the compositional dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback, insightful comments, and observations.
**PEN and PERM: clarification and motivation**
To better understand the PEN and PERM tasks, we start the exposition by explaining the original Pointer Execution (PE) task using our encoding scheme.
The PE task is similar to linked list traversal or pointer chasing. The inputs are words delimited by whitespaces, where each word consists of an address, a delimiter, and a value. In our setup, addresses and values are each encoded using two letters from the English alphabet, and the delimiter is a single digit between 0 and 9. For example, uk7bh has "uk" as the address, 7 as the delimiter, and "bh" as the value. The first word in the input sequence is a single address, which points to a unique word in the sequence, starting the linked list traversal process. The sequence is traversed by reading out the value of the current word and matching it to the unique word whose address equals the current value. The traversal ends once there is no word whose address equals the value of the current word.
For example, given the input sequence "aa bc1yu aa5bc op9mn", the correct word traversal order is "aa aa5bc bc1yu".
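As a hedged illustration (not the authors' implementation), the traversal just described can be sketched in a few lines of Python, assuming the `<2-letter address><digit><2-letter value>` word layout from the example; `pe_traverse` is a hypothetical name:

```python
def pe_traverse(seq: str) -> str:
    """Sketch of Pointer Execution: follow value -> address links."""
    words = seq.split()
    start, rest = words[0], words[1:]
    by_addr = {w[:2]: w for w in rest}  # index each word by its 2-letter address
    out, value = [start], start         # the first token is a bare address
    while value in by_addr:
        word = by_addr.pop(value)       # pop to guard against accidental cycles
        out.append(word)
        value = word[3:]                # the 2 letters after the digit delimiter
    return " ".join(out)

print(pe_traverse("aa bc1yu aa5bc op9mn"))  # prints: aa aa5bc bc1yu
```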
This type of task is especially suitable for our study, as it limits the number of confounders in the data and, therefore, reduces (if not annihilates) the number of shortcut solutions that the model can find. In other words, PE tasks force the model to learn an algorithmic (general) solution by design and not rely on any possible statistical shortcut that might compromise the validity of the observations.
However, algorithmic tasks based on pointer chasing have so far been absent from the landscape of research on compositional learning. This is in large part due to the difficulty in expressing the PVR task using any primitive procedure except ``match``, which extracts the value of the word currently processed and finds the word whose address matches the value. Hence, our PEN and PERM tasks emerged from the need for a set of tasks that keep the beneficial properties of pointer-chasing tasks and can simultaneously be expressed as a composition of a set of simple primitives.
The algorithmic primitives we introduce in the PEN task are ``left`` and ``right``, which require the model to find the word to the left or to the right, respectively, of the word that is currently being processed.
The input sequence is also modified and now consists of two sequences with interleaved elements. The first sequence is a valid PE sequence, whereas the second sequence is reminiscent of a PE sequence, but the values and addresses are not restricted to be unique. The two sequences have no common addresses or values. As an example, take the two sequences "ab ab8cd cd6ef" and "gh gh9kl gh6ij". The first one is a valid PE sequence, and the second one has the repeated address "gh". The corresponding PEN sequence is "ab gh ab8cd gh9kl cd6ef gh6ij". The task consists of traversing the pointers of the first of the two interleaved sequences while, at each traversal step, outputting the word to the right of the word currently traversed (the neighbor).
The fact that the model needs to output the neighbor combined with the neighbor's value, possibly having several matching addresses in the sequence, makes PEN a challenging compositional task, which can still be solved using only a few algorithmic primitives.
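A minimal sketch of this traversal, under the assumption that the output consists of the right-neighbor of every visited word (including the starting address's neighbor), could look as follows; `pen_traverse` is a hypothetical name, not the authors' code:

```python
def pen_traverse(seq: str) -> str:
    """Sketch of PEN: chase pointers in the first interleaved sequence,
    emitting the right-neighbor of each visited word."""
    words = seq.split()
    main = words[0::2]                              # the valid PE sequence
    right_of = dict(zip(words[0::2], words[1::2]))  # word -> its right neighbor
    by_addr = {w[:2]: w for w in main[1:]}          # index by 2-letter address
    out, value = [right_of[main[0]]], main[0]       # main[0] is a bare address
    while value in by_addr:
        word = by_addr.pop(value)
        out.append(right_of[word])
        value = word[3:]                            # letters after the delimiter
    return " ".join(out)

print(pen_traverse("ab gh ab8cd gh9kl cd6ef gh6ij"))  # prints: gh gh9kl gh6ij
```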
The PERM task is also based on the PE task, with a slightly different encoding of the sequence elements. The elements now consist of four letters from the English alphabet, to be interpreted as pairs of addresses and values, each encoded using two characters. The first element in the pointer traversal is explicitly provided.
PERM builds on top of the PE task by requesting that the sequence of words in the pointer chasing problem be output in reverse order, and it additionally involves keeping track of the length of the chain of pointers as well as the pointer orientation, i.e., does it point to the left or to the right of the current word.
Let us start with the following example sequence: "aabb eeff ccdd bbcc ddee", with the initial word being "aabb". Following the sequence of pointers starting from "aabb" results in the following sequence: "aabb bbcc ccdd ddee eeff". Each element in the resulting sequence has two numerical values associated with it, these being the length of the preceding pointer chain ('count') as well as the number of 'left'-pointers in the chain preceding the current element ('count left'). In our example here, the relevant sequences and numerical values are represented below:
Input sequence: "aabb eeff ccdd bbcc ddee"

| Pointer chasing result | aabb | bbcc | ccdd | ddee | eeff |
|---|---|---|---|---|---|
| Count | 0 | 1 | 2 | 3 | 4 |
| Count left | 0 | 0 | 1 | 1 | 2 |
Each step in the pointer-chasing procedure increases the "count" by one.
"bbcc" -> "ccdd" is the first pointer oriented to the left, and so it increases "count left" from 0 to 1. "ddee" -> "eeff" is the second pointer oriented to the left, and it increases "count left" from 1 to 2.
The final result is obtained by outputting the pointer chasing result in reverse order and associating with each word in the output sequence a number equal to "count" multiplied by "count left" for that word. This results in the following output:
Input sequence: 'aabb eeff ccdd bbcc ddee'
PERM result: 'aabb.0 bbcc.0 ccdd.2 ddee.3 eeff.8'
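The bookkeeping in this worked example can be reproduced with a short sketch (an illustration under our own assumptions, not the authors' implementation; the output ordering follows the worked example, and `perm_annotate` is a hypothetical name):

```python
def perm_annotate(seq: str, start: str) -> str:
    """Sketch of PERM counts: 'count' is the chain position, 'count left'
    tallies pointers oriented to the left in the input sequence."""
    words = seq.split()
    pos = {w: i for i, w in enumerate(words)}     # position of each word
    by_addr = {w[:2]: w for w in words}           # index by 2-letter address
    chain, count_left, lefts, cur = [start], [0], 0, start
    while cur[2:] in by_addr and by_addr[cur[2:]] not in chain:
        nxt = by_addr[cur[2:]]                    # follow value -> address
        lefts += pos[nxt] < pos[cur]              # a 'left'-pointer step
        chain.append(nxt)
        count_left.append(lefts)
        cur = nxt
    # Annotate each word with count * count_left.
    return " ".join(f"{w}.{i * cl}" for i, (w, cl) in enumerate(zip(chain, count_left)))

print(perm_annotate("aabb eeff ccdd bbcc ddee", "aabb"))
# prints: aabb.0 bbcc.0 ccdd.2 ddee.3 eeff.8
```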
---
**(Q1)** We cannot guarantee that our decomposition of the PEN task into the primitives "match", "left" and "right" is unique. A different set of primitives will lead to a different decomposition. However, given our set of very simple primitives, it is highly unlikely that there exists a decomposition different from what we provide in the PDF.
---
**(Q2)** We refer to our answer to Q1 of reviewer qmuR.
---
Rebuttal 2:
Comment: Your response has clarified my concerns. I have raised my score from 4 to 6. I really hope these clarifications can go into the revised version of the paper to improve the clarity and presentation.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for raising your score, we are glad that our response clarified your concerns.
We will indeed incorporate all this material in the revised version of the manuscript.
Once again, thanks for highlighting these issues and contributing to the improvement of our work. | Summary: This paper evaluates the compositional learning abilities of Transformer-based models with LLaMA-like architecture on tasks requiring the composition of several discrete sub-tasks. To this end, the paper reuses two existing compositional algorithmic tasks and introduces two new ones, focusing on how many samples are needed for models to learn to compose the sub-tasks compared to the sample efficiency of learning the sub-tasks themselves. The study measures the efficiency of models when trained from scratch and the effectiveness of prompting the pretrained language models GPT-4 and Gemini. The experiments suggest that the hypotheses that compositional learning requires no more samples than the sum of samples needed for each sub-task should be rejected. The paper also performs few-shot prompting with GPT-4 and Gemini with different prompting techniques to investigate their ability to learn to compose or decompose algorithms in-context and finds that they are unreliable for executing sub-tasks or correcting errors in multi-round code generation. Finally, the paper uses complexity theory to support these findings, suggesting that when training feedforward models to memorize information with gradient descent, the sample inefficiency is inevitable.
Strengths: 1. Aside from achieving state-of-the-art performance on many academic benchmarks, transformer-based language models are the undisputed workhorse for numerous real-world applications. However, their training does not necessitate compositional learning explicitly, while many of the tasks they are tasked at solving do require such capability. As such, understanding the limits and requirements for these models to learn to compose independent skills is key to drive our understanding of these foundational models and to improve them.
2. The analyzed tasks in the paper are very well defined to verify that a model that learns a task must know how to perform the subtasks, and that given capability to solve the subtasks, a model must only learn to compose these abilities to solve the task itself. Creating such settings is not trivial, and goes a long way to enhance our understanding of the compositional learning abilities of transformer models.
3. The paper provides a very thorough literature review and contextualizes the work around prior work very well.
4. The presentation of the paper is generally very nice, making a technical read easier and fluent.
Weaknesses: 1. The authors correctly identify tokenization as a possible negative confounder in the defined testbed, and thus use character-based tokenization for the training experiments. However, the same care is not taken when investigating the abilities of GPT4 and gemini to perform in-context-learning. Namely, given the highly synthetic nature of the inputs, it is highly possible that both the out-of-domain distribution of these inputs (deviating from natural texts) as well as differences in how inputs are tokenized (for example, one key can be tokenized with a single token, another by three tokens, and a third be tokenized along with parts of the corresponding value) confounds and skew the results, hindering their usefulness.
2. Moreover, while the authors study many prompting techniques to evaluate GPT-4 and Gemini, they use constant 8-shot prompting. It is known that these models can benefit greatly from an increased number of demonstrations, and it has been shown that for out-of-domain tasks, one has to prompt the model with significantly more than 8 demonstrations to perform well (e.g., Levy et al., 2022; Bertsch et al., 2024; Agarwal et al., 2024).
3. The proposed testbed is very well defined, but only a single transformer-based model is studied. Understanding the contextualization of the results given differences in model and task properties (for example, width-depth ratio, scaling behavior with respect to parameter count, or the effect of input length) would be very beneficial.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can you imagine similar experiments with more natural compositional tasks that can be learnt, and then used to benchmark SOTA LLMs in the settings they were designed and trained for? For example, do you think it is possible to use something similar to the unobserved local structures as proposed by Bogin et al. (2022) to create such settings?
2. Can you try repeating the experiments with clear instructions for GPT-4/Gemini, but using as keys only tokens in their respective vocabularies, separated by e.g. '-', so that the tokenization and the highly synthetic nature of the task would have a less detrimental effect on the results?
3. What are the exact specifications of the trained model? You mentioned it is 150M parameters; does that mean it is a 12-layer decoder-only model? What was the training procedure in terms of hyperparameters? Do you see any scaling behaviors à la Kaplan et al. (2020)? For example, does using a larger model decrease the number of samples needed? Also, does the depth of the model affect the ability to learn to compose sub-tasks?
4. Presentation suggestion - in a few places in the text, it would be very helpful for the reader to be able to read the descriptions if they were coupled with examples.
1. In section 2.2 you define sub-tasks in the computation graphs. I think including a small demonstration of such a computational graph, along with toy examples of sub-tasks and tasks would go a long way to make this section clearer and improve the smoothness of the reading.
2. In Section 3 you define and explain the different tasks and subtasks of your testbed. While Figure 1 is very nice and contributes a lot, it does not explicitly show the different sub-tasks or the procedures involved in deriving the output from each input, and in some cases (for example the counts in PERM) it may take time for the reader to understand what are the exact tasks. I think a more explicit demonstration of the procedure would be very helpful. It can either be a step-by-step demonstration of the procedure for each task (added in the appendix for brevity), or even a textual derivation of the procedure applied to clarify the operations being performed at every step.
3. In section 4, it will be very useful to add a table with an example input and output used for each task and their statistics (e.g. length, histogram on number of neighbor steps needed etc). If the inputs and outputs in Figure 1 are representative, you can also say that directly and point there.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors discuss some limitations, but do not directly address the generalizability of their findings to natural tasks subtasks composition.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and observations.
---
**(W1)** We thank the reviewer for raising potential concerns regarding tokenization. We already searched through different task designs to improve the performance of GPT-4 and Gemini-Pro. The reviews motivated us to conduct further investigations and quantification, primarily on the openly available GPT-4 tokenizer. Our findings can be summarized as follows (see the general response for a detailed discussion):
- We ablated different task designs in the initial submission; however, we found that the current sequence design (e.g., "ab1cd") performs best among other alternatives (such as "ab-1-cd").
- As an additional analysis, we tested the tokenization of GPT-4's open-source tokenizer (tiktoken). We found that the digit delimiter is always tokenized separately (e.g., "ab1cd" yields "ab", "1", and "cd"); hence, it does not have a detrimental effect on the attention mechanism. We found that some of the 2-grams were split into 2 tokens; however, removing them from the dataset did not improve the overall task accuracy with the best-performing prompting technique.
- We designed a new natural variation of PEN, dubbed Natural-PEN, where we replaced the synthetic 2-gram sequences with 3-gram, in-distribution, natural English words. Overall, we found that GPT-4's performance does not improve on these in-distribution, natural English words, yielding an even lower accuracy on Natural-PEN.
---
**(W2)** Thanks for the comment and the relevant pointers! Following your suggestion, we provided the models with a larger number of shots on both PEN and PERM to verify whether increasing the number of examples beyond 8 would yield an increase in the models' accuracy. We provided up to 32 examples (64 did not fit into the 8k context window of the GPT-4 version we are using). However, we did not observe any performance improvement, as reported in the table below.
|Model|Task |Termination Acc.|Match Acc.|Task Acc.|
|-----|---|----|----|----|
|GPT-4|PEN|0.16|0.06|0.0|
|Gemini-Pro|PEN|0.15|0.2|0.0|
|GPT-4|PERM|0.36|0.59|0.0|
|Gemini-Pro|PERM |0.32|0.05|0.0|
We believe that the helpfulness of the additional examples might depend on the considered task and that, in some cases, there could be no improvements or even considerable sensitivity to the number of prompts. If you consider these results relevant, an Appendix on them could be added upon acceptance.
---
**(W3)** This is a limitation of our current results. However, we did experiment with different sizes of LLaMA up to 600M. Because we could train a 33M model to perform the PEN task well, it is reasonable to assume a 150M model should be sufficiently large, and hence present our results with this model. Regarding other architectures, we also ablated a Universal-Transformer-style model in Appendix B.5 thanks to their good performance on compositional tasks like SCAN [61]. This model (UT-style LLaMA) has only one layer with the same configuration (embedding size 1024, 16 heads), and this layer is repeated. We achieved very similar results with UT-style LLaMA (Table B.10) compared to our main model. Please see our general response for more variants of architectures that we tried.
---
**(Q1)** This is an interesting question, thanks for pointing to the prior work for discussion in our paper and suggesting a more general research direction. In our investigation, the tasks considered do not present unobserved local structures since the composition of primitives only admits a specific combination of operations (e.g., RIGHT[MATCH[LEFT[MATCH]]] in PEN), and the trained models should always have (at least in principle) the possibility to observe and learn this specific sub-graph in the computational graph.
This is a desired property as the goal of our investigation was not to stress-test how well Transformer language models can learn arbitrary difficult compositional tasks but rather to measure how **sample efficient** they are in learning the composition of simple primitives in algorithmic tasks. Considering more tasks characterized by more complex phenomena (such as unobserved compositions of the primitives, spurious correlations in the data, semantical ambiguities, etc.) would have hence partially defeated this purpose, contaminating the effects of the intrinsic limitation of the Transformer model with other factors.
However, after observing and validating our hypotheses on the simple synthetic datasets, we still believe that it would be compelling to extend the analysis to more natural compositional tasks such as the ones investigated by Bogin et al. It would allow us to verify how well the observed behavior scales in a real-world scenario, possibly providing new angles to it.
---
**(Q2)** Please refer to the results presented in (W1) and to the note on tokenization included in the general response.
---
**(Q3)** We report here the exact specifications of the models that we trained in our experiments. We use the Adam optimizer, a hidden size of 1024, and 12 layers. The model is a standard LLaMA 2 implementation, decoder-only with RoPE positional encoding and a SwiGLU feedforward net. We did not use Grouped Query Attention, as LLaMA models do not use it in their smaller sizes.
For what concerns the possible ablations on width/depth and larger models, we performed some minor ablations on them, but we observed no changes in the results, discouraging us from proceeding further in this direction (see our response in (W3) as well). However, as pointed out in your question, it would be interesting to observe whether the size and depth of the model can play a role in the learning efficiency, and if this interplay can be characterized through similar scaling laws such as Kaplan's.
---
**(Q4)** We would like to extend our sincere gratitude for the suggestions on the presentation and writing in general. We will incorporate them upon acceptance.
---
Rebuttal Comment 1.1:
Comment: Once again, thank you for your valuable inputs and service as a reviewer. We are at your disposal for any further clarification or discussion regarding our rebuttal. | Summary: The paper investigates the capabilities of Transformer-based language models in learning compositional discrete tasks. The authors evaluate both training LLaMA models and prompting GPT-4 and Gemini-Pro on tasks that require the learning of compositions of several discrete sub-tasks. The results indicate that these models exhibit significant sample inefficiency: LLaMA models require more data to learn compositional tasks than to relearn all sub-tasks from scratch, and in-context prompting with GPT-4 and Gemini is unreliable and often fails in multi-round code generation. The findings are supported by a theoretical analysis showing the sample inefficiency of gradient descent in memorizing feedforward models.
Strengths: - The paper evaluates both training from scratch and in-context prompting methods, providing a thorough analysis of the models' capabilities.
- The authors introduce new algorithmic tasks designed to test compositional learning and providing a theoretical framework to support the empirical findings.
- The study offers a deep dive into the limitations of current LLMs, supported by both empirical data and theoretical arguments, which can guide future research in learning compositional tasks.
Weaknesses: - The tasks and settings used in the experiments may not cover the full range of real-world applications, limiting the generalizability of the findings.
- The performance and conclusions drawn are heavily dependent on the specific tasks designed by the authors, which might not fully represent other compositional learning scenarios.
- Personally, it took me a while to understand how the given algorithmic tasks are designed and how they relate to the broader context of compositional learning. For instance, the `PERM` problem was not immediately intuitive to me.
Technical Quality: 2
Clarity: 2
Questions for Authors: - How well do the findings translate to practical, real-world applications beyond the synthetic tasks used in the experiments? Any specific reason to introduce new algorithmic tasks for evaluation?
- Would the models perform differently on a broader variety of compositional tasks, particularly those that are more complex or domain-specific?
- What specific modifications to model architecture or training strategies could be employed to enhance the sample efficiency of Transformer models in compositional learning?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: - The study focuses on a limited set of compositional tasks, which may not fully capture the diversity of problems faced in real-world applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and questions.
---
**(W1)** We acknowledge that the investigated collection of tasks does not cover the full range of real-world applications. Nonetheless, this is a common choice across different works in this domain [paper references 16, 29, 30]. It allows us to operate in a **fully controlled** environment, limiting the impact of exogenous factors on the empirical observations while having the possibility to stress-test compositionality. Furthermore, we believe that working with synthetic datasets does not limit the generalizability of our findings. When a model fails to learn efficiently in a synthetic, fully controlled environment, it will fail even more so in complex, real-world scenarios, which present several additional challenges for the model (e.g., semantic and syntactic ambiguities, spurious correlations, distribution shifts, etc.).
This is also supported by a deep foundation of literature showing limited compositionality in practice [paper references 5, 8, 9, 11, 13, 16].
Moreover, the synthetic nature of the considered tasks does not preclude the possibility of making observations that could be translated to more complex, real-world scenarios as well. One example is the multiplication task (MUL), a fundamental task in mathematics on which LLMs struggle without a significant amount of additional training [paper references 11, 16]. We speculate that both our empirical and theoretical results can shed light on this behavior, justifying it as an intrinsic inefficiency of LLMs in composing the simple sub-operations necessary to solve the task and quantifying their sample inefficiency in certain complexity classes.
---
**(W2)** While, strictly speaking, our statements are limited to our tasks, our investigation includes a diverse range of algorithmic tasks, and we observe consistent behavior across all of them. Therefore, we expect that the same conclusions would hold on any similar tasks involving the composition of simple, known primitive operations (which occurs in many algorithmic, mathematical, and reasoning tasks).
---
**(W3)** We tried to clarify the tasks with an updated Figure 1, shown in the 1-page PDF note.
---
**(Q1)** Concerning the extent to which our observations could potentially extend to practical, real-world applications beyond the synthetic tasks used in the experiments, please refer to our observations in (W1). In addition, also refer to the general response (tokenization) with the additional results on the new Natural-PEN.
On the other hand, the reasons behind the introduction of the novel tasks are manifold:
- First, we argue for the introduction of algorithmic tasks based on the pointer execution, introduced by Bengio et al. [20] and so far absent in the landscape of research on compositional learning. This task limits the number of confounders in the data and, therefore, reduces (if not annihilates) the number of shortcut solutions that the model can find. In other words, pointer execution tasks force the model to learn an algorithmic (general) solution by design and not rely on any possible statistical shortcut that might compromise the validity of the observations.
- As algorithmic tasks, PEN and PERM are particularly suitable for benchmarking compositionality. We can decompose them into single atomic operations (referred to as _primitives_ in the text), which can make learning the task easier if the model can identify and leverage their compositional nature. We clarify their decompositional nature in the response of W2 of reviewer Fa35.
- Finally, our tasks represent a unique example in the sense that the failure of GPT-4 and Gemini-Pro is very apparent despite extreme help, e.g., providing multi-shot examples and showcasing limited compositionality of current SOTA models better than any task we are aware of.
---
**(Q2)** These LLMs would always perform decently if there is enough data. Our findings, however, are that the required data becomes excessive with more complex tasks. Our forecast is, therefore, that the more complex and the more domain-specific the tasks become, the harder it becomes for LLMs to perform them correctly (unlike our education systems, where a student does not necessarily get more data for more complicated concepts). Concretely, in our experiments, this materializes in a model starting to hallucinate wrong solutions and failing at seemingly obvious executions (see GPT-4 experiments and Figure 2).
---
**(Q3)** There exists evidence [17, 61] that Transformers with weight-sharing between different layers and recurrence tend to perform better on compositional tasks than standard Transformer variants. It is for this reason that we also experimented with several such models during our work. We discuss details thereof in the general response.
Additionally, we independently investigated several novel architectural modifications aimed at increasing compositionality within the Transformer. One such idea, inspired by [Elhage et al., 2022; Olah, 2023], has been to penalize superposition in favor of compositionality: we include a single parameter modifying the read-outs from the latent representations for the attention and feedforward blocks, which penalizes superposition and thereby possibly encourages compositionality in the latent representations. We also built a Transformer that appends new computations from attention and feedforward passes (instead of adding them) to a list of "latent tokens", with an additional attention mechanism to access those fields, in the hope that compositionality can be encouraged. While some of those approaches did increase compositional capabilities on small datasets, we have not found a fundamental improvement in the domain of our experiments so far.
[Elhage et al., 2022] https://arxiv.org/abs/2209.10652
[Olah, 2023] https://transformer-circuits.pub/2023/superposition-composition/index.html
---
Rebuttal Comment 1.1:
Comment: Once again, thank you for your valuable inputs and service as a reviewer. We are at your disposal for any further clarification or discussion regarding our rebuttal. | Rebuttal 1:
Rebuttal: We want to thank all the reviewers for their helpful and supportive comments. We are encouraged that they acknowledge the importance of addressing current Transformer models' limitations in learning compositional tasks (DGs4, teuc), and that, w.r.t. other benchmarks such as BIG-bench, our work provides a concrete, quantitative, and extensive study on the extent of this insufficiency (teuc) by introducing two novel evaluation benchmarks (i.e., PEN and PERM). Moreover, we are glad that they appreciate the thoroughness of our analysis (teuc, qmuR), with both from-scratch and in-context learning, as well as the theoretical proofs (teuc, Fa35, qmuR).
In this general response, we summarize our response to the reviewer's major concerns. A point-by-point response to each reviewer can be found in the individual responses.
## Tokenization
We thank the reviewers for raising potential concerns regarding tokenization. Indeed, we already searched through different task designs to improve the performance of GPT-4 and Gemini-Pro. The reviews motivated us to conduct further investigations and quantification, primarily on the openly available GPT-4 tokenizer. Our findings can be summarized as follows:
**Different task designs**
While preparing the initial submission, we already ablated the structure of the PEN and PERM examples fed into the LLMs to validate whether differences in the representation of the task could affect the tokenization and, indirectly, the performance of the model. One example of an ablated structure involved adding dashes to the sequences (e.g., "ab-1-cd"). However, these variations in the structure did not achieve any improvement over our base formulation of the tasks.
**Tokenizer analysis with GPT-4**
To shed more light on tokenization, we additionally tested the validity of the prompting technique reported in the manuscript for GPT-4 using its open-source tokenizer (tiktoken). Unfortunately, we could not find an open-source tokenizer for Gemini-Pro; the available token counter only provides limited information about the actual tokenization. Analyzing GPT-4's tokenization on our tasks showed that our induced sequence structure with digits as delimiters ensures safe splitting within a word, e.g., "ab1cd" would be mapped to three tokens "ab", "1", and "cd". However, we saw that 13.2% of the 2-grams are split into two tokens (e.g., "bq" is tokenized to "b" and "q"), which may affect the attention mechanism. As an additional experiment, we removed these 2-grams from the dataset. However, this setup could not improve the performance (0.05 vs. 0.19 on PEN) when using the best-performing prompting technique with GPT-4 (Code Interpreter). Hence, tailoring the task to the tokenizer does not lead to improved performance.
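As a minimal, self-contained illustration of the digit-delimiter property (a sketch of ours, not the actual tokenizer check: the real analysis uses GPT-4's BPE merges via tiktoken, which the regex below only mimics):

```python
import re

def split_on_digits(word: str) -> list[str]:
    """Split a sequence at single-digit delimiters, keeping the digits.

    This only mimics the segmentation the task design relies on
    (e.g., "ab1cd" -> ["ab", "1", "cd"]); the analysis in the response
    uses GPT-4's actual open-source tokenizer (tiktoken) instead.
    """
    return [tok for tok in re.split(r"(\d)", word) if tok]

print(split_on_digits("ab1cd"))    # the digit is always a separate piece
print(split_on_digits("ab-1-cd"))  # the ablated "dashed" variant
```

The point of the real experiment is that the digit delimiter is always its own token, while a fraction of the surrounding 2-grams may still be split by the BPE merges.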
**New Natural-PEN with in-distribution, natural English words**
Inspired by the comments of reviewer Fa35, we additionally designed a natural variation of PEN, dubbed Natural-PEN, where we replaced the synthetic 2-gram sequences used for the matching in the original tasks with 3-gram, in-distribution, natural English words. To do so, we filtered all valid 3-gram words from Scrabble (https://scrabble.collinsdictionary.com/word-lists/three-letter-words-in-scrabble/) that resulted in a single token when using GPT-4's tokenizer. This gave us a set of 707 words (out of the original 1338 words). Similar to the experiment in the previous paragraph, we find that GPT-4's performance using the best prompting techniques does not improve on these in-distribution, natural English words, yielding an even lower accuracy on Natural-PEN:
|Setup |Termination Acc.|Match Acc.|Task Acc.|
|-----|----|----|----|
|Few-shot CoT|0.5|0.27|0.0|
|Few-shot CoT, traps removed|0.2|0.29|0.0|
|Code Interpreter|0.05|0.1|0.05|
## Many-shot prompting of GPT-4 and Gemini-Pro
Following suggestions of DGs4, we increased the number of shots from 8 examples up to 32 examples (64 did not fit into the 8k context window of the old GPT-4 version we are benchmarking). However, we did not observe any performance improvement, as reported below.
|Model|Task |Termination Acc.|Match Acc.|Task Acc.|
|-----|---|----|----|----|
|GPT-4|PEN|0.16|0.06|0.0|
|Gemini-Pro|PEN|0.15|0.2|0.0|
|GPT-4|PERM|0.36|0.59|0.0|
|Gemini-Pro|PERM |0.32|0.05|0.0|
We believe that the helpfulness of the additional examples might be dependent on the considered task and that, in some cases, there could be no improvements or even considerable sensitivity to the number of prompts.
## Different architectures/configurations
Several reviewers rightfully questioned whether the architecture we used should be expected to learn compositionally. We obtained results with several different architectures based on the Universal Transformer (UT) [61] and the more recent Hyper-UT [17], for both of which there is evidence that they exhibit better compositional generalization than the original Transformer. For instance, a UT-style transformer displayed good performance on compositional tasks like SCAN [61]. The model based on UT reached a similar performance as the original LLaMA model, as documented in Appendix B.5, which placed it into the H4 category. We then also experimented with a Hyper-UT-style LLaMA, based on the aforementioned Hyper-UT work [17], in which each layer selects its weights from a common weight embedding pool to realize a parameter-efficient model. This model consistently achieved accuracies below that of LLaMA on a large set of small algorithmic tasks. Thus, we chose not to systematically explore its performance in a more challenging compositional algorithmic setting. However, to improve the generalizability of our findings, we will include results with this particular architecture in a new Appendix.
Pdf: /pdf/86c4f0870ff84e1eb4369ccc4d60e9df37680181.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-Reward Best Policy Identification | Accept (poster) | Summary: The present article extends the track-and-stop approach of Garivier et al. to a multi-reward MDP setup. Given an MDP problem with a finite number of reward functions, the aim is to develop an algorithm that learns optimal policies for all reward functions simultaneously. Under (drastic) assumptions, the authors present a sample-based variant of track-and-stop in the multi-reward setup. The algorithm is based on replacing the theoretical model complexity $T^*$ (in a multi-reward variant) from Garivier et al. by an upper bound $U^*$ that can be estimated. Estimating $U^*$ during the exploration phase results in a practically implementable termination condition that yields worse complexity than the theoretical termination condition. The algorithm is tested on a few (very similar) tabular examples and compared to simple algorithms. An educated guess is performed to design a deep variant of the algorithm that is tested on a multi-reward cartpole variant and Deep Sea. The results easily beat the (terrible) results of the benchmark algorithms used.
Strengths: The article is extremely dense with information; perhaps it would have been better to split it into 2 or 3 articles. The theoretical development is an extension of several earlier articles of the authors, with very detailed mathematics. While the tabular examples are of limited practical interest (the assumptions are just too strong), the deep variant is interesting. While I am not so sure about the relevance of the tabular setting, the relevance in the deep setting is obvious (and was realised by the authors for cartpole). The generalisation ability of NNs makes it interesting to train the network simultaneously on perturbations of the true reward function in order to learn a more stable policy that might work well for even more reward settings.
The scientific quality of the article is very high. There are hardly any typos. I very much appreciate the critical discussion of limitations; this is far above the rather low scientific standard in ML. I could not check the entire proofs of the appendix; what I read seemed solid.
Good article! I am curious to follow up on the future development of the robust DQN variant.
Weaknesses: - I did not enjoy reading the article very much as I was pushed into first reading other articles to get a rough understanding of what is going on. Even reading the basis of the present article ([17], [40]) was not enough. To get an idea why $T^*$ is considered, one needs to go all the way back to [20], and even further. It feels a bit like a community wants to stay among each other; the usual RL researcher is excluded by the style of writing. I would suggest completely skipping the discussion at the end (almost a page) and instead using the space to explain to the reader what the technique is about and why the theoretical estimate naturally leads to Algorithm 1.
- The assumptions are drastic and should be discussed more clearly. I guess the authors have a bandit background and skip discussions of issues that are typical for bandits, in particular assuming knowledge of rewards and/or the reward gaps. While this is not unusual in bandits, it is very unusual in RL. I am perfectly fine with such theoretical results, in particular as the authors implemented an educated-guess algorithm in the deep setting that addresses this issue with the reward gaps.
Technical Quality: 4
Clarity: 2
Questions for Authors: Here is a number of questions and comments.
- $\alpha$ should also be an input for Algorithm 1. Why is there no dependence on $\alpha$ in the theoretical results?
- How do you compute U in Algorithm 1 without knowing the rewards gaps? You estimate M, but what about the gaps? They are estimated in the deep setup, why not in the tabular setting?
- I think the algorithms are not sufficiently explained. In the tabular case, $M_t$ should be explained in more detail. What shall the reader get from "use estimate $M_t$ of the model" and "update $M_t$" without knowing $M_t$? In its present form the article is hardly comprehensible (even for me as a mathematician). In that generality (what estimate is used?) I am a bit sceptical about the validity of Theorem 3.3, but as said, I could not try to understand all proofs.
- The use of Borel-Cantelli in the proofs is not correct as it is. The events are not independent and this direction of Borel-Cantelli requires (pairwise) independence. I am pretty sure the SARSA paper contains the details on how to use a conditional independence version of Borel-Cantelli.
- I am a bit puzzled about the connection of Theorem 3.1 and 3.3. If I am not wrong, Theorem 3.1 is a bandit theorem, requiring to play independent "arms" $(s,a)$, while Theorem 3.3 (the algorithm) is an RL theorem requiring rollouts $(s_0,a_0, s_1,a_1,...)$. The latter is obviously much harder and depends a lot on the exploration method. Unfortunately, I have six 50-page NeurIPS papers on the table and cannot go fully through every proof. Could you add two sentences why the theorem should hold? For instance, SARSA convergence holds by rewriting SARSA as Q-learning plus an error that goes to zero by the exploration condition. Here, the situation is much harder.
- A crucial point is to replace $T$ by $U$. In line 169 you mention an upper bound for $U$, which has an additional $1/\Delta^2$ compared to the theoretical bound. Is there a better theoretical bound? The authors should at least discuss briefly in the main text that there is something non-trivial going on (as in the SARSA paper where on-policy exploration is essentially as good as off-policy exploration). As it is it one quickly overlooks the difference in the algorithm to the "allocation".
- I cannot see a scaling in the number of rewards. Is there any theoretical understanding on how $\tau$ scales in the number of rewards?
- How does your algorithm compare to just training for one reward after the other? Does that make sense? The training time for cartpole seems quite high, but I might be wrong. Typically cartpole plots show the number of episodes (ca. 200-300), not the number of steps. Is it clear your algorithm is faster than just training cartpole 5x? How about comparing your algorithm to training cartpole 5x with your different reward settings but keeping the neural network from the previous reward (and using some replay buffer to avoid overfitting)? My feeling is that such a naive approach might be better than the benchmarks used.
- Similar question: Comparing your deep algorithm to performing several deep trainings for different rewards. Is your approach particularly suitable to avoid overfitting? Does your algorithm help with generalisation (as you see in the cartpole example)?
Confidence: 2
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: The tabular setting requires drastic assumptions (but gives a clean analysis); the deep setting is an interesting educated guess but does not allow a mathematical analysis. I would say that is quite normal for RL, except, perhaps, that the deviation of the educated-guess Algorithm 2 from the tabular Algorithm 1 is pretty big.
There is no good benchmark to compare to, which makes the numerical results a bit boring. Perhaps a better benchmark would be to compare to training the policies individually. While that might even beat the proposed algorithm in the tabular case (for few rewards), I doubt the same holds for the deep case, as the NN would only remember how to solve the last reward problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review and the time spent on our paper. Below, we address each concern in detail.
>I was pushed into first reading other articles to get a rough understanding of what is going on
We acknowledge the current format assumes familiarity with related literature. We will add a background section to make the technique accessible to a broader audience.
> How do you compute $U$ in Algorithm 1 without knowing the rewards gaps?
We briefly explain the procedure in lines 178-182 and we do not assume knowledge of the gaps in the algorithm.
We use the current estimate of the model $M_t$ and the set of rewards to compute the sub-optimality gaps. We use the data gathered up to time $t$ to estimate the transition function $P_t$ and perform value iteration for each reward to find the sub-optimality gaps. These estimates are used in place of the true gaps to compute $U$.
This technique is known as "certainty-equivalence" principle, meaning that we use the current estimates as if they were exact when computing $\omega_t^\star$. Note that, as more data is gathered, the estimate of the transition function improves, and $\omega_t^\star$ tends to $\omega^\star$.
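A rough sketch of this certainty-equivalence step (the toy MDP and function names are ours, for illustration only): plug the empirical transition estimate $P_t$ into value iteration for each reward and read off the estimated sub-optimality gaps $\Delta(s,a) = V^\star(s) - Q^\star(s,a)$.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration on an estimated model.

    P[s][a][s'] is the (empirical) transition probability, R[s][a] the reward.
    Returns the optimal Q-table for this model.
    """
    nS, nA = len(R), len(R[0])
    V = [0.0] * nS
    while True:
        Q = [[R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(nS))
              for a in range(nA)] for s in range(nS)]
        V_new = [max(Q[s]) for s in range(nS)]
        if max(abs(V_new[s] - V[s]) for s in range(nS)) < tol:
            return Q
        V = V_new

def gaps(Q):
    """Sub-optimality gaps Delta(s, a) = V(s) - Q(s, a) from a Q-table."""
    return [[max(Qs) - q for q in Qs] for Qs in Q]

# Toy 2-state, 2-action MDP with an empirical transition estimate P_t:
# action 0 moves to state 0 (reward 0), action 1 moves to state 1 (reward 1).
P_t = [[[1.0, 0.0], [0.0, 1.0]],
       [[1.0, 0.0], [0.0, 1.0]]]
R   = [[0.0, 1.0], [0.0, 1.0]]
Q = value_iteration(P_t, R)
print(gaps(Q))
```

In the algorithm this computation is repeated for every reward $r$ in the set, and the resulting gap estimates replace the true gaps when computing $U$.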
> I think the algorithms are not sufficiently explained. In the tabular case $M_t$ should be explained in more detail
Currently, the main text provides a brief description: lines 178-182 outline how $M_t$ is updated by estimating the transition function $P_t$, and lines 183-189 explain how to compute the instance-dependent quantities to determine $U$.
In the revised manuscript we will provide a more detailed explanation of the algorithm.
> Why is there no dependence on $\alpha$ in the theoretical results?
The main theoretical results are asymptotic in $\delta$, causing $\alpha$ dependence to vanish. Intermediate results do show $\alpha$ dependency (e.g., see Remark C.2, line 1099).
> The use of Borel-Cantelli in the proofs is not correct as it is. The events are not independent
We rely on Observation 1 in [70] (in the text of the proof there is a typo, since there is no Lemma 4 in [70]). We use the fact that there exists another stochastic process (the one provided by the forcing policy) such that the sum of the probabilities with which action $a$ is chosen is infinite. This sum lower bounds the sequence of probabilities of selecting action $a$ and satisfies the independence conditions required by the Borel-Cantelli Lemma.
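For reference, the conditional version of the lemma that this argument relies on (Lévy's extension of the Borel-Cantelli lemma, which replaces independence by conditioning on the past) can be stated as:

```latex
% Lévy's extension of the Borel-Cantelli lemma:
% for events (A_n) adapted to a filtration (\mathcal{F}_n),
\left\{ A_n \ \text{infinitely often} \right\}
\;=\;
\left\{ \sum_{n \ge 1} \mathbb{P}\left( A_n \mid \mathcal{F}_{n-1} \right) = \infty \right\}
\quad \text{almost surely}.
```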
> What is the connection of Theorem 3.1 and 3.3?
Both theorems are for MDPs. Deriving Theorem 3.1 requires additional steps compared to the bandit setting, and the optimal allocation $\omega_{opt}$ must satisfy the forward Chapman-Kolmogorov equation.
To explain how the two theorems are connected, we consider an information-theoretical approach.
Firstly, note that the optimal exploration strategy induces the stationary distribution $\omega_{opt} =\arg\sup_{\omega \in \Omega(M)} \inf_{r,M'\in Alt(M_r)} \mathbb{E}\_{(s,a)\sim\omega}[{\rm KL}\_{M|M'}(s,a)]$, and $T^\star$ can be interpreted as the average amount of information extracted per time-step from the MDP at stationarity under $\omega_{opt}$.
Secondly, as $T^\star$ is a hard non-convex optimization problem, we instead use $U^\star$ in the design of our algorithm.
As more data is collected, the estimate of the model $M_t$ tends to $M$, and one can show that $\omega_t^\star$ approaches the optimal stationary distribution $\omega^\star$. Asymptotically, the algorithm will visit the MDP according to $\omega^\star$, so the amount of information extracted approaches $U^\star$. Hence, with a proper stopping rule, this is roughly why, as $\delta \to 0$, the sample complexity of the algorithm approaches the value $U^\star$ in Theorem 3.3.
> You mention an upper bound for $U$, which has an additional $1/\Delta^2$ compared to the theoretical bound. Is there a better bound?
Please note that $U$ also has a dependency on $1/\Delta_r^2$. In general, we believe that it is hard to find a better dependency on the gaps that does not rely on some specific property of the MDP.
>Is there any theoretical understanding on how $\tau$ scales in the number of rewards?
The result in Theorem 3.1 indicates that the scaling depends on the "worst" reward in the considered set. Practically, according to $U^\star$, the sample complexity is dictated by the minimum sub-optimality gap $1/(\inf_{r\in {\cal R}} \Delta_r^2)$ (and $\tau$ will scale as $U^\star \ln(1/\delta)$). This scaling is consistent with the scaling obtained in classical reward-free exploration, see for example [17; A]. In [A], the authors show a similar result in a different setting, where the sample complexity also scales according to the worst reward (with a similar $1/\Delta^2$ scaling, though their gap definition is slightly different).
> How does your algorithm compare to just training for one reward after the other? Typically cartpole plots show the number of episodes, not the number of steps. Does your algorithm help with generalisation?
The optimization problem formulated to compute $\omega^\star$ shows that the scaling depends on the worst-case reward in the set (i.e., the reward with the minimum sub-optimality gap) rather than the number of rewards. Consequently, a sequential exploration process would generally not be optimal (since that would scale according to the number of rewards). Such an approach would likely increase training time and associated costs.
Regarding the Cartpole swing-up experiments, each episode may have a different number of steps. Further note that we use a modified version of the Cartpole swing-up problem, designed to assess the exploration properties of RL algorithms.
Lastly, with sequential training, the ability to generalize may be limited, especially across very different rewards, reducing the effectiveness of such an approach.
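To make the worst-reward scaling concrete, here is a toy numeric sketch (the gap values, and the use of $1/\Delta_r^2$ as a per-reward cost proxy, are illustrative assumptions following the discussion above):

```python
# Toy illustration: with per-reward sub-optimality gaps Delta_r, the
# sample-complexity proxy 1/Delta_r^2 behaves very differently under
# sequential vs. joint exploration (all gap values here are invented).
gaps = {"r1": 0.5, "r2": 0.125, "r3": 0.25}

# Sequential training: pay the exploration cost of every reward separately.
sequential_cost = sum(1.0 / d**2 for d in gaps.values())

# Joint exploration (as in MR-NaS): the cost is dictated by the worst
# (smallest-gap) reward, not by the number of rewards.
joint_cost = max(1.0 / d**2 for d in gaps.values())

print(sequential_cost, joint_cost)  # 84.0 64.0
```

The joint cost stays fixed as more (easier) rewards are added to the set, whereas the sequential cost keeps growing with the number of rewards.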
**Other ref.**
[A]. Jedra et al. "Nearly optimal latent state decoding in block mdps." AISTATS 2023.
---
Rebuttal Comment 1.1:
Title: Thanks for the answers - I will keep my 7.
Comment: Thanks for your answers! I believe the article is really good, and I would even consider an 8 if it were understandable to more than a handful of experts in the field. In your own interest in spreading your ideas to a wider group of researchers, I suggest really using the revision for more explanations.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your support! We completely agree with you. In the revision, we will make sure to provide clearer explanations of the background and main ideas behind this family of techniques. | Summary: This paper studies the problem of best policy identification for RL with multiple rewards. The goal is to efficiently identify the best policy for given rewards with a high-level confidence. Authors provide an instance-dependent lower bound for the studied problem and introduce a provably-correct algorithm for a convex approximation of the original problem in the tabular setting. Extensions to deep RL is discussed with numerical results.
Strengths: The authors demonstrate how to efficiently identify optimal policies across multiple rewards. The strengths of this work are summarized as follows:
1. The studied setting is interesting and of practical concern when we seek to optimize the performance across a set of rewards.
2. An instance-dependent lower bound is identified for any Probably-Correct algorithm in the setting of multi-reward best policy identification problem.
Weaknesses: While this paper clearly articulates its main idea, here are some problems and potential improvements that the authors should address and consider. More specifically,
1. Environments for the deep RL part are too simple. The Cartpole swing-up and DeepSea environments cannot fully demonstrate the performance of the proposed algorithm in more complex, real-world scenarios. It would be beneficial to include experiments on more challenging benchmark environments for better assessment of scalability and practical applicability.
2. Policy optimality is only considered and defined for stationary deterministic policies (as in Definition 2.1), which can be too restrictive. It is not clear whether, when considering the set of Markovian policies (which can be stochastic), the proposed lower bound still holds and whether the performance of the algorithm remains optimal.
3. Theoretical guarantees for the deep RL extension are unavailable. Sample complexity bounds are only provided for tabular settings, which leads to dependencies on the cardinality of the state and action spaces. Moreover, the empirical studies for the deep RL settings are not sufficiently convincing due to the simplicity of the environments.
4. In terms of the theoretical results for the lower bound, the proof structure closely follows prior results (e.g. Marjani et al. 2021, Taupin et al. 2022) in single-reward RL. It is not quite obvious what the main technical challenges and novelties are in extending the prior results (e.g. Marjani et al. 2021) from single-reward RL to multi-reward RL.
5. While the studied setting can be interesting, the relationship between the MR-BPI problem and reward-free RL, as well as multi-objective RL, remains somewhat unclear throughout the paper. Though discussion has been provided in Sections 1 and 5, I am not fully convinced that reward-free RL cannot solve the practical scenario concerned in the multi-reward best policy identification problem. Indeed, reward-free RL assumes rewards are unknown, whereas the studied setting assumes knowledge of the rewards. As a result, it is not surprising to see that properly utilizing the knowledge of rewards can lead to better performance, as shown in the numerical results: the proposed algorithm (MR-NaS) significantly outperforms RF exploration strategies (RF-UCRL, ID3AL). However, reward-free RL is a more general class of algorithms and can be particularly useful in practice when it is hard to accurately learn rewards or when rewards are sparse. As such, the emphasis of the two settings is rather different. It might not be a fair comparison, and it is desirable to provide the fundamental reasons that can explain such performance improvement in the numerical experiments. Therefore, more thoughtful insights should be provided to clearly explain the difference and relationship between these settings.
6. Some minor aspects:
- Grammatical errors need to be addressed, e.g. Line 354, line 91.
- In lines 75-80, if only deterministic policies are being considered, it is more appropriate to write $a_t = \pi(s_t)$, etc. Do not use the probability simplex notation for policies.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Do we treat MR-BPI as dedicated exploration strategies for multi-objective RL?
2. Most reward-free RL focuses on episodic settings, whereas this paper studies discounted settings for multi-reward MDPs. Is there any particular reason for choosing this (simpler) setting? Do you foresee any technical challenges that can be difficult to resolve in episodic multi-reward settings?
3. In Line 81 - 86 (Set of rewards), for each reward vector, each coordinate $i$ represents the reward for the $i$-th state-action pair, which is a scalar, why do we need the canonical basis $\mathcal{R_{canonical}}$ of rewards, where each element is a vector? (When you write $\mathbb{R}^{SA}$, I assume each element is a real number, not a vector). Do you assume for each $(s, a)$, there is a different reward function? Could you provide a concrete example of your definition of "set of rewards" with your notation? I assume you intended to say there exists $m$ (global) reward functions, and this set of functions $\mathcal{R}$ is thus in $\mathbb{R}^{m \times SA}$.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: There are no unaddressed ethical considerations in the studied context.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and for the time and effort spent reviewing our paper. Below we provide detailed responses to the main concerns raised by the reviewer.
> Simple environments for deep RL
We appreciate the reviewer's concern. However, the selected environments are intentionally designed as hard exploration challenges, as referenced in [36, 40, 43].
Additionally, we included experiments for a real-world application in appendix D.3 (a radio network control problem). These experiments show the effectiveness of our algorithm in a realistic environment using an industry-level network simulator.
> Optimality is only considered and defined for stationary deterministic policies
For finite MDPs, the optimal policy is stationary and deterministic (see Theorems 6.2.9 and 6.2.10 in [39]). Therefore, in this context, the lower bound holds, and the performance of our algorithm is asymptotically optimal with respect to $U^\star$.
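This classical fact can be illustrated with a tiny value-iteration sketch (the two-state MDP below is invented for illustration; taking the greedy policy with respect to the converged $Q$-function yields a single deterministic action per state):

```python
import numpy as np

gamma = 0.9
# Invented 2-state, 2-action MDP: transition tensor P[s, a, s'] and rewards R[s, a].
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.1, 0.9]],
])
R = np.array([[1.5, 0.0], [0.0, 2.0]])

V = np.zeros(2)
for _ in range(500):          # value iteration to numerical convergence
    Q = R + gamma * (P @ V)   # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V = Q.max(axis=1)

# The greedy policy is stationary and deterministic: one action per state.
pi = Q.argmax(axis=1)
print(pi)  # [0 1]
```

No stochastic policy can improve on this greedy policy in a finite discounted MDP, which is exactly why restricting the analysis to stationary deterministic policies is without loss of optimality here.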
> Theoretical guarantees for the deep RL extension
We acknowledge the absence of theoretical guarantees for the deep RL extension.
However, our primary goal was to establish a foundational understanding of the multi-reward setting for finite MDPs, which is crucial for building towards more complex cases.
Providing theoretical guarantees for deep RL would merit its own manuscript.
> Main technical challenges and novelties of extending the prior results from single-reward RL to multiple-reward RL.
For the lower bound, as in prior work, the proof builds upon classical change of measure arguments. The main difference with respect to the single-reward setting lies in the additional optimization over the set of rewards.
However, there are several differences compared to prior work, highlighted throughout the paper. For example:
1. Concerning the relaxed characteristic rate, deriving a tractable optimization problem for $U^\star$ in the presence of a continuous set of rewards is challenging, since the minimum sub-optimality gap can be non-convex (see Sec. B.4 of the appendix). To address this issue, we show that it is sufficient to consider a finite set of rewards. This finding is used in our numerical results (see Sec. D.1.2 in the appendix or Sec. 3.4 in the main text).
2. Regarding the algorithm, we remove the need for an MDP-specific constant in the forced exploration (see line 195), which limited the usability of NaS [17]. We further provide a novel high-probability forced exploration result (see also remark C.2, page 35).
3. Regarding the extension to Deep-RL, prior works in BPI mostly consider the tabular case. Furthermore, the extension of [40] to the multi-reward setting is non-trivial. First, by inspecting $U^\star$, we design an exploration procedure that explores according to the difficulty of the rewards. Secondly, we experimented with various architectural changes to the neural networks to accommodate the multi-reward setting.
> I am not fully convinced that reward-free RL cannot solve the concerned scenario [...] reward-free RL is more general and can be useful when it is hard to accurately learn rewards or when rewards are sparse.
As correctly noted by the reviewer, considering a set of known rewards can provide superior performance with respect to the more general reward-free RL setting and this is the main motivation behind our setting.
Determining the best policy for any possible reward is a harder problem, and this level of generality is unnecessary in many practical problems. Hence, for applications where there are only a finite number of rewards to optimize, the multi-reward setting provides a better model choice than the reward-free setting. We provide a concrete example of this scenario in Appendix D.3.
Regarding the sparsity of the reward, our method can also handle scenarios with sparse rewards, as demonstrated in our numerical results.
Lastly, while the comparison may not be completely fair, there are currently no other pure-exploration algorithms in the multi-reward setting (and reward-free RL is one of the closest settings to ours). For this reason, we compare against different reward-free methods, as well as an adaptation of PSRL [35] to the multi-reward setting that we provide in the appendix.
> Do we treat MR-BPI as dedicated exploration strategies for multi-objective RL?
The multi-reward setting, as well as reward-free RL, shares some similarities with multi-objective RL. In multi-objective RL, one seeks to identify the Pareto frontier of non-dominated policies. This objective is, in general, significantly harder than identifying the optimal policies for a finite set of rewards.
> Is there any particular reason for choosing the discounted setting instead of the episodic one?
While most reward-free approaches focus on episodic settings, in practice, many algorithms used in industry employ the discounted setting. Our goal was to strike a balance between providing theoretical results and developing a practical algorithm that practitioners can readily apply.
We believe our results in the discounted setting can be extended to the episodic one, leveraging recent advances in [A].
> Why do we need the canonical basis of rewards? Could you provide a concrete example of ”set of rewards” with your notation?
Each element of the canonical basis is a vector of size $SA$. For example, $e_1$ represents a sparse reward where $r(s_1,a_1)=1$ and $0$ otherwise, expressed in vector form as $e_1 = \begin{bmatrix} 1 & 0 & \dots & 0 \end{bmatrix}^\top$. Similarly, $e_2$ is a reward vector that assigns a reward of $1$ to the second state-action pair and $0$ otherwise.
In some experiments we explored according to this canonical basis of rewards (as previously explained, see also Sec B.4 in the appendix). For a concrete example, we also refer the reviewer to section B.4.1 of the appendix.
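To make this notation concrete, here is a minimal numpy sketch of the construction above (the sizes $S = 3$, $A = 2$ are arbitrary):

```python
import numpy as np

S, A = 3, 2              # small state/action space (sizes are arbitrary)
basis = np.eye(S * A)    # canonical basis of R^{SA}

# e_1: sparse reward with r(s_1, a_1) = 1 and 0 otherwise.
e1 = basis[0]
# e_2: assigns reward 1 to the second state-action pair, 0 otherwise.
e2 = basis[1]

# A finite set of m rewards is a collection of such vectors of size SA,
# i.e., an m x SA array.
rewards = np.stack([e1, e2])
print(rewards.shape)  # (2, 6)
```

Each row is one (global) reward function over all state-action pairs, matching the reviewer's reading that a set of $m$ rewards lives in $\mathbb{R}^{m \times SA}$.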
**Additional references**
A. Al-Marjani et al. "Towards instance-optimality in online pac reinforcement learning." arXiv:2311.05638 (2023).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response.
As pointed out by the authors, the studied multi-reward setting, reward-free RL, and multi-objective RL do share similarities, with different emphases respectively. Hence, it would be important to provide a rigorous comparison between all these settings to clearly credit their respective strengths and articulate the differences. It is essential to acknowledge such differences, especially when the comparison is not fair, so as to avoid misunderstanding in future studies. Effort should be made to provide a better understanding.
As such, I prefer maintaining my current score, while not championing a reject. I do think results in the multi-reward setting could be interesting, but a more careful revision would be extremely beneficial.
---
Rebuttal 2:
Comment: We appreciate the reviewer's feedback and understanding.
However, we have already acknowledged the differences between our work and these other settings (see lines 33-42 and the discussion in Section 5), and we are committed to further clarifying the distinction.
We have already extensively compared our approach with reward-free RL in both the tabular and deep settings (both in the main text and in the appendix), as it is the closest existing framework to ours in terms of exploration objective.
Regarding multi-objective RL, we remark that our focus is on a different exploration objective, since we are not interested in identifying the Pareto frontier of non-dominated policies. Currently, there are no best policy identification techniques available for multi-objective RL, making a direct comparison impractical (which would merit a separate, dedicated study).
We hope these clarifications address the reviewer's concerns and help in understanding the scope and contributions of our work. | Summary: The paper addresses the challenge of identifying the best policy in RL when there are multiple rewards. The authors get a lower bound on the sample complexity and design an optimal exploration policy. The authors propose two algorithms: MR-NaS for tabular environments and DBMR-BPI for Deep RL. These algorithms efficiently identify the best policy across multiple rewards in RL.
Strengths: The paper presents a comprehensive and well-balanced analysis of theoretical and empirical results. The appendix provides supplementary evidence that strengthens the authors' arguments.
Weaknesses: 1. The paper is inspired by [17], but it would be better to explicitly acknowledge this inspiration in the main part, rather than mentioning it only in Remark C.2. Furthermore, a more in-depth discussion of the proof challenges caused by the novel forcing policy would strengthen the paper's contribution.
2. Even though more details are covered in the appendix, the paper should provide some details on the convex optimization. For example, a discussion of the computational costs associated with these methods would provide valuable context for readers. Sometimes, the computational cost of convex optimization methods is high.
3. The abstract would be better if providing a more impressive motivation for multi-reward RL, emphasizing its significance and potential impact.
4. Some innovative aspects are not put in the main text, leaving vital details to be discovered in the appendix. This can lead to important contributions being overlooked.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please address the issues mentioned in the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: As noted in the paper, the assumption of a unique optimal policy limits the method's applicability to a broader range of scenarios. The computational cost of convex optimization may prevent it from dealing with large-scale datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and valuable feedback. We appreciate the time and effort spent reviewing our paper, as well as your positive comments on the comprehensive analysis of theoretical and empirical results.
Below, we address each of your concerns and outline the corresponding revisions we plan to make to improve the paper.
> The paper is inspired by [17], but it would be better to explicitly acknowledge this inspiration in the main part [...] a more in-depth discussion of the challenges in proof caused by the novel forcing policy would strengthen the paper’s contribution.
We would like to clarify that we acknowledge this inspiration in multiple points, including at the beginning of Sec. 3 (line 118), Sec. 3.1 (lines 133-134), and Sec. 3.3 (line 172). To further highlight this point we plan to include this sentence in the introduction:
"Our work is inspired by [17] for single-reward BPI and extends the results and analysis to the multi-reward setting".
Regarding the challenges in the proof, we agree with the reviewer that expanding on these in the main text would enhance the paper and highlight an important contribution. We have already briefly described how our approach removes the need for an MDP-specific constant in the forced exploration (discussed at line 195), which previously limited the usability of NaS [17]. Due to lack of space, more details on the novelty of the forcing policy were left in the appendix. We will make sure to include these in the main part of the paper by using the extra page in the final version of the paper. More precisely, we will further enhance our discussion by highlighting the challenges addressed in our proof and emphasizing our improved result for the high-probability forced exploration theorem, which we believe is of independent interest.
> Even though more details are covered in the appendix, the paper should provide some details on the convex optimization. For example, a discussion of the computational costs [...]. The computational cost of convex optimization may prevent it from dealing with large-scale datasets.
We agree with the reviewer that the paper should provide more details on the convex optimization problem defining $U^\star$.
We plan to include a subsection discussing the technical aspects of solving the optimization problem in the paper (note that currently in appendix D we provide the total runtime of our simulations), including implementation details and further results on the computational cost.
Lastly, for Deep-RL, our algorithm DBMR-BPI has similar computational complexity to DBMF-BPI [40] or Bootstrapped DQN [43] at inference and training time (due to a vectorized implementation of DBMR-BPI).
> The abstract would be better if providing a more impressive motivation for multi-reward RL, emphasizing its significance and potential impact.
We appreciate the suggestion and agree that the abstract could better emphasize the motivation for multi-reward RL. In the introduction, we already highlight the practical importance and potential impact of multi-reward RL and its relevance with respect to existing similar settings. For instance, reward-free RL might not always be applicable in industry. In some applications, there are only a finite number of rewards to optimize.
To illustrate this aspect concretely, in Section D.3 of the appendix, we present an example of applying DBMR-BPI to a radio network control problem using an industry-level network simulator, demonstrating how current reward-free exploration techniques are limited when the number of rewards is finite.
We will make sure to highlight these aspects in the abstract and include these considerations in the final version of the paper.
> Some innovative aspects are not put in the main text, leaving vital details to be discovered in the appendix.
We agree with the reviewer that, due to the limitation in space, some important aspects were left to the appendix. We will integrate the main innovative aspects into a dedicated section by using the extra page in the final version of the paper. This section will include a summary of the theoretical and practical contributions, as well as their implications.
> The assumption of a unique optimal policy limits the method’s applicability to a broader range of scenarios.
While we agree that the uniqueness assumption may be limiting for a continuous set of rewards, we believe that it is less restrictive when the set of rewards is finite, as it is generally easier to guarantee the uniqueness of an optimal policy in such cases.
Furthermore, note that, as discussed in Appendix A (Limitations section), the assumption on the uniqueness of the optimal policy is common in the BPI literature [18, 16, 17, 40, 20]. Addressing MDPs with multiple optimal policies or identifying $\varepsilon$-optimal policies necessitates the use of more complex overlapping hypothesis tests [67]. We refer the reader to this Appendix for more details and further consideration on this extension.
Lastly, while the theoretical results may require this assumption, in practice we found that both MR-NaS and DBMR-BPI perform well in the numerical experiments even in the presence of multiple optimal policies.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal, I've revised my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time and feedback! We are glad our response was able to clarify the concerns you raised, and we appreciate you taking the time to reconsider and update the score accordingly. | null | null | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their comprehensive reviews. We are pleased that you recognize the scientific quality and the rigor of our theoretical and empirical contributions.
Our work strives to bring a comprehensive and well-balanced analysis, bridging both theoretical analysis and practical methods. Our work provides not only fundamental theoretical results in the tabular setting but also practical extensions to deep reinforcement learning (DeepRL).
The numerical experiments for both the tabular and DeepRL cases are conducted on environments designed to be challenging for exploration, highlighting the effectiveness of our approach across different settings.
Lastly, we present a real-world application in Section D.3 of the appendix, where we apply DBMR-BPI to a radio network control problem using an industry-level network simulator. This example emphasizes the relevance and applicability of our contributions.
We hope our responses clarify and address the reviewers' concerns, and we are committed to revising the paper based on their feedback in our final submission. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation | Accept (poster) | Summary: The paper introduces PARD, a graph generation model that combines autoregressive and diffusion models. Traditional autoregressive models are effective but sensitive to order, while diffusion models are permutation-invariant but need many denoising steps and extra features. PARD overcomes these issues by generating graphs block-by-block using a partial order of nodes and edges. It employs a shared diffusion model with an equivariant network and a higher-order graph transformer, supporting parallel training like GPT. PARD achieves state-of-the-art performance on various datasets, including large ones like MOSES, without extra features.
Strengths: 1. This work presents a successful showcase of the combination of autoregressive modeling with diffusion model on graph.
2. The proposed partial order ensures the permutation-invariance in the autoregressive generation process.
3. Impressive experimental results show the effectiveness and efficiency.
Weaknesses: 1. It is unclear how the diffusion model is employed in PARD. Sec 3.1 and the second part of Eq. 6 are not quite relevant to each other. Can you elaborate on that?
2. Please provide proof or reference for some statements. e.g.: 2-FWL expressivity for the proposed higher-order transformer
3. Please provide the results of other baselines on QM9 if possible.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Seems that in Fig. 2 the prior of the diffusion process is not the same as the second part of Eq. 6. What is the choice of the graph distribution at timestep T?
2. The total number of diffusion steps is directly related to number of blocks. I anticipate the generic graphs will have more blocks than QM9. Can you show the total number of diffusion steps for other datasets?
3. Some of the discussions are redundant and can be moved to the appendix, such as the comparison to DiGress and GRAN in Sec. 3.1 and Sec. 3.2, and the energy view in Sec. 3.4. The running time comparison can be moved to the experiment section.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: There is no discussion about the limitation in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Thank you for these detailed questions and for the feedback on improving the presentation. Below we provide detailed responses to all your questions.
>1. **It is unclear how the diffusion model is employed in PARD. Sec 3.1 and the second part of Eq. 6 are not quite relevant to each other. Can you elaborate on that?**
Sure! As you know, diffusion requires input and output to have exactly the same size (same number of nodes and edges), which is shown in Sec 3.1. Hence, to use diffusion for each block’s conditional distribution, we should pad the input (which has B blocks) to the same size of the output (which now has B+1 blocks), that is why in Eq. 6 the second part is actually conditional on the target block’s size, and the first part of Eq.6 captures the distribution of next (i.e. (B+1)-th) block’s size (|.| denotes size). The padding procedure is also shown in Fig. 2, where dashed edges and nodes are added (virtual nodes and edges). After padding, we use the second part of Eq. 6 to model the diffusion process, which is the same as procedures introduced in Sec. 3.1, except that we DO NOT modify all previously generated B blocks: noise is only introduced in the target (B+1)-th block. Please let us know whether this is clear, and we would like to hear your feedback on how to make this easier to understand. Note that you can also find/study the exact implementation in the provided anonymous repo, under pard/parallel/task.py.
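For intuition only, here is a minimal numpy sketch of the padding-and-masking idea described above (the block sizes, edge categories, and noise rate are invented; the actual implementation is in pard/parallel/task.py of the repo):

```python
import numpy as np

rng = np.random.default_rng(0)
n_prev, n_new, n_cat = 4, 2, 3   # nodes in first B blocks / in block B+1, edge types
n = n_prev + n_new

# Pad the B-block graph to the size of B+1 blocks: the dashed nodes/edges of
# Fig. 2 start as virtual entries (category 0 here).
E = np.zeros((n, n), dtype=int)
E[:n_prev, :n_prev] = rng.integers(1, n_cat, size=(n_prev, n_prev))  # generated blocks

# Noise is only introduced on entries touching the target (B+1)-th block.
is_new = np.arange(n) >= n_prev
mask = is_new[:, None] | is_new[None, :]

# One discrete noising step: resample masked entries with probability beta,
# leaving the previously generated B blocks untouched.
beta = 0.5
flip = (rng.random((n, n)) < beta) & mask
E_noisy = np.where(flip, rng.integers(0, n_cat, size=(n, n)), E)

assert (E_noisy[:n_prev, :n_prev] == E[:n_prev, :n_prev]).all()
```

The final assertion captures the key point of the answer: the diffusion in Eq. 6 is conditioned on the previously generated blocks, which stay fixed while only the padded target block is noised and denoised.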
>2. **Proof for the statement: 2-FWL expressivity for the proposed higher-order transformer.**
As the higher-order transformer combines PPGN and Transformer, it shares the same expressivity as PPGN. The reason is simple: the Transformer part (with weaker expressivity) can implement identity mapping, and does not affect the expressivity of the PPGN part. For the 2-FWL expressivity of PPGN, please see the [paper] (https://arxiv.org/abs/1905.11136) for the detailed proof.
>3. **Please provide the results of other baselines on QM9 if possible.**
For QM9, we adopt the more challenging setting that considers explicit hydrogens, as introduced in DiGress. We avoid the easier setting because most baselines, including DiGress and SwinGNN, can achieve 99% accuracy on it, rendering it an ineffective testbed due to the lack of distinction between models. However, we do not have the performance numbers for other baselines as they did not report their numbers in this more challenging setting. Given the time limitation of the rebuttal period, we apologize that re-running all baselines for the new setting is unrealistic, and hope you can understand.
>4. **The total number of diffusion steps is directly related to the number of blocks. I anticipate the generic graphs will have more blocks than QM9. Can you show the total number of diffusion steps for other datasets?**
As shown in Appendix Table 6, we can achieve the same result using only 140 total diffusion steps in QM9. To demonstrate the number of blocks for a generic setting, we report the numbers for the grid dataset, which has the largest graph size (100~400). For this dataset, there is a maximum of 27 blocks, with an average of 18.625 blocks. With the default 40 steps we used, the total number of diffusion steps is 745. Note that we haven't conducted an ablation study to reduce the number of diffusion steps as shown in Table 6, and we expect that a smaller number of diffusion steps can be used without sacrificing performance.
>5. **Some of the discussions are redundant can be moved to appendix**
Thank you for your suggestion, we fully agree that the part related to DiGress and GRAN can be moved to the appendix, and we will make this adjustment. Please let us know if you think other interesting results should be moved to the main section.
>6. **There is no discussion about the limitation in the manuscript.**
We briefly discuss the limitations in the NeurIPS checklist section, but we would like to elaborate further here. One major limitation is that each block currently uses the same number of diffusion steps, which is not ideal and can result in unnecessary sampling steps. Empirical evidence shows that approximately 50% of blocks can be predicted directly without any sampling steps, as they do not encounter transformation problems (introduced in Sec. 3.3). It would be highly beneficial to develop a hardness measure (such as graph energy) for the symmetry problem and use this measure to determine the appropriate number of diffusion steps for each block. This approach could significantly reduce the total number of diffusion steps.
---
### If you feel that we have addressed most of your concerns, please consider supporting us in achieving a better overall score. Thank you for your time.
---
Rebuttal 2:
Title: Thank you for the rebuttal
Comment: Thank you for the response. I want to mention that one of the main contributions of this work is its improvement in efficiency. Given the discussion, the inference time of the proposed method highly relies on the number of blocks and does not necessarily offer efficiency gains on large graphs in the current version. However, the idea itself is interesting and the results are promising. Therefore, I have increased my score to 6 to support this work.
---
Rebuttal 3:
Comment: Thank you for your further support. We want to mention that for larger graphs, our Pard should still outperform DiGress in both efficiency and performance. When considering larger graphs, each block's number of diffusion steps can be further reduced, as there is dramatically less symmetry inside. In fact, we have found that DiGress struggles to perform well on the Grid dataset (the largest dataset we considered) given the rich symmetry inside. Our Pard breaks symmetry easily with its AR mechanism, such that later blocks contain less and less symmetry and many blocks do not need any diffusion steps (50% in the QM9 dataset). In the final version of the paper, we will add the ablation study on Grid to show the effectiveness in both efficiency and performance compared to the single-block version of DiGress. Finally, we appreciate the discussion you provided, and we are grateful for the feedback that helped improve our paper. | Summary: This paper proposes a graph generation method that combines AutoRegressive (AR) models and diffusion models. By utilizing a unique partial order, it addresses the issue of non-exchangeable probabilities in AR models and the efficiency problem in diffusion models.
Strengths: 1. The proposed block-wise AR diffusion model in this paper offers a new idea for graph generation, particularly by introducing the use of weighted degree to differentiate blocks.
2. The limitations of equivariant networks demonstrated in this paper also hold value for further exploration and resolution within the community.
3. The overall structure and writing of the paper are relatively clear.
Weaknesses: 1. There is a part in the paper that I believe needs to be clarified more clearly to ensure logical coherence. Why does diffusion based on an equivariant network solve the flaw in equivariant modeling? I think, besides the analogy of tempering iron (or higher/lower energy), more mathematical proofs are needed.
2. Ablation of PPGN is necessary to demonstrate its effectiveness.
3. Following the experimental settings of GDSS, NSPDK is also an important metric for QM9 and ZINC250K.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there a reasonable explanation for the significant improvement of FCD on ZINC250K compared to other baselines? Similarly, why is there such a large difference in performance between Scaf. and baseline methods on MOSES?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Thank you for your positive feedback on our paper; we would like to address all your questions in detail.
>1. **Why does diffusion based on an equivariant network solve the flaw in equivariant modeling?**
The underlying magic behind the randomness introduced in the diffusion process is relatively straightforward. During each training and inference step, noise is added to the current graph structure (assumed to have many symmetries) through resampling. This process modifies the graph to a different structure with fewer symmetries. Consequently, the number of sampling steps is crucial in controlling the degree of symmetry breaking. A larger number of sampling steps increases the likelihood of successfully breaking the current symmetries, thereby making it easier to denoise to the target graph. This concept shares similarities with the findings of Abboud et al. in their paper "The surprising power of graph neural networks with random node initialization." We acknowledge the need for a formal proof of its connection to graph energy, and we have highlighted this in our paper to encourage future research. Note that our paper is the first work mentioning this issue of equivariant generative models and pointing out the connection to diffusion. We hope this can motivate future research on equivariant generative modeling.
>2. **Ablation of PPGN is necessary to demonstrate its effectiveness.**
We have done an ablation study on the QM9 dataset. Please see the following table.
|QM9|Pard w. Transformer |Pard w. PPGN |Pard w. PPGNTransformer|
|-|-|-|-|
|Maximum Hops|3|3|3|
|Total Diffusion Steps |140|140|140|
|Validity |26.3 |96.6 |97.1 |
|Uniqueness| 94.5|96.3 |96.0 |
|Mol stability|17.6|84.67| 86.2 |
|Atom Stability|81.4|98.2| 98.4 |
Interestingly, this indicates that PPGN is essential for diffusion with autoregression. As the transformer architecture is used in full-graph diffusion like DiGress, we hypothesize that the architecture is less critical when diffusion is used directly without any autoregression. Hence, we did another ablation by setting the maximum hops $K_h=0$. At $K_h=0$, all nodes have a fixed degree of 1, resulting in a single block per graph, equivalent to full-graph diffusion (only) without autoregression.
|QM9|Pard w. Transformer |Pard w. PPGN |Pard w. PPGNTransformer|
|-|-|-|-|
|Maximum Hops|0|0|0|
| Total Diffusion Steps |140|140|140|
| Validity|93.1|93.3|93.8|
|Uniqueness|96.3|96.7|96.9|
|Mol stability|74.7|77.1|76.4|
|Atom Stability|97.6|97.9|97.7|
The above result confirms that performance is not very sensitive to architecture when **no** autoregression is employed.
>3. **For experimental settings of GDSS, NSPDK is also an important metric for QM9 and ZINC250K.**
We apologize for unintentionally omitting the NSPDK metric. We have now implemented it and computed it for QM9 and ZINC250K. Notice that we do not have numbers for the baselines on the QM9 dataset, as we use the harder QM9 setting with explicit hydrogens, which was introduced in DiGress. For reference, SwinGNN achieves 2.01e-4 NSPDK in a simpler setting without hydrogens. We will update the table in the final version to include the NSPDK metric.
|NSPDK |QM9|ZINC250k|
|-|-|-|
|Pard |2.40e-04|**7.10e-04**|
|DiGress| \-|8.75e-3|
|SwinGNN|\- |1.64e-03|
|GDSS |\- |1.80e-02|
>4. **Is there a reasonable explanation for the significant improvement of FCD on ZINC250K compared to other baselines? Similarly, why is there such a large difference in performance between Scaf. and baseline methods on MOSES?**
Our method's superior performance on these metrics can be attributed to the synergistic combination of autoregressive (AR) and diffusion models, which allows us to better capture the underlying molecular distribution. For both ZINC250K and MOSES datasets, we used the standardized evaluation code provided in the MOSES repository to compute FCD and Scaf. metrics, ensuring consistency with baseline methods. We are confident in the reproducibility of our results and can provide model checkpoints.
However, we are not very familiar with some molecule-specific metrics like Scaf., and we acknowledge that a deeper understanding of them would be beneficial. Future collaboration with domain experts in molecular science could provide valuable insights into these performance differences and help refine our approach further.
---
### We hope we have addressed your concerns and questions, and thank you again for your detailed feedback. If you feel more confident with these additional ablations and explanations, please help us in achieving a better score. Thank you.
---
Rebuttal Comment 1.1:
Comment: As the discussion deadline is approaching, we kindly ask whether you still have additional questions. Given that we have answered all the questions you raised and demonstrated extensive ablations, we would like to hear back from you. | Summary: The work proposes a new graph generative model based on an autoregressive procedure. It proposes an approach to deciding a partial order of graph nodes according to their degrees in a node-removal procedure. Based on the partial order, the work devises a new graph generative model.
Strengths: The graph algorithm of deciding a partial order of graph nodes would be interesting if such an algorithm does not exist in the literature of graph theory.
Weaknesses: The work lacks justification. As the field has moved to generative methods with discrete-diffusion models, which are already permutation-invariant, it is less clear about the advantage of designing a complex autoregressive model to satisfy the permutation-invariant property.
The advantage of the model is not obvious even considering only autoregressive models. Note that Chen et al. [9] have an approach of "optimizing" node orders for the generative model and show that the likelihood calculation is more accurate with their approach than a pre-determined order. How does the work justify its advantage over such an approach?
The analysis in 3.3 does not seem to be reasonable. The **probability calculations** are indeed the same for nodes in the same orbit, but they may get different connections in the sampling procedure and then break the symmetry. The analysis in 3.3 is well known, and it is not a concern for generative models. In some diffusion-based generative models, the starting graph is a graph with no edges, then all nodes are in the same orbit, but it is not an issue at all because the edge sampling process will break the symmetry.
Without clear justification, I don't know where performance improvements are from (maybe architecture improvement?). I feel that the work should have a thorough investigation of the model.
Technical Quality: 1
Clarity: 3
Questions for Authors: How do you justify the advantage of using an autoregressive model with partial order?
Confidence: 4
Soundness: 1
Presentation: 3
Contribution: 2
Limitations: The proposed model seems to have a long running time because it needs to run a diffusion model at the generation of each block.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Thank you for your questions. We have conducted extensive new ablation studies. We want to show that our analysis/motivation is not just a fancy story, but indeed the primary driver behind performance.
>1. **Lack of justification: it is less clear about the advantage of designing a complex autoregressive model to satisfy the permutation-invariant property.**
The reviewer questions why we introduce an autoregressive (AR) approach to a diffusion model, given that the default diffusion model already captures the permutation-invariant property. In DiGress, we observed that full-graph diffusion, while permutation-invariant, requires (1) many sampling/denoising steps to break symmetry and (2) additional features like eigenvectors to further break symmetry. This shows that directly capturing the FULL joint distribution and solving the transformation difficulty (in Sec. 3.3) via diffusion is challenging. Additionally, AR methods still dominate LLMs, indicating their potential to benefit diffusion models. As stated in the abstract, this paper aims to combine the strengths of both approaches: AR’s training and inference efficiency and diffusion’s data efficiency through permutation-invariance.
To verify our analysis quantitatively, we conduct an ablation study on the maximum hops $K_h$, which controls the degree of autoregression. At $K_h=0$, all nodes have a fixed degree of 1, resulting in a single block per graph, equivalent to full graph diffusion (only) without autoregression. Increasing $K_h$ produces more blocks with smaller average block size, indicating more AR steps.
|QM9|Pard (no AR)|Pard|Pard|Pard|
|-|-|-|-|-|
|Maximum hops|0|1|2|3|
| Average number of blocks|1|4.3| 5.6| 7.75 |
| Diffusion steps per block|140| 32| 25| 20|
| Total diffusion steps |140| 140| 140|140|
| Validity| 93.8 | 97.1| 96.7| 97.0|
| Uniqueness|96.9|96.5|96.2| 96.1|
| Mol stability|76.4|86.1|85.4| 86.3|
| Atom Stability|97.7|98.3|98.3| 98.4|
This ablation study maintains 140 total diffusion steps across all trials, using the same model architecture, diffusion algorithm, and training settings. The significant improvement from $K_h=0$ to $K_h=1$ confirms that our enhancement stems from breaking the full joint distribution into several **conditional** distributions, effectively combining diffusion and AR approaches.
>2. **The likelihood calculation is more accurate with Chen[9]’s approach than a predetermined order. How does the work justify its advantage over such an approach?**
The reviewer is concerned that the predefined partial order is not optimal, given that Chen[9] shows that a learned order can be better than a predefined node order like BFS or DFS. We want to clarify that we are not arguing that our predefined **partial** order is optimal; instead, we claim that a **partial order** should be used to replace the absolute order in an autoregressive approach, as it is fully deterministic and a canonical partial order can be easily defined, which is the cornerstone of the permutation-invariance property we achieve. On the contrary, an absolute node order contains randomness (as finding a canonical node order is as hard as solving the graph isomorphism test), cannot achieve permutation invariance, and hence is data inefficient. Our experiments comparing with many AR methods already verify this.
Nevertheless, our predefined **partial** order may not be optimal. In the future, a reasonable follow-up work could be to learn a task-dependent partial order based on graph structures; for example, one could consider the variational approach in Chen[9]. We look forward to seeing better partial orders being discovered.
>3. **The analysis in 3.3 is well known, and it is not a concern for generative models, as the sampling process will break the symmetry.**
The analysis in 3.3 is indeed known in the link-prediction setting, but this is the first time it has been mentioned for graph generation. If the reviewer knows of a paper that has mentioned it in the generation setting, we are happy to add references.
Indeed, sampling breaks the symmetry and makes the transformation achievable under certain conditions. We have clearly stated this in Sec. 3.4, which motivates us to use diffusion for capturing each block's conditional distribution. Nevertheless, as sampling is the only way to help symmetry breaking, many sampling steps are needed in a symmetry-intensive setting. Splitting a FULL joint distribution into several conditional distributions simplifies it, with each conditional distribution containing less symmetry and thus being easier to capture. Hence we show not only a significant improvement in generation quality, but also a great reduction in sampling steps.
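In symbols, the splitting described above can be written as follows (our notation, with $B_i$ denoting the $i$-th block and $K$ the number of blocks):

$$p_\theta(G) = \prod_{i=1}^{K} p_\theta\left(B_i \mid B_1, \dots, B_{i-1}\right),$$

where each conditional $p_\theta(B_i \mid B_{<i})$ is captured by a short diffusion process over the comparatively small, less symmetric block $B_i$.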
When $K_h=0$, more diffusion steps break more symmetries, so one may argue that we just need to increase the number of diffusion steps. Thus, we conduct an additional ablation over the number of steps when $K_h=0$. As shown, there is still a large gap to Diffusion+AR.
|Maximum Hops|0|0|0|0|1|
|-|-|-|-|-|-|
| Total Diffusion Steps|70|140|280|490| **140**|
| Validity| 92.1 |93.8 |94.3|95.2| **97.1** |
| Uniqueness|96.4|96.9|96.5|96.9| **96.5**|
| Mol stability|74.0|76.4|79.3|79.2| **86.1** |
| Atom Stability|97.4|97.7|97.9|98.0| **98.3** |
>4. **Pard seems to have a long inference running time.**
In diffusion, inference time depends on (1) total number of diffusion steps and (2) resource usage per step. Pard outperforms full-graph diffusion in both aspects:
1. Pard uses 1/4 steps (140 vs. 500) and significantly outperforms DiGress.
2. DiGress denoises the whole graph at each step, while Pard only denoises the next block (significantly smaller) with fewer resources per step.
3. Pard, with its AR mechanism, allows previously generated blocks to be cached, accelerating inference similar to KV cache in LLM inference.
- - -
### We hope these ablations showcase the advantage of Pard and the intrinsic source of the performance improvement clearly. We are happy to address any concerns and hope you can consider reevaluating our work.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Can you further elaborate on how the proposed model uses fewer steps than DiGress? If my understanding is correct, the generation of each block would require multiple denoising iterations. Suppose there are 5 blocks, and the total number of iterations is 1/4 of DiGress, then on average each block takes 1/20 of DiGress iterations. Is this correct?
If the generated blocks are cached, does it mean that their representations are not updated by later generated structures?
---
Rebuttal 2:
Title: Response to reviewer's question
Comment: Thank you for the timely response. For your additional two questions:
1. Yes, each block only takes 1/20 of DiGress iterations. Please see the first ablation study table, you will find the 'Diffusion steps per block' indicates the number of steps we used for each block. Total diffusion steps = num of blocks * diffusion steps per block.
2. Yes, all generated blocks are fixed, and are used to be conditioned on to generate the next block. The noise and randomness is only introduced in the target block. In Section 4.2, we leverage this property to enable parallel training of all blocks, similar to how next-token predictions in LLM training are parallelized.
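To make the step accounting above concrete, here is a minimal, illustrative Python sketch of the block-wise sampling loop (function names and the toy block representation are ours, not from the paper's code):

```python
def total_diffusion_steps(num_blocks, steps_per_block):
    """Total diffusion steps across one autoregressive generation run."""
    return num_blocks * steps_per_block

def generate_blockwise(denoise, num_blocks, steps_per_block):
    """Block-wise AR sampling loop: noise is introduced only in the current
    target block; previously generated blocks are cached, conditioned on,
    and never revisited (akin to a KV cache in LLM inference)."""
    generated = []
    for _ in range(num_blocks):
        block = {"state": "noise"}                 # freshly noised target block
        for t in range(steps_per_block):
            block = denoise(block, generated, t)   # condition on cached blocks
        generated.append(block)                    # fixed from now on
    return generated
```

With, e.g., 7 blocks at 20 steps per block, `total_diffusion_steps` gives the 140 total steps quoted above, versus 500 full-graph steps for DiGress.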
Please let us know for any further questions!
---
Rebuttal Comment 2.1:
Title: About the update to generated block
Comment: I feel a little strange that the generated block is not updated. Initial blocks should have a lot of symmetry, and late blocks break the symmetry by adding new structures. If the representation of initial blocks is not updated, they will provide less information than they should. Is that true?
---
Reply to Comment 2.1.1:
Title: Further response to reviewer
Comment: You are correct: the symmetry is not evenly distributed across blocks. Actually, we have found that about 50% of blocks have no symmetry inside: directly predicting the next block gives 100% accuracy, and no noise is needed at all. This means that using 1/4 of DiGress's total diffusion steps is not optimal: one could use as little as 1/8 of DiGress's total diffusion steps and still outperform it (as about 50% of blocks do not need any diffusion steps). In fact, in the response to the last reviewer, we mentioned the following:
> One major limitation is that each block currently uses the same number of diffusion steps, which is not ideal and can result in unnecessary sampling steps. Empirical evidence shows that approximately 50% of blocks can be predicted directly without any sampling steps, as they do not encounter transformation problems (introduced in Sec. 3.3). It would be highly beneficial to develop a hardness measure (such as graph energy) for the symmetry problem and use this measure to determine the appropriate number of diffusion steps for each block. This approach could significantly reduce the total number of diffusion steps.
However, since we haven't yet found a straightforward solution, we decided to assign the same number of diffusion steps to each block. If the reviewer has concerns about our results, we encourage you to run the provided source code, as our results are fully reproducible. If any issues arise, we are also able to provide checkpoints for further verification.
---
We are more than willing to provide any additional information you require and kindly ask that you reconsider our work. We believe that our paper does not exhibit "technical flaws, weak evaluation, inadequate reproducibility, or incompletely addressed ethical considerations."
---
Rebuttal 3:
Title: For the reviewer's another potential question
Comment: We realized that you may also have another question: the parallel version of our algorithm shares representations across blocks; hence the earlier generated blocks provide less information to future blocks, as their representations cannot be updated. Actually, we have fully answered this question in our original paper's Section 4.2, where we explain why we designed a parallel version of our algorithm.
Initially, our algorithm was not parallel at all: representations were not shared across steps, and each step used its own graph representation to attain the best performance. Nevertheless, we realized that this approach suffers a huge training-time cost: assuming you have K blocks in total, you get roughly K times more training time than DiGress, as you have K times more input graphs when viewing each conditional step's input graph as an individual graph. We solve this training-time issue with our proposed **nontrivial, novel and critical** parallel training algorithm, such that all steps share the same block representations. Notice that this is an extremely important design: the advantage of a causal transformer over an RNN is the ability to train all next-token predictions in parallel. And of course, you get less information from the previous tokens for future token predictions: that is why BERT is preferred over decoder-only GPT for document embedding. Nevertheless, given GPT's efficiency in capturing the whole distribution and its parallel training, it is still the dominant approach for LLMs, and the problem you mentioned does not block its effectiveness.
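The parallel-training idea just described can be caricatured in a few lines (a purely illustrative sketch in our own notation: every next-block prediction is computed in one pass over shared, fixed block representations):

```python
def parallel_block_losses(blocks, loss_fn):
    """One-pass training over all next-block predictions: the target at
    step i is blocks[i], conditioned on the fixed, shared representation
    of blocks[:i] -- analogous to parallel next-token prediction in a
    causal transformer."""
    return [loss_fn(blocks[:i], blocks[i]) for i in range(len(blocks))]
```

Because every prediction reads only the fixed prefix `blocks[:i]`, all K losses can be computed together instead of running K separate forward passes.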
We suggest the reviewer take a look at Section 4.2 for the detailed designs and explanations.
---
Rebuttal Comment 3.1:
Comment: As the discussion deadline is approaching, we kindly ask whether you still have additional questions. Given that we have answered all the questions you raised and demonstrated extensive ablations, we would like to hear back from you.
---
Rebuttal 4:
Comment: Dear Reviewer **adow**, We hope this message finds you well.
As the discussion deadline is rapidly approaching (in approximately 10 hours), we are reaching out to respectfully request your final feedback on our rebuttal. You provided the lowest score with soundness 1 (poor) and contribution 2 in the category of "technical flaws, weak evaluation, inadequate reproducibility, or incompletely addressed ethical considerations." We greatly appreciate the thorough review and the numerous questions you raised, as they have helped us improve our work.
In response, **we have diligently addressed all of your questions in detail in many rounds, providing extensive ablation studies to support our claims and clarify any potential misunderstandings**. We believe this additional information directly addresses the concerns that led to the low score. Given the significance of your evaluation and the substantial effort we've put into addressing your queries, we kindly request that you review our responses and provide your updated assessment. Your feedback is crucial for a fair evaluation of our work and will greatly **assist the area chair** in making an informed decision.
We understand that time constraints can be challenging, but we would be incredibly grateful if you could take a moment to review our rebuttal and adjust your feedback accordingly. Your expertise and insights are invaluable to this process, and we look forward to your response. Thank you for your time and consideration. We appreciate your dedication to maintaining the high standards of our field. | Summary: This paper proposes to seamlessly integrate autoregressive models with diffusion models to harness the effectiveness and efficiency of the autoregressive model while maintaining permutation invariance without order sensitivity. It also proposes architectural improvements to make the model and algorithm efficient and scalable. The presentation is smooth, and the experimental results on both molecular and general graph generation demonstrate its effectiveness.
Strengths: It proposes a novel graph decomposition method that considers not individual nodes and their degrees but subsets of nodes with structural similarity. In this way, it removes node-order sensitivity in the graph and only needs to maintain the order of the blocks. Within each block, the diffusion model focuses on a much smaller graph and thus can efficiently generate a denoised graph.
Weaknesses: It would be better if the authors could provide some insights about the maximum-hops hyperparameter $K_h$.
Technical Quality: 3
Clarity: 4
Questions for Authors: It would be better if the authors could provide some insights about the maximum-hops hyperparameter $K_h$.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have adequately mentioned several limitations of their work which sound quite reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Thank you for your positive feedback on our paper; we would like to further answer your question in detail.
>**Provide some insights about the hyperparameter the maximum of hops $K_h$**
Let us first lay out $K_h$'s impact on the model: there is a relation between $K_h$ and block size (and hence the number of autoregression steps). As $K_h$ increases, the "weighted degrees" of nodes diversify with fewer equivalences, which induces a partial order with smaller blocks (i.e., fewer nodes and edges inside each block). Thus, the larger $K_h$ is, the larger the number of blocks, and hence the number of autoregression steps.
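One way to picture the relation between $K_h$ and block size is the following toy construction (our own simplified illustration, not the paper's exact weighted-degree formula): build a per-node signature from the degrees visible within $K_h$ hops, and group nodes with identical signatures into blocks.

```python
from collections import defaultdict

def hop_degree_signature(adj, node, k_h):
    """A node signature built from degrees seen within k_h hops -- a
    stand-in for the paper's weighted degree, not its exact formula."""
    frontier, seen = {node}, {node}
    sig = [len(adj[node])]                     # hop 0: the node's own degree
    for _ in range(k_h):
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
        sig.append(tuple(sorted(len(adj[v]) for v in frontier)))
    return tuple(sig)

def blocks_from_signatures(adj, k_h):
    """Nodes sharing a signature fall into one block; a larger k_h breaks
    more ties, giving more (and smaller) blocks, i.e., more AR steps."""
    groups = defaultdict(list)
    for node in adj:
        groups[hop_degree_signature(adj, node, k_h)].append(node)
    return sorted(groups.values(), key=lambda b: (len(b), b))
```

On a 5-node path graph, for instance, grouping by raw degree (`k_h=0`) yields 2 blocks, while `k_h=1` already splits the nodes into 3 blocks, matching the trend described above.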
Originally, we did not do an ablation on the maximum hops during our experiments. Our intuition is that there is a tradeoff: the smaller the block size, the easier it is to learn within-block diffusion, but the harder the autoregressive (AR) generation of the next block becomes, due to the exposure bias and consequent error propagation of AR methods (if a block is mistakenly predicted during inference, the error propagates and the future blocks conditioned on it are impacted---this is true for all AR methods, including LLMs). As the maximum hops essentially controls the mixing ratio between the diffusion model and the autoregressive approach, we choose it so that the number of blocks is 3 to 5 times smaller than the total number of nodes (such that each block has 3 to 5 nodes on average). Hence, for all molecular graphs, we set $K_h=3$. For all non-molecular graphs, we set $K_h=1$, as this provides a reasonable number of blocks based on our intuition.
Nevertheless, we would also like to answer the influence of $K_h$ formally and quantitatively. Hence we have conducted an ablation study on the QM9 dataset, varying $K_h$ from 0 to 4. Notice that when $K_h=0$, all nodes have a fixed degree of 1, hence there is only a single block, which is equivalent to full-graph diffusion (only) without any autoregression. As $K_h$ increases, block size tends to decrease (and block count to increase), and we obtain the proposed diffusion-AR hybrid.
| | Pard | Pard | Pard | Pard | Pard |
|:--------------------------- | :------- | :------- | :-------- | :------ | :------- |
| Maximum hops | 0 | 1 | 2 | 3 | 4 |
| Average number of blocks | **1** | 4.3 | 5.6 | 7.75 | 7.83 |
| Diffusion steps per block | 140 | 32 | 25 | 20 | 18 |
| Total diffusion steps | 140 | 140 | 140 |140 |140 |
| Validity | 93.8 | 97.1 | 96.7 | 97.0 | 97.1 |
| Uniqueness | 96.9 | 96.5 | 96.2 | 96.1 | 95.9 |
| Mol stability | 76.4 | 86.1 | 85.4 | 86.3 | 85.7 |
| Atom Stability | 97.7 | 98.3 | 98.3 | 98.4 | 98.3 |
As you can see from this ablation on $K_h$, we have found:
1. Combining autoregression and diffusion works significantly better than the diffusion-only ($K_h=0$) approach.
2. The performance seems to be robust with respect to the maximum hops (at least in range 1-4).
-------------------------------------------------
### We hope that we have fully addressed your concerns, and we are looking forward to your further input and your support of the paper in score.
---
Rebuttal Comment 1.1:
Comment: As the discussion deadline is approaching, we kindly ask whether you still have additional questions. Thanks again for supporting us. | Rebuttal 1:
Rebuttal: ### We thank all the reviewers for their feedback and suggestions on our work. Here we first re-iterate the key contributions of our work, and then summarize the list of additional experiments we performed.
---
### **Why a Hybrid (AR+Diffusion) Approach for (Graph) Generation?**
Our proposed Pard for graph generation is the first of its kind in bringing together autoregressive (AR) and diffusion-based generation. We want to emphasize that this approach stands on clear motivation; in other words, we have not simply "slapped" diffusion onto AR for no obvious reason.
Our contributions and rationale/justification for the effectiveness of Pard are: 1) We establish a **novel partial order** of nodes that is permutation-invariant. Thanks to this novel partial order, autoregression becomes permutation-invariant, which is not the case for ANY prior graph autoregressive method in the literature. This addresses AR's challenging pain point of being data inefficient and generalizing poorly. 2) With the order being partial, the graph is split into blocks, such that the complex joint distribution is decomposed into **easier** conditional probabilities. Next, we study how to model each block's conditional distribution. For the first time, we identify a fundamental problem of equivariant models for generation: general graph transformation without symmetry breaking is impossible for ANY equivariant model. This further motivates us to choose the diffusion approach to model each block's conditional distribution: the magic of randomness and the annealing process. Following this line of reasoning and design step by step, we found that the combination of AR and diffusion is not accidental, but an inevitable choice. Even more, this inevitable combination successfully combines the strengths of both approaches while shedding their shortcomings:
1. Being permutation invariant (+ advantage of diffusion), it generalizes better than AR and is much more data efficient (- downside of AR).
2. By decomposing the hard joint probability into simpler conditional distributions (+ advantage of AR), it uses significantly fewer diffusion steps while dramatically outperforming the pure diffusion method (- downside of diffusion). What is more, each inference step incurs less cost, as it does not access the full graph but only the generated part, and it can use caching (like the KV cache in LLM inference) to avoid repeated computation (+ advantage of AR).
The impressive performance improvement (noted by Reviewer 4) strongly shows the effectiveness of Pard. Many of its detailed advantages have also been observed by the reviewers:
* Reviewer 3 has summarized it all well: “By utilizing a unique partial order, it addresses the issue of non-exchangeable probabilities in AR models and the efficiency problem in diffusion models.”
* Reviewer 1’s understanding is also spot-on: “... it removes node order sensitivity …. Within each block, the diffusion model focuses on a much smaller graph and thus has the efficiency to generate a denoised graph.”
* Reviewer 4 states: “successful showcase of the combination of autoregressive modeling with diffusion model”
### **Additional Ablations and Experiments**
Nevertheless, we have conducted further extensive ablation studies to verify Pard's rationale and design. There is an interplay between the number of AR vs. diffusion steps in Pard, which is indirectly controlled by the maximum hops $K_h$. As $K_h$ increases, the weighted degrees of nodes diversify, and hence we obtain fewer node equivalences = smaller blocks = more AR steps.
In contrast, when $K_h=0$, we have only a single block, which corresponds to diffusion-only model (no AR).
1. In the first ablation on QM9 dataset, we keep the total number of diffusion steps (across all AR steps) fixed, and study how performance changes for $K_h$ varied from 0 to 4.
2. In the second ablation, we increase the number of diffusion steps when $K_h=0$ to see if more diffusion within a single block helps achieve better performance without AR, and compare that to $K_h>0$ (i.e. diffusion+AR hybrid).
Based on reviewer feedback, we also perform an ablation regarding architecture. Recall that Pard combines a Transformer with PPGN (coined PPGNTransformer) in a novel design.
3. We report performance results for Pard w/ PPGN only and Transformer only as compared to the original PPGN+Transformer.
4. We also report results for an additional molecular metric as requested by a reviewer.
---
### We hope that the additional experiments, discussions, and clarifications lead to a better appreciation of the contributions of our work. Graph generation is a hard problem, gaining popularity only in the last 5 or so years, and we strongly believe that Pard sets a new milestone in this area with its rigorous design and SOTA empirical performance. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
$\text{ID}^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition | Accept (poster) | Summary: This paper presents a method called $\text{ID}^3$ for the task of synthetic face recognition. The authors highlight that the accuracy of face recognition using generated data still lags behind that of training directly on real face data. They propose optimizing the generation process from the perspectives of diversity and consistency.
Strengths: - Clear explanation of formulas and algorithm flow.
- Achieved SOTA results compared to methods from the past two years.
Weaknesses: - There has been extensive research on ID preserving, and recent models based on LDM (e.g., Face0, PhotoMaker, FaceStudio, InstantID) can also be used for synthetic face recognition. The paper lacks analysis and comparative experiments on these models.
- The Face Attribute Conditioning Signal includes age and pose (pose angle range: [-90°, 90°]). However, the visual results in the paper do not reflect these attributes. The variation in pose is minimal, and there is no demonstration of different levels of age (which you mentioned as [0-100]).
- The paper devotes too much space to mathematical derivations and lacks intuitive visual results. For example, using different attributes and ID information to guide the model could be visualized by showing how the various layers of the Unet perceive this information.
Technical Quality: 2
Clarity: 3
Questions for Authors: - What is the resolution of the training and generated images?
- How long does the training process take using 8 NVIDIA Tesla V100 GPUs?
- What is the image generation speed during inference?
- How much GPU memory is required for inference?
- How do the ID information and attribute information affect the Unet in the network structure? Is it through cross-attention or along with the timestep information?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Using ID attributes to assist in the generation results is already common in diffusion-based tasks. This method is essentially a conditional guided generation, and its technical contribution is limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that Reviewer enp6 finds our formulas and algorithm flow clear and recognizes that ID$^3$ has advantages over existing models from the past two years. We respond to your questions below and hope our answers address your concerns.
**Response to Weakness 1**
Thanks for pointing out these interesting works on LDMs. However, we would like to clarify that the primary goal of these LDMs differs fundamentally from SFR: they aim to change styles or expressions given a base face image, whereas SFR is to generate a fake face dataset **all from scratch** in place of the real one for face recognition training. That said, before starting this research we were also curious about LDMs' performance on SFR, although they were not designed for it. Among LDMs, we chose the best-performing model, InstantID, and ran multiple tests on it. We empirically found that its best test result is no better than 81.5 on LFW, 68.9 on CFP-FP, 62.9 on CP-LFW, 62.9 on AgeDB, and 63.8 on CA-LFW, with an average of 68.01. These results are far behind those of the SFR generative model family: SynFace, SFace, ID-Net, DigiFace, iDiffFace, DCFace and ID$^3$. (Please compare these results with those in Table 1.)
We investigated the reasons: we found that while these LDMs claimed to be ID-preserving in the pixel space, their feature embeddings are not discriminative enough for face recognition, since there was no inductive bias (neither loss functions nor architectures) to achieve face discriminativeness. Hence, there is no reason for them to work reasonably well in SFR. In our submitted work, we followed the prevailing practice in SFR by benchmarking ID$^3$ against SFR generative models other than these LDMs. Thanks for the suggestion, though. We will add these comparisons and discussions to the manuscript.
**Response to Weakness 2**
The pose angle range and the age range in Line 134 are for theoretical purposes and specify the domains of definition only. The extreme values (e.g. -90 degrees, 90 degrees, age 100) have low occurrence even in the training dataset and thus have low prior probability of being captured by ID$^3$. We show the distribution of age and pose in the training set in Fig. 2 in the **one-page PDF**.
From the distribution, we observe that the highest age in our training set is 70, and the pose ranges from -60 to 60 degrees. Our model can interpolate within these ranges but is less likely to extrapolate outside them. To generate face images with large poses and high ages, one can collect more such data and add it to the training dataset, which increases their occurrence during training.
**Response to Weakness 3**
Thanks for the suggestion. For an intuitive understanding of our work, we will add more visual results to the manuscript demonstrating how ID$^3$ generates different identities with different poses and ages. Since the ID embeddings and face attributes affect the UNet through the cross-attention mechanism, we visualize the attention maps in the UNet given different identities and face attributes, as shown in the **one-page PDF**.
From the visualization result, we observe that ID$^3$ first determines the general pose and the outline of a face, and then fills in the facial details and background. This provides an interesting insight into how the diffusion model works in ID$^3$.
Besides, we would like to draw your attention to Fig. 2 in the main paper, which gives insight into how our proposed ID-preserving sampling algorithm (Alg. 2) works. We observe that in the middle of Fig. 2, the original score function $\nabla \log p(\mathbf{x}_t | \mathbf{y}, s)$ generates low-quality face images with cluttered facial details, whereas the adjusted score function $\nabla \log \tilde{p}(\mathbf{x}_t | \mathbf{y}, s)$ produces high-quality results with various attributes. One can see the correspondence between the adjustment of the vector field and the resulting images.
**Response to Q1:**
The resolution of the training and generated images is 64x64.
**Response to Q2:**
It takes roughly one week to train ID$^3$ when using 8 NVIDIA Tesla V100 GPUs.
**Response to Q3:**
The image generation speed is 0.5 images/second.
**Response to Q4:**
For inference, it takes 21G GPU memory.
**Response to Q5:**
It is through cross-attention. The ID information and attribute information both act as keys and values in the cross-attention mechanism, while $\mathbf{z}_t$ acts as queries.
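To make this mechanism concrete, below is a minimal single-head cross-attention sketch in which the stacked ID and attribute tokens serve as keys/values and the latent $\mathbf{z}_t$ as queries. The dimensions, function name, and random projection matrices are illustrative placeholders only, not the actual trained weights or architecture of ID$^3$.

```python
import numpy as np

def cross_attention(z_t, cond, d_k=8):
    """Single-head cross-attention sketch.

    z_t:  (n_query, d_q) -- flattened UNet latent features acting as queries
    cond: (n_cond, d_c)  -- stacked ID / attribute tokens acting as keys and values
    """
    rng = np.random.default_rng(0)
    d_q, d_c = z_t.shape[1], cond.shape[1]
    # Placeholder projections; in the real model these are learned weights.
    W_q = rng.standard_normal((d_q, d_k)) / np.sqrt(d_q)
    W_k = rng.standard_normal((d_c, d_k)) / np.sqrt(d_c)
    W_v = rng.standard_normal((d_c, d_k)) / np.sqrt(d_c)

    Q = z_t @ W_q   # queries from the noisy latent
    K = cond @ W_k  # keys from ID + attribute tokens
    V = cond @ W_v  # values from ID + attribute tokens

    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax over the conditioning tokens
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V  # (n_query, d_k)
```

Each latent position thus aggregates a convex combination of the ID and attribute value vectors, which is how the conditioning signals steer the denoising.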
**Response to Limitation:**
Yes, there indeed exist many diffusion-based models (LDMs) that use ID attributes to assist image generation. But again, they are designed for image generation, not for SFR. We argue that image generation and SFR are two different tasks with two different goals: face image generation changes styles or expressions given a base face image, whereas SFR generates a fake face dataset **all from scratch** in place of the real one for face recognition training. Moreover, these LDMs do not perform reasonably well even when applied to SFR. It is worth noting that our technical contribution goes beyond image generation: we identify three important characteristics for successful SFR (intra-class diversity, inter-class diversity and ID preservation) and propose dedicated techniques to achieve each of them. For example, the ID-preserving loss, Theorem 3.1 and Algorithm 2 achieve ID preservation; anchor generation (solving the Tammes problem; see Lines 191-193) achieves inter-class diversity; and solving Eq. (9) achieves intra-class diversity.
Therefore, our technical contribution consists of not only deriving a conditional diffusion model but also a new ID-preserving sampling algorithm as well as a face dataset generation algorithm that respects face distribution on the face manifold to achieve intra-class diversity, inter-class diversity and ID preservation.
---
Rebuttal Comment 1.1:
Comment: The authors have partially addressed my concerns, and I will give them a higher score. If you can provide me with an anonymous code link to check details about how the Attribute Conditioning Signal and Identity Loss work, I can further increase the score.
---
Reply to Comment 1.1.1:
Comment: Sorry for the delayed response. We have just finished refactoring the entire project for better readability, and here is the anonymous link to the project: [https://anonymous.4open.science/r/id3_sfr-FB8B/](https://anonymous.4open.science/r/id3_sfr-FB8B/). To check the details of how the Attribute Conditioning Signal works, you may refer to the class *IdPoseAgeFeaEmbedder* defined in ./ldm/modules/encoders/modules.py; to check the details of how the identity loss works, you may refer to the function *id_preserving_loss* defined in the class DDPM in ./ldm/models/diffusion/ddpm.py. Please let us know if you have any further questions or concerns. | Summary: This paper focuses on synthetic face recognition and proposes to concentrate on three aspects: inter-class diversity, intra-class diversity, and intra-class identity preservation. Based on these, an ID-preserving loss is employed to generate diverse but identity-preserving facial images. The paper also demonstrates that the proposed loss is equal to a lower bound of an adjusted conditional log-likelihood over ID-preserving data.
Strengths: 1. This work is well-written and well-organized. It brings some insights for SFR.
2. The idea of 3 aspects is good, and quite general for SFR
3. The proposed method shows advances when using the FFHQ dataset
Weaknesses: Here are several concerns regarding this work:
1. The idea of the Attribute Conditioning Signal is not well suited to synthetic face recognition tasks, because the factors contributing to solid FR training cannot be determined by simply adjusting face attributes. One reason is that the attribute networks (e.g., for pose and age) are not generalized enough, as the pre-trained models are obtained from relatively small-scale datasets compared to FR datasets. Additionally, the authors have not addressed which attributes are effective for FR, leaving this important question unanswered.
2. The performance when trained on the FFHQ dataset appears good; however, the FFHQ dataset has explicitly banned its use for face recognition applications. Furthermore, FFHQ is relatively small (210k images) and does not contain enough diversity, which is why facial attributes can bring improvement in this experiment. For more details on FFHQ, please refer to: https://github.com/NVlabs/ffhq-dataset
3. On the relatively large CASIA-WebFace dataset, the improvement over DCFace is marginal. One problem is that DCFace is trained with CASIA-WebFace only, not the FFHQ+CASIA combination mentioned by the authors.
4. Experiments are not sufficient. For example, DCFace provides experiment results on 3 data volumes: 500k, 1M and 1.2M. These are not included in this paper.
5. There are some typos, for example, Y_i should be given in line 194
Technical Quality: 2
Clarity: 3
Questions for Authors: Based on Algorithm 3, the pipeline would be: firstly, generate multiple embeddings close to the anchor; Then use the diffusion model to synthesize the images. The questions are:
1. Given unpack(Yi) and the generated different attributes, the generated identity image would be affected by the attributes. Did the authors test whether the identities change given different face attributes at test time? How do you make sure the generated identity aligns with the input embedding in the generation phase?
2. How do you make sure the diffusion model can generalize to (recognize) this specific input embedding, considering the training embeddings cover only a small range of the available embedding space (training set)?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: This paper shows an attempt to generate diverse facial images of each identity. However, what makes solid FR training is not studied. Furthermore, a domain gap exists when adopting the trained diffusion model for generation: the input embedding might differ from the embedding of the synthesized image. Consequently, the focus on changing facial attributes while preserving identity is interesting, but the overall improvement is marginal, and FFHQ has license issues related to face recognition applications. I think it would be more convincing if results on CASIA-WebFace under the 1M and 1.2M settings were presented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that Reviewer 37yP appreciates our work for its insight, generality and effectiveness. We respond to your questions below and hope our answers address your concerns.
**Response to Weakness 1:**
Thanks for pointing it out. The factors contributing to solid FR training are complicated, and it is impossible to disentangle them all from identity or to quantify them all explicitly and accurately. But this does not mean the problem is totally unsolvable. Some attributes, such as age and pose, can be easily quantified and disentangled from identity, while others, such as gender and micro-features, intertwine with identity. In this paper, we explicitly explore the quantifiable attributes (e.g. age and pose) by injecting them into ID$^3$ as conditioning signals, and leave the unquantifiable attributes to be captured automatically and flexibly by the U-Net of ID$^3$ during training. We find this practice leads to sufficiently good performance on our SFR task. Our proposed ID$^3$ serves as a framework into which more quantifiable attributes can be introduced, which we leave for future exploration. We will add more discussion to the manuscript.
To examine whether the attributes used in our paper are fit for SFR tasks, we perform the following ablation study: as we increase intra-class pose variation, the SFR performance on cross-pose test sets (CFPFP and CPLFW) is boosted, whereas the performance on cross-age test sets (AgeDB and CALFW) remains almost unchanged. Results and distribution plots are shown in Tab. 1 and Fig. 1 in the **one-page PDF**. From these results, we observe that the pose-angle distributions of FFHQ, the dataset by IDiffFace, and the datasets by ID$^3$ PoseOnly 1 and 2 are all unimodal, and their performances are inferior to the dataset by ID$^3$ PoseOnly 3, which exhibits a multimodal pose-angle distribution. Comparing the results of ID$^3$ (Tab. 1 in the main text) with those of ID$^3$ PoseOnly 3, we conclude that age is another essential attribute that should be injected into ID$^3$ as a conditioning signal. We also draw your attention to Table 2 in the main paper, which shows that without attributes as conditioning signals, the performance is inferior to our original ID$^3$. These empirical findings suggest that the attributes used as conditioning signals are fit for SFR tasks.
Regarding the generalization of the attribute network, for simplicity and efficiency, the attribute network we used in our paper is a lightweight model, which we find performs efficiently and reasonably well in our task. Using a more powerful attribute predictor would further benefit our SFR task.
**Response to Weakness 2:**
The use of FFHQ as training data is for the sake of fairness, as SFR community all use FFHQ for model training. To have a fair comparison, we make sure in our paper that all competing models are trained on FFHQ and evaluated on standard benchmarks. Besides, we further use a larger dataset, CASIA, for training, as in Tab. 1.
**Response to Weakness 3:**
According to the original paper of DCFace (see the beginning of Section 5), the authors of DCFace claim that
*"For $G_{id}$ which generates ID images, we adopt the publicly released unconditional DDPM [25] trained on FFHQ [36]. For $G_{mix}$, we train it on CASIA-WebFace [29] after initializing weights from $G_{id}$."*
This suggests the final performance of DCFace depends on FFHQ and CASIA. Furthermore, we successfully replicated its performance reported in their paper using their official implementation: https://github.com/mk-minchul/dcface.git, which suggests DCFace is indeed trained on FFHQ+CASIA.
Back to the argument that "the improvement over DCFace is marginal": note that ID$^3$ is trained on CASIA only whereas DCFace is trained on FFHQ+CASIA, and yet ID$^3$ outperforms DCFace.
**Response to Weakness 4:**
Thanks for the suggestion. Here we show more results on the larger data volume with the base scale of 1.2M (>=):
|Models|Add Real Faces To FR Training |CFP-FP | CPLFW | AgeDB | CALFW | average |
|----------|----------|----------|----------|----------|----------|----------|
|DCFace|Yes (15% Real Faces)|88.40| 84.22|90.45| 92.38|90.86|
|ID$^3$|No (0% Real Faces)|89.89|84.98|91.15| 92.02|91.15|
**Response to Question 1, 2:**
Yes, we did run the relevant tests, as shown in Fig. A.2 in the supplement. Given different face attributes and a fixed $\mathbf{Y}_i$, the intra-class cosine similarity ranges from 0.25 to 0.9, whereas the inter-class cosine similarity ranges from -0.25 to 0.25. This suggests that, given $\mathbf{Y}_i$, an identity does not change as we vary attributes only. For visualization, Fig. 3 in the main paper also demonstrates the invariance of identity as attributes vary.
The alignment between the generated identity and the input embedding stems from two factors: the manifold manifestation of the pretrained FR model $f_{\phi}$ and the generalizability of ID-preserving loss. First, the pretrained FR model $f_{\phi}$ is trained using ArcFace loss, which forces similar face embeddings to have large cosine values on manifold ($S^{d-1}$) and dissimilar face embeddings to have small values. Our proposed Alg. 3 (which solves Eq. (9) using $lb=0.7$ and $ub=0.9$) is based upon this manifold manifestation and ensures that $\mathbf{y}_i$ is similar to $\mathbf{w}_i$ with some inherent variations (intra-class diversity). Second, the ID-preserving loss maximizes the inner product between an input embedding and the embedding of the generated face image; we empirically find this loss function generalizes well to the embeddings that are close to (but not equal to) an anchor embedding on the manifold ($S^{d-1}$). We also experiment with $ub=1.0$, but it does not yield better SFR performance. We believe the choice of $lb$ and $ub$ is critical to this generalization of recognizing $\mathbf{y}_i$. We investigated this effect in Ablation (ii) (see Appendix E).
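For intuition, the constraint described above (sampling an embedding $\mathbf{y}_i$ on $S^{d-1}$ whose cosine similarity to the unit anchor $\mathbf{w}_i$ lies in $[lb, ub]$) can be sketched as follows. This is an illustrative simplification, not the paper's exact Algorithm 3; the function name and the use of a single random tangent direction are our own assumptions.

```python
import numpy as np

def sample_near_anchor(w, lb=0.7, ub=0.9, rng=None):
    """Sample a unit vector y on S^{d-1} with cosine similarity to the
    unit anchor w in [lb, ub]."""
    if rng is None:
        rng = np.random.default_rng()
    d = w.shape[0]
    # Draw a random tangent direction orthogonal to the anchor.
    t = rng.standard_normal(d)
    t -= (t @ w) * w
    t /= np.linalg.norm(t)
    # Pick a target cosine similarity in [lb, ub] and rotate away from w.
    c = rng.uniform(lb, ub)
    y = c * w + np.sqrt(1.0 - c**2) * t
    return y / np.linalg.norm(y)  # re-normalize for numerical safety
```

By construction $\mathbf{y} \cdot \mathbf{w} = c \in [lb, ub]$, so the sample stays close to (but distinct from) the anchor, giving intra-class diversity while preserving identity.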
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their rebuttal, which addresses part of my concerns. It impresses me in terms of directly using numerical facial embedding for SFR.
However, I still have the following concerns:
---
**C1[minor]. Weakness 2:'SFR community all use FFHQ for model training':**
I partly agree. I am aware that some of these SFR methods use pretrained generative models on FFHQ to provide Identity image (DCFace), and pre-mixing up facial image (SynFace). However, DigiFace does not involve training on FFHQ, and DCFace employs a pretrained model (on FFHQ) to provide style images while training a diffusion model on CASIA-WebFace. Given that the FFHQ dataset has restrictions concerning its application to FR, I think we should be careful when directly training a synthesizing model on FFHQ before the SFR community obtains permission to use this dataset.
I agree that FFHQ can be adopted for reference, but I think it is better to present the SFR model trained on CASIA-WebFace at different data volumes. Since CASIA-WebFace is a larger dataset, experiments conducted on it would be more convincing and general.
---
**C2[minor]. Weakness 4: 'Add Real Faces To FR Training'**
DCFace doesn't add real faces to FR training; instead, they add synthesized images into the generated dataset: 'we include the same $X_{id}$ for 5 additional times for each label' (Section 5.2). The $X_{id}$ is generated by $G_{ID}$: 'For sampling ID images, we generate 200,000 facial images from $G_{ID}$' (Section 3.3). This means that the images used in the dataset are entirely synthetic. It's unclear how '15%' is calculated in the rebuttal.
---
**C3[Major] . Weakness 4 : 'results of 1.2M'**
(1) I have recommended that the authors include experiments of 1M and 1.2M to make the paper more convincing in the original review opinion. However, only 1.2M results are given. Considering the limited rebuttal time it is understandable.
(2) Normally results of 5 evaluation datasets are provided: LFW, CFP-FP, CPLFW, AgeDB, CALFW. The LFW results of 0.5M given by the manuscript are vastly worse than the competitor DCFace in Table 1. However, the LFW result is missing in the rebuttal, making the average of the rest marginally higher than DCFace. I recommend putting all the evaluation datasets especially **LFW** results for a fair comparison.
I will keep my current score unless the full evaluation results are provided for a larger data volume.
---
Reply to Comment 1.1.1:
Comment: **Response to C1:**
Thanks for the suggestion. Yes, we agree that care shall be taken when using FFHQ.
----------------------------------------------------------------------------------------------------------
**Response to C2:**
We apologize for the confusion caused by '15%'. That number was due to a false memory: we mistakenly cited it from somewhere else. After carefully examining the DCFace paper again, however, we note that, to achieve promising results, DCFace requires manual filtering of the generated data. This can be seen from Sec 3.3 in the original DCFace paper (https://arxiv.org/pdf/2304.07060):
*"ID Image Sampling. For sampling ID images, we generate 200,000 facial images from $G_{id}$, from which we remove faces that are wearing sunglasses or too similar to the subjects in CASIA-WebFace with the Cosine Similarity threshold of 0.3 using $F_{eval}$. We are left with 105,446 images."*
Moreover, in Style Image Sampling (see the 2nd paragraph of Sec. 3.3), the authors explored the manual sampling $X_{sty}$ from the pool of images whose gender/ethnicity matches that of $X_{id}$.
We also have noted that the authors included *the same $X_{id}$ for 5 additional times for each label* and that *$X_{id}$ is generated by $G_{id}$ trained on FFHQ using DDPM* (see the beginning of Sec. 5 in the DCFace paper). This suggests that to generate ID embeddings, the authors made use of additional data, FFHQ, while using CASIA as training data. ID$^3$, on the other hand, uses CASIA only.
In general, the authors of DCFace established several manual-filtering standards and used them to clean the data generated by their model. This practice is not in line with the standards of the SFR community. As we have claimed in our paper, in Line 40-41, *"Also note that, critically, the SFR dataset generation process should be fully automated without manual filtering or introducing auxiliary real face samples."*
Furthermore, the manual filtering used in DCFace is a time-consuming process. When scaling up to larger data volumes, DCFace is less likely to attain promising results, whereas ID$^3$ is fully automated without any form of manual filtering and can scale linearly to any data volume (please see our response to Reviewer qiUe on Weakness 2, where we analyse the time complexity of ID$^3$, which scales linearly with $m$ and $N$).
The above were the points we would like to have made in the table. Here we clarify it as follows:
|Models | Data | Requires Manual Filtering |LFW| CFP-FP| CPLFW| AgeDB |CALFW |average|
|----------|----------|----------|----------|----------|----------|----------|----------|----------|
|DCFace | CASIA+FFHQ|Yes |98.83 |88.40 |84.22 |90.45| 92.38| 90.86|
|ID$^3$ |CASIA| No | 97.73 |89.89 |84.98 |91.15 |92.02 |91.15 |
----------------------------------------------------------------------------------------------------------
**Response to C3**
Thanks for pointing it out. We would like to clarify that the average results already take the LFW results into account. We were unable to show the LFW results earlier because of the character limit (6,000) in the rebuttal. Here we show the full evaluation results; you may verify that the averages include LFW: 90.86 = (98.83 + 88.40 + 84.22 + 90.45 + 92.38) / 5; 91.15 = (97.73 + 89.89 + 84.98 + 91.15 + 92.02) / 5.
|Models | Data | Requires Manual Filtering |LFW| CFP-FP| CPLFW| AgeDB |CALFW |average|
|----------|----------|----------|----------|----------|----------|----------|----------|----------|
|DCFace | CASIA+FFHQ|Yes |98.83 |88.40 |84.22 |90.45| 92.38| 90.86|
|ID$^3$ |CASIA| No | 97.73 |89.89 |84.98 |91.15 |92.02 |91.15 |
From these results, we observe that ID$^3$ outperforms DCFace on three test sets out of five with a higher averaged performance (91.15>90.86). And considering that DCFace **requires manual-filtering to refine the generated data** and **uses additional data for generation** (see the response to C2), our proposed model ID$^3$ (operating as a unified automated system without any form of manual filtering and using CASIA only) is more promising and efficient, especially in generating larger and larger synthetic face datasets.
On the theoretical side, ID$^3$ provides theoretical insights and advantages over existing works such as DCFace. Our work suggests that inductive biases regarding face manifold and distribution shall be introduced into an SFR generator as a whole.
Therefore, we believe the existence of ID$^3$ will inspire more interesting works to appear in this community which further advances the field of SFR. Code will be made publicly available upon the revision of the manuscript. | Summary: This paper proposes ID3, an identity-preserving-yet-diversified diffusion model for generating synthetic face data for face recognition. ID3 leverages identity embeddings and facial attributes to control inter-class and intra-class diversity of generated faces while preserving intra-class identity consistency, demonstrating state-of-the-art performance on multiple synthetic face recognition benchmarks.
Strengths: See questions section in detail.
Weaknesses: See questions section in detail.
Technical Quality: 3
Clarity: 3
Questions for Authors: The paper addresses a well-motivated and important problem in the field of synthetic face recognition. The proposed ID3 model demonstrates significant innovation in leveraging diffusion models conditioned on identity embeddings and facial attributes to generate diverse yet identity-consistent synthetic faces. Moreover, the theoretical analysis provided in the paper, which proves the equivalence between minimizing the proposed loss function and maximizing a lower bound on an adjusted data likelihood, lends credibility and rigor to the proposed approach.
However, there is room for improvement in the presentation and writing of the manuscript. One area that could benefit from further clarification is the explanation of notations and symbols used in the mathematical formulas. For instance, the meaning of the variable $d$ in $S^{d-1}$ is not clearly defined, which may lead to confusion for readers. Additionally, the formatting and typesetting of some equations, such as Equation 3, could be enhanced to improve readability and aesthetic appeal.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question: One area that could benefit from further clarification is the explanation of notations and symbols used in the mathematical formulas. Additionally, the formatting and typesetting of some equations, such as Equation 3, could be enhanced to improve readability and aesthetic appeal.**
**Response:** We are glad that Reviewer ThNX finds our proposed model innovative. And thanks for pointing out these issues regarding the presentation and writing. The quantity $d$ in $S^{d-1}$ is the dimensionality of the face embedding space. We will clarify them in the manuscript and rewrite some equations (e.g. Equation 3) for better readability and aesthetic appeal.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. After careful consideration of the points raised in your rebuttal, I have decided to maintain my original rating for the paper. | Summary: The paper "ID3: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition" introduces a novel synthetic face recognition (SFR) approach using diffusion models. It focuses on maintaining identity consistency while providing high diversity in generated face images. The proposed ID3 model leverages identity-preserving losses and a structured sampling algorithm that respects identity characteristics. This effectively addresses the common pitfalls of existing SFR approaches that lead to poor generalization on real-world data.
Strengths: * **Originality**: The paper presents an innovative use of diffusion models tailored to synthetic face recognition, emphasizing identity preservation.
* **Quality**: Demonstrated improvement over state-of-the-art models through extensive benchmarking.
* **Clarity**: Exceptionally clear presentation and thorough explanation of the methodology and results.
* **Significance**: This paper addresses significant challenges in synthetic data generation and offers substantial benefits for training more robust and generalizable face recognition systems.
Weaknesses: * **Generalization**: Additional tests on further diversified real-world datasets could strengthen the generalization claims.
* **Complexity**: It would be beneficial to have details on the computational demands and scalability of the model when deployed in practical, real-world scenarios.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. What measures have been taken to ensure the model's robustness against diverse ethnic and age groups, given the model's reliance on identity embeddings?
2. Are there potential improvements or variations in the diffusion model that could further enhance identity preservation without sacrificing diversity?
3. How does the model perform under constrained computational resources, and are there any strategies for optimizing its efficiency?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The paper discusses potential limitations, including the need for extensive computational resources and the model's performance dependency on the quality of input identity embeddings. It also mentions the ongoing challenge of bridging the gap between synthetic and real-world face recognition performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that Reviewer qiUe appreciates our work in terms of originality, quality, clarity and significance. We respond to your questions below and hope our answers address your concerns.
**Weakness 1: Additional tests on further diversified real-world datasets could strengthen the generalization claims.**
**Response:** Thanks for the suggestion. To demonstrate the generalization of ID$^3$, we further ran tests on RFW and obtained the following comparison against the best SoTA model other than ID$^3$ (IDiffFace):
| Models | African | Asian | Indian | Caucasian | average | std |
|----------|----------|----------|----------|----------|----------| ----------|
| IDiffFace | 71.10 | 75.83 | 78.68 | 82.05 | 76.91 | 4.64 |
| ID$^3$ | 78.25| 79.23 | 84.05 | 85.28 | 81.70 | 3.48 |
Note that RFW (Racial Faces in the Wild) is a comprehensive diverse benchmark for face recognition. It is notable for its diversity, capturing the facial variations within and across different racial groups, including but not limited to Caucasian, Black, Asian, and Indian individuals. It includes faces from different age groups, genders, and a variety of lighting and pose conditions, ensuring a realistic and challenging test bed for face recognition algorithms.
From the table above, we observe that ID$^3$ outperforms IDiffFace by clear margins, which suggests its effectiveness and generalization across different age, pose, and ethnic groups.
**Weakness 2: It would be beneficial to have details on the computational demands and scalability of the model when deployed in practical, real-world scenarios.**
**Response:** Thanks for the suggestion. The time complexity of our proposed Algorithm 3 is $O(mNT)$, where $N$ is the number of identities to be generated, $m$ is the number of images per identity, and $T$ is the number of diffusion steps. The execution runtime of Algorithm 3 (for generating a face dataset of 500,000 face images with 10,000 identities) is 17.36 hours. This is far more efficient than manual collection, which usually takes months to establish a dataset. The runtime of ID$^3$ scales linearly with $m$ and $N$, suggesting that ID$^3$ is promising for generating larger-scale synthetic face recognition datasets when deployed in practical, real-world scenarios.
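As a quick sanity check of the stated linear scaling (a sketch using only the numbers quoted above; the per-image rate is our derived figure, not one reported in the paper):

```python
# Numbers quoted above: 500,000 images across 10,000 identities in 17.36 hours.
N = 10_000                # identities
m = 500_000 // N          # images per identity
total_seconds = 17.36 * 3600

per_image = total_seconds / (m * N)   # seconds per generated image, ~0.125 s
```

At roughly an eighth of a second per image, doubling either $m$ or $N$ doubles the total runtime, consistent with the $O(mNT)$ claim.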
**Question 1: What measures have been taken to ensure the model's robustness against diverse ethnic and age groups, given the model's reliance on identity embeddings?**
**Response:** ID$^3$ models attributes in two ways. For attributes that can be explicitly disentangled from identity, such as age and pose, ID$^3$ takes explicit measures by treating age as one of the conditioning signals, which makes ID$^3$ aware of age variations when generating ID-preserving faces. This makes ID$^3$ robust to diverse age groups. For attributes that intertwine with identity, such as ethnicity, ID$^3$ takes implicit measures by automatically and flexibly capturing them through model training. Specifically, recall that ID$^3$ assumes access to the pretrained ArcFace, on whose manifold dissimilar ethnic groups are separated and similar ones are clustered. Through end-to-end training, this characteristic acts as a supervisory signal that is propagated back to the diffusion model of ID$^3$ (see Figure 1 in the main text). This makes ID$^3$ robust to diverse ethnic groups.
**Question 2: Are there potential improvements or variations in the diffusion model that could further enhance identity preservation without sacrificing diversity?**
**Response:** The proposed model ID$^3$ is inherently a conditional diffusion model conditioned on face attributes. Face attributes cover many aspects: some can be explicitly disentangled from identity, while others are implicitly intertwined with it. Therefore, ID$^3$ models the former as explicit conditioning signals and the latter as part of the face image features. In this paper, for simplicity, we only show the effect of two major attributes that are disentangled from identity: age and pose. More such attributes, for example expression, can be added to our generation framework. When more such attributes are incorporated explicitly, ID$^3$ becomes aware of them and can use the remaining information in the face image features for better identity preservation.
**Question 3: How does the model perform under constrained computational resources, and are there any strategies for optimizing its efficiency?**
**Response:** The proposed model ID$^3$ is inherently a conditional diffusion model (DDPM), which normally requires substantial computational resources (e.g., ID$^3$ takes 21 GB of GPU memory for inference; IDiffFace takes 24 GB). When deployed under constrained resources, one can use efficient methods that optimize DDPM inference, such as DDIM [1], the PLMS sampler [2], and SD (Spectral Diffusion) [3], while maintaining SFR performance.
**References**
[1] Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising diffusion implicit models." arXiv preprint arXiv:2010.02502 (2020).
[2] Liu, Luping, et al. "Pseudo numerical methods for diffusion models on manifolds." arXiv preprint arXiv:2202.09778 (2022).
[3] Yang, Xingyi, et al. "Diffusion probabilistic model made slim." Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition. 2023. | Rebuttal 1:
Rebuttal: We thank all reviewers for their time and effort in reviewing our work and their valuable comments about the paper. Attached is the one-page PDF that contains some figures and results for the rebuttal.
Pdf: /pdf/3c2aa4647e6e2f46090e5e2529b27d5676f4d4b1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | Accept (poster) | Summary: The paper introduces DropBP, an innovative approach to accelerate the fine-tuning of Large Language Models (LLMs) by selectively dropping layers during backward propagation. This method is presented as a means to reduce computational costs and activation memory, significant challenges in the efficient fine-tuning of LLMs. The authors have provided a clear implementation of DropBP as a PyTorch extension and demonstrated its effectiveness through experiments on various LLMs and datasets.
Strengths: - The concept of dropping backward propagation layers to reduce computational overhead is differential from previous work and addresses an important issue in training large models.
- The paper includes extensive experiments that validate the effectiveness of DropBP in reducing training time and memory usage while maintaining accuracy.
- The development of a PyTorch extension for DropBP facilitates easy integration with existing training codes, enhancing the practical applicability of the method.
Weaknesses: - The motivation is not well illustrated. I agree that dropping sublayers can improve training efficiency, as the model effectively becomes shallower. However, previous work like LayerDrop and others omit the layer computation in the forward pass, after which the computation can also be removed in the subsequent backward pass with modest engineering effort. The paper therefore lacks a clear distinction in terms of technical innovation compared to these previous works.
- While the paper proposes omitting sublayer computation in the backward pass, it's unclear why the forward pass computation remains unchanged. Justifying this choice or exploring alternatives would strengthen the contribution.
- The faster convergence observed in Figure 5 with DropBP compared to the vanilla model is counterintuitive. Since the backward pass optimizes only a partial computation graph, concerns about overfitting arise. The paper would benefit from a discussion of any regularization techniques employed to address this, and from a comparison with related work (e.g., [1]) that uses sublayer dropping for regularization when training deep Transformer models.
[1] Li et al., 2021 (AAAI) Learning Light-Weight Translation Models from Deep Transformer
Technical Quality: 3
Clarity: 2
Questions for Authors: some typos
- Line 49: As a results -> As a result
- Line 62: a effective -> an effective
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reviewing our submission and providing valuable feedback. Please see below for our response to the questions and comments.
**Q3.1.** Previous work like LayerDrop and others omit the layer computation in the forward pass. Then the computation could be removed in the subsequent backward computation with essential engineering efforts. Thus it lacks a clear distinction in terms of technical innovation compared to these previous works.
**Q3.2.** It's unclear why the forward pass computation remains unchanged.
**A3.1-2.** We appreciate the reviewer's insightful comments. We understand the reviewer's questions as follows:
1. ***Motivation:*** What is the reason for focusing on reducing computations in the backward propagation rather than the entire computation process?
2. ***Methodology:*** Why does the approach omit only backward propagation computations while keeping forward propagation unchanged?
3. ***Related Works:*** How does this differ from algorithms like LayerDrop [2], which drop computations during forward propagation as well?
First, the goal of LayerDrop is to ultimately perform layer-wise pruning to reduce inference time by dropping layers during forward propagation. In contrast, the aim of our DropBP is to **enhance training efficiency directly by reducing FLOPs and training memory usage.** Therefore, our approach does not require strictly dropping the forward path. Of course, while it is possible to drop the forward propagation, we chose to avoid this because **the training process is highly sensitive to dropping the forward propagation**, as shown in ```Fig. E```.
- ```Figure E``` *in the attached PDF*.
In ```Fig. E```, we compared the loss curves of DropBP and Progressive Layer Dropping (PLD) [3], a representative layer dropping algorithm, when the same drop rate was applied to training computations. **The results demonstrate that DropBP achieves much more stable training compared to PLD.** This is because dropping the forward path can cause output deviations, which can negatively impact loss and all gradients.
Furthermore, in ```Appendix D``` of the manuscript, we compared our DropBP with Layer Dropping algorithms for fine-tuning LLMs. As shown in ```Table 7``` of the manuscript, our **DropBP achieved higher accuracy compared to traditional Layer Dropping algorithms.**
In summary, while Layer Dropping algorithms aim to improve inference efficiency by randomly dropping the forward path and pruning it during inference, our DropBP focuses on selectively dropping backward paths, that are relatively less sensitive than the forward path, to accelerate training and reduce memory usage while maintaining high accuracy.
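This asymmetry can be illustrated on a toy scalar residual block (our own sketch, not the authors' implementation; the function names are ours): with $y = x + f(x)$, the forward output is computed exactly, while dropping the block in the backward pass reroutes the gradient through the skip connection.

```python
# Toy scalar residual block y = x + f(x) with f(x) = w * x (illustrative only).
def forward(x, w):
    return x + w * x

def backward(w, drop_block):
    # dy/dx = 1 + w normally; when the block's backward pass is dropped,
    # the gradient flows only through the skip connection, so dy/dx = 1.
    return 1.0 if drop_block else 1.0 + w

x, w = 2.0, 0.5
y = forward(x, w)                          # identical whether or not we drop
g_full = backward(w, drop_block=False)     # gradient with full backward pass
g_drop = backward(w, drop_block=True)      # gradient with the block dropped
```

Because the forward value `y` is unaffected, the loss and all downstream activations stay exact; only the gradient is approximated, which is why dropping backward paths is less sensitive than dropping forward ones.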
Following the reviewer's advice, we will incorporate these clarifications into the paper to reduce confusion and strengthen the contribution.
**Q3.3.** The faster convergence observed in Figure 5 with DropBP compared to the vanilla model is counterintuitive. The observation here quite confuses me since the backward pass optimizes a partial computation graph, concerns regarding overfitting arise. The paper would benefit from a discussion on potential regularization techniques employed to address this, and a comparison with related work (e.g., [1]) that utilizes sublayer dropping for regularization in training a deep Transformer model.
**A3.3.** We agree with the reviewer's observation that ```Fig. 5``` of the manuscript appears to show rapid convergence, which may suggest overfitting. However, this is because the x-axis is plotted against training time. When plotted against training steps, a completely different pattern emerges, as shown below:
+ ```Figure B``` *in the attached PDF*.
As shown in ```Fig. B```, when the drop rate is 0.5, the convergence of loss per step is almost identical to the baseline. However, with drop rates of 0.75 and 0.875, **the convergence speed per step is slower**. Nonetheless, **DropBP significantly reduces the time consumed per training step** because it skips the backward propagation computations for the dropped layers. Consequently, **the convergence speed per training time is actually faster for DropBP compared to the baseline**.
Moreover, our DropBP does not optimize a partial computation graph but instead randomly drops layers according to the drop rate, meaning the layers being trained change with each step. **This can be interpreted as structured dropout, leading to an ensemble effect where multiple models are effectively trained simultaneously.** As a result, it can serve as a regularization technique to mitigate overfitting issues. In fact, when applying DropBP, we did not observe overfitting, where training loss decreases but validation loss worsens.
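The per-step sampling described above can be sketched as follows (assuming a single uniform drop rate for simplicity; the paper actually allocates per-layer rates from sensitivities):

```python
import random

def sample_kept_layers(n_layers, drop_rate, rng):
    # Each training step keeps a random subset of layers in the backward
    # pass, so a different shallow sub-network is updated every iteration,
    # which is what yields the structured-dropout/ensemble interpretation.
    return [i for i in range(n_layers) if rng.random() >= drop_rate]

rng = random.Random(0)
steps = [sample_kept_layers(32, 0.5, rng) for _ in range(3)]
```

Since the kept subset changes every step, no fixed partial graph is ever trained in isolation, mirroring how dropout averages over many thinned networks.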
Additionally, through the lens of [1], **DropBP can also be interpreted as reducing co-adaptation among layers.** Specifically, [1] claims that Layer Dropping (LD) [2] achieves regularization by reducing co-adaptation among layers through dropping layers during training. This effect also applies to DropBP, where some backward paths are randomly dropped instead of strictly training all layers.
Thanks to the reviewer's insight, we have confirmed that DropBP can also be interpreted as a regularization technique. In the revised manuscript, we will incorporate this analysis, and in future research, we plan to specifically analyze the regularization effects of applying DropBP and work on improving it.
**Q3.4.** some typos exist.
**A3.4.** Thank you for catching the typos. We will correct them in the revised version.
[1] Li et al., "Learning Light-Weight Translation Models from Deep Transformer."
[2] Fan et al., "Reducing Transformer Depth on Demand with Structured Dropout."
[3] Zheng and He, "Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping"
---
Rebuttal Comment 1.1:
Title: After reading the response
Comment: Thank you for the effort the authors have put into addressing my concerns. To this end, most of my concerns have been satisfactorily addressed. Specifically:
- Regarding motivation, I agree that the authors have demonstrated that PLD-like models face convergence challenges during the training phase if the forward pass is dropped. While I still have some doubts about the results, more detailed discussion should be included in the next version to clarify this issue. It's not entirely clear to me whether these challenges are solely due to the increased model capacity, as larger models can sometimes be more robust to the training data than smaller ones.
- The training curve plotted against training time, rather than training steps, seems meaningful. A better illustration of this would help others understand your work more clearly.
Overall, I would like to slightly raise my score.
---
Reply to Comment 1.1.1:
Title: Dear Reviewer NAjC
Comment: Dear Reviewer NAjC,
We would like to express our sincere gratitude to the reviewer for the careful review and for the increased score. We fully agree with the reviewer’s insightful suggestions, particularly the need for a comparison of Layer Dropping algorithms and DropBP when the model scales up, and the importance of showing the training curve according to the training steps to clarify our arguments. We will incorporate these valuable insights into the final version of our paper.
Sincerely,
Authors of Paper # 9316 | Summary: The paper proposes a novel method to reduce the computational and memory costs associated with fine-tuning large language models (LLMs). The authors introduce DropBP, a technique that randomly drops layers during backward propagation, effectively reducing the computational operations (FLOPs) and activation memory needed. This method assigns drop rates based on the sensitivity of each layer to ensure stable training. The approach is applicable to both full fine-tuning and parameter-efficient fine-tuning (PEFT) methods. The paper reports significant improvements in training time, convergence speed, and maximum sequence length when fine-tuning LLaMA2 models with DropBP.
Strengths: - DropBP introduces a novel method for reducing the computational and memory costs associated with fine-tuning LLMs. This is an important contribution to the field, given the increasing size and complexity of these models.
- The paper provides empirical evidence that DropBP significantly reduces training time (by 44%), accelerates convergence (1.5× faster), and increases the maximum sequence length (up to 6.2×) on a single NVIDIA A100 GPU. These results demonstrate the effectiveness of the approach. The authors conduct thorough experiments on multiple datasets and models, providing a robust evaluation of DropBP's performance across different scenarios.
Weaknesses: - The paper mentions that the sensitivity calculation is done only once and has negligible overhead. However, more details on this process and its potential impact on training time would provide a clearer understanding of any trade-offs involved.
- The paper could benefit from a more detailed theoretical analysis of why DropBP works as effectively as it does. This would strengthen the paper by providing a deeper understanding of the underlying principles.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you provide more details on the sensitivity calculation process? Specifically, how is the sensitivity of each layer computed, and what is the computational overhead associated with this step?
- What are the best practices for tuning the drop rates in DropBP? Are there guidelines or heuristics that practitioners can follow to optimize performance for their specific use cases?
- How well does DropBP integrate with other recent advancements in efficient training techniques, such as mixed precision training or distributed training frameworks? Have you explored these combinations in your experiments?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reviewing our submission and providing valuable feedback. Please see below for our response to the questions and comments.
**Q2.1.** Can you provide more details on the sensitivity calculation process? Specifically, how is the sensitivity of each layer computed, and what is the computational overhead associated with this step?
**A2.1.** Please refer to the global response **GA2** above.
**Q2.2.** The paper could benefit from a more detailed theoretical analysis of why DropBP works as effectively as it does. This would strengthen the paper by providing a deeper understanding of the underlying principles.
**A2.2.**
+ ```Figure C``` *in the attached PDF*.
That's a good point. We interpret transformer models as a collection of numerous blocks, each composed of various modules with residual connections. Our hypothesis is that we can fine-tune LLMs well by training only certain shallow submodules. **To theoretically analyze this hypothesis, we measured the impact of submodules based on their path lengths in LLaMA2-7B, as shown in ```Fig. C```**. Specifically, we followed these steps as suggested in [1]:
1. We first perform a forward pass through the entire network.
2. During the backward pass, we randomly sample $k$ residual blocks, which are back-propagated without passing through skip connections, while the remaining $n-k$ blocks are bypassed through the skip connections.
3. We then measure the norm of the gradient at the input.
We take 100 measurements for each path length $k$. Subsequently, we multiply by the distribution of all possible path lengths, which follows a Binomial distribution, to quantify the gradient contribution from paths of a specific length.
In ```Fig. C(b)```, we observed that **the gradient per path length decreases as the path length increases.** Consequently, ```Fig. C(c)``` demonstrates that shorter path lengths have a greater impact on the gradient. These observations are consistent with the findings in [1], **which attributed this phenomenon to vanishing gradients.** We confirmed that this also occurs in transformers, where **the paths that significantly influence training in LLMs are relatively short.** Therefore, DropBP enables effective training by focusing on these short submodules.
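The binomial weighting used above follows from a simple counting argument (our sketch of the Veit et al. reasoning; the value of $n$ is chosen arbitrarily): each of the $n$ residual blocks is either entered or bypassed in a backward path, so the number of paths of length $k$ is $\binom{n}{k}$.

```python
from math import comb

n = 32  # number of residual blocks (arbitrary, for illustration)
# Fraction of backward paths of length k under uniform sampling: Binomial(n, 1/2).
path_dist = [comb(n, k) / 2 ** n for k in range(n + 1)]

# The distribution peaks at k = n / 2; if per-path gradient magnitude decays
# with path length (vanishing gradients), short paths dominate the gradient.
peak = max(range(n + 1), key=lambda k: path_dist[k])
```

Multiplying this distribution by the measured per-path gradient norms gives the gradient contribution per path length reported in the figure.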
Thanks to the reviewer's advice, this analysis will be included in the final version, and we plan to conduct a more theoretical analysis of this phenomenon.
**Q2.3.** Are there guidelines or heuristics that practitioners can follow to optimize performance for their specific use cases?
**A2.3.** We are very pleased that the reviewer has shown interest in the use cases of DropBP. Empirically, **using the same settings as the baseline is sufficient to achieve good loss convergence and high accuracy when applying DropBP.** At higher drop rates such as p=0.75 and p=0.875, however, increasing the learning rate by about 1.5 times can slightly improve accuracy. Thanks to the reviewer, we will incorporate these guidelines into the code we release.
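These guidelines amount to a one-line heuristic (a hypothetical helper of our own; the 0.75 threshold and the 1.5x factor are the values quoted above):

```python
def suggested_lr(base_lr, drop_rate):
    # Rebuttal heuristic: reuse the baseline settings unchanged, but at high
    # drop rates (p >= 0.75) scale the learning rate by about 1.5x.
    return base_lr * 1.5 if drop_rate >= 0.75 else base_lr
```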
**Q2.4.** How well does DropBP integrate with other recent advancements in efficient training techniques, such as mixed precision training or distributed training frameworks?
**A2.4.** We have developed a DropBP library that can be easily integrated into PyTorch, **allowing it to be readily combined with most efficient training techniques** that can be applied on a single GPU, such as parameter-efficient fine-tuning and mixed precision training. As shown in ```Fig. 4``` and ```Table 2``` of the manuscript, we incorporated these combinations into our experiments.
**Additionally, we have confirmed that our library works well with recent distributed training frameworks based on PyTorch, such as FSDP [2].** However, we have encountered some errors when integrating with the DeepSpeed [3] framework. We are currently debugging these issues and plan to resolve them. In the future, we intend to analyze the experimental results once these issues are addressed.
[1] Veit et al., "Residual Networks Behave Like Ensembles of Relatively Shallow Networks."
[2] Zhao et al., "PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel."
[3] Rajbhandari et al., "ZeRO: Memory Optimizations Toward Training Trillion Parameter Models."
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for the detailed responses. After reading the response and other reviews, I would like to keep my original score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Ep4o
Comment: Dear Reviewer Ep4o,
We would like to express our sincere gratitude to the reviewer for the careful and thorough review of our manuscript. We highly appreciate the insightful comments and suggestions provided, and we will incorporate them into the final version of our paper.
Sincerely,
Authors of Paper # 9316 | Summary: The paper proposed to drop layers during backward prop (BP) based on layer sensitivity. The method aims to reduce the cost for gradient computation and storage for intermediate activation in full BP.
Strengths: 1. Reducing the cost of full BP in PEFT has been an important challenge.
2. The method is simple and is easy to integrate to either full fine-tuning or PEFT.
3. Experiments demonstrate that DropBP can speed up the training while retaining the accuracy. The resulting memory reduction makes longer sequence modeling accessible.
Weaknesses: 1. The idea of optimizing NNs with sparse gradient is not new. This paper needs to add more discussion and comparison with related works in sparse learning e.g., [1-3]
2. Table 1 only shows results on two datasets and limited benchmark. It is unclear if the method works well for generation tasks and domain-specific transfer learning.
3. It is unclear which algorithm is used to solve the constraint minimization problem, i.e., to determine the layer-specific rates based on sensitivity, and its extra computational cost.
4. (Minor) In fine-tuning, DropBP drops a set of layers. However, the sensitivity of a set of layers may not be accurately represented by the direct summation of the sensitivities of individual layers in the set.
[1] Sun, Xu, et al. "meprop: Sparsified back propagation for accelerated deep learning with reduced overfitting."
[2] Sung, Yi-Lin, Varun Nair, and Colin A. Raffel. "Training neural networks with fixed sparse masks."
[3] Brock, Andrew, et al. "Freezeout: Accelerate training by progressively freezing layers."
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the long context modeling performance after applying DropBP?
2. Could the authors present Figure 5 with # of steps as the x-axis to demonstrate faster convergence?
3. I wonder if the sensitivities would evolve, and the drop rate needs to be re-allocated through training.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reviewing our submission and providing valuable feedback. Please see below for our response to the questions and comments.
**Q1.1.** The idea of optimizing NNs with sparse gradient is not new.
**A1.1.** We acknowledge that the idea of optimizing neural networks with sparse gradients is not novel. However, our method differs significantly from sparse gradient methods like meProp, FISH Mask, and Freezeout in terms of purpose, methodology, and effect. Specifically:
- ***meProp***: While meProp accelerates training by masking the output gradient of individual layers, **our DropBP skips entire transformer blocks by leveraging skip connections.** As shown in ```Table G.1.2``` in global response, **our DropBP achieves high accuracy compared to meProp.** Furthermore, meProp requires computing top-K gradient masks at each iteration, **whereas DropBP only needs to calculate the sensitivities once for the entire training process,** reducing the overhead on training time. Additionally, meProp must store activations for all layers during training, while **DropBP saves activation memory by not storing activations for dropped layers**.
- ***FISH Mask***: FISH Mask, a parameter-efficient fine-tuning method like LoRA, reduces communication costs by updating sparse parameters without decreasing FLOPs. In contrast, **our DropBP reduces computational costs directly by dropping backward operations.** Furthermore, while FISH Mask has to store all activations for backward propagation, **our DropBP eliminates the need to store activations for dropped layers, reducing activation memory.** DropBP and FISH Mask are complementary, which makes it possible to **apply them simultaneously, just as DropBP and LoRA can be combined.** We are working on this implementation but face delays due to the complexity of FISH Mask's code. We will include the related experiments in the paper once they are complete.
- ***Freezeout***: Freezeout accelerates training by gradually freezing earlier layers, while **DropBP randomly drops layers from the start, regardless of their order.** Consequently, Freezeout requires storing all activations initially, which complicates increasing sequence length and parallel processing. In contrast, **DropBP maintains low and consistent memory allocation** as shown in ```Table G.1.2```, facilitating easier management, longer sequences, and better parallel processing.
We will include these distinctions and a more comprehensive comparison to suggested related works in our revised paper.
**Q1.2.** It is unclear if the method works well for generation tasks and domain-specific transfer learning.
**A1.2.** Please refer to the global response **GA1** above.
**Q1.3.** It is unclear which algorithm is used to solve the constraint minimization problem, i.e., to determine the layer-specific rates based on sensitivity, and its extra computational cost.
**A1.3.** Please refer to the global response **GA2** above.
**Q1.4.** (Minor) The sensitivity of a set of layers may not be accurately represented by the direct summation of the sensitivities of individual layers in the set.
**A1.4.** We agree with the reviewer's opinion that the total sensitivities of the dropped network do not strictly equal the sum of each individual layer's sensitivity. However, **given the practical constraints of calculating the sensitivities for the vast number of possible combinations of dropped networks in a deep neural network,** we have made the assumption that the total sensitivities of a dropped network can be approximated by the sum of the sensitivities of its individual layers. In our future research, we will explore more accurate methods to calculate the network's sensitivity and determine optimal drop rates.
**Q1.5**. What is the long context modeling performance after applying DropBP?
**A1.5**.
+ ```Table R.1.1.``` *Perplexity (PPL) of LLaMA2-7B-chat fine-tuned on LongAlpaca, evaluated at a 16K sequence length (lower is better).*
| No-tune | DropBP (p=0.875), 16K | DropBP (p=0.875), 32K |
|---------|-----------------------|-----------------------|
| NaN | **6.81** | 8.32 |
+ ```Figure A``` *in the attached PDF*.
In response to the reviewer's request, we trained the LLaMA2-7B-chat model on the LongAlpaca dataset using sequence lengths of 16K and 32K, following the settings of LongLoRA [1]. As shown in ```Fig. A```, our experiments demonstrated that **the model successfully converged on loss with long-sequence data of 16K or more**. Additionally, when evaluating the fine-tuned model on a subset of the PG19 test set with a sequence length of 16K, we **achieved lower perplexity (PPL) than the non-fine-tuned model**, as shown in ```Table R.1.1```, confirming that our **DropBP method enables effective long-sequence modeling.**
Due to limited GPU resources and review time, we conducted experiments on smaller training and test sets. Once we secure sufficient resources and time, we plan to obtain more robust experimental results for the revised version and to integrate DropBP into LongLoRA in future work.
**Q1.6** Could the authors present Figure 5 with # of steps as the x-axis to demonstrate faster convergence?
**A1.6** Please refer to the global response **GA3** above.
**Q1.7** I wonder if the sensitivities would evolve, and the drop rate needs to be re-allocated through training.
**A1.7**
+ ```Figure D``` *in the attached PDF*.
As shown in ```Fig. D```, our experiments demonstrated that **the sensitivity of each layer converges as training progresses.** Accordingly, we calculate sensitivity just once, at 10% of the training process, minimizing the overhead from sensitivity calculations. This approach proved effective in most experiments. However, we believe the reviewer's concern is also valid, and we **plan to add a feature to the DropBP library that periodically recalculates sensitivity to adjust the drop rates.**
[1] Chen et al., "LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models"
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks the authors for providing new results and analysis in the updated version and I would like to keep my original score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer nmtJ
Comment: Dear Reviewer nmtJ,
We would like to express our sincere gratitude to the reviewer for the careful and thorough review of our manuscript. We highly appreciate the insightful comments and suggestions provided, and we will incorporate them into the final version of our paper.
Sincerely,
Authors of Paper # 9316 | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for carefully reviewing our submission and providing valuable feedback. We address several common and important questions in the following global response.
**GQ1.** It is unclear if the method works well for generation tasks and domain-specific transfer learning.
**GA1.** We have conducted additional experiments to evaluate our method on generation tasks and domain-specific transfer learning following the Reviewer's advice as below:
+ ```Table G.1.1.``` *Results of Generation Tasks (MT-Bench) with LLaMA3-8B fine-tuned on the OASST1 datasets.*
| Method | Drop Rate | Memory | Time | Humanities | STEM | Roleplay | Extraction | Writing | Reasoning | Coding | Math | Avg. |
|----------------|-----------|-----|------|------------|------|----------|------------|---------|-----------|--------|------|------|
| No-tune | . | . | . | 6.25 | 5.70 | 5.45 | 5.20 | 4.85 | 4.40 | 3.20 | 1.95 | 4.62 |
| LoRA | . | 57G | 27m | 7.00 | 6.40 | 5.80 | 5.70 | 5.30 | 4.55 | 3.25 | 2.95 | 5.12 |
| LoRA+DropBP | 0.5 | 42G | 21m | 6.55 | 6.25 | 6.05 | 5.50 | 5.05 | 4.45 | 3.75 | 3.25 | 5.11 |
| | 0.75 | 36G | 17m | 6.75 | 5.90 | 5.80 | 5.70 | 5.35 | 4.30 | 3.60 | 3.30 | 5.09 |
| | 0.875 | **32G** | **16m** | 6.60 | 6.55 | 5.90 | 5.70 | 5.70 | 3.95 | 3.40 | 2.80 | 5.08 |
+ ```Table G.1.2```. *Results of Domain-Specific Learning with LLaMA3-8B on the IMDB Dataset.*
| Method | Drop Rate | Memory | Time | Accuracy (%) |
|--------------|-----------|-----|------|----------|
| LoRA | . | 40G | 539s | 91.7 |
| meProp | 0.5 | 40G | 507s | 88.5 |
| Freezeout | . | 40G | 445s | 91.5 |
| LoRA+DropBP | 0.5 | 34G | 438s | 92.3 |
| | 0.75 | 32G | 392s | 91.5 |
| | 0.875 | **31G** | **362s** | 91.3 |
The experimental results show that **our DropBP can reduce training memory and time while maintaining comparable accuracy in generation tasks and domain-specific learning tasks.** Thanks to Reviewer nmtJ, we will include the improved results in the revision.
**GQ2.** I am curious about the specific method for calculating sensitivity and the associated overhead.
**GA2.** Sensitivity calculation process in DropBP involves two main steps:
- ***Step 1. Sensitivity Calculation***: We define the sensitivity of a layer as the variance in the gradient norm between when the layer is dropped and when it is not. **Therefore, calculating all layer sensitivities requires $L$ iterations, corresponding to the number of layers, which is typically fewer than the iterations needed for fine-tuning**. For example, when fine-tuning LLaMA2-70B, 160 iterations are required to calculate sensitivities, which is significantly fewer than the size of training datasets, such as the Alpaca dataset (52K).
- ***Step 2. Drop Rate Allocation***: We employ the simple greedy algorithm from [1] to efficiently determine drop rates. Initially, all layers start with a drop rate of 0, which is gradually increased to achieve the target FLOPs. At each step, we increase the drop rate of a layer by 0.1, selecting the layer that minimizes the increase in total sensitivity. **By using a binary heap for optimal move selection, the algorithm runs with a complexity of $O(L \log L)$.** The overhead is negligible, considering that the computation cost for a transformer's attention layer and linear layer is $O(bs^2hL)$ and $O(bsh^2L)$ respectively, where $b$, $s$, and $h$ represent the batch size, sequence length, and hidden dimension.
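A rough sketch of the greedy allocation in Step 2 (illustrative only: the function name, the linear per-increment sensitivity cost, and the use of the average drop rate as a FLOPs proxy are our assumptions, not the actual DropBP implementation):

```python
import heapq

def allocate_drop_rates(sensitivities, target_avg_rate, step=0.1, max_rate=0.9):
    """Greedily raise per-layer drop rates in `step` increments until the
    average drop rate reaches `target_avg_rate` (a FLOPs proxy), always
    picking the layer whose increase adds the least total sensitivity."""
    num_layers = len(sensitivities)
    rates = [0.0] * num_layers
    # Marginal sensitivity cost of one increment; modeled here as linear in
    # the layer's sensitivity (an assumption for this sketch).
    heap = [(step * s, i) for i, s in enumerate(sensitivities)]
    heapq.heapify(heap)
    increments_left = round(target_avg_rate * num_layers / step)
    while increments_left > 0 and heap:
        cost, i = heapq.heappop(heap)
        if rates[i] + step <= max_rate + 1e-9:
            rates[i] = round(rates[i] + step, 10)
            increments_left -= 1
            heapq.heappush(heap, (cost, i))  # cost is constant in this sketch
        # else: layer is saturated and permanently removed from the heap
    return rates
```

With a binary heap, each of the $O(L)$ increments costs $O(\log L)$ heap work, matching the stated $O(L \log L)$ complexity.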
As explained, both Step 1 and Step 2 incur very low computational overhead and occur only once during the entire training process. Therefore, the overhead from sensitivity computation is negligible. **The experimental results in ```Table G.2``` below support this claim.** Thanks to reviewers nmtJ and Ep4o, we will include detailed explanations of the sensitivity calculation process in the final version.
+ ```Table G.2.``` *Processing Time Analysis for Calculating Sensitivities When Fine-Tuning LLMs on the Alpaca Dataset.*
| Model | Precision | PEFT | Calculate sensitivity | Training (p=0) | Training (p=0.5) | Training (p=0.75) | Training (p=0.875) |
|-------------|-----------|-------|-----------------------|------|-------|--------|---------|
| LLaMA2-7B | Mixed | LoRA | **10s** | 2.2h | 1.7h | 1.4h | 1.3h |
| | BF16 | FFT | **10s** | 2.0h | 1.3h | 1.0h | 0.8h |
| LLaMA2-13B | BF16 | LoRA | **21s** | 2.9h | 2.1h | 1.7h | 1.5h |
| LLaMA2-70B | BF16 | QLoRA | **6m** | 29.6h | 22.2h | 18.4h | 16.5h |
**GQ3.** Could the authors present Figure 5 with training steps as the x-axis to demonstrate faster convergence?
**GA3.**
+ ```Figure B``` *in the attached PDF*.
In response to the reviewer's request, we plotted the training curves over training steps in ```Fig. B(a)```. When the drop rate is 0.5, the convergence of loss per step is almost identical to the baseline. However, with drop rates of 0.75 and 0.875, **the convergence speed per step is slower**. Nonetheless, **DropBP significantly reduces the time consumed per training step** because it skips the backward propagation computations for the dropped layers. Consequently, **the convergence speed per training time is actually faster for DropBP compared to the baseline, as shown in ```Fig. B(b)```**. Thanks to reviewers nmtJ and NAjC, we will include an analysis of the training loss curve in DropBP, using training time and training steps as the x-axis.
[1] Chen et al., "ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training"
Pdf: /pdf/b9ae5c674f11d956e7da5a1b27437b5b7fe11bce.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unsupervised Object Detection with Theoretical Guarantees | Accept (poster) | Summary: This paper proposes an autoencoder-based object detection model that makes predictions about object positions in an unsupervised manner. Importantly, the authors can provide theoretical guarantees/bounds on the degree of the model's detection error.
Strengths: The paper is well written, and it is easy to follow the authors' motivation and the structure of the work. I also find the idea of investigating the theoretical bounds of such models very valuable.
Weaknesses: I have summarized my questions and issues and limitations that I see here:
In the context of the CLEVR experiments, I am wondering why the authors don’t evaluate concerning the Gaussian standard deviation as they did for the first dataset?
The authors claim the method requires dynamic objects, but they never mention this in the main text. Can the authors provide more justification for this? I.e., why is this the case / what part of the approach requires it?
I don’t understand how the decoder can learn to reconstruct complex shapes other than spheres (due to the Gaussian assumption). Also, the authors mainly evaluate on data with objects that are spherical. Thus, is it possible to evaluate on other shapes? If so, what is the error here compared to spherical shapes? I do not mention this as a limitation, but it seems quite important for putting the method into context. What would be potential ideas for handling more complex objects?
I do not have enough knowledge about the details of the CutLER and SAM models, but why should the theoretical bound of this work hold for these works as well (the authors compare these in Fig. 6)? Specifically, the authors state "only for our method are the position errors always guaranteed to be within our theoretical bound." So my question is: why should the other methods lie within this theoretical guarantee?
I am a little confused by the related works section. The authors discuss object-centric representation methods whose goal, unlike that of their method, is to learn a full representation of an object. This includes much more information than just position. In other words, it seems the method of this paper focuses “only” on learning the position of an object. While this does not diminish the significance of the work, I think the work could benefit from discussing this difference between these works more, to make the comparisons fairer, and from focusing more on works that unsupervisedly localise objects in images (i.e., works that only focus on position and not on the other aspects of object representations), e.g., [1,2]. So in the end I am also wondering if the authors should actually narrow down the title/contribution claims to "Unsupervised Object Localisation with Theoretical Guarantees"?
If the authors can remark on these issues above, I am happy to consider raising my score.
[1] Siméoni, Oriane, Chloé Sekkat, Gilles Puy, Antonín Vobecký, Éloi Zablocki, and Patrick Pérez. "Unsupervised object localization: Observing the background to discover objects." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3176-3186. 2023.
[2] https://github.com/valeoai/Awesome-Unsupervised-Object-Localization
Technical Quality: 3
Clarity: 4
Questions for Authors: see above
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > In the context of the CLEVR experiments, I am wondering why the authors don’t evaluate concerning the Gaussian standard deviation as they did for the first dataset?
We thank the reviewer for their suggestion and have performed this experiment – please see fig. 11 of the rebuttal PDF. As all our data points lie within our theoretical bounds, this successfully validates our theory.
> The authors claim the method requires dynamic objects, but they never mention this in main text. Can the authors provide more justification of this? I.e. why this is/ what part of the approach requires this?
We mention this in the Introduction: “Our method guarantees to detect any object that moves in a video or that appears at different locations in images”, and later in assumption 2 of Theorem 4.1: “each object appears in at least two different positions in the dataset”. We will reinforce it further throughout the paper that this is equivalent to requiring dynamic objects. This prevents the object being learned as part of the background – if it did not move, then the encoder would not need to communicate its position to the decoder to reconstruct every image in the dataset, and the object would not be detected. The proof in Appendix A has more details on how this assumption is used.
> I don’t understand how the decoder can learn to reconstruct complex shapes other than spheres (due to the Gaussian assumption).
The decoder takes as input binary maps containing Gaussians centered at the object positions given by the latent variables ($\hat{e}$ in fig. 1). The decoder, consisting of multiple convolution blocks, then iteratively convolves these Gaussians with its kernels (which are not Gaussian, but arbitrary/learned) until it reaches its receptive field size. For example, if the decoder contains 3 7x7 convolutions and the Gaussian bandwidths are narrow, this will result in the receptive field size of $1 + 3 \times (7-1) = 19$, allowing it to reconstruct any object of size up to 19x19 px, regardless of its shape (not just Gaussians).
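The receptive-field arithmetic in this answer can be reproduced with a small, generic helper (not code from the paper; the stride parameter is included only for generality):

```python
def receptive_field(num_layers, kernel_size, stride=1):
    """Receptive field of `num_layers` stacked convolutions with equal
    kernel size; for stride 1 this reduces to 1 + num_layers * (kernel_size - 1)."""
    rf, jump = 1, 1  # jump = cumulative stride of the layers seen so far
    for _ in range(num_layers):
        rf += (kernel_size - 1) * jump
        jump *= stride
    return rf

# Three 7x7 stride-1 convolutions, as in the example above:
assert receptive_field(3, 7) == 19
```

Any object fitting inside that 19x19 px window can in principle be rendered by the decoder, regardless of its shape.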
> Also the authors mainly evalaute on data with objects that are spherical. Thus, is it possible to evaluate on other shape forms? If so what is the error here compared to spherical shapes? I do not mention this as a limitation, but it seems quite important to put the method into context. What would be potential ideas to mitigate handling more complex objects?
While in our CLEVR experiments in the paper we have used spherical objects, the method works on objects of any shape (see answer to previous question). We have performed CLEVR experiments using 3 different shapes – a sphere, a cube, and a cylinder – and show the results in fig. 12 of the rebuttal PDF. As all our data points lie within our theoretical bounds, this shows that our method is applicable to objects of any shape, not just spheres.
> I do not have enough knowledge about the details of the CutLer and SAM models, but why should the theoretical bound of this work hold for these works as well (the authors compare these in Fig. 6)? Specifically, the authors state "only for our method are the position errors always guaranteed to be within our theoretical bound." so my question is: why should the other methods lie within this theoretical guarantee?
The CutLER and SAM methods do not possess any theoretical guarantees on their object detections, and so there is no theoretical bound for their errors. We have only included this comparison to show that, even in the worst case (maximum over the position errors), the errors of our method are always bounded by our theoretical bound while the errors for CutLER and SAM are not, and are much higher than our bound in some settings. The experiment is meant to illustrate how worst-case unbounded errors can occur for such state-of-the-art methods, which can be a safety concern.
> I think the work could benefit from discussing more on this difference between these works, to make the comparisons more fair and also focus more on works that focus on unsupervisedly localising object in images (i.e., works that only focus on position and not on the other aspects of object reperesentations), e.g., [1,2].
We thank the reviewer for their suggestion and will discuss these works in the related work section of the paper. Briefly, works such as [1] and other works in [2] rely on vision transformer (ViT) self-supervised features for unsupervised object detection and segmentation, without possessing any theoretical guarantees for their detection errors. In contrast, our method uses a translationally equivariant CNN-based encoder and decoder with a structured bottleneck, which allows us to prove theoretical bounds for our detection errors.
[1] Siméoni, Oriane, Chloé Sekkat, Gilles Puy, Antonín Vobecký, Éloi Zablocki, and Patrick Pérez. "Unsupervised object localization: Observing the background to discover objects." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3176-3186. 2023.
[2] https://github.com/valeoai/Awesome-Unsupervised-Object-Localization
---
Rebuttal Comment 1.1:
Comment: Thanks so much for the effort and clarification. I have raised my score accordingly.
Concerning my point about other related work: I think the authors could think about differentiatting throughout the work on the difference on learning object representations and explicit object (position) detection. But, the response was fine. And I am convinced if the authors edit the main text as they had responded. | Summary: This paper explores Unsupervised Object Detection with Theoretical Guarantees. This method is a significant advancement in the field of object detection as it provides theoretical guarantees on the accuracy of the detected object positions. By introducing a new approach that ensures reliable object localization, the research contributes to enhancing the robustness and accuracy of unsupervised object detection systems.
Strengths: The method provides theoretical guarantees on recovering true object positions up to small shifts, which is a significant strength compared to traditional empirical approaches in object detection. The ability to interpret the latent variables as object positions enhances the interpretability of the model and facilitates understanding of the learned representations. The use of an autoencoder with a convolutional neural network (CNN) encoder and decoder, modified to be translationally equivariant, offers a unique and innovative approach to unsupervised object detection.
Weaknesses: This work explores unsupervised object detection with theoretical analysis. However, the dataset for the experiments is not common, and there are few comparative experiments with common SOTA object detection models. Besides, although this work provides theoretical guarantees to recover the true object positions up to quantifiable small shifts, there is no analysis of whether this only holds in the unsupervised domain or can also be adopted in the supervised domain.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the experiments, the dataset used for evaluation is CLEVR; please explain why it was chosen rather than other popular object detection datasets.
2. This work validates the theoretical results using many experiments; however, only a few experiments are carried out for comparison with SOTA.
3. In object detection, the popular models are from the YOLO family, and metrics including accuracy, mAP, and IoU are also common in supervised object detection.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > In the experiments, the datasets for evaluation is the CLEVR data, please explain why choose it, not other popular object detection datasets?
We chose to base our dataset on CLEVR because it is a dataset commonly used in unsupervised learning, and because it allows us to generate images of the same object at different positions and with different sizes on a static background, which are the assumptions in our model. These assumptions do not necessarily hold in other datasets such as MS COCO or PASCAL VOC, which normally do not contain the same object twice, contain varying backgrounds, and do not have any bounds on the object sizes. In general, our method is more suitable for dynamic object detection from videos (where objects keep their identity but change position) rather than detecting single objects in images.
> This work validate the theoretical results using lots of experimental results, however only few experiments are carried out for the comparison with SOTA.
The aim of our paper is to present the first unsupervised object detection method with theoretical guarantees, and therefore our priority was to prove and validate our theoretical claims in detail. Our focus was less on the current SOTA methods, as, to the best of our knowledge, none of them possess any theoretical guarantees and thus are not directly comparable. We have compared with two current SOTA methods SAM and CutLER, but if the reviewer has any other particular method in mind, we are open to performing a comparison with that method.
> there is no analysis whether it only exists in the unsupervised domain,.or can be adopted in the supervised domain
For our method, deriving theoretical bounds was possible due to the exact equivariance property of the encoder and the decoder, and the restricted form of our latent space. In the supervised domain, many approaches are based on relatively complex transformer-based models, which do not possess any such guarantees. However, we speculate that to obtain guarantees for these methods, one might have to replace parts of their architectures with equivariant or invariant elements and introduce tight bottlenecks, in order to reduce the solution space sufficiently to enable the kind of theoretical analysis we have performed. We hope that our contribution will open up the possibility for such guarantees in future work.
> In object detection, the popular model is about YOLO, and also the metric including accuracy, mAP, and IoU, etc are also the common in supervised object detection.
Compared to YOLO, our method is fully unsupervised and possesses theoretical bounds on the detection errors, while theirs requires supervision and does not possess any such guarantees. While in our paper we prove bounds for the maximum position errors, these can be related to other metrics such as IoU by considering the overlap instead of the distance between the ground truth object and its detection (intuitively, the IoU bounds would be related to the square of the position bounds). We thank the reviewer for this suggestion and will add this to the paper.
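To make the rebuttal's intuition about IoU concrete, here is a hypothetical back-of-the-envelope helper (not from the paper) computing the IoU between a square box and a copy of it offset diagonally by a given position error:

```python
def iou_after_shift(size, shift):
    """IoU of two axis-aligned size x size boxes, the second offset by
    `shift` pixels along both axes (worst-case diagonal position error)."""
    overlap = max(size - abs(shift), 0) ** 2  # overlap shrinks quadratically
    union = 2 * size ** 2 - overlap
    return overlap / union
```

For example, a 5 px error on a 10 px box already drops the IoU to 25/175 ≈ 0.14, illustrating why an IoU bound would scale roughly with the square of the position bound.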
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their responses. I will maintain the current score. | Summary: The paper proposes a new idea for unsupervised object detection where an CNN based auto-encoder architecture is employed and the latent representation is trained to learn position of objects in images. They further provide theoretical analysis of the proposed idea under strong assumption about the input data and model characteristics. Results from on synthetic data experiments is also provided
Strengths: The idea presented in the paper is interesting as it tries to solve the object detection problem in an unsupervised manner by modeling the latent space such that it explicitly learns object positions.
Weaknesses: The paper lacks results and discussion on the experimental details of how the idea can be effectively implemented. This is particularly important for understanding the merits of the proposed idea, as it makes strong assumptions on the model architecture and input data (e.g., the size of objects). For example, it is not clear how the authors process input data during training, how the mini-batch sampling is done, what the input-target pairs are, what regularizations are important to use (if any), and how overfitting is prevented given the very simplified experimental setting.
Furthermore, it is not clear from the paper how the latent space can learn any semantic information to reconstruct the images, as it is modeled to learn the positions of the objects.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Can the authors provide more clarification on the training procedure and the important aspects that are necessary for the model to work?
- Given that the latent space is learning the position encoding for the object, how is it possible to learn to model semantics for reconstruction loss?
- how does the model performance change relative to the diversity of object shape and appearance in a single image?
- why is it important to use positional encoding?
- how can the reconstruction quality be guaranteed, especially in a realistic setting?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: - The work is limited by its assumptions on the characteristics of the input data and model architecture.
- The theoretical analysis is dependent on strong assumptions like "the objects are reconstructed at the same positions", which itself is not guaranteed.
- Furthermore, the evaluations do not provide insight into what challenges one should address to successfully train a model based on the proposed idea.
- Please see the Weaknesses for more details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Can the authors provide more clarification on the training procedure and the important aspects that are necessary for the model to work? For example, it is not clear how the authors process input data during training, how the mini-batch sampling is done, what the input-target pairs are, and what regularizations are important to use, if at all
We provide all training details for our experiments in Appendix C. To directly address the reviewer’s questions, in our synthetic experiments we train on images containing white squares of size 7x7 px at different positions on a background of size 80x80 px, with a mini-batch size of 128 and learning rate $10^{-3}$, the input-target pairs both being the same image. In our CLEVR experiments, we train on images containing spheres with a range of sizes (from 4 to 27 px) at different positions, on a background of size 106x80 px, with a mini-batch size of 150 and learning rates $10^{-2}$ and $10^{-3}$. In both cases we do not use any regularisation and train until convergence, discarding any result where an object has not been learned.
> how the over-fitting is prevented given the very simplified experimental setting.
Overfitting is prevented by the translation equivariance property of the architecture: an equivariant encoder and decoder, with a small receptive field, must reconstruct well at every possible sub-window of the image (as they are applied convolutionally). Equivariant models can thus learn from very few images compared to other architectures.
> Given that the latent space is learning the position encoding for the object, how is it possible to learn to model semantics for reconstruction loss?
Semantics are encoded in the ordering of the objects’ positions in the latent space: it is not an unordered list, but rather each position corresponds to a single object identity. Therefore, the encoder and decoder learn to associate each latent position to a different object, which allows the decoder to successfully reconstruct each object.
> how does the model performance change relative to diversity of object shape and appearance in a single image?
We found the model to be robust to lighting and perspective distortions of the objects in the CLEVR dataset (objects becoming larger/smaller when they are closer/further away). Although not included in the paper, the model was also successful in detecting different shapes (spheres, cubes, cylinders) at different orientations – please see figure 12 of the rebuttal PDF.
> why is it important to use positional encoding?
We use positional encodings as one input to the decoder to reconstruct a static background. Because the decoder is translation equivariant, the positional encodings provide it with the positional information necessary to successfully reconstruct different parts of the background. For dynamic backgrounds, an unrelated frame from the same video can be passed instead of the positional encoding (as mentioned in section 3).
> The work is limited by its assumptions on the characteristics of the input data and model architecture.
The assumptions on the input data and model architecture are necessary for our proof of the theoretical bounds of the method – it is impossible to derive any guarantees with no assumptions. We believe these assumptions are not very restrictive and apply to many common use cases for object detection such as traffic monitoring, surveillance, etc., where there are moving objects and a static background (we also mention an extension for dynamic backgrounds in section 3).
> how the reconstruction quality can be guaranteed, especially in a realistic setting?
> The theoretical analysis is dependant on strong assumptions like "the objects are reconstructed at the same positions" which itself is not guaranteed.
This is guaranteed by the Universal Approximation Theorem: a sufficiently-large neural network can approximate any function. If the reconstruction error is high, we simply need a larger network, until the error is low enough.
> Furthermore, the evaluations do not provide insight into what challenges one should address to successfully train a model based on the proposed idea.
In practice, successful training requires videos with a static camera, objects that fit the receptive field of the autoencoder, that stay in-frame for the duration of a video, and no large perspective changes during motion. We provide further training details in Appendix C.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering the questions; I have raised my rating given the clarifications, However I strongly recommend the authors to update the paper with the clarifications they have provided, in particular on overfitting, how the ordering of the objects’ positions are key for the latent space, how sensitive is the proposed solution to object shape and appearance, and why including positional encoding is important. | Summary: This paper presents the first unsupervised object detection approach that is theoretically shown to recover the true object positions up to quantifiable small deviations that are related to the encoder and decoder receptive field sizes, the object sizes, and the widths of the Gaussians used in the rendering process. The authors conduct a thorough analysis of how the error depends on each of these variables and conduct synthetic experiments that validate our theoretical predictions up to a precision of individual pixels.
On a high level, their architecture is based on an autoencoder that is fully equivariant to translations, which they achieve by making the encoder consist of a CNN followed by a soft argmax function to extract object positions, and making the decoder consist of a Gaussian rendering function followed by another CNN to reconstruct an image from the object positions.
The authors also conducted synthetic experiments, CLEVR-based experiments, and real video experiments that validated their theoretical findings up to a precision of individual pixels.
Strengths: I do like the analysis of the current state-of-the-art detection models SAM and CutLER and it is interesting to find that in some cases their errors are much higher than the bound derived by this method.
This paper is well-written and easy to follow.
Weaknesses: 1. It is interesting to learn that SAM and CutLER's errors are sometimes much higher than the bound derived by the proposed method. I would be interested to hear from the authors if they have any insights on how this finding could be used to improve these methods, especially CutLER, which is also an unsupervised object detection and instance segmentation model.
2. The majority of the experiments in this paper are conducted on synthetic datasets, and it is questionable whether the findings can be generalized to real images and videos. Could the authors provide some experiments on real images or videos?
3. Continuing on the previous point, most objects in the synthetic datasets are rigid and have very consistent shapes. However, the challenges in object detection are often in detecting the non-rigid objects or partially occluded objects. I am curious to see if the proposed method can be used to handle these cases.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please check the weakness section
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > It is interesting to learn that SAM and CutLER's errors are sometimes much higher than the bound derived by the proposed method.
We note that all the error plots in the paper contain the maximum position errors, as opposed to average position errors (as described in Appendix C). So, while SAM and CutLER do well in the average case, they are much worse (unbounded) than our method in the worst case. We do this to show that even in the worst case our method is able to contain the errors within our bounds unlike other methods.
> I would be interested to hear from the authors if they have any insights on how this finding could be used to improve these methods, especially CutLER, which is also an unsupervised object detection and instance segmentation model.
For our method, deriving theoretical bounds was possible due to the exact equivariance property of the encoder and the decoder, and the restricted form of our latent space. However, because both SAM and CutLER are relatively complex transformer-based models, it is difficult to derive theoretical bounds for these methods. However, we speculate that to obtain guarantees for these methods, one might have to replace parts of their architectures with equivariant or invariant elements and introduce tight bottlenecks, in order to reduce the solution space sufficiently to enable the kind of theoretical analysis we have performed. We hope that our contribution will open up the possibility for such guarantees in future work.
> The majority of the experiments in this paper are conducted on synthetic datasets, and it is questionable whether the findings can be generalized to real images and videos. Could the authors provide some experiments on real images or videos?
We perform experiments on real videos in Appendix D of the paper, which are based on real YouTube videos of overhead road traffic and a game of mini pool. In these experiments we show that the objects are learned with high precision and the latent space can be intervened on to generate realistic videos with the objects at previously unseen positions. We will be happy to add more if requested.
> Continuing on the previous point, most objects in the synthetic datasets are rigid and have very consistent shapes. However, the challenges in object detection are often in detecting the non-rigid objects or partially occluded objects. I am curious to see if the proposed method can be used to handle these cases.
As this is the first paper to propose any theoretical guarantees for object detection, we decided to focus on producing a detailed analysis of the common case of rigid, non-occluded objects. However, while not discussed in the paper, we observe in our experimental data that the method is relatively robust to partial occlusions (e.g. one sphere covering part of another sphere in CLEVR). Regarding non-rigidity, we believe that as long as the deformed object fits within the receptive field and remains sufficiently similar to the original object, the method should also be able to detect it. For more complex cases of occlusions and non-rigidity, one might have to amend the method to deal with these cases explicitly (e.g. by modeling the order in which objects are rendered, and modeling the explicit geometry of non-rigid objects). We leave these for future work, to maintain the conciseness and clarity of the current proofs.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their responses. Most of my questions have been addressed. Therefore, I will maintain my current rating as weak accept. | Rebuttal 1:
Rebuttal: In response to reviewer ghv9's question, we have performed CLEVR experiments showing the position error as a function of the Gaussian standard deviation (see fig. 11 in the rebuttal PDF). As all our data points (red) lie within our theoretical bounds (blue), this successfully validates our theory.
Additionally, in response to the questions from reviewers x9ub and ghv9, we have performed CLEVR experiments using a dataset containing 3 different shapes (a sphere, a cube, and a cylinder), showing that our method works for objects of any shape, not just spheres (see fig. 12 in the rebuttal PDF).
We hope that these experiments address the reviewers’ concerns.
Pdf: /pdf/f9cdff8713d787e49ffbd7b3d1537af83fb45e35.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dynamic Model Predictive Shielding for Provably Safe Reinforcement Learning | Accept (poster) | Summary: Naive model predictive shielding may overly restrict exploration thereby preventing an RL agent from learning a policy with good performance. In order to prevent this, the authors propose a method to optimise a backup policy that is provably safe using an online planner. An approximate model such as double integrator or differential drive is used for planning. Improvements are demonstrated on five benchmarks that involve static or dynamic obstacle avoidance as compared to provably safe and approximately safe RL methods. A provable guarantee is provided for recovery regret.
Strengths: Presentation is clear with backing proofs and demonstrable results
Problem that is being solved is clearly delineated and addressed using sound techniques
Experimental comparisons are performed rigorously with attention to detail
Weaknesses: Literature review and comparisons are partial to the RL literature. There is a long-standing literature in control [A, B, C] on using an approximate model to plan with predictive control. A whole host of methods to learn a safety filter/shield on the fly have been explored with robust optimization-based offline and online control techniques. Most of these methods would implicitly solve the problem this paper is trying to address. However, it is interesting that the paper uses the Q function in the online optimization. This aspect is novel and unique to this paper.
It is unclear how much computation and time it takes to run MCTS online at each time in order to do dynamic shielding at runtime.
Dynamics model such as double integrator and differential drive are too simple. It would be interesting to see how well these would work with more complicated and/or higher-dimensional dynamics.
[A] Breeden, Joseph, and Dimitra Panagou. "Predictive control barrier functions for online safety critical control." 2022 IEEE 61st Conference on Decision and Control (CDC). IEEE, 2022.
[B] Wabersich, Kim P., and Melanie N. Zeilinger. "Predictive control barrier functions: Enhanced safety mechanisms for learning-based control." IEEE Transactions on Automatic Control 68.5 (2022): 2638-2651.
[C] Wabersich, Kim P., et al. "Data-driven safety filters: Hamilton-jacobi reachability, control barrier functions, and predictive methods for uncertain systems." IEEE Control Systems Magazine 43.5 (2023): 137-177.
Technical Quality: 2
Clarity: 3
Questions for Authors: Have the authors explored the space of RL training algorithms and methods to test this approach?
Are there any advantages of using DMPS if the performance policy is not using RL and uses imitation learning or no learning at all? Exploration is important only for RL.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have discussed limitations and there is potential for the approach to scale even though the experiments in the paper are on simple examples.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their many insightful comments and suggestions. We respond to their questions and concerns below.
**Adding traditional control and non-RL methods to literature review**
We appreciate the reviewer's paper recommendations and will include them, along with classical control papers, in our literature review.
We did not compare our approach to non-RL methods experimentally. One of RL's strengths is its generality—agents can satisfy complex specifications with just an abstract environment and a simple reward signal. While this generality is valuable, specialized methods often outperform RL in specific tasks. Since our paper is primarily RL-focused, we believe that the fairest comparison is against other RL approaches.
**Concerns over simplicity of the environment dynamic models**
Since the planner and RL algorithm treat environment dynamics as black boxes, more complex dynamics don't necessarily challenge the agent, as long as they are deterministic and allow sufficient range of motion for task completion. We agree that increasing the dimensionality of the dynamics would make the task harder for both the RL algorithm and the planner and believe that this would be an interesting avenue for future research.
**Using other RL training algorithms**
Our algorithm can generally work with any off-policy RL algorithm. We briefly explored using DDPG and SAC as the training algorithm and did not find reliable improvement.
**Advantages of using DMPS if performance policy does not use RL (instead using IL or no learning)**
RL-based approaches like DMPS can learn from interactions with the environment, allowing them to explore states and actions essential for model learning. Unlike imitation learning, which often suffers from distribution shift, RL methods are more resilient. Moreover, RL methods are more general than alternatives and need minimal input. Imitation learning requires expert trajectories and sometimes an expert controller, which can be costly. Classical learning-free methods rely on environment-specific assumptions. We believe that the generality of RL makes it a valuable paradigm on its own.
---
Rebuttal 2:
Title: Thank you for the rebuttal
Comment: I read the author response and other reviews. Overall, the paper is good but I would still consider it a borderline accept.
For me, the computational cost of running the local planner in real-time and the divergence of the planned trajectories from reality due to imperfect models are still a concern. MCTS is usually distilled into a neural network so that the safety recovery policy can be run in real-time. I am not sure whether that is appropriate here.
Further, I did not go through the proof in the Appendix in detail.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their response.
We agree with the reviewer that the computational cost of running the local planner poses a potential concern for running the model in real time. There have been prior works applying highly optimized planning to much larger horizons while running in real time on physical robots. For instance, [5] uses an MPPI planner that explores up to a horizon of 100 while running on a real robot.
Insofar as the plan horizon in our design makes real-time operation prohibitively expensive, we note that at test time, one can truncate the planner horizon to be much lower than at train time, thus allowing the model to operate efficiently in real time. At the end of training, the agent has an accurate Q function, which allows even a short-horizon planner to properly optimize the agent's infinite-horizon return.
To see this empirically, we took the final DMPS models (trained with an H=5 planner) from the double-gates+ environment and evaluated them with a truncated horizon. Using just an H=1 planner, our model received a reward of 10.86. Recall from the graph in our global rebuttal that an agent **trained** on double-gates+ with an H=1 planner only received a reward of 2.9. The jump in performance demonstrates both the necessity of a non-trivial horizon at train time and the sufficiency of a trivial horizon at test time. Using an H=2 planner at test time gives a reward of 12.47, which closely approximates the H=5 planner (as used in training) reward of 13.0.
The reviewer's concern about the divergence of planned trajectories from reality is also valid. Our work uses analytic models, and these have the potential to be inaccurate. Learning more accurate models is an orthogonal research direction that has been studied prior [1,2,3,4]. An interesting direction for follow-up work could investigate combining this line of research with our own.
[1] Atreya, P., Karnan, H., Sikand, K. S., Xiao, X., Rabiee, S., & Biswas, J. (2022). High-Speed Accurate Robot Control using Learned Forward Kinodynamics and Non-linear Least Squares Optimization. IROS 2022.
[2] Chua, K., Calandra, R., McAllister, R., & Levine, S. (2018). Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models. NeurIPS 2018.
[3] Karnan, H., Sikand, K. S., Atreya, P., Rabiee, S., Xiao, X., Warnell, G., … Biswas, J. (2022). VI-IKD: High-Speed Accurate Off-Road Navigation using Learned Visual-Inertial Inverse Kinodynamics. IROS 2022.
[4] S. Piche, B. Sayyar-Rodsari, D. Johnson, and M. Gerules, “Nonlinear model predictive control using neural networks,” IEEE Control Systems Magazine, vol. 20, no. 3, pp. 53–62, 2000.
[5] Williams, G., Wagener, N., Goldfain, B., Drews, P., Rehg, J. M., Boots, B., & Theodorou, E. A. (2017). Information theoretic MPC for model-based reinforcement learning. ICRA 2017. | Summary: This paper proposes a new method for safety shielding. More precisely, the authors extend Model Predictive Shielding (MPS), where an agent reverts to a safe backup policy if, for the next predicted state, this policy would not be able to guarantee safety anymore. MPS is often overly conservative, particularly in cases where the backup policy is very different from the final policy (for example, it may only consider breaking, while the final policy may be able to steer around an object). To improve this, whenever an action is potentially unsafe, the agent first uses a short-horizon planner to see if there exists some safe action that may be better than the one of the backup policy (i.e., one for which the backup policy could still recover in the future, but for which our learned agent predicts a higher reward). The authors formalize this framework and show recovery regret for this framework diminishes exponentially with the horizon. Next, they show that an implementation of this framework outperforms prior methods, both in terms of performance and the number of required shield invocations.
Strengths: The topic of the paper, safety shielding, is relevant and significant. Safe RL (and particularly, safety shielding) is a promising line of research but is often overly conservative in practice: the methods proposed in this paper take a step toward reducing this problem while still giving formal guarantees about safety. The topic is relevant for the NeurIPS community (particularly those interested in RL), both as a method that could immediately be used or to extend the method to more complex settings (i.e., with a stochastic/unknown model).
The paper is well-written and easy to read: the intuition behind the method is clear, and the analysis of the results is easy to follow. The framework is well formalized (using visualizations where helpful), and the given pseudo-code helps with reproducibility. The experiments are extensive and convincingly show the advantages of the proposed method.
Weaknesses: Apart from some minor remarks that I add below, this paper has one main weakness: it does not clearly indicate the computational complexity of its method nor the scalability. The results do not show computation times, and (as far as I could tell) no mention is made of either the average planning time or some time limit for this planning phase. From some ball-parking, the additional time required for this method may be significant (solving up to millions of short-horizon planning problems), so a quantification of this computational cost should be provided.
Some more minor remarks:
* The paper only mentions how the framework is implemented (i.e., what RL & planning method it uses) in the appendix: it would be nice to (briefly) mention this in the results section as well;
* In Table 2, the results of CPO and TD3 are not bold, even though some are equal to those of the best frameworks: this should be fixed;
* One limitation of the proposed framework is that it assumes the environment is deterministic: it would be nice to mention this in the limitations section.
Technical Quality: 3
Clarity: 4
Questions for Authors: As mentioned in the 'weaknesses' section, I have one main question: how does the computational complexity of your method compare to those of the benchmarks, particularly MPS? I will change my rating if this question is not adequately answered.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their many insightful comments and suggestions. We respond to their questions and concerns below.
**Questions over computational complexity**
We analyze the question of the planner’s computational cost in more depth in the global rebuttal. We restate a short summary of the global response here.
There is a tradeoff between the quality of the recovery plan, and the computational cost incurred in the planner searching for it. The look-ahead controls this tradeoff. In our implementation, we use MCTS in an anytime planning setting, where we devote a fixed number of node expansions to searching for a plan, and we choose the best plan found at the end of the procedure. The clock time needed would simply be linear in the allocated compute budget. However, the worst-case computational complexity to densely explore the search space, as in general planning problems, would be O(exp(H)) where H is the look-ahead length, since the planning search space grows exponentially. Since the remaining baselines do not do any kind of planning, they use constant time per timestep, though this comes at an optimality cost.
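For illustration, the anytime pattern described above (spend a fixed node budget, return the best plan found when the budget runs out) can be sketched with a simplified depth-first stand-in for MCTS. This is a simplified sketch, not our actual implementation; `expand` and `score` are hypothetical problem-specific callbacks:

```python
def anytime_plan(root, expand, score, budget=100):
    """Spend a fixed node budget exploring the search tree; return the
    best-scoring plan found when the budget is exhausted (anytime behavior)."""
    best_plan, best_score = [], score(root)
    frontier = [(root, [])]           # (state, action sequence leading to it)
    expansions = 0
    while frontier and expansions < budget:
        state, plan = frontier.pop()  # depth-first order; MCTS would sample
        expansions += 1
        if score(state) > best_score:
            best_plan, best_score = plan, score(state)
        for action, child in expand(state):
            frontier.append((child, plan + [action]))
    return best_plan, best_score
```

Note that increasing the look-ahead H multiplies the size of the tree that `expand` generates, which is the O(exp(H)) worst case noted above; the fixed budget caps wall-clock time at the price of possibly missing the optimum.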
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their extensive answers. Based on the other reviews and rebuttals, I have two comments:
Firstly, thanks for the detailed explanation on the computational complexity. I think the details about your particular implementation of the planner (i.e., using a Python implementation of MCTS with a node budget of 100 and horizon 5, leading to planning times of ~0.4s per timestep) are relevant to include in the experimental section of the paper to give readers an idea of the computation cost of using your method.
Secondly, I found your analysis of the effect of the planning horizon on performance interesting to read. Although you explain that the effect of the horizon on performance is limited, I still agree with reviewer BFvm that choosing a horizon that balances computational costs with performance could be tricky in more complex environments. I think this is worth adding to the limitation section of the paper.
I'd like to hear from the authors if they plan to make these changes in the revised version of the paper. Otherwise, I have no further questions or comments.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for highlighting the point regarding the effect of the planning horizon on performance. We will expand on this in the limitations section and incorporate the key discussion points (choosing a horizon that balances computational costs with performance could be tricky in more complex environments), and will add the detailed analyses and graphs to the appendix to provide further information on this topic. We will also add the highlighted implementation details in the Experiments section. | Summary: The authors introduce Dynamic Model Predictive Shielding (DMPS) an extension of Model Predictive Sheilding (MPS) that adress some of its key limitations, such as overconservatism when deploying the backup policy which consequently hinders exploration of the neural 'task' agent and slows down conergence. The key innovation of DMPS is that it incoropoates a local planner for dynamic recovery that leverages both a pre-computed backup policy and the neural 'task' policies Q values for planning for short and long horizon returns while maintaining safety of the system. DMPS is a provably safe reinforcement learning (PSRL) method, meaning that it guarantees safety of the system (regardless of the underlying neural 'task' policy) by only exploring within the set of safe and recoverable states defined by the backup policy. This realised by planning for a fixed n step trajectory and checking whether the system can be recovered by the backup policy given the action proposed by the agent. The authors demonstrate that DMPS outperforms MPS and other baselines in terms of performance and safety in various benchmarks. It also emphasizes the importance of aligning reinforcement learning agents with real-world safety requirements, while discussing some of the limitations of their approach.
Strengths: The paper has several strengths: I find that the paper is very well written and easy to follow, with sufficient details in necessary places and abstractions in other places where the details may not immediately matter, as such, it is a very nice read. The theoretical analysis of the recovery regret is convincing and interesting. Furthermore, the overall framework is attractive from the point of view that it is provably safe, something I personally find is crucial for deploying RL in the real world, rather than safe at convergence or in expectation like a lot safe RL methods. I find that the dynamic planning module is an innovative solution to the intuitive issue faced by most shielding methods (Figure 2) and I feel that this work constitutes a step in the right direction for improving shielding methods and making them more practical. The experimental evaluation I feel is strong and thorough as in most cases DMPS clearly outperforms MPS and REVEL, although I think it is missing something (see weaknesses).
Weaknesses: The key weakness of the PSRL framework is the reliance on a perfect (or sufficiently accurate) dynamics model of the environment, the safety performance of the backup policy, and the computation of the safe invariant set. In contrast to the first shielding approaches for RL [1], which operate primarily on discrete state and action spaces, DMPS does not need to compute the shield before learning can begin, which significantly reduces the engineering overhead before training. This of course comes at a cost: in practice, the shields in [1] are substantially more lightweight during "inference" (although in theory there could be exponential blow-up), in part because they operate only on discrete or discretized state/action spaces, but also because a lot of the necessary computation is done beforehand. This is a key limitation of DMPS, as it relies on planning at each timestep, which might be costly and infeasible for on-board computation or edge devices. Furthermore, it seems that a significant amount of prior knowledge is still required for DMPS to work effectively: first, we have to have a "perfect" dynamics model (for provable guarantees); second, I presume we need to handcraft a backup policy and then compute its invariant safe set so as to plan for recoverability. The first limitation is mentioned in the paper but not really discussed in much detail; the second limitation I find is crucial and I don't think is really mentioned in the paper. In particular, it is a non-trivial challenge to come up with a backup policy that has a maximal safe invariant set. Perhaps for the environments the authors consider it is easy (just decelerate), but for more dynamic environments, and in general, this is not the case, and I feel that more discussion of both these limitations (i.e., the limitations of the PSRL setting) is needed.
While I find the experimental evaluation compelling, I feel it is slightly misleading and is missing something. In Table 2, CPO and TD3 score the same or higher on a few of the static benchmarks, but their scores are not in bold; is there a reason for this that I am missing? I also feel that a comparison to PPO-Lag or DDPG-Lag would make the results that bit more convincing.
All that being said, in principle I advocate for acceptance of this paper.
[1] Alshiekh, Mohammed, et al. "Safe reinforcement learning via shielding." Proceedings of the AAAI conference on artificial intelligence. Vol. 32. No. 1. 2018.
Technical Quality: 3
Clarity: 4
Questions for Authors: Most of my questions are technical:
For each of the environments you consider, how are the backup policies constructed and how are the invariant safe sets determined?
For each of the environments, what is the maximum number of steps needed to decelerate the vehicle to zero or avoid obstacles, and is your choice of n=5 sufficient?
What would be suitable ways of modelling the environment from experience to obtain uncertainty estimates, for example would Gaussian Process modelling suffice?
Do you assume any observation noise or just perfect access to the current state, if not how would you incorporate this into your framework?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their many insightful comments and suggestions. We respond to their questions and concerns below.
**Construction of backup policies and determination of the invariant sets**
In static environments, the backup policy involves braking as hard as possible. In dynamic environments, the agent checks if it is in an obstacle's trajectory; if so, it accelerates away, otherwise, it brakes. The backup policy is straightforward, aiming to halt the agent quickly. It can be determined through basic understanding of environment dynamics, or by training a neural backup policy.
The Model Predictive Shielding (MPS) framework automatically synthesizes the invariant safe set from the backup policy. The MPS framework defines “stable states” where safety is guaranteed. In static environments, stable states are those where the agent has zero velocity; in dynamic environments, the agent must have zero velocity and be outside the trajectory of all obstacles. A state s is within the invariant safe set ($S_{rec}$) if a trajectory from s to a stable state can be found by forward-simulating the backup policy without violating safety conditions.
The invariant set's construction is automatic, requiring no manual engineering, and is generally sufficient for most safety problems. States not within $S_{rec}$ that could allow safe navigation are usually undesirable, as they are too close to obstacles for braking to be safe.
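For illustration, the forward-simulation check described above can be sketched in Python. This is a simplified sketch, not our actual implementation; `backup_action`, `step`, `is_safe`, and `is_stable` are hypothetical stand-ins for the environment interface:

```python
def is_recoverable(state, backup_action, step, is_safe, is_stable, max_steps=100):
    """Check whether `state` is in S_rec by rolling out the backup policy:
    the rollout must stay safe at every step and reach a stable state."""
    for _ in range(max_steps):
        if not is_safe(state):
            return False          # safety violated along the rollout
        if is_stable(state):
            return True           # reached a stable (e.g. zero-velocity) state
        state = step(state, backup_action(state))
    return False                  # no stable state reached within the budget
```

For example, with double-integrator dynamics and a braking backup policy, states far from an obstacle pass this check, while states moving too fast and too close to the obstacle fail it.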
**Maximum number of steps needed to decelerate the vehicle and sufficiency of n=5**
The planning horizon doesn't need to cover all steps to decelerate the vehicle to zero. Each state in MCTS expansions is verified to be in $S_{rec}$ through forward simulation, ensuring every state in the search is safe. This simulation isn't part of the planning horizon.
For context, it takes a maximum of 10 timesteps to decelerate to a stop in both the differential drive and double-integrator dynamics.
**Suitable ways to model environment noise**
We assume the environment is deterministic with no observation noise. However, in RL, Gaussian modeling can effectively handle observational uncertainty, as shown in [2]. More complex models, like uncertainty-aware deep learning, have also been explored [1].
**Experiments**
We thank the reviewer for their comments on the experimental results. We have included PPO-lag as a baseline and corrected the bolded numbers in the tables. The tables are attached in the global rebuttal.
[1] Chua, K., Calandra, R., McAllister, R., & Levine, S. (2018). Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models. In S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada (pp. 4759–4770). Retrieved from https://proceedings.neurips.cc/paper/2018/hash/3de568f8597b94bda53149c7d7f5958c-Abstract.html
[2] Kaufmann, E., Bauersfeld, L., Loquercio, A., Müller, M., Koltun, V., & Scaramuzza, D. (2023). Champion-level drone racing using deep reinforcement learning. Nat., 620(7976), 982–987. doi:10.1038/S41586-023-06419-4
---
Rebuttal Comment 1.1:
Comment: Thanks for these insights; they are really helpful. Thanks also for including PPO-Lag in your experiments. I see this now as quite a strong paper and will raise my score to 7.
Final question: is the safe invariant set $S_{rec}$ computed automatically before training (with the backup policy), or is it computed on-the-fly by the dynamic planner?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response and for raising their score! The invariant set is computed on-the-fly, though the overhead for checking admittance in $S_{rec}$ is lightweight for individual states. | Summary: The approach called dynamic model-predictive shielding for safe reinforcement learning is proposed as an improvement over its static counterpart. The main idea is to optimize for expected return on action with respect to the reinforcement-learning task when choosing a shielding backup action, and to incorporate planning horizon prediction into learning for the policy to learn to avoid unsafe actions. This dynamic version is evaluated on several static and dynamic obstacle-avoidance benchmarks and compared to static model-predictive shielding and three more planning-based approaches.
Strengths: The core idea of the approach is interesting and potentially valuable: to achieve synergy between safety and optimal performance in model-predictive shielding by incorporating planning into policy learning and taking expected performance into account during backup planning. Similar attempts have been made previously. In comparison, this work proposes a novel notion of "recovery regret" as a heuristic to guide the mutual integration of planning and reinforcement learning.
The strength of the paper is in extensive evaluation and comparison to multiple approaches. The notion of recovery regret can also be of independent interest for model-predictive shielding research. Dynamic shielding outperforms other approaches in the evaluation in terms of the number of shielding invocations, which indicates synergy between planning and learning over time.
Weaknesses: Potential weaknesses of the approach are in scalability of the planner and tightness of the probabilistic bounds on safety.
Minor:
- "more optimal recovery approach" --> an optimal/a better
Technical Quality: 3
Clarity: 3
Questions for Authors: Questions
1. In Figure 1, what are green and red lines, and a blue blob?
2. How does the local planner scale with respect to the look-ahead?
3. Does the local planner have to recompute the look-ahead prediction every time it is invoked or does it reuse previous results if the agent continues along the same trajectory?
4. What is the overhead of the planner's computations?
5. How does the planning limit affect safety and optimality guarantees?
6. MCTS typically struggles to plan for overly constrained problems and complex planning tasks. How does the approach scale with respect to the planning task complexity?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors explicitly discuss limitations which are fairly common for problem domain.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their many insightful comments and suggestions. We respond to their questions and concerns below.
**Figure 2 clarification**
We assume that the reviewer’s question refers to Figure 2. Figure 2 illustrates the example described in the text; in particular, part (d) of the figure is discussed in Section 5 (Lines 208-215). The red and green lines represent two locally optimal paths in the planning tree. The blue blob represents a water puddle that would result in lower returns if crossed. The planner selects the green path here, informed by its access to the long-term $Q$ function.
**Reusing lookahead predictions in the local planner**
The planner is only used when the neural policy suggests an action that cannot be certified as recoverable. Reusing prior results is beneficial only if the planner is invoked consecutively due to the base policy acting recklessly. In such cases, previous search results can be reused, similar to sub-tree reuse in the POMCP algorithm [2]. Our current implementation does not include this feature, but it can be easily added.
**Planner overhead**
Beyond the computational cost of running the planner (discussed above), there is minimal overhead in our implementation. To minimize the overhead, we designed the planner to leverage the environment's simulated transition function, so that the state and action spaces of the planning problem and the RL formulation are identical. Thus, when DMPS needs to invoke the planner, it just initializes a planning environment with the current world state, and the planner directly queries the transition function internally during its search.
**Effect of planning limit on safety and optimality guarantees**
When using a planner with proven completeness (or probabilistic completeness) properties, DMPS can always guarantee a safe recovery plan, given enough planning time. However, due to finite computational limits, the planner might not find a solution in a fixed time. Thus, DMPS can rely on its fallback policy when the planner fails to provide a valid solution (branch 4 in Figure 1). As such, the incorporation of a planner in DMPS does not affect the provable safety guarantees. In practice, we find that DMPS never uses the fallback policy in our experimental settings.
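For illustration, the per-step decision logic described above (use the neural action if certifiably recoverable, otherwise invoke the planner, otherwise fall back) can be sketched as follows. This is a simplified sketch, not our actual implementation; all function arguments are hypothetical stand-ins for the components discussed in the paper:

```python
def shielded_action(state, policy_action, recoverable, plan, backup_action):
    """One control step of the shield: prefer the neural policy, then the
    local planner, and only then the provably safe backup policy."""
    a = policy_action(state)
    if recoverable(state, a):
        return a                       # neural action certified recoverable
    recovery_plan = plan(state)        # invoke the local planner
    if recovery_plan:
        return recovery_plan[0]        # planner found a safe recovery plan
    return backup_action(state)        # fallback preserves the safety guarantee
```

Because the fallback branch is always available, a planner timeout degrades optimality but never safety, which is why the incorporation of a planner does not affect the provable guarantees.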
The relationship between the optimality guarantee and the planning depth is outlined in Theorem 5.1, with more specific statements and proofs given in Appendix A.1. The planner’s regret decays exponentially as a function of the planning depth.
**Scaling of approach with respect to planning task complexity**
As the reviewer noted, MCTS struggles with overly constrained problems, making the choice of planner crucial. We used MCTS in our evaluation due to its wide applicability, but some domains may have more effective planners. For instance, Informed RRT* excels at motion planning in constrained environments with dense clusters [1].
[1] Gammell, J. D., Srinivasa, S. S., & Barfoot, T. D. (2014). Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic. 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014, Chicago, IL, USA, September 14-18, 2014, 2997–3004. doi:10.1109/IROS.2014.6942976
[2] Silver, D., & Veness, J. (2010). Monte-Carlo Planning in Large POMDPs. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, & A. Culotta (Eds.), Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada (pp. 2164–2172). Retrieved from https://proceedings.neurips.cc/paper/2010/hash/edfbe1afcf9246bb0d40eb4d8027d90f-Abstract.html
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed answers and proposed extensions. It would be beneficial to include this discussion into the paper. I have no further questions.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their acknowledgment. We plan to expand the limitations section with points from the Rebuttal and add the detailed analyses and graphs to the Appendix to provide further information. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful feedback. We summarize the responses to common questions.
**Computational cost of the planner**
There is a tradeoff between the quality of the recovery plan and the computational cost the planner incurs in searching for it. The look-ahead controls this tradeoff. In general, DMPS can leverage any planner available for the domain, and the computational cost of the planner with respect to the look-ahead will depend on the exact choice of planner and its hyperparameters.
In our implementation, we use MCTS in an anytime planning setting, where we devote a fixed number of node expansions to searching for a plan, and we choose the best plan found at the end of the procedure. The clock time needed would simply be linear in the allocated compute budget. However, the worst-case computational complexity to densely explore the search space, as in general planning problems, would be O(exp(H)) where H is the look-ahead length, since the planning search space grows exponentially.
Our implementation used MCTS with 100 node expansions, a plan horizon of 5, and a branch factor of 10, which amounts to exploring 1000 states in total. On average, we found that when the shield is triggered, planning takes 0.4 seconds per timestep. The code we used is unoptimized, written in Python, and single-threaded. Since MCTS is a CPU-intensive search process, switching to a compiled language such as C++ would likely yield significant speed improvements, and distributing computation over multiple cores would further slash the compute time by the number of assigned cores.
Figure 1 shows the number of simulator state expansions needed by a search-based planner as a function of horizon length in the double-gates+ (double integrator) environment. Each node expansion requires equal computation time, hence this graph illustrates how the computation cost of the planner scales as a function of horizon length. Since we use an anytime planner, the number of actual states expanded (and hence the computational cost) is capped by a hyperparameter that controls the planner computation budget.
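To illustrate how a fixed node-expansion budget caps the planner's cost, here is a simplified sketch of an anytime planner. It uses random shooting over sampled continuous actions rather than full MCTS, and all names, the 1-D action space, and the greedy action choice are our simplifications for illustration, not the paper's implementation:

```python
import math
import random

def anytime_plan(root_state, step, reward, budget=100, branch=10, horizon=5, gamma=0.99):
    """Toy anytime search: expand up to `budget` nodes, keep the best plan found.

    `step(state, action)` is a stand-in for the environment's deterministic
    simulated transition function, and `reward(state, action)` the per-step
    reward. The budget caps total computation regardless of the horizon.
    """
    best_plan, best_score = None, -math.inf
    expansions = 0
    while expansions < budget:
        state, plan, score = root_state, [], 0.0
        for t in range(horizon):  # roll out one candidate plan
            candidates = [random.uniform(-1.0, 1.0) for _ in range(branch)]
            a = max(candidates, key=lambda act: reward(state, act))  # greedy pick
            score += (gamma ** t) * reward(state, a)
            state = step(state, a)
            plan.append(a)
            expansions += 1
            if expansions >= budget:
                break
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan, best_score
```

Because the loop stops at the expansion budget rather than at a fixed depth, wall-clock time stays linear in the budget even though the underlying search space grows exponentially with the horizon.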
**Sufficiency of planning horizon H=5**
Our algorithm is designed to ensure that the planner accounts for both short-term and long-term objectives. As detailed in Section 5, the objective of the planner consists of two terms: 1) the discounted sum of the first $H$ rewards, and 2) the estimated long-term value of the penultimate step $s_H$ in the plan horizon, as determined by the agent’s $Q$ function.
The second part of the objective function is specifically included to ensure that the planner does not return myopic plans, and accounts for progress towards the long-horizon goal. Since the planner optimization objective includes this second term, even a small-horizon planner can output actions with proper awareness of long-horizon recovery events. The length of the horizon affects how close to the globally optimal solution the result is, with a tradeoff of computational cost.
To see this empirically, we reevaluated the double-gates+ environment (double integrator dynamics) with horizons of 1, 5, and 9. The graphs of attained reward and shield triggers from this experiment are attached as Figure 2 in the pdf. Comparing the H=1 and H=5 agents, the H=5 agent substantially outperforms the H=1 agent in both shield triggers and reward. Comparing H=5 and H=9, the H=9 agent reached high performance and low shield triggers faster than the H=5 agent. However, both the H=5 and H=9 agents converge to the same performance eventually.
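The two-term planner objective described above can be sketched as follows. Function and variable names are ours, not the paper's; we assume a scalar $Q$-estimate for the state $s_H$ reached at the end of the $H$-step plan:

```python
def plan_objective(rewards, q_value, gamma=0.99):
    """Score a candidate H-step plan: the discounted sum of the H rewards
    collected along the plan, plus the discounted long-term value estimate
    Q(s_H) of the state reached at the end of the horizon."""
    H = len(rewards)
    short_term = sum((gamma ** t) * r for t, r in enumerate(rewards))
    long_term = (gamma ** H) * q_value
    return short_term + long_term
```

The second term is what keeps a short-horizon plan aware of long-horizon recovery: even with small $H$, plans that end in states with poor long-term value score badly.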
**Experimental results**
We thank reviewer NRWz for suggesting the inclusion of PPO-Lagrangian as an additional baseline. We have evaluated it on all environments and appended the values to the tables of results. The new tables are attached to this response. Similar to CPO, PPO-lag was unable to make headway on the dynamic environments. We also correct bolding errors in the tables. In Table 1, DMPS, MPS, and REVEL are provably safe RL methods while CPO, PPO-lag, and TD3 are not. The table shows shield triggers for the former and safety violations for the latter.
**Concerns about assumption of perfect information and determinism**
The majority of prior work on PSRL assumes perfect information [1,2,3,4,5]. While traditional model-based RL methods can learn environment models even under partial observability/imperfect information, they cannot provably guarantee safety and are thus not suitable for tasks where safety violations are completely unacceptable.
Much prior work on provably safe locomotion also assumes determinism [3,4,5], with some such algorithms even having been deployed on real robots [3]. However, we agree with reviewers that extending our DMPS approach to stochastic environments is a promising direction for future work. In particular, since prior MPS work has been extended to work in stochastic settings [1,2], we believe our DMPS approach can be similarly extended to the stochastic setting as well.
[1] Anderson, G., Verma, A., Dillig, I., & Chaudhuri, S. (2020). Neurosymbolic Reinforcement Learning with Formally Verified Exploration. NeurIPS 2020
[2] Bastani, O., & Li, S. (2021). Safe Reinforcement Learning via Statistical Model Predictive Shielding. In D. A. Shell, M. Toussaint, & M. A. Hsieh (Eds.), RSS 2021.
[3] Vinod, A. P., Safaoui, S., Chakrabarty, A., Quirynen, R., Yoshikawa, N., & Cairano, S. D. (2022). Safe multi-agent motion planning via filtered reinforcement learning, ICRA 2022
[4] Zhang, W., & Bastani, O. (2019). MAMPS: Safe Multi-Agent Reinforcement Learning via Model Predictive Shielding. CoRR, abs/1910.12639.
[5] Zhu, H., Xiong, Z., Magill, S., & Jagannathan, S. (2019). An inductive synthesis framework for verifiable reinforcement learning, PLDI 2019
Pdf: /pdf/5d60a48e90379d734a0f507d230d78d83df7f091.pdf | NeurIPS_2024_submissions_huggingface | Summary: The paper seeks to address provably safe RL problems where safety must be ensured even during training. It proposes DMPS, which enhances the prior Model Predictive Shielding (MPS) approach, to dynamically select safe actions when danger is imminent. DMPS employs a local planner to plan recovery actions, and the planner objective consists of both short-term and long-term rewards. Feedback from the planner can then be used to incrementally train the neural policy to guide it towards the safe policy set.
Strengths: 1. Quality
* Overall, the approach described in the paper is sound and it combines many established components (e.g. backup policy, local planner, estimate future reward using model unrolling and Q-estimate) to facilitate safe RL.
* The paper provides a theoretical bound on the recovery regret as the sampling limit in the local planner approaches infinity.
2. Clarity
* The paper is written in a clear and lucid manner. The figures, algorithm and equations are structured in a way that is easily understandable to the readers.
Weaknesses: 1. Originality
* The main difference between DMPS and MPS is the use of a local planner when the backup policy is triggered. The technical approach used in DMPS is not particularly new, as there are already similar approaches that estimate a safety Q-value and perform planning based on it [1, 2].
2. Significance
* The only difference between DMPS and the prior MPS seems to be the local planner and (as discussed in point 1) this local planner is not particularly novel. Having said that, I do agree that the proposed DMPS does show improvement over MPS in some experiment scenarios.
* While the paper mentions a small planning horizon is sufficient for the local planner to plan safe yet rewarding actions, I feel that this may not be true in most cases. To steer the agent back to safety (and yet remain rewarding), a long sequence of actions may be required. If the planning horizon is set too small, then DMPS falls back to the backup policy and the performance would be the same as MPS. In this case, I guess the only solution is to increase the planning horizon, which in turn increases the computational overhead of DMPS exponentially.
* The local planner requires perfect information of the transition dynamics, and the transitions must be deterministic. This may restrict its applicability, especially given that there is prior work on model-based RL where the transition can be stochastic and learned instead.
References
[1] Clavera, I., Fu, Y. and Abbeel, P., Model-Augmented Actor-Critic: Backpropagating through Paths. In International Conference on Learning Representations.
[2] Thomas, G., Luo, Y. and Ma, T., 2021. Safe reinforcement learning by imagining the near future. Advances in Neural Information Processing Systems, 34, pp.13859-13869.
[3] Nagabandi, A., Kahn, G., Fearing, R.S. and Levine, S., 2018, May. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE international conference on robotics and automation (ICRA) (pp. 7559-7566). IEEE.
[4] Chua, K., Calandra, R., McAllister, R., and Levine, S. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems. 2018.
[5] Janner, M., Fu, J., Zhang, M. and Levine, S., 2019. When to trust your model: Model-based policy optimization. Advances in neural information processing systems, 32.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the first sentence of Section 6.2, you mentioned that the total return is averaged over the last 10 training episodes. Are you evaluating them using the same (single) random seed and only using the last 10 training episodes for evaluation?
2. Given that both the states and actions are continuous, how do you apply MCTS to the local planner?
3. TD3 only maximizes a single reward objective. In your experiments, I guess you performed some sort of reward shaping for it to balance between safety and reward. Can you elaborate on how you incorporate safety into its objective, and whether any weighting is used?
4. Similarly for CPO, how do you incorporate safety into it? Do you specify a safety violation constraint?
5. (Related to Qn 3 & 4) I am surprised that TD3 and CPO rapidly overfit to conservative policies in dynamic environments. What do you think is the reason, and is the weighting between safety and reward dynamically tuned?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper discussed the known environment model as its limitation. I agree that this is a limitation which warrants further investigation. As studied in [5], it is very challenging for a model to accurately predict future trajectories over long horizons. Since DMPS relies on having an accurate environment model for safety adherence, this may limit its applicability to practical scenarios where the environment model is not given and needs to be learned.
Another related point is that it is unclear what value to set for the local planner horizon. Different tasks may require corrective action sequences of different lengths. Setting the horizon too short may revert DMPS performance back to MPS, and setting the horizon too long may increase the computational overhead beyond an acceptable threshold.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their many insightful comments and suggestions. We respond to their questions and concerns below.
**Concerns over originality and significance of DMPS**
The main novelty of our approach is the synergistic relationship between a local planner tasked with finding good recovery paths around unsafe regions and a neural policy that can learn from the planner's recovery strategies to navigate unsafe regions on its own. We thank the reviewer for highlighting the related papers (which we will cite in the revision), but note that both papers take quite different approaches from DMPS:
The first paper does not deal with safe navigation, focusing instead on performance optimization in general RL settings. In that work, the training of the policy is disconnected from the planner; they first train a policy using their MAAC algorithm and directly insert it into an MPC planner. However, in the PSRL setting, adding a planner after the fact limits the performance of the policy since it was not trained to avoid safety-violating scenarios while simultaneously optimizing the task objective. The synergistic relationship between planner and policy in our approach avoids this problem.
The second paper deals with safe navigation, but it does not use planning, instead taking a model-based RL approach. The key insights of this paper are dynamically adjusting the negative penalty for safety violations such that the penalty can "carry over" a discounted horizon, and using a model to synthetically generate more rollouts in training.
**Reporting of average return in the experimental section**
The mean is computed with randomized seeds. We take five random seeds, and for each random seed, we compute the mean return over the last ten training episodes. We then report the average score over the five random seeds. The standard deviation reported is across the seeds, not the episodes. We will clarify this in the text.
**Using MCTS in continuous settings**
We apply MCTS to the continuous domain following a strategy similar to previous works such as [2,3]. We handle the continuous action space by introducing a branch factor hyperparameter $B$ and sampling $B$ continuous actions from the action space at each node expansion.
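A minimal sketch of one such node expansion, assuming a 1-D uniform action space for illustration (function names and the sampling distribution are our assumptions):

```python
import random

def expand(state, step_fn, branch_factor=10, low=-1.0, high=1.0):
    """One node expansion for continuous-action tree search: sample
    `branch_factor` actions uniformly from the action space and build one
    child node (action, next_state) per sampled action, using the
    deterministic transition function `step_fn`."""
    children = []
    for _ in range(branch_factor):
        a = random.uniform(low, high)
        children.append((a, step_fn(state, a)))
    return children
```

Each expansion thus contributes exactly `branch_factor` new states, which is how the figures quoted above (100 expansions, branch factor 10, 1000 states) relate to one another.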
**Reward shaping in TD3 to incorporate safety priorities**
We tested two variants: one where TD3 received a negative reward penalty and ended the episode upon the first unsafe action, and another where each unsafe action incurred a penalty. The former led to frequent crashes, so we chose the latter. We set the penalty magnitude equal to the positive reward for completing the environment. Hyperparameter tuning showed that reducing the penalty too much led the policy to ignore safety constraints, while beyond a certain point, the penalty magnitude had little effect on behavior.
**CPO safety parameters**
CPO has a hyperparameter specifying the maximum tolerable number of safety collisions. Given our goal is provably safe RL, we set this parameter to 1 following the methodology of REVEL [1]. We also tested higher tolerance values to ensure CPO wasn't hindered by the safety constraint but saw no performance improvement, so we maintained our original setting.
**Overfitting of TD3 and CPO**
In dynamic environments, reaching the goal requires safely avoiding obstacles. Hyperparameter tuning showed that low penalties for unsafe actions lead agents to tolerate collisions, while high penalties make them overly conservative. This indicates that avoiding obstacles while simultaneously achieving task goals is complex and can't be learned through reward signals alone.
We did not use dynamic tuning. In CPO, the safety constraint is separate from the reward, so dynamic tuning was not possible. In TD3, changing the reward function mid-training is theoretically unsound and would likely destabilize the learned Q function, as it minimizes loss using a replay buffer with rewards spanning 10^6 timesteps.
[1] Anderson, G., Verma, A., Dillig, I., & Chaudhuri, S. (2020). Neurosymbolic Reinforcement Learning with Formally Verified Exploration. NeurIPS 2020
[2] Hubert, T., Schrittwieser, J., Antonoglou, I., Barekatain, M., Schmitt, S., & Silver, D. (2021). Learning and Planning in Complex Action Spaces. ICML 2021
[3] Rajamäki, J., & Hämäläinen, P. (2019). Continuous Control Monte Carlo Tree Search Informed by Multiple Experts. IEEE Trans. Vis. Comput. Graph., 25(8), 2540–2553.
---
Rebuttal 2:
Comment: I'd like to thank the authors for providing a detailed and to-the-point rebuttal.
The two papers I quoted [1, 2] are part of the safe RL literature rather than the PSRL domain. The domains addressed aren't necessarily the same as DMPS, but I think they do show that the usage of safety Q-values and look-ahead has been well investigated and isn't particularly original.
After reading the global response on the sufficiency of H=5, I still think that a small planning horizon may not be sufficient in most cases. While it may be true in the domains tested in this paper, where a shorter sequence of actions is sufficient to ensure safety, other types of domains may require longer sequences of actions to steer the agent towards a safe (yet rewarding) region. Increasing the look-ahead horizon in this case may increase the computational overhead exponentially.
However, the authors did provide sufficient references showing that their setting (perfect information of transition dynamics and the transition must be deterministic) is common in PSRL literature. Since this work is likely to benefit the PSRL community, I'm willing to increase my final rating.
UPDATE: I've increased the final rating in the Official Review.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their valuable feedback and we will revise the paper accordingly. We also thank the reviewer for raising their score! | null | null | null | null | null | null |
A Polar coordinate system represents syntax in large language models | Accept (poster) | Summary: This paper proposes polar probes, a kind of structural probe that learns a distance and rotation function that can more accurately classify syntactic structure from language model representations than previous approaches. In particular, the question of whether direction can represent the type of syntactic relationship is answered. The authors find that their polar probe outperforms previous methods quite significantly.
Strengths: This paper is very well presented and a pleasure to read. The empirical findings are strong and clearly support the hypothesis that the direction, as well as the angle, of an LM's representations projected onto a plane represents the syntactic relationships encoded by the model. The authors show that this interpretation is able to reconstruct ground-truth syntactic parses from hidden-state representations much more strongly than structural probes. The controlled dataset provides a clean comparison in a challenging setting and is a useful resource for future work. A major finding of this work is that it vastly raises the upper bar for how well we should consider syntax to be encoded by language models.
Weaknesses: Weaknesses like the focus on dependency parses and drawbacks of current tokenizers are addressed in limitations, but are still weaknesses nonetheless.
Please include UUAS, LAS and Balanced Accuracy for the evaluation on the controlled dataset separately for comparison.
As thorough as this paper is, I think it could go deeper on the model analysis. It's nice that the layer-wise analysis is consistent with previous work, but this would be mostly expected. For example, could the authors show that models of different sizes capture more/less syntactic complexity? Is there a critical point where syntax becomes well represented and gains are diminishing after more scaling? Do larger models capture more of the "tail" of rare syntactic constructions? This could be carried out on the GPT2 or Pythia family of models.
Nits:
- please make the text in the legend/axis labels for figure 3 bigger
- Typo L36: "proposed settle"
Technical Quality: 4
Clarity: 4
Questions for Authors: N/a
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer rWeD for their insightful and constructive comments.
### Controlled dataset
We agree that reporting performance metrics such as UUAS, LAS, and Balanced Accuracy on the controlled dataset is an important addition. To address this, we will include a new figure in the appendix of the camera-ready version of the manuscript. This figure will display the UUAS, LAS, and Balanced Accuracy for each of the gold edges in the main phrase, with error bars representing standard error. (The figure is available as Figure 4 in the author rebuttal attached pdf)
Our results reveal that the degradation in UUAS and LAS is primarily due to the addition of relative clauses and prepositional phrases. Specifically, the accuracy for the ‘nsubj’ relation decreases by approximately 50% after adding these constituents. Unexpectedly, we also observe a degradation of the same magnitude in the ‘obj’ relation, despite the constant distance between words in the ‘obj’ relation across the three sentence levels.
### LAS and model size
In response to the reviewer’s recommendation, we have conducted a study on how LAS varies with model size across different families (Pythia, GPT-2, BERT, and SOTA). The updated figure, which will be added to the camera-ready version, demonstrates a clear trend: larger models generally achieve higher LAS scores. (The figure is available as Figure 1 in the author rebuttal attached pdf)
### LAS and sentence complexity
We also concur that exploring how performance degrades with sentence complexity is valuable. Our new analysis shows how performance varies with sentence length and depth, indicating that larger models are better at capturing the tails of the syntactic complexity distribution. (The new figures are available as Figure 2A and 2B in the author rebuttal attached pdf)
Finally, we will correct the typos and increase the font size in the legends and axis labels of Figure 3 in the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Thank you for the reply. Interesting rebuttal results
Comment: Thank you to the authors for their response and additional experiments. I appreciate the work put into the rebuttal to address the finer-grained questions that I think this work raises. I have read the other reviews and reconsidered the paper, and I'm raising my score from a 7 to an 8. Other reviewers raised good points (baselines, Muller-Ebstein), but I agree with the authors' responses to these/the authors agree to reasonable proposed changes. I see a lot of value in this approach for model interpretability and testing linguistic hypotheses in models of language. I would like to see this paper published | Summary: Whereas prior work (Hewitt and Manning 2019) probed syntactic distance and depth, this work proposes to push that forward by also probing headedness and dependency type. Specifically, it does not probe those three separately, but aims for a single vector space where Euclidean distance defines syntactic distance while the difference vector maps to a UD dependency label (optimized with a contrastive learning objective).
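One way to read this single-space idea as code is the illustrative sketch below: score a dependency label by the cosine similarity between a projected word-pair difference vector and a per-label reference direction. The projection matrix `B`, the label direction matrix, and the scoring scheme are our assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def relation_logits(h_head, h_dep, B, label_dirs):
    """Project the two word representations with a (learned) matrix B,
    then score each dependency label by the cosine similarity between the
    projected difference vector (dependent minus head) and that label's
    reference direction. Returns one score per label."""
    diff = B @ (h_dep - h_head)
    diff = diff / (np.linalg.norm(diff) + 1e-8)
    dirs = label_dirs / (np.linalg.norm(label_dirs, axis=1, keepdims=True) + 1e-8)
    return dirs @ diff  # cosine score per dependency label
```

Under this reading, the vector's length encodes syntactic distance while its direction selects the label, which is exactly the "polar" decomposition the summary describes.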
Strengths: It is a pretty well-written paper, and the framing of the angular probe idea seems well explained and to have some elegance to it (in aiming for a single underlying representation); parts of the implementation seem well-considered to get that single representation.
Weaknesses: - If viewed merely as a combination of probing structure and labeling, it is very similar to a work like Muller-Eberstein et al. 2022. The advantage of this paper - having more of a shared representation -- is appealing, but I wish the consequences of that shared space were better explored.
- Analysis was somewhat lacking: for a probing paper, there was relatively little work showing what this tells us about the syntactic behavior of models.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Have the authors looked at extended dependencies? The notion of differences as dependency type seems more specific (and to imply more interesting consequences) if there are multiple "paths" through a tree.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank tm4o for their thorough constructive feedback.
We agree that our discussion of the work by Muller-Eberstein et al. (2022) was not sufficiently detailed, which may have made the novelty of our contributions seem less apparent.
To rectify this, we have expanded our discussion and incorporated four new analyses.
* "Our research shares similarities with Muller-Eberstein et al. (2022), who employed linear probing to classify dependency types in language models (LMs). However, our objectives, method, data and results differ significantly. First, while their study aims to offer 'a lightweight method [to] guide practitioners in selecting an LM encoder for the structured prediction task of dependency parsing,' our goal is to understand how syntactic structures are represented in LMs.
* Second, their method consists of optimizing three separate probes—for structure, labels, and depth—and integrating them with a complex heuristic. In contrast, our method employs contrastive learning to optimize a single probe, trained to identify, in a variety of LMs, a unique subspace that encapsulates all properties of dependency trees.
* Third, the datasets we analyze are not the same. Muller-Eberstein et al. focused on evaluating their probe’s performance on undifferentiated natural sentences. Our study extends beyond this by examining different types of sentences, e.g. synthetic sentences specifically designed to highlight key linguistic phenomena, such as nested structures.
In summary, our study builds upon prior research (e.g., Hewitt and Manning, Muller-Eberstein 2022) to offer a more unified, compact, and interpretable representation of dependency trees in contemporary language models."
To substantiate these elements, we now show, in a revised figure (Figure 3 in the attached pdf), that sentences with similar syntactic structures are represented by the same coordinate system.
“In the space of the Polar Probe, the representation of dependency trees appears to be consistent across sentences’ length and substructures. For example, the coordinates of the main phrase are virtually identical whether it is attached to a prepositional phrase and/or a long-nested structure. This invariance supports the notion that dependency trees are represented by a systematic coordinate system that can be recovered with the Polar Probe.”
Relatedly, we now provide new analyses to clarify the representations of subtrees.
“The analysis of synthetic sentences shows that relative and long-nested substructures are represented, for the most part, less precisely than the main phrases. Similarly, the analysis of natural sentences shows that Label Attachment Scores (LAS) decrease with the length and syntactic depth of the sentence (Figure 2A and B in attached pdf). Overall, these results suggest that the Polar representations of dependency trees lose precision as sentences become increasingly complex. This phenomenon is consistent both with the models' behavior (Lakretz et al., Linzen et al.) and with human behavior (Silverman & Ratner, 1997).”
Finally, to evaluate the universality of this representational system, we now evaluate the Polar Probe across a broader range of models and scenarios.
“LAS tend to increase with the size of the LM, irrespective of its family (e.g. Pythia, GPT, BERT). This gain seems to reflect an overall improvement: Indeed, LAS improves with model size for both short and long sentences, as well as both shallow and deep dependency trees. This result suggests that larger and more recent models progressively shape their representations to fit a Polar Coordinate system.”
### Questions
Q1:
Extended Dependencies:
Our current approach is limited to (valid) binary tree structures. However, the notions of (1) extended dependencies, (2) multiple paths, (3) ambiguous structures, and (4) semantic graphs could all, in principle, be investigated through Polar Coordinates, as this proposal can apply to any (e.g. cyclic) directed and labeled graph. We will amend our discussion to highlight these interesting future research directions.
---
Rebuttal 2:
Comment: Thanks to the authors for their detailed response, and the additional analyses provided on the paper - and so am bumping the score up from 4 to 5. I still feel like there is room for further analysis -- I really like Reviewer 9Ubp's comment that 'I think the authors were setting up to explore some questions about the meaningfulness in the "hierarchy" of the tree, especially with the controlled sentence dataset, but then I never saw these really come to fruition.'.
---
Rebuttal 3:
Comment: We thank Reviewer tm4o for their response and are grateful that our analyses have been able to address the previous concerns to a satisfactory level.
We acknowledge that the meaningfulness of the Polar Probe's representations on the controlled dataset may not be sufficiently emphasized, as Reviewer 9Ubp noted. To address this issue, we propose adding the following paragraph to the Discussion section:
"One key advantage of the present Polar Probe is its ability to provide linear, compact, and interpretable representation of dependency trees, distinguishing it from other parsing models. Notably, Figure 6 (Figure 3 in the rebuttal PDF) illustrates an unexpected phenomenon: the probe appears to assign the same coordinates to words within identical phrases and syntactic roles, regardless of sentence length, complexity, and semantic variations. This representational invariance offers a promising foundation for further exploration of how recursion and compositionality are effectively represented and processed in neural networks."
We hope this addition clarifies the meaningfulness of hierarchical structures in the Polar Probe and encourages further exploration of its implications for linguistic representation and processing. | Summary: Previous work introduced linear probes to explore how syntactic relationships are encoded in LLM embeddings. This work aims to take it a step further and examine how types of syntactic relationships are encoded in the LLMs. They introduce a polar probe that when optimized can predict the type of syntactic relations via the angle between them. In a multi-faceted evaluation, the model outperforms baselines (which are essentially ablations of the model) in terms of stronger cosine similarity between the same relations, and in terms of tree accuracy (UAS/LAS).
Strengths: - An interesting paper with a clear contribution, building on existing probing work while asking a couple of new research questions
- The results appear convincing
- The potential to explore syntax through the lens of LLMs, especially when LLMs can be easily trained on unlabeled text, or especially when LLMs are increasingly multilingual, points to some exciting future directions.
- The evaluation also includes some linguistically interesting example cases. Essentially exactly what I would have asked for (in addition to the larger corpora studies)
Weaknesses: - I find the distinction between probing and parsing to be not entirely clear. At the point where the evaluation is in terms of UAS/LAS, could this not be compared directly to parsers on the same data (especially since building on top of LLM embeddings would be the most performant solution)? And where would the discrepancies be, and what would that mean? Do LLMs not encode those relationships?
- In general the paper seems to suffer from a lack of convincing baselines. The baselines presented -- the structural or angular probe, are steps along the path to the polar probe.
- Cosine similarity between identical syntactic categories is surprisingly low (to me). The ranking of categories in terms of the strength of that correlation is also surprising, with things like 'case' being quite strong. In general there are many "odd" patterns that I don't have an intuitive explanation for why they occur, and aren't discussed in detail in this work.
- There is no dedicated related work. I do think the parsing literature, and especially the parsing-on-top-of-LLMs literature is relevant.
Technical Quality: 3
Clarity: 3
Questions for Authors: Suggestions / Questions:
Q1 - How the trees are predicted is not clear / whether these are a series of independent predictions or whether they are processed sequentially or decoded jointly?
Q2 Do the same syntactic relations that occur at different levels of the sentence have distinct embeddings? I think the authors were setting up to explore some questions about the meaningfulness in the "hierarchy" of the tree, especially with the controlled sentence dataset, but then I never saw these really come to fruition. Especially the talk of short/relative/long-nested partitions -- where are these discussed? Fig. 5 is mentioned (L241) but Fig 5 is fluff.
L33: "and its neuronal substrate." What? It's just a model.
L36: "to settle"? Though the sentence is bizarre regardless
L24-L253, improper citation formats almost everywhere
"According to linguistics, sentences can be described as linear sequences of words connected by a dependency tree", there isn't a "linguistics" -- there are many competing syntactic frameworks and heated debate as to the pros/cons of each framework, but at the end of the day, these are merely formalisms
L83-84: "Squared euclidean distances (between two word embeddings) cannot trivially represent the presence of dependency relations and their types and directions simultaneously."
Why not? If these are represented in separate subspaces, why is it not possible to represent these three concepts in a vector space?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the limitations are described.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 9UbP for their insightful review.
## Weaknesses
### Probing vs. Parsing:
We agree that the distinction between probing and parsing is insufficiently clear. We will amend the discussion as follows:
“The current 'probing' work is related to extensive research on 'parsing'. However, the two fields have distinct objectives, and thus, distinct approaches. Syntactic probes focus on how the vectorial activations of LMs represent the symbolic constructs theorized by linguists (e.g. trees, nodes). For example, the present Polar Probe uses a single linear transformation to provide a compact and interpretable representation of trees through the distance and angle between pairs of word representations. In contrast, parsing studies primarily aim to accurately reconstruct syntactic trees, regardless of the underlying representation. These studies thus prioritize performance over interpretability, which may lead to the adoption of non-linear probes that learn representations that are powerful, but not effectively learnt and utilized by LMs.”
### Lack of Baselines:
Previous research, such as Hewitt and Manning (2019), provides a baseline included in our figures. We also include an untrained baseline to show that the Polar Probe extracts syntactic knowledge from self-supervised training, not from tokenization or the probing mechanism. To the best of our knowledge, the only other labeled probing work involving LLMs is by Muller-Eberstein (2022). We will add this baseline to the figures of the camera-ready version.
In addition, we now provide a series of new analyses to evaluate our Polar Probe:
* across different LLM families and sizes (attached Figure 1): the results show that the Polar Probe's performance improves in more recent and larger models;
* on variably complex sentences, using the ud-en-ewt test set (attached Figures 2A and 2B): the results show that the Polar Probe becomes increasingly imprecise as the length and syntactic depth of the sentences increase;
* on specific substructures (attached Figure 4): the results show that the Polar Probe is less accurate on nested trees than on the main phrase.
These novel analyses clarify the conditions in which the Polar Probe retrieves dependency trees.
### Cosine Similarities:
This is a good point. We will now add the following clarification to the results section.
“Note that dimensionality impacts cosine similarity: given that random vectors in dimension D typically have a cosine similarity of approximately 1/sqrt(D), a cosine similarity of 0.5 can be interpreted as relatively high. Overall, these results support the notion that LLMs effectively represent syntactic relationships.”
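As a quick numerical illustration of this 1/sqrt(D) heuristic (a toy simulation for illustration only, not an analysis from the paper), one can check how the typical cosine similarity of random vectors shrinks with dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(dim, n_pairs=2000):
    """Average |cosine similarity| between pairs of random Gaussian vectors."""
    a = rng.standard_normal((n_pairs, dim))
    b = rng.standard_normal((n_pairs, dim))
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )
    return np.abs(cos).mean()

# The typical magnitude scales on the order of 1/sqrt(D)
# (for Gaussian vectors the constant is sqrt(2/pi) ~ 0.8).
for d in (16, 64, 256, 1024):
    print(d, round(float(mean_abs_cosine(d)), 3), round(1 / np.sqrt(d), 3))
```

Against this baseline, a cosine similarity of 0.5 in a high-dimensional embedding space is indeed far above chance.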
### Pairwise Cosine Similarity Matrix:
We agree that examining the Polar Probe’s pairwise cosine similarity matrix is valuable, especially to linguists. In particular, we find that the relations that lead to off-diagonal blocks reflect distinctions between categories which, in linguistics, can be subtle. For example, both the case-mark and the aux-cop distinctions are a common source of confusion among linguists; so much so that, for clarity, illustrative examples are provided on the Universal Dependencies website.
The following sentence will be added to L196:
“We find that higher cosine similarity among off-diagonal blocks in Fig. 2.C reflects subtle linguistic distinctions. For example, distinctions between case-mark relations (case: 'Sue left after the rehearsal' [after - rehearsal] vs. mark: 'Sue left after we did' [after - did]) and aux-cop relations (cop: 'Sue is smart' [is - smart] vs. aux: 'Sue is considered smart' [is - considered]) illustrate these nuanced differences”
### Related Work:
Due to space constraints, the Related Work section was omitted in the initial submission. We will add this section in the extra page allowed in the camera-ready version of the paper. A first paragraph can be found at the beginning of our response to reviewer tm4o.
### Questions:
Q1:
The trees are predicted following Hewitt and Manning’s approach, adapted to labeled trees. Thus, from the probed Euclidean distance matrix, we obtain a tree by computing the Minimum Spanning Tree (MST) with Kruskal’s algorithm. Then, the predicted edges are labeled and directed according to the highest absolute cosine similarity with a set of prototypical vectors for each relation type.
We acknowledge that this procedure is designed for interpretation rather than performance. This approach demonstrates that the differences in performance arise from the nature of the representations rather than from the heuristics of the prediction method.
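For concreteness, the decoding step can be sketched as follows (an illustrative toy implementation; `dist`, `diff_vecs`, and `prototypes` are hypothetical stand-ins for the probe's outputs, not the actual code):

```python
import numpy as np

def predict_labeled_tree(dist, diff_vecs, prototypes, labels):
    """Sketch of the described decoding: build the MST over the probed
    distance matrix with Kruskal's algorithm (union-find), then label each
    kept edge by the prototype vector with the highest absolute cosine
    similarity to the probed difference vector of that word pair."""
    n = dist.shape[0]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    # Kruskal: consider candidate edges in order of increasing probed distance,
    # greedily keeping an edge whenever it does not create a cycle.
    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            v = diff_vecs[i, j]
            sims = prototypes @ v / (
                np.linalg.norm(prototypes, axis=1) * np.linalg.norm(v) + 1e-9
            )
            tree.append((i, j, labels[int(np.argmax(np.abs(sims)))]))
    return tree  # n-1 labeled edges

# Toy example: 3 words, 2 relation prototypes.
dist = np.array([[0, 1, 5], [1, 0, 2], [5, 2, 0]], dtype=float)
diff_vecs = np.zeros((3, 3, 2))
diff_vecs[0, 1] = [1.0, 0.1]
diff_vecs[1, 2] = [0.1, 1.0]
print(predict_labeled_tree(dist, diff_vecs, np.eye(2), ["nsubj", "obj"]))
```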
Q2:
We have now revised Figure 5 to clarify its objective (attached Figure 3) and note:
“In the space of the Polar Probe, the representation of dependency trees appears to be consistent across sentences’ length and substructures. For example, the coordinates of the main phrase are virtually identical whether it is attached to a prepositional phrase and/or a long-nested structure. This invariance supports the notion that dependency trees are represented by a systematic coordinate system that can be recovered with the Polar Probe.”
Furthermore, we now provide (attached Figure 4) a detailed analysis of the results obtained from the controlled dataset. Overall, these results show that subtrees are less well represented than the main phrase.
L33:
We agree and will revise it as follows: ”The discrepancy in the system of representations has thus challenged the unification of the theories of human cognitive processes with the underlying computational implementation in the brain.”
Fig1:
We agree, and will revise it: “According to the dependency grammar framework …”
L83:
We agree and will revise it:
“Following the development of the Structural Probe, squared Euclidean distances between probed word embeddings are not designed to represent both the presence of dependency relations and their types and directions simultaneously. ”
We will also correct the typos and citation formats.
---
Rebuttal Comment 1.1:
Title: A marked improvement, but a little thin
Comment: I'd like to thank the authors for taking the time to thoroughly answer many of the questions raised in the review. I think I have a clearer understanding now of how best to interpret some of the results.
I also appreciate the additions to the paper, all of which seem like good steps towards rounding out the paper and helping to differentiate it from existing work.
I may come back to this, but at this time I believe I am going to keep the score as is, as I think this score is appropriate for the paper, even in light of the suggested improvements. I am stating that I believe the paper should be accepted even in its current form. But while Fig 3 in particular seems very promising, it is also hard to ignore that its role in the paper in its current form seems something of an afterthought. I believe the paper still struggles a bit to differentiate itself from the previous Muller-Eberstein work (even if the stated goals of each paper are different). Expanding in the direction of Fig 3 appears to be a very sensible way of accomplishing this, but it seems underexplored in its current state. | null | null | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their detailed and relevant comments.
## The strengths pointed out by the reviewers are:
* The paper is well-written (tm4o, rWeD)
* Clear contribution, the work opens several avenues of research (9UbP, rWeD)
* Convincing results (9UbP, rWeD)
* The paper provides a controlled dataset with linguistically interesting sentences (9UbP, rWeD)
* Elegant implementation (tm4o)
## The weaknesses pointed out by the reviewers are:
* Findings on the controlled dataset are unclear (tm4o, rWeD)
* Show the universality of the Polar Probe (rWeD)
* Unclear difference with Muller-Eberstein, 2022 (tm4o)
* Lack of baselines (9UbP)
## To address the weaknesses pointed out by the reviewers, we have made four major modifications:
### Better understanding the role of the controlled dataset (Fig. 5 is not clear)
First, we have now replaced Fig. 5 with a new figure (Fig. 3 in the attached pdf) to clarify the role of the controlled dataset. Our updated results show that sentences are consistently represented in the same coordinates of the probe’s space, irrespective of their length and substructure. This result illustrates the validity of the polar coordinate system proposed in the present study.
Second, we provide new analyses to evaluate the precision (UUAS and LAS) of different substructures of the syntactic tree (Fig. 4 in the attached pdf). The results show that deep structures (prepositional phrases, nested phrases) lead to lower performance than the main phrase, a phenomenon which echoes the behavior of both LLMs and humans (Silverman & Ratner, 1997). In addition, subject-verb agreement performance deteriorates as the linear distance between the two words in the sentence increases.
Overall, these new elements clarify how syntactic structure can be linearly decoded from LLMs.
### Show the universality of the Polar Probe
We now added new analyses to demonstrate the universality of the Polar Probe representation system across different model families and sizes (Fig. 1 in the attached pdf). Our results show that the Polar Probe's performance is robust across different models and sizes, and that larger models consistently achieve higher LAS performance.
In addition, we also show how the LAS deteriorates with respect to the length and the depth of the sentence. We have added a new figure (Fig. 2 A and B in the attached pdf) showing the better performance of larger models in the tail of the syntactic complexity distribution.
### Unclear difference with (Muller-Eberstein, 2022)
We now clarify the differences between our work and (Muller-Eberstein et al. 2022) in terms of goal, method, data, and consequences. Specifically, we have highlighted our goal to understand whether and how syntactic structures are organized in LMs, whereas Muller-Eberstein et al.'s goal is to propose a lightweight method to “guide practitioners in their choice of LM encoder for the structured prediction task of dependency parsing”. In particular, we now clarify in the discussion that our approach optimizes a single and interpretable probe to find a unique subspace that jointly represents the existence, directionality and labels of dependency trees.
### Lack of baselines
To better contextualize the results, we will add the Muller-Eberstein baseline in Figure 3 of the camera-ready version. Additionally, we added extra references with the performance analysis on the controlled dataset and these scores will be a useful resource for the community to better interpret LAS performance. Lastly, we also added an analysis of the performance of the Polar Probe across different model sizes and families (Fig. 1 in the attached pdf).
Overall, these new analyses strengthen our original conclusion.
Once again, we would like to thank our reviewers for their help in improving the study.
Pdf: /pdf/630fffbd1f03300a52b1e9e95d0623b52fd92c9d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Hierarchical Selective Classification | Accept (poster) | Summary: The authors propose hierarchical selective classification, a method that selects the hierarchical granularity of its prediction based on uncertainty.
Strengths: * The paper is well-written, and the proposed method is quite intuitive.
* I like the idea that if uncertain, it makes sense to predict at a higher level of granularity.
* The theoretical results & statements are sound.
* Extensive experimental results showing the superiority of the proposed method to DARTS and a non-hierarchical baseline.
* Applicability to pre-trained models
Weaknesses: * My biggest uncertainty is the similarity of this work to conformal prediction. To me, it seems that this method is very similar to conformal prediction, where the set of possible prediction sets is restricted via this pre-defined hierarchy. While, as far as I know, it has not been explored, it decreases the perceived novelty.
* A weakness of the setting rather than the method is that it assumes the knowledge of the underlying hierarchy. As such, the applicability is somewhat limited. The paper would benefit from a way to unsupervisedly learn this hierarchy, e.g. based on classes whose predicted probabilities are positively correlated.
* As also touched upon in the concluding remarks, the method is post-hoc rather than being optimized during training, thus, likely not performing up to the highest possible level.
* Minor: Line 158-159 is worded badly, similar to "... thus, we do A. Unlike others that do A, we do A+B".
Technical Quality: 3
Clarity: 3
Questions for Authors: * Could the authors please comment on the similarity to conformal prediction?
* As such, how is [1] related and/or different?
[1] Tyagi, Chhavi, and Wenge Guo. "Multi-label Classification under Uncertainty: A Tree-based Conformal Prediction Approach." Conformal and Probabilistic Prediction with Applications. PMLR, 2023.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately discussed apart from the necessity of assuming knowledge of the underlying class hierarchy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback on our paper. We're glad you liked the paper and its underlying idea.
*"My biggest uncertainty is the similarity of this work to conformal prediction. To me, it seems that this method is very similar to conformal prediction, where the set of possible prediction sets is restricted via this pre-defined hierarchy. While, as far as I know, it has not been explored, it decreases the perceived novelty."*
Thank you for highlighting this reference. While we agree that this work merits discussion in our revision, we would like to emphasize a few key differences between our paper and conformal prediction in general, including [1]:
1. [1] addresses multi-label classification, while we address hierarchical classification. In multi-label classification, prediction sets consist of lists of nodes of varying sizes. In contrast, hierarchical classification always returns a single node from the hierarchy. For large hierarchies, such as those in ImageNet (1,000 leaves) or iNat21 (10,000 leaves), uncertainty could lead to very large prediction sets in multi-label classification, rendering them impractical and uninterpretable.
2. Unlike [1], which constructs its own tree, our approach is grounded in the use of a predefined, interpretable tree. This distinction is not just technical but fundamental to our methodology and performance.
3. Our method builds on selective classification, specifically addressing the trade-off between risk and coverage, an aspect not covered by [1].
*"A weakness of the setting rather than the method is that it assumes the knowledge of the underlying hierarchy. As such, the applicability is somewhat limited. The paper would benefit from a way to unsupervisedly learn this hierarchy, e.g. based on classes whose predicted probabilities are positively correlated."*
While it's true that our algorithms require a class hierarchy, there are many well-established hierarchies available, such as WordNet and taxonomic data for plants and animals. Consequently, popular datasets like ImageNet and iNaturalist have been built upon these hierarchical structures, providing ready-to-use trees. In cases where a class hierarchy is not provided with the dataset, it can be generated using LLMs [2,3,4,5]. Thanks to your remark, we will ensure these references are included in our paper's revision, so users are aware that our methods are applicable to datasets without published hierarchies as well.
*"As also touched upon in the concluding remarks, the method is post-hoc rather than being optimized during training, thus, likely not performing up to the highest possible level."*
While we agree that incorporating a training regime to enhance HSC could be beneficial, our focus in this paper was on post-hoc methods, due to their immense importance and strength. As newer and more advanced models become available, our plug-and-play method, compatible with any pretrained classifier, enables state-of-the-art performance without requiring any retraining, reducing the cost of enjoying the benefits of HSC to almost zero. This feature is highly appealing to practitioners, particularly given the high costs and complexity of training modern neural networks, and even more so with recent large multimodal models. Furthermore, it is worth noting that post hoc methods are well-established and widely utilized in practice. There exists extensive literature discussing post-hoc methods, temperature scaling [6] and split conformal prediction [7] are two prime examples, among many others, such as [8,9].
With that said, we agree that training models can be beneficial. In fact, we have conducted extensive experiments on model training. Since hAURC cannot be optimized directly, we developed an optimizable alternative. Our best-performing method entailed training models to predict the lowest common ancestor (LCA) of pairs of samples, including identical pairs (intended for testing). To achieve this, we developed a hierarchical loss function that penalizes the model based on the hierarchical distance between the ground truth LCA and the predicted node.
For example: if the model is presented with an image of a Labrador and an image of a Golden Retriever, it should predict "dog" as the LCA. If the model instead predicts "animal", which is correct but less specific than "dog", it incurs a higher loss. We believed that this hierarchical loss function can improve the model's hierarchical understanding and, consequently, enhance HSC performance.
After training models of various architectures with this loss and utilizing various other configurations, such as leveraging in-batch negatives for a richer training signal and multiple classification heads to prevent deterioration in the model's accuracy, the improvement in hAURC we observed ranged from 3\% to 5\% compared to the pretrained baseline. While we consider these results solid, we felt it strayed too far from the main scope of the already packed paper. Do you think it should be included in the revision?
[1] Tyagi et al. Multi-label Classification under Uncertainty: A Tree-based Conformal Prediction Approach. Conformal and Probabilistic Prediction with Applications. PMLR, 2023.
[2] Chen et al. Constructing Taxonomies from Pretrained Language Models, NAACL 2021.
[3] Funk et al. Towards Ontology Construction with Language Models.
[4] Zeng et al. Chain-of-Layer: Iteratively Prompting Large Language Models for Taxonomy Induction from Limited Examples.
[5] Li et al. Eliciting Topic Hierarchies from Large Language Models.
[6] Guo et al. On Calibration of Modern Neural Networks.
[7] Vladimir Vovk. Conditional validity of inductive conformal predictors.
[8] Cattelan and Silva. How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks.
[9] Galil et al. What Can We Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers, ICLR 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The responses have cleared up my uncertainties, and I have adjusted my score. Regarding the end-to-end training, I believe the extension is interesting and might benefit future work that is derived from the current manuscript. As such, I would appreciate this extension in either the Appendix or as a separate work, whereby I leave it up to you to decide which you believe to be more feasible. | Summary: The paper introduces a hierarchical selective classification technique that incorporates hierarchical risk and coverage. The authors additionally proposed an algorithm that guarantees target accuracy. Experimental results demonstrate the method's effectiveness.
Strengths: Hierarchical selective classification is a new area and therefore the current method is one of the first techniques to deal with such problem. Its application to critical settings can be substantial.
Weaknesses: • The need of a prior tree among classes can limit its usage for complex scenarios. The construction of such tree can be a non-trivial step for the applicability of the approach.
• The main contribution looks an extension of previous methods for the hierarchical case.
Technical Quality: 3
Clarity: 3
Questions for Authors: • Regarding results like in Table 2, would be possible to calibrate the coverage (as done on selective networks) for fair comparison?
• Have the authors thought about the building the hierarchical tree structure as a pre-processing step? I asked that because such prior is key for wide applicability.
• A well know problem of selective approaches, exposed on [1] is that given the non-differentiability of selection mechanism the binary function g is replaced by a relaxed function g: X → [0, 1], that way not performing selection during training, but instead assigning a soft instance weight to each training sample. The same effect is observed in the proposed method?
[1] Gumbel-Softmax Selective Networks, https://arxiv.org/pdf/2211.10564.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: • The requirement for a hierarchical class tree prior to using the method is a limitation.
• The current experimental section is limited in terms of baseline approaches and datasets, making it challenging to gain a comprehensive understanding from the existing experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback on our paper.
*"The need of a prior tree among classes can limit its usage for complex scenarios. The construction of such tree can be a non-trivial step for the applicability of the approach."*
While it's true that our algorithms require a tree structure, there are many well-established predefined hierarchies available, such as WordNet and taxonomic data for plants and animals. Consequently, popular datasets like ImageNet and iNaturalist [1] have been built upon these hierarchical structures, providing ready-to-use trees. In cases where a hierarchical tree structure is not provided with the dataset, it can be generated using LLMs [2,3,4,5]. Thanks to your remark, we will ensure these references are included in our paper's revision, so users are aware that our methods are applicable to datasets without published hierarchies as well.
*"The main contribution looks an extension of previous methods for the hierarchical case."*
We argue that our approach is not merely a simple extension, but a better alternative with potentially much better performance than classic selective classification. For instance, [6] boasts classifiers that achieve 99\% accuracy on ImageNet with up to 47\% coverage. However, these classifiers outright reject at least 53\% of the samples, failing to provide any useful information to the user about them.
In contrast, our method transforms most of these otherwise rejected samples into valuable information.
As the example in our introduction illustrates, this information could, in some scenarios, be a matter of life and death. Thus, we feel this is no mere extension to the hierarchical case, but rather, a utilization of previously unused hierarchical knowledge to turn a large number of rejections into very usable information.
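To make this intuition concrete, here is a toy sketch of the climbing idea (an invented four-leaf hierarchy and threshold chosen purely for illustration; this is not our actual algorithm or threshold-selection procedure):

```python
import numpy as np

# Invented toy hierarchy: each node maps to its parent; leaves carry the
# base classifier's softmax scores in the order given by LEAVES.
PARENT = {"labrador": "dog", "golden_retriever": "dog",
          "tabby": "cat", "siamese": "cat",
          "dog": "animal", "cat": "animal", "animal": None}
LEAVES = ["labrador", "golden_retriever", "tabby", "siamese"]

def is_ancestor(node, leaf):
    """True if `node` lies on the path from `leaf` to the root."""
    p = PARENT[leaf]
    while p is not None:
        if p == node:
            return True
        p = PARENT[p]
    return False

def hierarchical_predict(probs, threshold):
    """Start at the most likely leaf; climb the hierarchy until the summed
    probability of the current node's subtree reaches the threshold, trading
    specificity for confidence instead of rejecting outright."""
    node = LEAVES[int(np.argmax(probs))]
    while node is not None:
        conf = sum(probs[i] for i, l in enumerate(LEAVES)
                   if l == node or is_ancestor(node, l))
        if conf >= threshold:
            return node, conf
        node = PARENT[node]
    return None, 0.0  # unreachable when probs sum to 1 and threshold <= 1

probs = np.array([0.45, 0.40, 0.10, 0.05])   # labrador vs. golden retriever is uncertain
print(hierarchical_predict(probs, 0.8))      # climbs from "labrador" to "dog"
```

A flat selective classifier with a 0.8 confidence target would reject this sample outright; the hierarchical version instead returns the still-informative prediction "dog".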
*"Regarding results like in Table 2, would be possible to calibrate the coverage (as done on selective networks) for fair comparison?"*
Optimizing selective classification under accuracy constraints involves approaches and algorithms different than those used for optimizing under coverage constraints. There is a well known distinction in the literature between selective algorithms optimizing for an accuracy constraint [7,8] and selective algorithms optimizing for a coverage constraint [9,10]. While it is definitely possible to construct an HSC algorithm optimized for coverage, our focus in this work is on accuracy constraints. Consequently, our comparisons are made against baselines that also target accuracy. While selective networks like SelectiveNet are notable baselines for coverage calibration, they are not directly comparable to our approach due to our focus on accuracy constraints.
*"A well know problem of selective approaches, exposed on [1] is that given the non-differentiability of selection mechanism the binary function g is replaced by a relaxed function g: X → [0, 1], that way not performing selection during training, but instead assigning a soft instance weight to each training sample. The same effect is observed in the proposed method?"*
Thank you for bringing this paper to our attention. This seems like a more potent alternative to SelectiveNets. We will make sure to reference it in our revision.
*"The current experimental section is limited in terms of baseline approaches and datasets, making it challenging to gain a comprehensive understanding from the existing experiments."*
Our experiments were conducted on two large-scale, widely accepted datasets. We considered three algorithmic baselines, and compared *each* baseline with our algorithms on over 1,100 deep neural networks. Often, we repeated the evaluations per model 1,000 times, meaning a single comparison often consisted of over a million individual comparisons. Experiments of this magnitude require a considerable amount of resources. Existing works in the domain of hierarchical classification typically consider no more than two datasets, and evaluate only few models at most. Therefore, we hope you'll agree that our experimental section is adequately comprehensive, and that the results presented clearly highlight the benefits of our methods.
[1] Grant Van Horn et al. Benchmarking Representation Learning for Natural World Image Collections. https://arxiv.org/abs/2103.16483.
[2] Catherine Chen, Kevin Lin, Dan Klein. Constructing Taxonomies from Pretrained Language Models, NAACL 2021. https://arxiv.org/pdf/2010.12813
[3] Maurice Funk, Simon Hosemann, Jean Christoph Jung, Carsten Lutz. Towards Ontology Construction with Language Models. https://arxiv.org/pdf/2309.09898
[4] Qingkai Zeng, Yuyang Bai, Zhaoxuan Tan, Shangbin Feng, Zhenwen Liang, Zhihan Zhang, Meng Jiang. Chain-of-Layer: Iteratively Prompting Large Language Models for Taxonomy Induction from Limited Examples. https://arxiv.org/pdf/2402.07386
[5] Grace Li, Tao Long, Lydia B. Chilton. Eliciting Topic Hierarchies from Large Language Models. https://arxiv.org/pdf/2310.19275
[6] Ido Galil, Mohammed Dabbah, Ran El-Yaniv. What Can We Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers, ICLR 2023. https://arxiv.org/pdf/2302.11874
[7] Yonatan Geifman, Ran El-Yaniv. Selective Classification for Deep Neural Networks, NeurIPS 2017. https://papers.nips.cc/paper_files/paper/2017/hash/4a8423d5e91fda00bb7e46540e2b0cf1-Abstract.html
[8] Jia Deng; Jonathan Krause; Alexander C. Berg; Li Fei-Fei. Hedging your bets: Optimizing accuracy-specificity trade-offs in large scale visual recognition.
[9] Yonatan Geifman, Ran El-Yaniv. SelectiveNet: A Deep Neural Network with an Integrated Reject Option, ICML 2019. https://proceedings.mlr.press/v97/geifman19a/geifman19a.pdf
[10] Guy Bar-Shalom, Yonatan Geifman, Ran El-Yaniv. Window-Based Distribution Shift Detection for Deep Neural Networks, NeurIPS 2023. https://proceedings.neurips.cc/paper_files/paper/2023/hash/4791edcba96fbd82a8962b0f790b52c9-Abstract-Conference.html
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' rebuttal. I would raise my score from 4 to 5 in response to the authors' rebuttal. I will also discuss with the other fellow reviewers and/or AC. | Summary: The paper introduces a new framework for selective classification called hierarchical selective classification. In a setting where a hierarchy in the classification task is present, the authors devise a selection strategy that considers confidence at different levels of the classification hierarchy. Extensive experimental analysis is performed over 1115 ImageNet classifiers.
Strengths: The main strengths of the paper are:
1. the idea of applying selective classification in a hierarchical setting is novel;
2. the theoretical analysis relies on conformal prediction, which guarantees the soundness of the results;
3. the proposed framework can impact high-risk settings, as shown in the healthcare example.
Weaknesses: Overall, I think the paper is solid. My main concern is that the empirical evaluation could be improved, especially regarding motivations and attention to detail.
A few examples:
* I do not fully understand why the authors focus so much on showing how different training regimes affect HSC performance. I guess this improves the overall predictive performance of the (hierarchical) classifier, which is expected to impact the HSC task positively.
* As the authors correctly claim, the training regimes were not optimized for hierarchical selective classification. Despite the clear computation-wise motivation, I argue that including regimes optimized for HSC would make the empirical evaluation sounder.
* a few lines are off: for instance, I would argue that line 279, i.e.,
>CLIP achieves an exceptional improvement, surpassing 40%
>
does not match what is shown in Figure 4 (which shows an improvement below 40%).
Technical Quality: 3
Clarity: 2
Questions for Authors: I have a few questions/remarks regarding the paper.
* Q1. I think the authors are not discussing an implicit (and, in my opinion, quite relevant) assumption of their strategy, i.e. the errors at different levels of the hierarchy are assumed to be the same. However, I argue this is not exactly the case in real life. For example, failing to distinguish a golden retriever from a labrador differs from failing to distinguish a dog from a spider. Can the authors elaborate on this point?
* Q2. Can the authors discuss the points I highlighted as the main weakness?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper briefly discusses limitations. I think this section could be expanded, e.g. considering Q1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback.
*"I do not fully understand why the authors focus so much on showing how different training regimes affect HSC performance. I guess this improves the overall predictive performance of the (hierarchical) classifier, which is expected to impact the HSC task positively."*
The methods presented in this paper are compatible with any pre-trained base classifier. As such, we were particularly interested in exploring how to guide practitioners towards choosing base classifiers with high HSC performance.
The experiments in this section draw inspiration from [1], who found that training regimes such as knowledge distillation significantly impact selective performance. Their findings provide valuable guidance for practitioners, helping them choose models trained using regimes that result in high selective performance. Following their findings, we set out to find if training regimes could contribute to hierarchical selective performance as well, and if our findings align with those presented in [1].
We initially shared your concern that the improvement in hAURC could be attributed to the overall improvement in predictive performance resulting from the training regimes. This led us to analyze the improvement in accuracy compared to the improvement in hAURC. From the results below, it seems to us that the improvement in hierarchical performance is significantly more pronounced than the one in accuracy. Furthermore, we see examples of regimes with similar improvements in accuracy having significant differences in hierarchical performance improvements (e.g., "Pretraining" and "Distillation").
We present in the table below the median improvement in accuracy compared to the median improvement in hAURC.
| Training Regime | Median Accuracy Improvement (\%) | Median hAURC Improvement (\%) |
|-----------------|---------------------------------|------------------------------|
| CLIP | 7.90 | 38.14 |
| Pretraining | 1.96 | 19.47 |
| SSL | 4.12 | 18.97 |
| Distillation | 1.48 | 11.73 |
| Adversarial | 0.85 | 5.92 |
We initially chose not to include this analysis in the paper due to concerns about the paper being already packed with too much information. Do you think it would be preferable to include these results in the revision?
*"As the authors correctly claim, the training regimes were not optimized for hierarchical selective classification. Despite the clear computation-wise motivation, I argue that including regimes optimized for HSC would make the empirical evaluation sounder."*
While we agree that incorporating a training regime to enhance HSC could be beneficial, our focus in this paper was on post-hoc methods, due to their immense importance and strength.
Our method’s strength lies in its compatibility with any pretrained classifier. As newer and more advanced models become available, our plug-and-play method enables access to state-of-the-art classifiers without requiring any retraining, reducing the cost of enjoying the benefits of HSC to almost zero. This feature is highly appealing to practitioners, particularly given the high costs and complexity of training modern neural networks, and even more so with recent large multimodal models.
Furthermore, it is worth noting that post-hoc methods are well-established and widely utilized in practice. Thanks to their effectiveness, there exists extensive literature discussing post-hoc methods; temperature scaling [2] and split conformal prediction [3] are two prime examples of practical post-hoc approaches, among many others, such as [1,4].
With that said, we agree that training models can be beneficial for improving HSC performance. In fact, we have conducted extensive experiments on model training. Since hAURC cannot be optimized directly, we developed an alternative that could be optimized. Our best-performing method entailed training models to predict the lowest common ancestor (LCA) of pairs of samples, including identical pairs (intended for testing). To achieve this, we developed a hierarchical loss function that penalizes the model based on the hierarchical distance between the ground truth LCA and the predicted node.
For example: if the model is presented with an image of a Labrador and an image of a Golden Retriever, it should predict "dog" as the LCA. If the model instead predicts "animal", which is correct but less specific than "dog", it incurs a higher loss. We believed that this hierarchical loss function can improve the model's hierarchical understanding and, consequently, enhance HSC performance.
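A toy sketch of how such an LCA-based penalty could look (everything here is illustrative: the tree, the tree-distance definition, and the function names are our assumptions for exposition, not the paper's actual loss):

```python
def lca(u, v, parent):
    """Lowest common ancestor of nodes u and v in the hierarchy."""
    ancestors = set()
    while u is not None:
        ancestors.add(u)
        u = parent[u]
    while v not in ancestors:
        v = parent[v]
    return v

def depth(node, parent):
    """Number of edges from node up to the root."""
    d = 0
    while parent[node] is not None:
        node, d = parent[node], d + 1
    return d

def hierarchical_penalty(pred, v1, v2, parent):
    """Tree distance between the predicted node and the ground-truth
    LCA of the sample pair (v1, v2): correct-but-less-specific
    predictions incur a penalty proportional to how far they overshoot."""
    target = lca(v1, v2, parent)
    meet = lca(pred, target, parent)
    return (depth(pred, parent) - depth(meet, parent)) + \
           (depth(target, parent) - depth(meet, parent))

parent = {"labrador": "dog", "golden_retriever": "dog",
          "dog": "animal", "spider": "animal", "animal": None}

# Ground-truth LCA of (labrador, golden_retriever) is "dog":
print(hierarchical_penalty("dog", "labrador", "golden_retriever", parent))     # 0
print(hierarchical_penalty("animal", "labrador", "golden_retriever", parent))  # 1
```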
After training models of various architectures (including ResNet50, ViTs, EfficientNet, and others) with this loss function and utilizing various other configurations, such as leveraging in-batch negatives for a richer training signal and multiple classification heads to prevent deterioration in the model's accuracy, the improvement in hAURC we observed ranged from 3\% to 5\% compared to the pretrained baseline. While we consider these results solid, we felt this has gone too far away from the main scope of the already packed paper and decided to leave it outside of it. We would value your feedback, do you think this method and results should be included in the revision?
*"I would argue that line 279 does not match what is shown in Figure 4 (which shows an improvement below 40\%)"*
Thank you for pointing this out; we meant to say that some instances of CLIP surpass 40\%. We will report the median of the sample (38\%) instead.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses.
I have a couple of remarks/comments:
> From the results below, it seems to us that the improvement in hierarchical performance is significantly more pronounced than the one in accuracy.
If I understand how improvements are computed (i.e. $\Delta(\text{Metric})=\frac{|\text{Metric}_{new} - \text{Metric}_{orig}|}{\text{Metric}_{orig}}$), I think it is not that straightforward to compare the % improvements of two different metrics. I feel this is affected by the baseline value used as the starting point. Given an absolute improvement, the relative improvement increases as the baseline value decreases, and vice versa. I.e., an absolute improvement of $\delta$ on baseline values $b_1, b_2$ with $b_1 \ll b_2$ results in relative improvements $r_1, r_2$ with $r_1 \gg r_2$.
Since lower hAURC is better while higher accuracy is better (as in the example just presented), seeing higher percentage improvements might be misleading, and I would be cautious about the conclusions here.
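A tiny numeric illustration of this effect (all values hypothetical): the same absolute gain yields very different relative gains depending on the baseline, and hAURC baselines tend to be small.

```python
def rel_improvement(orig, new):
    """Relative improvement: |new - orig| / orig."""
    return abs(new - orig) / orig

# Identical absolute gain of 0.02 on two hypothetical baselines:
print(rel_improvement(0.80, 0.82))  # accuracy (higher is better): 2.5 % relative
print(rel_improvement(0.05, 0.03))  # hAURC (lower is better):     40 % relative
```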
> We initially chose not to include this analysis in the paper due to concerns about the paper being already packed with too much information. Do you think it would be preferable to include these results in the revision?
I think these results could be safely added to the Appendix, as they could explain why we see improvements in hAURC through different training regimes.
> However, in practice, the differences are smaller than we anticipated and are very highly correlated (which is interesting by itself). Do you think we should include these results and others with this risk in the revision?
Concerning the results w.r.t. the hierarchical loss, the authors could add the results in the Appendix.
In light of these comments, I am still in favour of a positive score for this paper.
---
Rebuttal 2:
Title: Rebuttal Part 2 for Reviewer vmdY
Comment: *"Q1. I think the authors are not discussing an implicit (and, in my opinion, quite relevant) assumption of their strategy, i.e. the errors at different levels of the hierarchy are assumed to be the same. However, I argue this is not exactly the case in real life. For example, failing to distinguish a golden retriever from a labrador differs from failing to distinguish a dog from a spider. Can the authors elaborate on this point?"*
That's a great question! In fact, we conducted all of our experiments using a hierarchical risk that considers scenarios like the ones you've illustrated. We chose not to include these results in the paper due to space constraints and for the sake of simplicity and readability.
We developed a hierarchical risk that considers mistake severity with respect to the hierarchy, as follows:
$R_h = 1 - \frac{\phi(LCA(\hat{v}, v))}{\phi(\hat{v})}$
Where $v$ is the ground truth node, $\hat{v}$ is the predicted node, $\phi(x)$ is the coverage of node $x$, and $LCA(x,y)$ is the lowest common ancestor of nodes $x$ and $y$. The most severe mistake occurs when $LCA(v,\hat{v})$ is the root of the hierarchy, resulting in a risk of 1. The higher the coverage of $LCA(\hat{v},v)$, the more $\hat{v}$ has in common with $v$ in hierarchical terms.
One can safely assume that in any plausible hierarchy the LCA of (labrador, golden retriever), for example, dog, has significantly higher coverage compared to the LCA of (dog, spider), resulting in the former misclassification having lower risk compared to the latter.
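Plugging the formula above into a toy hierarchy makes the severity gap concrete. The tree and coverage values below are hypothetical; we only assume, consistent with the description, that coverage is highest at the leaves and 0 at the root (so the most severe mistake has risk 1):

```python
def lca(u, v, parent):
    """Lowest common ancestor of nodes u and v."""
    ancestors = set()
    while u is not None:
        ancestors.add(u)
        u = parent[u]
    while v not in ancestors:
        v = parent[v]
    return v

def hierarchical_risk(pred, truth, coverage, parent):
    """R_h = 1 - phi(LCA(pred, truth)) / phi(pred)."""
    return 1 - coverage[lca(pred, truth, parent)] / coverage[pred]

parent = {"labrador": "dog", "golden_retriever": "dog",
          "dog": "animal", "spider": "animal", "animal": None}
# Hypothetical coverages: 1 at the leaves, 0 at the root ("animal").
coverage = {"labrador": 1.0, "golden_retriever": 1.0, "spider": 1.0,
            "dog": 0.5, "animal": 0.0}

# Mild mistake: LCA(labrador, golden_retriever) = "dog" -> risk 0.5
print(hierarchical_risk("labrador", "golden_retriever", coverage, parent))
# Severe mistake: LCA(labrador, spider) = root -> risk 1.0
print(hierarchical_risk("labrador", "spider", coverage, parent))
```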
We provide below some of the results of our experiments on ImageNet, complementing Table 1 in the paper (all other tables and figures can also be made with this type of loss):
**ImageNet-1k (1115 models)**
| Inference Rule | hAURC ($\times$ 1000) | Hier. Gain (\%) |
|:--------------:|:---------------------:|:---------------:|
| Selective | 24.98$\pm$0.29 | - |
| MC | 24.50$\pm$0.34 | 3.31$\pm$0.30 |
| Climbing | 24.04$\pm$0.31 | 4.71$\pm$0.22 |
The results align with those presented in the paper, where the risk is 0/1 loss. Climbing remains the inference rule with the best (lowest) hAURC and the highest hierarchical gain. It's theoretically possible to create examples where the hierarchical risk diverges from the 0/1 loss, such that models making less severe mistakes are penalized less by the hierarchical risk. However, in practice, the differences are smaller than we anticipated and are very highly correlated (which is interesting by itself). Do you think we should include these results and others with this risk in the revision?
[1] Ido Galil, Mohammed Dabbah, Ran El-Yaniv. What Can We Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers, ICLR 2023. https://arxiv.org/pdf/2302.11874
[2] Guo et al. On Calibration of Modern Neural Networks.
[3] Vladimir Vovk. Conditional validity of inductive conformal predictors.
[4] Cattelan and Silva. How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks. | Summary: The paper proposes an extension of selective classification that follows a class hierarchy to reduce the specificity of the model's prediction when there is high uncertainty. In particular, if the prediction confidence of a class is smaller than a predefined threshold, the proposed algorithm proceeds towards a higher class level in the hierarchical structure (the parent node) until the confidence of the considered node exceeds that threshold. The paper also formalises hierarchical risk and coverage, so that the area under the curve can be used as a metric to benchmark different selective classification methods. An extensive number of pretrained classifiers on the ImageNet dataset are then used to evaluate the proposed method and show promising results. The paper also includes a PAC-like theoretical result, so that when finding the optimal threshold, one can select appropriate hyper-parameters to achieve the desired outcome with a certain confidence level.
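The thresholded climbing procedure summarized here can be sketched in a few lines (the node names, confidence values, and function are illustrative assumptions, not the paper's implementation):

```python
def climb(leaf, confidence, parent, theta):
    """Climb the class hierarchy from a leaf prediction until the
    confidence of the current node exceeds the threshold theta.

    confidence -- dict mapping each node to its confidence score
                  (an internal node aggregates its descendants' scores)
    parent     -- dict mapping each node to its parent (root maps to None)
    """
    node = leaf
    while confidence[node] < theta and parent[node] is not None:
        node = parent[node]  # retreat to a less specific class
    return node

# Toy hierarchy: root -> animal -> dog -> {labrador, golden_retriever}
parent = {"labrador": "dog", "golden_retriever": "dog",
          "dog": "animal", "animal": "root", "root": None}
confidence = {"labrador": 0.45, "golden_retriever": 0.35,
              "dog": 0.80, "animal": 0.95, "root": 1.0}

print(climb("labrador", confidence, parent, theta=0.75))  # -> "dog"
```

Lowering theta keeps the specific leaf prediction; raising it retreats further up the hierarchy.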
Strengths: The paper goes into detail to provide an adequate background on selective classification, the definition of hierarchical risk and coverage, as well as its area under the curve as a metric to quantify the performance of hierarchy-based selective classification. It also links to previous studies in the same subfield. In general, the paper is well written and easy to follow.
The paper also includes a theoretical result on the guarantee of the learning algorithm when one wants to find the optimal thresholding value for hierarchical selective classification. This simple theoretical result does strengthen the paper.
The paper also includes an extensive number of experiments and ablation studies that provide insights into the newly proposed method.
Weaknesses: The paper relies on the setting with the following assumptions:
- It is an inference rule, meaning the algorithm is used at test time only. Whether it could also be integrated into training would be a plus.
- It needs a validation set to find the optimal hyper-parameter $\theta$, i.e., the threshold (partly mentioned in the conclusion). It is understandable because there is no training involved here, so there is a need for one. However, in some cases, there may not be additional data available.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the authors clarify if it can also be integrated into training a whole model to perform hierarchical selective classification?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive reply,
*"It is an inference rule, meaning the algorithm is used at test time only. Whether it could also be integrated into training would be a plus.", "Could the authors clarify if it can also be integrated into training a whole model to perform hierarchical selective classification?"*
While we agree that incorporating a training regime to enhance HSC could be beneficial, our focus in this paper was on post-hoc methods, due to their immense importance and strength. Our method’s strength lies in its compatibility with any pretrained classifier. As newer and more advanced models become available, our plug-and-play method enables access to state-of-the-art classifiers without requiring any retraining, reducing the cost of enjoying the benefits of HSC to almost zero. This feature is highly appealing to practitioners, particularly given the high costs and complexity of training modern neural networks, and even more so with recent large multimodal models.
Furthermore, it is worth noting that post-hoc methods are well-established and widely utilized in practice. Thanks to their effectiveness, there exists extensive literature discussing post-hoc methods; temperature scaling [1] and split conformal prediction [2] are two prime examples of practical post-hoc approaches, among many others, such as [3,4].
With that said, we agree that training models can be beneficial for improving HSC performance. In fact, we have conducted extensive experiments on model training. Since hAURC cannot be optimized directly, we developed an alternative that could be optimized. Our best-performing method entailed training models to predict the lowest common ancestor (LCA) of pairs of samples, including identical pairs (intended for testing). To achieve this, we developed a hierarchical loss function that penalizes the model based on the hierarchical distance between the ground truth LCA and the predicted node.
For example: if the model is presented with an image of a Labrador and an image of a Golden Retriever, it should predict "dog" as the LCA. If the model instead predicts "animal", which is correct but less specific than "dog", it incurs a higher loss. We believed that this hierarchical loss function can improve the model's hierarchical understanding and, consequently, enhance HSC performance.
After training models of various architectures (including ResNet50, ViTs, EfficientNet, and others) with this loss function and utilizing various other configurations, such as leveraging in-batch negatives for a richer training signal and multiple classification heads to prevent deterioration in the model's accuracy, the improvement in hAURC we observed ranged from 3\% to 5\% compared to the pretrained baseline. While we consider these results solid, we felt this has gone too far away from the main scope of the already packed paper and decided to leave it outside of it. We would appreciate your opinion, do you think it's preferable to include this method and results in the revision?
*"It needs a validation set to find the optimal hyper-parameter, i.e., the threshold (partly mentioned in the conclusion). It is understandable because there is no training involved here, so there is a need for one. However, in some cases, there may not be additional data available."*
While we acknowledge that collecting a calibration set (or reducing it from the training set) comes at a cost, we believe this limitation is "soft" in our case, since, as mentioned in the note following Theorem 1 and in the remarks in Appendix D, our guarantees hold for calibration sets of any size. Even for a small calibration set, our algorithm provides users with the flexibility to set the other parameters ($\delta$ and $\epsilon$) according to their requirements and constraints. For example: a user with a low budget for a calibration set who may be, perhaps, more interested in controlling $\delta$ can still achieve a reasonable constraint by relaxing $\epsilon$ instead of increasing the calibration set size.
Additionally, as we mentioned above, post-hoc methods that require an additional calibration set are widely accepted and used by the community thanks to their efficacy [1,2,3,4].
[1] Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger. On Calibration of Modern Neural Networks. https://arxiv.org/pdf/1706.04599
[2] Vladimir Vovk. Conditional validity of inductive conformal predictors. https://arxiv.org/pdf/1209.2673
[3] Luís Felipe P. Cattelan, Danilo Silva. How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks. https://arxiv.org/abs/2305.15508
[4] Ido Galil, Mohammed Dabbah, Ran El-Yaniv. What Can We Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers, ICLR 2023. https://arxiv.org/pdf/2302.11874
---
Rebuttal Comment 1.1:
Title: Comments by Reviewers h9cA
Comment: Thank you, the authors, for clarifying my concerns. I have a positive view of the paper, although I am not very familiar to the research field. Hence, I keep my rating as is. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Focus On What Matters: Separated Models For Visual-Based RL Generalization | Accept (poster) | Summary: Visual-based Reinforcement Learning (RL) often fails to generalize across unseen environments. This work proposes SMG (Separated Models for Generalization) to improve generalization in VRL by introducing two models that separately extract task-relevant and task-irrelevant representations through image reconstruction. Specifically, SMG introduces two additional consistency losses on task-relevant features, improving generalization. Extensive experiments, including video-hard DMC, color-hard DMC, and manipulation tasks, show SMG excels in diverse settings and tasks, demonstrating robust performance.
Strengths: - Separating foreground and background for reconstruction makes sense for improving the generalization in VRL.
- Extensive experiments in various experimental settings demonstrate the effectiveness of SMG.
- The learned mask looks very effective (Fig. 3 and Fig. 7).
Weaknesses: 1. Distinguishing between controllable and uncontrollable parts by learning a mask model has been widely discussed in the community, in works like TIA [1], Denoised MDP [2], ISO-Dream [3], and so on. Although I appreciate the authors' efforts to discuss its differences from TIA (appendix E.2), I think the novelty of learning mask models to distinguish noise from the environment is limited. Nevertheless, I believe that this paper has made contributions in applying mask models to the field of visual RL generalization.
- I'm curious about the performance of the proposed method in some more challenging settings, like RL-Vigen [4].
- As there are many losses, it would be better to add detailed pseudocode showing how all these losses are calculated, which would make the paper more readable.
- The proposed SMG is claimed to combine seamlessly with any existing off-policy RL algorithm. As the experiments mainly use SAC as the RL backbone, I'm curious about its performance with other methods, like DrQ or SVEA.
- The related work section only discusses observation generalization in RL; other types of generalization should also be discussed, such as dynamics generalization [5,6] and task generalization [7,8].
Overall, I lean toward borderline for this work. I will participate in subsequent discussions and would be willing to adjust my scores, especially in response to my concerns about the experiments.
[1] Learning Task Informed Abstractions
[2] Denoised MDPs: Learning World Models Better Than the World Itself
[3] Iso-Dream: Isolating and Leveraging Noncontrollable Visual Dynamics in World Models
[4] RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization
[5] Context-aware dynamics model for generalization in model-based reinforcement learning
[6] Why generalization in rl is difficult: Epistemic pomdps and implicit partial observability
[7] Zero-shot task generalization with multi-task deep reinforcement learning
[8] Task Aware Dreamer for Task Generalization in Reinforcement Learning
Technical Quality: 3
Clarity: 2
Questions for Authors: - How do you determine the hyperparameter $\rho$? In Fig. 9, this paper shows different results for $\rho$ in walker-walk. Are there more results showing the relationship between SMG and $\rho$?
- Why do reconstruction-based methods benefit generalization? Are there any explanations?
---------------
**After reading the authors' response and other reviewers' comments, I have raised my scores from 4 to 5.**
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes, this work has discussed its limitations in Sec. 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your constructive comments and suggestions. We address each of your comments as follows.
### Q1: I think the novelty of learning mask models to distinguish noise from the environment is limited.
---
A1:
Thank you for your professional analysis. However, we would like to elaborate further on the novelty of our method:
- **The key idea behind SMG is to focus on the task-relevant features across training and testing scenarios.** The separated models structure is employed as an effective tool for extracting these task-relevant representations from observations.
- Compared to previous approaches, SMG introduces several key innovations to enhance performance in generalization settings. **We are the first to extract reward-relevant features via the Q-network in a model-free setting, provide the background encoder with accurate supervision, use the learned foreground to bootstrap the training process, and more fully utilize the learned mask through attribution augmentation.** We believe these improvements can inspire future research to advance the separated models structure into broader fields.
- We have added an experiment to test TIA [5] in the generalization setting; please refer to the __Author Rebuttal 2.1__. As shown in Table 1, TIA fails to generalize in video-background settings in all seven tasks. This further demonstrates the effectiveness of our improvements (the two consistency losses and data augmentation techniques) to the separated models architecture under the generalization settings.
### Q2: I'm curious about the performance of the proposed method in some more challenging settings, like RL-Vigen.
---
A2:
We have attempted to conduct experiments on three Adroit tasks introduced by [RL-Vigen](https://github.com/gemcollector/RL-ViGen/tree/ViGen-adroit?tab=readme-ov-file)[3]. However, we found that SAC-based algorithms (including SVEA, SGQN, and our SMG) perform poorly in these challenging tasks. The results reported in RL-ViGen are based on using VRL3 [4] as the backbone.
VRL3 relies on Adroit offline data to pre-train the encoder. When we attempted to build SMG upon VRL3, we discovered that the [download link](https://drive.google.com/drive/folders/14rH_QyigJLDWsacQsrSNV7b0PjXOGWwD?usp=sharing) for the offline data is currently invalid. As a result, we are unable to conduct experiments at this time. We have emailed the authors to report this issue and will proceed with the experiments once we receive a response.
Nevertheless, we report the reconstruction process of SMG in Adroit-Pen; please refer to __Author Rebuttal 2.5__. SMG successfully outputs an accurate foreground image in this challenging task, and we believe VRL3 would benefit from the learned task-relevant representation.
### Q3: As there are many losses, it is better to add a detailed pseudo code about how to calculate all these losses, which can make the paper more readable.
---
A3:
Thank you for your advice; the pseudo code is shown in __Author Rebuttal 1.4__.
### Q4: I'm curious about its performance with other methods.
---
A4:
This is a good suggestion. We have added experiments that use SVEA[2] as a backbone; please refer to __Author Rebuttal 2.3__. The performance of SMG improves in most tasks, which is reasonable since SVEA is a stronger algorithm than SAC.
### Q5: Some other types of generalization also should be discussed.
---
A5:
Thank you for your reminder. We have revised Section 5; please refer to __Author Rebuttal 1.3__.
### Q6: How do you determine the hyperparameter $\rho$ ?
---
A6:
We invite the reviewer to refer to __Author Rebuttal 2.6__.
### Q7: Why do reconstruction-based methods benefit generalization?
---
A7:
There might be some misunderstanding. **We do not think that directly using a reconstruction-based loss benefits generalization; rather, it may exacerbate the problem of overfitting to task-irrelevant features. The advantage of reconstruction-based methods lies in their ability to improve sample efficiency [1].** Our key idea is to enhance the agent's generalization ability while utilizing the reconstruction-based loss to improve sample efficiency. To achieve this, we introduce the separated models (please refer to lines 39–43 of our paper).
### Reference
---
[1] Improving sample efficiency in model-free reinforcement learning from images.
[2] Stabilizing deep q-learning with convnets and vision transformers under data augmentation.
[3] RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization.
[4] Vrl3: A data-driven framework for visual deep reinforcement learning.
[5] Learning task informed abstractions.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I'd like to appreciate the authors' efforts in addressing my concerns and supplementing extra experiments.
- Q1: Overall, I think the conceptual novelty of this work is limited, as there are many works that learn a mask model to distinguish between controllable and uncontrollable parts for handling distracting environments. However, I still believe this work has enough technical novelty in applying this insight to improving generalization in visual RL, which also benefits the community by extending relatively mature techniques to new settings.
- Q2: Thanks for the reconstruction results in Adroit. In RL-Vigen, several settings can be handled by SAC-based algorithms like SGQN. For example, in Table-top Manipulation, RL-Vigen also considers visual appearances and lighting changes, and SGQN can handle this setting to some degree.
- Q3-6: Thanks for the response and supplemented results, which should be added to the paper for better highlighting the paper's contribution.
- Q7: Thanks for your clarification. There is still a small concern. As the authors have mentioned, reconstruction-based losses may exacerbate the overfitting problem but it can improve sampling efficiency. So is it more advantageous or disadvantageous for this task?
- Other comments: I'm curious whether this method can be extended to more general visual generalization tasks, like foreground noise, camera-view changes, and so on.
Overall, most of my concerns are addressed. Although there are several minor points, I have raised my scores into 5.
---
Reply to Comment 1.1.1:
Title: Discussion reply to the reviewer iwu6
Comment: Thank you for helping us improve the paper and update the score. We really appreciate your valuable comments!
We will address your concerns as follows:
- Q1: Thank you for recognizing our work! We also hope our paper can promote more research in visual-based generalization problems.
- Q2: Thanks for your suggestion, we’d like to add the experiments in the camera-ready version.
- Q3-6: Yes, we will revise some paragraphs and add the additional experiments to the appendix.
- Q7: Overfitting to task-irrelevant features typically occurs when a single autoencoder is used to reconstruct the entire observation, as the encoder must learn all the features present in the observation. However, since SMG employs two autoencoders to separately fit task-relevant and task-irrelevant features, our model avoids this overfitting problem and can still take advantage of the high sample efficiency. For typical methods that use a single autoencoder, however, it is indeed a disadvantage in generalization tasks.
- Other comments: We'd like to discuss these settings separately:
- Foreground noise: SMG is particularly suited for such setting, as we use the extracted foreground for data augmentation. With a small modification, such as adding some noises to the foreground image (e.g., color jitter) before synthesizing it with the background image during the data augmentation stage, we can successfully simulate the testing scenario during training.
- Camera views changing: This setting is relatively more challenging due to the involvement of spatial transformations. Directly applying SMG to this setting may not work, as SMG is trained with a fixed camera view. However, this issue could be addressed by incorporating an additional camera during training and adding STN blocks[1] to our model, adopting a similar approach to MoVie[2].
- Overall, when applying SMG to different generalization settings, we should first consider the similarity between the data augmentation types and the testing scenarios, and then adjust the augmentation methods accordingly.
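A minimal sketch of the foreground-jitter idea from the first bullet: the blending rule mirrors the mask-based synthesis described above, but the jitter, array shapes, and function name are hypothetical, not SMG's actual implementation.

```python
import numpy as np

def attribution_augment(mask, foreground, background, rng):
    """Blend a (jittered) extracted foreground into a replacement
    background using the learned soft mask: aug = m * fg' + (1 - m) * bg."""
    # Crude stand-in for color jitter: a random brightness factor.
    jittered = np.clip(foreground * rng.uniform(0.8, 1.2), 0.0, 1.0)
    return mask * jittered + (1.0 - mask) * background

rng = np.random.default_rng(0)
mask = np.zeros((4, 4, 3))
mask[1:3, 1:3] = 1.0                      # square "foreground" region
fg = np.full((4, 4, 3), 0.6)              # hypothetical extracted foreground
bg = rng.uniform(size=(4, 4, 3))          # hypothetical replacement background
aug = attribution_augment(mask, fg, bg, rng)
```

Simulating the testing scenario then amounts to varying the jitter and the background pool during training.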
---
__Reference__
[1] Spatial transformer networks.
[2] MoVie: Visual model-based policy adaptation for view generalization. | Summary: This paper presents a novel method that utilizes two model branches to extract task-relevant and task-irrelevant representations separately from visual observations, aiming to enhance the zero-shot generalization ability of RL agents. The approach introduces four additional loss terms and two consistency losses to guide the agent's focus towards task-relevant areas across different scenarios. The proposed method can be seamlessly integrated into existing standard off-policy RL algorithms as a plug-and-play module. Experimental results demonstrate the effectiveness of the proposed model on two environments, surpassing baselines such as SAC and DrQ.
Strengths: 1. This paper is clearly written and easy to follow.
2. Based on the separated models architecture, this paper proposes multiple effective loss functions to focus on task-relevant features in visual-based RL generalization.
3. The authors provide detailed validations on the DMC environment and robotic manipulation tasks. They demonstrate the advantages of the proposed loss terms across multiple tasks in DMC (Table 3) and showcase the state-of-the-art performance of SMG (Table 1, 2).
Weaknesses: 1. While the paper compares the performance with model-free RL methods, it would be beneficial to also include a comparison with model-based RL methods. Previous works such as DreamerPro [1], Iso-Dream [2], and Denoised-MDP [3] have addressed visual distractions to enhance the generalization ability of RL agents.
[1] Dreamerpro: Reconstruction-free model-based reinforcement learning with prototypical representations.
[2] Iso-Dream: Isolating Noncontrollable Visual Dynamics in World Models.
[3] Denoised mdps: Learning world models better than the world itself.
2. The paper lacks sufficient discussion and analysis of its limitations.
3. The serial numbers in some figures appear to be somewhat disorganized.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper discusses the limitations, but that is not enough.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your constructive comments and suggestions. We address each of your comments as follows.
### Q1: It would be beneficial to also include a comparison with model-based RL methods.
---
A1:
To the best of our knowledge, there are still no model-based methods that effectively address the generalization problem in visual-based RL. Therefore, in our original paper, we did not include comparisons with model-based methods.
Although a series of model-based methods (e.g., TIA[1], DenoisedMDPs[2]) use a separated-models structure similar to SMG's, they address a fundamentally different problem. These algorithms primarily aim to enhance the agent’s robustness to distractors during training. Specifically, TIA trains and tests in video-background environments, whereas our method trains without a background and tests in video-background environments (analogous to the difference between i.i.d. and OOD evaluation). Methods like TIA do not incorporate specific designs, such as data augmentation techniques, to bridge the gap between training and testing scenarios, and therefore lack generalization capabilities.
Nevertheless, we have included TIA as a baseline to support our point; please refer to __Author Rebuttal 2.1__. As shown in Table 1, TIA fails to generalize in the video-background settings in all seven tasks. This further demonstrates the effectiveness of our improvements (the two consistency losses and data augmentation techniques) to the separated-models architecture under the generalization settings.
### Q2: The paper lacks sufficient discussion and analysis of its limitations.
---
A2:
Thank you very much for the reminder, we have added a paragraph to Section 6 that describes the limitation of our work. Please refer to the __Author Rebuttal 1.2__.
### Q3: The serial numbers in some figures appear to be somewhat disorganized.
---
A3: Thank you for your feedback. The order of Figures 3 and 4 is indeed incorrect due to LaTeX’s automatic formatting. We will ensure this is corrected in the final version of the paper.
### Reference
---
[1] Learning task informed abstractions.
[2] Denoised mdps: Learning world models better than the world itself. | Summary: This paper presents a novel approach called SMG (Separated Models for Generalization) to improve generalization in visual-based reinforcement learning (RL). The approach works by using separate foreground and background encoders/decoders and employing a mask to isolate task-relevant regions. In addition, it also applies four additional losses (mask ratio, background, Q-value, and empowerment losses) to enhance the model’s ability to distinguish between the two types of representations. To make the learned models generalize to different visual styles, it introduces attribution augmentation and consistency losses. The authors position this as a plug-and-play method that can enhance existing RL algorithms' generalization capabilities.
Experiments show SMG outperforms baseline methods, particularly in video-background settings where it maintains performance even with significant visual changes. Ablation studies validate the importance of each component.
The main contributions are:
- SMG: A separated model architecture with two branches to extract task-relevant and task-irrelevant representations from visual observations.
- Two consistency losses to guide the agent's focus on task-relevant areas across different scenarios.
- Strong performance on DMControl benchmark tasks, especially in video-background settings.
Strengths: This paper has several strengths:
- SMG achieves state-of-the-art performance on the DMControl Generalization Benchmark, particularly excelling in the challenging video-background settings. This demonstrates the practical effectiveness of the approach.
- SMG is a plug-and-play method that can enhance existing RL algorithms' generalization capabilities. It is designed to be easily integrated with existing off-policy RL algorithms, enhancing its practical value and potential for wide adoption.
- This paper includes detailed ablation studies that validate the importance of each component in the SMG architecture, providing insights into the method's workings.
- This paper is well written and provides clear visualizations of the reconstruction process, helping readers understand how SMG extracts and utilizes task-relevant features.
Weaknesses: This paper has several weaknesses:
- My major concern is the overclaim made by this paper. While it claims to address the generalization gap in visual-based reinforcement learning, the method proposed primarily tackles scenarios where only the backgrounds differ. However, visual generalization challenges are more diverse and include variations such as different lighting conditions and textures, which are common in real-world robotics applications. These scenarios appear to be overlooked in this paper.
- SMG introduces a lot of loss terms and associated hyperparameters, which could complicate tuning in practical applications.
- Specifically, the mask ratio $\rho$ appears to be crucial for performance, as it is the sole factor preventing the model from classifying everything as foreground. Given that $\rho$ represents the ratio between the foreground and the entire image, it likely necessitates per-task tuning, which could prove to be challenging and not scalable.
- The foreground consistency loss, as discussed in Section 3.3, heavily depends on the predicted mask to construct the augmented observation. During the initial stages of training, this process relies on potentially inaccurate mask predictions and attributions. Although the authors describe this as a bootstrapping process, further analysis regarding its stability and potential failure modes would be beneficial.
- The paper could be strengthened by considering a broader range of baselines. For example:
- Recent studies [1] suggest that visual encoders pre-trained on large-scale image datasets can improve the visual robustness of a policy. This paper does not make any comparisons with visual pre-training methods.
- Large vision foundation models like SAM [2] could potentially be utilized to provide supervision for generating foreground masks. Would this approach be more effective than training a mask predictor from scratch?
- The additional computation overhead introduced by the extra modules is concerning.
- The architecture, which involves separate models, essentially doubles the number of parameters compared to baseline methods. Although the authors argue that the performance improvements are due to the novel architecture rather than the increased number of parameters, this could still be problematic for practical applications with limited computational resources.
- Training time: The reported wall time for SMG is significantly longer than that of the baseline methods (22 hours versus 8-13 hours for 500,000 steps).
[1] Hansen, Nicklas, et al. "On pre-training for visuo-motor control: Revisiting a learning-from-scratch baseline." arXiv preprint arXiv:2212.05749 (2022).
[2] Kirillov, Alexander, et al. "Segment anything." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the "Weaknesses" section. Some additional questions are noted below:
- The terminology used for "foreground" and "background" is somewhat confusing. To clarify, "foreground" actually refers to the task-relevant parts of the image, while "background" refers to the task-irrelevant parts, correct?
- The necessity for background reconstruction is unclear. The authors claim that "improving the precision of background prediction can consequently enhance the foreground as well," but a more detailed explanation of this assertion would be beneficial.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There is no separate section for limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your constructive comments and suggestions. We address each of your comments as follows.
### Q1: Some testing scenarios appear to be overlooked in this paper.
---
A1:
We admit that real-world deployment scenarios can be more diverse and complex. **However, the four testing scenarios used in our paper are widely accepted benchmarks in visual-based RL [3,4,5].** For example, the color variation settings simulate different illumination conditions, while the video variation settings simulate training a robot in different environments, such as training indoors and testing outdoors. In addition, the testing scenarios in the two robotic manipulation tasks simulate texture changes. Therefore, we do not think realistic scenarios are overlooked.
Moreover, no algorithm except ours has yet effectively solved the video-hard setting, which is the closest scenario to realistic applications. We hope our paper can promote more research in this challenging setting.
### Q2: SMG introduces a lot of loss terms.
---
A2:
This phenomenon is common in methods that utilize separate models. For example, DenoisedMDPs has four loss terms, Iso-Dream has six loss terms, TIA [1] has seven loss terms, and IFactor [2] has eight loss terms, yet all achieve excellent performance. Therefore, we believe the focus should be on the difficulty of adjusting the loss weights across different tasks rather than on the number of loss terms.
**In all seven tasks, we used the same weights for the five auxiliary loss terms.** The results demonstrate that weighting four of the five terms equally is sufficient to achieve outstanding performance. This indicates that our method is robust enough across different tasks and does not require additional time for adjusting the weights.
### Q3: About mask ratio $\rho$
---
A3:
Please refer to __Author Rebuttal 2.6__
### Q4: Although the authors describe this as a bootstrapping process, further analysis regarding its stability and potential failure modes would be beneficial.
---
A4:
Thanks for raising this concern.
**The key guarantee that ensures the model learns accurate foreground images and masks is the utilization of $L_{q}$ and $L_{action}$ to extract reward-relevant and action-relevant features.** The bootstrapping process then starts from these features.
We have added an experiment to show a failure case by removing $L_{action}$ and stopping the gradient from $L_{q}$ to $z^+$; please refer to __Author Rebuttal 2.7__. As shown in Figure 3, the foreground model failed to learn meaningful images, and the background model learned everything from the input observations. This failure case occurs approximately once every five seeds. However, when we add the two loss terms back, we have never observed such failures.
### Q5: The paper could be strengthened by considering a broader range of baselines.
---
A5:
Thanks for your suggestion. We have added two more baselines:
- SAM-G [3] is a method that utilizes both pre-trained image encoders and the Segment Anything Model. **Although the results reported in its paper use four times the batch size and twice the training steps of SMG, SMG still achieves comparable performance.** This is because methods built on pre-trained models rely heavily on the similarity between the pre-training datasets and the RL task scenarios. Utilizing large vision foundation models also increases the parameter count and training time (SAM-G takes around two days to train). Despite being trained from scratch, SMG can quickly distinguish foreground from background in its environments and is more flexible than these pre-trained methods. Please refer to __Author Rebuttal 2.2__.
- TIA [1] is a model-based method that uses a separated-models approach similar to ours. However, as TIA is not designed for visual generalization, it fails in our testing scenarios. Please refer to __Author Rebuttal 2.1__.
### Q6: The additional computation overhead introduced by the extra modules is concerning.
---
A6:
We agree that algorithms need to be time-efficient to train for realistic tasks, but we believe the time consumption of SMG is reasonable.
- Although SMG requires 22 hours to train for 500k time steps, **in almost all tasks, SMG already outperforms the baselines’ 500k performance at 200k time steps (please refer to Figure 11 in our original paper).** We trained for 500k time steps to ensure a fair comparison. Therefore, it is possible to train for only half the time reported in our paper.
- Algorithms that perform well in more complex tasks inevitably require more complicated model structures and more parameters. Although the baselines take less time to train, they simply cannot generalize at all in the robotic manipulation tasks. We also consider SMG's training time acceptable, as methods like SAM-G require much more.
### Q7: "foreground" actually refers to the task-relevant parts of the image, while "background" refers to the task-irrelevant parts, correct?
---
A7:
Yes, you are right. We think the terms “foreground” and “background” make our key idea more accessible to readers. Therefore, we will add a clear definition in Section 3.1:
“In the following paragraphs, we will use ‘foreground’ to refer to task-relevant parts and ‘background’ to refer to task-irrelevant parts.”
### Q8: The necessity for background reconstruction is unclear.
---
A8:
Please refer to __Author Rebuttal 1.5__
### Q9: There is no separate section for limitations in the paper.
---
A9:
Please refer to __Author Rebuttal 1.2__
### Reference
---
[1] Learning task informed abstractions.
[2] Learning world models with identifiable factorization.
[3] Generalizable visual reinforcement learning with segment anything model.
[4] Stabilizing deep q-learning with convnets and vision transformers under data augmentation.
[5] Generalization in Reinforcement Learning by Soft Data Augmentation
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their response.
I still feel that the proposed method appears to be tailored primarily for generalization across varying visual backgrounds. I would recommend that the authors consider moderating the scope of their claims accordingly.
---
Reply to Comment 1.1.1:
Title: Discussion reply to reviewer jV8P
Comment: Thanks for your response. There seems to be some misunderstanding regarding our method and the experimental settings. SMG is definitely not tailored specifically for visual background settings, and we’d like to elaborate further:
- SMG does not primarily tackle scenarios where only the backgrounds differ. The key idea behind SMG is to focus on task-relevant features across training and testing scenarios, and this idea is applicable to any generalization task. Although more diverse generalization scenarios exist in the real world, we have tried to include a sufficient and widely accepted range of test settings in the paper to demonstrate the generalization ability of SMG.
- We adopt a widely used evaluation setting similar to previous works [1,2,3,4], which includes color-easy, color-hard, video-easy, and video-hard, plus five different testing scenarios in robotic manipulation tasks. Only two of the nine settings (the two video settings) are related to visual backgrounds: the two color settings in DMC introduce randomized colors (of both the foreground and the background), while the five settings in robotic manipulation tasks involve changes to both colors and textures (please refer to Figures 5 and 6 in our paper).
- Despite SMG’s impressive performance in the video background settings, we’d like to highlight the improvements SMG delivers in other settings as well. In the random-color settings (Table 6 in our paper), SMG outperforms all baselines in 7 out of 10 tasks, with the performance gap within 5% in the other 3 tasks. In robotic manipulation tasks (Table 2 in our paper), SMG is the only method to maintain stable performance across 5 random-texture settings. These results indicate that SMG not only performs well in video-background settings but also exhibits superior generalization capability in random-color and random-texture settings.
---
Other than that, I hope the answers to your other questions have addressed your queries. If there are any additional concerns, please let us know!
---
Reference
[1] Generalization in Reinforcement Learning by Soft Data Augmentation.
[2] Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation.
[3] Look where you look! Saliency-guided Q-networks for generalization in visual Reinforcement Learning.
[4] Spectrum Random Masking for Generalization in Image-based Reinforcement Learning. | Summary: The authors propose a novel objective to improve robustness of the visual encoder in RL to background noise and to color perturbations. First, the authors split the visual encoder into two models: background encoder/decoder and foreground encoder/decoder. The proposed training objective contains multiple components:
- overall reconstruction loss that combines outputs of the background and the foreground decoders modulated by a mask;
- mask ratio loss that prevents the foreground mask from taking up too much of the image;
- background reconstruction loss that uses the learned mask to generate a new data sample;
- q-value loss that makes the foreground representation capture value information;
- empowerment loss that makes the foreground representations capture the agent actions;
- foreground and q-value consistency losses that make sure that changing the background (using the learned mask) doesn't change the foreground features and q-values
The method is tested on DMC generalization benchmark and on robotic manipulation tasks.
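As a rough illustration of how the first two components listed above could fit together, here is a minimal sketch; the array shapes and the L1 form of the ratio penalty are assumptions on my part, not the paper's exact formulation:

```python
import numpy as np

def masked_reconstruction(fg_pred, bg_pred, mask):
    # Foreground and background decoder outputs combined via the
    # predicted mask into a full-observation reconstruction.
    return fg_pred * mask + bg_pred * (1.0 - mask)

def mask_ratio_penalty(mask, rho):
    # Penalize the foreground mask's average coverage deviating from a
    # target ratio rho, which stops the mask from claiming the whole image.
    return abs(float(mask.mean()) - rho)
```

Under this sketch, the ratio penalty is zero exactly when the mask covers the target fraction rho of the image, and grows as the mask expands toward covering everything.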
Strengths: - The method performs really well with various distractors;
- The idea of re-using the learned masks for augmentations is interesting and, as far as I can tell, novel;
Weaknesses: - The writing is a bit sloppy, with many typos and confusing sentences;
- The resulting objective is too complex and has too many terms;
- No comparison to TIA, although the presented method is quite similar. Was that because you only compare to model-free methods?
Typos (some of them, I didn't write down all of them, please run a spell checker on the text):
line 13: achieving free from overfitting : not clear what this means
line 38: further strengths -> further strengthens
line 100: focused in -> focused on
line 536: Comparision -> comparison
Technical Quality: 3
Clarity: 2
Questions for Authors: - In your objectives, you're maximizing the mutual information between foreground representations of two consecutive states and the action that was taken between them. Have you tried minimizing MI between background representations and actions and/or rewards? This could be done with the information bottleneck method, for example. If you haven't tried this, do you think this could help?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have adequately described limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your constructive comments and suggestions. We address each of your comments as follows.
### Q1: The writing is a bit sloppy, with many typos and confusing sentences
---
A1:
Thank you for your careful review. We have thoroughly reviewed the paper multiple times and assure you that the errors listed in __Author Rebuttal 1.1__ will be corrected in the final version of the paper.
We hope that the current version is clear enough for readers to understand our key ideas. If there are any sentences that are still prone to misunderstanding, please let us know.
### Q2: The resulting objective is too complex and has too many terms
---
A2:
This phenomenon is common in methods that utilize separate models. For example, Iso-Dream [4] has six loss terms, TIA [2] has seven loss terms, and IFactor [5] has eight loss terms, yet all achieve excellent performance. Therefore, we believe the focus should be on the difficulty of adjusting the loss weights across different tasks rather than on the number of loss terms.
**In all seven tasks, we used the same weights for the five auxiliary loss terms.** The results demonstrate that weighting four of the five terms equally is sufficient to achieve outstanding performance. This indicates that our method is robust enough across different tasks and does not require additional time for adjusting the weights.
The main concern may stem from why we use a smaller weight for $L_{fore\ consist}$ than for the others. This is because an overly large weight would lead the model to overfit the inaccurate attribution predictions in the early stage (as we use the model output under the raw observation as ground truth), and would cause the foreground encoder to take more time steps to learn an accurate task-relevant representation. **We have added a new ablation study to provide readers with more insight into our loss weight setting.** By setting $\lambda_{fore\ consist}$ to 1 (refer to __Author Rebuttal 2.4__), the performance of SMG drops by around 10-20% across different tasks.
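To make the weighting scheme concrete, here is a tiny sketch of the described setup: four auxiliary terms weighted equally and a smaller weight on the foreground-consistency term. The 0.1 value is purely illustrative, not the paper's actual hyperparameter:

```python
def auxiliary_loss(l_recon, l_back, l_action, l_mask, l_fore_consist,
                   lam_fore_consist=0.1):
    """Combine the auxiliary terms: four weighted equally, with the
    foreground-consistency term down-weighted so that early, inaccurate
    attribution predictions are not overfit."""
    return l_recon + l_back + l_action + l_mask + lam_fore_consist * l_fore_consist
```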
### Q3: No comparison to TIA, although the presented method is quite similar. Was that because you only compare to model-free methods?
---
A3:
To the best of our knowledge, there are still no model-based methods that effectively address the generalization problem in visual-based RL. Therefore, in our original paper, we did not include comparisons with model-based methods.
Although a series of model-based methods (e.g., TIA[2], DenoisedMDPs[3]) use a separated-models structure similar to SMG's, they address a fundamentally different problem. **These algorithms primarily aim to enhance the agent’s robustness to distractors during training.** Specifically, TIA trains and tests in video-background environments, **whereas our method trains without a background and tests in video-background environments (analogous to the difference between i.i.d. and OOD evaluation).** Methods like TIA do not incorporate specific designs, such as data augmentation techniques, to bridge the gap between training and testing scenarios, and therefore lack generalization capabilities.
Nevertheless, we have included TIA as a baseline to support our point; please refer to __Author Rebuttal 2.1__. As shown in Table 1, TIA fails to generalize in the video-background settings in all seven tasks. This further demonstrates the effectiveness of our improvements (the two consistency losses and data augmentation techniques) to the separated-models architecture under the generalization settings.
### Q4: Have you tried minimizing MI between background representations and actions and/or rewards?
---
A4:
This is a very interesting question. We aim to maximize the MI between the foreground representation and actions or rewards because this is the most straightforward way to learn a good task-relevant representation. We did consider minimizing the MI between the background representation and actions or rewards as well, but we abandoned this for the following reasons:
TIA [2] attempts to minimize the mutual information between the background representation and rewards with the loss term $L_{Radv}=-\max_q q(r_t|s_t^-)$. TIA implements this by maximizing the prediction loss of the background reward model (https://github.com/kyonofx/tia/blob/main/Dreamer/dreamers.py#L368). This approach appears problematic because the reward model can easily learn parameters that satisfy this condition, such as outputting 0 for any state, which would make the prediction loss consistently high. We have also run ablation experiments on TIA, and the results show that this loss term does not significantly improve model performance. Given that incorporating this term would further complicate the optimization objective, we chose not to include it.
As for the MI between the background representation and actions, since the background encoder does not include actions as its input, the background representation inherently lacks action-relevant features. Therefore, there is no need to minimize this MI term.
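A toy numeric illustration of the degeneracy argument above (all numbers are made up): when the objective is to *maximize* the background reward model's prediction loss, a constant predictor already attains a high loss, so the adversarial term is trivially satisfiable without the background representation discarding any reward information.

```python
import numpy as np

def prediction_loss(pred, rewards):
    # Mean squared error of a reward prediction head.
    return float(np.mean((pred - rewards) ** 2))

rewards = np.array([1.0, 2.0, 3.0])
constant_pred = np.zeros_like(rewards)   # e.g., a head that always outputs 0
informative_pred = rewards.copy()        # a head that fits the rewards

# The constant head attains a much larger loss, which is exactly what
# "maximize the prediction loss" asks for -- a degenerate solution.
assert prediction_loss(constant_pred, rewards) > prediction_loss(informative_pred, rewards)
```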
### Reference
---
[1] Generalizable visual reinforcement learning with segment anything model.
[2] Learning task informed abstractions.
[3] Denoised mdps: Learning world models better than the world itself.
[4] Iso-dream: Isolating and leveraging noncontrollable visual dynamics in world models.
[5] Learning world models with identifiable factorization.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying the difference with TIA and running the additional experiments.
Also thank you for clarifying that you use the same hyperparameters across all tasks. Please emphasize that in the text, I think that strengthens your claims.
I increase my score to 7.
---
Reply to Comment 1.1.1:
Title: Discussion reply to reviewer C7ia
Comment: Yes, we will revise some paragraphs and add the additional experiments to the appendix. Thank you again for helping us improve the paper and update the score, we really appreciate your valuable comments! | Rebuttal 1:
Rebuttal: We revised the paper and added the suggested experiments according to the reviewers’ comments. The detailed revisions are described as follows. The additional figures and a table are attached in the PDF file.
# 1. Revisions
### 1.1. Revise some typos and sentences
line 13: achieving free from overfitting -> achieving free from overfitting to task-irrelevant features.
line 38: further strengths -> further strengthens
line 100: focused in -> focused on
line 458: perceptrons -> perceptions
line 523: Bootstraping -> Bootstrapping
line 536: Comparision -> comparison
### 1.2. Added a paragraph describing limitations in section 6
SMG is particularly well-suited for robotic manipulation tasks in realistic scenarios. However, when the observation contains too many task-relevant objects, the complexity of accurately learning a mask increases. This can lead to a decline in SMG’s performance. For instance, in an autonomous navigation task, the presence of numerous pedestrians in the view makes it challenging to accurately mask all of them.
### 1.3. Introduce more related works in section 5
In addition to view generalization, considerable research has focused on dynamic generalization [3,4,5] to develop a global model capable of generalizing across different dynamics. Additionally, several studies [6,7,8] have explored task generalization, which aims to enable learned agents to generalize to new tasks.
### 1.4. Add the pseudo code of SMG
| __Algorithm 1__ SAC with separated models |
| :----|
|__Denote__ network parameters $\theta$, mask ratio $\rho$, batch size $N$, replay buffer $\mathcal{B}$|
|__Denote__ policy network $\pi_{\theta}$, foreground encoder $f^+_{\theta}$, background encoder $f^-\_{\theta}$|
|__for__ each iteration time step __do__|
|$\qquad a,o',r\sim\pi_{\theta}(f^+_{\theta}(o)),\mathcal{P}(o,a),\mathcal{R}(o,a)$|
|$\qquad \mathcal{B}\leftarrow \mathcal{B}\ ∪\ (o,a,r,o')$|
|$\qquad$ __for__ each update time step __do__|
|$\qquad \qquad \\{o_i,a_i,r_i,o'\_i\\}_{i\in[1,N]}\sim \mathcal{B}$|
|$\qquad \qquad o^+\_i,mask_i\sim f^+_{\theta}(o_i)$|
|$\qquad \qquad o^-\_i\sim f^-_\theta(o_i)$|
|$\qquad \qquad o^{aug}_i \leftarrow o^+\_i\ast mask_i+\epsilon\ast(1-mask_i)$ // $\epsilon$ is sampled from image dataset|
|$\qquad \qquad L_{recon}\leftarrow L(o_i,\ o^+_i\ast mask_i+o^-_i\ast(1-mask_i))$ // Equation 2|
|$\qquad \qquad L_{fore\ consist}\leftarrow L(o^+\_i,f^+_\theta(o^{aug}_i))$ // Equation 7|
|$\qquad \qquad L_{back}\leftarrow L(\epsilon,f^-_\theta(o^{aug}_i))$ // Equation 4|
|$\qquad \qquad L_{action}\leftarrow L(o_i,o'_i,a)$ // Equation 6|
|$\qquad \qquad L_{mask}\leftarrow L(mask_i,\rho)$ // Equation 3|
|$\qquad \qquad L_{q\ consist}\leftarrow L(Q_\theta(f^+\_\theta(o_i),a),Q\_\theta(f^+_\theta(o^{aug}_i),a))$ // Equation 8|
|$\qquad \qquad L_{aux}\leftarrow L_{recon}+L_{fore\ consist}+L_{back}+L_{action}+L_{mask}$ // auxiliary loss|
|$\qquad \qquad L_{critic}\leftarrow L_{q}+L_{q\ consist}$ // critic loss|
|$\qquad \qquad$ __update__ $\theta$ with $L_{actor},L_{critic},L_{aux}$|
|$\qquad$ __end for__|
|__end for__|
|$L_{q},L_{actor}$ is defined by SAC|
||
### 1.5 Revise line 136
"Improving the precision of background prediction can consequently enhance the foreground as well. Since the foreground and background are complementary, providing supervision for the background prevents the foreground from learning all parts of the observation."
# 2. Experiments
### 2.1. Add TIA [2] as a new baseline
The results are shown in the 4th column of __Table 1__.
### 2.2 Add SAM-G [1] as a new baseline
The results are shown in the 5th column of __Table 1__.
Training SAM-G for one seed on our RTX3090 graphics card took nearly two days. Therefore, we are currently unable to complete all the training during the rebuttal period. Instead, we report the results from the SAM-G paper, noting that they used four times the batch size and twice the training time steps compared to SMG. Despite this, SMG still achieves comparable performance to SAM-G.
### 2.3. Add SVEA as a new backbone method
The results are shown in the 8th column of __Table 1__.
### 2.4. Add a new ablation of $L_{fore}$
The results are shown in the 6th column of __Table 1__.
### 2.5. Visualize the reconstruction process in adroit-pen
The results are shown in the __Figure 1__.
### 2.6. Further study the impact of ρ in peg-in-box
ρ does need to be set per task, but its choice can be roughly estimated from the percentage of the task-relevant area. It is also possible to directly reuse settings from similar tasks. For instance, we used ρ=0.06 for three different tasks and ρ=0.12 for two different tasks.
In addition, an imprecise ρ only slightly affects the performance. We have added a new ablation for ρ in peg-in-box task with a much larger interval; The results in Figure 2 indicate that variations do not significantly influence the performance. As the optimal ρ being 0.12, even when setting ρ to 0, SMG still accurately masks out the task-relevant area and the performance drops by only 8%.
### 2.7. Add a new ablation of removing $L_{q}$ and $L_{action}$
The results are shown in __Figure 3__.
# Reference
[1] Generalizable visual reinforcement learning with segment anything model.
[2] Learning task informed abstractions.
[3] Context-aware dynamics model for generalization in model-based reinforcement learning.
[4] Why generalization in rl is difficult: Epistemic pomdps and implicit partial observability.
[5] Graph networks as learnable physics engines for inference and control.
[6] Zero-shot task generalization with multi-task deep reinforcement learning.
[7] Task Aware Dreamer for Task Generalization in Reinforcement Learning.
[8] Learning modular neural network policies for multi-task and multi-robot transfer.
Pdf: /pdf/2435444b4e9be9ba285d133fe59ce7b591b3f3d6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LoRANN: Low-Rank Matrix Factorization for Approximate Nearest Neighbor Search | Accept (poster) | Summary: This paper investigates approximate nearest neighbor (ANN) search, where, given a collection $\mathcal{X}$ of points in $\mathbb{R}^d$, the task is to find the top $k$ data points that are closest to a query point $q$ according to some similarity or dissimilarity measure (denoted by $\delta(\cdot, \cdot)$), such as inner product. There are many classes of algorithms in existence[1], with this particular work falling into the clustering-based paradigm.
In clustering-based (aka Inverted File or IVF) ANN search, $\mathcal{X}$ is partitioned into a set number of clusters, often using a geometric clustering algorithm such as (spherical) KMeans, with every cluster represented using some sketch of the cluster such as its mean. When presented with $q$, the algorithm first identifies $\texttt{nprobe}$ clusters to search by ranking the clusters according to the distance between $q$ and their representative points. It then computes $\delta(q, \cdot)$ with points within the $\texttt{nprobe}$ clusters, and returns the top $k$ points from that set.
This work concerns the second step. Typically the computation of $\delta(q, \cdot)$ uses Product Quantization (PQ) to reduce memory consumption and to perform the distance computation efficiently. Instead, this work reduces the dimensionality of the data matrix within each cluster using its low-rank approximation. The key insight is that, the low-rank approximation is constrained to the space of rank $r$ matrices that predict the inner products well on a specific query distribution.
[1] "Foundations of Vector Retrieval" by S. Bruch. Springer.
Strengths: * The proposed method relies on a very simple yet effective method for supervised dimensionality reduction in the context of ANN search
* The paper is easy to read and arguments straightforward to follow
* Results are encouraging
Weaknesses: Post-discussion Update: The authors have addressed my concerns around the experimental setup, and have expressed interest in adopting a more clear narrative and framing of their contributions.
-----------------
* Presentation:
- I think the authors can shed quite a bit of fluff by positioning the work as I did in my summary. This work's contribution is very much in the speeding up and improving the accuracy of the score computation phase in clustering-based/IVF ANN search. Presented that way, the authors can immediately focus on the regression problem instead, and not introduce distractions such as the details of clustering, the importance of MIPS (section 2.1), and more. It'd make for a cleaner presentation of your idea, and lets your readers understand the scope of your contributions more clearly.
- As a minor point, Theorem 1 is a vacuous statement. It's neither necessary to explain the findings of the paper, nor is it insightful enough to birth new research directions. Perhaps you can move it entirely to the appendix if you insist on including it in the work.
- It must be noted that the method presented in this work is supervised. That is a critical differentiating factor between LoRANN and existing methods such as PQ and Scann.
* Methodology: One of the interesting insights that led to Scann is that not all inner products are equally important. For a quantization method to be successful, it needs to preserve the inner product between $q$ and high-ranking data points better than the inner product between $q$ and low-ranking points. In your work, you model the problem as regression, and attempt to minimize the error of inner product approximation equally for all data points. What motivates this uniform weighting? Have you considered a ranking formulation of the problem rather than regression? There is a vast literature on learning-to-rank which, in fact, is very relevant to your idea, but where Scann's insight is baked into its machinery/objective.
* Experiments: Because the methodology is very straightforward and the novelty is minimal, I expect a much stronger experimental evaluation of the proposed method. Here are a few points to consider:
- Your main experiments conflate two orthogonal axes of evaluation: effect of clustering vs effect of score computation; this I believe stems from the way you present your work. Your contribution, as I noted above, is to the score computation phase of IVF-based ANN search. To evaluate your contributions fairly against SOTA IVF methods, you must partition the data once. Given this fixed set of partitions, you can directly compare the efficacy of LoRANN against PQ and Scann's quantization protocol. By running each method independently as you do now, such that each produces its own partitioning of the data separately, you run the risk of conflating the effect of clustering on IVF's accuracy with the effect of the specific choice of dimensionality reduction/quantization. As it stands, I cannot deduce the exact reason why your method should work better.
- You are also comparing a supervised method that adapts to a query distribution, with unsupervised baselines. Not only is it not a fair comparison, your results are also not informative. It is not surprising that your method does well: you give it an unfair advantage (as confirmed by Figure 1 - left) by finding a matrix that can predict inner products on *a specific query distribution*. A more reasonable experiment would be to (a) compare a variant of LoRANN that's trained on the data points only (i.e., without training queries) with other IVF methods, and (b) incorporating the query distribution into Scann (its objective can use information about the query distribution). There are other methods that can use a query set to improve quantization ([2,3] are a couple of examples).
- As a very simple baseline, consider partitioning the data using centroids obtained from a partitioning of queries!
- Frankly, a comparison with graph methods is nice, but is rather tangential. I encourage you to contrast your method with other IVF methods first, focus your discussion to justifying your proposal against SOTA IVF methods, and then conclude your work with a comparison with graph methods for completeness.
[2] "Query-Aware Quantization for Maximum Inner Product Search" by Zhang et al. AAAI 2023.
[3] "A Learning-to-Rank Formulation of Clustering-Based Approximate Nearest Neighbor Search" by Vecchiato et al. SIGIR 2024.
Technical Quality: 3
Clarity: 2
Questions for Authors: My questions mainly concern your experimental evaluation:
* Setup: What's the training set used to train LoRANN? You have kindly given statistics about each dataset, but sadly did not include any information about the size of the query sets.
* Figure 1: What is the size of the initial candidate set in the right figure, where reranking is enabled? It is important to know this because the size of the initial set can explain the small difference between the different curves (e.g., if you retrieve a very large set followed by re-ranking, the accuracy of each method pre-reranking becomes less and less important)
* PQ is obviously sensitive to the bitrate, a hyper-parameter. Can you elaborate how LoRANN holds up against PQ as you sweep the rank parameter and PQ's code size, in terms of speed and memory usage?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I really like this work and the simplicity of the idea. I'd love to see this work in print, but I think it can be so much more complete with a proper set of experiments. As it stands, the incomplete experiments limit the reach of this work. A stronger formulation (using ranking, e.g.) can even enhance the results too. I hope my feedback proves helpful in strengthening the arguments of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for constructive feedback and good suggestions for improving the manuscript. In particular, clarifying the main contribution of our article as improving the accuracy of score computation in clustering-based ANN search would indeed make the presentation cleaner and our argument stronger; we will revise the manuscript accordingly.
We address the remarks and questions about our experimental methodology by clarifying the details of our experimental setup and providing additional experimental results. Most importantly, we want to clarify that our method does not get an unfair advantage by using any training data the other methods do not have access to. In all the experiments (except the experiments of Section 5 that specifically verify that our method can adapt to the query distribution in the OOD setting), our method uses only the corpus itself as a global training set and draws local training sets as detailed in Section 3. All the data sets, except Yandex used in Section 5, are benchmark data sets for which no specific training set drawn from the query distribution is available. We follow Aumüller et al. [1] by dividing the original data set randomly into a corpus and a set of 1000 test queries. Thus the corpus and the test queries are drawn from the same distribution, and LoRANN exhibits superior performance compared to PQ even in this standard in-distribution setting. This critical detail was missing in the original manuscript, and we thank the Reviewer for pointing this out.
As the Reviewer suggests, it would be natural to also consider non-uniform weightings. The anisotropic quantization used in SCANN works well for GloVe, which has a specific data distribution, but does not seem to improve the performance of standard PQ for most data sets, as evidenced by both our experiments and experiments by the Faiss authors [2]. We also performed preliminary experiments with optimizing weighted MSE and ranking losses, but they did not improve accuracy while making the models much slower to train. Meanwhile, the simple MSE already results in superior performance compared to earlier methods. However, as the Reviewer suggests, exploring different supervised formulations and loss functions is still an interesting direction for future work.
To address the remark about conflating the effects of clustering and score computation, in Figure 1 of the attached pdf, we perform an additional experiment where the clustering is kept constant to directly compare the proposed score computation method to the score computation method (product quantization) employed by IVF-PQ. The proposed score computation method outperforms product quantization on all of the data sets. Unfortunately, due to the limited time for author response, we were unable to include anisotropic quantization of SCANN in this experiment, but since SCANN did not outperform IVF-PQ in our end-to-end experiments (see Figure 3 and Appendix E of the original manuscript), we do not expect that the performance of anisotropic quantization would be better than the performance of PQ.
To compare the proposed method to PQ for fixed memory consumption, in Figure 2 of the attached pdf, we vary the code size of PQ, and compare LoRANN to IVF-PQ with hyperparameters resulting in similar memory usage. The results of this experiment verify that for fixed memory consumption, LoRANN has superior performance compared to PQ that is a typical choice in memory-limited use cases.
[1] M. Aumüller et al. ANN-Benchmarks: A benchmarking tool for approximate nearest neighbor algorithms. Information Systems 87 (2020): 101374.
[2] Indexing 1M vectors. Faiss Wiki on GitHub.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Thank you so much for carefully answering all my questions and clarifying your experimental setup! I also appreciate the empirical data you collected for the experiments I requested in my review, in such a short amount of time. This apples-to-apples comparison highlights the strengths of your method even more.
I have no further questions or concerns. You have adequately addressed the issues I raised. It is delightful to see such a simple algorithm perform so well in practice.
I hope you incorporate my suggestions regarding the structure of the presentation, the reframing of your contributions, and the experiments you've additionally run into a revision of your work. | Summary: This paper introduces a new method for the nearest neighbor search problem. Leveraging the low-rank assumption, the authors combine low-rank matrix factorization, clustering, and quantization to enhance the speed of nearest neighbor search. The authors conducted extensive experiments to demonstrate the advantages of their method over numerous baselines.
Strengths: 1. The authors conducted extensive experiments to compare their methods with other baselines.
2. The method proposed by the authors is easy to follow and implement.
Weaknesses: 1. It seems that all the techniques mentioned in this paper are already known to be useful for nearest neighbor search.
2. As shown in Figure 2, all the components contribute to the final results. I don't see any reason why any component applied there is unique to the new algorithm. For example, the clustering and 8-bit quantization techniques appear to be applicable to any existing nearest neighbor search algorithm or library. Thus, I question whether it is fair to employ too many techniques when comparing with other standard nearest neighbor search libraries.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Regarding the low-rank approximation, I don't understand why this method is fundamentally different from first performing dimension reduction on the dataset and then applying any standard nearest neighbor search algorithm.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their feedback. However, we want to clarify that our method does not reduce to techniques that have already been used in the earlier ANN literature. As nicely summarized by Reviewer udkL, the main novel contribution of the manuscript is a new supervised method (reduced-rank regression) for cluster-specific score computation in clustering-based ANN search. To the best of our knowledge, this score computation method has not been considered in the earlier ANN literature.
We also want to clarify that the proposed method is fundamentally different from first performing a dimensionality reduction and then applying any ANN algorithm. Our cluster-specific supervised method does not reduce to global dimensionality reduction, since it is neither global nor unsupervised. The key differences are: (1) we compute the scores locally at the clusters by cluster-specific reduced-rank regression models, not by one global model, allowing much greater compression; (2) the proposed score computation method is supervised, i.e., we predict the inner products via reduced-rank regression, and not by an unsupervised dimensionality reduction method.
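To make the distinction concrete, here is a minimal sketch of cluster-specific reduced-rank regression on random toy data (all sizes hypothetical; the classical closed-form RRR solution is used, which need not match the paper's exact fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# One cluster: n_c corpus points in dimension d, m training queries, target rank r.
m, n_c, d, r = 500, 40, 16, 4
Q = rng.standard_normal((m, d))     # training queries (here drawn like the corpus)
Xc = rng.standard_normal((n_c, d))  # corpus points of this cluster
Y = Q @ Xc.T                        # exact inner products: the regression targets

# Classical reduced-rank regression: take the OLS solution, then project the
# fitted values onto their top-r right singular directions.
B_ols, *_ = np.linalg.lstsq(Q, Y, rcond=None)            # d x n_c
_, _, Vt = np.linalg.svd(Q @ B_ols, full_matrices=False)
V = Vt[:r].T                                             # n_c x r
A, B = B_ols @ V, V.T               # store A (d x r) and B (r x n_c)

# Query time: scores ~ (q @ A) @ B, i.e. an r-dimensional projection of q
# followed by an r x n_c product, instead of the full d x n_c multiplication.
q = rng.standard_normal(d)
approx = (q @ A) @ B
exact = Xc @ q
```

Unlike an unsupervised SVD of `Xc` alone, the fit here is driven by the inner products `Y` on the training queries, which is what makes the method supervised.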
In addition to the novel score computation method, we propose a concrete ANN algorithm (LoRANN) that combines this score computation method with $k$-means clustering, global dimensionality reduction, and 8-bit quantization. As the Reviewer correctly points out, these techniques have been used earlier in ANN algorithms. However, we combine them in a novel fashion: for example, LoRANN performs the entire score computation using 8-bit integer vector-matrix multiplications. We also do not think that using these techniques makes the comparison to the standard nearest neighbor libraries unfair. We use them to design a practically useful ANN algorithm and to ensure a fair end-to-end comparison with the state-of-the-art ANN libraries that also utilize these techniques.
However, we acknowledge that, as also pointed out by Reviewer udkL, our experimental validation was unclear, since we did not directly compare the novel component of our algorithm (supervised score computation) against the product quantization that is used for score computation by the state-of-the-art clustering-based libraries. In Figure 1 of the attached pdf file, we present the results of an experiment that demonstrates the performance improvement provided by our novel score computation method. In particular, when the clustering is constant across methods, the proposed score computation method outperforms product quantization.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I have increased my score to 5. | Summary: The paper describes a method for computing approximate nearest neighbors in high dimensions. Computing nearest neighbors is a classical problem in computational geometry, with applications in many areas of computer science. The classical solutions in low dimensions do not generalize to high dimensions.
The approach in the paper has two main ideas: the first is performing k-means clustering, computing nearest neighbors on the means, and then computing more accurate nearest neighbors inside the cluster. The second is reducing the computation in each cluster to multivariate regression, which can be solved approximately by low-rank matrix factorization.
Strengths: The result appears to be very useful in many applications.
Weaknesses: Unfortunately I am not an expert in this field and cannot comment on how this result compares to the current state of the art.
Technical Quality: 4
Clarity: 4
Questions for Authors: N/A
Confidence: 1
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We acknowledge that it is not easy to review an article without field-specific knowledge and appreciate the effort. The ANN-benchmarks project (the link is provided on page 14 in Appendix B of the original manuscript) is the de facto standard for performance evaluation in the field of ANN search. We perform the experiments in the ANN-benchmarks framework, and use their leaderboard to pick the baseline methods. Thus, it should be possible to verify that the comparisons presented in the article are to the actual state-of-the-art even without being an expert in the field. | Summary: The paper presents LoRANN, a novel algorithm for Approximate Nearest Neighbor (ANN) search that leverages low-rank matrix factorization and k-means clustering. The core idea is to approximate the ordinary least squares solution of the inner product computation via reduced-rank regression. The authors also introduce a quantized 8-bit version of LoRANN, which is memory efficient and performs well on high-dimensional data. The experiments demonstrate that LoRANN outperforms existing methods on both CPU and GPU.
Strengths: The authors provide extensive experimental results, reporting that their method outperforms leading product quantization-based algorithms and has faster query times than graph-based methods at certain recall levels.
Weaknesses: There is room for improvement in the visual presentation of this paper. Additionally, it would be best to keep the starting and ending points of the curves consistent to better compare all methods.
At different recall levels, LoRANN is sometimes faster and sometimes slower than other methods (GLASS, CAGRA). The authors should analyze the reasons that lead to this phenomenon.
Technical Quality: 3
Clarity: 2
Questions for Authors: Does LoRANN provide any theoretical guarantees on approximation quality or search time?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations of this work have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the suggestion on improving the presentation of our graphs and will incorporate this change. We would also be happy to hear any additional suggestions regarding the visual presentation.
As mentioned in Section 8 (Limitations), the reason for the lower relative performance of LoRANN compared to graph algorithms (Glass, CAGRA) at the highest recall levels ($>0.9$) is that our method is clustering-based: too many clusters have to be explored to reach the highest recall levels. This performance degradation at the highest recall levels applies also to other SOTA clustering-based methods such as SCANN and IVF-PQ. Our method improves the performance of cluster-specific score computation, but it cannot overcome this limitation. Naturally, addressing this drawback is subject to further research.
We do not currently provide theoretical guarantees on approximation quality, but this is an interesting direction for future research. The search time of our algorithm can easily be expressed in terms of given hyperparameters, and we will include this detail in a further revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal, and I will maintain my rating. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive feedback. Here we address the most important concerns about novelty (Reviewer bJwE) and experimental methodology (Reviewer udkL) by clarifying our contribution and experimental setup, and performing new experiments:
- We clarify that our method does not get an unfair advantage by using training data that the baseline methods do not have access to. In all of the experiments (except those in Section 5: Out-of-distribution queries) our algorithm draws its training sets from the corpus itself.
- We clarify that the main methodological contribution of the manuscript is a novel supervised score computation method for clustering-based ANN search.
- We empirically verify that the proposed score computation method outperforms the score computation method (product quantization) used by the SOTA clustering-based algorithm IVF-PQ.
Detailed comments are provided in our reviewer-specific responses.
Additionally, the attached pdf includes two new experiments: (1) In Figure 1, we keep the clustering constant to directly compare the proposed score computation method to product quantization (PQ); (2) In Figure 2, we vary the code size of PQ to compare LoRANN to IVF-PQ with different hyperparameters resulting in similar memory usage.
Pdf: /pdf/65ac4a9bab356ec5aaece6873e46eaabb56488ae.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A provable control of sensitivity of neural networks through a direct parameterization of the overall bi-Lipschitzness | Accept (poster) | Summary: This paper investigates and proposes a novel bi-Lipschitz neural network architecture. This architecture provides simple, direct, and tight control of the Lipschitz and inverse Lipschitz constants through the use of two parameters (the ideal minimum), equipped with theoretical guarantees. To devise their architecture, the authors exploit convex neural networks and Legendre-Fenchel duality. The authors also propose a variant of their bi-Lipschitz architecture that is more scalable by exploiting partially input convex neural networks. Finally, the authors present a set of experiments to showcase the utility of their model in concrete machine learning applications, namely uncertainty estimation and monotone problem settings, and show that it can improve on previous methods.
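For context, the standard convex-analysis facts that make such a construction possible can be stated as follows (this is the general principle, not necessarily the paper's exact parameterization): if a potential $F$ is $\alpha$-strongly convex and $\beta$-smooth, its gradient map is bi-Lipschitz with these two constants as bounds, and the Legendre-Fenchel conjugate supplies the inverse map:

```latex
% Legendre-Fenchel conjugate; its gradient inverts the gradient of F:
F^{*}(y) = \sup_{x} \bigl( \langle x, y \rangle - F(x) \bigr),
\qquad \nabla F^{*} = (\nabla F)^{-1}.
% If F is \alpha-strongly convex and \beta-smooth, \nabla F is bi-Lipschitz:
\alpha \, \lVert x - x' \rVert \;\le\; \lVert \nabla F(x) - \nabla F(x') \rVert \;\le\; \beta \, \lVert x - x' \rVert .
```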
Strengths: - The paper is well written. After a clearly structured related-work section (with extensive background and related work provided in Appendix A), the authors present their new design, explicitly explaining how the forward pass of their network is computed, its expressive power, and how backpropagation can be done.
- The authors acknowledge that the computational cost of their approach can pose serious limitation and propose to overcome this problem with partially input convex neural networks.
Weaknesses: - It would be interesting if the authors could provide experiments with both their architectures with respect to computational cost, highlighting training time, etc.
Technical Quality: 3
Clarity: 3
Questions for Authors: NA.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Lack of experiments with respect to computational cost, and perhaps an experiment with a larger dataset than the ones currently used in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for spending your time on carefully reviewing our paper. We really appreciate all the advice you provide to improve our paper. Please find below answers to your questions. We also summarized the comparison of the time and space complexity of our models in an independent thread.
> It would be interesting if the authors could provide experiments with both their architectures with respect to computational cost, highlighting training time, etc.
Thank you very much for this advice. Since a few reviewers were also interested in this topic, both theoretical and experimental discussions are summarized in the global rebuttal. We would appreciate it if you could verify this. This will be added in the updated version of our paper. Please note that we did not directly compare training times in seconds but instead compared the number of floating-point operations (FLOPs) for each architecture, as training time is heavily dependent on factors such as the machine, code, and libraries used.
> maybe an experiment with a larger dataset than the ones currently used in the paper.
We conducted an additional experiment using the convolutional version of our model (BLNNconv) on the problem of uncertainty estimation (as in Subsection 4.2) with the CIFAR-10 vs. SVHN dataset to illustrate the scalability of our method. For this problem, we implemented the model so that we first process the data through BLNNconv and then transfer it to the DUQ. The result compared to DUQ is as below. Our model is not only scalable to large-scale networks but also improves out-of-distribution detection performance (the AUROC of SVHN). Using BLNNconv instead of the fully connected BLNN also improved the computation time (e.g., 2.5 times faster for the first 5 iterations).
| | Accuracy | Loss | AUROC SVHN |
|-------|-----------|----------|-----------|
| DUQ | 0.929 | 0.04 | 0.83|
| BLNNconv | 0.930 | 0.04 | **0.89** |
We hope we addressed all your questions and concerns. We will be glad to provide further explanation and clarification if necessary.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I believe this paper to provide a good contribution and maintain my score.
---
Rebuttal 2:
Comment: Thank you very much for considering our rebuttal! We greatly appreciate your positive position on the contributions of our paper. | Summary: This paper proposes a novel neural network architecture called BLNN (Bi-Lipschitz Neural Network) that allows direct control and parameterization of the overall bi-Lipschitzness of the network. The main contributions include: i) a framework that allows tight control of Lipschitz and inverse Lipschitz constants of networks via using convex neural networks and the Legendre-Fenchel transformation, ii) comprehensive theoretical analysis, iii) empirical evaluation showing the nice performance of BLNN on tasks like function fitting, out-of-distribution detection, and monotone regression.
Strengths: **Originality:**
The paper presents a novel approach to constructing bi-Lipschitz neural networks that is distinctly different from existing methods. The use of convex neural networks and Legendre-Fenchel transformation to directly parameterize overall bi-Lipschitzness is quite novel. The extension (e.g. partially bi-Lipschitz networks, etc) is also new.
**Quality:**
The quality of the paper is good. The authors provide detailed proofs and analyses for their key claims, including the bi-Lipschitz properties of their construction and the expressive power of the resulting networks. The experiments cover various scenarios, from simple function fitting to uncertainty estimation and monotone regression. The results are quite competitive.
**Clarity:**
The paper is generally well-structured and clearly written. However, given the technical nature and the length of the paper, understanding the paper fully is still a tough task.
**Significance:**
The paper's contributions are significant in its solid theoretical developments. The significance is further underscored by the improved performance on tasks like out-of-distribution detection and monotone function learning. In conclusion, this paper presents a novel approach to an important problem in deep learning theory and practice.
Weaknesses: 1. Computational Complexity: A detailed analysis of time and space complexity compared to traditional networks can be helpful.
2. Scalability and Practical Implications: There's insufficient exploration of how the method scales to very large networks or complex datasets (e.g. TinyImageNet).
3. Hyperparameter Sensitivity: More discussions on this issue will be beneficial.
4. The paper could be more explicit about scenarios where the theoretical guarantees might not hold, and could explore potential extensions to other network architectures beyond feedforward networks.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. How does the proposed method perform on larger, more complex datasets like TinyImageNet or ImageNet?
2. Can the authors clarify the computational complexity of their approach?
3. Can the authors provide a more comprehensive study on hyperparameter sensitivity?
4. Can the authors comment on other network structures (e.g. implicit models, DEQs, etc)?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: It seems that an improved discussion on potential negative societal impacts or broader ethical considerations is still missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for spending your time on carefully reviewing our paper. We really appreciate all the advice you provide to improve our paper. Please find below answers to your questions. We also summarized the comparison of the time and space complexity of our models in an independent thread.
(W=Weakness, Q=Question)
>W1+Q2
We apologize for omitting this important point. Please find in the global rebuttal both theoretical and experimental discussions on this topic. This will be added in the updated paper.
> W2+Q1
The main goal of our work was to create a new paradigm for bi-Lipschitz control and to analyse its behavior. Nevertheless, we understand this is an important direction of exploration. We conducted an additional experiment using the convolutional version of BLNN (BLNNconv) on the problem of uncertainty estimation (as in Subsection 4.2) with the CIFAR-10 vs. SVHN dataset to illustrate the scalability of our method. We implemented the model so that we first process the data through BLNNconv and then transfer it to the DUQ. The result compared to DUQ is as below. Our model is not only scalable to large-scale networks but also improves out-of-distribution detection performance (the AUROC of SVHN). Using BLNNconv instead of the fully connected BLNN also improved the computation time (e.g., 2.5 times faster for the first 5 iterations). Moreover, the amortization method suggested by Reviewer u3yR may be a fundamental solution to the computational complexity of our model (which can already scale at least to CIFAR-10) but is out of scope for the current work.
| | Accuracy | Loss | AUROC SVHN |
|-|-|-|-|
| DUQ | 0.929 | 0.04 | 0.83|
| BLNNconv | 0.930 | 0.04 | **0.89** |
> W3+Q3
Thank you very much for this advice. Here, we would like to focus on the inverse Lipschitz and Lipschitz hyperparameters ($\alpha$ and $\beta$). Note that other hyperparameters, such as batch size and depth, follow the basic properties of the core neural network (the ICNN) and are therefore not directly related to the goal of this work (see e.g., Schalbetter, Adrian. *Input convex neural networks for energy optimization in an occupied apartment*. MS thesis. ETH Zurich, 2020.).
We would like to start the discussion by recalling that there is already an analysis with different $\alpha$ and $\beta$ for uncertainty estimation in Appendix Table 5, p. 50. We discuss in detail how the values of $\alpha$ and $\beta$ influence the performance of the model. The influence is rather intuitive, and we verify this with additional experiments on a downscaled FashionMNIST for further clarity. The observations are as follows:
1. Changes in performance due to hyperparameter changes are continuous (no hyper-sensitivity).
| $\alpha$ | $\beta$ | Accuracy | Loss| AUROC MNIST| AUROC NotMNIST|
|-|-|-|-|-|-|
| 0.2 | 0.99 | 0.8600|0.08|0.84|0.95|
| 0.2|1.0| 0.8630|0.08|0.84|0.95|
2. An increase of the domain of search leads to better performance due to higher expressivity.
| $\alpha$ | $\beta$ | Accuracy | Loss|
|-|-|-|-|
|0.0|0.1| 0.5680|0.24|
|0.0|0.2 | 0.7485|0.14|
|0.0|0.3 |0.7825|0.11|
3. But too loose smoothing leads to worse performance.
| $\alpha$ | $\beta$ | Accuracy | Loss| AUROC MNIST| AUROC NotMNIST|
|-|-|-|-|-|-|
|1.0|4.0| 0.1070|0.10|0.50|0.50|
|1.0|1.0| 0.8455|0.11|0.81|0.98|
4. Too low sensitivity leads to worse out-of-distribution detection.
| $\alpha$ | $\beta$ | AUROC NotMNIST|
|-|-|-|
|0.0|0.2 |0.42|
|0.3|0.2 |0.83|
Moreover, in Appendix G.3 we also investigated the influence of Lipschitz constant on convergence speed in many different Lipschitz architectures (Figure 18 p.50). While increasing the Lipschitz constant (i.e., decreasing the smoothness of the network) leads to slower convergence, this decrease in speed is the smallest for ours among the compared models. In that sense, our method is more stable with respect to these hyperparameters.
> W4+Q4
Concerning the first part of W4, our theoretical guarantees hold in most usual practical scenarios. For example, the setting of Theorem 3.5 agrees with our experiments.
As for the potential extensions to other network architectures, we would like to first recall that our work is primarily foundational, providing a novel paradigm for the creation of bi-Lipschitz architectures and addressing some non-negligible issues in the field. Therefore, this paper focuses on the fundamental design and properties of our model related to the control of bi-Lipschitzness and its theoretical analysis. We acknowledge that there are some applications outside the bi-Lipschitz control framework that we could not investigate. That is also why we have tried to provide an extensive discussion and analyses in the appendix and kept the formulation as general as possible to promote further extensions and other interpretations for future work.
Now, in the context of DEQs and implicit models, the LFT can indeed be reformulated as finding the solution $z$ of $z = z + x - \nabla F(z)$, which corresponds to a DEQ. In that sense, our model can be regarded as a *bi-Lipschitz* DEQ. We believe this novel interpretation is interesting for future work to increase the generality of our model.
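As an illustrative aside (our own sketch, not from the paper), the fixed-point equation $z = z + x - \nabla F(z)$ can be prototyped on a toy strongly convex quadratic, where the Legendre-Fenchel transform has a closed form to check against; the step size and iteration count below are arbitrary choices:

```python
import numpy as np

def lft(x, grad_F, F, step=0.1, iters=500):
    """Legendre-Fenchel transform F*(x) = max_z <x, z> - F(z), computed by
    the fixed-point / gradient-ascent iteration z <- z + step * (x - grad_F(z)),
    whose fixed point satisfies x = grad_F(z)."""
    z = np.zeros_like(x)
    for _ in range(iters):
        z = z + step * (x - grad_F(z))
    return x @ z - F(z), z

# Toy strongly convex F(z) = 0.5 * a * ||z||^2, whose conjugate is ||x||^2 / (2a)
a = 2.0
F = lambda z: 0.5 * a * (z @ z)
grad_F = lambda z: a * z

x = np.array([1.0, -3.0])
val, z_star = lft(x, grad_F, F)
# Closed form: F*(x) = ||x||^2 / (2a) = 10 / 4 = 2.5, maximizer z* = x / a
```

For a strongly convex $F$ the iteration is a contraction for a small enough step, which is why the fixed point (and hence the conjugate value) is recovered.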
Furthermore, as mentioned in the first answer, we can extend our BLNN to convolutional layers as well. We just have to change the core neural network of the BLNN, which is the ICNN, to the convolutional ICNN proposed in the original work of Amos et al. (2017).
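For readers unfamiliar with the ICNN of Amos et al. (2017), here is a minimal, randomly weighted sketch (our own illustration; layer sizes are arbitrary) of the input-convexity construction: non-negative hidden-to-hidden weights plus convex non-decreasing activations make the output convex in the input:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(v):
    return np.maximum(v, 0.0)

class TinyICNN:
    """Minimal input-convex network (after Amos et al., 2017):
    z1 = relu(W0 x + b0)
    z2 = relu(Wz z1 + Wx x + b1),  with Wz >= 0 elementwise
    f  = wz . z2 + wx . x,         with wz >= 0 elementwise
    Non-negative weights on convex pre-activations, combined with convex
    non-decreasing activations, keep f convex in x."""
    def __init__(self, d, h):
        self.W0 = rng.normal(size=(h, d)); self.b0 = rng.normal(size=h)
        self.Wz = np.abs(rng.normal(size=(h, h)))   # constrained >= 0
        self.Wx = rng.normal(size=(h, d)); self.b1 = rng.normal(size=h)
        self.wz = np.abs(rng.normal(size=h))        # constrained >= 0
        self.wx = rng.normal(size=d)

    def __call__(self, x):
        z1 = relu(self.W0 @ x + self.b0)
        z2 = relu(self.Wz @ z1 + self.Wx @ x + self.b1)
        return self.wz @ z2 + self.wx @ x

f = TinyICNN(d=3, h=8)
# Numerical midpoint-convexity check on random pairs of inputs
ok = all(
    f(0.5 * (x + y)) <= 0.5 * (f(x) + f(y)) + 1e-9
    for x, y in (rng.normal(size=(2, 3)) for _ in range(100))
)
```

A convolutional variant replaces the dense maps by convolutions while keeping the same non-negativity constraints.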
> It seems that […] discussion on potential negative societal impacts […] is still missing.
We will ensure that a more detailed discussion is included in the revised version.
We hope we addressed all your questions and concerns. We will be glad to provide further explanation and clarification if necessary.
Moreover, clarity is one of our major concerns. If you have any recommendations or some parts of the paper that were difficult to understand, we will be glad to improve them for the next version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I maintain my positive evaluation of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for considering our rebuttal and for your positive position on the acceptance of our paper! | Summary: This paper proposes to control the bi-Lipschitzness of a neural-network by parameterizing the output by the Legendre-Fenchel-Dual. This involves parameterizing a strongly convex function and computing the minimum of that function in the forward pass. Several benchmarks are studied in simple regression tasks and uncertainty quantification.
Strengths: -The framework is interesting because it parameterizes bi-Lipschitz networks in a way that is not layer-wise and instead takes advantage of the Legendre-Fenchel transform (LFT) / convex conjugate of parameterized strongly-convex functions (ICNN), which only modifies the output of the network.
-Computing the LFT of a given function can be costly, however the paper offers a non-asymptotic bound for the Lipschitz constant and tractable gradient.
-The experimental results show a considerable improvement in tightness and regularity over other Lipschitz controlled networks like spectral normalization, AOL and Sandwich layers on small regression tasks. In particular BiLipNet behaves a lot better when the Lipschitz constant is overestimated in existing parameterizations.
Weaknesses: -Computing the LFT seems to be quite expensive, hence why the experiments are only on simple 2d problems and fashion-MNIST. For this reason I'm doubtful that it will be used for any large-scale network training pipelines where tight Lipschitz control and estimation is challenging.
-The provable approximation class is limited to alpha-strongly monotone functions and is the derivative of some function almost everywhere. Lipschitz layers like AOL, SLL and Sandwich layer are all solutions to the LipSDP framework which only requires the activations themselves to be alpha-strongly monotone for alpha >= 0 (Fazlyab et al., 2019).
Technical Quality: 3
Clarity: 4
Questions for Authors: -Is it also necessary that BLNN is a strongly monotone function? It seems that many of the regression experiments involve monotone target functions (figure 2 and 3), but I'm not sure if that is because BLNN is not capable of representing monotone functions or just not a great representer due to your approximation theorem. If it can represent non-monotone functions, it would be interesting to see a simple regression comparison to SLL, AOL, etc. Answering this question will greatly help my evaluation.
-The Lipschitz parameterization of SLL, AOL, and Sandwich layers commonly uses compositions of 1-Lipschitz layers for the application of certified robustness. How would the BLNN parameterization compare to existing 1-Lipschitz layer networks in the certified robustness setting? I’d imagine BLNN might be much more expressive than composed 1-Lipschitz layers which could have a big impact.
-Have you considered amortizing the LFT computation as done in this paper? https://arxiv.org/abs/2210.12153
-I'm curious if there is any possibility of extending BLNN to convolutional layers? These settings are interesting for larger image classification problems like CIFAR10 and Imagenet.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitation are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to carefully review our paper. We really appreciate all the questions highlighting the significant potential and future directions of our work. Please find below answers to your questions and some clarifications on other important points of your review.
(W=Weakness, Q=Question)
>Q4
Yes, we can extend our BLNN to convolutional layers. We just have to change the core neural network of the BLNN, which is the ICNN, to the convolutional ICNN proposed in the original work of Amos et al. (2017). We will definitely clarify this in the revised version. In the next answer, we provide experimental results with this architecture.
> W1
We conducted an additional experiment using the convolutional version of BLNN (BLNNconv) on the problem of uncertainty estimation (as in Subsection 4.2) with the CIFAR-10 vs. SVHN dataset to illustrate the scalability of our method. For this problem, we implemented the model so that we first process the data through BLNNconv and then transfer it to the DUQ. The results compared to DUQ are below. Our model is not only scalable to large-scale networks but also improves out-of-distribution detection performance (the AUROC on SVHN). Using BLNNconv instead of the fully connected BLNN also improved the computation time (e.g., 2.5 times faster for the first 5 iterations). Please also refer to the global rebuttal for this topic.
| | Accuracy | Loss | AUROC SVHN |
|-|-|-|-|
| DUQ | 0.929 | 0.04 | 0.83|
| BLNNconv | 0.930 | 0.04 | **0.89** |
> W2
This is indeed a limitation of our algorithm but also an improvement compared to prior works in some aspects. From the theoretical perspective, several (bi-)Lipschitz layers do not provide a provable approximation class, and, from the practical perspective, some bi-Lipschitz models are also restricted to strongly monotone functions (e.g., BiLipNet). We would like to add that most existing layer-wise approaches have difficulty in creating partially bi-Lipschitz networks (e.g., controlling the spectral norm of linear matrices cannot handle this case) but ours can easily realize this with the PBLNN. Finally, while out of the scope of this work, there are some straightforward ways to increase the expressive power of BLNN. Please see the next answer.
> Q1
The following answer is also related to W1 and W2, and is therefore a bit longer.
Yes, the vanilla BLNN is necessarily a strongly monotone function. However, our model still has a higher expressive power and tighter bounds than other (bi-)Lipschitz units (= the most basic architecture guaranteed to be bi-Lipschitz without using a composition of several bi-Lipschitz entities). Indeed, most existing models can only create simple (bi-)Lipschitz units with low expressive power (e.g., only linear) and have to compose those units to achieve higher expressive power, but this leads to looser bounds. In comparison, our model can **with only one unit** achieve high expressive power and keep tight bounds. This is an improvement realized thanks to a parameterization that is not layer-wise. The experiments of Figures 2 and 3 were designed to illustrate this fact.
Now, we can of course, like any previous (bi-)Lipschitz method, employ our BLNN unit as a component of a deeper network. For example, we can successively compose several BLNNs to improve its expressive power. With this approach, it **can indeed represent non-monotone functions**. We tested fitting the sign function, and the composition of 5 BLNNs achieves an error around 0.
Some other extensions to increase the flexibility were already mentioned such as the PBLNN, and the BLNN used as a pre-processing stage for the next networks as we did in the answer to W1. Future research should work on how to use this novel unit in various fields inside and outside bi-Lipschitz problems.
Nevertheless, this direction is slightly out of the scope of this work. Indeed, our work is primarily foundational, providing a novel paradigm for the creation of bi-Lipschitz architectures and addressing some non-negligible issues in the field. Therefore, this paper focuses on the fundamental design and properties of our model related to the control of bi-Lipschitzness and its theoretical analysis. One of the main contributions of this work thus resides in the construction of such a general framework for improved bi-Lipschitz control equipped with theoretical guarantees. While we ran several experiments related to bi-Lipschitz control to illustrate the benefits of our model, we acknowledge that there are some applications outside the bi-Lipschitz control framework that we could not investigate. This is also why we have tried to provide an extensive discussion and analyses in the appendix and kept the formulation as general as possible to promote further extensions and encourage other interpretations for future work.
> Q2
Thank you for pointing out this interesting possibility. We also believe that BLNN might perform better in this setting by composing many BLNNs. However, this is out of the scope for this paper as explained in the above answer. This is still one of the first avenues we will investigate for future works.
>Q3
Amortizing the LFT computation as the cited paper may be a fundamental solution to the computational complexity of our model (which can already at least scale to CIFAR10). However, this approach is currently out of the scope of this work as the main goal of our paper is to provide a bi-Lipschitz architecture with guaranteed bi-Lipschitz control and build its foundation starting with well-known basic algorithms such as the gradient descent. This amortization technique is more involved as we have to carefully analyse how such amortization influences the provable bi-Lipschitz bounds of the architecture, which is not trivial.
We hope we addressed all your questions and concerns. We will be glad to provide further explanation and clarification if necessary.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying the representation power of BLNN and its application to convolutional layers. I agree that the parameterization is quite convenient when considering compositions of BLNN blocks over other Lipschitz-constrained layers. I think the framework is generally quite interesting so I will raise my score.
Regarding my question about amortization of LFT: I would certainly not expect you to produce these results during the rebuttal phase, but I thought it could be a helpful reference for alleviating the computational costs of BLNN.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for considering our rebuttal and for your positive evaluation of our work! We greatly appreciate your insight regarding the amortization of LFT, as we also believe this could be a solution to improve the scalability of our method. We will mention this in the main paper and further analyze it in future work. | null | null | Rebuttal 1:
Rebuttal: # Time and Space Complexity of the BLNN and its Variants
This global rebuttal discusses the time and space complexity of the BLNN and its variants. Figures can be found in the attached PDF. This discussion will be added to the updated version of the paper.
## Theoretical Discussion
Concerning the time complexity, a forward pass of BLNN with $T$ iterations for the LFT has a time complexity of $O(T)$, where we suppose that the computation of a neural network is $O(1)$. Based on Theorem 3.7, the backward pass requires $O(d_x^3+d_\theta)$, as we have to solve one linear system and compute one Hessian-vector product consecutively; $d_x$ is the dimension of the input and $d_\theta$ is the number of parameters. Concerning the space complexity, using Theorem 3.7, the storage requirement is $O(d_x+d_\theta)$, independent of $T$. If we are in an over-parameterized regime with $d_\theta \gg d_x^3$, then we can neglect the dependence on $d_x$. With the brute-force method (i.e., without Theorem 3.7), the backward pass requires at least $\Omega(T(d_x+d_\theta))$ and the storage requirement is at least proportional to $T$. Therefore, if we are in an over-parameterized regime with large iteration steps $T$, the approach of Theorem 3.7 becomes more scalable in both time and memory.
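To make the memory argument concrete, here is a toy sketch (our own illustration with a quadratic $F$ and arbitrary step sizes, not the paper's implementation) of the key idea behind an implicit-function-theorem backward pass: at the fixed point $x = \nabla F(z^*)$, gradients can be obtained from a single $d_x \times d_x$ linear solve instead of backpropagating through the stored $T$ iterates:

```python
import numpy as np

# Toy strongly convex F(z) = 0.5 z^T A z; its gradient is A z and Hessian is A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x = np.array([1.0, -1.0])

# Forward: T gradient-ascent steps on <x, z> - F(z); only the final z* is kept,
# so memory is O(d_x) rather than O(T d_x).
z = np.zeros(2)
for _ in range(2000):
    z = z + 0.1 * (x - A @ z)
z_star = z                                  # approximates A^{-1} x

# Backward for a downstream loss g(z*) = 0.5 ||z*||^2:
# the implicit function theorem at x = grad_F(z*) gives dz*/dx = (Hess F)^{-1},
# so dg/dx = (Hess F)^{-1} dg/dz* -- one linear solve, independent of T.
dg_dz = z_star
dg_dx = np.linalg.solve(A, dg_dz)

# Reference: closed form dg/dx = A^{-1} A^{-1} x
expected = np.linalg.solve(A, np.linalg.solve(A, x))
```

Hessian-vector products (instead of the explicit Hessian used here) keep the same idea scalable when $d_x$ is large.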
## Experiments
We also computed with experiments the time and space complexity for one iteration of BLNN and its variants: BLNN (with brute force backpropagation), BLNN with Theorem 3.7, PBLNN with only one variable constrained to be bi-Lipschitz, and a traditional feed forward network as a control. Respective parameter sizes are 1.42M, 1.42M, 1.11M and 1.38M. The input size was 3x32x32, simulating a CIFAR dataset, and the batch size was varied in the set {1, 5, 25, 75, 100}. Results are shown in the attached PDF.
As we can observe in Figures 1 and 2 of the PDF, while both our BLNN and BLNN with Theorem 3.7 present higher time and storage complexity than a traditional feed-forward neural network, Theorem 3.7 greatly contributes to reducing both the computational and space requirements (improvements of order $10$ and $10^2$, respectively). Comparing our model with Theorem 3.7 to a traditional feed-forward neural network, we can conclude that their difference in complexity is only a constant factor of order 10. This explains the scalability of our model to large datasets. Finally, PBLNN provides evidence that we can considerably decrease the complexity of the model by limiting the number of variables on which we impose bi-Lipschitzness.
Pdf: /pdf/30de7094ded5731f2f1fc274837d7805c587f8fb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Solving Zero-Sum Markov Games with Continuous State via Spectral Dynamic Embedding | Accept (poster) | Summary: This paper studies two-player zero-sum Markov games (2p0s-MGs) with large-scale or continuous state spaces. These problems have a large state cardinality, so function approximation methods are needed. The paper considers a spectral dynamic embedding method and proposes SDEPO. This method utilizes the transition dynamics in the construction of the state-space value function. SDEPO converges at a rate of order $1/\epsilon$, which matches the optimal rate in single-agent RL.
Theorems are provided for the last iterate convergence of the SDEPO algorithm. The effectiveness of the algorithm has been verified in games against baseline methods.
Generally, the paper is well structured, but Sections 3-4 should be better explained, while Sections 5 and 6, which focus on the "practical algorithms", stretch quite far from the analytical results in the previous sections.
Strengths: The function approximation approach for Markov games is a necessity for problems with large/infinite state cardinality. It is indeed true that the dynamics of the problem were not utilized in previous methods. This seems to be the first work addressing this.
This work adapted the spectral dynamic embedding for stochastic nonlinear control problems and proposed SDEPO, the motivation is clear and well stated.
Weaknesses: The only evaluation for the proposed algorithm is a rate of success in playing games with baseline algorithms. To the reviewer, this seems to be very limited. For a submission with strong theoretical focus, current result fails to validate the convergence properties of the proposed algorithm.
The sections for main results are not very well-written and is a bit difficult to read, more explanation would be appreciated. Although this could be due to the page limit.
Assumption 5 is in fact quite strong, a brief discussion on the impact and reasoning should be provided.
Sections 5 and 6 seem a bit rushed and are intended to bring in the neural networks; the prior sections discussed the setting with tabular actions, whereas in these sections the action space is treated as continuous and more algorithms are added, with no analytical results. I suggest the authors focus on the existing setting with better presentation, explanation, and more experiments.
Another problem this paper did not address is which existing algorithms involve dynamics and function approximation in the single-agent setting. The single-agent RL with function approximation literature should be addressed in general.
Technical Quality: 2
Clarity: 2
Questions for Authors: What is the effect of truncation w.r.t. the later regularity condition assumptions? Does a more limiting truncation negatively impact these assumptions on the problem?
What is the reason for the consideration of one-sided $\epsilon$−optimal Nash equilibrium? The author stated that many existing works also consider this, but an explanation would be appreciated.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors did not fully address the limitations of this paper. The authors mentioned that the algorithm is non-independent and relies on some assumptions. There are some other limitations of this work, which has been raised in the previous sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your positive feedback on our manuscript. We give point-by-point responses to the weaknesses and questions as follows.
[Empirical evaluation:]
We add a simulated experiment to validate the effectiveness of SDEPO directly (please see the Global Response for further details). Our original empirical study mainly focuses on SDEPO with a neural network policy, and shows its ability to deal with complex tasks with a continuous action space.
[Main result:]
Due to the space limit, we only presented the main results in Section 4 and provided detailed explanations in the appendix. Specifically, we thoroughly discussed the sources of errors in policy evaluation and compared the two feature generation methods in Appendix D, and analyzed the results of policy optimization in Appendix E. We will add more explanations in the main text in our revision.
[Assumption 5:]
This assumption is commonly used in the literature [1,2] and can be achieved with an $\epsilon$-greedy policy. Actually, we focus on finding stochastic policies converging to the Nash equilibrium (NE) of Markov games (MGs). While a deterministic-policy NE may not exist for Markov games, a stochastic-policy NE always exists [3]. A classic example is the rock-paper-scissors game, where the only NE is achieved by stochastic policies mixing between the three actions equally. Hence, we consider stochastic policies here and assume $\pi(a|s)\geq \underline c$, which can be ensured with an $\epsilon$-greedy policy by setting $\underline{c}$ to $\epsilon$.
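As an illustrative sketch (our own, not from the paper), the $\epsilon$-greedy smoothing that keeps action probabilities bounded away from zero is just a convex mixture with the uniform policy; with this convention, every action keeps probability at least $\epsilon/|A|$:

```python
import numpy as np

def smooth_policy(probs, eps):
    """Mix a stochastic policy with the uniform policy over |A| actions.
    The smoothed policy satisfies pi(a|s) >= eps / |A| for every action,
    so a lower bound of the kind required by the assumption holds."""
    probs = np.asarray(probs, dtype=float)
    return (1.0 - eps) * probs + eps / probs.size

p = smooth_policy([0.9, 0.1, 0.0], eps=0.1)
```

The mixture still sums to one, and actions with zero original mass receive exactly $\epsilon/|A|$.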
[Section 5 and 6:]
In Section 5, we extend SDEPO with a neural network policy. This extension is designed to accommodate continuous action spaces for complex tasks, which the original SDEPO could not handle. To appeal to a broader audience in the computer science literature, we believe it is valuable to derive practical implementations from a theoretically-guaranteed basis. Additionally, we conduct a simulated experiment to validate the effectiveness of SDEPO directly (please check the Global Response).
[Related work in the single agent setting:]
Function approximation in single-agent RL has been extensively studied in recent years to achieve a better sample complexity that depends on the complexity of the function approximators rather than the size of the state/action space. One line of work studies RL with linear function approximation [4,5]. Typically, these methods assume the optimal value function can be well approximated by linear functions, and achieve polynomial sample efficiency guarantees related to the feature dimension under certain regularity conditions. Another line of work studies MDPs with general nonlinear function approximation [6,7]; [6] and [7] present algorithms with PAC guarantees for problems with low Bellman rank and low BE dimension, respectively. We note that MGs are inherently more complex than MDPs due to their min-max nature, and it is generally difficult to directly extend these results to the dual-player dynamic setting of MGs.
[The effect of truncation w.r.t. the later regularity conditions:]
In our paper, we use the Random Features/Nyström Features generation methods to provide efficient truncated feature embeddings to represent the transition kernel $P$ and the reward function $r$. Actually, we prove, in Theorem 4, that the evaluation error of the Q-function with $m$ truncated Random/Nyström Features is $O(m^{-1/2}+m^3/{\Upsilon^2_1\Upsilon_2n^{1/2}})$ / $O(m^{-1}+m^3/{\Upsilon^2_1 \Upsilon_2 n^{1/2}})$, where $n$ is the per-iteration sample size and $\Upsilon_1$ and $\Upsilon_2$ denote regularity constants relating to the eigen-spectrum of the stationary distribution (in Assumption 4). Hence, to achieve a small policy evaluation error, Theorem 4 indicates that the less regular the eigen-spectrum of the stationary distribution is (with smaller $\Upsilon_1$ and $\Upsilon_2$), the more features and samples are needed. Typically, larger truncations provide a better approximation of the original infinite-dimensional function space, but at the cost of increased sample complexity. Thus, it is essential to strike a balance between approximation error and statistical error when selecting the optimal truncation size. Larger $\Upsilon_1$ and $\Upsilon_2$ allow us to select a smaller truncation size and thereby enhance the precision of the estimated Q-function and the convergence to the NE.
[The reason for the consideration of one-sided NE:]
A one-sided NE is a common objective in the MG literature [1,8], as it can be directly extended to establish a two-sided NE. Specifically, we can apply the algorithm with the roles switched, and add the two one-sided bounds to achieve a two-sided NE [8].
[1] Ahmet Alacaoglu, et al. A natural actor-critic framework for zero-sum markov games. In International Conference on Machine Learning, pages 307–366. PMLR, 2022
[2] Lan, G. Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes. arXiv:2102.00135, 2021
[3] John Nash. Non-cooperative games. Annals of mathematics, pp. 286–295, 1951.
[4] Lin Yang and Mengdi Wang. Sample-optimal parametric q-learning using linearly additive features. In International Conference on Machine Learning, pages 6995–7004. PMLR, 2019.
[5] Chi Jin, et al. Provably efficient reinforcement learning with linear function approximation. In Conference on Learning Theory, pages 2137–2143. PMLR, 2020.
[6] Nan Jiang, et al. Contextual decision processes with low bellman rank are pac-learnable. In International Conference on Machine Learning, pages 1704–1713. PMLR, 2017.
[7] Chi Jin, et al. Bellman eluder dimension: New rich classes of rl problems, and sample-efficient algorithms. Advances in Neural Information Processing Systems, 34, 2021.
[8] Yulai Zhao, et al. Provably efficient policy optimization for two-player zero-sum markov games. In International Conference on Artificial Intelligence and Statistics, pages 2736–2761. PMLR, 2022
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanations provided by the authors and the newly updated numerical results. The authors have addressed my questions and I have decided to increase my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for recognizing our efforts in the rebuttal. We greatly appreciate your decision to increase the score. We will incorporate the discussion of the theoretical analyses and the newly updated numerical results in our revision. | Summary: This paper proposes a new algorithm named Spectral Dynamic Embedding Policy Optimization (SDEPO) to solve zero-sum Markov games with continuous state and finite actions. The convergence analysis indicates that the proposed method achieves the best-known sample complexity, as in the finite-state case; this paper is the first theoretical result handling continuous state spaces with known dynamics and infinite horizon.
Strengths: This paper is the first result for solving the NE of infinite-horizon two-player zero-sum Markov games with a continuous state space when the dynamics are known. Moreover, this paper presents a sufficient introduction to the technical background and preliminaries. All assumptions are clearly listed. Lastly, the theoretical results are verified using empirical experiments.
Weaknesses: 1. The Assumption 1 is not reasonable. It says, whatever the state $s$ and the action $(a,b)$ are, the agent can move to any state $s'$ with a positive probability. Please correct me if I am wrong.
2. I am confused about what is new in the Spectral Dynamic Embedding method. It seems that both Bochner and Mercer theorem are well-known. This paper simply applies them to represent the transition probability and the reward using some kernels. Then everything is the same as traditional method in RL.
3. A mild comment on Assumption 3: Since the optimal policy might be deterministic, it means that $\pi(a|s)$ is likely to be zero for some $a$. During the training, the policy $\pi_k$ will tend to the optimal policy; the mass at non-optimal action will also approach to $0$. It means if $\underline{c}$ is larger than $\epsilon$, then $\pi_k$ will never converge to the optimal action in the sense of $L_\infty$ norm. From my understanding, the author needs to set $\underline{c}$ to be $\epsilon$ and it won't affect the complexity.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1: Can the author justify the use of Assumption 1? It seems to be unrealistic in RL.
Q2: This paper seems to be a simple combination of linear MDP + policy gradient. What is the novel part of this work? I feel hard to consider representing $\mathbb{P}$ and $r$ using kernel methods as a new thing.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the effort you have dedicated to our paper, and we give point-by-point responses to the weaknesses and questions as follows.
Response to W1&Q1:
In Assumption 1, we assume that the transition function satisfies $s_{t+1}=f(s_t,a_t,b_t)+\epsilon_t$, which means that the next state is determined by a certain deterministic function $f$ with an additional random noise $\epsilon_t$. Actually, such transition functions exist in many RL tasks, such as robot control [3] and real-time bidding games [4], where the transition is largely determined by the actions, up to unpredictable system factors (internal system chaos and external disturbances). Typically, the random noise is assumed to be Gaussian, and Assumption 1 is common in the literature [1,2,3,4].
Even for Gaussian noise, most of the transition probability mass concentrates in a limited area according to the 3-$\sigma$ rule, and $s_{t+1}$ would most likely be near $f(s_t,a_t,b_t)$, even though "the agent can move to any state with a positive probability". Moreover, our result can be generalized to compactly supported noises, e.g., the truncated Gaussian noise. We will add more discussion about this in our revision.
Response to W2&Q2:
The reviewer raises doubts about the novelty of the Spectral Dynamic Embedding method and, by extension, our proposed SDEPO method. Here, we note that our goal is to propose a provably efficient method for two-player zero-sum Markov games (MGs) with a continuous state space. The process of SDEPO seems straightforward, i.e., it first generates $m$ feature embeddings with Spectral Dynamic Embedding, and then applies natural-gradient policy improvements for each player based on the generated feature embeddings. Note that such a feature-generation + policy-optimization structure is quite common in the RL literature [5,6]. The difficulty lies in choosing proper feature embedding and policy optimization methods, and in proving that the two parts interact well to achieve convergence to the Nash equilibrium (NE).
Basically, a desirable feature embedding method should provide a good approximation to the underlying dynamics of MG with a modest $m$. That is exactly what our two specific feature generation methods achieve. The Random/Nystrom Features methods could provide efficient finite linear approximations to represent the transition kernel $P$, as finite-dimensional truncations of the Bochner decomposition and Mercer decomposition, respectively.
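As a hedged illustration of the Bochner-based construction (a generic random Fourier features sketch of our own, not the paper's exact embedding), a Gaussian kernel can be approximated by $m$ finite-dimensional random features whose inner products converge to the kernel as $m$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, m, sigma=1.0):
    """Bochner-based random features for the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 sigma^2)):
    phi(x) = sqrt(2/m) * cos(W x + b), with W ~ N(0, I / sigma^2) and
    b ~ U[0, 2 pi], so that phi(x)^T phi(y) approximates k(x, y)."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(m, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return np.sqrt(2.0 / m) * np.cos(X @ W.T + b)

X = rng.normal(size=(50, 3))
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
Phi = random_fourier_features(X, m=20000)
K_approx = Phi @ Phi.T
err = np.abs(K_exact - K_approx).max()   # shrinks as m grows
```

The Nyström alternative instead builds features from kernel evaluations at sampled landmark points, which typically yields faster spectral decay for smooth kernels.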
In the policy optimization procedure, we choose the natural policy gradient method to adjust the policy of each player iteratively in an alternating fashion. Specifically, each player first conducts least squares policy evaluation to estimate the state-action Q-function of its current policy based on the generated features, and then improves the policy via a natural policy gradient step based on the estimated Q-function.
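For a softmax policy over finite actions, the improvement step above reduces to a multiplicative-weights update per state, which can be sketched as follows; the arrays and step size are hypothetical, and the least-squares evaluation that would produce `Q` is omitted.

```python
import numpy as np

def npg_update(pi, Q, eta):
    """One natural policy gradient step for a softmax policy over finite
    actions: pi'(a|s) proportional to pi(a|s) * exp(eta * Q(s, a)),
    i.e. a multiplicative-weights update applied per state."""
    logits = np.log(pi) + eta * Q
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

pi = np.full((3, 4), 0.25)                # uniform policy, 3 states x 4 actions
Q = np.array([[1.0, 0.0, 0.0, 0.0]] * 3)  # stand-in for the estimated Q-function
pi = npg_update(pi, Q, eta=0.5)           # mass shifts toward action 0
```

Each player would run this update on its own Q-estimate in the alternating scheme described above.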
The critical part lies in the theoretical analyses. First, in Theorem 1, we prove that the policy evaluation error of each player's Q-function is $O(m^{-1/2}+m^3/n^{1/2})$/$O(m^{-1}+m^3/n^{1/2})$, where $n$ is the per-iteration sample size. With $m$ being $O(n^{1/7})$/$O(n^{1/8})$ for Random/Nyström features, an optimal evaluation error can be achieved. More importantly, we bound the error of the one-sided Nash equilibrium explicitly in terms of the policy evaluation error in Proposition 5, and establish the overall iteration complexity to achieve an $\epsilon$-optimal NE.
Though Spectral Dynamic Embedding and the natural policy gradient strategy may each seem standard, the merit of our method lies in identifying effective embeddings that enhance policy optimization and in analyzing the impact of the evaluation error on the convergence to the NE.
Besides, we note that we focus on MGs instead of MDPs; MGs are inherently more complex due to their min-max nature. Unlike methods for MDPs that leverage a single-player perspective, our approach must address the dual-player dynamics of MGs. Furthermore, our setting differs from linear MGs: unlike algorithms for linear MGs [7,8], where the underlying dynamics are assumed to be linear and known, we only assume that Assumption 1 holds and construct finite-dimensional embeddings.
Response to W3:
In this paper, we focus on finding policies converging to the NE of MGs. Note that a deterministic-policy NE may not exist for a general Markov game, while a stochastic-policy NE always exists for games with finite actions [9]. A classic example is the rock-paper-scissors game, where the only NE is achieved by stochastic policies mixing the three actions equally. This differs from single-agent MDPs, where a deterministic optimal policy always exists. Hence, we consider stochastic policies here and assume $\pi(a|s)\geq \underline{c}$. We can ensure this with an $\epsilon$-greedy policy by setting $\underline{c}$ to $\epsilon$.
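The rock-paper-scissors example can be checked numerically; the payoff matrix below is the standard one, and the variable names are ours.

```python
import numpy as np

# Standard rock-paper-scissors payoff matrix for the row player
# (order: rock, paper, scissors); 1 = win, -1 = loss, 0 = tie.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

# The uniform mixed strategy is the NE: no pure response against it
# earns more than the game value 0.
uniform = np.full(3, 1 / 3)
best_response_value = (A @ uniform).max()   # 0.0, so no profitable deviation

# Any deterministic policy is exploitable: if the row player always
# plays rock, the column player wins outright with paper.
exploit_value = (-A[0]).max()               # 1 (paper vs. rock)
```

This makes concrete why stochastic policies, with the $\pi(a|s)\geq\underline{c}$ lower bound, are the right policy class for MGs.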
[1] Ren, T., et al. A free lunch from the noise: Provable and practical exploration for representation learning. UAI 2022.
[2] Horia, M., et al. Certainty equivalence is efficient for linear quadratic control. NeurIPS 2019.
[3] Ignasi, C., et al. Model-based reinforcement learning via meta-policy optimization. CoRL 2018.
[4] Hambly, B., et al. Policy gradient methods find the Nash equilibrium in N-player general-sum linear-quadratic games. arXiv:2107.13090, 2021.
[5] Chengzhuo, N., et al. Representation learning for general-sum low-rank Markov games. arXiv:2210.16976.
[6] Masatoshi, U., et al. Representation learning for online and offline RL in low-rank MDPs. arXiv:2110.04652.
[7] Zixiang, C., et al. Almost optimal algorithms for two-player Markov games with linear function approximation. arXiv:2102.07404.
[8] Qiaomin, X., et al. Learning zero-sum simultaneous-move Markov games using function approximation and correlated equilibrium. COLT 2020.
[9] John Nash. Non-cooperative games. Annals of Mathematics, pp. 286–295, 1951.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer [sWhd],
Thank you again for your constructive feedback. As the end of the discussion period is close, we would be grateful to hear your feedback regarding our responses to the reviews. Here, we clarify the following parts of our work again.
[Novelty of this paper] The primary goal of our paper is to propose a provably efficient method for two-player zero-sum stochastic Markov games with a continuous state space. Achieving Nash equilibria of Markov games is inherently more complex than identifying optimal policies of MDPs, due to the min-max nature of Markov games and the non-stationary environment each player faces. Our method first generates $m$ feature embeddings through the Spectral Dynamic Embedding method, then evaluates the Q-functions based on the embeddings and applies natural gradient policy improvements for each player based on the evaluated Q-functions. Unlike linear Markov games, where the underlying representation is assumed to be known, we need to choose a proper feature embedding method and a suitable policy optimization strategy. We prove that the two parts interact well to achieve convergence of the policies to a Nash equilibrium with a desirable overall iteration complexity. Specifically, we theoretically analyze the policy evaluation error of each player's Q-function based on the embeddings and bound the error of the one-sided Nash equilibrium explicitly in terms of the policy evaluation error. Note that even though the kernel method and the natural gradient method may seem common in the RL literature, the merit of our method lies in identifying effective embeddings to enhance policy optimization and in analyzing the impact of the evaluation error on the convergence to the NE.
[The rationality of Assumption 1] Assumption 1 states that the next state is determined by a deterministic function plus additive Gaussian noise. Such an assumption is realistic (satisfied in many RL tasks) and is commonly made in both theory and practice in the RL literature. Your main concern with Assumption 1 is that, due to the Gaussian noise, the transition probability remains positive everywhere. Nevertheless, by the $3\sigma$ rule, most of the next state's probability mass concentrates within a limited area. Moreover, our results generalize to compactly supported transition randomness that admits a kernelized representation (such as the truncated Gaussian). We will add more discussion about this in our revision.
We sincerely hope to engage in further discussions with you. Thank you for your time and consideration! | Summary: The authors introduce an innovative approach to solving 2p0s-MGs with continuous state spaces, providing both theoretical guarantees and practical improvements over existing methods. The SDEPO algorithm and its variants offer efficient and scalable solutions for complex Markov games, potentially applicable to various domains in reinforcement learning.
Strengths: 1. This paper proposes a new Spectral Dynamic Embedding Policy Optimization algorithm that effectively addresses two-player zero-sum Markov games with continuous state space and finite action space.
2. Beyond finite action spaces, a practical variant of SDEPO is proposed to handle continuous action spaces, with empirical results showcasing its superior performance.
3. The complexity result of SDEPO matches the best-known results for policy gradient algorithms in the single-agent setting, proving its efficiency.
Weaknesses: 1. The spectral embedding methods can be computationally intensive in practice due to the complexity of handling spectral dynamic embeddings.
2. Why were these specific feature generation methods chosen? Is the proposed method sensitive to feature generation methods?
3. The experiments are somewhat limited, expanding the empirical section to include more complex and diverse scenarios would significantly strengthen the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness part.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your positive feedback on our manuscript. We give point-by-point responses to the weaknesses and questions as follows.
Response to Weakness 1:
The computational overhead of the spectral dynamic embeddings is indeed a good question to consider. In our SDEPO algorithm, the spectral embedding method is executed only once, at the beginning, to generate the $m$ embeddings. To achieve the SOTA $\tilde O(\frac{1}{(1-\gamma)^3 \epsilon})$ iteration complexity, $m$ should be $\tilde O(n^{1/7})$/$\tilde O(n^{1/8})$ for random features/Nyström features, and the overall computational cost of the random feature/Nyström feature generation procedure is $\tilde O(n^{1/7})$/$\tilde O(n^{3/8})$, where $n$ is the per-iteration sample size of the policy optimization procedure in SDEPO. Hence, the computational cost of the spectral embedding methods is mild compared with that of the policy optimization procedure. This phenomenon also appears in other representation learning methods for Markov games, e.g., [1].
Response to Weakness 2:
Actually, the feature generation method lies at the core of our proposed SDEPO: SDEPO first generates $m$ feature embeddings and then conducts policy optimization based on the generated embeddings. A desirable feature embedding method should provide a good approximation to the underlying dynamics of the Markov game with a modest $m$, and that is exactly what the two specific feature generation methods achieve here. The Random Features/Nyström Features generation methods provide efficient approximations of the transition kernel $P$ and the reward function $r$, based on two well-known decompositions, the Bochner decomposition and the Mercer decomposition, respectively. We prove that the evaluation error of each player's Q-function with Random Features/Nyström Features is $O(m^{-1/2}+m^3/n^{1/2})$/$O(m^{-1}+m^3/n^{1/2})$, where $n$ is the per-iteration sample size of the policy optimization procedure in SDEPO. With $m$ being $\tilde O(n^{1/7})$/$\tilde O(n^{1/8})$ for random features/Nyström features, we achieve an optimal bound on the evaluation error of each player's Q-function. Generally speaking, our SDEPO algorithm requires a good feature generation method to represent the environment, and future work will explore alternative feature generation methods such as divide-and-conquer approaches [2] and greedy basis selection techniques [3].
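As an illustration of the Nyström (Mercer) side of this construction, the sketch below builds finite-dimensional features for a Gaussian kernel from $m$ landmark points; the kernel choice, landmark selection, and all names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nystrom_features(X, landmarks, sigma=1.0):
    """Rank-m Nystrom features: a finite truncation of the Mercer
    decomposition built from m landmark points, so that
    phi(x) @ phi(y) ~= k(x, y)."""
    K_mm = gaussian_kernel(landmarks, landmarks, sigma)
    vals, vecs = np.linalg.eigh(K_mm)
    vals = np.clip(vals, 1e-10, None)     # guard tiny/negative eigenvalues
    return gaussian_kernel(X, landmarks, sigma) @ vecs / np.sqrt(vals)

X = rng.standard_normal((50, 2))
Z = X[:10]                                # 10 landmarks drawn from the data
Phi = nystrom_features(X, Z)              # Phi @ Phi.T approximates K(X, X)
```

On the landmark points themselves, the approximation is exact (up to numerical error), which reflects the faster $O(m^{-1})$ error decay of the Nyström features compared with the random features.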
Response to Weakness 3:
Thank you for your insightful suggestion to include more complex and diverse scenarios in our experimental section. The intention of this paper is to propose a provably efficient method for two-player zero-sum stochastic Markov games with an infinite state space, and our SDEPO achieves the best-known iteration complexity $\tilde O(\frac{1}{(1-\gamma)^3 \epsilon})$. We note that all the existing methods [4,5,6,7,8,9] for two-player zero-sum stochastic Markov games with an infinite state space focus mainly on the theoretical aspect and do not provide any experimental results. In fact, these methods all involve a computationally inefficient subroutine: [4,5,6,7] need to solve a difficult 'find_ne'/'find_cce' subroutine, and [8,9] have to tackle a comprehensive constrained optimization problem.
To explore the ability of our proposed method to handle real-world complex tasks, we derived a practical variant of SDEPO, named SDEPO-NN, that deals with both continuous action spaces and continuous state spaces, and directly validated the effectiveness of SDEPO-NN on the simple push task.
To investigate the effectiveness of SDEPO, we conducted a simulated experiment on a simple zero-sum Markov game (please see the Global Response).
We will include more complex and diverse scenarios in the experimental study and add more comparisons in the long version of this paper.
[1] Chengzhuo Ni, et al. Representation learning for general-sum low-rank markov games. arXiv preprint arXiv:2210.16976, 2022.
[2] C.-J. Hsieh, et al. A divide-and-conquer solver for kernel support vector machines. In International Conference on Machine Learning, 2014.
[3] A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In Proceedings of the International Conference on Machine Learning, pages 911–918, San Francisco, 2000. Morgan Kaufmann Publishers.
[4] Qiaomin Xie, et al. Learning zero-sum simultaneous-move Markov games using function approximation and correlated equilibrium. In Conference on Learning Theory, pages 3674–3682. PMLR, 2020.
[5] Zixiang Chen, et al. Almost optimal algorithms for two-player zero-sum linear mixture Markov games. In Proceedings of The 33rd International Conference on Algorithmic Learning Theory, volume 167, pages 227–261. PMLR, 2022.
[6] Chris Junchi Li, et al. Learning two-player mixture markov games: Kernel function approximation and correlated equilibrium. arXiv e-prints, pages arXiv–2208, 2022.
[7] Shuang Qiu, et al. On reward-free RL with kernel and neural function approximations: Single-agent MDP and Markov game. In International Conference on Machine Learning, pages 8737–8747. PMLR, 2021.
[8] Baihe Huang, et al. Towards general function approximation in zero-sum Markov games. In International Conference on Learning Representations, 2022.
[9] Chi Jin, et al. The power of exploiter: Provable multi-agent RL in large state spaces. In International Conference on Machine Learning, pages 10251–10279. PMLR, 2022.
---
Rebuttal Comment 1.1:
Comment: We thank you for your review and appreciate the time you spent on our paper. As the end of the discussion period is close, we would be grateful to hear your feedback regarding our responses to the reviews. We are happy to address any remaining points during the remaining discussion period. Thanks a lot in advance! | null | null | Rebuttal 1:
Rebuttal: We thank all of the reviewers for the detailed comments. Here we numerically verify the convergence of SDEPO. We designed a simple zero-sum Markov game with a continuous state space and finite action space ($\mathcal{S} = \mathbb{R}$, $|\mathcal{A}| = 5$). For the transition probability and reward function, we segment the state space into 42 distinct intervals as follows:
1. One interval from $-\infty$ to $-10$,
2. 40 intervals over $[-10,10)$, divided every 0.5 units,
3. One interval from $10$ to $+\infty$.
In the $i$th interval, the next state is $f(s,a,b)+\epsilon$, where $\epsilon\sim\mathcal{N}(0,1)$ and $f(s,a,b)=\epsilon_{i,a,b}$ with $\epsilon_{i,a,b}\sim \mathrm{Unif}(-10.5,10.5)$.
The reward function is $r(s,a,b)=\epsilon^\prime_{i,a,b}$ with $\epsilon^\prime_{i,a,b}\sim \mathrm{Unif}(-1,1)$, and the initial distribution is $\mathrm{Unif}(-10.5,10.5)$.
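A minimal simulation of this toy environment might look as follows; the helper names and the exact interval-lookup code are our reconstruction of the description above, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ACTIONS = 5

# Interval breakpoints: the 41 edges -10.0, -9.5, ..., 10.0 induce the 42
# intervals (-inf, -10), the forty half-unit bins of [-10, 10), and [10, +inf).
edges = np.arange(-10.0, 10.5, 0.5)

# One transition mean and one reward per (interval, a, b), drawn once.
means = rng.uniform(-10.5, 10.5, (42, NUM_ACTIONS, NUM_ACTIONS))
rewards = rng.uniform(-1.0, 1.0, (42, NUM_ACTIONS, NUM_ACTIONS))

def interval(s):
    """Index (0..41) of the interval containing state s."""
    return int(np.searchsorted(edges, s, side="right"))

def step(s, a, b):
    """Sample the next state f(s,a,b) + eps with eps ~ N(0,1), and the reward."""
    i = interval(s)
    return means[i, a, b] + rng.standard_normal(), rewards[i, a, b]

s0 = rng.uniform(-10.5, 10.5)   # initial state distribution Unif(-10.5, 10.5)
s1, r = step(s0, a=0, b=1)
```

Despite the continuous state space, the dynamics are piecewise constant over the 42 intervals, which is what makes the later comparison against a discretized tabular baseline meaningful.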
We ran SDEPO for 120 iterations and measured the convergence of $\underline{\pi}$ by the metrics in Proposition 1. As shown in Figure 1, SDEPO with random features and with Nyström features both converge after 60 iterations.
We also discretized the state space of this environment and compared our method with OFTRL [1], a tabular method that assumes a known environment. We adopted the parameter settings recommended in [1] and adjusted the environment to a 100-horizon setting. As shown in Figure 1 in the PDF file, our method demonstrated superior convergence in this environment, likely because OFTRL operates on the discretized state space whereas our method computes on the original state space.
Note that we do not include any algorithm for two-player zero-sum stochastic Markov games with an infinite state space, as all the existing methods [2,3,4,5,6,7] focus mainly on the theoretical aspect and do not provide any experimental results. In fact, these methods all involve a computationally inefficient subroutine: [2,3,4,5] need to solve a difficult 'find_ne'/'find_cce' subroutine, and [6,7] have to tackle a comprehensive constrained optimization problem.
[1] Policy Optimization for Markov Games: Unified Framework and Faster Convergence
[2] Qiaomin Xie, et al. Learning zero-sum simultaneous-move Markov games using function approximation and correlated equilibrium. In Conference on Learning Theory, pages 3674–3682. PMLR, 2020.
[3] Zixiang Chen, et al. Almost optimal algorithms for two-player zero-sum linear mixture Markov games. In Proceedings of The 33rd International Conference on Algorithmic Learning Theory, volume 167, pages 227–261. PMLR, 2022.
[4] Chris Junchi Li, et al. Learning two-player mixture markov games: Kernel function approximation and correlated equilibrium. arXiv e-prints, pages arXiv–2208, 2022.
[5] Shuang Qiu, et al. On reward-free RL with kernel and neural function approximations: Single-agent MDP and Markov game. In International Conference on Machine Learning, pages 8737–8747. PMLR, 2021.
[6] Baihe Huang, et al. Towards general function approximation in zero-sum Markov games. In International Conference on Learning Representations, 2022.
[7] Chi Jin, et al. The power of exploiter: Provable multi-agent RL in large state spaces. In International Conference on Machine Learning, pages 10251–10279. PMLR, 2022.
Pdf: /pdf/3f82d35821a56d6766e5c88777151efa865a8779.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |