Title: AdvAD: Exploring Non-Parametric Diffusion for Imperceptible Adversarial Attacks
Paper Decision: Accept (poster)
Summary: This paper proposes a diffusion-based adversarial attack method and conducts extensive experiments to show its effectiveness. Strengths: 1. The paper is well written and easy to read. 2. Both adversarial attacks and diffusion models are hot topics. 3. Both $l_\infty$ and $l_2$ results are given in the experiments. 4. An ablation study on T is provided. 5. The experimental results are convincing. Weaknesses: 1. Because this paper proposes a new algorithm, at least pseudo code should be provided to show how the algorithm works. 2. Table 1 shows that performance measured by FID sometimes does not match that of the other similarity metrics (e.g., PSNR, SSIM). For example, in the experiment on VisionMamba, the FID of the proposed method is not the smallest, unlike its results on the other similarity metrics. Can the authors explain this? Technical Quality: 3 Clarity: 3 Questions for Authors: same Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: same Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
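As context for the reviewer's point contrasting FID with pixel-level similarity metrics, PSNR is a simple pixel-space fidelity measure; a minimal NumPy sketch (not the paper's evaluation code — the function name and the [0, max_val] image range are assumptions):

```python
import numpy as np

def psnr(clean, adv, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a clean image and its
    adversarial example; higher PSNR means a subtler perturbation.
    Inputs are float arrays with values in [0, max_val]."""
    mse = np.mean((np.asarray(clean) - np.asarray(adv)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Unlike PSNR and SSIM, which compare pixels directly, FID compares feature statistics under a pre-trained Inception network, so the two families of metrics need not rank attacks identically.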
Rebuttal 1: Rebuttal: Thanks for your time in processing our manuscript and the valuable feedback. Our point-by-point responses are as follows.   **Weakness 1: Pseudo code.** **Re:** We appreciate your attention to this detail. We acknowledge that due to page limitations, we did not include the pseudo code in our initial submission. Instead, we made every effort to optimize our presentation and used clear illustrations with corresponding equation numbers (Figure 1) to introduce the workflow of our algorithm. We apologize for any difficulties this may have caused in understanding our manuscript. We have provided the complete pseudo code of our AdvAD and AdvAD-X in the attached PDF of the global author rebuttal, and it has also been included in the revised manuscript to enhance the clarity. Additionally, we have submitted runnable source code in the Supplementary Material for reproducibility, and we will also release the code after this work is accepted for publication. **Weakness 2: The results of FID are not always the best.** **Re:** It can be observed that the FID results of AdvAD are the second lowest only when attacking ConvNeXt and VisionMamba, with SSAH having the lowest results in these cases. Upon careful examination, we consider this to be due to the poor attack performance of SSAH on these two models, for the following reasons: Firstly, please note that the ASRs of SSAH are only **84.6%** and **49.8%** (ours are **100%** and **99.7%**) when attacking these two backbones (as shown in Table 1). As a method for directly optimizing the feature space in imperceptible attacks, SSAH shows such a low attack success rate, indicating that the added adversarial perturbations have minimal impact on the high-dimensional features within the attacked model layers. In other words, when SSAH attacks fail, the high-dimensional features of the crafted adversarial examples and the original images should be closer. 
Meanwhile, FID measures the Fréchet distance between the high-dimensional features of adversarial examples and original images extracted by a pre-trained Inception-V3. Therefore, the features of the adversarial examples generated by SSAH and the original images in the Inception-V3 model should also be closer, resulting in a lower FID score compared to ours. When all the attacks work normally, due to the inherently lower perturbation strength of AdvAD, the FID score, along with other metrics, achieves the best results among the comparison attacks including SSAH, demonstrating the effectiveness of our novel modeling approach. --- Rebuttal Comment 1.1: Title: Experimental verification regarding the previous response to the comment on FID metric (Weakness 2). Comment: To further validate the explanation provided above and offer a better understanding, we directly calculate the cosine similarity of the extracted global features between all adversarial examples and their corresponding original images. The average results for the two backbones are shown in the last two rows of the following table, respectively.

| Metric | ConvNeXt (SSAH) | ConvNeXt (AdvAD, ours) | VisionMamba (SSAH) | VisionMamba (AdvAD, ours) |
| :--- | :---: | :---: | :---: | :---: |
| **ASR (%) $\uparrow$** | 84.6 | 100.0 | 49.8 | 99.7 |
| **$l_2$ $\downarrow$** | 2.24 | 1.49 | 1.95 | 1.62 |
| **FID $\downarrow$** | 3.04 | 5.07 | 2.08 | 3.67 |
| **Avg. Feature Similarity (Attacked Model)** | 0.5745 | 0.4184 | 0.5424 | 0.2321 |
| **Avg. Feature Similarity (Inception)** | 0.9908 | 0.9845 | 0.9938 | 0.9888 |

It can be more easily observed that, consistent with our previous explanation, the lower FID results of the SSAH attack are due to its ineffectiveness in attacking the ConvNeXt and VisionMamba backbones.
As shown in the "Avg. Feature Similarity (Attacked Model)" row of the table, despite the higher strength of the perturbation injected into the adversarial examples (as indicated by the $l_2$ metric), the feature similarity of SSAH remains higher than ours. This suggests that the adversarial perturbation crafted by SSAH has more difficulty impacting the high-dimensional features on which the model's final classification decision relies, leading to its lower ASR. For the Inception model, which is used to calculate the FID metric, SSAH's feature similarity is amplified even further, to above 0.99 (the last row), indicating that the adversarial perturbation crafted by SSAH has almost no effect on the high-dimensional features of the clean images for the Inception model, resulting in its lower FID results. However, when all attacks are performing correctly, the magnitudes of change in image space and feature space can be considered positively correlated in the scenario of imperceptible attacks. In this case, benefiting from the inherently lower perturbation strength of our modeling approach, AdvAD consistently achieves the best FID score when attacking the other models, as shown in Table 1 and Table 5 (Appendix). --- Rebuttal 2: Title: Respectful Request for Further Discussion Comment: Dear Reviewer ytMn, We sincerely thank you for your valuable feedback and the time you have taken to process our submission. As the discussion phase is nearing its end, we respectfully ask for your help once again to review our responses and let us know if they address your concerns. Following your valuable suggestions, we have included the pseudo code for our AdvAD and AdvAD-X in the attached PDF of the global author rebuttal, and it has also been incorporated into the revised version to further enhance clarity. Additionally, we have provided a detailed explanation and corresponding experimental verification for the FID results, which you expressed concerns about.
Please also let us know if you have any further questions about this paper. We have made every effort to enhance our work based on your insightful comments, and we would deeply appreciate it if you could further support us! Best regards, The authors
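For reference, the "Avg. Feature Similarity" numbers discussed in the rebuttal above amount to a mean cosine similarity over paired feature vectors; a minimal NumPy sketch (the function name and the (N, D) array shapes are assumptions, and a real run would first extract global features with the attacked model or a pre-trained Inception-V3):

```python
import numpy as np

def avg_feature_similarity(clean_feats, adv_feats):
    """Mean cosine similarity between paired feature vectors.

    clean_feats, adv_feats: arrays of shape (N, D) holding global
    features (e.g. from a pooling layer of a pre-trained network);
    row i of each array corresponds to the same image pair."""
    clean = clean_feats / np.linalg.norm(clean_feats, axis=1, keepdims=True)
    adv = adv_feats / np.linalg.norm(adv_feats, axis=1, keepdims=True)
    return float(np.mean(np.sum(clean * adv, axis=1)))
```

A similarity near 1.0 in the FID backbone (as observed for SSAH) means the perturbation barely moved the Inception features, which is consistent with a low FID despite a larger $l_2$ perturbation.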
Summary: This work proposes a novel adversarial attack framework called Adversarial Attacks in Diffusion (AdvAD). Unlike prior methods that rely on generative models or specific loss functions, AdvAD formulates attacking as a non-parametric diffusion process. This approach theoretically explores a fundamental modeling strategy instead of leveraging the denoising or generation capabilities of diffusion models with neural networks. AdvAD iteratively refines the attack by crafting subtle adversarial guidance based solely on the targeted DNN, without requiring any additional network. This process progressively transforms the original image into an imperceptible adversarial example. Strengths: 1. The concept of leveraging the core principles of diffusion models to craft adversarial examples is intriguing. 2. The writing style in the Abstract and Introduction sections is engaging. 3. The effectiveness of AdvAD is validated through experiments. Weaknesses: While I have carefully reviewed the work several times, some uncertainties remain. These are addressed in the following "Questions" section. Due to these uncertainties, the initial recommendation is a borderline decision, which can be revised upwards or downwards based on the authors' response. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The paper claims that AdvAD achieves superior imperceptibility compared to other methods. However, after multiple readings, the justification for this remains unclear. While the text mentions "much subtler yet effective adversarial guidance at each step", "grounded on diffusion model's theoretical foundation", and various diffusion-related formulations, it lacks a clear explanation (or theoretical verification) of how using non-parametric diffusion leads to attacks with lower perturbations. 2. 
Based on my understanding, AdvAD shares similarities with traditional attacks like PGD, but with a fixed number of optimization iterations (controlled by the total diffusion model timesteps T) and a gradually decaying learning rate (controlled by the $\alpha_t$ value). I hypothesize that the decayed learning rate is a key factor contributing to imperceptibility. The learning rate starts low (at step T) and gradually decreases to 0 (at step 0). This suggests that small perturbations are added first, followed by even smaller ones, potentially leading to adversarial examples closer to the classification boundary and thus exhibiting higher imperceptibility. Could the authors comment on the validity of this hypothesis? 3. Following the previous question, to isolate the impact of the decayed learning rate strategy, it would be beneficial to include comparative experiments. These experiments would directly apply a small learning rate initialization and a decayed learning rate schedule to a traditional attack algorithm. If this approach achieves similar performance to AdvAD, it would suggest that the emphasis on diffusion model theory might not be necessary. If not, the authors should highlight the specific benefits of the diffusion model mechanism that contribute to imperceptibility. 4. Does the selection of the initial Gaussian noise sample $\epsilon_0$ affect the performance of AdvAD? 5. In Table 1, is the FID metric computed between the perturbed images and the "imagenet-compatible dataset"? If the goal is to assess the realism of the adversarial examples compared to natural images, a more comprehensive evaluation would be to compare them with the raw ImageNet images. The "imagenet-compatible" dataset, with only 1000 images, might not adequately capture the entire distribution of natural images. 6. Line 248 states that "the optimizer usually cannot find the global optimal solution, and optimization-based methods tend to show sub-optimal ASR."
How can AdvAD achieve better ASR compared to these optimization-based methods? 7. In Table 2, which classifier was used to generate the adversarial samples? The robustness of the adversarially trained model Inc-V3$_{adv}$ and the ensemble model Inc-V3$_{ens4}$ appears very low (almost 100% ASR). Additionally, were Res-50, Swin-B, and ConvNeXt-B clean classifiers or adversarially trained? 8. Line 292 suggests that "a larger T exhibits better imperceptibility, while a smaller T implies stronger black-box transferability." Can the authors elaborate on this trade-off? 9. There are a few minor writing errors throughout the paper, such as "a' adversarial" (Line 140), "fistly" (Line 144), and "an' scheme" (Line 191). 10. There are specific questions regarding the proofs of theorems and propositions in the Appendix: (1) In the second-row equality of Eq. 6, shouldn't $\epsilon_T$ be $\epsilon_{T+1}$ in the numerator of the first term? (2) In Line 546, it is clear that $\epsilon_{T+1}$ satisfies the inequality since $\epsilon_{T+1}$ equals $\epsilon_0$, but how does this hold for $\epsilon_{T}$ as well? (3) Eq. 20, third-row inequality: how is this inequality derived? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, limitations are identified in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time in processing our manuscript and the valuable feedback. Our point-by-point responses are as follows. **Question 1: Explanation and verification of AdvAD.** **Re:** 1) **Intuitive explanation.** The superior imperceptibility of AdvAD first comes from its modeling philosophy inherited from diffusion models. For image generation tasks, one of the widely recognized advantages of diffusion models lies in breaking down the complex problem of directly generating images (as, e.g., GANs do) into a series of simpler tasks, gradually pushing a Gaussian distribution toward the image distribution by approximating the noise at each diffusion step, thereby significantly reducing the difficulty of generation in a divide-and-conquer manner. Similarly, compared to previous attack frameworks that directly optimize or inject perturbations into images, the proposed _non-parametric diffusion process_ allows AdvAD to decompose attacking into the relatively simpler task of imposing _adversarial guidance_: manipulating the diffusion noise at each step to progressively guide the endpoint of diffusion from the original image to a desired adversarial example. Thus, the adversarial guidance injected at each step can be subtler but more effective, ultimately achieving imperceptible attacks with inherently lower perturbation strength. 2) **Theoretical verification.** As a completely novel attack paradigm without any loss function, optimizer, or additional neural network, the essence of AdvAD lies exactly in the _subtler_ (for imperceptibility) yet _more effective_ (for attack efficacy) adversarial guidance, and both characteristics are theoretically grounded. Firstly, the conditional sampling technique enables diffusion models to sample from a given conditional distribution (Eq. 3).
Derived from it, the proposed AMG module treats the goal of the attack as a condition and gradually moves the original diffusion trajectory of the image toward this adversarially conditional distribution through a non-parametric diffusion process (Eq. 4-8), while the PC module synergistically ensures the process is streamlined and controllable (Theorem 1), providing a guarantee of a successful attack. Secondly, beyond the aforementioned modeling philosophy, the imperceptibility is also supported by the unique property of the adversarial guidance as described in lines 156-161. In addition, our Propositions 1 and 2 and the experimental results (especially Sec. 4.5) provide further analysis and validation of this point. **Question 2, 3: Hypothesis and PGD with decaying learning rate.** **Re:** 1) As shown in Eq. 9, the $l_\infty$ upper bound of the guidance strength is independent of the step $t$. This means that there is no 'decaying learning rate/step size' coefficient for the guidance during the non-parametric diffusion process, and no loss function (e.g., cross-entropy) is used for gradient ascent as in traditional algorithms. We assume your hypothesis originates from the two decreasing curves about Proposition 1 in Figure 4. Actually, Proposition 1 is an analytical conclusion drawn from extensive formula derivations in Appendix B.3, which demonstrates that the effect of the adversarial guidance on the entire process diminishes from strong to weak. This is because, due to the nature of the non-parametric diffusion process, the earlier the guidance is injected, the greater its cumulative impact on the subsequent steps, corresponding to a greater movement distance toward the target distribution for the noisy sample at that step. 2) We conducted experiments with the PGD + decaying step size strategy.
We modify the PGD attack as: $x^{t+1} = \Pi(x^t + \lambda_t \cdot \eta \cdot \mathrm{sign}(\nabla_{x^t} L_{CE}(x^t, y)))$, where $T=1000$, $\lambda_t$ is consistent with our Proposition 1 for alignment, and $\eta$ is a fixed small factor for the initial learning rate. We explored a wide range of $\eta$ values to determine the optimal range, and the results of attacking three models with different architectures under three typical $\eta$ values are presented in Table R2 of the attached PDF. It can be observed that for PGD with this strategy, the ASR is clearly proportional to $\eta$, while the imperceptibility is inversely proportional to $\eta$. However, regardless of how $\eta$ is adjusted, this strategy _cannot_ simultaneously match AdvAD in both ASR and imperceptibility. Firstly, for $\eta = 5\times10^{-5}$, when attacking VisionMamba, the ASR of this strategy is 10.5% lower than AdvAD's at a similar PSNR. For ResNet50, the strategy has a 0.2% higher ASR but a 5.09 dB lower PSNR. When $\eta = 3\times10^{-5}$, the ASRs against VisionMamba and Swin degrade further, being 10.9% and 21.4% lower than AdvAD's, respectively. When $\eta = 1\times10^{-5}$, this method fails to attack all the models. **Question 4: Initial Gaussian noise.** **Re:** All Gaussian noises are randomly initialized and have almost no impact because they merely serve as a medium for the injected adversarial guidance. Across various seeds, the effect on the final results is akin to slight random fluctuations, within ±0.1% for ASR and ±0.02 dB for PSNR. **Question 5: Calculation of FID.** **Re:** For adversarial attacks, our goal is to assess the similarity between adversarial examples and their corresponding clean images to measure the imperceptibility of attacks, rather than the gap to the natural image distribution. Therefore, we adopt the FID calculation method consistent with previous work.
Moreover, in Table 1, we have included results of the MUSIQ metric, which assesses the realism of adversarial examples without reference images. **Due to the character limit, point-by-point responses to the remaining questions are placed in Part 2 of the global author rebuttal. Please kindly refer to that section for details.** --- Rebuttal Comment 1.1: Title: Thanks for the response. Comment: After reviewing the authors' response and considering the innovative nature of the attack they have implemented, I have decided to raise my score to 5. --- Reply to Comment 1.1.1: Comment: Thank you for the response. We sincerely appreciate your valuable comments and are grateful for the recognition of our work.
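The decayed-step-size PGD variant from the response to Questions 2 and 3 can be sketched on a toy model (everything here is illustrative: the linear $\lambda_t$ schedule stands in for the Proposition 1-aligned schedule, and a one-dimensional logistic "model" replaces the attacked DNN):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_decayed(x0, w, b, y, eta=0.01, T=100, eps=0.1):
    """PGD with a decaying per-step factor lambda_t, sketching the
    rebuttal's update x^{t+1} = Pi(x^t + lambda_t * eta * sign(grad)).

    Toy logistic model p(y=1|x) = sigmoid(w.x + b); the attack ascends
    the cross-entropy loss for the true label y in {0, 1}.  lambda_t
    decays linearly from 1 to 0 (an assumed stand-in schedule)."""
    x = x0.copy()
    for t in range(T):
        lam = 1.0 - t / T                    # decaying step-size factor
        p = sigmoid(w @ x + b)
        grad = (p - y) * w                   # d CE / d x for logistic loss
        x = x + lam * eta * np.sign(grad)    # gradient-ascent step on CE
        x = np.clip(x, x0 - eps, x0 + eps)   # l_inf projection Pi
    return x
```

As the rebuttal reports, such a schedule alone trades ASR against imperceptibility through $\eta$ but, in their experiments, could not match AdvAD on both at once.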
Summary: This paper proposes a new method, AdvAD, for generating imperceptible adversarial attacks against deep neural networks (DNNs). AdvAD is based on a novel non-parametric diffusion process, which initializes a fixed diffusion noise and then manipulates it at each step using adversarial guidance crafted by two modules: Attacked Model Guidance (AMG) and Pixel-level Constraint (PC). This process gradually leads the image from its original distribution to a desired adversarial distribution. The paper also introduces an enhanced version, AdvAD-X, which aims to achieve extreme performance under an ideal scenario. Extensive experiments demonstrate the effectiveness of both AdvAD and AdvAD-X in terms of attack success rate, imperceptibility, and robustness compared to state-of-the-art methods. Strengths: The paper's main strength lies in its novel approach to generating imperceptible adversarial attacks. The use of a non-parametric diffusion process is innovative and shows promising results in terms of attack effectiveness and imperceptibility. The paper is well-written and easy to follow, with clear explanations of the proposed method and experimental setup. Weaknesses: The paper has several weaknesses. The technical novelty is limited, as the core idea of using diffusion models for adversarial attacks has been explored in prior work (e.g., DiffAttack, ACA). The paper lacks a clear comparison with these existing diffusion-based attack methods, making it difficult to assess the incremental contribution of AdvAD. Another concern is the actual cost of generating adversarial examples this way. One important use of adversarial examples is adversarial training; if generating them becomes very expensive, that use would no longer be practical. What, then, is the real use of these adversarial examples?
Technical Quality: 3 Clarity: 3 Questions for Authors: What are the specific differences between AdvAD and existing diffusion-based attack methods like DiffAttack and ACA? A detailed comparison would help clarify the novelty of the proposed approach. What are the potential defensive measures against AdvAD, and how can they mitigate the threat posed by this attack? How does the computational complexity of AdvAD compare to other state-of-the-art methods, especially for large-scale datasets or models? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time in processing our manuscript and the valuable feedback. Our point-by-point responses are as follows. **Weakness 1 & Question 1: Clear and detailed comparison with existing diffusion-based attack methods.** **Re:** Compared to other recent notable works exploring adversarial attacks based on diffusion models, the proposed AdvAD is a completely novel approach distinct from existing attack paradigms. It is the first pilot framework to conceptualize attacking as a non-parametric diffusion process by theoretically exploring the fundamental modeling approach of diffusion models rather than using their denoising or generative abilities, achieving high attack efficacy and imperceptibility with intrinsically lower perturbation strength. To further highlight and clarify our contributions, we provide a detailed comparison below between AdvAD and other diffusion-based attack methods in various aspects. 1) **Motivation.** Existing diffusion-based attacks achieve imperceptibility by utilizing the denoising or generative abilities of diffusion models to heavily modify texture or semantic attributes of images in an image-editing manner rather than injecting typical adversarial noise. In contrast, the proposed AdvAD aims to address the fundamental challenge of imperceptible attacks, that is, performing attacks with inherently minimal perturbation strength from a modeling perspective. 2) **Theory of Attack.** Although diffusion models are introduced in existing diffusion-based attacks, their theoretical foundations still adhere to prior attack paradigms, either by integrating the optimization of adversarial loss functions (e.g., cross-entropy) or classical attack methods (e.g., PGD) with nested loops of forward and backward diffusion steps. In comparison, we propose a novel non-parametric diffusion process for performing attacks as a new paradigm.
The modeling of AdvAD is theoretically derived from conditional sampling of diffusion models, supporting its attack performance and imperceptibility, and it does not require any loss functions, optimizers, or additional neural networks. 3) **Methodology.** Among the existing diffusion-based attacks, DiffAttack and ACA combine the optimization of adversarial losses with Stable Diffusion to generate unrestricted adversarial examples, while Diff-PGD and AdvDiffuser incorporate the classic PGD attack into the normal diffusion process. Furthermore, as typical diffusion models, these attacks require pre-trained neural networks to estimate the noise term at each diffusion step. For our AdvAD, the adversarial example is crafted via a uni-directional non-parametric diffusion process from an initialized, fixed diffusion noise to the final sample. At each step, a much subtler (for imperceptibility) yet more effective (for attack performance) adversarial guidance is calculated via the proposed AMG and PC modules without additional networks, then injected by manipulating the diffusion noise. 4) **Effectiveness.** Due to the uncertainty of generative models and the unrestricted attack setting, existing diffusion-based attacks inevitably cause unnatural and unreasonable artifacts in the crafted adversarial examples, especially for images with complex content. Benefiting from our modeling approach, the proposed AdvAD is able to apply precise control over the perturbations and accomplish imperceptible attacks with inherently low perturbation strength. Extensive experimental results have also demonstrated the superiority of AdvAD. Based on AdvAD, we further propose an enhanced version, AdvAD-X, to explore the extreme performance in an ideal scenario for the first time, which also possesses theoretical significance and provides new insights for revealing the robustness of DNNs.
**Weakness 2 & Question 3: Computational complexity.** **Re:** We respect and concur with your point of view. One of the important roles of adversarial attacks is to promote corresponding defense methods and enhance the robustness of DNNs, and algorithmic efficiency is a key factor in the practice of adversarial training. In column 3 of Table 1, we report the total time cost of each attack for processing 1000 natural images. Thanks to our novel non-parametric, uni-directional diffusion process, which requires neither additional networks nor nested loops of forward and backward diffusion steps, the computational cost of AdvAD is mainly concentrated in taking partial derivatives to obtain the adversarial guidance (note that this is not the gradient of a loss function) at each step. Thus, the cost of AdvAD is significantly lower than that of ACA and DiffAttack, which run Stable Diffusion within optimization loops. Diff-PGD is more time-consuming than AdvAD when attacking small models (ResNet-50, 1.5x) but slightly faster for larger models (Swin-Base, 0.8x). Additionally, compared with non-diffusion optimization-based attacks, the computational cost of AdvAD is lower than PerC-AL's and comparable to the others', while the performance of AdvAD is much better. **Question 2: Potential defensive measures against AdvAD.** **Re:** In our research, we have considered the robustness of AdvAD against defense methods and presented the experimental results in Section 4.3. It can be observed that, due to its inherently low perturbation strength, AdvAD is relatively more susceptible to purification-based defenses, as are other state-of-the-art restricted imperceptible attack methods. However, the results indicate that AdvAD can relatively easily bypass robust models.
We hypothesize that this is because these robust models typically use previous attack algorithms based on adversarial losses or gradient ascent for adversarial training, whereas AdvAD, as a new attack paradigm, is not included. Thus, we also anticipate that AdvAD will further advance the development of related defense fields. --- Rebuttal 2: Title: Respectful Request for Further Discussion Comment: Dear Reviewer jCt2, We sincerely thank you for your valuable feedback and the time you have taken to process our submission. As the discussion phase is nearing its end, we respectfully ask for your help once again to review our responses and let us know if they address your concerns. In brief, we have provided a detailed point-by-point comparison with other diffusion-based methods from various perspectives to further complement and highlight the significant innovations and contributions of our paper, in addition to those you kindly recognized in the Strengths section. Moreover, we have elaborated on the computational complexity and potential defense methods following your constructive suggestions. It is our pleasure to receive your feedback. Please also let us know if you have any further questions about this paper. We have made every effort to enhance our work based on your insightful comments, and we would deeply appreciate it if you could further support us! Best regards, The authors
Summary: In this paper, the authors propose the Adversarial Attacks in Diffusion method, called AdvAD, which crafts imperceptible perturbations from the modeling perspective without the need for additional networks. During the non-parametric diffusion process, the proposed AdvAD method introduces Attacked Model Guidance (AMG) and Pixel-level Constraint (PC) modules to ensure both imperceptibility and transferability of the adversarial examples. Moreover, an enhanced version, AdvAD-X, is introduced to achieve better imperceptibility and lower computational complexity. Experiments conducted on CNN, ViT, and Mamba architectures using the ImageNet benchmark dataset demonstrate that the proposed AdvAD method achieves superior performance compared to baseline methods in terms of attack success rate, imperceptibility, and robustness. Strengths: 1. The proposed AdvAD method reformulates the generation of imperceptible adversarial examples as a non-parametric diffusion process, effectively eliminating the negative effects caused by the uncertainty of generative models. 2. The proposed AMG and PC modules inject adversarial guidance progressively at each step without the need for additional networks, and they have a solid theoretical guarantee that the error caused by approximation is strictly bounded. 3. Experimental results verify the effectiveness of the proposed AdvAD method, which achieves a higher attack success rate, lower perturbation strength, and better image quality. Weaknesses: 1. In Section 3.4, the details of AdvAD-X are not explained clearly. The exact step at which the original diffusion noise is adopted in the DGI strategy is not discussed. Additionally, in the CA strategy, the definition of non-critical image regions is not introduced. 2. In Section 4.4, the transferability experiment lacks comprehensiveness. As shown in Table 3, the proposed AdvAD method is only compared to a limited number of baseline methods. 3.
In Table 3, with a limited budget of updating iterations, the proposed AdvAD (T=10) performs very similarly to the classic PGD method when using Mob-V2 as a surrogate model, improving the black-box ASR by less than 1%. 4. The experimental settings for Table 4 are not fully introduced, such as the surrogate model used for crafting adversarial examples. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How are non-critical image regions defined for AdvAD-X? Is this definition universally applicable to all images, or is it adjusted for each image separately? 2. Why were other attack methods (such as NCF, ACA, Diff-Attack, Diff-PGD, etc.) not considered in the transferability experiment shown in Table 3? 3. As only a few values (e.g., 10, 100, 1000) are discussed, how can the optimal value of the step $T$ be chosen to ensure both imperceptibility and transferability in practice? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The proposed AdvAD-X method is only suitable for ideal scenarios where the crafted adversarial example is directly input to DNNs without undergoing quantization. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time in processing our manuscript and the valuable feedback. Our point-by-point responses are as follows. **Weakness 1 & Question 1: Details of AdvAD-X: exact step with the DGI strategy, and the definition of non-critical image regions.** **Re:** Due to the page limitation of the main paper, we placed the ablation study results of AdvAD-X in **Appendix C.3**, including the number of iterations of AdvAD-X (column 2, Table 6). For your convenience, we transcribe it here as follows:

| Attack | Iter. | ASR ($\uparrow$) | $l_2$ ($\downarrow$) | PSNR ($\uparrow$) | SSIM ($\uparrow$) | FID ($\downarrow$) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| AdvAD$\textcolor{blue}{\dagger}$ | 1000 | **100.0** | 0.97 | 52.60 | 0.9984 | 2.3894 |
| AdvAD + CA$\textcolor{blue}{\dagger}$ | 1000 | **100.0** | 0.89 | 53.27 | 0.9987 | 2.2033 |
| AdvAD + DGI$\textcolor{blue}{\dagger}$ | **3.97** | **100.0** | **0.34** | 63.60 | **0.9997** | 0.2317 |
| AdvAD-X$\textcolor{blue}{\dagger}$ | 4.05 | **100.0** | **0.34** | **63.62** | **0.9997** | **0.2301** |

From AdvAD to AdvAD-X, the effects of the two strategies, CA and DGI, are shown in this table. It can be observed that adding CA at each step of AdvAD slightly improves imperceptibility while maintaining the 100% attack success rate. However, the DGI strategy significantly reduces the number of iterations performing AMG and PC from 1000 to 3.97, which indicates that our framework theoretically requires only very little injected adversarial guidance to successfully perform attacks, proving the performance of our modeling method as well as the effectiveness of the adversarial guidance. In AdvAD-X, which finally uses both CA and DGI, the guidance strength at each step is further suppressed, resulting in a slight adaptive increase in the total number of iterations required, but the final perturbation strength continues to decrease to a more extreme level.
For the CA strategy, we adopt GradCAM (Gradient-weighted Class Activation Mapping) to obtain a mask of non-critical image regions for each image separately (as mentioned in lines 201-204 of the main paper). GradCAM is a widely used technique that indicates the important regions of the input image that a classification model focuses on when making decisions about a particular category. Specifically, we calculate the heatmap via GradCAM as a mask $\\boldsymbol{m}$ with the same resolution as the image. In this mask, each pixel value ranges from 0 to 1, with lower scores representing lower importance; thus the non-critical regions are defined. Consequently, AdvAD-X suppresses the intensity of the injected adversarial guidance at each step by performing element-wise multiplication with the mask $\\boldsymbol{m}$, which can be expressed as: $$\\boldsymbol{\\hat{\\epsilon}}_t = \\boldsymbol{\\epsilon} _0 - \\boldsymbol{m} \\cdot \\sqrt{1-\\alpha_t} \\nabla _{\\boldsymbol{\\hat{x}}_t}\\text{log}(1-p _f(y _{gt}|\\boldsymbol{\\hat{x}} _{t}^{0}))$$ Additionally, we have provided the pseudo code for the AdvAD and AdvAD-X algorithms in the PDF file of the global author rebuttal for better understanding. We also hope the discovery that AdvAD-X can successfully attack with extremely low perturbation strength on floating-point raw data can bring new inspiration for revealing the robustness and interpretability (e.g., decision boundaries) of DNNs. **Weakness 2 & Question 2: Other attack methods (NCF, ACA, Diff-Attack, Diff-PGD) are not considered in Table 3.** **Re:** 1) Rather than only focusing on transferability, the purpose of Table 3 is to show that although one of the main advantages of the proposed method is the imperceptibility achieved by the novel modeling approach with inherently low perturbation strength, our AdvAD also surpasses other state-of-the-art restricted imperceptible attack methods in transferability, even at a much smaller perturbation cost. 
This is despite the evident positive correlation between transferability and perturbation strength described in lines 292-295. Additionally, beyond a comparison of transferability, Table 3 also serves as an ablation study illustrating the effect of the step $T$, demonstrating that AdvAD can be flexibly adjusted through $T$ as a novel and general attack framework. 2) The NCF, ACA, and other methods you mentioned are unrestricted attacks, which do not impose an objective limitation on perturbation strength, resulting in significant alterations to the images (as shown in Table 1, their $l _2$ distances are 10x-75x greater than AdvAD's) and sometimes inevitably producing unnatural artifacts or unreasonable semantic modifications (see the visualizations in Figures 2 and 5). Given this substantial difference in perturbation cost, comparing the transferability of unrestricted and restricted methods is inappropriate. Therefore, we did not include these methods in Table 3. Instead, as mentioned above, we compared our method with other state-of-the-art restricted imperceptible attacks and demonstrated that our approach surpasses them in both imperceptibility and transferability simultaneously. **Weakness 3 & Question 3: Similar performance to PGD using Mob-V2 as a surrogate model, and the optimal value of $T$.** **Re:** In the global author rebuttal, we have further discussed the transferability of the proposed AdvAD and provided more experimental results to demonstrate its effectiveness, including more values of $T$, the optimal value selection, more comprehensive comparisons with other attacks, etc. Please kindly refer to the corresponding section for more details. **Weakness 4: Experimental settings of Table 4.** **Re:** We apologize for this omission. The attacked model adopted in Table 4 is the classic ResNet50, and the total step $T$ is consistently set to 1000 for all $\\xi$. We have added the above description of the experimental setting to the revised manuscript. 
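As an aside for readers, the CA masking step described in the response to Question 1 above can be sketched in a few lines of NumPy. This is an illustration only, not the authors' implementation; `eps0`, `grad`, `m`, and `alpha_t` are stand-ins for the initial noise, the adversarial gradient term, the GradCAM mask, and the diffusion schedule coefficient:

```python
import numpy as np

def suppressed_guidance(eps0, grad, m, alpha_t):
    """Element-wise mask suppression of adversarial guidance:
    eps_hat = eps0 - m * sqrt(1 - alpha_t) * grad."""
    return eps0 - m * np.sqrt(1.0 - alpha_t) * grad

rng = np.random.default_rng(0)
eps0 = rng.standard_normal((4, 4))   # initial diffusion noise
grad = rng.standard_normal((4, 4))   # adversarial gradient term
m = np.zeros((4, 4))
m[1:3, 1:3] = 1.0                    # only the central region is "critical"

eps_hat = suppressed_guidance(eps0, grad, m, alpha_t=0.5)

# Where the mask is 0 (non-critical regions), the noise is left untouched.
assert np.allclose(eps_hat[0], eps0[0])
```

Zeroing the mask in non-critical regions leaves the noise there equal to $\\epsilon_0$, so no extra adversarial guidance is injected outside the regions the heatmap marks as important.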
--- Rebuttal Comment 1.1: Comment: Thanks for the authors’ responses. My concerns have been properly addressed. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback, and we are glad that we have addressed your concerns.
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable feedback and recognition of our contributions. We also appreciate that they find the proposed method novel (**K1mG**, **ygiL**, **nUXD**, **ytMn**), the theory solid (**K1mG**, **ygiL**), the paper well-written (**K1mG**, **jCt2**, **nUXD**, **ytMn**), and that the experiments have demonstrated the effectiveness of our AdvAD and AdvAD-X (**K1mG**, **ygiL**, **jCt2**, **nUXD**, **ytMn**).   ### **Part 1: Responses to the common concerns about step $T$ and the transferability of AdvAD.** We find that three reviewers (**K1mG**, **ygiL**, **nUXD**) are curious about the effect of the step $T$ and the transferability exhibited by the proposed AdvAD. Although the primary focus of our method is to achieve imperceptible attacks with inherently minimal perturbation strength from a modeling perspective, AdvAD is actually a completely novel and general attack framework distinct from previous gradient-ascent or optimization-based ones, since no loss functions, optimizers, or additional neural networks are required. By conceptualizing attacking as a non-parametric diffusion process with adversarial guidance, AdvAD can flexibly adjust the decomposition granularity of the diffusion trajectory through the step $T$. A larger $T$ corresponds to a finer decomposition granularity, resulting in subtler yet effective adversarial guidance, which leads to final adversarial examples with lower perturbation strength. Conversely, a smaller $T$ corresponds to a coarser granularity, resulting in higher perturbation strength, which favors transferability. Despite this trade-off between perturbation strength and transferability, compared to other state-of-the-art restricted imperceptible attacks, AdvAD still demonstrates both better imperceptibility and transferability simultaneously, as shown in Table 3 of the manuscript. To further demonstrate this effect, we present five more values of $T$ in Table R1 of the attached PDF. 
Additionally, to elaborate the relationship between the transferability and imperceptibility of AdvAD, as well as the optimal trade-off in practice, we have plotted two line graphs based on Table R1. As shown in Figure R1 (a), as the value of $T$ on the horizontal axis changes, the relationship between imperceptibility and transferability follows the clear trend mentioned above, consistently across different surrogate models. For the optimal trade-off, we consider the intersection point of the two curves to represent a balance between imperceptibility and transferability. Accordingly, for the ResNet-50 and MobileNetV2 models, the optimal values of $T$ are 50 and 25, respectively. Moreover, Figure R1 (b) illustrates more direct curves of this relationship and the positions of the other comparison methods within it. Note that all the other comparison methods are located to the lower left of the curve of AdvAD. This indicates that our method consistently achieves the best results in both transferability and imperceptibility, fully demonstrating the effectiveness of the proposed AdvAD and the novel non-parametric diffusion based modeling approach. All the supplementary results have been included in the revised manuscript.   ### **Part 2: Responses to the remaining questions of Reviewer nUXD.** **Question 6: Why does AdvAD achieve better ASR compared to the optimization-based methods?** **Re:** Optimization-based methods must balance attack performance and imperceptibility through loss functions and may fall into local optima. In contrast, our method uses inherently small yet effective adversarial guidance to gradually push the non-parametric diffusion towards an adversarially conditioned distribution, without including any penalty terms for imperceptibility, thereby achieving a higher ASR. 
**Question 7: Settings of Table 2.** **Re:** As stated in the first row, the left half of Table 2 employs a standard ResNet50 to generate adversarial examples for three post-processing defense methods, while the right side presents the results of white-box attacks on adversarially trained robust models. Among these, Inc-V3 represents an earlier (2018) classic robust model, whereas the others are state-of-the-art adversarial training models published recently (e.g., [48] at NeurIPS 2023), thus exhibiting enhanced robustness. **Question 8, 9: Step $T$ and typos.** **Re:** We have elaborated on the trade-off issue in Part 1 of this author rebuttal. As for the writing errors, we thank you for kindly pointing them out, and we have carefully proofread and corrected all typos in the revised version. **Question 10 (1), (2): Initial case of Proof 1.** **Re:** As depicted in Figure 1 and Eq. 7 of the main text, $\\hat{x}_t^0$ is calculated using $\\hat{\\epsilon} _{t+1}$ from the previous step, thereby streamlining the whole process. Therefore, $\\epsilon$ is initialized at step $T+1$ and $x$ at $T$, and the initial case we need to prove is from $T$ to $T-1$, which means deriving the relationship of $\\hat{x} _{T-1}$, $\\bar{x} _{T-1}$, etc., given the relationship of $\\|\\hat{\\epsilon} _{T} - \\epsilon _0 \\| _{\\infty}$ obtained by imposing the PC module. **Question 10 (3): Eq. 20.** **Re:** We sincerely thank you for your carefulness in pointing out the issue with Eq. 20. We are sorry that the sequence of Eq. 20 was mistakenly written in reverse order. The correct process should run from bottom to top, because $\\|\\hat{\\epsilon} _{k} - \\epsilon _0 \\| _{\\infty}$ is actually not a conclusion but the condition that should be satisfied by applying the PC module, as stated in Eq. 9, 10 of the main text. Substituting this condition into the left side of the inequality in the third row, the inequality and the relationship of $\\|\\hat{x} _{k-1}-x _{ori}\\| _{\\infty}$ can be easily derived. 
This typographical error does not affect the correctness of the proof, and we have corrected it in the revised version. Pdf: /pdf/23929ef2957aecf3d478e47eddc6e1b1e50c696a.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper studies the task of generating imperceptible adversarial noise using a diffusion model. The proposed method theoretically models the attack process as a non-parametric diffusion process. Extensive experiments demonstrate the effectiveness of the proposed method. Strengths: (1) The writing is good and easy to follow. (2) The technical novelty is good: the paper studies imperceptible adversarial perturbation generation as a non-parametric diffusion process, and the underlying theory is solid. (3) The authors conduct extensive experiments and compare many baselines to show the effectiveness of the proposed method. Weaknesses: (1) The adopted defense methods are not strong, lacking some strong baselines, like randomized smoothing. [1] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. "Certified adversarial robustness via randomized smoothing." International Conference on Machine Learning. PMLR, 2019. (2) Could the generated adversarial examples be used in black-box (or transfer-based) attack settings? What are the limitations? Technical Quality: 3 Clarity: 3 Questions for Authors: see weakness Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time in processing our manuscript and the valuable feedback. Our point-by-point responses are as follows.   **Weakness 1: Defense of random smoothing.** **Re:** In Table 3 of the paper, we have evaluated the robustness of the proposed AdvAD against numerous advanced defense methods, including three post-processing purification methods and six adversarially trained robust models. Random smoothing is indeed another strong defense method, and following your suggestion we have conducted experiments to test the performance of our attack against it. Since random smoothing is not a truly end-to-end model but a method that uses the base model to make multiple predictions on noise-augmented images, we adopt a semi-white-box setup to fully test the attack performance. Specifically, we craft adversarial examples using only the base model without the smoothing process, then apply an additional 100 rounds (n=100) of random smoothing voting on the generated adversarial examples and test the final attack success rate. 
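The semi-white-box evaluation described above (attack the base model, then take a majority vote over n Gaussian-noised predictions) can be sketched as a toy illustration. This is not the actual evaluation code; `classify` is a stand-in for the base classifier:

```python
import numpy as np

def smoothed_predict(classify, x, sigma, n=100, seed=0):
    """Randomized-smoothing prediction: majority vote of the base
    classifier over n Gaussian-noised copies of the input x."""
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n):
        label = classify(x + rng.normal(0.0, sigma, size=x.shape))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy base "classifier": class 1 if the mean pixel value is positive.
classify = lambda x: int(x.mean() > 0)

x_clean = np.full(16, 0.5)   # clearly class 1 even under mild noise
assert smoothed_predict(classify, x_clean, sigma=0.25) == 1
```

An attack succeeds against the smoothed model only if it flips this majority vote, which is why the Top-1 error under smoothing is a stricter measure than the plain base-model attack success rate.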
The results are as follows:

| Attack | $\\sigma = 0.25$ Top-1 Error (%) | $l _2$ | $\\sigma = 0.50$ Top-1 Error (%) | $l _2$ | $\\sigma = 1.00$ Top-1 Error (%) | $l _2$ |
| :------ | :------: | :------: | :------: | :------: | :------: | :------: |
| **clean** | 17.3 | / | 30.3 | / | 46.8 | / |
| **AdvDrop** | 25.2 (+7.9) | 5.97 | 33.5 (+3.2) | 6.21 | 48.7 (+1.9) | 5.61 |
| **SSAH** | 21.8 (+4.5) | 13.84 | 32.4 (+2.1) | 14.82 | 46.9 (+0.1) | 13.68 |
| **AdvAD** | **28.2 (+10.9)** | **2.41** | **36.8 (+6.5)** | **2.51** | **50.4 (+3.6)** | **2.08** |

It can be seen that for all random smoothing defenses using base models pre-trained with different noise variances ($\\sigma$), the proposed AdvAD achieves a higher attack success rate with smaller $l_2$-norm perturbations compared to other state-of-the-art imperceptible attacks, which further demonstrates the effectiveness of the proposed AdvAD. In the revised version, we will include the relevant references and results. **Weakness 2: Transferability and limitation.** **Re:** Sure. The proposed AdvAD achieves the best imperceptibility with inherently minimal perturbation strength through a novel non-parametric modeling approach, while also surpassing other state-of-the-art restricted imperceptible attacks in transferability. The transferability of AdvAD is directly related to the step $T$: imperceptibility is positively correlated with the size of $T$, while transferability is negatively correlated with it. In Table 3 of the original manuscript, we have reported the relevant experimental results, and we have discussed transferability in more detail in the global author rebuttal section. 
In Table R1, we provided results corresponding to additional values of $T$, and in Figure R1, we plotted the relationship curve between transferability and imperceptibility to find the optimal trade-off, conducting a more comprehensive evaluation. All experimental results demonstrate the superiority of AdvAD. Please refer to that section for more details. Regarding limitations, although AdvAD achieves better transferability at lower perturbation strength, its primary focus remains on the imperceptibility of the attack. Consequently, its transferability is inevitably weaker than other black-box attack methods that operate in larger perturbation spaces and are specifically optimized for transferability. However, the proposed AdvAD is essentially a general attack paradigm with a novel modeling approach and a solid theoretical foundation. Therefore, by relaxing the constraint of perturbation strength and incorporating enhanced designs for the transferability, AdvAD also has significant potential in the direction of black-box attacks. We leave this aspect for future work. --- Rebuttal Comment 1.1: Comment: Thanks for your response. It addresses most of my concerns. Hope authors could include these discussions into the revision. --- Reply to Comment 1.1.1: Comment: Thank you for the response. We appreciate your valuable review and we will incorporate the discussions to the revised version accordingly.
null
null
null
null
null
null
Understanding Information Storage and Transfer in Multi-Modal Large Language Models
Accept (poster)
Summary: This paper presents an approach to investigate the layers in which multi-modal large language models retrieve factual information and shows several insights into their behavior. For causal tracing, they propose replacing the input text tokens with different ones so that the model responds differently from the correct answer with the replaced text tokens; then they copy activations from the clean layers until the model can reconstruct the correct answer. Through this investigation, their primary finding is that the information needed to answer a visual question is mainly retrieved from early MLP and self-attention layers in MLLMs, which differs from the insight that LLMs retrieve factual information from middle layers. They also reveal several facts: only a subset of visual tokens are involved in sending information to the LLM's early layers, and mid-layer self-attention layers are involved in transferring information from the early causal layers to the question’s final token. They also demonstrate that an approach similar to editing the factual knowledge of LLMs can be used to overwrite the knowledge of MLLMs. Strengths: 1. Their main strength is providing an approach to locate the layers in which MLLMs retrieve factual information. Since research on MLLMs is popular these days, the technique may be a good one for analyzing their behavior. 2. They present a new dataset to study the issue. 3. The insight obtained from their approach is interesting. Unlike LLMs, which retrieve factual information from middle layers, MLLMs retrieve factual information from early layers. This fact can be new to the community. 4. Other observations, such as how visual tokens are used in MLLMs, are insightful too. These facts might be useful for thinking about the design of MLLMs. Weaknesses: 1. They offer an interesting observation that MLLMs retrieve factual information from early layers, but they lack insight into why this happens. 
I think it is very important to give intuitions on why their behavior is different from LLMs. 2. They present the insight from their approach using specific examples as in Fig. 3. However, if my understanding is correct, they do not provide numerical stats of which layers are responsible for retrieving factual information. Since they construct a new dataset, I guess it is easy to compute quantitative values to compare in which layers LLM and MLLM are retrieving information. 3. The overall idea of their approach is not very novel. They borrow the idea from prior work and adapt it to MLLM. The novel part of their approach is replacing input text tokens so that the model makes an incorrect answer. 4. I think section 5 does not improve the value of this submission. The section focuses on how to edit the knowledge of MLLM following prior work. The topic is related, yet different from their core contributions. Technical Quality: 3 Clarity: 3 Questions for Authors: At this phase, I am on the borderline. I acknowledge the importance of analyzing the behavior of MLLMs and think the proposed approach is simple and reasonable. But, I also think the findings from this paper are not very novel. 1. Related to 2 in weaknesses, is the value of Fig. 3 all from a specific example? or is it averaged over many examples? 2. Please respond to all the weaknesses above since I might misunderstand some parts. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. We are glad that they acknowledge the strength and importance of our proposed methodologies and findings for future MLLM development and interpretability. Below we address the specific points raised by the reviewer: **They offer an interesting observation that MLLM retrieves factual information from early layers, but they lack insight into why this happens. I think….**: Through conducting our analyses, we developed several hypotheses as to why MLLMs retrieve information differently compared to LLMs. We suspect that the language model elicits distinct internal circuits (a set of MLPs and attention heads) in the presence and absence of visual information. If the visual information is not present, the language model relies on one type of circuit, but in the presence of information from visual tokens, this circuit gets overridden by another. This could be because the visual tokens contain fine-grained information about the constraint that forces the model to take a different path to obtain the final answer. Given the circuits are different, the nodes (e.g., MLPs/attention heads) in the circuit will therefore also be different, hence we see different causal layers activated for MLLMs versus LLMs. Validating these hypotheses fell outside the scope of this paper, however, our work lays down a strong foundation towards such future work. We will add a discussion on these hypotheses in the Appendix of the final version of the paper and lay down some possible directions for future work. **They present the insight from their approach using specific examples as in Fig. 3. However, if my understanding is correct, they do not provide numerical stats of which layers are...is the value of Fig. 3 all from a specific example?**: Fig. 3 is averaged across the examples in VQA-Constraints, thus providing a robust (visual) estimate of the causal layers in MLLMs for a factual VQA task. 
As suggested by the reviewer, we will augment this plot with numerical statistics to improve the readability of the results in the final version of the paper. **The overall idea of their approach is not very novel. They borrow the idea from prior work and adapt it to MLLM.**: Our proposed methodology, MultiModalCausalTrace, proposes a non-trivial adaptation to existing causal tracing methodologies in order for it to work for multi-modal inputs. Without it, no causal traces would have emerged (see Fig 7 - Appendix where we show this result) and our subsequent insights on MLLM information storage and retrieval would not have been possible. We also note that building on existing methodologies, even in small ways, drives the field meaningfully forwards. For example, the Vision Transformer is different from the Transformer only in how it views image inputs as a grid of visual tokens, yet this architecture has unlocked enormous advances across a wide range of computer vision tasks. We believe that MultiModalCausalTrace along with our probe-dataset can similarly unlock rich mechanistic interpretability studies for MLLMs in the future. **I think section 5 does not improve the value of this submission. The section focuses on how to edit the knowledge of MLLM following prior work. The topic is related, yet different from their core contributions**: Thank you for the feedback! We included Section 5, due to the following reasons: 1) First, it allowed us to validate the multi-modal causal tracing methodology proposed in Section 4. If our identified causal traces were spurious or not meaningful, then any subsequent model editing would have immediately revealed this. 2) Second, it introduces (and validates) one practical application where these causal traces could be used in the real-world. 
Correcting errors and adding new information has been widely explored for LLMs in light of the huge costs (including environmental) that are associated with re-training or fine-tuning these models. We, therefore, felt it was an important direction to direct the research community’s attention towards. Interestingly, our experiments suggest that targeted MLLM model editing is a better alternative to fine-tuning which we hope will inform future research in this direction. The finding is particularly relevant for long tail knowledge, for which it may not even be possible to have sufficient data for training or fine tuning.
Summary: The paper introduces MULTIMODALCAUSALTRACE, an algorithm designed to identify hidden states in large language models (LLMs) that store factual information, specifically extending this capability to vision-language models where images are encoded as visual tokens. Building on previous work in information storage identification, MULTIMODALCAUSALTRACE constructs three models: a clean model, a corrupted model (where visual tokens are randomly replaced, perturbing the hidden states), and a restored model (where some corrupted hidden states are replaced with clean ones). By conducting mediation analysis on the causal relationship between state corruption and next-token prediction outcomes, the algorithm can pinpoint layers associated with different facts. The study reveals that multi-modal LLMs exhibit distinct behaviors in information storage and propagation, such as storing facts in earlier transformer layers rather than the middle layers. Additionally, the paper contributes the VQA-Constraints dataset, derived from existing visual QA datasets and annotated with constraints. The paper also proposes MULTEDIT, an algorithm for inserting or correcting factual knowledge in multi-modal LLMs. MULTEDIT optimizes the projection matrix in the MLP layers of transformers to minimize the mean squared difference between the projection output and the value vector that maximizes the prediction probability of the desired output. Empirical results demonstrate that MULTEDIT effectively corrects factual errors in VQA tasks. Strengths: + The paper tackles the important task of identifying and correcting factual information in vision-language models, a topic that has received relatively less attention than in language modeling. + The paper provides strong motivation for the problem and background it addresses, as well as a clear visual and formal presentation of the methodology adopted. 
+ The proposed solution is comprehensive and encompasses various aspects of the information storage analysis problem. It delivers valuable insights into some practical challenges of adapting existing LLM-based methods to vision-language models, such as the increased number of constraint tokens due to visual encoding. Weaknesses: + The uniqueness and impact on multi-modal models are not thoroughly demonstrated. As stated in the paper, the proposed method is largely based on the canonical Causal Trace (CT) and Rank-One Model Editing (ROME), with modifications in implementation allowing them to work for vision-language models. However, the choice (or absence) of most of these modifications is not well-motivated, and no ablation study is provided to understand their exact contribution. For example, the proposed MULTEDIT largely resembles the existing ROME method, with differences in how the key vector is found and how the optimal value vector is acquired. But it is not clear from the text why such changes are necessary and/or make the method work better for vision-language models. + The lack of discussion on multi-modal properties hinders the paper's novelty. The overall structure of the paper is similar to [23], and the main text does not substantially differentiate the methodology from prior work. + Minor formatting issue in the references: journal/conference names are missing in some items (such as [23] [24]) (The reference indices of the original submission are used throughout this review) Technical Quality: 3 Clarity: 3 Questions for Authors: + In vanilla ROME editing, the value vector is found by optimizing both the probability of the desired output and similarity to the essence of the original sentence. But in MULTEDIT the latter term is dropped. What is the purpose of this change, and how does the updated objective preserve the understanding of the overall prompt? + In the creation of VQA-Constraints, do the authors manually correct all annotations generated by GPT-4? 
If so, what is the main advantage of starting with automated annotations? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors included a limitation section in the appendix covering drawbacks of the proposed method and potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback, and for acknowledging the comprehensiveness of our findings and insights. We are glad that they also see the importance of being able to identify and correct factual information in MLLMs, which is currently an understudied problem. Below we address the points raised by the reviewer: **“The uniqueness and impact of multi-modal models are not thoroughly demonstrated…..’**: Our two methodologies, i) MultimodalCausalTrace, for multi-modal causal tracing, and ii) MultEdit, a multimodal editing method, are novel in their ability to work with multimodal (image-text) inputs. Towards i), we found that applying existing causal tracing methodologies out-of-the-box did not elicit relevant causal traces for multimodal inputs (attributed to large noise in the corrupted model due to the large number of visual tokens) - see Fig.(7) - Appendix. This required us to extend these methodologies in a non-trivial way for multi-modal inputs. Towards ii), we introduce a novel editing approach that is based on ROME, but improves it with an updated loss objective that does not rely on caching a Wikipedia dataset. This is computationally preferable in a multi-modal setting, as images have a larger memory footprint than text alone. We also introduce a novel probe dataset, VQA-Constraints, to conduct our analyses, which can be used to drive future impact in multimodal interpretability studies. **As stated in the paper, the proposed method is largely based on the canonical Causal Trace (CT) and Rank-One Model Editing (ROME),.....**: We note that MultEdit has certain distinctions from the ROME method, which we detail as follows: While the high-level idea of employing a closed-form approximation to update (edit) a targeted localized weight matrix in the network is similar to LLMs, there are certain major distinctions in the editing method itself: (i) If looked at closely, the edit-optimization equation in ROME [23] - Eq. 
(2) differs from Eq. (5) of MultEdit in our paper. In principle, our editing method does not require caching a sample of Wikipedia text for computing the uncentered covariance in their equation. This term in the LLM editing works (e.g., ROME) ensures that the values corresponding to keys for unrelated texts remain similar. However, in our case, we enforce this condition with the L2 regularizer, ensuring that the updated weight matrices do not deviate significantly from the original weight matrices (controlled by a hyperparameter $\\lambda$). We find that this simple modification leads to strong editing results. One can also use the editing equation from ROME to update MLLMs, but that would require caching the embeddings from a multimodal Wikipedia-type dataset (which is not readily available and requires curation/cleaning), which might be inefficient and would also incur an additional operation. (ii) We also find that obtaining the values by *only* optimizing the multimodal language modeling next-token prediction loss is sufficient for obtaining good embeddings of values – which leads to good editing performance when used with Eq. (5) in our paper. During our experiments, we did add a KL divergence loss whose objective was to maintain the output probability distribution for the prompt (visual prompt + <visual-constraint> is a ) between the original MLLM and the MLLM whose value vector is optimized. However, empirically we did not see an improvement in the editing performance. Overall, MultEdit is simpler in implementation and does not require caching a multimodal Wikipedia entry while leading to strong editing performance. We will add these distinctions from the LLM editing works such as ROME in the final version of our paper. We also note that our paper has a package of contributions (including MultimodalCausalTrace and VQA-Constraints) that as a whole advance the understanding of large multimodal language models. 
Each of these steps required technical novelty in terms of handling multimodal information and architectures, besides adapting current techniques. **Minor formatting issue in the references: journal/conference names are missing in some items (such as [23], [24]):**: Thank you for pointing this out. We will fix these formatting issues in the final version of the paper. **The lack of discussion on multi-modal properties hinders the paper's novelty. The overall structure of the paper is similar to [23], ...**: We point the reviewer to Section 3.2 in our paper, where we describe the causal tracing methods used for language models and how our method MultiModalCausalTrace differs from and extends this to work for multi-modal inputs. In the final version of the paper, we will make the distinctions between Causal Trace and MultimodalCausalTrace clearer for better readability. We also point the reviewer to the first and second points in the rebuttal where we discuss the distinctions and the uniqueness of the various components in our paper. **In the creation of VQA-Constraints, do the authors manually correct all annotations generated by GPT-4? If so, what is the main advantage of starting with automated annotations**: For the Movies dataset in VQA-Constraints, the questions are templated (e.g., questions are of the form ‘Name of movie directed by this director’) so the visual constraint (e.g., this director) is defined by default. For the OK-VQA and Known datasets in VQA-Constraints, we conducted an MTurk study to filter and correct erroneous annotations. Initially, we used automated annotations, finding that GPT-4 achieved approximately 96% accuracy in annotating constraints in VQA questions with as few as 50 in-context examples (which we manually annotated with constraints). Based on this high efficacy, we used GPT-4 for the initial annotations and then used MTurk to correct any remaining errors (<3% of the total examples in VQA-Constraints). 
We will include these filtering details in the final version of the paper.
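The L2-regularised edit described in the rebuttal above admits a simple closed-form illustration. The snippet below is a hypothetical numpy sketch (not the authors' code; shapes and \lambda values are made up): minimising ||W k - v*||^2 + \lambda ||W - W_0||_F^2 over W gives W = (v* k^T + \lambda W_0)(k k^T + \lambda I)^{-1}, so a small \lambda lets the edit dominate while a large \lambda keeps the weights close to the originals.

```python
import numpy as np

def l2_regularized_edit(W0, k, v_star, lam):
    """Closed-form edit of a weight matrix W0.

    Minimizes ||W @ k - v_star||^2 + lam * ||W - W0||_F^2 over W,
    keeping W close to W0 via an L2 regularizer instead of the
    cached uncentered covariance used by ROME-style editors.
    """
    d = k.shape[0]
    # Stationarity: (W k - v*) k^T + lam (W - W0) = 0
    #   =>  W (k k^T + lam I) = v* k^T + lam W0
    A = np.outer(k, k) + lam * np.eye(d)
    B = np.outer(v_star, k) + lam * W0
    return B @ np.linalg.inv(A)

rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 3))       # original weight matrix
k = rng.normal(size=3)             # "key" for the edited fact
v_star = rng.normal(size=4)        # target "value" vector

W_small = l2_regularized_edit(W0, k, v_star, lam=1e-8)  # edit dominates
W_large = l2_regularized_edit(W0, k, v_star, lam=1e8)   # regularizer dominates

print(np.linalg.norm(W_small @ k - v_star))  # ~0: key now maps to the target
print(np.linalg.norm(W_large - W0))          # ~0: weights barely move
```

In this reading, \lambda plays the role of the uncentered-covariance term in ROME: it is what keeps values for unrelated keys (approximately) unchanged.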
Summary: This paper studies information storage and transfer in multi-modal large language models (MLLMs). The authors provide a comprehensive empirical study leveraging the causal tracing method, i.e., corrupting a clean model by perturbing the input prompt, to identify which layers are used to retrieve information relevant to the constraints in the prompt. The authors also leverage attention contributions, which compute how much one set of input tokens influences a set of output tokens, to track how information is transferred from visual tokens to the causal layers. To provide new insights, a new dataset called VQA-Constraints has been created to support the empirical study. The authors provide many new insights that differ from existing research on LLMs: MLLMs rely on MLP and self-attention blocks in much earlier layers for information storage, and a consistent small subset of visual tokens output by the vision encoder is responsible for transferring information from the image to these causal blocks. The mechanism revealed in this study also inspires a model-editing algorithm that can correct errors and insert new long-tailed information into MLLMs. Strengths: 1. Overall, this paper is very well written, and the main messages are conveyed smoothly. The findings and takeaway messages have been delivered clearly. 2. The paper's novelty stems not only from the research problem but also from the design of the exploration method, the novel insights, and the corresponding proposed model-editing method. To the best of my knowledge, this should be the first work that provides a comprehensive study of knowledge tracing on MLLMs, providing a great foundation for future exploration. 3. The research findings are very interesting and insightful. They are validated on different datasets, including the newly proposed one, making the insights solid and sound. 4. 
The proposed model-editing method is effective and can partially validate the mechanism identified in the paper. Weaknesses: This is a strong paper in general, and the reviewer does not find any critical weakness in the present paper. The reviewer is merely curious whether the code will be made public upon acceptance. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the Weaknesses section for more details. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
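The causal-tracing procedure summarized in this review can be illustrated on a toy model. The sketch below is a hypothetical numpy illustration (a random feed-forward net, not the paper's MultimodalCausalTrace): corrupt the input, then restore part of one layer's clean hidden state at a time, and measure how much of the clean output is recovered — layers whose restoration recovers the output are the "causal" ones.

```python
import numpy as np

rng = np.random.default_rng(1)
Ws = [rng.normal(scale=0.5, size=(8, 8)) for _ in range(4)]  # 4-layer toy net

def forward(x, restore_at=None, cached=None, dims=slice(0, 4)):
    """Run the toy net; at layer `restore_at`, overwrite the chosen
    `dims` of the hidden state with the cached clean activation
    (the causal intervention / "patching" step)."""
    h = x.copy()
    states = []
    for i, W in enumerate(Ws):
        h = np.tanh(W @ h)
        if restore_at == i:
            h[dims] = cached[i][dims]
        states.append(h.copy())
    return h, states

x_clean = rng.normal(size=8)
out_clean, clean_states = forward(x_clean)           # cache clean activations
x_corrupt = x_clean + rng.normal(scale=3.0, size=8)  # "corrupted" input
out_corrupt, _ = forward(x_corrupt)

base_err = np.linalg.norm(out_corrupt - out_clean)
for layer in range(4):
    out_patch, _ = forward(x_corrupt, restore_at=layer, cached=clean_states)
    recovery = 1 - np.linalg.norm(out_patch - out_clean) / base_err
    print(f"restore layer {layer}: recovery = {recovery:+.2f}")
```

Only a slice of each hidden state is restored here, mimicking how real causal tracing restores a subset of token positions; restoring a full layer state in this deterministic toy would trivially recover the output exactly.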
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback, in particular acknowledging that this is the “first work that provides a comprehensive study of knowledge tracing on MLLM”. We are also glad that the reviewer finds our paper well-written and feels that the work can provide a solid foundation for future exploration in the space of MLLM interpretability. We will certainly be releasing the code publicly, and will link it in the final camera-ready version of the paper.
Summary: This manuscript studies mechanistic interpretability in autoregressive vision-language models. Towards this, the authors propose `MultiModalCausalTrace`, an extension of the causal tracing technique for analyzing text-only LLMs, which perturbs "visual constraint" tokens with a set of semantically coherent but irrelevant tokens and measures the resulting changes in model behavior. The authors observe that VLMs store and transfer information at early layers, while LLMs operate in early-to-mid layers. Based on this observation, the paper proposes `MultEdit`, a technique that injects long-tailed information in these causal layers of VLMs. Strengths: - The paper's presentation is very clear and tells a coherent story. - The main techniques (multimodal causal tracing) are reasonable; they are built upon well-tested frameworks for mechanistic interpretability of LLMs. - The findings are intriguing as the authors observe VLMs behave differently in terms of information storage and transfer, compared to LLMs. - `VQA-Constraints` is a valuable contribution to the interpretability community, and binding visual input to a natural language reference is a reasonable way to evaluate knowledge for VQA; it differs from the many algorithmic tasks studied in prior mechanistic interpretability works on text-only LLMs. Weaknesses: - While `MultEdit` has demonstrated impressive editing efficiency and generalization performance, the technical novelty is somewhat limited as it is an application of a well-tested technique for editing LLMs. I would also like to see its generalization performance not only on `VQA-Constraints`, but also on other standard VLM benchmarks, such as MMMU. - While the authors' findings clearly indicate that VLMs store and transfer information differently than text-only LLMs, I'm hoping that the authors could give a more detailed explanation of the cause of this. 
Since all tested VLMs are fine-tuned from LLMs, it is conceivable that they would operate in a similar fashion. - This work currently only examines multimodal fusion at the embedding level, while there are other popular approaches such as the Flamingo architecture. I'm curious to see whether the authors' findings would still hold under these different architectural choices. Technical Quality: 3 Clarity: 4 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed limitations and potential societal impacts of their work Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing constructive feedback and appreciating the paper’s presentation, techniques, and findings. We are glad that the reviewer feels that VQA-Constraints is a valuable contribution to the research community. Below we address the weaknesses raised by the reviewer: **While MultEdit has demonstrated impressive editing efficiency and generalization performance, the technical novelty is somewhat limited as it is an application of a well-tested technique for editing LLMs**: While the high-level idea of employing a closed-form approximation to update (edit) a targeted localized weight matrix in the network is similar to LLMs, our proposed editing method has two key distinctions: (i) our editing method does not require a sample of Wikipedia text to be cached. In contrast, in LLM editing techniques like ROME [23], this is required for computing the uncentered covariance to ensure that the values corresponding to keys for unrelated texts remain similar (see Eq (2)). In our case, we enforce this condition with an L2 regularizer which ensures that the updated weight matrices do not deviate significantly from the original weight matrices (controlled by a hyperparameter \lambda - see Eq (5) in our paper). We find that this simple modification leads to strong editing results. One could use ROME to update MLLMs, but that would require caching the embeddings from a multimodal Wikipedia-type dataset (which might not be readily available and clean for our use-case), thus adding an additional operation. (ii) Our method *only* optimizes the multimodal language modeling next-token prediction loss. In contrast, LLM editing techniques optimize the language modeling loss along with a KL divergence loss which preserves the essence of the subject. In fact, during our early experimentation in the project, we used an additional KL divergence term to preserve the essence of the visual constraint. 
In particular, the objective of the KL divergence loss was to maintain the output probability distribution for the prompt (visual prompt (from visual tokens) + “<visual-constraint> is a”) between the original MLLM and the MLLM whose value vector is optimized. However, empirically we did not see an improvement in the editing performance. Therefore, we use the value vectors obtained with just the multimodal language modeling next-token prediction loss. Overall, MultEdit is simpler to implement and does not require caching a multimodal Wikipedia entry while leading to strong editing performance. We will add these distinctions in the final version of our paper. We note that MultEdit is one of our paper’s contributions. The others include MultimodalCausalTrace, a multi-modal causal tracing methodology, and the dataset VQA-Constraints. Together these contributions advance our understanding of how large multimodal language models process information. Each required technical novelty in terms of handling multimodal information and architectures, besides adapting current techniques. **I would also like to see its generalization performance not only on VQA-Constraints, but also on other standard VLM benchmarks, such as MMMU**: Thank you for the suggestion! We have evaluated the edited LLaVa on MMMU and report the following results: LLaVa-7B (unedited) obtains 34.4% on the MMMU validation set, and LLaVa-7B (averaged across multiple edits) obtains 33.8% on the MMMU validation set. This result highlights that targeted model editing does not impact generalization performance significantly. We use the evaluation scripts from https://github.com/BAAI-DCAI/Bunny for the evaluation. **While the authors' findings clearly indicate VLMs store and transfer information differently than text-only LLMs, I'm hoping that the authors could give a more detailed explanation of the cause of this. Since all tested VLMs are fine-tuned from LLMs, ....**: The reviewer raises a good point. 
Through conducting our analyses, we developed several hypotheses as to why MLLMs retrieve information differently compared to LLMs. We suspect that the language model elicits distinct internal circuits (a set of MLPs and attention heads) in the presence and absence of visual information. If the visual information is not present, the language model relies on one type of circuit, but in the presence of information from visual tokens, this circuit gets overridden by another. It can also be crucial to study what happens to the visual tokens after the projection stage (i.e., what type of information flows into the final architecture and how the constraints are encoded), as that is the potential orchestrator of eliciting a different circuit in the model. Our work lays down a strong foundation and a practical set of tools (that we plan to open source) to study these hypotheses. We believe that making these results and tools available will also help us initiate a discussion and welcome hypothesis proposals from the community that can be studied in future work. **This work currently only examines multimodal fusion at the embedding level, while there are other popular approaches such as the Flamingo architecture. I'm curious to see whether the authors' findings would still hold under these different architectural choices**: We scoped our paper to focus on the embedding-level fusion MLLM family (e.g., LLaVa, Multimodal-Phi3) because it has generally demonstrated stronger performance across a wide range of multi-modal tasks, compared to other families. Models like Flamingo fall into another MLLM family where visual tokens are fused with the language decoder at different layers. Because of these mechanistic differences, we expect that the causal layers will be different; however, we leave this to future work to confirm. 
We note that since our framework and methodologies (including the probe datasets) can be applied to *any* MLLM architecture, they could be used to conduct these future analyses. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response! I have increased my score accordingly.
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive feedback and comments. We have individually addressed the comments in their respective sections. We want to highlight that our paper proposes a package of novel contributions that work together to advance our understanding of how state-of-the-art multimodal models process visual and textual information. Our proposed methodologies, MultiModalCausalTrace and MultEdit, are carefully designed, extending existing approaches for causal tracing and model editing to multimodal inputs, and are generalizable across multiple MLLM families. Together with our proposed dataset, VQA-Constraints, we present a wide suite of novel insights on how MLLMs store/retrieve information, and how information can be inserted into them in a computationally efficient way. We will open source all our methods, datasets, and code to enable the community to drive further advances in MLLM interpretability.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Carefully Blending Adversarial Training and Purification Improves Adversarial Robustness
Reject
Summary: To better defend against adversarial attacks, the paper proposes a novel adversarial defense mechanism for image classification – CARSO – blending the paradigms of adversarial training and adversarial purification in a synergistic robustness-enhancing way. Strengths: The paper proposes a novel defense mechanism. The proposed method is validated on multiple datasets. Weaknesses: 1. The presentation of the paper is poor. a) In the first half of the paper, the author merely describes some background. There is a lack of analysis of existing methods, such as the shortcomings of the current methods, what problems the proposed method can solve, and why it can solve these problems. b) Some descriptions are unclear, such as 'Upon completion of the training process, the encoder network may be discarded as it will not be used for inference.' I think 'may' should be removed here. 2. The current experiments are insufficient to prove the effectiveness of the proposed method. a) Table 2 simplifies a lot of information, which reduces clarity; for example, it only records the mean or best results of multiple methods and lacks the clean accuracy of the purification method. I suggest listing all methods according to both clean accuracy and adversarial accuracy. The existing content in Table 2 can be added as additional row information. b) Since the paper does not give specific problems, only a general goal, which is to better defend against adversarial attacks, the experiments become relatively limited. I believe the author should re-summarize the shortcomings of existing methods and the advantages of the proposed method and conduct more experimental comparisons. Technical Quality: 2 Clarity: 1 Questions for Authors: See Weaknesses. Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The authors have discussed limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for his/her observations, noticing however that part of such review is based on what we believe to be a mischaracterisation of our paper’s goal and contents. We will address the Reviewer’s concerns in a similar list-based format. 1. **Poor presentation.** **a)** We will start by noticing that the portion of the paper in which no novel information is added with respect to existing published literature accounts for at most ~1/4th of the lines (14 to 43, 80 to 135, 285 to 288; *i.e.* 90 lines over 356 in total), and even less so in terms of page space. A figure far from the *‘first half’* mentioned by the Reviewer. Additionally, the Reviewer does not address whether, and for what specific reasons, the mentioned background plays an irrelevant role in the overall structure of the paper. This prevents us from addressing the merit of the question. An analysis of existing methods in the fields of AT- and purification-based adversarial defences is given in *Sections 1* and *2*, and in part of *Subsection 5.2*. Specifically, some shortcomings of existing methods are addressed at lines 38 to 43, 98 and 99, 106 to 108, 113 and 114, 310 to 313. Our analysis is consistent with the goals of our paper, as outlined in the *Abstract* and the final paragraphs of *Section 1*: *i.e.* to propose a *novel* approach that achieves adversarial robustness thanks to the blending of AT and purification, and to assess its empirical robustness according to a standardised benchmark for $\ell_{\infty}$ perturbations, across some datasets. We do not believe that such a goal would require explicitly identifying pitfalls in existing approaches (which indeed serve as a basis for our method) and targeted solutions to them. Owing to its architectural novelty as the main aspect of interest, we provide some justification of its inner workings (the *‘problems it can solve’*) in *Subsections 4.2, 4.3, 4.4, 4.5* and *Appendices C* and *D*. 
We also believe that the recorded empirical robustness, improving upon the existing *state of the art*, constitutes a definitive justification in spite of the (well acknowledged) clean accuracy decrease. **b)** Standing by the stance that the prescriptive meaning of the modal verb *‘may’* is acceptable in such a case, we can further clarify its meaning by operating the substitution: `the encoder network may be discarded ` $\rightarrow$ `the encoder network is discarded`. Hardly believing that such a single element is responsible for a large disruption of clarity across the paper, we are prevented from commenting upon other passages, as they have never been explicitly mentioned by the reviewer. 2. **Insufficient experiments.** We are unable to understand whether the Reviewer implicitly refers to the presence of other aspects of concern – besides later-mentioned points (a) and (b) – as the reason to believe our experiments to be *‘insufficient to prove the effectiveness of the proposed method’*. We will comment on the points explicitly mentioned. **a)** We are sorry that the Reviewer finds *Table 2* a source of reduction in clarity. We are aware of the presentation choice to compare our method to only the best-performing existing models in terms of empirical robust accuracy – which we find nonetheless adequate in the light of our declared goals (see *e.g.* the *Abstract*, *Section 1*, and the previous point *(1)* of the numbered list). Additionally, we would point out that the choice of assessing different models on the basis of the worst-case scenario against different adversarial attacks is a well-established custom of the field, and a main staple in the definition of the *AutoAttack* benchmark. Also, no averaged result is shown in *Table 2* (as the Reviewer states instead) – apart from dataset-averaged accuracies. Finally, the *clean* accuracies of all methods used in the comparison are shown either in *Table 2* or in *Appendix F* (*Table 15*). 
In an effort to improve clarity in the presentation of our results, the contents of *Table 15* can be moved to *Table 2*. **b)** We believe that saying that *‘the paper does not give specific problems, only a general goal’* represents a mischaracterisation of our work and its goals. Indeed, we recall once more the main aim of our endeavour: to devise a novel technique for adversarial defence based upon the blending of adversarial training and purification – and the assessment of such a method on a standardised benchmark (for $\ell_{\infty}$ perturbations on image classification) across some datasets. We believe – as all other Reviewers agree – that the novelty is the main aspect of interest in our proposal, together with its assessment that proves its efficacy. As such, we do not consider it a weakness that our method is not built as a response to specific shortcomings of other existing methods – apart from the inferior empirical robust accuracy that they finally provide in the experimental scope addressed, and their reliance on model approximation or surrogation to assess the *end-to-end* *white-box* adaptive robustness. Aware that a further broadening of the experimental scope will increase its justificatory strength, we are prevented from commenting on specific aspects of the suggestion due to its non-specific nature. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. As you can see, nearly all of my comments relate to the readability of the paper. Indeed, there are many key explanations and descriptions missing, as I pointed out in my initial review. Although I only provided a few examples for each issue, these examples are sufficient to demonstrate that the overall readability of the paper is weak. Of course, these are my personal thoughts. If other reviewers and the AC consider this an easy-to-read paper, I fully agree with accepting the paper. 
Currently, I will maintain my score, and I will discuss this issue during the subsequent Reviewer-AC Discussions. --- Rebuttal 2: Title: Post-rebuttal discussion comment to reviewer 7CCs Comment: We thank the Reviewer for the answer and additional clarifications about his/her position. We also understand and respect that the Reviewer finds *readability* the main weakness of our work. Believing in the role of peer-review not just as a filter – but for the betterment of submitted works – we tried to address the specific issues raised in the original review in accordance with their nature: - When referring to the *conceptual* structuring of the paper or the lack of descriptive contextualisation (*i.e.:* *weaknesses 1.a, 2.b*), we referenced the specific passages of the paper addressing those points. We also provided a more justificatory explanation of our choices, in the light of the overall goal of our paper and the need to balance those descriptions with the introduction of the (many) novel aspects of our method. - When referring to phrasing or technical/typographic aspects of presentation (*i.e.:* *weaknesses 1.b, 2.a*), we tried to accommodate the Reviewer’s suggestions as much as possible – recognising the improvement in clarity they provide. We refer to our already-submitted rebuttal for the actual discussion of the points just mentioned. We also recognise that the Reviewer may have found more issues while reading the paper than those explicitly stated. While the latter may be *‘sufficient to demonstrate that the overall readability of the paper is weak’* (at least in its initial version) – such a choice may inadvertently prevent us from better addressing those clarity concerns. Finally, we thank the reviewer for his/her clear statement on paper acceptance, and for the willingness to actively engage in reviewer-reviewer and reviewer-AC discussion.
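The worst-case evaluation convention discussed in the rebuttal above (reporting robust accuracy under the strongest of several attacks, as AutoAttack does) amounts to a per-sample logical AND across attacks. A minimal sketch with made-up per-attack outcome masks (the attack names are illustrative placeholders):

```python
import numpy as np

# Hypothetical per-sample outcomes: True = still classified correctly
# under that attack (rows: attacks, columns: test samples).
attack_outcomes = np.array([
    [True,  True,  False, True,  True],   # e.g. APGD-CE
    [True,  False, True,  True,  True],   # e.g. APGD-T
    [True,  True,  True,  False, True],   # e.g. FAB-T
])

# Worst case: a sample counts as robust only if it survives every attack.
worst_case = attack_outcomes.all(axis=0)
robust_acc = worst_case.mean()
print("per-attack acc:", attack_outcomes.mean(axis=1))
print(f"worst-case robust acc: {robust_acc:.2f}")  # → worst-case robust acc: 0.40
```

Note that each attack alone scores 0.80 here, yet the worst-case accuracy is lower — which is why per-attack numbers cannot simply be averaged when comparing defences.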
Summary: This study proposes a novel adversarial defense method called CARSO. CARSO consists of two models: a classifier and a purifier. The classifier is (pre)trained to correctly classify possibly perturbed data. The encoder of the purifier is trained to generate a latent space from the internal representation of the classifier and the original (possibly perturbed) input. The decoder of the purifier is trained to reconstruct a sample from the latent representation and the internal representation of the classifier. The final prediction is determined by aggregating the outputs of the classifier for reconstructed data. Detailed procedures are summarized as follows: - The classifier is always kept frozen. Other parts, including the VAE and small CNNs for compression, are trained on a VAE loss consisting of a reconstruction loss based on a pixel-wise channel-wise binary cross-entropy loss and KL-div. - The internal representation and input are compressed by small CNNs before being inputted into the encoder of the purifier. - The classifier is pretrained according to [18] or [62]. - When training the purifier, each batch contains both clean and adversarial samples. - The aggregation is represented by a double exponential function. - Evaluations are conducted under $L_\infty$ attacks. Strengths: - The concept of blending adversarial training and purification is novel and interesting. The proposed method, CARSO, achieves robust accuracy that surpasses the SOTA adversarially trained models and purification methods, including diffusion-based models, despite its relatively simple mechanism. - The evaluation was carefully conducted. The authors explicitly address common pitfalls in evaluating robustness. For example, they conducted end-to-end validation (full whitebox setting), addressed concerns about gradient obfuscation, and used PGD+EOT to address the stochasticity of CARSO. - CARSO can utilize existing pretrained models, which have already achieved high robust accuracy. 
- A wide variety of datasets (CIFAR-10, CIFAR-100, and TinyImageNet-200) were used for evaluation. Weaknesses: **1**. In my opinion, the claim that CARSO surpasses the used adversarially trained model seems questionable. If my understanding is correct, during inference, the decoder takes class information only from the internal representation of the classifier. Thus, I believe the decoder can correctly reconstruct the sample only if the classifier, outputting the internal representation, can correctly extract class information from the original perturbed sample. Could the authors clarify this? Note: Initially, I doubted whether some experimental or evaluation settings were appropriate. However, as far as I can tell, there are no issues. Just in case, I recommend the authors review their source code again. **2**. CARSO sacrifices clean accuracy more significantly than existing SOTA methods. Additionally, to compare CARSO and the best AT/purification models in terms of clean accuracy, Table 2 should include the clean accuracy of the best AT/purification models (i.e., the contents in Table 15). The scenario or dataset columns in Table 2 might not be necessary. **3**. Few ablation studies. The authors should include the case of $L_2$ perturbations and use internal representations from different layers. Particularly, the relationship between the layers used for extracting representation and robust accuracy is of interest. Technical Quality: 3 Clarity: 2 Questions for Authors: Minor comments: - The authors should standardize the meaning of each symbol. If I understand correctly, $i$ represents the sample index in Figure 1 but represents the class index in Section 4.5, leading to low readability. - In Figure 1, $\mathcal{L}_{VAE}$ is not defined. I believe it is first explained in Line 270. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors explicitly addressed the limitations in Section 5.3. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
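The inference procedure summarized in this review (frozen classifier → representation-conditioned stochastic purifier → classify each reconstruction → aggregate) can be sketched at a high level. The toy below is a hypothetical numpy illustration: the stand-in classifier, purifier, and plain softmax averaging are placeholder choices, whereas the paper conditions a VAE decoder on real internal representations and uses a double-exponential robust aggregation.

```python
import numpy as np

rng = np.random.default_rng(2)

def classifier(x):
    """Stand-in frozen classifier: returns (logits, an internal representation)."""
    feats = np.tanh(x[:4])                        # pretend intermediate features
    logits = np.array([feats.sum(), -feats.sum(), feats[0]])
    return logits, feats

def purifier(x, feats):
    """Stand-in stochastic decoder: reconstructs the input conditioned on
    the classifier's internal representation (VAE-style latent sampling)."""
    z = rng.normal(scale=0.1, size=x.shape)       # latent sampling noise
    return 0.5 * x + 0.5 * np.concatenate([feats, feats]) + z

def carso_predict(x, n_samples=16):
    """Purify n_samples times, classify each reconstruction, aggregate."""
    _, feats = classifier(x)
    probs = np.zeros(3)
    for _ in range(n_samples):
        logits, _ = classifier(purifier(x, feats))
        e = np.exp(logits - logits.max())         # per-sample softmax
        probs += e / e.sum()
    return int(np.argmax(probs))

print("predicted class:", carso_predict(np.ones(8)))
```

Since every step here is differentiable, the sketch also illustrates why this design admits the exact end-to-end white-box evaluation the reviewers credit.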
Rebuttal 1: Rebuttal: We thank the Reviewer for his/her constructive observations, and for the care put into writing the review. We will gladly comment upon all points raised, in a similar list-based format. 1. **On the superiority of CARSO *w.r.t.* AT classifier baselines.** With respect to the claim that CARSO surpasses in empirical robust accuracy the individually-considered adversarially-trained *classifier* models it employs – we ultimately refer to experimental evaluation whose results are contained in *Table 2* (specifically: the comparison of columns `AT/AA` and `C/rand-AA`). Were the claim untrue – within the experimental scope considered – we would have observed a lower or equal `C/rand-AA` robust accuracy in comparison to the one reported under `AT/AA`. Instead, the use of CARSO determines its marked increase, ranging from $+8.4\%$ (CIFAR-10) to $+27.47\%$ (TinyImageNet-200). Such observations alone would be sufficient to substantiate our claim. A possible explanation of such results may be found in the justification of the method provided in *Subsection 4.2*, and in the use of a robust aggregation strategy (described in *Subsection 4.5* and *Appendix D*), that *plain* AT classifiers both lack. Specifically, adversarial attacks against any classifier target the distribution of logits contained in its last layer (whose $arg\ max$ usually constitutes the predicted class). Consequently, the concern of the reviewer about overall robust accuracy being limited by that of the *classifier* would have been justified only if such last-layer representation had been used as the whole *internal representation* of interest. Instead, the logits layer is not even used as part of the conditioning set of the *purifier* (see *Table 5*, *Appendix E.2*). 
The conditioning set chosen (*i.e.* the representations at intermediate layers of the *classifier*) does not even include proper *class* information, but just a collection of features that the decoder of the *purifier* learns to map to *clean* image reconstructions, under adversarial noise corruption against the *classifier*. In such a setting, an adversary against CARSO would need to target the last-layer logits of the classifier (used to finally perform class assignment via robust aggregation) only by attacking multiple intermediate layers of the very same classifier, and through the decoder that uses them as input, in addition to the classifier itself. This may ultimately make CARSO a harder target to fool, in comparison to the *classifier* alone. 2. **On the significant *clean* accuracy toll.** It is true, and we transparently recognise (see *Subsection 5.3*), that the specific version of CARSO evaluated in our work (*i.e.* using a VAE as generative purifier) imposes a heftier clean accuracy toll in comparison to existing methods (either AT-based or using diffusion/score-based models as generative purifiers). Such results ultimately depend upon the deliberate choice to assess the feasibility of the *idea* behind CARSO (*i.e.* blending AT and purification via representation-conditional purification and robust aggregation), and the robustness it produces, in the best-possible scenario for the attacker. In such a light, the VAE-based purifier and the chosen robust aggregation strategy ensure exact end-to-end differentiability for the whole model. This guarantees that the evaluation is not dependent on approximated backward models, which can only provide a robustness upper bound and are more susceptible to gradient obfuscation. 
As we mention in *Subsection 5.3* and in the *Conclusion*, we are interested – and actively pursuing research – in different architectural choices for the purifier in CARSO and CARSO-like models, which may result in much more competitive clean accuracy, though making rigorous robustness evaluation more challenging with current tools. We thank the Reviewer for the suggestions related to *Table 2*, and will definitely include in it the contents of *Table 15*, as a way to enhance transparency and clarity in the presentation of results. 3. **On ablation studies and further experiments.** We are aware that the paper provides little space to ablation studies or experimental settings different from empirical $\ell_{\infty}$ adversarial robustness evaluation. The additional assessment of $\ell_{2}$ robustness has been excluded at this stage due to the generally more demanding challenges offered by $\ell_{\infty}$; we recognise, however, the significant added value it may contribute to our work. We are also particularly interested – and pursuing active research – in how the choice of layers to be used as conditioning set influences overall robustness. As noted in *Subsection 5.3*, we are planning further work in such realm. **Answers to *minor comments*** We thank the Reviewer for the precise remarks, which allow us to improve the clarity and legibility of the paper. In response to such observations, we have made the following edits to our manuscript. - In *Subsection 4.5* and *Appendix D* (where the robust aggregation strategy is described), the class index is now referred to as $c$, leaving index $i$ to reconstructed sample multiplicity, as shown in *Figure 1*. - The symbol $\mathcal{L}_{\text{VAE}}$ is now referred to in the caption of *Figure 1* as *VAE Loss* and the reader is explicitly redirected to *Appendix B* for a formal definition of it. --- Rebuttal 2: Comment: I appreciate the authors' clarification. In conclusion, I will maintain my rating of Weak Accept. 
Additionally, since the clarification addressed several unclear points, I am increasing my confidence score from 4 to 5. My detailed thoughts are as follows. I believe that the CARSO proposed in this research presents a novel adversarial defense approach. Utilizing the internal representation of an adversarially robust model for conditioning a purifier is, in my opinion, a novel contribution beyond the trivial combination of adversarial training and purification. The idea of combining the two mainstream adversarial defense strategies—adversarial training and purification—and the resulting outcomes are likely to be of great interest to the community. Although there is a trade-off in clean accuracy, the achieved robust accuracy significantly surpasses the state-of-the-art, making it worthy of evaluation. Moreover, the end-to-end evaluation, investigation into gradient obfuscation, and use of PGD-EOT clearly address naturally arising questions from this approach (particularly the use of purifiers), which I found to be a solid evaluation. As far as I am aware, the primary weaknesses of this research, as acknowledged by the authors, include the lack of certain experiments and the decrease in clean accuracy. For more details, please refer to my Weaknesses 2 and 3. However, regarding the former, I recognize that critical results demonstrating the method's effectiveness were sufficiently provided. There may also be areas for improvement in the presentation. While I did not find it difficult to understand the goal and concept of the study, I encountered some challenges in fully grasping the flow of the methodology. Revisiting the structure of Section 4 could enhance the quality of the paper. Considering all these strengths and weaknesses, I will keep my rating. P.S. In addition, considering the authors' rebuttal for my Weakness 1, I increased soundness from 2 to 3.
Summary: This paper integrates adversarial training and adversarial purification to enhance robustness. It specifically maps the internal representation of potentially perturbed inputs onto a distribution of tentative reconstructions. These reconstructions are then aggregated by the adversarially-trained classifier to improve overall performance. Strengths: The idea of combining adversarial training and adversarial purification is interesting. Weaknesses: 1. The experiments are too weak. I hope the authors can refer to at least [1][2][3], which are relevant to adversarial purification, to conduct experiments from more dimensions and consider more baselines and fundamental experiments. 2. Could we just combine [1] with an adversarially-trained model to achieve similar performance? 3. Why should the classifier be adversarially trained for better accuracy? 4. Why can't we directly purify the image? Could we use an image-to-image method to purify the input image? [1] DISCO: Adversarial Defense with Local Implicit Functions. [2] Diffusion Models for Adversarial Purification. [3] IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks. Technical Quality: 2 Clarity: 2 Questions for Authors: The first two points mentioned above are my key concerns. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The method heavily relies on training a VAE as the generative purification model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for his/her observations. Firstly, we would start by pointing out an inaccuracy in the summary of our paper. The tentative reconstructions of input images generated by the *purifier* are not aggregated by the *classifier*. Indeed, the *classifier* processes them independently from one another, outputting a distribution of logits for each. Such distributions are then aggregated by the *robust aggregation strategy* described in *Subsection 4.5* (justified in *Appendix D*), which constitutes an integral part of our method and a core contribution to its robustness. With respect to the weaknesses identified: 1. **Weak experiments; additional baselines.** We believe our experimental choices to be consistent with the goal of our paper: *i.e.*, to show that it is possible to synergistically blend adversarial training and purification in a novel and meaningful way. The resulting model attains the *state of the art* in a hard adaptive benchmark across datasets, against the best AT- and purification-based defences. Remarkably, our evaluation relies on exact *end-to-end* differentiability (and not on best-effort approximation, such as BPDA), and explicitly accounts for the stochasticity of the method (by using randomness-aware AutoAttack, which relies on EoT). Our evaluation also guards against common pitfalls of adversarial defences in general (*e.g.* gradient obfuscation, see *Table 3*) and diffusion-based purification methods (such as the mentioned *DiffPure*, vulnerable to pitfalls identified in [4] and [5]). Referring to the specific models, [2] is directly surpassed by [5] (which reconsiders adversarial evaluation of diffusion-based models in the light of [4]), to which we directly compare. We thank the reviewer for the suggestion of [1] and [3], concerned with transformation-based defences and using BPDA to assess adaptive white-box accuracy (reasons for which they have been excluded from direct comparison in the first place). 
Indeed, this allows us to show once more the effectiveness of CARSO. Even with the advantage given by the use of attacks against approximated models, we surpass their reported best robust accuracy in commonly-employed datasets. More specifically: - Cascaded DISCO (k=5) [1] on CIFAR-10 attains a $\leq 0.5777$ *RA*, compared to our $0.7613$; - IRAD [3] on CIFAR-10 attains a $\leq 0.7432$ *RA*, compared to our $0.7613$; - IRAD [3] on CIFAR-100 attains a $\leq 0.6278$ *RA*, compared to our $0.6665$. If deemed appropriate, we will gladly add such comparisons to *Tables 2* and *15*, or in a dedicated appendix. As far as the further remarks are concerned, the vague and non-specific wording prevents us from addressing specific issues related to our method. 2. **Combination of DISCO and AT.** We are unaware of any published paper or experimental evidence that either confirms or disproves whether a hypothetical combination of DISCO and adversarial training would be effective, especially at the levels of robust accuracy attained by *state-of-the-art* models (including ours). We thank the Reviewer for the suggestion, definitely worth investigating in the future. We must admit, however, that we do not consider such a suggestion a *weakness* of our method, whose framing, formulation, and assessment are independent of it – and different in scope. 3. **On the adversarial training of the classifier.** As with point (2), we hardly see the Reviewer’s question as a weakness of our method. The *purifier* used as part of CARSO maps *internal features* of the classifier to image reconstructions. As such, under adversarial noise, the use of a classifier (and, as a consequence, of its internal features) that is the most invariant under such perturbations stabilises and enhances the robustness of the *purifier*. 
Reciprocally, the use of a (same, in our case) classifier trained under noisy corruptions to finally classify purified reconstructions makes the overall process more robust also to non-adversarial artifacts the purification process may introduce. This, of course, comes at the cost of a decreased *clean* image accuracy, as we note. 4. **On direct image purification.** As with points (2) and (3), once more, we do not believe the Reviewer’s question constitutes a weakness of our method. Firstly, *Subsection 4.2* (lines 197 to 202) already provides a preliminary answer to such point. In detail, *direct* image purification (*i.e.* the mapping of a perturbed to a tentatively clean image) is already the leading scheme used in legacy (*e.g.* [6]) as well as modern (*e.g.* the already mentioned [2] and [5]) adversarial purification methods. In a broader sense, [1] and [3] may also be included in such class of techniques. In the case of [6] (which uses VAEs as purification models), the approach even resulted in worse robustness compared to the classifier alone. In all remaining cases, the experimental evidence we provide shows that none of such methods is able to provide a robust accuracy better than ours, within the experimental scope considered. Finally, we want to address the *heavy reliance on VAE training* of our method. As noted in *Subsection 4.1* (lines 154 to 156), the purifier being a VAE is an *in*essential part of CARSO as a general method: *“any model capable of stochastic conditional data generation at inference time”* would suffice. In addition, the use of a VAE ensures that the attacker has access to exact *end-to-end* model gradients for evaluation, increasing the strength of our experimental setup. **References** [1], [2], [3]: as mentioned by the Reviewer. [4] Lee, Kim: ‘Robust Evaluation of Diffusion-Based Adversarial Purification’, 2024. [5] Lin et al.: ‘Robust Diffusion Models for Adversarial Purification’, 2024. 
[6] Gu, Rigazio: ‘Towards Deep Neural Network Architectures Robust to Adversarial Examples’, 2015. --- Rebuttal Comment 1.1: Comment: > Inaccuracy in summary. I didn't mean that these reconstructions are directly aggregated by the classifier at the image level, as it's clear that the classifier cannot achieve this. I also acknowledge that the robust aggregation strategy is intriguing. > Weak experiments; additional baselines Apologies for the lack of clarity in my previous statement. By 'weak experiments,' I don't just mean that more baselines should be compared; I'm also suggesting that more models should be used in addition to WideResNet-28-10. Furthermore, the impact of 'adversarially-balanced batches' and other technical details hasn't been thoroughly explored. Thus, the performance comparison with the current best AT-trained model doesn't seem entirely fair. Furthermore, I hope the experiments can provide more insight into the positioning of the proposed method. For instance, the advantage of adversarial purification is that it avoids changes to the original model and can be adaptively deployed across different models. On the other hand, adversarial training can reduce extra time consumption during inference. I would like to see the pros and cons of this method clearly outlined in your paper. Specifically, I expect to understand what benefits are gained from adding purification to AT and what trade-offs are made when integrating purification, rather than simply demonstrating the method's potential effectiveness. It's important to understand under which scenarios it is most effective. Regarding the goal of your paper—to show that it is possible to synergistically blend adversarial training and purification in a novel and meaningful way—I believe that simply demonstrating this possibility is not sufficient for acceptance in this venue. Compared to previous work, you need to showcase advantages across a variety of scenarios. > Combination of DISCO and AT. 
My intention in questioning this is to understand why a straightforward combination of existing methods like DISCO and AT wouldn't work just as effectively, given that it seems simpler. I'm curious about the motivation behind your specific blending approach. --- Rebuttal 2: Title: Post-rebuttal discussion comment to reviewer WKot (part 1 of 2) Comment: We thank the Reviewer for the clarifications about his/her review. We will address the issues further specified in a list-based format. - **Experiments with *classifiers* other than *WideResNet-28-10*.** With respect to the *classifier* model – to be used as part of the CARSO architecture – we tried to strike a balance between using a reasonably-performant pretrained AT model (representative of modern AT-based approaches) and keeping model size under control. Indeed, as larger adversarially-trained models practically always perform better in terms of *clean* and adversarial accuracy (keeping the training protocol fixed), the use of smaller *classifiers* within CARSO would increase the strength of experimental results. This is indeed the case, as we achieve better-than-*SotA* robust accuracy across our whole experimental scope. To support the significance of such choice in the context of AT, the following statistics can be gathered from the RobustBench entries for $\ell_{\infty}$ robustness (commit `776bc95bb4167827fb102a32ac5aea62e46cfaab`): - **CIFAR-10**: $22\%$ are *WideResNet-28-10*s; $54\%$ are deeper and wider *(Wide)ResNet*s; $6\%$ are deeper (but narrower) *ResNet*s, with an overall larger number of parameters; $4\%$ are transformer-based models with a larger number of parameters; $14\%$ are *(PreAct)ResNet-18*s. - **CIFAR-100**: $14\%$ are *WRN-28-10*s; $65\%$ are deeper and wider *(W)RN*s; $8\%$ are transformer-based models with a larger number of parameters; $13\%$ are *(PA)RN-18*s. 
- **Impact of *adversarially-balanced batches*.** Though the impact of *adversarially-balanced batches* (*ABBs*) has not been thoroughly explored, a heuristic justification for its use, within the training of CARSO, is provided in *Appendix C*. With respect to the use of *ABBs* in the adversarial training of classification models that use AT as the only technique for robustness enhancement, we refer to [1] whose crucial aspects *w.r.t.* AT are reported in *Appendix A*. The requirement of worst-case perturbations in the *inner optimisation* step theoretically discourages the use of non-worst-case examples (as would be the case of FGSM-generated examples) or non-entirely-perturbed batches. As such, *usual* PGD-based adversarial training – and derived techniques – remain the theoretically-recommended way to achieve robustness in the *end-to-end* training from scratch of classifier models. Since we train only the *purifier* part of CARSO with *ABBs*, such considerations do not apply in our case. - **Fairness of comparison with the best *AT* method.** Given the analysis previously provided, and the inclusion of a direct comparison between CARSO-based models and their *classifier* models alone, we believe the *additional* comparison of CARSO-based models with the currently best-performing AT-based techniques to be fair within the experimental scope of interest. Especially so, given that the best-performing of such models have a larger number of trainable parameters – and an even larger one in the case of purification-based defences – *w.r.t.* CARSO. - **Positioning of CARSO and related experiments.** In the design of experiments and their presentation within our work, we focused mainly on the specific measurement of empirical robust accuracy as the means of comparison with other existing approaches. 
While it is true that *those particular experiments* do not provide a clear-cut pros/cons analysis, the paper outlines some issues with existing AT- and purification-based approaches (*Sections 1* and *2*, and *Subsection 5.2*), and the differences between them and CARSO (*Subsections 4.2, 4.3, 4.4, 4.5* and *Appendices C* and *D*). - **Goal of our work.** We believe that the most precise description of the goal*s* of our work is contained in the *Abstract*, *Section 1* (the *Introduction*) and *Section 6* (the *Conclusion*) when considered altogether. Specifically, the direct quote from our rebuttal, *i.e.* *‘to show that it is possible to synergistically blend adversarial training and purification in a novel and meaningful way’* is indeed *one* of our goals (as stated *e.g.* in the penultimate paragraph of the *Introduction*) – yet, it is hardly the only one our work achieves. Indeed, we show that not only our approach is viable, but we also do so in a deliberately hard scenario: in terms of $\ell$ norm-bound, attack choice, and requirements of *end-to-end* differentiability (which in turn allow for approximation-free assessment). Yet, we are able to attain the state of the art in one of the most stringent robustness benchmarks available (*randomness-aware AutoAttack*) – against any kind of existing models, even those developed well outside the compliance to our requirements. 
Without performing an actual experiment – which is outside of the scope of our paper in its current form – it would be impossible to determine whether DISCO+AT (or similar) methods are equally viable from an empirical viewpoint, or whether they would be susceptible to known (or novel!) failure modes. For sure, the structure of DISCO prevents exact *end-to-end* algorithmic differentiability, and thus forces reliance on BPDA for proper attacks. As such, it offers a less demanding evaluation scenario, and an ineliminable robustness overestimation *w.r.t.* CARSO. **References** [1] Madry et al., *Towards Deep Learning Models Resistant to Adversarial Attacks*, 2018. [2] Gu & Rigazio, *Towards Deep Neural Network Architectures Robust to Adversarial Examples*, 2015. --- Rebuttal Comment 3.1: Comment: Thank you for your detailed and patient response; I appreciate your efforts. I acknowledge that this is an interesting paper, and I want to emphasize my opinions on a few points: 1. More experiments are needed across a broader range of models, including an ablation study, to demonstrate the effectiveness of your method in a well-established manner. While I haven't explicitly mentioned using higher-resolution datasets, I believe that using ImageNet might be too inefficient for your approach. As I have repeatedly emphasized, additional experiments are crucial. I agree that the experiments support the goal of showing "that it is possible to synergistically blend adversarial training and purification in a novel and meaningful way." However, merely demonstrating this possibility is not sufficient. Stating, "we show that not only our approach is viable, but we also do so in a deliberately hard scenario," is just the most basic and necessary experiment to support your goal. 2. The presentation of this paper needs significant improvement, particularly in terms of both the expression used and the quality of the figures. 
--- Rebuttal 4: Comment: We thank the Reviewer for his/her prompt response, the willingness to engage in further discussion, and the interest in our paper. As for the specific contents of the Reviewer’s latest comment, we will address them in a similar list-based format, as usual. 1. We will provide split answers in the sub-list that follows, according to the specific conceptual issues raised. - **Broadening of the experimental scope.** We agree that *any* broadening of the experimental scope would constitute an improvement to our paper, as it would for any work of science. As far as a broadening in model variety is concerned – as the Reviewer mentions – we will refer to our previous *comment (part 1)*. As we have already shown – using the RobustBench accepted submissions as a representative *leaderboard* for adversarial training – our specific model choice for the *classifier* (*i.e.* a *WideResNet-28-10*), which is used internally by CARSO, is shared by $96\%$ of CIFAR-10 and $92\%$ of CIFAR-100 overall entries in terms of architecture (*ResNet* or derived), and directly comparable – in terms of both architecture and model-size lower bound – with the similar or larger models constituting $76\%$ of CIFAR-10 and $79\%$ of CIFAR-100 entries. Yet, against the whole set of models (including those not directly comparable to ours, due to architectural differences), we manage to obtain superior performance in terms of empirical $\ell_{\infty}$ robust accuracy. - **Ablation study.** While we did not present it in the paper under the term *ablation study*, we conduct one crucial such experiment as a way to assess – as the Reviewer says – the effectiveness of our method and, to a lesser measure, the trade-offs of our approach. Indeed, in *Table 2*, we directly compare the accuracy of existing AT-trained models developed for the goal of adversarial robustness with CARSO models using the very same AT-trained *classifiers* (up to weight values). 
We do so in terms of both *clean* (columns `AT/Cl` vs `C/Cl`) and robust (columns `AT/AA` vs `C/rand-AA`) accuracy, across three different datasets. As such, the comparison shows the direct effect of ablating away the entire additional structure we propose, whose results we lengthily commented upon: in brief, a marked increase in robust accuracy accompanied by a decrease in clean accuracy. - **Higher resolution datasets.** Acknowledging that this point is entirely novel *w.r.t.* the previous review and post-rebuttal comment of the Reviewer, we agree – as we already said – that *any* broadening of the experimental scope would be beneficial to our work. We also believe, however, that – since we introduce our method as an entirely original one – the existing amount of evidence we provide about its effectiveness cannot be simply dismissed on the basis of dataset resolution being $\leq 64\times64$ pixels. - **On the sufficiency of the goal of our paper.** As far as the later statements by the Reviewer are concerned – as we also already said – we agree that merely showing that our approach is viable – even doing so in a deliberately hard scenario – is not entirely sufficient. However, for some reason, the Reviewer fails to acknowledge the next part of the quoted sentence: we are able, with our method, to significantly improve upon the empirical adversarial robustness of *any* existing model pursuing the same goal – which has been tested according to AutoAttack and whose results have been made public by its Authors. While we still believe that further goals are yet to be achieved by our paper and by our models, we honestly do not consider those results as just the *‘most basic and necessary to support [our] goal’*. 2. **On the improvement of the ‘expressions used’ and the ‘quality of figures’.** With all due respect, we are quite surprised by the Reviewer’s observations in this regard. 
Not because we do not believe our paper can be affected by those issues, but due to the fact that the Reviewer voices these entirely new concerns so late in the review and discussion period. Given the very specific and technical nature of problems such as the choice of expressions, or the quality of pictographic content, knowing those specific terms and/or the aspects of poor quality in figures (of which there is only one, *i.e.* *Figure 1*) further in advance would have definitely allowed for pin-point interventions in the paper before the end of such discussion phase. Especially so in the case of pictures, which we could have submitted – according to the rules of the Conference – before the end of the rebuttal period. We lack the specifics required to further comment upon the issues identified by the Reviewer.
Rebuttal 1: Rebuttal: We thank all reviewers for their time and useful remarks. We would like to use this space to clarify once more, in an explicit fashion, the goals of our work. As stated in the *Abstract*, *Section 1* (the *Introduction*) and *Section 6* (the *Conclusion*), our first and foremost aim was that of introducing a novel approach to obtain adversarially-robust classification models in the context of deep learning. Such method (CARSO) is based upon the non-trivial architectural blending (*i.e.* different from simple model juxtaposition or chaining) of adversarial training and adversarial purification, together with a specific *robust aggregation strategy* of the multiple purified inputs whose classification finally constitutes the robust prediction of interest. We believe such aspects of novelty to be the most interesting element of our proposal. Nonetheless, an empirical assessment of the method proposed – in a specific realisation (notably: using a VAE as the generative *purifier*) – is carried out in the $\ell_{\infty}$ norm-bound scenario, using images as input and the standardised AutoAttack routine as the *adversary* of choice. A comparison – in the very same setting – with the adversarially-trained *classifiers* used as part of CARSO, and with the overall best-performing AT-based and purification-based methods (in terms of adversarial robustness, and to the best of our knowledge) is also provided – and it shows the superior robust accuracy of CARSO in all scenarios considered. This comes at the cost of decreased *clean* classification accuracy, as we transparently recognise. Such decrease, however, cannot be considered separately from the choice to use a VAE as the generative *purifier* of the specific CARSO model we employed. In turn, such choice was deliberate and determined by two self-imposed requirements: to assess our method in the worst-case scenario for the defender and to avoid gradient approximation in the process of attack. 
In such light, a VAE *purifier* offers exact end-to-end differentiability for the resulting model (all BPDA-based attacks do not) and a more than manageable computational load for the attacker (thus preventing the need of surrogate-based attacks, as it is the case for most diffusion-based purifiers). In a similar spirit, we explicitly performed gradient obfuscation diagnostics, to ensure the most rigorous robustness testing. Finally, it is not our intention to portray CARSO as a *final* and *definitive* method. Instead, we are much interested and actively pursuing research in the development and testing of CARSO and CARSO-like models whose purifier is not a VAE, their trade-offs, and the challenges they pose for a tight-bound robustness evaluation. Additionally, we believe further insight into the role of specific *classifier* layers to be used as *internal representation* in the structure of CARSO to be worth pursuing. **Changelog of the Manuscript** In the subsection that follows, we summarise all minor changes to the manuscript prompted by Reviewer’s comments. We hope in such way to increase presentation clarity and reduce possible ambiguity. - At line `181` and line `225`, the following substitution is operated: `may be discarded` $\rightarrow$ `is discarded`. - The contents of *Table 15* (*Appendix F*) are moved to and merged with *Table 2*. - In *Subsection 4.5* and *Appendix D*, the class index (was: $i$) is now referred to as $c$, leaving index $i$ to reconstructed sample multiplicity, as already shown in *Figure 1*. - The symbol $\mathcal{L}_{\text{VAE}}$ is now referred to in the caption of *Figure 1* as *‘VAE Loss’*, with explicit reference to *Appendix B* for a formal definition of it. - Potentially, a comparison with best-performing *transformation-based* defences may be added to *Table 2* or to a dedicated appendix. In particular, as prompted by Reviewer `WKot`, we refer to the methods called DISCO [1] and IRAD [2]. 
**References** [1] Ho & Vasconcelos, *‘DISCO: Adversarial Defense with Local Implicit Functions’*, 2022. [2] Cao et al., *‘IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks’*, 2023.
NeurIPS_2024_submissions_huggingface
2024
On the Curses of Future and History in Future-dependent Value Functions for Off-policy Evaluation
Accept (poster)
Summary: This paper proposes new off-policy estimators for POMDPs without exponential variance with the horizon. Specifically, outcome coverage and belief coverage are assumed, which capture the past and future information, respectively. This framework reduces the estimation guarantees from exponential to polynomial. Strengths: 1. This paper gives a comprehensive theoretical analysis, which makes the proposed paradigm sound. 2. The improvement from exponential to polynomial is impressive. Weaknesses: 1. The new algorithm in Section 5.3 has not been studied thoroughly with experiments. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Is it possible to extend the analysis to infinite horizon? 2. Can you justify the full row rank assumption in Assumption 2? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors addressed the limitations in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their appreciation of our work and the valuable comments. > **”Is it possible to extend the analysis to infinite horizon?”** We believe the answer is yes. In fact, the work of Uehara et al. [2022a] (which we build on) is in the infinite-horizon discounted setting. However, as we discussed in Appendix C.2 (page 17), working with POMDPs and performing the kind of analyses we do in the infinite-horizon setting is really messy. (Just to give a taste: in the infinite-horizon setting, computing belief state requires us to trace back history indefinitely, and in the finite-horizon setting we only need to trace back to the beginning of the episode; see more detailed discussion in C.2) In contrast, the finite-horizon formulation is much cleaner and we are able to re-express the ideas and analyses in Uehara et al. [2022a] in much more elegant forms, which is significant given the complexity of the POMDP formulation (i.e., we want to simplify all aspects as much as we can, as long as it does not lose the essence of the setting). If one is willing to put up with the messiness of the infinite-horizon setting, we believe it should be possible to translate our results into the infinite-horizon setting that Uehara et al. [2022a] took. --- > **“Can you justify the full row rank assumption in Assumption 2?"** These assumptions are made for technical convenience, as mentioned in Line 108. Our main results still hold after some modifications even if they do not hold. Taking L2 belief coverage (Assumption 11) as an example: if $\Sigma\_{H, h}$ is not invertible, but $b\_h^{\pi\_e}$ lies in the subspace of $\Sigma\_{H, h}$, we can still define L2 belief coverage by replacing inverse with pseudo-inverse, and the rest of the analyses and the main results still hold. (If $b\_h^{\pi\_e}$ does not lie in the subspace, we can just define the coverage parameter to be infinite, i.e., no guarantee can be given.) 
Assuming full-rankness is just a convenient way to avoid dealing with these hassles. To recap, what really matters is whether (and to what extent) $b\_h^{\pi\_e}$ lies in the subspace of $\Sigma\_{H, h}$; the binary notion of whether $\Sigma\_{H, h}$ has full row rank does not really matter and is assumed only to simplify the presentation. Moreover, these full-rank assumptions are intimately connected to the notions of “core histories” and “core tests (future events)” in the PSR literature, from which the future-dependent value function framework draws inspiration (see their connection in Uehara et al. [2022a]). --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. I am maintaining my score.
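As a concrete illustration of the pseudo-inverse variant described in this rebuttal, here is a minimal numerical sketch. The toy beliefs and all variable names are hypothetical (not from the paper); `numpy.linalg.pinv` computes the Moore-Penrose pseudo-inverse:

```python
import numpy as np

# Hypothetical rank-deficient belief covariance Sigma = E[b b^T] over 3 latent
# states, built from two belief vectors, so rank(Sigma) = 2 < 3 (not invertible).
b1 = np.array([0.5, 0.5, 0.0])
b2 = np.array([0.2, 0.8, 0.0])
Sigma = 0.5 * (np.outer(b1, b1) + np.outer(b2, b2))

# A target-policy belief that lies in the column space of Sigma.
b_e = 0.5 * b1 + 0.5 * b2

pinv = np.linalg.pinv(Sigma)

# Range-membership check: Sigma @ pinv projects onto the column space of Sigma,
# so b_e is in the subspace iff the projection leaves it unchanged.
in_range = np.allclose(Sigma @ pinv @ b_e, b_e)

# L2 belief coverage with the inverse replaced by the pseudo-inverse; set to
# infinity when the belief leaves the subspace, matching the rebuttal's convention.
coverage = float(b_e @ pinv @ b_e) if in_range else float("inf")
```

A belief with mass outside the spanned subspace (e.g. `[0, 0, 1]`) fails the projection check, and the coverage parameter is then declared infinite, so no guarantee is given.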
Summary: This paper addresses off-policy evaluation in the context of POMDPs, aiming to develop estimators that avoid exponential dependence on the horizon. The paper introduces two novel coverage assumptions --- outcome coverage and belief coverage --- tailored to POMDPs to achieve polynomial bounds on estimation guarantees. Specifically, outcome coverage ensures boundedness of future-dependent value functions (FDVF), which targets the shift from $\pi_b\rightarrow\pi_e$, while belief coverage deals with IV ($\mathcal{B}^\mathcal{H}\rightarrow\mathcal{B}^\mathcal{S}$) and Dr ($\pi_b\rightarrow\pi_e$) jointly. The work leverages unique properties of POMDP coverage conditions, avoiding explicit dependence on the latent state space size. The MIS algorithm is proposed to provide interpretations concerning the sample complexity under these new assumptions. Strengths: - The paper is well-structured and exceptionally clear, making it easy to read. The problem formulation, assumptions and subsequent results are presented with great clarity. The mathematical derivations appear sound. - The new coverage assumptions are novel and effectively solve two non-trivial problems: (1) finding the counterpart of the bounded density ratio, the widely adopted coverage assumption for offline MDPs, in the context of POMDPs; (2) avoiding exponentials. - An information-theoretic algorithm, MIS, is provided to interpret the sample complexity. Weaknesses: Minor: I appreciate this work as a theoretical contribution. It would be even more promising if some preliminary experiments were conducted, as OPE using such a history weight function is a novel idea and its empirical performance cannot be anticipated. Technical Quality: 4 Clarity: 3 Questions for Authors: - As pointed out in the title, FDVF is the key tool to deal with OPE for POMDPs. I wonder if the authors could comment more on why such "future-dependence" is necessary to deal with partial observation?
- From my interpretation, it seems that the $L_\infty$ assumptions should be the looser ones and can better accommodate the PO. Why are the $L_2$ assumptions emphasized, or do they offer any unnoticed benefits? - Minor: in line 312, should it be $d^{\pi_e}$? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The proposed method is limited to handling memoryless policies and history-dependent policies with structured constraints, but this limitation does not hurt the paper's contributions considering the hardness explained in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their appreciation of our work and the valuable comments. > **”Why is such ‘future-dependence’ necessary to deal with partial observation?”** As we mentioned in the paper, the most obvious thing to try is to use history-dependent value functions by converting the POMDP into a history-based MDP. However, a straightforward extension incurs the curse of horizon, which shows that some new ideas are needed to overcome this difficulty. We did not claim that “future-dependence” is absolutely “necessary”, just that it is a promising idea for addressing this issue. Future-dependence may not be the only idea that works, and other plausible approaches remain to be explored in future work. --- > **”From my interpretation, it seems that $L\_\infty$ assumptions should be looser ones and can better accommodate the PO. Why are the $L\_2$ assumptions emphasized, or do they offer any unnoticed benefits?”** As the reviewer also noticed, $L\_\infty$ assumptions are looser than their $L\_2$ counterparts (but the $L\_2$ versions often require additional assumptions, e.g., Assumption 8). So whenever no additional assumption is needed, using $L\_2$ assumptions leads to tighter bounds and better guarantees (e.g., Theorem 7 depends on the $L\_2$ version of belief coverage, and no additional regularity assumption is needed). Another reason we emphasized $L\_2$ assumptions is their familiarity to the RL theory audience: they look very similar to coverage coefficients in the linear MDP literature (Line 246). On the other hand, $L\_\infty$ assumptions are very … alien; they can look problematic at first glance (Line 341), and their validity largely relies on the $L\_1$ normalization of belief vectors, a property rarely found in other settings in RL theory. Therefore, we start with $L\_2$ assumptions for a gentler introduction, and describe $L\_\infty$ later as an improvement.
--- > Line 312 It is indeed a typo; thanks for pointing it out! --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. I am maintaining my original score and remain in favor of acceptance.
Summary: This paper studied the finite-sample guarantee of the future-dependent value function (FDVF) based method for the policy evaluation problem in POMDPs. The existing guarantee depends on the boundedness of the FDVF, which can be exponential in the horizon. The authors studied this quantity and proposed new coverage assumptions, with intuitive explanations, under which the FDVF can be well bounded, achieving a guarantee polynomial in the model parameters. They also quantified a conversion ratio between the Bellman residuals on states and on histories. Strengths: 1. The paper clearly pointed out the problem of the existing sample guarantee (Theorem 2) along with illustrating examples, such as that the boundedness of $C_\Xi$ can be exponential in the horizon. 2. The assumptions proposed are intuitively interpretable and successfully solve the problem. Weaknesses: 1. For the boundedness of the FDVF, there isn't a clear discussion on the strictness of the assumptions in all cases except in some intuitive examples. A claim that $C_{\mathcal{F}, V}$ is polynomially bounded in all cases is needed. 2. To claim a fully polynomial guarantee, the paper should discuss $C_\Xi$, since it also appears in the sample guarantee and can dominate $C_{\mathcal{V}}+1$. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments. > **“For the boundedness of FDVF, there isn't a clear discussion on the strictness of the assumptions in all cases except in some intuitive examples. A claim that $C\_{F, V}$ is polynomially bounded in all cases is needed.”** $C\_{F, V}$ is the (L2) outcome coverage. As its name suggests, it is a _coverage_ parameter, which describes the extent to which the data (sampled from the behavior policy) contains information about the target policy. Coverage parameters will **not** be bounded in all cases. For example, in the MDP literature, (the boundedness of) the state-density ratios are a standard form of coverage parameter, which can easily be infinite under a poor offline data distribution. Nevertheless, we view the state-density ratio as a “polynomial quantity” because there are natural settings where the cumulative importance weights are exponential yet the state-density ratio remains small (see, e.g., the example on page 3 of Liu et al. [2018]: “Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation”). Indeed, the wide acceptance of the state-density ratio as an appropriate coverage parameter is precisely established by studying “intuitive examples”, just as we did in this paper. Note that this is a major difference between MDPs and POMDPs (and not realizing the difference might be why the reviewer made this comment in the first place): in MDPs, value functions are always bounded and have nothing to do with the offline data distribution. In the future-dependent value function (FDVF) framework, however, these FDVFs are properties of both the behavior and the target policies (Line 131), and their existence and boundedness depend on a new form of coverage not seen in MDPs (Line 248), namely the outcome coverage. --- > **“the paper didn't discuss on $C\_{\Xi}$”** $C\_{\Xi}$ is assumed to be bounded by $c (\\\|V\_F\\\|\_\infty+1)$ (Line 327), with $c$ being an absolute constant.
The rationale is exactly the same as for $C\_V$ (see below), so we omitted the explanation for $C\_{\Xi}$ due to the space limit at submission time. That said, the reviewer is right that we should have discussed this explicitly, and we will add it in the revision. Recall that for $C\_V$, we wrote in Line 322: > To highlight the dependence of $\\\|V\_F\\\|\_\infty$ on the proposed coverage assumptions, we follow Xie and Jiang [2020] to assume that **the range of the function classes is not much larger than that of the function it needs to capture** [i.e., $C\_V \le c \\\|V\_F\\\|\_\infty$ for an absolute constant $c$, as in Line 327]. We made a similar assumption for $C\_{\Xi}$ in Line 327 for exactly the same reason: the functions that $\Xi$ needs to capture are $\\\{B^H V: V \in \mathcal{V}\\\}$ (the Bellman completeness in Assumption 6, Line 152). From the definition of $B^H V$ (Eq.(2)), we know that its boundedness is immediately provided by the boundedness of $V \in \mathcal{V}$ (i.e., $2C\_V + 1$). The final assumption on $C\_{\Xi}$ follows from combining this with the bound on $C\_V$ above. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts on the rebuttal. The elaboration sounds reasonable to me and I would like to increase my score to 6.
Summary: This paper studies off-policy evaluation in POMDPs and introduces two novel coverage concepts: outcome coverage and belief coverage. Outcome coverage uses a weighted norm to ignore unimportant futures in future-dependent value functions. Belief coverage is related to the covariance matrix of belief states under the behavior policy. The paper argues that these conditions are sufficient for the reward of the evaluation policy to be close to the estimated value function for the dataset generated by the behavior policy. Strengths: To be honest, this paper is quite far beyond my abilities with RL theory. I was able to identify two positive aspects. 1. The paper operates in the POMDP setting, and (as far as I can tell) does not assume the learning agent ever has access to the set of latent states S. If I'm incorrect about this, I ask the authors to let me know so I can fix my understanding. 2. The paper starts by suggesting that we will be looking for solutions in $\mathcal{F}_h$, but points out correctly that this will give us exponential dimensionality, and subsequently finds another approach with better properties. Weaknesses: The paper's main weakness is that it targets a very limited audience of theoreticians only. There is extremely little text providing intuition or any sort of motivation for the research questions, approach or solution. Despite significant familiarity with off-policy evaluation, importance sampling, POMDPs, etc., I was unfortunately unable to: - follow most of the paper, - understand the main questions being asked, - see what this would be useful for, - evaluate the reasonableness of the assumptions, - check the proofs, - appreciate the consequences of the results. In short, if the paper is intended to be read by anyone outside of a very limited audience, it will need to be substantially rewritten so as to be much more accessible. I'm not even sure if my summary is correct.
Technical Quality: 2 Clarity: 2 Questions for Authors: Can the authors please provide a summary of the work that a non-theoretician might understand and appreciate if they have a strong background in the empirical side of the relevant concepts? Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments. > **“the paper … does not assume the learning agent ever has access to the set of latent states S. If I'm incorrect about this, I ask the authors to let me know ...”** You are right. As in the standard POMDP setting, the latent states $s\_h$ are not included in the dataset received by the agent performing off-policy evaluation. This has also been made explicit in Lines 66 and 67. Both the behavior and the target policies also only operate on the observables, i.e., they cannot depend on the latent states (see Footnote 1). --- > **“The paper's main weakness is that it targets a very limited audience of theoreticians. There is extremely little text providing intuition or any sort of motivation for the research questions, approach or solution.”** We appreciate the comment about readability and will try our best to address it. But yes, this is a pure theory paper mainly aiming at the RL theory audience, a community with a strong presence at NeurIPS. We believe that the study of POMDPs, especially providing results in modern (offline) RL theory frameworks (i.e., providing sample complexity guarantees in terms of proper coverage parameters), is an important and understudied research direction. For MDPs, the research on understanding coverage assumptions (starting from the early works of Munos and Szepesvari between 2000 and 2010) has led to the growth of the offline RL theory community, and eventually to practical offline algorithms that are both theoretically sound and empirically effective (see e.g., ICML 2022 Outstanding Paper Runner-up: “Adversarially trained actor critic for offline reinforcement learning”). What our paper does can be viewed as similar efforts that lay down the theoretical foundations for POMDPs, which may inspire practical algorithms later.
We have articulated such a motivation in the context of (offline) RL theory research in the abstract, the introduction, and Section 3; see also our response below to your question on “summary for non-theoretician”. In the revised version, we will include additional text to further clarify the significance and rationale behind our research questions and proposed solutions. Furthermore, the POMDP theory literature has generally been known to be mathematically involved (e.g., the PSR literature, which the future-dependent value function framework draws inspiration from, has always been a niche yet important topic in RL), and distilling knowledge into an easy-to-understand form often takes multiple papers and years of effort in such research directions. We believe we have already made progress on this: in Section 3, we rewrote and presented the work of Uehara et al. [2022a] in much simpler forms compared to the original paper, forming a clean basis for our later investigation as well as future work in this direction. --- > **“Can the authors please provide a summary of the work that a non-theoretician might understand and appreciate if they have a strong background in the empirical side of the relevant concepts?”** In OPE in MDPs, methods that learn value functions have the theoretical advantage of paying state-density ratios as their coverage parameter, compared to importance sampling, which incurs exponential dependence. However, naive extensions and analyses to POMDPs erase such advantages. We identify novel coverage definitions and further develop the future-dependent value function (FDVF) framework to reinstate such advantages in POMDPs. --- Rebuttal 2: Comment: Thanks for the response. I perhaps spoke too strongly about what I called the very limited audience.
Let me amend my statement to simply say that this paper does not feel accessible to me, despite the fact that I am a practitioner with experience in many of the areas discussed here, and with plenty of theory knowledge as well (just not this kind of theory). This feels like a disadvantage for the paper, and it's unfortunately the only aspect I can comment on. There are theory papers I've read that don't feel this way, and I would encourage the authors to try their best to make this work as accessible as possible.
NeurIPS_2024_submissions_huggingface
2024
Treatment of Statistical Estimation Problems in Randomized Smoothing for Adversarial Robustness
Accept (poster)
Summary: This paper presents two statistical innovations on top of standard randomized smoothing. The first is employing a randomized Clopper-Pearson interval (instead of deterministic), which marginally increases the certified radius for a particular number of samples. The second involves improving sample efficiency by leveraging confidence sequences, resulting in a roughly 2x speedup over existing methods. Strengths: I'm torn on this work -- although it seems to be a technically sound improvement on the SOTA, the method is very involved for a relatively small benefit. The paper could also use some better exposition. I'm curious what other reviewers think. 1. This work leverages interesting statistical techniques to improve upon randomized smoothing, both in terms of certified radius and sample efficiency. 2. The theory is quite extensive and seems sound (although I have not checked the proofs in detail). 3. Certified robustness of classifiers is an interesting and topical problem. Weaknesses: 1. The authors should take care to proofread their work, which has a number of grammatical and structural errors. 2. The union-bound and betting-based confidence sequences seem to yield very similar results. I am not sure why both need to be included. 3. The paper flow is somewhat poor and difficult to follow. The authors should revise their manuscript to add better transitions between sections. 4. The proposed approaches include a great deal of complexity for relatively minimal improvement. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Why is the blue line in Figure 1(a) so jagged? 2. I don't understand the purpose of the bounds mentioned on line 189: "In the analysis that follows, we assume that the Bernoulli distribution has a success probability satisfying 0 < c < p < C < 1 for some constants c, C, and thus we can hide the dependence on it into ≍." 3. I have trouble understanding the plots in Figure 2. 
The text of the paper suggests that epsilon is the width, so why is it plotted from 0 to 1 in the bottom left plot? What exactly are these bounds converging to? I feel that these plots are explaining something important but I'm not sure what the experimental setup is here exactly. 4. The randomized interval in Definition 2.3 only differs from the deterministic interval in the "knife edge" case where the binomial distribution is exactly x. I'd expect the contribution from this term to become negligible as the number of samples grows -- why is this not the case in Figure 2b? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors do not discuss limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your review. We appreciate the comments and questions that will help improve the clarity of the paper. We would like to emphasize that the resulting (theoretically grounded) methods are reasonably simple ($\sim10$ lines of code) and the (heuristic) SotA uses $50\\%$ more samples, so the improvement is significant. Furthermore, we provide lower bounds suggesting that it is impossible to significantly improve on the estimation tasks. We are happy to provide further clarifications if needed, and we **kindly ask you to consider increasing the score** if you are satisfied with the responses. ## Weaknesses 1. **Grammatical errors** Sorry for this. We are aware of some of them and will try to eliminate them. 2. **Why are 2 methods included when they perform similarly?** We decided to include both methods for two main reasons apart from completeness: (1) Betting works better for easy problems while UB is better for the harder ones. (2) We do not only want to propose a method for randomized smoothing. We want the reader to understand the subtleties of the task so they can potentially use this framework in a different context. In certain settings it might be easier to adopt the UB-CS and elsewhere the betting one. 3. **Paper flow is difficult to follow** We tried to make the writing as clear as possible, and we suspect that some difficulty is inherent due to the technical complexity of the presented topics. We would like to increase the clarity of the paper; could you please share which parts were difficult to follow? 4.a **Approaches include a great deal of complexity** The derivations of the methods are sometimes involved, but the methods themselves are then rather straightforward to implement. On line 243 (top of page 7) we show both proposed algorithms and they can be implemented in 12 simple lines of pseudocode (Python code would be in almost a 1-to-1 correspondence, invoking the library function bisect).
The union-bound CS code is no more complex than the baseline (Horvath, 2022). 4.b **...for relatively minimal improvement** We want to stress that we consider the improvements of our methods to be significant. The SotA method (1) requires roughly $50\\%$ more samples than our methods, (2) cannot be used in the very large sample regime, and (3) is only heuristic. On the contrary, our method is optimal in a certain sense and is competitive with idealized methods that strictly outperform any realizable method for this task. Thus, we not only provide an empirically strong method outperforming SotA by $50\\%$, but also demonstrate that significant further improvements on this task are impossible. ## Questions 1. **Why is the blue line in Fig 1 jagged?** This is a property of the Clopper-Pearson intervals - that we wanted to demonstrate - resulting in them being conservative in general. We discuss it in the paragraph starting at line 138. In Example A.2 (and Figure 3) we work out a simple example for B(2,p) using elementary math, demonstrating why this behavior occurs. Briefly, in the setting of Fig 1a, $p_1=0.912$ and $p_2=0.933$ will **not** be contained in the upper confidence interval only when observing $100$ heads from $100$ tosses. This happens for $p_1$ with probability $\alpha_1 = 10^{-4}$ and for $p_2$ with probability $\alpha_2 = 10^{-3}$, which are then points on the jagged blue line. 2. **Why do we assume that there are some constants $c,C$ such that $0 <c <p <C < 1$?** (answer copied for Reviewer NzaR) This is a purely technical assumption for the clarity of exposition. This way we can hide the dependency of the width of the confidence interval on $p$ (so the width scales as $n^{-\frac12}$) and thus simplify the discussion of the width when $np \asymp 1$, where it exhibits complicated behavior (anything between $n^{-\frac12}$ and $n^{-1}$) covered in the referred book.
We move the discussion to the appendix and replace the paragraph in the main text by the following: For the simplicity of exposition, let the width of a confidence interval at level $1-\alpha$ with $n$ samples be $\asymp \sqrt{\log(1/\alpha)/n}$. This way, we hide the dependency on $p$ into $\asymp$. In full generality, the width of the confidence intervals exhibits many decay regimes between the rates $\sqrt{p(1-p)\log (1/\alpha)/n}$ (when $np \gtrsim 1$) and $\log(1/\alpha)/n$ (when $np \asymp 1$). Our algorithms capture the correct scaling of the confidence intervals. Further discussion is provided in the Appendix. 3. **Understanding Fig 2** Our bottom-left y-axis label might not have been the best choice; we will describe both left-hand panels in words in the captions. In the notation of Algorithms 1 and 2: in the top left we plot U_t - L_t, while in the bottom left we plot both U_t and L_t (i.e., top = width of CS, bottom = CS itself). We find the setting to be described exhaustively in the caption, but we needed to keep it short due to space constraints. We will put the following in the caption of the left part: "In the notation of Algorithms 1,2, the sequence of $U-L$ is in the top figure, while both sequences $U$ and $L$ are in the bottom figure." See the enclosed pdf for the new figure. 4. **Why are the gains from randomized CI not diminishing with increasing $n$?** The randomized and deterministic CIs are only equal when the realization of a uniform r.v. on the interval [0,1] is 1 (so almost surely the randomized ones are larger). The difference in widths gets smaller as $n$ grows; however, in Fig 1b we show the certified radii with Gaussian smoothing and so we plot $\Phi^{-1}(\underline p)$ ($\Phi$ is the Gaussian CDF).
It holds that $\lim_{p \to 1} \Phi^{-1}(p) = \infty$, so even when the absolute difference between the deterministic and randomized CI lower bounds of $p$ decreases, the difference between the certified radii roughly stays the same because $\Phi^{-1}$ grows quickly near $1$. This holds only when $\underline p \sim 1$, which is the relevant part of Fig 1b. --- Rebuttal Comment 1.1: Comment: Thank you to the reviewers for their clarifications. I'm raising my score to a 5, but am still not confident on this paper. Mostly I'm not sure that a 50% reduction in samples is significant, as randomized smoothing is very far from being practical in any sense without an order-of-magnitude conceptual breakthrough. This seems unlikely given that these approaches have been explored for a few years now. While this paper doesn't particularly excite me, perhaps the AC's tastes are different in this regard. Also as a small note: I recommend adding periods after all inline subsections (e.g., after "Related Work" in line 181). --- Reply to Comment 1.1.1: Comment: Thanks a lot! If you think that RS is impractical at this stage, then you can see our paper as a negative result, since our lower bounds prevent significant improvements for the considered task. On the other hand, we provide an adaptive estimation procedure that can be used for virtually any task related to randomized smoothing estimation, and we can draw samples one-by-one. Thanks again for your review; we will add the periods.
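The two numerical claims in this rebuttal (the jagged-coverage values $\alpha_1 \approx 10^{-4}$ and $\alpha_2 \approx 10^{-3}$ for Fig 1a, and the blow-up of $\Phi^{-1}$ near $1$ behind Fig 1b) can be checked with a short stdlib Python sketch. This is an illustrative reconstruction, not the authors' code; the constants 0.912 and 0.933 come from the rebuttal:

```python
from statistics import NormalDist

n = 100  # coin tosses, as in the Fig 1a setting

# A value p is excluded from the one-sided Clopper-Pearson upper interval only
# in the extreme event of observing n heads out of n, which has probability p**n.
alpha_1 = 0.912 ** n  # roughly 1e-4, as stated in the rebuttal
alpha_2 = 0.933 ** n  # roughly 1e-3

# Under Gaussian smoothing the certified radius scales with Phi^{-1}(p_lower),
# which diverges as p -> 1: equal-size gains in p matter more near 1.
Phi_inv = NormalDist().inv_cdf
gap_mid = Phi_inv(0.91) - Phi_inv(0.90)      # radius gain from p: 0.90 -> 0.91
gap_high = Phi_inv(0.9999) - Phi_inv(0.999)  # radius gain from p: 0.999 -> 0.9999
```

Here `gap_high` exceeds `gap_mid` even though the probability increment is an order of magnitude smaller, which matches the argument for why the randomized-interval gains in certified radius do not vanish as $n$ grows.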
Summary: This paper proposes sample-efficient methods for computing probabilistic robustness certificates for randomized smoothing. The proposed methods replace the standard Clopper-Pearson confidence interval on the classifier’s score, with a confidence sequence, thereby allowing the number of samples to be determined adaptively given a radius of certification $r$ and significance level $\alpha$. Two variants of the method are proposed: one that updates the sequence on a geometric schedule (shown to achieve the asymptotically optimal sample complexity) and another that adopts a betting strategy. The methods are shown to consume 1.5–3 times fewer samples compared to prior work (Horváth et al., 2022). Strengths: **Originality:** This paper imports methods from statistics/probability theory, which have not previously been applied in the context of randomized smoothing. The methods appear to be an excellent fit for improving the statistical efficiency (and hence computational efficiency) of randomized smoothing. It’s worth noting that the methods have been adapted, in that the bounds are specialized for Bernoulli random variables (rather than generic bounded random variables). **Significance:** The computational cost of randomized smoothing is a barrier to adoption, so it’s great to see work in this direction. The proposed algorithms for adaptively determining the sample size is relatively simple to implement which should encourage adoption. **Quality:** The method is well-motivated. The theory and experiments are generally well-executed, apart from some minor issues outlined below. **Clarity:** The writing is reasonably clear, apart from some issues discussed below. Weaknesses: **Clarity:** The paper is mostly clear, however some parts could be improved: - Section 3: Definition 3.1 seems out-of-place: it’s not clear that a generic formulation of sequential decision making is needed, if the experiments focus on randomized smoothing. 
Section 3.1 includes some discussion of the results, but is labeled “related work”. It would be good to add a few more sentences discussing the results. - Section 2.2: I find it confusing that symmetric/asymmetric confidence sequences are discussed before confidence sequences are defined (even informally). - Line 189: I found this paragraph confusing – I wonder if it could be explained more concretely (with an example) in an appendix. **Impact:** - The formulation of randomized smoothing in Section 2 covers additive smoothing mechanisms, where the certificate is an $\ell_p$-ball. I suspect the formulation could be generalized to capture non-additive smoothing mechanisms and more general certificate geometries without impacting the validity of the results. - The proposed method is not applicable if one wants to estimate the maximum certified radius at an input. I wonder whether the method could be adapted to estimate the maximum certified radius within some tolerance. **Minor:** - line 24: Provide citation for claim that randomized smoothing “is currently the strongest certification method” - line 62: “realizations **are** lowercase” - line 77: Extraneous closing bracket - line 88: “de-randomized” has not been defined yet. Consider defining earlier or providing a citation. - line 92: Provide citation for claim that the Clopper-Pearson interval is “well known to be conservative” - line 94: In what sense is the confidence interval “optimal”? In terms of coverage? - line 121: “even” → “event” - line 128: Is the case where u() = 0 an upper confidence interval? - Proposition 2.4: Coverage has not been defined for a randomized confidence interval. - line 180: Should $\in$ be $\subseteq$? - line 247: Provide citation for Ville’s inequality - Table 1 caption: Delete “of number of samples” - line 324: “improved them at places” is ambiguous. - line 328: I’m not convinced that we now have a “perfect” understanding of statistical estimation for randomized smoothing.
For instance, this paper has not considered the problem of estimating the smoothed classifier’s prediction. It’s possible there may be more sample-efficient ways of estimating the smoothed classifier’s prediction and bounds on the top-2 scores jointly. - “We stress out <some statement>” sounds unusual. It would be more natural to say “We stress <some statement>”. Technical Quality: 3 Clarity: 2 Questions for Authors: - Does the method apply to the more general formulations of randomized smoothing? - Could the method be adapted to estimate the maximum certified radius within some tolerance? This would cover another common use case. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations ought to be discussed more. For example, the experiments only cover one dataset/model, and the proposed method only improves sample complexity if the radius is fixed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your thorough review! We appreciate the insightful comments that will help with the paper. If we satisfactorily answer your questions and reservations, we kindly ask you to consider increasing your score. ## Questions * **Does it apply to more general formulations of smoothing?** Yes, our methodology can be directly applied to all[1] randomized smoothing works we are aware of, as they share the same estimation subroutine. Some examples of such non-additive smoothing are Wasserstein smoothing[2] and image transformation smoothing[3], and both use the standard estimation procedure in randomized smoothing due to (Cohen, 2019). We state this explicitly in the paper. [1]: modulo the deterministic ones, where there is no estimation. Additionally, soft base classifiers are rarely considered instead of hard ones. Here one would need to use, e.g., the empirical Bernstein bound instead of Clopper-Pearson; alternatively, the betting wealth needs to be computed in a different way. \ [2]: Levine et al. Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks\ [3]: Fischer et al. Certified Defense to Image Transformations via Randomized Smoothing * **Applicability to more general tasks; e.g., estimating the maximum certified radius within some tolerance** Yes, the methods can be directly applied. We run the confidence sequences until a stopping condition is met (in the paper, it was that a certain value of $p$ falls outside the confidence sequence). In the proposed task the stopping criterion might be that $r(hi) < r(lo) + \varepsilon$, where the current confidence interval on the probability is $[lo, hi]$ and $r(p)$ is the radius computed from probability $p$.
In the example, we know (at a certain confidence) that $r(lo)$ underestimates the true certified radius and $r(hi)$ overestimates it; so when the stopping criterion is met, we return the conservative estimate $r(lo)$, knowing that it is at most $\varepsilon$ smaller than the true radius. We will include this task in the paper. Please see the enclosed pdf for preliminary results. ## Clarity Thanks for the pointers, we will fix the problems. * **Start of section 2.2** We move the content of the first paragraph after Definition 2.5 and the following remark. * **Section 3, Definition 3.1** We agree that the section is brief; we will expand it using the extra page. In the paper, we try to strike a balance between generality (so the results can be readily transferred) and the clarity of the connection to randomized smoothing. We think that the task from Definition 3.1 is natural and we would like to keep it in this form. We agree that the definition might seem to come out of nowhere and we will provide more motivation for it. * **Paragraph after line 189** (answer copied for Reviewer xWqy) This is a purely technical assumption for clarity of exposition. This way we can hide the dependency of the width of the confidence interval on $p$ (so the width scales as $n^{-\frac12}$) and thus simplify the discussion of the width when $np \asymp 1$, where it exhibits complicated behavior (anything between $n^{-\frac12}$ and $n^{-1}$) covered in the referred book. We move the discussion to the appendix and replace the paragraph in the main text by the following: For simplicity of exposition, let the width of a confidence interval at level $1-\alpha$ with $n$ samples be $\asymp \sqrt{\log(1/\alpha)/n}$. This way, we hide the dependency on $p$ in $\asymp$. In full generality, the width of the confidence intervals exhibits many decay regimes between the rates $\sqrt{p(1-p)\log (1/\alpha)/n}$ (when $np \gtrsim 1$) and $\log(1/\alpha)/n$ (when $np \asymp 1$). 
Our algorithms capture the correct scaling of the confidence intervals. Further discussion is provided in the appendix. ## Minor Thanks for all the relevant points. We integrate all of them but comment here only on the "open ended" ones. * **In which sense is the randomized interval optimal?** They are the shortest possible in expectation. We state it formally in the paper: For any binomial r.v. $X \sim \mathcal{B}(n, p)$ and any $q$ (where $p,q$ are probabilities and $n$ is the number of samples), $\mathbb{P}(q \in CI_{\text{our}}(X)) \leq \mathbb{P}(q \in CI_{\text{other}}(X))$, where $CI_\text{our}$ is our proposed randomized confidence interval and $CI_\text{other}$ is any other confidence interval at the same confidence level as ours, and so our intervals are the shortest ones in expectation. * **Is [0, v(x)] the upper confidence interval?** We called it the lower confidence interval since it contains the low values. On the other hand, $v(x)$ is the upper bound for the estimated quantity. We are open to renaming the interval if it helps clarity. * **Definition of coverage for randomized intervals** We update the definition so as to cover random intervals. The coverage is the probability of the true parameter appearing in the confidence interval, where the probability is not only over sampling, but also over the randomness of the intervals. * **Problems with Conclusions** Thanks, we agree. We rewrite it in the following way: In this paper, we investigated the statistical estimation procedures related to randomized smoothing and improved them in the following two ways: (1) We have provided a strictly stronger version of confidence intervals than the Clopper-Pearson confidence interval. (2) We have developed confidence sequences for sequential estimation in the framework of randomized smoothing, which will greatly reduce the number of samples needed for adaptive estimation tasks. 
Additionally, we provided algorithmic upper bounds matching the problem lower bounds for the relevant statistical estimation task. ## Limitations We will add ImageNet experiments with multiple models (we already have multiple models for CIFAR and the $\ell_1, \ell_2$ tasks). We will include your proposed task. --- Rebuttal Comment 1.1: Comment: The authors' responses to my two questions have clarified that the proposed methods have broad applicability in randomized smoothing. I have therefore increased my score for "Contribution" and the overall rating. > Yes, our methodology can be directly applied to all[1] randomized smoothing works we are aware of as they have the same estimation subroutine. ... We state this explicitly in the paper. This was not clear to me from reading the paper. In section 2, randomized smoothing is formulated for additive noise and metric balls induced by a norm. Perhaps the authors could include a remark that this more limited formulation is used for ease of exposition, noting that the methods apply more generally as discussed in the response above. --- Reply to Comment 1.1.1: Comment: Thank you! We will add this remark.
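The radius-within-tolerance procedure discussed in this rebuttal can be sketched in a few lines of Python. This is an illustrative simplification only: it uses Cohen et al.'s Gaussian $\ell_2$ radius $r(p) = \sigma\,\Phi^{-1}(p)$ and a plain Hoeffding interval in place of the paper's betting-based confidence sequences, and `sample_fn` is a hypothetical Monte Carlo oracle for the smoothed classifier's top-class hit count.

```python
import math
from statistics import NormalDist

def certified_radius(p, sigma=1.0):
    # Cohen et al. (2019) l2 certified radius: sigma * Phi^{-1}(p), p in (0, 1)
    return sigma * NormalDist().inv_cdf(p)

def estimate_radius(sample_fn, eps=0.05, alpha=0.001, sigma=1.0,
                    batch=1000, max_n=10**6):
    """Sample until the certified radius is pinned down to within eps.

    sample_fn(k) must return the number of top-class hits among k fresh
    Monte Carlo evaluations of the smoothed classifier (hypothetical oracle).
    NOTE: keeping a fixed-level Hoeffding interval valid at a random stopping
    time needs a union bound or, as in the paper, a proper confidence
    sequence; this sketch glosses over that for brevity.
    """
    n = successes = 0
    while n < max_n:
        successes += sample_fn(batch)
        n += batch
        half = math.sqrt(math.log(2 / alpha) / (2 * n))
        lo = max(1e-9, successes / n - half)
        hi = min(1 - 1e-9, successes / n + half)
        if lo <= 0.5:
            continue  # cannot certify any positive radius yet
        if certified_radius(hi, sigma) < certified_radius(lo, sigma) + eps:
            return certified_radius(lo, sigma)  # at most eps below the truth
    return None  # sampling budget exhausted
```

With an oracle whose hit rate is exactly 0.95, the routine stops once the interval pins the radius to within $\varepsilon$ of $\Phi^{-1}(0.95) \approx 1.645$, returning the conservative lower value $r(lo)$.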
Summary: They study the task of certified robustness, i.e., deciding whether a point is robust at a certain radius, using as few samples as possible while maintaining statistical guarantees. Their main contribution is utilizing confidence sequences (instead of confidence intervals), which lets them draw just enough samples to certify robustness of a point and thereby greatly decrease the number of samples needed. They also show the effectiveness of their approach experimentally. Beyond that, they propose a randomized version of the Clopper-Pearson confidence interval for estimating the class probabilities. A standard component of randomized smoothing procedures is the Clopper-Pearson confidence interval, which is known to be conservative; as a result, the certification procedures underestimate the certified robustness. They provide an optimal confidence interval for binomial random variables that resolves this issue. Strengths: Certainly one of the main issues of randomized smoothing methods is that they are not practical due to their computational burden, i.e., a lot of samples need to be drawn to decide robustness at a certain radius. Given that, their result seems interesting. Weaknesses: Other suggestions: Line 118: hence instead of whence. Line 121: event instead of even. Contributions paragraph: Perhaps mention in the first paragraph that your results hold for binomial random variables. Technical Quality: 3 Clarity: 3 Questions for Authors: same as weaknesses. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: same as weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review and the typo corrections! We will clarify the binomial case. We are ready to answer any questions that arise! --- Rebuttal Comment 1.1: Comment: I went through the other reviews and responses. RS methods are hard to adopt due to their computational cost, and the authors show an empirical method outperforming SotA with a 50% reduction in samples. It is true that RS is very expensive and it is not clear that a 50% reduction in the number of samples is good enough. However, they also show lower bounds indicating that significant further improvements on this task are impossible. I believe the contribution is sufficient for acceptance. --- Reply to Comment 1.1.1: Comment: Thanks!
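For reference, the standard (non-randomized) Clopper-Pearson interval that the paper's randomized construction strictly improves can be computed with a short stdlib-only bisection on the binomial CDF. This sketch shows the conservative baseline only, not the authors' randomized variant:

```python
import math

def binom_cdf(x, n, p):
    # P(X <= x) for X ~ Binomial(n, p)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def clopper_pearson(x, n, alpha=0.001):
    """Two-sided level-(1 - alpha) Clopper-Pearson interval for x successes in n trials."""
    def solve(pred):
        # bisection: pred is True below the boundary probability, False above it
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower endpoint: largest p with P(X >= x | p) <= alpha / 2
    lower = 0.0 if x == 0 else solve(lambda p: 1 - binom_cdf(x - 1, n, p) <= alpha / 2)
    # upper endpoint: smallest p with P(X <= x | p) <= alpha / 2
    upper = 1.0 if x == n else solve(lambda p: binom_cdf(x, n, p) > alpha / 2)
    return lower, upper
```

For 99 successes out of 100 samples at $\alpha = 0.001$, the interval is noticeably wider than the point estimate 0.99 would suggest, which is exactly the conservativeness the randomized version attacks.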
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their reviews! In general we agree with them. In our understanding, the reviewers agree that we successfully attacked a well-known limitation of randomized smoothing. Their perceived weaknesses are occasional writing problems; those are easily fixable and do not require significant changes. In the enclosed pdf we provide an updated Figure 2 for better clarity. We changed the Y-axis labels on the left part and slightly updated the caption (changes are in red). We also enclose a table with an experiment on a new task, as requested by Reviewer NzAR. Pdf: /pdf/99ece3beeae7270f62243d803b211c2d7aa3a76c.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Nearly Optimal and Low-Switching Algorithm for Reinforcement Learning with General Function Approximation
Accept (poster)
Summary: This paper considers reinforcement learning with low switching cost. The authors design a new algorithm named MQL-UCB for RL with general function approximation. The key algorithmic design includes a general deterministic policy-switching strategy that achieves low switching cost, a monotonic value function structure with carefully controlled function class complexity, and a variance-weighted regression scheme that exploits historical trajectories with high data efficiency. MQL-UCB achieves minimax optimal regret and a near optimal switching cost. Strengths: 1. The problem of reinforcement learning with low switching cost and general function approximation is interesting. 2. The paper is solid. The proof looks correct to me. 3. The bounds are strong. The switching cost bound is optimal according to the lower bound. 4. The presentation is clear in general. Weaknesses: 1. Some of the assumptions look very strong. The completeness of all functions $V:\mathcal{S}\rightarrow [0,1]$ , the second-order completeness and the existence of bonus oracle are not standard to my knowledge. It seems that the weighted regression is possible only if these assumptions hold. 2. The paper [1] also studies low switching reinforcement learning with general function approximation. It seems that the setting in [1] is slightly more general (please correct me if there is misunderstanding) and their algorithm is cleaner. [1] Xiong et al. A general framework for sequential decision-making under adaptivity constraints. 3. The algorithm itself is very complicated and whether it can be implemented in practice is unclear. Since low switching RL is a very practical problem, some experiments (even under the simplified linear MDP) to show the performance of the algorithm will be helpful. I am curious about how this algorithm performs compared to the standard approach: LSVI-UCB with a doubling trick under linear MDP. 
Technical Quality: 3 Clarity: 3 Questions for Authors: See the weakness above. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer ZRKX Thank you for your insightful comments and suggestions! We answer your questions as follows. --- **Q1** Some of the assumptions look very strong. The completeness of all functions $V: \mathcal{S} \to [0, 1]$, the second-order completeness and the existence of a bonus oracle are not standard to my knowledge. It seems that the weighted regression is possible only if these assumptions hold. **A1** The completeness assumption on the second moment, introduced by [1], is crucial for achieving tighter regret bounds in reinforcement learning (RL) with general function approximation. This assumption involves leveraging the variance of the value function at the next state, which is essential for obtaining minimax-optimal regret bounds in various RL settings. These settings range from tabular Markov Decision Processes (MDPs), as shown by [2], to more complex scenarios like linear mixture MDPs, as demonstrated by [3], and linear MDPs, as discussed by [4]. Given that the only previous work [1] to achieve the optimal regret bound also relies on this assumption, we believe it does not lessen the importance of our contribution. Additionally, while GOLF [9] only requires the standard completeness assumption, both in [1] and our work, a series of optimistic value functions are computed for a tractable planning phase. This requires including the optimistic/pessimistic value functions in the function classes, making the normal completeness assumption on $\mathcal{F}_h$ insufficient from an algorithmic perspective. As for the bonus oracle, when the function class $\mathcal{F}$ is convex, [6, 7] have shown that, via a binary-search based algorithm, the sup over $f_1$ and $f_2$ in the bonus oracle can be evaluated efficiently and accurately. 
Empirically, [8] approximated uncertainty by computing the standard deviation of an ensemble of networks and showed experimental results supporting the effectiveness of using such an uncertainty weighting technique in offline RL. --- **Q2** The paper [5] also studies low switching reinforcement learning with general function approximation. It seems that the setting in [5] is slightly more general (please correct me if there is a misunderstanding) and their algorithm is cleaner. **A2** Both their setting and ours encompass MDPs with bounded eluder dimensions as special cases. Upon closer examination, [5] proves to be slightly more general because, in our Definition 2.4, the dimension is equivalent to (3.2) in [5] after taking the supremum of each term. Compared to [5], our approach is oracle efficient, assuming an oracle is available to compute the bonus for UCB-based exploration. As discussed in Section 1, [5] addresses low-switching RL with general function approximation, achieving a switching cost of the same order. However, the algorithm in [5] features an intractable planning phase and is not statistically optimal. Additionally, checking the policy-switching condition in [5] requires recomputing the cumulative loss functions at every episode, which becomes inefficient when $K$ is large. **Q3** The algorithm itself is very complicated and whether it can be implemented in practice is unclear. Since low switching RL is a very practical problem, some experiments (even under the simplified linear MDP) to show the performance of the algorithm will be helpful. I am curious about how this algorithm performs compared to the standard approach: LSVI-UCB with a doubling trick under linear MDP. **A3** Since our algorithm provides a general framework for solving MDPs with general function classes, the computational complexity of our method relies heavily on the efficiency of the regression subroutine and the bonus oracle implementation. 
Actually, our proposed algorithm reduces to LSVI-UCB++ in the linear MDP case, which is a computationally efficient algorithm that shares the same computational complexity as LSVI-UCB with a doubling trick. Additionally, we want to clarify that it is a desirable property for an RL algorithm designed for general function classes to resemble a more efficient algorithm designed for simpler settings. --- [1] A Agarwal, Y Jin, T Zhang. VOQL: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation. The Thirty Sixth Annual Conference on Learning Theory, 2023 [2] MG Azar, I Osband, R Munos. Minimax regret bounds for reinforcement learning. International Conference on Machine Learning, 2017 [3] D Zhou, Q Gu, C Szepesvari. Nearly minimax optimal reinforcement learning for linear mixture Markov decision processes. Conference on Learning Theory, 2021 [4] J He, H Zhao, D Zhou, Q Gu. Nearly minimax optimal reinforcement learning for linear Markov decision processes. International Conference on Machine Learning, 2023 [5] Xiong, N., Yang, Z., and Wang, Z. A general framework for sequential decision-making under adaptivity constraints. [6] D Kong, R Salakhutdinov, R Wang, LF Yang. Online sub-sampling for reinforcement learning with general function approximation. arXiv preprint arXiv:2106.07203, 2021 [7] Q Di, H Zhao, J He, Q Gu. Pessimistic nonlinear least-squares value iteration for offline reinforcement learning. ICLR 2024 [8] C Ye, R Yang, Q Gu, T Zhang. Corruption-robust offline reinforcement learning with general function approximation. Advances in Neural Information Processing Systems, 2024 [9] C Jin, Q Liu, S Miryoosefi. Bellman eluder dimension: New rich classes of RL problems, and sample-efficient algorithms. Advances in Neural Information Processing Systems, 2021 --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. 
If you have any additional questions or concerns, please let us know. We are happy to address them. Otherwise, we would greatly appreciate it if you could consider adjusting your rating.
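To illustrate why the variance-weighted regression scheme discussed in this thread helps, here is a one-dimensional toy sketch (purely illustrative, not the paper's algorithm): with heteroscedastic noise, inverse-variance weights in a ridge objective down-weight the noisy observations, so the weighted estimate concentrates much more tightly around the truth in expectation.

```python
import random

random.seed(0)
theta = 2.0                               # ground truth: y = theta * x + noise
xs, ys, sigmas = [], [], []
for i in range(500):
    x = random.gauss(0, 1)
    sigma = 3.0 if i % 2 == 0 else 0.1    # alternating heteroscedastic noise
    xs.append(x)
    ys.append(theta * x + random.gauss(0, sigma))
    sigmas.append(sigma)

lam = 1.0  # ridge regularizer

# Unweighted ridge: argmin_t sum (y - t x)^2 + lam t^2
theta_unw = sum(x * y for x, y in zip(xs, ys)) / (lam + sum(x * x for x in xs))

# Variance-weighted ridge with weights w = 1 / sigma^2
theta_w = (sum(x * y / s**2 for x, y, s in zip(xs, ys, sigmas))
           / (lam + sum(x * x / s**2 for x, s in zip(xs, sigmas))))

# The weighted estimator effectively relies on the low-noise half of the
# data, giving a much smaller standard error than the unweighted one.
```

The same inverse-variance idea underlies weighting regression targets by estimated value-function variances, only there the variances themselves must be estimated.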
Summary: This paper studies reinforcement learning under general function approximation. It attempts to find an algorithm which achieves optimal regret while maintaining low switching cost. In the algorithm, they first compute the empirical value function using weighted ERM, where the weights are chosen according to the variance of the estimator. Next, they utilize optimism and pessimism over the Q-function. The algorithm only collects new samples if the existing dataset does not have enough coverage. This paper brings the technique of constructing monotonic value functions (originally from He et al. (2022)) into the case of general function approximation. It achieves the optimal rate (matching the lower bound) in both the regret and the number of switches. Strengths: This paper studies MDPs with general function approximation, which is more general than the common linear MDP and tabular MDP settings. Assumptions made in this paper include completeness and finite eluder dimension, which are common in the related literature as well. From the regret and switching cost perspective, this paper matches the optimal rate for both terms. It seems to be the first paper which achieves this goal. Compared to Agarwal et al. (2022), the algorithm and analysis are similar, but this paper has the following advantages: (1) The policies output by this paper are Markovian, while Agarwal et al. (2022) has non-Markovian policies. (2) The switching cost is better than that of Agarwal et al. (2022) by a factor of d_elu. Weaknesses: The assumptions made on completeness and eluder dimension are somewhat stronger than the usual assumptions in the related literature. Specifically, the completeness assumption requires closedness under the Bellman operator for any value function, and the eluder dimension requires an additional parameter $\sigma$. 
This completeness assumption is difficult to check unless under strong assumptions on the dynamics, e.g., linear structure or linear mixture MDPs. Compared to the related work Agarwal et al. (2022), the techniques do not show significant improvement. The algorithmic idea is very similar to the algorithm in Agarwal et al. (2022). Specifically, the techniques of calculating the variance and the monotonicity of value functions are both used in Agarwal et al. (2022). Technical Quality: 4 Clarity: 3 Questions for Authors: I have the following questions: 1. Regarding the completeness assumption, I am curious whether there is a lower bound showing that the normal completeness assumption $T_h F_{h+1}\subset F_h$ is not enough? 2. In the algorithm, calculating $D_F$ requires solving an optimization problem over $(f_1, f_2)\in F$, which is not computationally tractable due to the non-convexity of the function. Is there any way to bypass this? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. The authors addressed all the limitations listed in the guidelines. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer Zbvr Thank you for your positive feedback! We address your questions point-by-point. --- **Q1** The assumptions made on completeness and eluder dimension are somewhat stronger than the usual assumptions in the related literature. Is there a lower bound showing that the normal completeness assumption $\mathcal{T}\_h \mathcal{F}\_{h + 1} \subseteq \mathcal{F}\_h$ is not enough? **A1** To our knowledge, there is no lower bound proving that the normal completeness assumption $\mathcal{T}\_h \mathcal{F}\_{h + 1} \subseteq \mathcal{F}\_h$ is insufficient for achieving a minimax optimal regret bound. However, all existing works that achieve minimax optimal regret for linear MDPs or MDPs with general function approximation require estimating the variance of the next-state expected return. This necessitates a slightly stronger completeness assumption on the second-order moment. Additionally, while GOLF [1] only requires the standard completeness assumption, both in [2] and our work, a series of optimistic value functions are computed for a tractable planning phase. This requires including the optimistic/pessimistic value functions in the function classes, making the normal completeness assumption on $\mathcal{F}_h$ insufficient from an algorithmic perspective. --- **Q2** In the algorithm, calculating $D_{\mathcal{F}}$ requires solving an optimization problem over $f_1, f_2 \in \mathcal{F}$, which is not computationally tractable due to the non-convexity of the function. Is there any way to bypass this? **A2** Thanks for your insightful question! Here we do not require that the functions be convex. However, when the function class $\mathcal{F}$ is convex, [2, 3] have shown that, via a binary-search based algorithm, the sup over $f_1$ and $f_2$ in the bonus oracle can be evaluated efficiently and accurately. 
Empirically, [4] approximated the uncertainty by computing the standard deviation of an ensemble of networks and showed experimental results supporting the effectiveness of using such an uncertainty weighting technique in offline RL. --- [1] C Jin, Q Liu, S Miryoosefi. Bellman eluder dimension: New rich classes of RL problems, and sample-efficient algorithms. Advances in Neural Information Processing Systems, 2021 [2] D Kong, R Salakhutdinov, R Wang, LF Yang. Online sub-sampling for reinforcement learning with general function approximation. arXiv preprint arXiv:2106.07203, 2021 [3] Q Di, H Zhao, J He, Q Gu. Pessimistic nonlinear least-squares value iteration for offline reinforcement learning. ICLR 2024 [4] C Ye, R Yang, Q Gu, T Zhang. Corruption-robust offline reinforcement learning with general function approximation. Advances in Neural Information Processing Systems, 2024 --- Rebuttal Comment 1.1: Comment: Thank you very much for your response. I do not have further questions. --- Reply to Comment 1.1.1: Comment: Thank you for your support!
Summary: The paper studies reinforcement learning with general function approximation and proposes a near-optimal algorithm with a low number of policy switches. Strengths: The paper is very well-written, and the main results of the paper are of high quality. The authors improved prior works on RL with general function approximation to obtain an algorithm that achieves both near-optimal regret and the lowest possible switching cost when the number of episodes is large. Besides, the proposed algorithm is intuitive and clean, and the proofs are well-written and easy to follow. Weaknesses: It would be helpful to provide a bit more detail for readers (like me) who are not very familiar with the literature on RL with policy switching costs on how the switch counts in Table 1 are obtained. For example, [1] doesn't seem to optimize the number of policy switches (while only trying to optimize the regret); since the paper is directly improving upon [1], it would be helpful to provide a more detailed discussion on why [1]'s algorithm needs $\tilde{O}(\dim(\mathcal{F})^2H)$ switches and why the authors' algorithm can strictly improve the number of switches to $\tilde{O}(\dim(\mathcal{F})H)$ without worsening the regret. [1] Agarwal, Alekh, Yujia Jin, and Tong Zhang. "VOQL: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation." The Thirty-Sixth Annual Conference on Learning Theory. PMLR, 2023. Technical Quality: 3 Clarity: 4 Questions for Authors: You have proposed an algorithm that achieves optimality of regret and switching cost simultaneously under the condition that $K$ is large. Also, the lower bound from Theorem B.1 suggests $\tilde{O}(\dim(\mathcal{F})H)$ switches are unavoidable when one wants to achieve regret sublinear in $K$. Obviously, the current results will not make sense if the number of episodes $K$ is relatively small compared to the episode length $H$. 
I am thus wondering: do you have any insights (what can be done and what cannot be done) in this different regime? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: There is no obvious limitation in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 6wYy Thank you so much for your strong support! We address your questions as follows. --- **Q1** It would be helpful to provide a bit more detail for readers who are not very familiar with the literature on RL with policy switching costs on how the switch counts in Table 1 are obtained. For example, [1] doesn't seem to optimize the number of policy switches (while only trying to optimize the regret). **A1** Thanks for your suggestion! In [1], to compute the bonus function, the authors slightly generalize the method proposed by [2] (see Algorithm 2, [1]). According to Theorem 1 in [2], the utilization of this online subsampling subroutine automatically leads to an $\tilde{O}(\dim(\mathcal{F})^2H)$ switching cost guarantee. We will add an explanation of this result from the table in the revision. Specifically, [2] employs online subsampling techniques, which maintain a small 'core' subset of the historical data, and the policy is switched only if this subset is updated. In contrast, our algorithm applies a novel uncertainty-based policy switching strategy, which directly controls the cumulative sensitivity of historical datapoints. --- **Q2** You have proposed an algorithm that achieves optimality of regret and switching cost simultaneously under the condition that $K$ is large. Also, the lower bound from Theorem B.1 suggests $\tilde{O}(\dim(\mathcal{F}) H)$ switches are unavoidable when one wants to achieve regret sublinear in $K$. Obviously, the current results will not make sense if the number of episodes is relatively small compared to the length of the episode. I am thus wondering: do you have any insights (what can be done and what cannot be done) in this different regime? **A2** Thanks so much for your insightful question! 
- For the regret bound, when restricted to linear MDPs, our regret bound still suffers from the $K$-independent $\tilde{O}(H^{2.5}d^5\sqrt{H + d^2})$ term, which becomes the leading term when $K = O(H^4d^8(H + d^2))$. If we aim to achieve better results in the small-$K$ regime, it is crucial to improve the $\tilde{O}(H^{2.5}d^5\sqrt{H + d^2})$ term in the current regret upper bound. Essentially, this term results from the inaccuracy of the variance estimators. It is still an open problem whether we can achieve optimal regret for linear MDPs even when $K$ is relatively small. To tackle this issue, we may consider (1) more accurate estimation of the variance term $\sigma_{k, h}$ (e.g., [3] for linear mixture MDPs) or (2) a variance-aware confidence set which does not require additional knowledge of the variance [4, 5, 6]. - For the switching cost, our lower bound also holds when $K$ is small. For example, if $K = o(\dim(\mathcal{F}) H)$, our lower bound can be interpreted as 'to achieve sublinear regret, the switching cost should be $\Omega(\dim(\mathcal{F}) H)$'. Since such a high switching cost is not achievable, our lower bound result implies that sublinear worst-case regret is not possible when $K = o(\dim(\mathcal{F}) H)$. --- [1] A Agarwal, Y Jin, and T Zhang. "VOQL: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation." The Thirty-Sixth Annual Conference on Learning Theory. PMLR, 2023. [2] D Kong, R Salakhutdinov, R Wang, L Yang. "Online sub-sampling for reinforcement learning with general function approximation" [3] D Zhou, Q Gu. "Computationally efficient horizon-free reinforcement learning for linear mixture MDPs." Advances in Neural Information Processing Systems, 2022 [4] Z Zhang, J Yang, X Ji, SS Du. "Improved variance-aware confidence sets for linear bandits and linear mixture MDP." Advances in Neural Information Processing Systems, 2021 [5] H Zhao, J He, D Zhou, T Zhang, Q Gu. 
"Variance-dependent regret bounds for linear bandits and reinforcement learning: Adaptivity and computational efficiency." The Thirty Sixth Annual Conference on Learning Theory, 2023 [6] Z Zhang, JD Lee, Y Chen, SS Du. "Horizon-Free Regret for Linear Markov Decision Processes." ICLR 2024 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed explanations.
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Enhancing Diversity in Bayesian Deep Learning via Hyperspherical Energy Minimization of CKA
Accept (poster)
Summary: This paper introduces hyperspherical energy as an objective to promote the diversity of particles in BNNs. It claims that the hyperspherical energy approach can avoid the permutation-invariance issue of traditional diversity metrics. Strengths: I agree that the diversity of particles in BNNs is an important problem. The proposed method introduces hyperspherical energy, which can handle permutation invariance. This invariance is mainly due to the complex structure of DNNs, which previous Bayesian inference methods have long ignored. Weaknesses: The technical contribution of this paper is not strong enough. Only a new regularization term is added to the standard ensemble framework. The idea of hyperspherical energy comes from previous works, and I do not see significant changes from those works. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The diversity evaluations are limited to synthetic data. I am not convinced that the proposed method can be scaled up. 2. Important ablation studies are missing, for example, on different model architectures and the number of particles. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: 1. This paper is restricted to relatively small Bayesian neural networks and does not discuss scalability. 2. The proposed method can improve uncertainty estimation, but fails to improve accuracy. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Technical Contribution We strongly disagree with the reviewer’s dismissal of our contribution as “merely a regularization term”. Comparing neural networks in particle-based variational inference (ParVI) has been a difficult problem, and most proposed comparison kernels suffer from a lack of channel-permutation invariance (i.e., merely permuting channel positions makes the networks appear different from each other according to those kernels, whereas the functions they realize are exactly the same). Our exploration of pairwise CKA kernels in ParVI solves this problem and provides a differentiable kernel that is invariant to channel permutations and isotropic scaling. These important properties have long been ignored in ParVI. We additionally apply HE on top of the CKA kernel, and our experiments demonstrated that minimizing HE is more effective at reducing a cosine similarity (in our case CKA) than minimizing the cosine similarity directly (Section 3). We believe we are the first to adopt MHE for particle-based variational inference (ParVI), as well as the first to propose using MHE on top of a pairwise CKA kernel to compare networks. Other reviewers also consider our approach well-motivated and novel. ## Particle Number Ablation We agree that performing ablations on the number of particles is important. We observe improvements in inlier accuracy and outlier detection performance as the number of particles increases, with four and five particles performing similarly (Table 2). We will add more ablations on particle numbers in the final version of the paper. 
| Training Particles | NLL $\downarrow$ | ID Accuracy $\uparrow$ | ID ECE $\downarrow$ | AUROC SVHN $\uparrow$ | AUROC CIFAR-10/CIFAR-100 $\uparrow$ | AUROC Textures (DTD) $\uparrow$ |
| -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| 2 | 0.791 | 59.21 | 5.21 | 89.87 | 67.34/68.44 | 68.48 |
| 3 | 0.771 | 60.86 | 5.74 | 91.57 | 69.05/70.59 | 69.31 |
| 4 | **0.761** | 62.26 | 6.90 | **92.92** | 70.83/71.44 | **70.89** |
| 5 | 0.784 | **63.10** | 9.82 | 92.65 | **72.13**/**71.68** | 70.69 |

**Table 2.** Ablation on the number of training particles for ResNet18 + $\text{HE}$ trained on TinyImageNet.

## Questions
Contrary to the reviewer’s claim, we did evaluate on MNIST/CIFAR-10/CIFAR-100 in the original submission, and additionally on TinyImageNet in the rebuttal (see general comments). None of these datasets are synthetic. Synthetic examples were included for visualization purposes only and were not the main results of the paper.

## Limitations
We did discuss scaling and memory footprint in Appendix F. Results showed that our algorithm does not increase training time significantly over regular ensembles.

--- Rebuttal Comment 1.1: Comment: I acknowledge that the evaluations do cover MNIST/CIFAR10/CIFAR100. I am willing to raise my score to reflect that, along with the new experiments added in the rebuttal. However, I still do not think the technical contributions mentioned in the rebuttal are convincing. The rebuttal only discusses the meaning of this regularization term. --- Rebuttal 2: Comment: We thank the reviewer for the recognition of our experimental results and the raised rating. We also appreciate the continuing discussion. We have discussed in our rebuttal the nontrivial changes and key properties of our proposed regularization method versus previous regularization techniques in ensemble training. 
As recognized by other reviewers, the combination of $\text{CKA}$ and $\text{HE}$ is a novel approach and a fresh perspective, and most importantly, it leads to improvements in practice for a long-standing important problem in ensemble learning, i.e., enabling better ensemble diversity (not attainable by merely switching channels within a network), which leads to significantly improved OOD performance. We disagree with the reviewer's dismissal of any paper that adds a regularization term to networks as not novel, without actually judging the contribution of the paper. Besides, our novel $\text{CKA}$+$\text{HE}$ kernel also allowed us to train for increased diversity on the synthetically generated outlier images, which further significantly improved the OOD performance in all cases. To the best of our knowledge, this has not been explored in the existing literature on ParVI methods.
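To make the two ingredients of the discussion above concrete, here is a minimal NumPy sketch (our own illustration, not the authors' code) of a linear-CKA comparison between two feature matrices and a Riesz-style hyperspherical energy built on top of the pairwise CKA similarities. The function names and the 1/distance energy form are assumptions for illustration; the exact kernel and energy exponent used in the paper may differ.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices (samples x channels).

    Invariant to channel permutations and isotropic scaling of either input.
    """
    X = X - X.mean(axis=0, keepdims=True)  # center each channel
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def cka_hyperspherical_energy(features, eps=1e-8):
    """Riesz-style energy over pairwise CKA similarities of the particles.

    High pairwise CKA (similar particles) means a small "distance" and hence
    high energy, so minimizing this energy pushes particle features apart.
    """
    n = len(features)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dist = 1.0 - linear_cka(features[i], features[j])  # in [0, 1]
            energy += 1.0 / (dist + eps)
    return energy

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))  # features of particle 1
Y = rng.normal(size=(100, 16))  # features of particle 2
perm = rng.permutation(16)

# CKA ignores channel order and isotropic scaling.
assert np.isclose(linear_cka(X, Y), linear_cka(X, Y[:, perm]))
assert np.isclose(linear_cka(X, Y), linear_cka(X, 2.5 * Y))

# A channel-permuted copy of X is NOT diverse: its energy is far higher
# than that of genuinely different particles.
assert cka_hyperspherical_energy([X, X[:, perm]]) > cka_hyperspherical_energy([X, Y])
```

The last assertion illustrates the rebuttal's core claim: a plain RBF kernel on weights would treat `X` and `X[:, perm]` as different particles, while the CKA-based energy correctly recognizes them as redundant.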
Summary: The paper “Minimizing Hyperspherical Energy for Diverse Deep Ensembling” explores the use of Centered Kernel Alignment (CKA) and Minimization of Hyperspherical Energy (MHE) in Bayesian deep learning to enhance the diversity of ensemble models and parameters generated by hypernetworks. By incorporating these techniques, the authors aim to improve uncertainty quantification in both synthetic and real-world tasks, which is quantified by measuring OOD detection performance and calibration. The key contributions include proposing CKA as an optimization objective and utilizing MHE to address the diminishing-gradient issue, leading to more stable training and better performance in uncertainty estimation. Strengths: - **Original:** The paper introduces a novel approach by combining CKA and MHE to enhance the diversity of deep learning ensembles and hypernetworks, which is a fresh perspective in Bayesian deep learning. - **Detailed experiments:** The experimental results are comprehensive and demonstrate significant improvements in uncertainty quantification across various tasks, showing the practical effectiveness of the proposed methods. - **Well-written:** The paper is well-structured, with clear explanations of the methods and thorough discussions of the results. The figures and tables effectively illustrate the performance improvements. Weaknesses: - The comparison of the approach in terms of OOD detection performance is slightly biased due to the leveraging of generated OOD samples. Here, in order to differentiate the contribution of the method from the incorporation of additional information, further experiments comparing against DDU would be warranted. In particular, one could imagine adding a term to the loss which encourages low likelihood of the fitted GMM on OOD samples. - The comparisons in Figure 2 paint a slightly overly optimistic picture for the OOD HE approaches. In 2 dimensions the problem becomes quite trivial given negative / OOD samples. 
The difficulty of covering the OOD volume with generated samples grows exponentially with the dimensionality of the input space, whereas the task is especially easy when the diversity of the data is limited. - (Related to the prior point) The datasets used to show the efficacy of the OOD approach are rather simple and small, such that sample diversity is limited. Running one experiment on a larger dataset like ImageNet would be very beneficial to show that the method can also work in the context of highly diverse input distributions and high-dimensional input spaces. - There seems to be a mistake in Table 2, row Ensemble + HE, column PE. - I recommend renaming Fig. 1 d and e to Cossim evaluation and HE evaluation, as using "objective" gives the impression that the data shown is from models trained using these objectives. - As the work points out, applying the method to larger datasets would in principle be possible; this might be a good additional comparison to include. Many comparison approaches will have issues in this case, but comparison to other ensembling-based approaches would be feasible. Technical Quality: 3 Clarity: 3 Questions for Authors: - When running the LeNet comparisons against DDU, was the model trained with spectral normalization (a requirement for DDU to function well), and how was the regularization strength determined? - Why was the normalization-free variant of LeNet used for some methods, whereas the standard variant was used for the DDU baseline? - Page 9, line 289: Was the GMM fitted to the features of the pre-classification layer, i.e., to the features used by the last linear layer? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The work correctly points out that the small size of the datasets is one of the main limitations. Generally, seeing the efficacy of the approach in a larger-scale training setup would significantly improve the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Questions >When running the LeNet comparisons against DDU, was the model trained with spectral normalization (a requirement for DDU to function well) and how was the regularization strength determined? Yes, all DDU models were trained with spectral normalization. We picked the regularization strength that best balanced AUROC and inlier accuracy, from values tested in the range $1.0-5.0$. >Why was the normalization-free variant of LeNet used for some methods whereas the standard variant for the DDU baseline? To ease the transition to hypernetworks, we aimed to construct a method not requiring feature normalization. We utilized weight standardization (WS) but did not adapt WS to work alongside spectral normalization, and found no difference in inlier performance. >Page 9, line 289: Was the GMM fitted to the features of the pre-classification layer. I.e. to the features used by the last linear layer? Yes, the GMM was fitted to the features of the pre-classification layer. --- Rebuttal Comment 1.1: Comment: Thanks for answering the questions. I would have appreciated some comments on the weaknesses of the work I pointed out. I will leave my score unchanged, as solely the issue of dataset size was addressed and the other points remained unmentioned. --- Reply to Comment 1.1.1: Comment: Thank you for pointing out the missing response to your comments on the weaknesses. We somehow forgot to include them and will post the answers here in addition to the experiments on the larger dataset. > Fair comparison against DDU. Instead of adding OOD examples to DDU as you suggested, which is also viable, we showed in Tables 1-4 of the paper and Table 1 of this rebuttal that even without OOD samples, our method/kernel still outperforms ensembles and DDU (Tables 1/2) in OOD detection, which are the apples-to-apples comparisons. Still, we can mark that row more clearly to indicate synthetic OOD sample usage. > In 2D with OOD examples the problem becomes quite trivial. 
> The difficulty of generating enough samples to cover the OOD volume becomes exponentially more difficult with increasing dimensionality of the input space and is especially easy when the diversity of the data is limited. We agree that adding OOD to synthetic experiments is easy; however, as mentioned earlier, our methods outperform the baselines even without OOD examples. For images, our main point is to show that we do not have to cover the entire OOD volume but only have to generate some random images to make sure the diversity is high on random outlier images. Our OOD detection experiments show that although our generated synthetic images (Fig. 7/8) are nowhere near similar to the tested outliers, our approach still performs well for outlier detection. > Larger datasets This one is answered in the general remarks. > Table 2 and Figure 1: Thank you for pointing these out. We will correct the extra zero in the table and further clarify the cossim/HE figure names.
Summary: The authors propose to improve the quantification of particle diversity in deep ensembles with hyperspherical energy (HE) on top of the CKA kernel. They further integrate the HE kernel into particle-based variational inference (ParVI) and generative-ensemble-with-hypernetwork frameworks. The methods are evaluated on both synthetic experiments and small-scale classification datasets. Strengths: 1. The motivation of the paper is clear: addressing mode collapse in deep ensembles by using a kernel that is more suitable for measuring the particle diversity of neural networks (the HE kernel). 2. The advantage of the HE kernel has been demonstrated in two different ensemble frameworks (ParVI & generative ensemble with hypernetwork), and the empirical performance of the method looks OK. Weaknesses: The datasets considered are a bit outdated and the networks considered seem to be quite small, such that overall the performance is on the lower end (e.g., 85% acc. for CIFAR-10, while many BDL methods can easily achieve accuracy higher than 90%). Furthermore, recent BDL papers typically consider larger datasets (such as ImageNet) and deeper networks. It is necessary to consider larger datasets and larger models in order to assess the practical effectiveness of the method. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do not address potential negative societal impact, since the paper is predominantly theoretical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### CIFAR-10 Performance We did evaluate our approach using a standard ResNet18, shown in Table 4 of the manuscript. The inlier performance of all approaches (the “Accuracy” column in Table 4) is above 95%. We compared using the permutation-invariant kernels $\text{CKA}_\text{pw}$ and $\text{HE}$ with RBF. Further explanation can be found in the general remarks. --- Rebuttal Comment 1.1: Comment: For the question you asked about larger datasets and larger models, results can also be found in the general remarks. --- Rebuttal Comment 1.2: Comment: Thanks for the rebuttal, which addresses my concern about the model performance. I am willing to raise my score to 6.
Rebuttal 1: Rebuttal: ## General Remarks We are grateful to the reviewers for their constructive feedback. Below, we respond to questions and concerns shared by the reviewers regarding running on a larger dataset. ## Larger Dataset We agree with the reviewers that the evaluation of BDL on a larger dataset is desirable. Due to time constraints of the rebuttal period, we applied HE to an ensemble of ResNet18 on TinyImageNet (Table 1). Encouraging feature diversity with $\text{CKA}_\text{pw}$ and $\text{HE}$ significantly improves uncertainty estimation while minimally impacting inlier performance. Besides, encouraging feature diversity on synthetically generated OOD examples, albeit highly different from the testing examples, further significantly improves the performance of both inlier classification and outlier detection. We will add more ImageNet-level experiments in the final version.

| Model | NLL $\downarrow$ | ID Accuracy $\uparrow$ | ID ECE $\downarrow$ | AUROC SVHN $\uparrow$ | AUROC CIFAR-10/CIFAR-100 $\uparrow$ | AUROC Textures (DTD) $\uparrow$ |
| -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| ResNet18 (5) | 0.775 | 62.95 | 8.90 | 89.81 | 66.85/67.33 | 68.96 |
| SVGD + RBF (5) | 0.926 | 61.87 | 16.10 | 92.76 | 72.23/73.73 | 65.67 |
| SVGD + $\text{CKA}_\text{pw}$ (5) | 0.835 | 60.15 | 8.26 | 94.08 | 78.40/79.48 | 66.48 |
| SVGD + $\text{HE}$ (5) | 0.732 | 61.36 | **3.71** | 94.10 | 72.05/72.86 | 70.75 |
| ResNet18 + $\text{HE}$ (5) | 0.784 | 63.10 | 9.82 | 92.65 | 72.13/71.68 | 70.69 |
| ResNet18 + $\text{HE}$ OOD (5) | **0.606** | **68.51** | 11.49 | **98.54** | **79.00**/**81.09** | **89.18** |

**Table 1.** Performance of ResNet18 ensembles trained on TinyImageNet. All models are pretrained as a deep ensemble with no regularization, then fine-tuned for 20 epochs with each method (including the deep ensemble). Standard predictive entropy is used in the AUROC calculation. 
Synthetic OOD examples were generated from noise and augmented TinyImageNet images. ## Larger Models We have already included results with a ResNet18 achieving over **96%** inlier accuracy on CIFAR-10 in the original manuscript (Table 4; column 4, "Accuracy", is inlier accuracy). The smaller ResNet32 models, with ~85% inlier performance (Table 2), were additionally included for a fair one-to-one comparison with the work presented in (D’Angelo et al., 2021). Additionally, we reported scaling time/memory costs against typical weight-space and function-space ParVI methods in Appendix F. Our approach is comparable to other ParVI methods in speed and incurs a slightly larger memory cost due to the feature kernel construction.
NeurIPS_2024_submissions_huggingface
2024
EnsIR: An Ensemble Algorithm for Image Restoration via Gaussian Mixture Models
Accept (poster)
Summary: This paper proposes a post-training model ensemble method for image restoration by leveraging Gaussian mixture models on split pixels and a lookup table for fast inference. The pixels are split into various bin sets according to their value ranges, and the ensemble problem of the pixels is reformulated into Gaussian mixture models. The EM algorithm is used to solve for the ensemble weights of the models. This area lacks sufficient research attention, and previous works mainly use averaging or its variants, so this work has potential impact for industrial applications. The proposed method is fast and effective at obtaining better ensemble results for three image restoration tasks. Strengths: 1. The method is novel, properly derived, and has theoretical support. It is well elaborated in pseudo-code and discussed. 2. The authors have conducted extensive experiments on three restoration tasks, including 14 benchmarks. 3. Its results are promising and consistently better than other baselines. Weaknesses: 1. ZZPM is a new method, proposed after 2022, whereas averaging is the most straightforward and traditional method. Why is its performance sometimes worse than normal averaging? 2. For the task of super-resolution, the input and output are not the same size, so the notation should be clarified. 3. Why can we suppose that image noise follows a Gaussian distribution with zero mean? Is there any clue or derivation to support the assumption? 4. The authors say that the method is not yet accelerated by GPU vectorization. Is this because the CPU is faster, or for other reasons? 5. The pixel numbers of each bin set will be highly diverse, as shown in Figure 6. If a bin set contains many pixels, the EM algorithm can run properly without problems. However, if there are too few pixels in a bin set, the EM algorithm may fail to find a solution. 
Technical Quality: 4 Clarity: 3 Questions for Authors: I have listed some questions regarding the implementation and experiments in Weakness section. Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q-1.** ZZPM is a new and recently proposed method after 2022, and averaging is the most straightforward and traditional method. Why is its performance sometimes worse than the normal averaging? **A-1.** Thank you for your insight. ZZPM was developed for the latest image restoration competition [Reference 1], where all base models are similar in terms of their structure and performance. However, as mentioned in L240-243, our experiments are conducted using models with different structures and sometimes diverse performances on several test sets. In such scenarios, one advanced model may consistently outperform the others. ZZPM assigns ensemble weights that are negatively proportional to the mean squared error between the result of a base model and the average of the base model results. This approach can exaggerate deviations from the optimal prediction. **Q-2.** For the task of super-resolution, the input and output are not in the same size, so the notations should be clarified. **A-2.** Thank you for your careful suggestion. We will revise the notation. **Q-3.** Why can we suppose that image noise follows a Gaussian distribution with zero mean? Is there any clue or derivation to support the assumption? **A-3.** Thank you for your insightful question. The Gaussian prior is commonly adopted in image reconstruction and restoration tasks [References 2-3]. The assumption that noise follows a Gaussian distribution yields the mean squared error (MSE) loss, or L2 norm, that we commonly use as a loss function. If we instead assume the noise follows a Laplacian distribution, it yields an L1 norm. The derivation is provided below. >Suppose the error between the ground-truth $y_n$ and the restoration result $f(x_n)$ by model $f$ follows a zero-mean Gaussian, i.e., $\epsilon_n = y_n - f(x_n) \sim \mathcal{N}(0, \sigma^2)$. 
Then we have the log-likelihood >$\sum_{n=1}^N \log P(\epsilon_n)=-\frac{N}{2}\log(2\pi) - N\log\sigma -\sum_{n=1}^N \frac{(y_n - f(x_n))^2}{2\sigma^2}.$ >Because we are optimizing the restoration model $f$, the objective simplifies to minimizing the sum of squared errors $\sum_{n=1}^N (y_n - f(x_n))^2$, i.e., the squared L2 norm. The derivation for the Laplace distribution and the L1 norm is analogous. **Q-4.** The authors say that the method is not accelerated by GPU vectorization yet. Is it because CPU is faster or any other reasons? **A-4.** The reason is that the implementation of the method depends on the number of base models. It involves several nested ```for``` loops whose nesting depth is dynamic and depends on the number of base models, making it difficult to vectorize. **Q-5.** The pixel numbers of each bin set will be highly diverse as shown in Figure 6. In some bin set, there will be many pixels and then EM algorithm can run properly without problem. However, if there are too few pixels in a bin set, EM algorithm may fail to find a solution. **A-5.** Indeed, the EM algorithm requires a sufficient number of sample points. If the number of pixels in a bin set is insufficient, the EM algorithm will fail to solve the GMM. In this case, we adopt averaging as the default method and assign equal ensemble weights. This setting is described in L216-217. [Reference 1] NTIRE 2023 challenge on image super-resolution (x4): Methods and results. In CVPR, 2023. [Reference 2] Deep Gaussian Scale Mixture Prior for Image Reconstruction. IEEE TPAMI, 2023. [Reference 3] Learned Image Compression with Discretized Gaussian Mixture Likelihoods and Attention Modules. In CVPR, 2020. --- Rebuttal Comment 1.1: Title: After rebuttal Comment: Thank you to the authors for their detailed response. After reviewing the rebuttal, I can confirm that all of my concerns have been fully addressed. 
--- Reply to Comment 1.1.1: Title: Response to Reviewer Aksr Comment: Thank you for your thoughtful review and the constructive comments you've provided. We are truly grateful for your recognition of our work.
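The Gaussian-noise derivation in A-3 above can be checked numerically: under zero-mean Gaussian errors, the negative log-likelihood is an affine function of the sum of squared errors, so both objectives share the same minimizer. A small illustrative sketch (the toy linear model, variable names, and the grid search are our own assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.5 * x + rng.normal(scale=0.3, size=200)  # zero-mean Gaussian noise

def neg_log_lik(a, sigma=0.3):
    """Negative Gaussian log-likelihood of the residuals y - a*x.

    Matches the formula in A-3, with all constant terms kept.
    """
    r = y - a * x
    n = len(r)
    return 0.5 * n * np.log(2 * np.pi) + n * np.log(sigma) + np.sum(r**2) / (2 * sigma**2)

def sse(a):
    """Sum of squared errors, i.e., the (squared) L2 loss."""
    return np.sum((y - a * x) ** 2)

# Minimize both objectives over the same parameter grid.
grid = np.linspace(1.0, 4.0, 3001)
a_mle = grid[np.argmin([neg_log_lik(a) for a in grid])]
a_ls = grid[np.argmin([sse(a) for a in grid])]

# MLE under Gaussian noise and least squares pick the same parameter.
assert abs(a_mle - a_ls) < 1e-9
```

Replacing the Gaussian density with a Laplace density turns the data-dependent term into a sum of absolute residuals, giving the L1 loss mentioned in the rebuttal.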
Summary: A novel post-training ensemble learning method for image restoration is developed in this work by employing Gaussian mixture models and the EM algorithm to generate better restoration results. The authors reformulate the ensemble problem of image restoration into various Gaussian mixture models, use the EM algorithm to estimate range-wise ensemble weights on a reference set, and store the weights in a lookup table for inference. The method effectively improves ensemble results on 14 benchmarks and 3 restoration tasks, including super-resolution, deblurring and deraining. Strengths: - The method is grounded in a reasonable derivation and the assumption of a Gaussian prior for restoration. The visualization in Fig. 6 also validates the assumption. - The method performs consistently well on the mentioned 14 datasets. - Extensive experiments and ablation studies show the features of the developed method. Weaknesses: - The experiments all show cases with three base methods, one of which may be worse than the other two. But if all the base models are comparably good, can the method surpass other methods? Experiments with 2 or 4 base models, instead of 3, would be informative. - The experiments are for restoration tasks with local patterns, such as deraining and deblurring. However, dehazing or enhancement requires global restoration. What are the results of the ensemble comparisons for dehazing or enhancement? This can provide better insights into the generalizability of the method. - Some notations are duplicated or misused; e.g., "L" is used for the log-likelihood in Eq. 19 and also for the dimension length of the image. In Algorithm 2, both "Equation" and "Eq. " are used. The notations should be revised. Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the result of the case with 2 or 4 base models, instead of 3? - What are the results for dehazing or enhancement? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are mentioned on Page 9 and include the "trade-off between runtime and performance." A discussion and possible solution are provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q-1.** The experiments all show the cases of three base methods, and one of them may be worse than the other two methods. But if all the base models are comparably good, can the method surpass other methods? Experiments with 2 or 4 base models, instead of 3, would be informative. **A-1.** Thank you for your advice. We have conducted additional experiments using two and four base models for the task of deblurring. The experimental results are shown in the last two columns of Table 1 in the rebuttal file. In the case of two base models, the ZZPM method reduces to Average, since their weights are equal. For the case of four base models, we selected NAFNet [Reference 1] as the fourth base model. From the experimental results, it is evident that our method consistently achieves the best ensemble results, whether using two or four base models. **Q-2.** The experiments are for the restoration task of local patterns such as deraining and deblurring. However, dehazing or enhancement requires global restoration. What are the results of the ensemble comparisons for dehazing or enhancement? This can provide better insights into the generalizability of the method. **A-2.** Thank you for your advice. We have conducted additional experiments on low-light image enhancement (LLIE) and dehazing. The experimental results are shown in the first two columns of Table 1 in the rebuttal file. We also provide two visual comparisons in Figures 1 and 2 of the rebuttal file. The datasets used for the experiments are LOLv1 [Reference 2] for LLIE and OTS [Reference 3] for dehazing. The pre-trained models are provided by their authors. For your convenience, we also include error maps between the restoration results and ground-truths, along with the visual results. Darker error maps indicate better performance. From the results, it is evident that our method outperforms other ensemble methods in both quantitative and qualitative measures. 
**Q-3.** Some notations are duplicate or misused, like "L" is used for log-likelihood in Eq. 19 and also for the dimension length of the image. In Algorithm 2, both "Equation" and "Eq. " are used. The notations should be revised. **A-3.** Thank you for your careful suggestions. We will revise them accordingly. **Question 1.** What is the result of the case with 2 or 4 base models, instead of 3? **Answer 1.** Please refer to **A-1** and the uploaded rebuttal file. **Question 2.** What are the results for dehazing or enhancement? **Answer 2.** Please refer to **A-2** and the uploaded rebuttal file. [Reference 1] Simple baselines for image restoration. In ECCV, 2022. [Reference 2] Deep retinex decomposition for low-light enhancement. arXiv preprint arXiv:1808.04560, 2018. [Reference 3] Benchmarking single-image dehazing and beyond. IEEE TIP, 28(1):492–505, 2018. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer E4or Comment: Thanks for your prompt response. My concerns have been addressed, and I keep my previous positive rating. --- Rebuttal 2: Title: Response to Reviewer E4or Comment: Thank you so much for the constructive comments you've provided. We are truly thankful for your recognition of our work.
Summary: This paper proposes an ensemble algorithm called EnsIR for image restoration tasks using Gaussian mixture models (GMMs). The method partitions pixels into range-wise bins, formulates the ensemble as GMMs over these bins, and solves for ensemble weights using expectation-maximization. The weights are stored in a lookup table for efficient inference. Experiments are conducted on super-resolution, deblurring and deraining tasks. Strengths: 1. Provides a formulation of image restoration ensemble as GMMs. 2. Experiments conducted on multiple image restoration tasks and datasets. Weaknesses: 1. Marginal improvement over existing methods: The performance gains are minimal compared to simpler approaches. For example, on the Rain100H dataset (Table 5), the proposed method achieves a PSNR of 31.725, only marginally better than the Average (31.681) and ZZPM (31.679) baselines. In some cases, like on Rain100L, the improvement is less than 0.6 dB over averaging. 2. Lack of distinction from existing approaches: The paper does not clearly articulate how this method fundamentally improves upon or addresses limitations of existing ensemble techniques. The core idea of using GMMs and a LUT for weighting seems to be a combination of existing approaches, and the paper doesn't adequately explain its novelty. 3. Insufficient analysis of results: There is a notable lack of in-depth analysis of the experimental results. For instance, the paper doesn't discuss why the method performs worse on some deraining tasks (e.g., Test100 in Table 5) compared to HGBT. This lack of analysis makes it difficult to understand the method's strengths and weaknesses. 4. Inconsistency in parameter selection: The ablation study in Table 1 shows that a bin width of 16 achieves the best PSNR (31.742), yet the authors choose 32 as the default "for the balance of efficiency and performance" (line 220) without adequate justification for this trade-off. 5. 
Limited efficiency advantages: The paper claims to address efficiency, but Table 6 shows that the proposed method (0.1709s) is significantly slower than Average (0.0003s) and ZZPM (0.0021s) approaches. The efficiency gain over regression-based methods is not a strong selling point given the performance trade-offs. 6. Shallow theoretical analysis: While the paper provides derivations in the Appendix, the core idea essentially reduces to a lookup table for ensemble weights. The theoretical contribution and novelty are limited, especially given the marginal performance improvements. 7. Overstated claims: The paper claims to "consistently outperform" existing methods (lines 17-19), but this is not supported by the results, particularly in deraining tasks where it underperforms HGBT on some datasets. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Why was bin size 32 chosen as default when 16 performed better in ablations? 2. How does this method fundamentally differ from and improve upon existing ensemble approaches? 3. What explains the performance degradation on deraining tasks? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The limitations section is brief and does not adequately address key shortcomings like marginal gains and lack of novelty. A more thorough discussion of limitations would strengthen the paper. Given the marginal improvements, lack of novelty, and limited analysis, I recommend rejecting this paper. Significant revisions would be needed to make this a compelling contribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q-1.** Marginal improvement over existing methods: The performance gains are minimal compared to simpler approaches. For example, on the Rain100H dataset (Table 5), the proposed method achieves a PSNR of 31.725, only marginally better than the Average (31.681) and ZZPM (31.679) baselines. In some cases, like on Rain100L, the improvement is less than 0.6 dB over averaging. **A-1.** As noted by Reviewer EWje, our method is an ensemble approach that does not require extra training or fine-tuning, rather than a new architecture or network. Our method is designed for the inference stage and can be applied off-the-shelf to all existing restoration models. It would be unreasonable to expect an ensemble method to achieve improvements over 0.6 dB. However, our method shows more stable and significant improvements over Average and ZZPM, as demonstrated in Tables 3-5. An ensemble method capable of improving performance by 0.2 dB would be beneficial for competition participants. **Q-2.** Lack of distinction from existing approaches: The paper does not clearly articulate how this method fundamentally improves upon or addresses limitations of existing ensemble techniques. The core idea of using GMMs and LTU for weighting seems combination of existing approaches, and the paper doesn't adequately explain its novelty. **A-2.** The existing ensemble methods do not involve Gaussian Mixture Models (GMM) or Lookup Tables (LUT), while our method addresses the ensemble problem in image restoration using GMM, the Expectation-Maximization (EM) algorithm, and LUT. Unlike previous works, our derivation shows that ensemble in image restoration can be transformed into multiple GMM problems, where the weights of the GMMs serve as the ensemble weights. We then leverage a modified EM algorithm to solve the GMMs, with the means of each Gaussian distribution known as prior knowledge. LUT is used to save the estimated weights for inference. 
Our contributions lie in reformulating the restoration ensemble problem as GMMs, modifying the EM algorithm to solve these GMMs, and ultimately proposing a novel ensemble method for image restoration. **Q-3.** Insufficient analysis of results: There is a notable lack of in-depth analysis of the experiment results. For instance, the paper doesn't discuss why the method performs worse on some deraining tasks (e.g., Test100 in Table 5) compared to HGBT. This lack of analysis makes it difficult to understand the method's strengths and weaknesses. **A-3.** In Table 5, our method (32.002 dB, 0.9268) clearly outperforms HGBT (31.988 dB, 0.9241) on the Test100 dataset. Regarding the analysis of results, we have discussed its limitations, such as "if all base models fail, ensemble methods cannot generate a better result," in Section 4.2.4. We illustrated the ensemble weights, image features, and pixel distributions in Figures 4-6 in the Appendix. Additionally, we analyzed the scenario where one model may consistently outperform others in Section 4.2.2. **Q-4.** Inconsistency in parameter selection: The ablation study in Table 1 shows that a bin width of 16 achieves the best PSNR (31.742), yet the authors choose 32 as the default "for the balance of efficiency and performance" (line 220) without adequate justification for this trade-off. **A-4.** As noted in Table 1 of the manuscript, a bin width of 16 is slow (1.2460 seconds per image). When dealing with thousands of images, it would take hours to obtain ensemble results for a test set. Therefore, we chose a bin width of 32, which is over seven times faster than a bin width of 16. The method is designed for real-world industrial scenarios, so balancing efficiency and performance is a crucial reason for choosing a bin width of 32. 
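To illustrate the bin-width trade-off and the lookup-table inference discussed in A-4, here is a hypothetical sketch. The joint-bin indexing scheme and the averaging fallback for unseen bins are our own simplifications of the ideas described in the rebuttals, not the paper's exact implementation:

```python
import numpy as np

def joint_bin_key(pixel_preds, bin_width=32):
    """Joint bin key for one pixel: each base model's value range bin.

    A smaller bin_width gives more (finer) bins, hence better weights but a
    larger LUT and slower estimation, matching the 16-vs-32 trade-off in A-4.
    """
    return tuple(int(v) // bin_width for v in pixel_preds)

def lut_blend(preds, lut, bin_width=32):
    """Blend K base-model predictions pixel-wise using a weight LUT.

    preds: (K, N) flattened predictions of K base models (8-bit value range).
    lut:   dict mapping a joint bin key to a (K,) weight vector.
    Bins missing from the LUT fall back to plain averaging (equal weights),
    mirroring the default described elsewhere in these rebuttals.
    """
    K, N = preds.shape
    out = np.empty(N)
    for n in range(N):
        key = joint_bin_key(preds[:, n], bin_width)
        w = lut.get(key, np.full(K, 1.0 / K))
        out[n] = w @ preds[:, n]
    return out

preds = np.array([[100.0, 40.0],    # model 0's predictions for 2 pixels
                  [110.0, 200.0]])  # model 1's predictions for 2 pixels
lut = {(3, 3): np.array([0.8, 0.2])}  # weights estimated for bin (3, 3)
out = lut_blend(preds, lut)

assert np.isclose(out[0], 0.8 * 100 + 0.2 * 110)  # bin (3, 3) found in LUT
assert np.isclose(out[1], 0.5 * (40 + 200))       # unseen bin -> average
```

The per-pixel loop mirrors the dynamic nesting over base models mentioned in A-4 of the first rebuttal thread, which is what makes direct GPU vectorization awkward.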
**Q-5.** Limited efficiency advantages: The paper claims to address efficiency, but Table 6 shows that the proposed method (0.1709s) is significantly slower than Average (0.0003s) and ZZPM (0.0021s) approaches. The efficiency gain over regression-based methods is not a strong selling point given the performance trade-offs. **A-5.** From Tables 1, 5, and 6 of the manuscript, our method with a bin width of 128 still outperforms ZZPM (31.702 vs. 31.679 on Rain100H) and runs at a comparable speed (0.0059 seconds vs. 0.0021 seconds). This demonstrates both the efficiency and performance gains of our method. **Q-6.** Shallow theoretical analysis: While the paper provides derivations in the Appendix, the core idea essentially reduces to a lookup table for ensemble weights. The theoretical contribution and novelty are limited, especially given the marginal performance improvements. **A-6.** The core idea of our method is **not related to the lookup table**. The core idea is to transform the ensemble problem of image restoration into multiple Gaussian mixture models and then solve them using a modified EM algorithm. The lookup table is simply used to store the solved weights for inference. **Q-7.** Overstated claims: The paper claims to "consistently outperform" existing methods (lines 17-19), but this is not supported by the results, particularly in deraining tasks where it underperforms HGBT on some datasets. **A-7.** Although our method achieves top performance in 27 out of 28 metrics across 14 datasets, which could be considered "consistently outperforming", we will revise "consistently" to "overall". **Question 1.** Why was bin size 32 chosen as default when 16 performed better in ablations? **Answer 1.** Please refer to **A-4.** **Question 2.** How does this method fundamentally differ from and improve upon existing ensemble approaches? **Answer 2.** Please refer to **A-2** and **A-6**. 
**Question 3.** What explains the performance degradation on deraining tasks? **Answer 3.** Please refer to **A-3.** --- Rebuttal 2: Comment: Thanks for your detailed rebuttal. After careful consideration, I maintain my original rating of 4 (Borderline reject). My decision is primarily based on two key concerns: 1. Marginal improvements: While I acknowledge the overall improvements across various tasks (SR, image deblurring, deraining), the gains are often minimal (at the 0.01/0.001 dB level). As the proposed approach belongs to the classical ensemble learning field, it should be primarily compared to well-established methods like GBDT or HGBT. The marginal improvements over these approaches do not sufficiently justify the novelty of your method. More importantly, the paper and rebuttal lack an in-depth analysis in the Experiments section explaining why these marginal improvements occur and under what conditions your method excels or falls short. 2. Limited efficiency advantages: Your method does not seem to adequately address the limitations of existing approaches. For instance, in Table 5 (image deraining results), for the Test100 dataset, both GBDT and HGBT perform worse than the original model. The proposed approach, while showing some improvement, does not significantly overcome this limitation. This raises questions about its practical applicability compared to well-established algorithms like GBDT or HGBT. I think that while the paper presents an interesting approach to ensemble learning for image restoration, its marginal improvements and limited contributions still outweigh its strengths. The proposed method, when compared to well-known algorithms like GBDT or HGBT, does not appear sufficiently practical or innovative to warrant acceptance in its current form. --- Rebuttal Comment 2.1: Title: Response to Reviewer j7aF Comment: Thank you for your response and consideration. 
We would like to address your remaining concerns individually: Regarding the first concern about the improvement and analysis, we would like to respond from three perspectives: 1. We wish to emphasize that the enhancement of our method is a complementary benefit that requires no additional training or fine-tuning. It can serve as a vital auxiliary tool for real-world image restoration applications. For instance, in the recent NTIRE 2024 competition, the difference between the best and second-best results was often less than 0.01 dB in PSNR, as shown in the table below. In such cases, our method, which consistently outperforms existing ensemble techniques, can prove decisive. 2. Our approach falls under the category of weakly supervised methods that only require the means and variances of bin sets from degraded images, whereas traditional methods like GBDT and HGBT are fully supervised, requiring access to all pixel values. Under these circumstances, our method is still more stable across various image restoration tasks and less sensitive to performance variations among base models. It is therefore notable that, despite this weaker supervision, our method consistently demonstrates superiority over supervised methods across five tasks and 16 benchmarks. 3. Despite the improvement of 0.001 dB on Test100, our method can achieve a significantly larger improvement of up to 0.20 dB on many other benchmarks, such as Test1200 (33.276 for GBDT versus 33.475 for ours). The second concern is mainly about the phenomenon where ensemble results are inferior to those of the original model, which occurs when one base model is consistently better than the other base models on a benchmark. We would like to respond with two points: 1. In real-world scenarios, where the ground-truths of the test set are often unavailable, it is difficult to determine which base models are underperforming and contributing to the ensemble's lower results compared to a single original model. 
In such cases, the best course of action is to mitigate this limitation of ensemble methods, and our approach has demonstrated superior performance in doing so compared to other ensemble techniques. 2. In common industrial scenarios, all base models tend to be similar in structure and performance. As mentioned on Lines 240-243, we aimed to assess our method's robustness across architectures by conducting experiments using models with different structures and, at times, diverse performances on several test sets (for example, 30.292 for MPRNet and 31.194 for MAXIM versus 32.025 for Restormer). However, this phenomenon could easily be avoided in the first place by selecting similar structures, or even the same network with different initial states, which is common practice in industrial settings. Therefore, we do not consider this phenomenon a fundamental limitation of ensemble methods. **Table A.** PSNR comparison of the best and second-best results from four challenges at the 9th New Trends in Image Restoration and Enhancement (NTIRE) Workshop and Challenges in 2024. |Challenge | 2nd best | Best | | ------------- | ------------- | ------------- | |Stereo Image Super-Resolution Challenge - Track 1 | 23.6492 | 23.6496 | |RAW Image Super Resolution Challenge | 43.39 | 43.40 | |Low Light Enhancement Challenge | 24.52 | 24.52 | |Image Shadow Removal Challenge - Track 1 | 24.81 | 24.81 |
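As context for the sub-0.01 dB margins shown in Table A: PSNR is logarithmic in MSE, so very small MSE changes translate into very small dB gaps. A quick check using the standard PSNR formula (our own illustration, not taken from the paper):

```python
import math

def psnr(mse, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10.0 * math.log10(peak ** 2 / mse)

# Shrinking MSE by a factor of 10**(-0.001) raises PSNR by exactly 0.01 dB,
# i.e. only a ~0.23% relative reduction in MSE.
gap = psnr(50.0 * 10 ** (-0.001)) - psnr(50.0)
```

This is why the rebuttal argues that a stable 0.01-0.2 dB ensemble gain can be decisive in competition settings.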
Summary: This paper reformulates the ensemble problem of image restoration into Gaussian mixture models (GMMs) and employs an expectation-maximization (EM)-based algorithm to estimate ensemble weights for aggregating prediction candidates. Importantly, the authors' method achieves state-of-the-art performance without training. I think this work is interesting and can inspire researchers. Strengths: 1. The proposed method is more interpretable than traditional deep learning (ensemble learning) methods. 2. The proposed method obtains state-of-the-art performance compared to other ensemble methods. 3. The authors performed comparisons on a large number of datasets and image restoration tasks, which fully validates the generalization of the ensemble method. Weaknesses: 1. I have some confusion. How are the averages in Table III achieved? Why is averaging able to achieve higher results than every single method (non-ensemble method)? Isn't it the average of every single method? 2. I would like to know the computational burden of the method. 3. Ensemble learning doesn't provide a "huge" gain, but it makes sense. I'm just wondering how much extra computation is introduced by this gain? Table 6 reports the runtime, and I would like to know the difference in time between the ensemble and the non-ensemble. 4. It's not easy to see the difference in Figure 1, so the authors could add an error map or a different figure. Technical Quality: 3 Clarity: 3 Questions for Authors: see weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q-1.** I have some confusion. How are the averages in Table III achieved? Why is averaging able to achieve higher results than every single method (non-ensemble method)? Isn't it the average of every single method? **A-1.** Yes, the "Average" refers to the average of the results of all single restoration models. Similar to the ensemble approaches in high-level tasks like classification and regression, the ensemble method of averaging also works well in image restoration tasks [References 1-4]. The reasons why it can be better than every single method can be interpreted in two aspects. 1. Because image restoration is an ill-posed problem, the majority of restoration models produce results that deviate from the ground truth to some extent. Averaging the results of several models can alleviate this deviation. 2. The restoration task could essentially be regarded as multiple prediction/classification problems, where the output values range from 0 to 255 for each pixel. If all base models perform comparably well, their ensemble will be more likely to produce results that match the clean images more closely. **Q-2.** I would like to know the computational burden of the method. **A-2.** The computational complexity of our method is $\mathcal{O}(3HWMT^M)$, where $3HW$ represents the number of pixels, $M$ is the number of base models, and $T$ is the number of bins. The average runtimes of our method with different configurations are presented in Table 1 of the manuscript. Table 2 of the PDF rebuttal file reports the average runtimes for all steps, including preprocessing an image, inference by each base model, generating the ensemble using our method, and saving the result as an image file. Table 6 of the manuscript compares the average runtimes of different ensemble methods. **Q-3.** Ensemble learning doesn't provide a "huge" gain, but it makes sense. 
I'm just wondering how much extra computation is introduced by this gain? Table 6 reports the runtime, and I would like to know the difference in time between the ensemble and the non-ensemble. **A-3.** We have measured the runtime of each step, as shown in Table 2 of the rebuttal file. As we can see, our method does not introduce significant additional time (0.1709 seconds compared to a total time of 0.9039 seconds). Inference with the three base models takes 0.7330 seconds, while our ensemble step takes 0.1709 seconds. **Q-4.** It's not easy to see the difference in Figure 1, so the author could add an error map or a different figure. **A-4.** Thank you for your helpful advice. We will revise our figures with additional error maps. For the images in the rebuttal file, we have included error maps for your convenience. [Reference 1] Ntire 2017 challenge on single image super-resolution: Dataset and study. In CVPRW, 2017. [Reference 2] Ntire 2021 challenge on image deblurring. In CVPR, 2021. [Reference 3] Ntire 2023 challenge on image super-resolution (x4): Methods and results. In CVPR, 2023. [Reference 4] Ntire 2024 challenge on image super-resolution (×4): Methods and results. In CVPR, 2024. --- Rebuttal Comment 1.1: Comment: Thanks to the authors' detailed response, I choose to maintain my score (Accept). --- Reply to Comment 1.1.1: Title: Response to Reviewer EWje Comment: We sincerely appreciate your valuable feedback and constructive comments. We are truly thankful for your recognition of our work.
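For completeness regarding A-1 above, the "Average" baseline is simply the per-pixel mean of the base models' outputs. A minimal NumPy sketch (our own illustration, not the authors' code):

```python
import numpy as np

def average_ensemble(outputs):
    """Per-pixel mean of M restored images (the 'Average' baseline).

    outputs: list of M arrays of identical shape (H, W, C), float in [0, 255]
    Returns a uint8 image of the same shape.
    """
    mean = np.mean(np.stack(outputs, axis=0), axis=0)
    return np.clip(np.rint(mean), 0, 255).astype(np.uint8)
```

Its cost is a single pass over the pixels, which is why its runtime (0.0003 s in Table 6) is negligible next to base-model inference.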
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers, ACs, SACs, and PCs for their effort and attention. We have uploaded a rebuttal file in PDF format to illustrate the tables and figures. Table 1 presents the experimental results of the ensemble for four tasks, i.e., low-light image enhancement (LLIE), dehazing, deblurring with two base models, and deblurring with four base models. Table 2 measures the time cost of each step. Figures 1 and 2 show the visual comparisons of ensemble for LLIE and dehazing, respectively. Due to space limitations, we list the newly cited works, including LOLv1 [1], OTS [2], RetinexFormer [3], RQ-LLIE [4], CIDNet [5], MixDehazeNet [6], DEA-Net [7], C2PNet [8], and NAFNet [9]. [1] Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. Deep retinex decomposition for low-light enhancement. arXiv preprint arXiv:1808.04560, 2018. [2] Boyi Li, Wenqi Ren, Dengpan Fu, Dacheng Tao, Dan Feng, Wenjun Zeng, and Zhangyang Wang. Benchmarking single-image dehazing and beyond. IEEE TIP, 28(1):492–505, 2018. [3] Yuanhao Cai, Hao Bian, Jing Lin, Haoqian Wang, Radu Timofte, and Yulun Zhang. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In ICCV, 2023. [4] Yunlong Liu, Tao Huang, Weisheng Dong, Fangfang Wu, Xin Li, and Guangming Shi. Low-light image enhancement with multi-stage residue quantization and brightness-aware attention. In ICCV, 2023. [5] Yixu Feng, Cheng Zhang, Pei Wang, Peng Wu, Qingsen Yan, and Yanning Zhang. You only need one color space: An efficient network for low-light image enhancement. arXiv preprint arXiv:2402.05809, 2024. [6] LiPing Lu, Qian Xiong, DuanFeng Chu, and BingRong Xu. Mixdehazenet: Mix structure block for image dehazing network. arXiv preprint arXiv:2305.17654, 2023. [7] Zixuan Chen, Zewei He, and Zhe-Ming Lu. Dea-net: Single image dehazing based on detail-enhanced convolution and content-guided attention. IEEE TIP, 2024. 
[8] Yu Zheng, Jiahui Zhan, Shengfeng He, Junyu Dong, and Yong Du. Curricular contrastive regularization for physics-aware single image dehazing. In CVPR, 2023. [9] Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In ECCV, 2022. Pdf: /pdf/2c21fcc3aef35dc2317cfdb550afaf736588d8a6.pdf
NeurIPS_2024_submissions_huggingface
2024
ADOPT: Modified Adam Can Converge with Any $\beta_2$ with the Optimal Rate
Accept (poster)
Summary: The submitted work analyzes the divergence of Adam and RMSprop in smooth nonconvex settings. The authors propose a new optimizer ADOPT, whose convergence does not depend on the second-moment coefficient $\beta_2$. The proposed optimizer is evaluated in toy settings, image classification, language modeling (finetuning settings), and reinforcement learning. Strengths: * This paper tackles the non-convergence problem of the Adam optimizer. As Adam is the go-to optimizer for deep learning, deeply understanding Adam optimizer in detail is valuable. * The paper resolves the non-convergence issue of Adam without requiring bounded noise assumptions or specific $\beta_2$. * The experiments are performed in diverse settings, including image classification, generative modeling, and deep reinforcement learning. Weaknesses: * The non-convergence of Adam is not an issue in practice and I am unsure about the practical implications of ADOPT over Adam. The non-convergence issue arises near the minima and most modern deep learning models are not trained to convergence. * The paper does not thoroughly investigate the sensitivity of ADOPT optimizer parameters in practical scenarios. A more comprehensive study on the optimizer parameters and comparison with Adam is required to understand its robustness and practical utility. Technical Quality: 3 Clarity: 3 Questions for Authors: * In realistic settings, where is the marginal convergence speed in ADOPT coming from? My understanding is that non-convergence of Adam would matter near the minima only. Faster training way before reaching the minima is not expected. In particular, in language modeling pretraining, convergence is usually not achieved. * Does the difference in the recommendation of the epsilon parameter for ADOPT simply arise because it is added inside the square root instead of outside? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See Weaknesses above. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We will answer your questions to address your concerns. Please also refer to the general response and the PDF attached with it. > The non-convergence of Adam is not an issue in practice and I am unsure about the practical implications of ADOPT over Adam. The non-convergence issue arises near the minima and most modern deep learning models are not trained to convergence. > In realistic settings, where is the marginal convergence speed in ADOPT coming from? My understanding is that non-convergence of Adam would matter near the minima only. Faster training way before reaching the minima is not expected. In particular, in language modeling pretraining, convergence is usually not achieved. > The paper does not thoroughly investigate the sensitivity of ADOPT optimizer parameters in practical scenarios. A more comprehensive study on the optimizer parameters and comparison with Adam is required to understand its robustness and practical utility. I agree with you that in modern deep learning models, such as large language models, training is rarely run until full convergence is achieved. However, even in such cases, Adam's non-convergence can be a problem. For example, when training with a large dataset or small batch size, the constant term in Theorem 3.1 can become non-negligible due to the large gradient noise. In such cases, Adam may become unstable even in the early stages of training. To confirm that this happens empirically, we have added a new experiment of pre-training of GPT-2. See the figure in the PDF attached to the general response for the results. We observed that ADOPT is always stable in this experiment, whereas Adam actually diverges in the early phase of training due to loss spikes when the batch size is small. We also observed that ADOPT performed slightly better even when the batch size was large. 
We believe that these results clearly demonstrate the practical effectiveness of ADOPT in modern deep learning. > Does the difference in the recommendation of the epsilon parameter for ADOPT simply arise because it is added inside the square root instead of outside? This is a good question. In fact, the reason that ADOPT requires larger $\epsilon$ than Adam is not simply due to the order of square root and addition. This can be explained by theory: as can be seen from Theorem 3.1, the convergence bounds for Adam are of the order of $\log(\epsilon^{-2})$, while the convergence bounds for ADOPT in Eq. (33) are of the order of $\epsilon^{-2}$. In other words, when $\epsilon$ is small, the convergence bound of ADOPT loosens more rapidly than that of Adam. For this reason, it is safer for ADOPT to use slightly larger $\epsilon$ than Adam. However, our default settings have been found to work robustly in all experiments, so tuning of $\epsilon$ is rarely needed in practice. We would be glad to respond to any further questions and comments that you may have. Thanks. ### References [1] github.com/karpathy/nanoGPT --- Rebuttal 2: Comment: I thank the authors for their responses. > ...when training with a large dataset or small batch size, the constant term in Theorem 3.1 can become non-negligible due to the large gradient noise. In such cases, Adam may become unstable even in the early stages of training. To confirm that this happens empirically, we have added a new experiment of pre-training of GPT-2. See the figure in the PDF attached to the general response for the results. We observed that ADOPT is always stable in this experiment, whereas Adam actually diverges in the early phase of training due to loss spikes when the batch size is small.... I am unsure if the statements from Theorem 3.1 generalize to realistic scenarios: 1. 
First, the Theorem corresponds to RMSprop, and there are various assumptions 2.1-2.3 and 2.5, which can become invalid in practical scenarios. 2. Regarding the GPT pretraining result, as the optimal learning rate range for the two optimizers can be very different, it is unclear if the learning rate for Adam was simply too large, whereas this wasn't the case for ADOPT. A compelling result would include performance vs. learning rate for the different batch sizes. I do not expect the authors to do these experiments in this short rebuttal. If the authors have already scanned the learning rates, I would request them to point me to the result. To explicitly show that ADOPT is more stable during early training, the authors can track the top eigenvalue of the pre-conditioned Hessian [1], which determines the stability of adaptive optimizers for large batches. An alternative experiment would involve plotting heatmaps of performance against learning rate and batch size. In small-scale experiments, if the authors can show that ADOPT either (i) has a smaller pre-conditioned sharpness during early training or (ii) achieves near-optimal performance over a large range of hyperparameters such as learning rate and batch size, then it would be convincing that ADOPT works reasonably well compared to Adam. [1] Adaptive Gradient Methods at the Edge of Stability, https://arxiv.org/abs/2207.14484 I would like to reiterate that I do not expect the authors to perform these experiments in this discussion period. My concerns regarding the practical implications of the non-convergence issues remain. Therefore, I am keeping my current score. --- Rebuttal Comment 2.1: Title: Response to the Comment to Reviewer bGLP Comment: Thank you for your comment. As you pointed out, Adam's loss spikes in the GPT-2 experiment may simply be due to the learning rate being too high, so we also ran the experiment with the learning rate lowered from the default of 6e-4 to 1e-4 for the case of a small batch size. 
The results are shown in the table below; for Adam, lowering the learning rate only slows down the timing of loss spikes and eventually causes divergence in training. ADOPT, on the other hand, is able to perform stable learning at all settings. Further reductions in the learning rate resulted in significantly slower training, so we did not include them in the table. | Optimizer | LR | 50K iters | 100K iters | 150K iters | 200K iters | | ------------- | ---- | :----------: | :------------: | :-----------: | :----------: | | Adam | 6e-4 | 7.64 | 7.54 | - | | | Adam | 1e-4 | 3.26 | 3.17 | 7.09 | 7.56 | | ADOPT | 6e-4 | 3.22 | 3.17 | 3.13 | 3.10 | | ADOPT | 1e-4 | 3.16 | 3.09 | 3.04 | 3.02 | This result is also consistent with theory: the constant term for Adam's convergence bounds in Theorem 3.1 is independent of the learning rate, so a failure in training cannot be prevented by reducing the learning rate. As you stated, Theorem 3.1 is for RMSprop, which does not account for momentum, but there is a similar constant term in Adam's convergence bounds in prior studies (e.g., [1]) that is independent of the learning rate. Thus, this result is a practical example of how Adam can become unstable when training on a large-scale data set or with a small batch size, as the gradient noise increases in such cases. Such instability is not unique to this experimental setting, as it has often been reported to be observed in the training of larger LLMs [2, 3]. We hope that this will address your concerns about our submission. We would be glad to respond to any further comments that you may have during the discussion period. Thanks! ### References [1] Alexandre Défossez, Leon Bottou, Francis Bach, and Nicolas Usunier. A simple convergence proof of adam and adagrad. Transactions on Machine Learning Research, 2022. ISSN 2835-8856. [2] Molybog, Igor, et al. "A theory on adam instability in large-scale machine learning." arXiv preprint arXiv:2304.09871 (2023). [3] Takase, Sho, et al. 
"Spike No More: Stabilizing the Pre-training of Large Language Models." arXiv preprint arXiv:2312.16903 (2023). --- Rebuttal 3: Title: Response to the Additional Comment to Reviewer bGLP Comment: Thank you for your comment. > For the optimal hyperparameters, such as learning rate, does ADOPT always perform better than Adam? We have summarized the result regarding the optimal hyperparameter settings in terms of the batch size and the learning rate in the table below. The result shows the test loss after running 100K training iterations. The best results are shown in bold. When comparing the best results, ADOPT shows a slightly better result than Adam. **Adam** | Batch size \ LR | 6e-5 |1e-4 | 6e-4 | | ----------------- | ------- |------ | -------- | | 96 | 3.21 | 3.17 | 7.54 | | 480 | - | 3.31 | **3.02** | **ADOPT** | Batch size \ LR | 6e-5 |1e-4 | 6e-4 | | ----------------- | --------- |--------- | ----- | | 96 | 3.11 | 3.09 | 3.17 | | 480 | - | **2.98** | 3.00 | > Does ADOPT require a smaller warmup (or no warmup)? Is warmup used in the GPT experiments? In the GPT-2 experiment, following the default setting of the nanoGPT code base, the linear warmup is used for the first 2000 iterations. Please refer to the original code provided at github.com/karpathy/nanoGPT/blob/master/train.py for more detailed settings. We did not change the experimental settings except for choices of the optimizer, the batch size, and the learning rate. > Is the range of optimal learning rates longer for ADOPT compared to Adam? Or the size of the range is the same but it is shifted? As can be seen from the table above, ADOPT seems to work more robustly with respect to the choice of learning rate compared to Adam. If you have any further comments or questions, we would be glad to respond to them. Thanks!
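For readers skimming this thread: as described in the reviews, ADOPT normalizes the current gradient by the *previous* second-moment estimate and only then applies momentum, so the current gradient never enters its own normalizer. A single-parameter sketch under that description (our paraphrase, not the authors' reference implementation; the hyperparameter defaults are illustrative):

```python
import numpy as np

def adopt_step(theta, grad, m, v, lr=1e-3, b1=0.9, b2=0.9999, eps=1e-6):
    """One ADOPT-style update for a scalar parameter (sketch).
    Normalization uses the previous second moment v, so the current
    gradient never normalizes itself; v is refreshed afterwards for
    use in the next step."""
    m = b1 * m + (1 - b1) * grad / max(np.sqrt(v), eps)  # normalize, then momentum
    theta = theta - lr * m
    v = b2 * v + (1 - b2) * grad ** 2                    # second moment for the next step
    return theta, m, v
```

On a toy quadratic this descends stably; the key contrast with Adam is the order of the normalization and momentum updates, not the bias corrections.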
Summary: The paper titled "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" introduces a new adaptive gradient method named ADOPT. This method aims to address the non-convergence issues of the Adam optimization algorithm. Adam, despite its popularity in deep learning, does not theoretically converge unless the hyperparameter β2 is chosen in a problem-dependent manner. The paper proposes ADOPT, which achieves the optimal convergence rate with any choice of β2, without relying on the bounded noise assumption. ADOPT modifies Adam by removing the current gradient from the second moment estimate and changing the order of the momentum update and normalization. The paper also presents extensive numerical experiments showing ADOPT's superior performance across various tasks, including image classification, generative modeling, natural language processing, and deep reinforcement learning. Strengths: 1. The paper provides a robust theoretical foundation for the proposed ADOPT algorithm, demonstrating its convergence with an optimal rate 2. The proposed method is practically significant as it eliminates the need for problem-dependent tuning of the β2 parameter, making it more user-friendly and broadly applicable. 3. The paper includes comprehensive experiments across various tasks, showing that ADOPT consistently outperforms Adam and its variants. 4. The paper clearly identifies the non-convergence problem of Adam and provides a well-justified solution in ADOPT. Weaknesses: 1. The analysis still relies on the assumption that the second moment of the stochastic gradient is uniformly bounded, which might not always hold in practice. 2. The paper could have included comparisons with more recent optimizers beyond Adam to strengthen its empirical claims, like the recent work CO2 ("CO2: Efficient distributed training with full communication-computation overlap." ICLR (2024).). 3. 
The paper does not thoroughly discuss the potential computational overhead introduced by ADOPT compared to Adam and other optimizers. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How can the assumption on the second moment being uniformly bounded be relaxed? Are there any potential methods or future work suggested to address this? 2. What is the computational cost of ADOPT compared to Adam and AMSGrad? Is there a significant overhead that might affect its practicality in large-scale applications? 3. While ADOPT performs well on a variety of tasks, how does it perform on tasks not covered in the experiments? Are there specific types of problems where ADOPT might not be as effective? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: see the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We will answer your questions to address your concerns. Please also refer to the general response and the attached PDF. > The analysis still relies on the assumption that the second moment of the stochastic gradient is uniformly bounded, which might not always hold in practice. > How can the assumption on the second moment being uniformly bounded be relaxed? Are there any potential methods or future work suggested to address this? As you mentioned, our analysis still relies on the bounded second moment assumption, which is sometimes violated in practice, although it is milder than the bounded stochastic gradient assumption used in the previous works. A promising direction to further relax this assumption is to use the bounded variance assumption, where $\mathbb{E}[\|g - \nabla f\|^2]$ is assumed to be bounded. We mentioned it in the third paragraph of Section 6 as a limitation and future work. > The paper could have included comparisons with more recent optimizers beyond Adam to strengthen its empirical claims, like the recent work CO2 ("CO2: Efficient distributed training with full communication-computation overlap." ICLR (2024).). Thank you for your suggestion. We think that CO2 is specifically designed for efficient distributed training, so its contributions are orthogonal to ours. In our experiment, we compare with other optimizers that share the motivation of addressing the non-convergence issue of Adam (e.g., AMSGrad). Of course, combining the techniques of other works (e.g., CO2) with our ADOPT could be a promising direction to further improve the performance, but we leave it for future work. > The paper does not thoroughly discuss the potential computational overhead introduced by ADOPT compared to Adam and other optimizers. > What is the computational cost of ADOPT compared to Adam and AMSGrad? Is there a significant overhead that might affect its practicality in large-scale applications? 
Thank you for your comment. Let us clarify the computational cost of ADOPT compared to Adam and AMSGrad. The computational cost of ADOPT is equal to that of Adam and less than that of AMSGrad. Since both Adam and ADOPT need to store the momentum $m_t$ and second-order moment $v_t$, their memory costs are about the same. In addition, Adam computes bias corrections when updating parameters, while our implementation of ADOPT omits them, making ADOPT slightly less computationally expensive than Adam. On the other hand, AMSGrad requires storing $\hat{v}_t$ in addition to $m_t$ and $v_t$, so its memory cost is larger than that of ADOPT (and Adam). > While ADOPT performs well on a variety of tasks, how does it perform on tasks not covered in the experiments? Are there specific types of problems where ADOPT might not be as effective? As far as we have experimented, ADOPT has always shown better results than Adam. To further strengthen our experimental results, we also performed pre-training experiments on GPT-2 and found that ADOPT performed better than Adam in those experiments as well. In particular, we observed that Adam suffers from loss spikes when the batch size is small and the gradient noise is large, while ADOPT always remains stable. See the general response for more details. We would be glad to respond to any further questions and comments that you may have. Thanks. --- Rebuttal 2: Title: A Gentle Reminder to Reviewer AhBk Comment: Thank you again for your efforts in reviewing our paper and your constructive comments. The discussion period will end soon, so please let us know if you have further comments about our reply to your feedback. Thanks.
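To visualize the memory comparison in the answer above: AMSGrad carries a third per-parameter buffer (the running maximum of the second moment) on top of the $(m_t, v_t)$ pair that Adam and ADOPT keep. A scalar sketch of the standard AMSGrad update (textbook form, not tied to this paper's code; hyperparameter defaults are illustrative):

```python
import math

def amsgrad_step(theta, grad, m, v, v_hat, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update for a scalar parameter. The extra state
    v_hat (the running maximum of v over time) is what raises
    AMSGrad's memory cost above Adam's and ADOPT's."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    v_hat = max(v_hat, v)  # third buffer, absent in Adam and ADOPT
    theta = theta - lr * m / (math.sqrt(v_hat) + eps)
    return theta, m, v, v_hat
```

Per parameter tensor, Adam and ADOPT thus store two optimizer buffers while AMSGrad stores three, i.e. roughly 50% more optimizer state.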
Summary: Motivated by the counterexample due to Reddi et al., this work designs an adaptive optimizer that converges for choices of beta_2 that are independent of the problem instance. The analysis works under a more general condition than previous works. Strengths: It has overall good presentation. The main scope and results are presented well. Weaknesses: The theoretical results look sound. I have some questions below. Technical Quality: 3 Clarity: 3 Questions for Authors: - In Theorem 4.1, please specify the choice of hyperparameters (beta_1, beta_2, \eps). I could not find this even in the appendix, even though the main text said it can be found there. In particular, is the choice found by your analysis similar to what people use in practice? - If the choice of the betas does not match your theoretical prediction, I think the claim that the new optimizer ADOPT alleviates parameter tuning is an overclaim. At the end of the day, it seems that the authors have to tune those parameters depending on the settings. Hence, practically speaking, there's no reason for practitioners to use the proposed algorithm over the original Adam. - It seems that for your algorithm, $\epsilon$ needs to be chosen much larger than that of Adam. How sensitive is this choice? Why is this the case? $\epsilon$ really should be for numerical stability in the original Adam. Does your analysis suggest such a large value for $\epsilon$? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Overall, the main contribution in this work seems quite marginal. For a submission of this type, I would support a clear acceptance if - (i) the theoretical guarantees have a noticeable innovation over the previous ones. - (ii) or the proposed method works much better than the previous ones. It seems that the theoretical improvement is the removal of the uniformly upper bounded assumption of stochastic gradients. 
To me, this looks like a minor improvement over the previous work. Moreover, the resulting algorithm does not seem to have major advantages over Adam. In particular, unlike the claim made in this paper, I don't think the resulting algorithm alleviates the hyper-parameter tuning. I don't see why practitioners should choose this algorithm over Adam. Therefore, I recommend "borderline accept" for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We will answer your questions to address your concerns. Please also refer to the general response and the PDF attached with it. > In Theorem 4.1, please specify the choice of hyperparameters (beta_1, beta_2, \eps). I could not find this even in the appendix, even though the main text said it can be found there. In particular, is the choice found by your analysis similar to what people use in practice? > If the choice of the betas does not match your theoretical prediction, I think the claim that the new optimizer ADOPT alleviates parameter tuning is an overclaim. At the end of the day, it seems that the authors have to tune those parameters depending on the settings. Hence, practically speaking, there's no reason for practitioners to use the proposed algorithm over the original Adam. The concrete convergence bound is provided in Eq. (33) in Appendix E, which shows that the convergence rate is $O(1/\sqrt{T})$ for any $(\beta_1, \beta_2, \epsilon)$. The bound gets tighter when $\beta_2$ is chosen close to 1, which corresponds to practical choices. In fact, as shown in Figure 1, training with ADOPT tends to be stable when $\beta_2$ is close to 1, although convergence is achieved even with small $\beta_2$. In terms of $\beta_1$, there is a gap between theory and practice. In theory, the bound is tighter when $\beta_1$ is small, whereas $\beta_1 = 0.9$ is used in practice. This gap is consistently observed in the literature on the convergence analysis of Adam (e.g., [2, 3]). To the best of our knowledge, the effectiveness of momentum in Adam-type optimizers is still an open question. > It seems that for your algorithm, $\epsilon$ needs to be chosen much larger than that of Adam. How sensitive is this choice? Why is this the case? $\epsilon$ really should be for numerical stability in the original Adam. Does your analysis suggest such a large value for $\epsilon$? 
You are correct that ADOPT requires a larger $\epsilon$ than Adam. This can be explained by theory: as can be seen from Theorem 3.1, the convergence bounds for Adam are of the order of $\log(\epsilon^{-2})$, while the convergence bounds for ADOPT in Eq. (33) are of the order of $\epsilon^{-2}$. In other words, when $\epsilon$ gets small, the convergence bounds of ADOPT loosen more rapidly than those of Adam. For this reason, it is safer for ADOPT to use a slightly larger value of $\epsilon$ than Adam. Our default settings have been found to work robustly in all experiments, so tuning of $\epsilon$ is rarely needed in practice. > Overall, the main contribution in this work seems quite marginal. For a submission of this type, I would support a clear acceptance if > (i) the theoretical guarantees have a noticeable innovation over the previous ones. > (ii) or the proposed method works much better than the previous ones. > It seems that the theoretical improvement is the removal of the uniformly upper bounded assumption of stochastic gradients. To me, this looks like a minor improvement over the previous work. > Moreover, the resulting algorithm does not seem to have major advantages over Adam. In particular, unlike the claim made in this paper, I don't think the resulting algorithm alleviates the hyper-parameter tuning. I don't see why practitioners should choose this algorithm over Adam. Thank you for your suggestions. To demonstrate the practical effectiveness of our ADOPT more clearly, we have added an experiment on pre-training language models, in which Adam tends to suffer from optimization difficulties like loss spikes. In this experiment, we used the nanoGPT [1] code base to run GPT-2 pre-training on OpenWebText. We observed that Adam suffered from loss spikes and completely failed to train when the batch size was small, while ADOPT was able to train stably even with small batch sizes. 
We also observed that ADOPT performed slightly better than Adam even for large batch sizes. This result is consistent with theory: when the batch size is small, the gradient noise is larger, and the constant term of Adam's convergence bound in Theorem 3.1 also gets larger. Thus, the experimental results confirm that ADOPT has an advantage over Adam even in practical cases such as pre-training language models. We hope these results will address your concerns. We would be glad to respond to any further questions and comments that you may have. Thanks. ### References [1] github.com/karpathy/nanoGPT [2] Alexandre Défossez, Leon Bottou, Francis Bach, and Nicolas Usunier. A simple convergence proof of Adam and Adagrad. Transactions on Machine Learning Research, 2022. ISSN 2835-8856. [3] Bohan Wang, Jingwen Fu, Huishuai Zhang, Nanning Zheng, and Wei Chen. Closing the gap between the upper bound and lower bound of Adam's iteration complexity. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for your responses. I read through them, and they partially address my concerns. Hence, I'll increase the score to 6. On a side note, it seems that the benefits of momentum seem to be theoretically justified in the nonsmooth and nonconvex setting (as shown by [1]), and the authors might find it helpful to further clarify the main contributions of this work. [1] Ahn and Cutkosky, "Adam with model exponential moving average is effective for nonconvex optimization" (https://arxiv.org/pdf/2403.02648) --- Rebuttal 2: Title: Response to the Additional Comment by Reviewer qABQ Comment: Thank you for your additional comments. > On a side note, it seems that the benefits of momentum seem to be theoretically justified in the nonsmooth and nonconvex setting (as shown by [1]), and the authors might find it helpful to further clarify the main contributions of this work. 
We were not aware of the result of the paper you mentioned, so thank you for letting us know. As you pointed out, that paper does indeed partially explain the role of momentum in Adam, but there appear to be some limitations: first, their analysis is limited to the case $\beta_2=\beta_1^2$, which deviates from the practical choice. In addition, their analysis applies only to the case where Adam's update is clipped, which also deviates from the practical algorithm. Thus, the role of momentum in Adam is not yet fully understood and seems to be an open question. We promise to cite that paper and describe this point clearly in the final version. If you have any further comments or questions, we would be glad to respond to them. Thanks.
Summary: The paper proposes a new adaptive gradient method called ADOPT, which addresses the non-convergence issue of popular methods like Adam and RMSprop. The method modifies the calculation of the second moment estimates and the order of the momentum calculation and scaling operations. Extensive numerical experiments demonstrate that ADOPT achieves competitive or superior results compared to existing methods across various tasks. Strengths: The paper introduces a new adaptive gradient method, ADOPT, that is as easy to implement as Adam and admits a simple convergence proof. The paper gives an in-depth analysis of the convergence of ADOPT with toy examples, in comparison with the failure cases of Adam. The paper conducts comprehensive numerical experiments on various tasks, demonstrating the competitive performance of ADOPT compared to the widely used Adam. Weaknesses: The convergence of a modified version of Adam is not significant from a theoretical standpoint unless ADOPT can beat the performance of Adam in practice, given the existing convergence proofs of Adam. From the empirical results, the performance of ADOPT is not much superior to Adam. People may be reluctant to use ADOPT in practice. More importantly, Algorithm 1 (ADOPT) seems to require storage of three parts: g_t, m_t, v_t, which is more than what the standard Adam requires. This is a quite significant drawback of ADOPT if it cannot be optimized. --- Comments after rebuttal: The authors' response clearly resolved the memory cost concerns. I would like to increase the score to 6. Technical Quality: 3 Clarity: 3 Questions for Authors: see weakness. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, the authors have adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We will answer your questions to address your concerns. Please also refer to “Author Rebuttal by Authors” and the attached PDF. > The convergence of a modified version of Adam is not significant from a theoretical standpoint unless ADOPT can beat the performance of Adam in practice, given the existing convergence proofs of Adam. > From the empirical results, the performance of ADOPT is not much superior to Adam. People may be reluctant to use ADOPT in practice. Your point is that, apart from the toy examples, ADOPT does not seem to perform much better than Adam in the practical experiments in Section 5. In fact, what our theory shows is that Adam fails catastrophically when the constant term of Adam's convergence bound in Theorem 3.1 becomes large; hence, in a correctly tuned situation in practice (e.g., with a sufficiently large $\beta_2$), it is natural that ADOPT does not improve significantly relative to Adam. However, to show that even in practice Adam can fail catastrophically and ADOPT can avoid such failures, we have added an experiment on pre-training language models. In this experiment, we used the nanoGPT [1] code base to run GPT-2 pre-training with OpenWebText. See the general response and the attached PDF for more details. We observed that Adam suffered from loss spikes and completely failed to train when the batch size was small, while ADOPT was able to train stably even with small batch sizes. We also observed that ADOPT performed slightly better than Adam even for large batch sizes. This result is consistent with theory, since the gradient noise is larger and the constant term of Adam's convergence bound is larger when the batch size is small. Thus, the experimental results confirm that ADOPT has an advantage over Adam even in practical cases such as pre-training language models. We hope these results will address your concerns. 
> More importantly, Algorithm 1 (ADOPT) seems to require storage of three parts: g_t, m_t, v_t, which is more than what the standard Adam requires. This is a quite significant drawback of ADOPT if it cannot be optimized. We respectfully point out that this is a misunderstanding: the memory cost of ADOPT is equivalent to that of Adam. Only $m_t$ and $v_t$ need to be stored in ADOPT, and $g_t$ can be discarded once those updates are done. Presumably, this misunderstanding arises from the fact that $g_{t+1}$ is used to update $m_{t+1}$ in Algorithm 1, but this does not mean that $g_t$ needs to be stored. To clarify this, an equivalent alternative representation is given in Algorithm 2 in the attached PDF of the general response. In fact, experimental results confirm that the memory costs of ADOPT and Adam are equivalent. For example, the memory cost for pre-training GPT-2 is approximately 18 GB per GPU for both. We would be glad to respond to any further questions and comments that you may have. Thanks. ### References [1] github.com/karpathy/nanoGPT --- Rebuttal 2: Title: A Gentle Reminder to Reviewer f5HN Comment: Thank you again for your efforts in reviewing our paper and your constructive comments. The discussion period will end soon, so please let us know if you have further comments about our reply to your feedback. Thanks.
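The point that the gradient is only a temporary can be illustrated with a minimal sketch of an ADOPT-style step (simplified from the description in this thread; the paper's exact update order and default hyperparameters may differ). Only `m` and `v` persist between calls; `g` is recomputed each step and then dropped, so the persistent state matches Adam's.

```python
import numpy as np

def adopt_step(theta, m, v, grad_fn, lr=1e-3, b1=0.9, b2=0.9999, eps=1e-6):
    """One ADOPT-style update (illustrative sketch, not the paper's exact
    algorithm). The gradient g is a local temporary: only m and v survive
    between steps, exactly as in Adam."""
    g = grad_fn(theta)                    # computed, used, then discarded
    denom = np.maximum(np.sqrt(v), eps)   # scale by the *previous* v
    m = b1 * m + (1.0 - b1) * g / denom   # momentum of the scaled gradient
    theta = theta - lr * m
    v = b2 * v + (1.0 - b2) * g * g       # v is updated *after* being used
    return theta, m, v                    # g goes out of scope here

# Toy usage: minimize f(x) = x^2 starting from x = 1.
theta, m = np.array([1.0]), np.zeros(1)
grad_fn = lambda t: 2.0 * t
v = grad_fn(theta) ** 2                   # assumed initialization of v
for _ in range(300):
    theta, m, v = adopt_step(theta, m, v, grad_fn)
print(theta)  # |theta| has shrunk toward 0
```

Rewriting the loop this way is the same observation as Algorithm 2 in the rebuttal's attached PDF: no per-step gradient buffer needs to outlive the update.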
Rebuttal 1: Rebuttal: We thank all reviewers for their comments. They are insightful and help us to make our paper better. We have added new experiments and explanations to address the reviewers' concerns. Please also refer to the individual responses to each reviewer. ## Additional experiments of pre-training GPT-2 Since many reviewers seem to have concerns about the practical effectiveness of ADOPT, we have added a new experiment to reinforce it. In this experiment, we ran a pre-training of GPT-2 using the nanoGPT [1] code base to compare Adam and ADOPT. We used OpenWebText as the training data. The experimental setup conformed to the default settings of nanoGPT except for the selection of the optimizer. We also tested a case in which the total batch size was changed from 480 to 96, as a setting where the gradient noise becomes larger. The results are summarized in Figure 7 of the attached PDF file. The most notable finding is that in the small-batch-size case, Adam causes loss spikes in the early stages of training and fails to converge, while ADOPT is always able to train stably. This is consistent with the theory of Adam's non-convergence. As the gradient noise increases, $G$ in Theorem 3.1 also increases, and the constant term in Adam's convergence bounds becomes non-negligible, especially when using a large-scale dataset like OpenWebText. As a result, Adam is more likely to fail to train in such cases. Our ADOPT, on the other hand, does not suffer from this problem because it always guarantees convergence. We also observed that both Adam and ADOPT work well when the batch size is large, but even in this case, ADOPT performs slightly better. ## Clarification of memory cost of ADOPT Some reviewers have raised concerns about the memory cost of ADOPT, so we address them here as well. 
The memory cost of ADOPT is exactly the same as that of Adam, with $m_t$ and $v_t$ as the two parameters to be stored during training; AMSGrad requires storage of $\hat{v}_t$ in addition to these, making its memory cost larger than that of ADOPT and Adam. We also experimentally confirmed that ADOPT and Adam have the same memory consumption during training in the GPT-2 experiment. We observed that both ADOPT and Adam require 18 GB per GPU during training. We hope that it will address the reviewers' concerns. We would be glad to respond to any further questions and comments that you may have. Thanks. References [1] github.com/karpathy/nanoGPT Pdf: /pdf/a84c838c079e34d2afd048d8754a9acabc5a5f71.pdf
NeurIPS_2024_submissions_huggingface
2024
Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games
Accept (poster)
Summary: The paper introduces LASE, a novel distributed multi-agent reinforcement learning algorithm. LASE aims to foster altruistic cooperation through a gifting mechanism while avoiding exploitation in mixed-motive games. The algorithm dynamically adjusts the allocation of rewards based on social relationships inferred using counterfactual reasoning. The paper reports comprehensive experiments in various mixed-motive games, demonstrating that LASE effectively promotes group collaboration and adapts policies to different co-player types without compromising fairness. Strengths: The paper presents an innovative combination of gifting mechanisms and counterfactual reasoning within a multi-agent reinforcement learning framework. This approach to dynamically adjust reward allocation based on inferred social relationships is novel and well-grounded in developmental psychology theories of empathy. The paper is clearly written and well-organized. The methodology and experimental setup are described in detail, making it relatively easy to follow and reproduce the results. Additionally, the authors conducted comprehensive experiments across various mixed-motive game scenarios, thoroughly demonstrating the effectiveness of LASE in promoting cooperation and fairness. The breadth and depth of these experiments add significant credibility to the proposed method. Weaknesses: The proposed method, while interesting, could be seen as incremental since it primarily combines existing techniques (gifting and counterfactual reasoning). The novelty could be better highlighted by contrasting more explicitly with prior works. The more important weakness is that the paper lacks a detailed analysis of the individual contributions of the gifting mechanism and counterfactual reasoning. For example, there is no direct comparison with the original gifting method or an ablation study isolating the impact of the counterfactual reasoning module. 
This makes it difficult to discern the specific roles and contributions of these components to the overall performance. Thus, I think the paper is below the acceptance bar for NeurIPS. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does LASE compare to the original gifting methods? Including a direct comparison or an ablation study focusing on the gifting mechanism could provide deeper insights into its contribution. 2. Can you provide an ablation study that isolates the impact of the counterfactual reasoning module? This would help in understanding its specific role and effectiveness. 3. Why not compare Equality with the other baseline methods? I am curious about the equality performance of the baseline methods; from the results reported in their papers, I would guess some of them perform reasonably well. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper seems to lack a discussion of limitations and broader societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments on our work! > The comparison to the original gifting methods and the individual contributions of the gifting mechanism. Our primary contribution lies in designing an algorithm that can dynamically adjust the gifting strategy to different co-players, performing exceptionally well in multi-agent decision-making and adaptation. In contrast, most previous works predefine fixed gifting amounts, lacking the ability to adapt. LIO and LToS [4] learn gifting strategies, but underperform in mixed-motive scenarios. As far as we know, [1] first used the gifting mechanism to promote cooperation in sequential social dilemmas. The implementation of gifting in [1] equips the agent with the extra action of a "gifting beam", which gives other agents in the beam a reward of $g$ and incurs an immediate penalty of $-g$ to itself. We approximately implemented this method by adding an additional gifting action to the A2C agents. However, since this action incurs an immediate $-g$ penalty, even a small $g$ quickly reduces the probability of choosing this action, and the result is almost the same as that of the A2C baseline without gifting, which underperforms LASE. There are also some works that focus on how social dilemmas can be resolved by pre-defining the proportion of rewards to be distributed (gifted) [2] [3]. At one extreme, equal distribution makes everyone's goal the group's total benefit, the same as our GO baseline. For a clearer comparison, we conducted an additional ablation experiment using the converged gifting weight (as shown in Table 2) as a fixed gifting weight for each agent. | Method |SSH|SSG|Cleanup|Coingame| |---|---|---|---|---| |Original gifting|12.937|80.347|0.172|0.209| |A2C|12.476|81.285|0.274|0.036| |Fixed weight|14.626|55.117|35.704|3.435| |LASE|18.948|117.784|38.736|33.467| LASE outperforms these variants because it can encourage cooperative behavior by rewarding specific agents. 
In addition, LASE can dynamically adjust the proportion of rewards shared with different agents, which helps avoid being exploited by others. To sum up, our contribution to the gifting mechanism is mainly that LASE can adjust the gifting strategy dynamically by inferring its social relationships with others, and can effectively promote group cooperation and avoid exploitation under various sequential social dilemmas. [1] Lupu et al., Gifting in multi-agent reinforcement learning, AAMAS 2020 [2] Wang et al., Emergent prosociality in multi-agent games through gifting, IJCAI 2021 [3] Willis et al., Resolving social dilemmas through reward transfer commitments, ALA 2023 > Ablation study, contribution, and effectiveness of the counterfactual reasoning module. The social relationships inferred through counterfactual reasoning guide our gifting strategy and form the core of our algorithm, making ablation experiments on this module challenging. A feasible alternative is to use an end-to-end trained neural network to determine gifting weights, as done by LToS [4]. So we added LToS as a baseline: | Method |SSH|SSG|Cleanup|Coingame| |---|---|---|---|---| |LToS|12.476|77.386|1.912|-0.078| |LASE|18.948|117.784|38.736|33.467| The results show that the counterfactual reasoning module helps LASE outperform LToS. Meanwhile, LIO, a method that does not use counterfactual reasoning but directly uses neural networks to output gifted rewards, has also been compared as a baseline in the paper. Counterfactual reasoning is an effective idea in multi-agent learning; however, most previous research has focused on cooperative tasks using the CTDE framework [5] or competitive tasks using counterfactual regret minimization (CFR) [6]. In contrast, LASE employs counterfactual reasoning to infer social relationships in a decentralized manner, achieving excellent performance in mixed-motive games. We believe this innovation can meaningfully benefit the community. 
[4] Yi et al., Learning to share in multi-agent reinforcement learning, NeurIPS 2022 [5] Foerster et al., Counterfactual Multi-Agent Policy Gradients, AAAI 2018 [6] Zinkevich et al., Regret minimization in games with incomplete information, NeurIPS 2007 > Compare the Equality with other baseline methods. As an evaluation metric, fairness should be evaluated alongside reward to measure algorithm performance effectively. Some algorithms fail to solve the decision-making problem in mixed-motive games: each agent receives only a small reward, but because the reward disparity between agents is minimal, fairness appears high. Clearly, such methods are not effective. An effective method should both maximize group reward and ensure intra-group equity. Thus, we originally compared only LASE and GO, which achieve the highest rewards. We have now included the fairness results of the other baselines: |Fairness|SSH|SSG|Coingame|Cleanup| |---|---|---|---|---| |LASE|0.994|0.951|0.835|0.802| |LASE w/o|0.986|0.862|0.848|0.685| |GO|0.968|0.856|0.785|0.496| |IA|0.984|0.877|0.898|0.708| |LIO|0.931|0.985|0.745|0.545| |SI|0.995|0.937|0.750|0.892| |A2C|0.997|0.854|0.831|0.824| As you guessed, LASE does not always outperform the baselines on fairness metrics, e.g., SI and A2C in SSH, LIO in SSG, IA in Coingame, and SI in Cleanup. However, LASE can significantly enhance group benefits while maintaining high fairness, which the other baselines cannot. > Limitations and broader societal impacts. We have mentioned the limitations of our work in Section 7, Conclusion, including the assumption that each agent's reward function will be modified by other agents' gifts, whereas in reality, people may refuse gifts. Additionally, LASE currently focuses on finite discrete action spaces, and extending it to continuous action spaces is our next goal. We discussed the broader impact of our work in Appendix D. If you have any further questions, please feel free to ask! We are happy to discuss them with you! 
--- Rebuttal Comment 1.1: Title: Main concerns are addressed. Comment: Dear Authors, Thanks for the additional experiments provided. Most of my concerns are addressed. I suggest the authors add the new experiments in a future version, and I will increase my score. --- Reply to Comment 1.1.1: Title: Thanks for the response Comment: Thank you very much for your valuable feedback and for taking the time to review our submission! We truly appreciate your insights and suggestions, which have helped us identify areas for improvement. We will carefully consider your comments and add the experiments in our revised version!
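The zero-sum gifting mechanism discussed in this thread can be sketched as a reward-redistribution step (illustrative only: in LASE the gifting weights are inferred dynamically via counterfactual reasoning, whereas here `W` is a hand-picked placeholder).

```python
import numpy as np

def redistribute(rewards, W):
    """Zero-sum gifting sketch. W[i, j] is the fraction of agent i's
    reward gifted to agent j; each row (off-diagonal) sums to at most 1.
    Every gift given is a gift received, so total reward is conserved."""
    rewards = np.asarray(rewards, dtype=float)
    kept = rewards * (1.0 - W.sum(axis=1))  # what each agent keeps
    received = W.T @ rewards                # gifts flowing in from others
    return kept + received

rewards = np.array([4.0, 0.0, 2.0])
W = np.array([[0.0, 0.25, 0.25],   # agent 0 gifts 25% to each co-player
              [0.0, 0.0,  0.0 ],   # agent 1 gifts nothing
              [0.5, 0.0,  0.0 ]])  # agent 2 gifts half to agent 0
shaped = redistribute(rewards, W)
print(shaped, shaped.sum() == rewards.sum())  # -> [3. 1. 2.] True
```

Fixing `W` corresponds to the "Fixed weight" ablation in the table above; LASE's advantage comes from adapting these weights per co-player during training.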
Summary: This work proposes LASE, a multi-agent reinforcement learning framework that aims to improve co-operation between agents in mixed-motive games using transfer of rewards between agents in a zero-sum manner. Specifically, each agent uses counterfactual reasoning to compute a social relationship metric that computes the effect of their actions on the Q-values of other agents. Notably, LASE uses fully decentralized training, in contrast to many related works in the area. Finally, LASE outperforms existing baselines in a variety of popular mixed-motive environments such as Cleanup. Strengths: 1. The paper is well-organized, clearly written and technically sound. The general flow of the paper is smooth and proposed methods are explained reasonably well. The paper has an appropriate number of citations and properly details existing work in the related work section. 2. The presented framework is fully decentralized, which gives it an advantage to most works in the area that use centralized training decentralized execution (CTDE). 3. The results are generally promising with LASE showing significant gains in most environments tested. The analysis of co-operation strategies learned by different methods is interesting. In particular, I liked the GO vs LASE BG ablation study in Section 6.3. Weaknesses: 1) The framework is relatively complex, as it requires learning two additional networks (perspective-taking and Q networks) for each agent. 2) Given the dependence of the method on joint action, I have doubts about the scalability of the method. As more agents are added into the system, credit assignment would become more difficult. 3) Some experimental details need to be clarified further. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. In general, given the complexity of the framework and the dependence on the joint action, I am sceptical about the scalability of the method. It would be interesting to see more results that test the scalability of the method. 
Convincing results with 8 agents for Cleanup and 1 other environment would significantly increase the strength of the paper. 2. The observation conversion network is not trained using the ground truth observations of agent j, but instead using the ground truth observations of agent i (this being key to the fully decentralized claim of the work). This makes me wonder why such a network is needed at all. What happens if a network that directly maps from agent i’s observation to agent j’s policy is learned? It would be interesting to see this in an ablation study. In other words, why is the "perspective taking" module not a single network? 3. How is the SR policy network trained? I am confused about why it is trained using RL and not simply using supervised learning on the actually observed actions of agent j. 4. I am not convinced by the argument that removing the cleaning beam in Cleanup makes the environment harder. Removing the cleaning beam also reduces the dimensionality of the action space. What was the reason behind removing it? 5. Figure 1 is slightly misleading, as it shows separate parameters for the observation conversion network and the SR policy network, whereas equation 4 shows them having the same parameters. 6. I am unsure why the policy has a manual epsilon-greedy added to it. Actor-critic methods are on-policy; epsilon-greedy changes that, and I am not sure this manual modification of the policy is standard. Why was this required? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Sufficient details provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The complexity of the framework and the scalability of the method. Thank you very much for your suggestions on LASE’s scalability! To test LASE’s scalability, we have extended Cleanup and Snowdrift as follows: | | Map size | Player num | Obs size | Init Waste/ Snowdrift num | Episode length | | --- | --- | --- | --- | --- | --- | | Cleanup.Extn | 8→12 | 4→8 | 5→7 | 8→16 | 100→150 | | Snowdrift.Extn | 8→12 | 4→8 | 5→7 | 6→12 | 50→70 | Here are the experimental results: | | LASE | IA | LIO | SI | A2C | | --- | --- | --- | --- | --- | --- | | Cleanup.Extn | **56.513** | 20.798 | 1.294 | 3.548 | 0.135 | | Snowdrift.Extn | **232.564** | 227.762 | 20.317 | 207.461 | 134.964 | As shown above, LASE still outperforms the baselines in the extended environments, so LASE exhibits a certain degree of scalability. We believe that learning two additional networks for opponent modeling and social relationship inference in a decentralized manner is not overly complicated, which keeps the method scalable. However, it is undeniable that the current version of LASE must infer the social relationship of each agent individually, so the computational complexity grows substantially as the number of agents increases. The scalability of LASE will be a main focus of our future study. A possible remedy is to update the relationships between agents less frequently. > The explanation of the observation conversion network and the SR policy network. An ablation study about mapping $i$’s observation to $j$’s policy directly. Thank you very much for your suggestion to replace the PT module with a single network for the ablation study! 
We trained the network $p(\mathbf{\hat{a_t}} | o^i_t, \mathbf{a_{t-1}})$ with supervised learning by minimizing the MSE loss between the predicted joint action $\mathbf{\hat{a_t}}$ and the real action $\mathbf{a_t}$, and obtained the following results: | | SSH | SSG | Cleanup | Coingame | | --- | --- | --- | --- | --- | | LASE w/o PT | 18.442 | 118.616 | 37.174 | 29.541 | | LASE | 18.948 | 117.784 | 38.736 | 33.467 | The results show that the two methods perform comparably in SSH, SSG, and Cleanup, while LASE performs better in Coingame. However, since both approaches predict others' actions based on the same local observations, they essentially serve as opponent modeling tools and thus exhibit similar performance. We use the PT module in LASE for two main reasons. **First**, since the SR value network $\phi$ is required to calculate the egocentric $Q$-value when carrying out counterfactual reasoning, and the SR policy network $\mu$ predicts others' actions with the egocentric policy model, both networks share the CNN and some FC layers to extract observation features. They are optimized with the same reward signal under the actor-critic framework (Eq. 5). This sharing is common in the actor-critic framework and helps improve training efficiency. Therefore, although the PT module in Figure 1 consists of two networks, the additional parameters and computational overhead mostly stem from the observation conversion network, so it introduces essentially the same number of parameters as a single perspective-taking network would. **Second**, based on the psychological theory that perspective-taking is a crucial component of cognitive empathy [1], we employ the PT module instead of an end-to-end trained neural network to predict opponent actions, which helps model empathy, an essential mechanism in human society, computationally and comprehensively. 
Since this approach does not result in significant performance loss or increased memory and computational demands, we believe our work contributes new insights into opponent modeling and encourages the community to incorporate human cognitive processes into AI agent design.

[1] Davis, Mark H. "Measuring individual differences in empathy: Evidence for a multidimensional approach." *Journal of Personality and Social Psychology* 44.1 (1983): 113.

> Removing the cleaning beam in Cleanup.

In our implementation, the cleaning beam is replaced by a cleaning action which takes effect only at the waste location, thereby maintaining the dimensionality of the action space. Since the river where the waste accumulates and the apple orchard are located on opposite sides of the map, removing the cleaning beam prevents the agent from directly cleaning waste near the apple orchard. Meanwhile, the time cost of traveling between the river and the apple orchard requires agents to collectively learn a more explicit division-of-labor strategy in the dilemma. This is why we claim that the removal makes the environment harder.

> The parameters of the observation conversion network and SR policy network.

Sorry for the confusion! The observation conversion network $\eta$ and the SR policy network $\mu$ do employ separate sets of parameters. In Equation 4, even though the loss is computed forward through $\mu$, it is not used to update $\mu$, but only to update $\eta$. The update rule for $\mu$ is given in Eq. 5.

> Epsilon greedy

We use epsilon greedy mainly to enhance exploration and avoid falling into local optima, which is more likely to occur in mixed-motive games. The introduction of epsilon greedy allows for a controlled level of exploration. The utilization of epsilon greedy is not an anomaly: the classic on-policy algorithm SARSA [2] also uses it. A low epsilon value does not cause drastic policy changes, so on-policy methods remain effective.
This balance between exploitation and exploration has proven empirically sound. A similar example is PPO, which uses trajectories sampled from a slightly offset policy for its updates, yet is still classified as an on-policy algorithm and has demonstrated strong empirical performance. [2] Sutton, Richard S. "Generalization in reinforcement learning: Successful examples using sparse coarse coding." NeurIPS 1995. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: I thank the authors for the rebuttal. My doubts have been clarified and I have raised my score to reflect the same. --- Reply to Comment 1.1.1: Title: Thanks for the reply Comment: Thank you for your reply! We greatly appreciate the time and effort you put into evaluating our work. Your feedback has provided us with meaningful insights, and we will make sure to address your comments and incorporate the necessary changes in our future revisions.
Summary: The authors propose a novel algorithm, LASE, that employs a gifting mechanism in order to steer agents toward equilibria of high social welfare in mixed-motive games. A novelty in LASE is that it estimates the "social relationship" between a player and its co-players through a counterfactual Q-value baseline. The authors show that LASE performs favourably across a number of temporally-extended social dilemmas. Strengths: The paper is well-written and well-presented. The authors address a gap in the literature, which so far does not attempt to estimate the influence of co-player policies on the joint Q-value function. In general, estimating counterfactuals involving Q-values has in the past been found to suffer from high-variance issues (such as in [1]). Hence, I find it positively surprising that the authors' method performs decently across several different environments. [1] Counterfactual Multi-Agent Policy Gradients, Foerster et al., AAAI 2018 Weaknesses: I believe the main weakness of the authors' approach is that, under partial observability, a player cannot generally see all parts of its co-players' observations, hence making it impossible to fully reconstruct their policy inputs (hence the restriction to common-knowledge fields of view in [2]). I believe the authors' algorithm should crucially estimate the uncertainty of its social relationship estimates so as to avoid misunderstandings that could erode trust in real-world situations. Additionally, I am unsure about the authors' use of the term "empathy" - "theory of mind" would certainly work here, but "empathy" seems like an inherently emotional concept. [2] Multi-Agent Common Knowledge Reinforcement Learning, Schroeder de Witt et al., NeurIPS 2019 Technical Quality: 3 Clarity: 3 Questions for Authors: What do you think could be a real-world application of this line of work? 
Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I believe the authors are addressing limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> Estimate the uncertainty of their social relationship.

We select the $w^{ij}$ data from the last $10^6$ timesteps of training to calculate their mean value $\overline{w}$ and standard deviation $s$, which estimates the uncertainty of social relationships. The calculation method is as follows:

$$ \overline{w}^{ij}=\frac{\sum_{t=T_{max}-10^6}^{T_{max}}w_t^{ij}}{10^6}, \quad \overline{w}=\sum_{i=1}^n\sum_{j=1, j\neq i}^n \overline{w}^{ij} $$

$$ s=\frac{\sum_{i=1}^n\sum_{j=1, j\neq i}^n\sqrt{\frac{\sum_{t=T_{max}-10^6}^{T_{max}}(w_t^{ij}-\overline{w}^{ij})^2}{10^6-1}}}{n\times (n-1)} $$

We conduct a comparative experiment that replaces the input of the SR policy network, $\hat{o}^j$, with $j$'s real observation $o^j$. Here are the results:

| $\overline{w}$ | SSH | SSG | Coingame | Cleanup |
| --- | --- | --- | --- | --- |
| LASE w/o $o^j$ | $0.07184\pm{0.01921}$ | $0.02341\pm{0.00939}$ | $0.22243\pm{0.18594}$ | $0.34572\pm{0.01509}$ |
| LASE w/ $o^j$ | $0.06278\pm{0.01136}$ | $0.03157\pm{0.00527}$ | $0.19317\pm{0.05493}$ | $0.29465\pm{0.00889}$ |

The results show that the mean value of LASE's inferred social relationships closely matches that of scenarios with actual observations, although partial observability significantly increases uncertainty. Considering that social relationships between people in real life tend to be relatively stable and do not change drastically, we think that a possible way to handle the uncertainty of social relationships is to introduce smoothing techniques for $w^{ij}$ to reduce their variance over time. This approach will be explored in our future work. Meanwhile, it is important to note that Figure 8 and the corresponding analysis show that LASE is able to correctly infer the relationships with different co-players and respond properly. Specifically, in the experiments conducted in Section 6.3, one LASE agent interacts with three rule-based co-players: cooperator, defector and random. 
The results show that the $w^{ij}$ given to the cooperative co-player is the largest, significantly higher than that given to the other two co-players. The $w^{ij}$ given to the random co-player comes second, and the smallest is given to the defector. These results demonstrate the consistency between LASE's estimates of these relationships and the ground truth. Thanks again for your valuable insights! We strongly agree that this issue has an important impact on trust in real-world scenarios, and we will add the relevant results and analysis in a revised version of the paper. At the same time, we would also like to thank you for providing the literature about partial observability, which is very helpful to our follow-up work!

> The usage of the term "empathy"

Empathy includes both emotional empathy and cognitive empathy [1]. The former refers to the ability to feel and share others' emotions; the latter involves imagining and understanding others' thoughts, feelings and perspectives. In particular, [2] shows that human responses are empathically modulated by the learned relations with others. Although there are similarities between Theory of Mind (ToM) and empathy, ToM mainly focuses on attributing mental states to others [1]. Based on [2], we design LASE to incentivize the cooperative behavior of others through gifts, which is closely related to the emergence of cooperation in the real world [3].

[1] De Waal, Frans BM, and Stephanie D. Preston. "Mammalian empathy: behavioral manifestations and neural basis." *Nature Reviews Neuroscience* 18.8 (2017): 498-509.

[2] Singer, Tania, et al. "Empathic neural responses are modulated by the perceived fairness of others." *Nature* 439.7075 (2006): 466-469.

[3] Yalcin, Ӧzge Nilay, and Steve DiPaola. "A computational model of empathy for interactive agents." *Biologically Inspired Cognitive Architectures* 26 (2018): 20-25. 
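As a side note on the uncertainty statistics defined at the start of this reply, they can be computed with a short numpy sketch. Everything here is illustrative: the array shapes, the toy data, and the small `T` standing in for the last $10^6$ timesteps are our own assumptions, not the actual training logs.

```python
import numpy as np

# Toy sketch: w[t, i, j] stores agent i's inferred relationship with j at
# timestep t, over a window of T timesteps (T = 10^6 in the rebuttal).
rng = np.random.default_rng(0)
T, n = 1000, 4
w = rng.normal(0.1, 0.02, size=(T, n, n))

pairs = ~np.eye(n, dtype=bool)            # all ordered pairs with j != i
w_bar_ij = w.mean(axis=0)                 # per-pair time average over the window
w_bar = w_bar_ij[pairs].sum()             # summed over pairs, as in the formula
s = w.std(axis=0, ddof=1)[pairs].mean()   # per-pair sample std, averaged over n*(n-1) pairs
```

The `ddof=1` matches the $10^6-1$ denominator (sample standard deviation) in the formula for $s$.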
> The real-world application

Our work employs decentralized learning to infer the influence of other agents on oneself in mixed-motive games, enabling broad applicability in complex real-world multi-agent interactions. A potential application scenario is multi-agent automated negotiation. With the rapid advancement and increasing application of machine learning and LLMs, we believe that in the future, agents may assist or even replace humans in various fields, such as automated negotiation in e-commerce and autonomous driving. For example, bargaining in e-commerce is a classic mixed-motive game scenario. In bargaining, both the buyer and seller share the common goal of reaching an agreement. However, each party also aims to maximize their own profit, leading to competition over the transaction price. Consequently, during this process, each side must infer the other's willingness to cooperate and adjust their negotiation strategy accordingly. If the other party shows a high willingness to cooperate, such as when a seller notices that the buyer is very eager to get the good, the seller can propose a higher price. Conversely, if the other side displays a very unfriendly attitude, the agent should consider making some concessions. Another example is autonomous driving. When two vehicles traveling towards each other meet on a narrow road, each infers the other's manner from its behavior. If the other vehicle shows courtesy, the focal vehicle proceeds first; if the other is brash or in a hurry, the focal agent yields to avoid conflict. --- Rebuttal Comment 1.1: Title: Thanks for Your Response, and One More Question. Comment: I thank the authors for their reply. I am satisfied with the authors' response to my concerns about relationship uncertainty and the usage of the term "empathy". 
Concerning the real-world applicability, however, I would like to ask whether and how the authors believe that their work is relevant to LLM agents, given the difficulty of performing RL directly with these. --- Reply to Comment 1.1.1: Title: Two possible approaches to integrating LLM and LASE in negotiation tasks. Comment: Given that effective communication and complex strategies in automated negotiation agents often require fluent natural language, it is intuitive to leverage LLMs in this real-world application. Previous work has already formulated the negotiation task and developed datasets that support reinforcement learning, supervised learning, and other methods [1]. Building on this foundation, we propose integrating LLMs into our approach in two ways: First, inspired by Cicero [2], which uses RL to train an intent model and generates messages based on that intent using a pre-trained language model, we think that we can prompt an LLM based on the predicted cooperation willingness of the opponent derived from LASE's SRI module. Considering that our SRI module outputs a scalar value and that LLMs can often misinterpret the magnitude and meaning of numerical values, it may be necessary to establish a mapping system that correlates different cooperation willingness scores with corresponding natural language expressions. For example, if the SRI module predicts that the opponent has a high willingness to cooperate, the LLM, once prompted with this information, might negotiate more assertively, potentially increasing its bargaining position and maximizing profit. Second, we can consider transferring the framework of our algorithm to the design of an LLM-based agent, aiming for superior performance in this task. For instance, we could replace the PT module with the LLM's world knowledge by prompting the LLM to infer the opponent's next action based on the current dialogue history. 
When the LLM observes the opponent's actual action, it could be prompted to engage in counterfactual reasoning, similar to the ReAct [3] framework, updating its belief about the opponent's cooperation willingness. An example template of such a prompt might be: "You initially believed the buyer would offer {}, but the actual offer was {}. Considering your previous belief about the buyer's cooperation willingness was {}, do you think you need to make adjustments? If so, how should it be made?" We hope this approach will help address the issue of LLMs lacking a broader understanding of the overall dialogue progression [4], thereby improving performance in negotiation tasks. [1] Post, Thierry, et al. "Deal or no deal? decision making under risk in a large-payoff game show." American Economic Review 98.1 (2008): 38-71. [2] Meta Fundamental AI Research Diplomacy Team (FAIR)†, et al. "Human-level play in the game of Diplomacy by combining language models with strategic reasoning." Science 378.6624 (2022): 1067-1074. [3] Yao, Shunyu, et al. "React: Synergizing reasoning and acting in language models." arXiv preprint arXiv:2210.03629 (2022). [4] Cheng, Yi, et al. "Cooper: Coordinating specialized agents towards a complex dialogue goal." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 16. 2024.
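As a small illustration, the counterfactual-reasoning prompt template quoted above could be filled programmatically. The helper name and its arguments are hypothetical, not part of LASE:

```python
def counterfactual_prompt(expected_offer, actual_offer, belief):
    """Fill the counterfactual-reasoning template from the reply above.

    All parameter names are illustrative; the template text is the one
    quoted in this reply.
    """
    return (
        f"You initially believed the buyer would offer {expected_offer}, "
        f"but the actual offer was {actual_offer}. Considering your previous "
        f"belief about the buyer's cooperation willingness was {belief}, "
        "do you think you need to make adjustments? If so, how should it be made?"
    )

print(counterfactual_prompt(50, 40, "high"))
```

Such a helper would sit between the SRI module (which supplies the belief) and the LLM call, in the spirit of the intent-to-message pipeline of Cicero [2].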
Summary: This paper introduces LASE (Learning to balance Altruism and Self-interest based on Empathy), a multi-agent reinforcement learning algorithm designed for mixed-motive games. LASE uses a gifting mechanism where agents share a portion of their rewards with others based on inferred social relationships. Counterfactual reasoning determines these relationships by comparing the actual joint action's value to a baseline that averages over the other agents' actions. The authors use a perspective-taking module to predict other agents' policies. Experimental results across various sequential social dilemmas show LASE's ability to promote cooperation while maintaining individual interests. The authors claim the following contributions: (1) a computational model of empathy that modulates responses based on inferred social relationships; (2) a decentralized MARL algorithm that balances altruism and self-interest in mixed-motive games, and (3) theoretical analysis of decision dynamics in iterated matrix games and experimental verification of LASE's performance in sequential social dilemmas. Strengths: * Novel approach: The introduction of LASE as a mechanism to balance altruism and self-interest in multi-agent settings is innovative and addresses an important challenge in mixed-motive games. * Theoretical contribution: I think the analysis of the algorithm's behavior in iterated matrix games does a good job of grounding the empirical results in game theory. * The overall evaluation was sufficient to demonstrate the performance of the algorithm in comparison to well-chosen baselines Weaknesses: I think the paper is quite strong overall. However, several places can still be improved to increase the potential impact of the work. * Clarity issues: The paper could benefit from improved clarity in several areas. For example, a bit more clarity on how gifts are determined would be helpful. 
Consider adding pseudocode for key components like the Social Relationships Inference module or the gifting mechanism and how the modules integrate. * Insight into counterfactual baseline design choice: The paper's choice of using the average behavior as the counterfactual baseline for determining social relationships warrants further investigation. The authors could consider conducting experiments that vary this baseline, comparing alternatives such as fixed neutral baselines, learned cooperative behavior baselines, or worst-case action baselines. This would provide insights into the robustness of the approach and potentially identify improvements to the algorithm. * Limited domain coverage: While the paper evaluates LASE across several sequential social dilemmas, which is adequate, the approach could be further strengthened by expanding the range of domains tested. For example, the authors could consider alternatives from MeltingPot domains (https://github.com/google-deepmind/meltingpot). Technical Quality: 4 Clarity: 3 Questions for Authors: Do you have any proposed changes to the paper in response to my critiques? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: I think there could be more discussion of this, but the current evaluation is sufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> Clarity issues

Thanks for your valuable suggestions! The overall framework and flow of LASE, as well as the relationships between the modules, can be seen in Figure 1 in the paper and Algorithm 1 in the appendix. For clarity, here we present the pseudocode for both the Social Relationship Inference (SRI) and Gifting modules in detail:

```python
# Trajectories collection
...

# SRI
for i in Agents:
    for j in Agents_without_i:
        predicted_obs = obs_conversion(i_obs)            # perspective taking
        predicted_action_prob = SR_policy(predicted_obs) # i's estimate of j's policy
        SR_q_values = []                                 # reset for each pair (i, j)
        for j_virtual_action in all_possible_actions:
            SR_q_values.append(SR_value(i_obs, j_virtual_action + others_real_actions))
        counterfactual_baseline = predicted_action_prob @ SR_q_values
        w_ij = (SR_value(i_obs, real_joint_actions) - counterfactual_baseline) / M  # Eq (3)
    w_ii = 1 - sum([w_ij for j in Agents_without_i])

# Gifting
for i in Agents:
    i_weighted_reward = 0
    for j in Agents:                                     # j == i keeps w_ii * r_i
        i_weighted_reward += w_ji * j_reward

# Training with weighted rewards
...
```

These two modules are executed after all agents have completed an entire episode of interaction with the environment, adjusting the rewards to influence subsequent RL training. Specifically, in SRI, each agent $i$ predicts another agent $j$'s action probability using Perspective Taking and assesses the impact of all possible actions of $j$ on its $Q$-value using the SR value network. Then $i$ infers the social relationship $w^{ij}$ with $j$ using Equation 3. After all agents have completed SRI, the Gifting module is responsible for distributing rewards. Specifically, the weighted reward for each agent $i$ is the sum of the gifted rewards $w^{ji}\times r^j$ from other agents $j$, including $i$'s own reward left to itself, $w^{ii}\times r^i$.

> Insight into counterfactual baseline design choice

Thank you very much for your suggestions on designing the counterfactual baseline! 
We apologize if we have misunderstood anything while attempting to implement the three different baselines you mentioned. We are not entirely sure that we fully understood your points and would greatly appreciate the opportunity to discuss this matter in more detail to ensure we correctly implement your suggestions!

- Fixed neutral baseline: If you are referring to using a fixed hyperparameter as a neutral baseline, we believe that this approach may not be appropriate due to the continuously evolving $Q$-values throughout the RL learning process. If you are referring to the neutral value of $i$'s $Q$-values conditioned on the various possible actions of $j$, we think that our ablation study **LASE w/o**, where $\sum_{a_t^{j'}}\frac{1}{|\mathcal{A}^j|}Q^i(o^i_t,(a_t^{-j},a_t^{j'}))$ is used as the counterfactual baseline, has already addressed this.
- Learned cooperative behavior baselines: Are you referring to "cooperative behavior" as $\text{max}_{a^{j'}_t} Q^i(o^i_t, (a_t^{-j},a_t^{j'}))$? If so, based on Equation 3, this would lead to $w^{ij} \leq 0$, resulting in no gifting occurring. This would cause the algorithm to fail and revert to the A2C baseline.
- Worst-case action baselines: We interpret this baseline as $\text{min}_{a^{j'}_t}Q^i(o^i_t, (a_t^{-j},a_t^{j'}))$. We replace LASE's counterfactual baseline with the Worst-case Action Baseline (WAB), and the self-play results are as follows:

| *Self-play* | SSH | SSG | Coingame | Cleanup |
| --- | --- | --- | --- | --- |
| WAB | 17.624 | 118.731 | 0.009 | 43.741 |
| LASE | 18.948 | 117.784 | 38.736 | 33.467 |

WAB outperforms LASE on SSG and Cleanup because using $\text{min}_{a^{j'}_t}Q^i(o^i_t, (a_t^{-j},a_t^{j'}))$ as the baseline results in $w^{ij}\geq 0$. This overly optimistic estimation of social relationships promotes group cooperation in self-play settings, but it also raises the risk of exploitation when interacting with unknown agents. 
To demonstrate this, an adaptive experiment on WAB is conducted in the same manner as described in Section 6.3 of the paper. A WAB agent is trained with three rule-based agents and with three A2C agents, respectively. We record the mean gifting weights assigned by the WAB agent to the other agents and its rewards after gifting:

| *Gifting weight* | Cooperator | Random | Defector | A2C_1 | A2C_2 | A2C_3 |
| --- | --- | --- | --- | --- | --- | --- |
| WAB | 0.321 | 0.254 | 0.184 | 0.266 | 0.221 | 0.222 |
| LASE | 0.268 | 0.091 | 0.027 | 0.0391 | 0.1339 | 0.2194 |

| *Reward after gifting* | Rule-based agents | A2C agents |
| --- | --- | --- |
| WAB | 5.778 | 4.467 |
| LASE | **15.129** | **8.752** |

Because WAB gives excessive gifts to the other agents without receiving the reciprocation it would in self-play, its actual reward is significantly lower than LASE's. This demonstrates WAB's weakness of being easily exploited. Additionally, we observe that in Coingame the social relationships converged to $w^{12}\approx w^{21}\approx 1$, indicating that the agents' optimization objective shifted towards maximizing the opponent's reward. In Coingame, only collecting coins of the opponent's color impacts its reward. Finally, the two agents converged to a behavior of not collecting any coins, demonstrating that excessive gifting is detrimental to learning in this environment.

> Limited domain coverage

Thank you very much for your suggestions! As a standardized testing platform that integrates various environments including cooperation, competition, and mixed motives, MeltingPot can further assist in testing the scalability and robustness of algorithms. We will consider selecting some of these environments as experimental settings in our future work to enhance the persuasiveness of our paper. Thank you again for your insightful comments! 
We will make corresponding changes in the revised version of the paper based on your and other reviewers’ suggestions, including but not limited to improving and clarifying the description of the algorithm, adding discussions and experiments on the counterfactual baseline, and expanding the experimental environments.
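For concreteness, the SRI and Gifting pseudocode presented earlier in this thread can be turned into a small runnable toy. Everything below — the stand-in value function, the random stand-in policies, the constants `n_agents`, `n_actions`, `M`, and the toy rewards — is illustrative and is not the actual LASE implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions, M = 3, 4, 10.0   # M is the normalizing constant from Eq. (3)

def sr_value(i, joint_action):
    # Stand-in for agent i's SR value network Q^i(o^i, a); deterministic toy values.
    return float(np.sin(i + sum(joint_action)))

def predicted_action_prob(i, j):
    # Stand-in for perspective taking + SR policy: i's estimate of j's policy.
    p = rng.random(n_actions)
    return p / p.sum()

real_joint = [1, 0, 2]                        # one real joint action, indexed by agent
W = np.zeros((n_agents, n_agents))            # W[i, j] = w^{ij}
for i in range(n_agents):
    for j in range(n_agents):
        if j == i:
            continue
        probs = predicted_action_prob(i, j)
        q_values = []
        for a_j in range(n_actions):          # counterfactually vary only j's action
            virtual = list(real_joint)
            virtual[j] = a_j
            q_values.append(sr_value(i, virtual))
        baseline = probs @ np.array(q_values)
        W[i, j] = (sr_value(i, real_joint) - baseline) / M   # Eq. (3)
    W[i, i] = 1.0 - W[i].sum()                # remainder kept for oneself

rewards = np.array([1.0, 0.5, 2.0])
weighted_rewards = W.T @ rewards              # i receives sum_j w^{ji} * r^j
```

Note that each row of `W` sums to one by construction, so gifting redistributes the total reward without creating or destroying any of it.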
NeurIPS_2024_submissions_huggingface
2024
Face2QR: A Unified Framework for Aesthetic, Face-Preserving, and Scannable QR Code Generation
Accept (poster)
Summary: This paper is pioneering in its integration of face identity with QR codes, proposing a novel pipeline for generating customized QR codes with embedded face identity. The key idea is to leverage diffusion models and control networks to create visually appealing QR codes while preserving face identity. The pipeline introduces an ID-aware QR ReShuffle module to address conflicts between face identity and QR code patterns, and designs an ID-preserved Scannability Enhancement module to improve scannability without compromising the face identity and visual quality. The experiment results showcase a perfect balance between face identity, aesthetic quality and scannability. Strengths: - As the first paper to combine face identity with QR codes, the proposed pipeline for generating face embedded QR code is innovative and addresses the practical needs for social connection in real-world scenarios. - The IDRS module presents an interesting solution to conflicts arising from different control signals. By rearranging QR patterns to harmonize varying control conditions, the proposed pipeline leverages information from both face images and QR codes to generate customized QR codes. - The IDSE module significantly enhances the scannability of QR images using adaptive loss, while concurrently maintaining a faithful representation of face identity. - Experimental results show that the generated QR codes successfully preserve face identity, yielding impressive visual results. Notably, there is minimal interference from QR patterns in the face region. - The paper is well-organized, thoroughly discussing motivation and related work. Rigorous experiments enhance the credibility of the results. Weaknesses: - The paper mentions that the method is limited by the generative models, but does not present bad cases due to failure of generative models. It is recommended to include such examples to provide readers with a better understanding of the algorithm’s limitation. 
- Certain technical details in the paper require further elaboration. For example, the definition of error rate is not explicitly given. In Figure 3, the image difference visualization $D$ appears to support the claim that "adaptive loss modifies face region more gently". However, without a comparison of $D$ between adaptive and uniform losses, this claim lacks substantiation. - The paper lacks a comparison of computational resource requirements with other methods. This omission makes it challenging to assess the practical feasibility of this algorithm. - Typos exist; for instance, line 174 should read "with a learning rate of 0.002" Technical Quality: 4 Clarity: 3 Questions for Authors: - Can the authors provide analysis of bad cases caused by failure of generative models? - Could the authors provide the definition of error rate and a comparison of image difference visualization $D$ between adaptive and uniform losses? - While the method excels in visual quality, what are the computational resource requirements compared to other methods? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Yes. The paper adequately discusses limitations and broader impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer jWT5, Thank you for taking the time to review our paper and providing valuable feedback. Below, I will address the raised concerns: > **Q1: [Can the authors provide analysis of bad cases caused by failure of generative models?]** * We include some bad cases caused by failure of the generative model in Table D of the PDF. This issue may arise from a lack of diversity in the training data, or the model's inability to generate complex structures or understand nuanced prompts. We hope these examples can provide a clearer understanding of the algorithm's limitations. Notably, Face2QR is designed to allow new, better generative models to be easily plugged into our proposed training-free framework. As generative models advance, we expect ongoing improvements to address these limitations, increasing the robustness and versatility of future models. Our framework is adaptable and will benefit from these advancements, helping us to push the boundaries of what generative models can achieve. > **Q2: [Could the authors provide the definition of error rate and a comparison of image difference visualization $D$ between adaptive and uniform losses?]** * The error rate is defined as $e/N_\theta$, where $e$ is the number of error modules and $N_\theta$ is the total number of modules in a QR code excluding the marker and alignment pattern regions. In our experiments, the generated QR images use version 5 and have $37\times 37$ modules in total. Therefore, the value of $N_\theta$ is typically $1197 = 37^2-3\times 7^2-5^2$. * In Figure 3, the visualization $D$ primarily illustrates the differences in modification between the face region and the background when using the adaptive loss. We are pleased to provide the comparison of image difference visualization $D$ between adaptive and uniform losses in Table E of the PDF. It is shown that the modifications in the face region are less pronounced with adaptive loss. 
A more straightforward comparison between the adaptive and uniform losses can be found in Table 7, where we demonstrate how different losses affect the nuances of the face ID. > **Q3: [What are the computational resource requirements compared to other methods?]** * Face2QR is able to run on one RTX 4090 GPU, so our pipeline has similar computational resource requirements to previous methods such as Text2QR [41] and ArtCoder [34]. In the first two modules (IDQR and IDRS), the pipeline goes through Stable Diffusion twice, and the latent code update in the last module (IDSE) converges within 150 iterations. In terms of generation time, Face2QR outputs a QR image in about five minutes on one RTX 4090 GPU, which is about the same as Text2QR. > **Q4: [Typo exists on line 174.]** * The typos in the article will be addressed accordingly. Thanks for pointing them out. Thanks again for your review. We hope our response has well answered your questions. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. All my concerns have been well addressed. I think this paper is novel and interesting. Thus, I decide to keep my original score (7: Accept). --- Rebuttal 2: Title: Response to Reviewer jWT5 Comment: We are glad that our rebuttal addressed all your concerns. Thanks for your valuable comments and recognition of our work.
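As a quick sanity check of the module count $N_\theta$ stated in the error-rate definition earlier in this rebuttal (the helper function below is our own illustration, not part of Face2QR):

```python
# Version-5 QR codes are 37x37 modules; following the rebuttal's definition,
# exclude the three 7x7 finder (marker) patterns and the 5x5 alignment pattern.
modules = 37 * 37      # 1369
finder = 3 * 7 * 7     # 147
alignment = 5 * 5      # 25
N_theta = modules - finder - alignment   # 1369 - 147 - 25 = 1197

def error_rate(e, n_theta=N_theta):
    # e = number of error modules outside the marker/alignment regions.
    return e / n_theta
```

This reproduces the stated value $N_\theta = 1197$, so e.g. 12 misdecoded modules would give an error rate of about 1%.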
Summary: The article introduces a novel pipeline designed to create customized QR codes that integrate aesthetic appeal, facial identification (ID), and scannability. The proposed approach incorporates three key components: (1) ID-refined QR Integration (IDQR) seamlessly incorporates facial ID into the QR code background; (2) ID-aware QR ReShuffle (IDRS) addresses and resolves conflicts between facial ID and QR code patterns; (3) ID-preserved Scannability Enhancement (IDSE) optimizes the robustness of QR code scanning while preserving both the facial ID and aesthetic quality. Strengths: (1) The motivation of Face2QR proposed in the paper is straightforward, and the method proves to be effective based on the quantitative and qualitative results presented. (2) There are sufficient ablation studies to demonstrate the effectiveness of each module in this paper. (3) This paper is well-written and easy to follow. Weaknesses: (1) The paper's innovation is relatively weak, with each module and its technology being a combination of previous works. (2) Although the paper includes numerous quantitative experiments, it only involves up to 20 identities, all of whom are celebrities. This limitation hinders the ability to fully demonstrate the method's effectiveness for a broader range of ordinary users. (3) The paper does not mention how to set the value of a in line 131. Technical Quality: 3 Clarity: 3 Questions for Authors: Quantitative and qualitative experiments on a larger and more diverse group of ordinary users are crucial to demonstrate the scalability and effectiveness of the method. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations of their work Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer MYMo, Thank you for taking the time to review our manuscript and providing valuable feedback. All raised concerns are addressed below point by point: > **Q1: [Each module and its technology is a combination of previous works.]** * We acknowledge that the components—Diffusion models, Identity Preserved Generative Models, and the latest QR generation methods—are directly used. The reason for this design is to allow new, better methods to be easily plugged into our proposed **training-free framework**. Our innovation lies in how we control these components, which is why we proposed the three modules: IDQR, IDRS, and IDSE. The key technical contributions of our work are embodied in these three modules, which integrate the components and balance the three control signals. Specifically: - IDQR Module: This module integrates a face ID with an aesthetic background. It preserves the facial identity in the generated image aligned with text prompts, ensuring the luminance distribution matches that of a QR code. - IDRS Module: This module harmonizes the QR pattern with the face ID. It uses face masks to preserve the fidelity of the face ID and resolves conflicts between the face ID and QR patterns by reshuffling the QR code to match the brightness distribution in the face region. - IDSE Module: This module balances scannability and aesthetic quality. It iteratively updates the generated image in the latent space and applies adaptive loss to carefully preserve the face ID while enhancing scannability. In Table 1, Text2QR [41] demonstrates unsatisfactory results due to the lack of harmony between the face ID, QR pattern, and background. In summary, the innovation of our work lies not only in the modules of our model but also in the proposed training-free framework for solving complex control problems, where triplet control signals inherently conflict with each other. 
We believe that this new framework will facilitate subsequent research in the community on the effective control of generative models. We will clarify these contributions in the revised manuscript. > **Q2: [Involved identities are all celebrities. How about the method's effectiveness for a broader range of ordinary users?]** * Our Face2QR system is generalizable to real faces, generated realistic faces, and cartoon faces. As shown in Table C of the PDF, the experimental results demonstrate that facial identities are well preserved and seamlessly blended into the background in all generated QR images, showcasing the effectiveness of Face2QR across these three face types. > **Q3: [The paper does not mention how to set the value of a in line 131.]** * The value $a$ is the pixel length of one module, which is determined by the QR image size and the version of the QR code. In our experiments, version 5 QR codes are used, so there are $37\times 37$ modules in a QR code image. Then for a QR image of size $L\times L$, the value of $a$ should be $L/37$. Such details will be included in the revised manuscript. Thanks again for your review. We hope our response has addressed all your concerns. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. The reply partially addressed my questions, but there are still doubts regarding Question 2. Reviewer p8m4 also mentioned this issue. The main concern here is to understand the effectiveness of the method on images of other identities, not the type of facial images. --- Rebuttal 2: Title: Response to Reviewer MYMo Comment: Thanks for joining the discussion. By proposing three modules (i.e., IDQR, IDRS, and IDSE), we introduce a **training-free QR generation framework** that solves the complex problem of managing inherently conflicting triplet control signals. This framework can also accommodate other types of identities.
For example, we can simply replace InstantID, which is designed to preserve face identity, with components designed to preserve object identity (e.g., SSR-Encoder [53] and CustomNet [54]) to create object-preserved QR codes. Since face identity preservation is particularly challenging due to the uncanny valley effect, where even minor discrepancies in facial features or expressions can cause discomfort and appear unnatural, this paper focuses on the challenge of generating aesthetic QR codes with faces. Also, Reviewer p8m4’s Question 3 (“The illustrative results are based on the celebrities or movie stars. How about the results to common face?”) asks whether our Face2QR is applicable to **common faces**. In Table C of the rebuttal PDF, we provide evidence that Face2QR generalizes well to real common faces, generated realistic faces, and even cartoon faces. We apologize for misunderstanding your comments regarding “it only involves up to 20 identities, **all of whom are celebrities**. This limitation hinders the ability to fully demonstrate the method's effectiveness for **a broader range of ordinary users**”. We thought you were asking about the generalizability of Face2QR to **common faces**, similar to Reviewer p8m4’s Question 3, rather than its application to object identities. Due to the NeurIPS 24 policy during the author-reviewer discussion period, we currently cannot find any way to show the generation results of object-preserved QR codes, but we will add them in the revised manuscript for completeness. Thanks again for joining the discussion and providing valuable comments. [53] Yuxuan Zhang, Yiren Song, Jiaming Liu, Rui Wang, Jinpeng Yu, Hao Tang, Huaxia Li, Xu Tang, Yao Hu, Han Pan, Zhongliang Jing. SSR-Encoder: Encoding selective subject representation for subject-driven generation. In Proc. CVPR 2024. [54] Ziyang Yuan, Mingdeng Cao, Xintao Wang, Zhongang Qi, Chun Yuan, and Ying Shan.
Customnet: Object customization with variable-viewpoints in text-to-image diffusion models. In ACM Multimedia 2024.
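The module-size rule from the Q3 answer above (a version-5 QR code has a 37×37 module grid, so $a = L/37$ for an $L\times L$ image) can be sketched as a small helper. This is an illustrative sketch only; the function and constant names are not from the paper:

```python
QR_VERSION_5_MODULES = 37  # modules per side for a version-5 QR code

def module_pixel_length(image_size: int, modules_per_side: int = QR_VERSION_5_MODULES) -> float:
    """Return a, the pixel length of one QR module, for a square L x L image."""
    return image_size / modules_per_side

def module_grid(image_size: int, modules_per_side: int = QR_VERSION_5_MODULES):
    """Yield (row, col, x0, y0) pixel origins of each module, e.g. when
    reshuffling modules to match a brightness distribution."""
    a = module_pixel_length(image_size, modules_per_side)
    for r in range(modules_per_side):
        for c in range(modules_per_side):
            yield r, c, int(c * a), int(r * a)
```

For instance, a 740×740 image gives a module length of 20 pixels and a grid of 37 × 37 = 1369 modules.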
Summary: This work proposed Face2QR, a pipeline for generating personalized QR codes that balance aesthetics, face identity, and scannability. It mainly introduces three components: ID-refined QR integration (IDQR) for seamless background styling with face ID, ID-aware QR ReShuffle (IDRS) to rectify conflicts between face IDs and QR patterns, and ID-preserved Scannability Enhancement (IDSE) to boost scanning robustness through latent code optimization. Face2QR outperforms existing methods in preserving facial recognition features within custom QR code designs. Strengths: + Generating personalized QR codes that balance aesthetics, face identity, and scannability seems an interesting topic and application. + The design of the major components in this paper, including IDQR for seamless background styling with face ID, IDRS to rectify conflicts between face IDs and QR patterns, and IDSE to boost scanning robustness through latent code optimization, is well-motivated and reasonable. + Experiments with user studies show the proposed method has strengths compared with existing methods like Text2QR, ArtCoder, etc. Weaknesses: - The proposed method is mainly built upon existing works in areas like Diffusion models, Identity Preserved Generative Models, and the latest QR generation methods like [41]. It is indeed a good application work but may lack technical contributions. - The proposed new components are good practice for this specific application about QR code generation. It is a good application paper but I'm not sure whether these insights are general enough to fit a requirement for a NeurIPS paper. For example, it is not clear if the proposed method can generate impact or be useful for more general topics such as Identity Preserved Generative Models. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to details in the Weakness section above.
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Some limitations have been discussed in the submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer ebKp, Thank you for taking the time to review our manuscript and providing valuable feedback. All raised concerns are addressed below point by point: > **Q1: [Method is built upon existing work like Diffusion models, Identity Preserved Generative Models and latest QR generation method like [41].]** * While Diffusion models, Identity Preserved Generative Models, and the latest QR generation methods are components of our framework, the key technical contributions lie in the three modules (i.e., IDQR, IDRS, and IDSE), which integrate these components and balance the three control signals. More specifically, the IDQR module integrates a face ID with an aesthetic background, the IDRS module harmonizes the QR pattern with the face ID, and the IDSE module balances between scannability and aesthetic quality. In Table 1, Text2QR [41] shows unsatisfactory results because it does not harmonize the face ID with the QR pattern and the background. More importantly, our contribution also lies in the proposed **training-free framework** for solving the **complex control problem** in which triplet control signals inherently conflict with each other. We believe this new framework could facilitate subsequent research in the community on the effective control of generative models. > **Q2: [Whether these insights are general enough? Will the proposed method generate impact for more general topics?]** * Since the proposed framework resolves the conflicts among triplet control signals, it is applicable to other generation tasks involving multiple controls. For example, with some modifications to the pipeline, it is feasible to control face identity, object and background simultaneously. One possible solution is to have one module integrate face identity with the background, another harmonize object positions with face identity, and the final module balance the generation of objects and background.
Additionally, this framework can also be applied to other tasks, such as generating videos that preserve the motion of a designated person across various backgrounds. Therefore, although our Face2QR is specifically designed for QR code generation, the proposed training-free framework with triplet controls provides valuable insights and has broader impact across various fields, especially in the control of generative models. Thanks again for your review. We hope our response has addressed all your concerns.
Summary: The paper presents an interesting framework to generate face-preserving QR codes, which is useful in social entertainment applications. To enable this application, the paper first encodes the Face ID information into the QR generation process, and a refining process is applied to improve the integrity of facial features as well as the scannability of the QR code. Experimental results show the effectiveness of the proposed algorithm. The paper is well presented and would be easy to reproduce. Strengths: 1. The proposed face ID preserved QR code generation is useful in industry applications. 2. The presentation of the paper is clear and easy to follow. There are sufficient details to reproduce the paper. Weaknesses: 1. The paper seems to be an engineering report. The novelty of the paper is limited and it seems more likely to be an application paper rather than a NeurIPS submission. 2. For the experimental evaluations, there are several points which should be improved. * The evaluation test set is relatively small. For example, for the scanning robustness test, there are only 20 QR codes, which may not be statistically significant. * For the ID-preserving results, it seems the ID results have been compromised compared with the original images. * Also, the generated face results seem inconsistent with the QR codes, as shown in Figure 1 and Table 1. It seems to be more like a simple combination of a QR code with a face image. 3. The illustrative results are based on celebrities or movie stars. How about the results on common faces? Technical Quality: 3 Clarity: 3 Questions for Authors: The main concern is on the novelty of the paper. Please well justify the novelties of the paper. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper has discussed the potential limitations in the Conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer p8m4, Thank you for taking the time to review our manuscript and providing valuable feedback. All raised concerns are addressed below point by point: > **Q1: [The paper is more likely to be an application paper rather than a NeurIPS submission.]** * We believe that our Face2QR is novel since it is pioneering in the field of image-to-image generation in generating QR codes that preserve face ID, scannability and aesthetic quality at the same time. The pipeline is able to balance between three inherently conflicting control signals and achieves SOTA performance. More specifically, the IDQR module preserves the facial identity in the generated image aligned with text prompts, and ensures the luminance distribution matches that of a QR code. The IDRS module uses face masks to preserve the fidelity of face ID and resolves conflicts between face ID and QR patterns by reshuffling the QR code to match the brightness distribution in the face region. The IDSE module iteratively updates the generated image in the latent space and applies an adaptive loss to carefully preserve the face ID while enhancing scannability. We will make these contributions clearer in the revised manuscript. * The innovation of our work not only lies in the modules of our model but also in the proposed **training-free framework (no parameters updated)** for solving the complex control problem in which triplet control signals inherently conflict with each other. We believe that this new framework will facilitate subsequent research in the community on the effective control of generative models. > **Q2.1: [The evaluation test set is relatively small.]** * This paper adopts the scanning robustness test setting from previous works [41,34], which also use a batch of 20 samples. We conducted a scanning robustness experiment with an expanded test set of 100 QR codes, using the same settings as described in the manuscript.
The test results of our Face2QR are shown in the table below, with an average success rate over 95%. The success rate of this new scanning robustness experiment is consistent with that reported in the manuscript.

| Decoder | $(3\text{cm})^2$@$45^{\circ}$ | $(3\text{cm})^2$@$90^{\circ}$ | $(5\text{cm})^2$@$45^{\circ}$ | $(5\text{cm})^2$@$90^{\circ}$ | $(7\text{cm})^2$@$45^{\circ}$ | $(7\text{cm})^2$@$90^{\circ}$ |
| ------- | ---: | ---: | ---: | ---: | ---: | ---: |
| Scanner | 98% | 96% | 100% | 100% | 99% | 100% |
| WeChat | 95% | 99% | 100% | 100% | 98% | 98% |
| TikTok | 100% | 100% | 100% | 100% | 100% | 100% |

> **Q2.2: [The ID results have been compromised compared with the original images.]** * In our pipeline, the InstantID [38] network is used to preserve face identity during the generation process. We compare our generation results with the results of InstantID using the same prompt in Table A of the PDF. Compared with the baseline of InstantID, our results with additional QR information show little degradation in the quality of face identity. For the generated QR images to be practical in daily life, they must be successfully decoded by standard QR code decoders originally designed for black-and-white QR codes. Therefore, although the balance between face identity and QR pattern is carefully managed, subtle artifacts, such as color blocks in the face region, may still occur due to the compromises made for scannability. If a decoder is designed specifically for decoding aesthetic QR codes, we believe it can completely eliminate the impact of QR patterns on faces. In the current situation, our method is likely the best solution for balancing face identity, QR code patterns, and aesthetics. > **Q2.3: [The face results seem inconsistent with the QR codes.
The results are like a simple combination of a QR code with a face image.]** * In the generated QR image, the face ID is consistent with the original face image. Moreover, the decoded QR code matches the original encoding, indicating the functional consistency between the face region and the QR pattern. * We do not consider our generated QR images to be simple combinations of a QR code with a face image. Indeed, conventional methods such as ArtUp [44] attempted to combine them directly, but this often comes at the expense of aesthetic quality. As shown in Table B of the PDF, ArtUp directly pastes the user-provided face image onto the QR code, resulting in much lower aesthetic quality compared to ours. By harmonizing triplet control signals, our Face2QR achieves superiority in balancing aesthetics, face preservation, and scannability. It is worth noting that the clothes, pose, hairstyle, and other features in the generated images have been adjusted accordingly to achieve holistic semantic consistency, far beyond simply combining a QR code with a face image. > **Q3: [How about the results on common faces?]** * Our Face2QR is generalizable to real common faces, generated realistic faces, and even cartoon faces. The experimental results are shown in Table C of the PDF. It can be seen that the face identities, regardless of type, are well preserved in the generated QR images and blended seamlessly into the background, which demonstrates the effectiveness of Face2QR on these three face types. Thanks again for your review. We hope our response has addressed all your concerns.
Rebuttal 1: Rebuttal: Dear Reviewers and Area Chairs, We appreciate the reviewers (**R1** p8m4, **R2** ebKp, **R3** MYMo, and **R4** jWT5) for their insightful feedback. The reviewers agree that: **Novel approach**: * **R3**: "The article introduces a **novel** pipeline designed to create customized QR codes..." * **R4**: "As **the first paper** to combine face identity with QR codes, the proposed pipeline for generating face embedded QR code is **innovative**..." **Effectiveness**: * **R1**: "Experiments results show the **effective** of the proposed algorithm." * **R2**: "Face2QR **outperforms existing methods** in preserving facial recognition features within custom QR code designs." * **R3**: "There is sufficient ablation study to demonstrate the **effectiveness** of each module in this paper." * **R4**: "Experimental results show that the generated QR codes **successfully preserve face identity**, yielding **impressive visual results**." **Interesting**: * **R1**: "The paper presents an **interesting** framework..." * **R2**: "Generating personalized QR codes that balance aesthetics, face identity, and scannability seems an **interesting** topic and application." * **R4**: "The IDRS module presents an **interesting** solution to conflicts arising from different control signals." **Well-Written and Organized**: * **R1**: "The paper is **well presented** and would be **easy to reproduce**." * **R2**: "The design of the major components in this paper, including ..., is **well-motivated** and **reasonable**." * **R3**: "This paper is **well-written** and **easy to follow**." * **R4**: "The paper is **well-organized**, thoroughly discussing motivation and related work." We have responded individually to each reviewer to address any concerns. Best Regards, Authors **References for the PDF file:** [51] Pexels. Accessed: 2024-08-06. [52] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. 
Analyzing and improving the image quality of StyleGAN. In Proc. CVPR, 2020. Pdf: /pdf/266a3b08746c2779fe33b6640b9f234bb9846ec4.pdf
NeurIPS_2024_submissions_huggingface
2024
Optimal Parallelization of Boosting
Accept (oral)
Summary: This paper studies parallelization in weak-to-strong boosting algorithms. Such algorithms are modeled by the number of sequential rounds $p$ that they run for, and the amount of work $t$ that can be done in parallel in each round. Each unit of work in a round is typically a query to a weak learning algorithm, that outputs a hypothesis from a class of VC dimension $d$ (and these queries can be instantiated in parallel). Formally, in round $i$, the algorithm invokes (in parallel) the weak learner with distributions $D^i_1,\dots,D^i_t$, and obtains $h^i_1,\dots,h^i_t$ such that the error of $h^i_j$ wrt $D^i_j$ is at most $1/2-\gamma$. There are $p$ such rounds, at the end of which, the weak-to-strong learning algorithm outputs some weighted vote over all the $h^i_j$s obtained so far. Ideally, we want the final classifier output by the algorithm to be *strong*: at the very least, its error should be competitive with that of AdaBoost (which is $\tilde{O}(d/m\gamma^2)$ where $m$ is the number of training samples). Under a model of weak-to-strong learning defined as above, the classic AdaBoost works with $p=O(\ln m / \gamma^2)$, and $t=1$. What are some other reasonable tradeoffs that we can hope for? Karbasi and Larsen (2024) gave an algorithm that works with $p=1$ and $t=\exp(O(d \ln m / \gamma^2))$. This was followed up on by Lyu et al. (2024), who obtain $p=O(\ln m/\gamma^2R)$ and $t=\exp(O(dR^2))\ln(1/\gamma)$ for any $1 \le R \le 1/2\gamma$. Both Karbasi and Larsen (2024) as well as Lyu et al. (2024) also gave some lower bounds, but neither covered the entire spectrum of $p$ and $t$ in terms of tightness with respect to algorithms achieving these bounds. This paper largely fills up these gaps. On the upper bound side, the authors present an algorithm that achieves $p=O(\ln m/\gamma^2R)$ and $t=\exp(O(dR)) \ln \frac{\ln m}{\delta \gamma^2}$ for any $R \ge 1$. Observe that the bound on $t$ improves Lyu et al. (2024)'s bound by a factor $R$ in the exponent. 
The authors also show lower bounds that are tight (up to log factors) in nearly all regimes. Both the algorithm for the upper bound and the lower bound instance are inspired by the work of Lyu et al. (2024). ### **Upper bound** There are $p$ sequential rounds. For simplicity, we describe the first round, prior to which $D_1$ is set to the uniform distribution on the training sample. We break our computation into $R$ chunks in parallel. Each chunk invokes a weak learner $t/R$ times in parallel on a fresh sample drawn from $D_{1}$, to obtain $t/R$ many hypotheses in total. (Thus, the total number of invocations to the weak learner across all the $R$ chunks is $R \cdot t/R = t$, as required.) Thereafter, there are $R$ sequential rounds of boosting. As $r$ ranges from $1,\dots,R$, we try to obtain a classifier that has error at most $1/2-\gamma$ with respect to $D_r$. We simply do this by checking if there was a hypothesis in the $r$th chunk that has such an error with respect to $D_r$ using the sample we had (which, notably, was from $D_1$). If we do find such a hypothesis, we do a standard boosting update to derive $D_{r+1}$. Assuming that the hypotheses in each step had the required errors with respect to $D_r$, we can imagine that each step works correctly as a standard boosting step, and hence, in each of the $p$ rounds, we are in fact doing $R$ rounds of boosting (and hence, $p$ can be a factor $R$ smaller than standard AdaBoost). But do the hypotheses in each step have the required properties? When we have a sample from $D_1$, we can simply see if a hypothesis has error at most $1/2-\gamma$ with respect to $D_1$ by checking the error of the hypothesis on the sample itself --- this follows from standard uniform convergence of VC classes. However, what if we have a sample from $D_1$, but want to check if a hypothesis has error at most $1/2-\gamma$ with respect to $D_2$? Can we still use the empirical error on the sample as a proxy?
In fact, this is what the algorithm is doing in each boosting step. Intuitively, if the distributions $D_2$ and $D_1$ are "close", this should still work. But note that we make exponential updates to $D_1$ in the boosting step, so it is not obvious at all that $D_2$ should be close to $D_1$. Lyu et al. (2024) control the max-divergence between $D_2$ and $D_1$, and show that this recipe works by using sophisticated tools like advanced composition from differential privacy. This is where the authors diverge (no pun intended): instead of the max-divergence, they control the KL divergence between $D_2$ and $D_1$. This is achieved by using the Gibbs variational principle. The technical analysis seems highly non-trivial, but gets the job done: with good chance over the sample, the empirical error on a sample from $D_1$ is going to be a good proxy for the distributional error on $D_2$, provided the KL divergence between $D_2$ and $D_1$ is small. If the KL is not small, then the authors show that progress has already been made. In this way, by tracking KL divergence instead of the max-divergence, the authors are able to improve over the bound of Lyu et al. (2024). ### **Lower bound** The analysis for deriving the improved lower bound is much more involved. We first start by describing the high level construction in Lyu et al. (2024). The ground truth hypothesis is a random concept $c$ on a domain twice the size of the training set. The hypothesis class $\mathcal{H}$ that the weak learner operates over is also constructed randomly. In particular, it contains $c$, and also $p$ other hypotheses $h_1,\dots,h_p$, where each $h_i$ on each $x$ agrees with $c$ with probability $1/2+2\gamma$. The VC dimension of this class can be controlled in terms of $p$. Now, whenever the weak learner gets queried with a distribution $D$, if it can satisfy this query by returning a hypothesis that is not $c$, it does so.
The goal is to argue that the weak learner can get away with never having to return $c$ at all in any round. If this is the case, what the learning algorithm knows about the rest of the domain is only in the form of $2\gamma$-biased coins. By instantiating the lower bound on learning the bias of a coin, we get a lower bound on the number of rounds. Lyu et al. (2024) require the number of queries $t$ in each round to be sufficiently small for the weak learner to never return $c$. The main observation by the authors is that, indeed, it is possible to use a much bigger bias than $2\gamma$ in the construction of $h_1,\dots,h_p$. That is, each hypothesis can be biased towards $c$ to a much larger extent (as much as $\sqrt{\ln(m)/p} \gg 2\gamma$). This lets them relax the number of allowed queries $t$ per round, which ultimately yields the stronger lower bound. Strengths: This paper essentially completes the characterization of the tradeoff between the number of sequential rounds and the parallel work in each round in boosting algorithms. Previous work left gaps between the upper and lower bounds across much of the spectrum of these parameters. The authors improve upon the state of the art, using highly non-trivial analysis tools, and essentially close the gaps across nearly all of the spectrum. We now have a significantly more complete picture about the tradeoffs involved in parallelizing boosting thanks to the authors' work. Weaknesses: The paper "Boosting, Voting Classifiers and Randomized Sample Compression Schemes" (https://arxiv.org/pdf/2402.02976) by da Cunha et al. (2024) is a relevant paper to the present work---in particular, we can get rid of at least one of the two log factors in the error of AdaBoost with a voting classifier. I recommend the authors at least mention this and cite the paper.
I would also encourage the authors to discuss (somewhere in the paper, maybe as a separate paragraph, or in the conclusion) a bit more about the only regime that we still don't know a matching upper bound for: that of $t \ge \exp(\exp(d))$. Minor/typos:\ Line 64: $n$ hasn't been introduced yet (it should be the size of the training set? and maybe also use $m$ then?)\ Line 132: Shellah -> Shelah Technical Quality: 4 Clarity: 4 Questions for Authors: Is there some intuitive meaning to the lower bound of $t \ge \exp(\exp(d))$, even at a very high level? It seems like such a bound on $t$ (albeit weaker) also existed in Lyu et al. (2024). Do you have any thoughts on how one may attempt to close it, or the inherent difficulty? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors adequately address any limitations that I can foresee. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
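The round structure described in the Upper bound section above (pool $t$ weak-learner responses in $R$ chunks up front, then run $R$ sequential boosting updates answered only from that pool) can be illustrated with a toy sketch. This is a hypothetical implementation with a stubbed weak learner, not the authors' code, and it omits the KL-divergence check that makes the real analysis work:

```python
import math
import random

def weak_hypothesis(labels, gamma, rng):
    # Stub weak learner: flips each true label with probability 1/2 - 2*gamma,
    # so each returned hypothesis has expected advantage ~2*gamma.
    return [-y if rng.random() < 0.5 - 2 * gamma else y for y in labels]

def parallel_boost_round(labels, dist, R, t, gamma, rng):
    """One parallel round of the sketched algorithm: draw all t hypotheses up
    front (R chunks of t // R calls), then run R sequential AdaBoost-style
    steps that are answered only from that pre-drawn pool."""
    chunks = [[weak_hypothesis(labels, gamma, rng) for _ in range(t // R)]
              for _ in range(R)]

    def weighted_error(h):
        return sum(d for d, hy, y in zip(dist, h, labels) if hy != y)

    picked = []
    for r in range(R):
        # The paper looks for a pooled hypothesis with weighted error
        # <= 1/2 - gamma under the current D_r; here we just take the best.
        h = min(chunks[r], key=weighted_error)
        eps = min(max(weighted_error(h), 1e-12), 1 - 1e-12)
        alpha = 0.5 * math.log((1 - eps) / eps)   # AdaBoost step size
        dist = [d * math.exp(-alpha * y * hy)     # exponential reweighting
                for d, y, hy in zip(dist, labels, h)]
        z = sum(dist)
        dist = [d / z for d in dist]
        picked.append((alpha, h))
    return dist, picked
```

Chaining $p$ such rounds simulates $R \cdot p$ boosting steps while issuing only $p$ batches of weak-learner calls, which is the tradeoff the review describes; the subtlety the paper resolves is when the pooled hypotheses, drawn under $D_1$, remain valid for the later $D_r$.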
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful evaluation of our work. It was a joy for us to read it. It is clear that the reviewer built a solid understanding of our work, grasping many of the subtleties in the argument. This even extends to related works to some extent, as evidenced by the reviewer's comments and the insightful suggestion of a recent reference, which we will adopt. In fact, in our opinion, such a level of comprehension seems deserving of a higher confidence score. Regarding the question on why the $\exp(\exp(d))$ term appears, indeed it is somewhat unclear whether it should truly be there. Simply examining the calculations, it originates from the following argument in the lower bound: For each parallel round, we have around $N=\exp(d)$ many random hypotheses that could be used to answer the query distributions (since the VC-dimension is $d$). Since the query distributions in round $i$ are independent of the random hypotheses used to answer queries in round $i$, if each of them is a valid response with just constant probability (say $1/e$), then the chance that none of them are a valid response to a fixed query distribution is only $e^{-N} = \exp(-\exp(d))$ (recall the hypotheses are chosen randomly). So for a parallel algorithm to ask a query that forces the weak learner to return the true concept, we would need to ask around $t = \exp(\exp(d))$ queries. We acknowledge that this is not super intuitive, but at least this is where it originates. It would be very interesting to exploit this in a new algorithm. --- Rebuttal 2: Title: Response to rebuttal Comment: Thank you for the intuition on the $\exp(\exp(d))$ lower bound on $t$. I maintain my score of 8, and indeed, I am confident that this is a strong contribution and should be accepted (updated confidence 3 -> 4). Great work again!
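The counting argument in the rebuttal above can be made concrete with a small numeric sketch (illustrative numbers only; it assumes each of the $N$ random hypotheses answers a fixed query independently with probability $1/e$, as in the rebuttal):

```python
import math

def prob_query_unanswered(n_hypotheses, p_valid=1 / math.e):
    # Chance that none of the N random hypotheses is a valid response to a
    # fixed query: (1 - p)^N, which decays roughly like exp(-p * N).
    return (1 - p_valid) ** n_hypotheses

def queries_to_force_true_concept(n_hypotheses, p_valid=1 / math.e):
    # By a union bound, roughly 1 / (1 - p)^N queries are needed before some
    # query is likely to be answerable only by the true concept c.
    return 1 / prob_query_unanswered(n_hypotheses, p_valid)

d = 3
N = round(math.exp(d))                 # N = exp(d) candidate hypotheses
t = queries_to_force_true_concept(N)   # t grows like exp(exp(d))
```

With $N = \exp(d)$ this makes $t$ grow on the order of $\exp(\exp(d))$, matching the intuition given in the rebuttal.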
Summary: The authors study parallelized boosting, a natural weak-to-strong learning model recently re-introduced by Larsen and Karbasi. Building on recent work of Lyu et al., this work gives new upper and lower bounds on the trade-off between the number of rounds and the number of parallel calls per round to the weak learner, and in particular resolves the complexity of parallel boosting up to log factors in a certain natural parameter regime. A boosting algorithm is a method for amplifying a ‘weak’ learner assumed to have some advantage $\gamma$ (that is, classification accuracy $1/2+\gamma$) to a strong learner (achieving accuracy $1-\varepsilon$ with probability $1-\delta$) by repeated rounds of calls to the weak learner on sequentially modified ground distributions, typically taking a weighted majority vote of the results. A $(p,t)$-parallelized boosting algorithm makes $p$ rounds of $t$ calls to the weak learner, where each round can only depend on the outputs of previous rounds. The authors' main result is a new upper bound giving a tradeoff between $p$ and $t$ for learning any hypothesis class $H$ with VC dimension $d$. In particular, for any $R \in \mathbb{N}$, they give a boosting algorithm with $$p=\frac{\log(m)}{\gamma^2 R}, t=e^{dR}\log\frac{\log m}{\delta R}$$ Here $m$ is the number of samples used by the algorithm, which is assumed to achieve near-optimal accuracy-sample trade-off $m \approx \tilde{O}(d\varepsilon^{-1}\gamma^{-2})$. This improves over prior work which gave a similar result for $t=e^{dR^2}$, reducing the $R$-dependence from quadratic to linear in the exponent. Second, the authors improve prior lower bounds on parallelized boosting to show their bound is near-tight in many regimes of interest. In particular, they prove that either $p \geq \min(\exp(d), \log(m)\gamma^{-2})$ or $t \geq \exp(\exp(d))$, or $p\log t \geq d\gamma^{-2}\log(m)$.
The last of these matches the upper bound up to log factors, so the authors resolve the problem in the regime where $t < \exp(\exp(d))$, $p < \min(\exp(d), \log(m)\gamma^{-2})$, and under the requirement of near-optimal accuracy-sample tradeoff. Strengths: Boosting is one of the most successful and broadly used paradigms in machine learning. Understanding the extent to which boosting can be parallelized is a core problem, and of great interest to the learning theory and machine learning communities. This work makes substantial progress on resolving the complexity of parallelized boosting. The main technique introduced to improve Lyu et al.’s upper bound is a novel and elegant "win-win" theorem, analogs of which may be useful in other problems. The rough idea is to "simulate" sequential boosting distributions $D_0,\ldots,D_R$ in each round, and look at $KL(D_0,D_R)$. If the KL divergence between these distributions is small, the authors argue that one can essentially simulate sampling from $D_R$ by sampling from $D_0$ (up to some small error), meaning the ‘simulated’ boosting will be successful and adopt the guarantees of standard sequential boosting. If the KL is large, we cannot simulate samples, but this indicates the boosting algorithm has made progress and we win anyway. This method removes the sub-optimal $R$ factor from Lyu et al.’s method, which used the simpler max-divergence instead of the KL-divergence. Weaknesses: My main complaint is that I feel the results are a little bit over-stated in the abstract and early in the introduction, which claims to essentially resolve the complexity of parallelized boosting. This doesn’t really seem true, since as discussed above the problem only seems to be resolved (up to log factors) under three assumptions: 1. $t < \exp(\exp(d))$ 2. $p < \min \{\exp(d), \log(m)\gamma^{-2}\}$ 3. The algorithm is required to have near-optimal accuracy-sample tradeoff.
Note that the latter $p$-dependence is not so restrictive, since this is achieved by non-parallel boosting (i.e., $t=1$), but the other parameter regimes remain open. It is unclear to me how restrictive the last condition is. It seems very reasonable that one would be willing to sacrifice somewhat on samples to achieve higher parallelization; is this possible? Technical Quality: 4 Clarity: 3 Questions for Authors: Is it possible one might achieve better trade-offs by relaxing the sample-optimality assumption? Or is this largely an assumption made to simplify the formulae for $p$ and $t$ in terms of samples and not accuracy? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
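The sequential boosting procedure the review summarizes — reweight the ground distribution after each weak-learner call, then take a weighted majority vote — can be sketched as a generic AdaBoost-style toy. This is only an illustration of the standard template (here $p$ rounds with $t=1$ call per round), with a hypothetical decision-stump weak learner; it is not the paper's parallel algorithm.

```python
import numpy as np

def weak_learner(X, y, w):
    """Hypothetical weak learner: best single-threshold stump under weights w."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best[1:]  # (feature, threshold, sign)

def predict_stump(stump, X):
    j, thr, sign = stump
    return sign * np.where(X[:, j] > thr, 1, -1)

def adaboost(X, y, rounds):
    """Sequential boosting: `rounds` rounds, one weak-learner call per round."""
    m = len(y)
    w = np.full(m, 1.0 / m)
    ensemble = []
    for _ in range(rounds):
        stump = weak_learner(X, y, w)
        pred = predict_stump(stump, X)
        err = w[pred != y].sum()
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, stump))
        w *= np.exp(-alpha * y * pred)  # upweight the mistakes
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    score = sum(a * predict_stump(s, X) for a, s in ensemble)
    return np.sign(score)  # weighted majority vote
```

In the $(p,t)$ model, each of the `rounds` iterations above would instead issue $t$ weak-learner calls in parallel, with the reweighting depending only on previous rounds.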
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review. As with Reviewer GD3o, we see that the present reviewer gained a solid understanding of our work and its contributions. The question posed by the reviewer attests to this and, once again, we share our opinion that such a level of comprehension is suitable for a higher confidence score. Answering the question: it is indeed possible to achieve better $p$ vs. $t$ tradeoffs by further relaxing the restriction on the sample complexity of the algorithm. In more detail, the $\log m$ factor in the upper and lower bounds may be replaced by a $\log(1/\varepsilon)$ factor for a target accuracy $\varepsilon$ greater than or equal to the accuracy we obtain. We chose to focus on the near-optimal accuracy regime to keep the formulas as simple as possible, given the already numerous parameters. We agree with the reviewer that emphasizing this can make the scope of our contribution clearer. We will add a discussion of this to the paper. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: I acknowledge the authors' rebuttal. My confidence score is based on the fact that I did not check the work's math line by line, though I ensured the main claims were plausible and trust that the authors correctly proved them. I am confident in my overall assessment of the paper, and that it should be accepted.
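The "win-win" case split described in the review — simulate sampling from $D_R$ via $D_0$ when $KL(D_0, D_R)$ is small, otherwise count the large divergence as progress — can be illustrated with a toy divergence check over discrete distributions. This is purely schematic (importance weights stand in for the simulation step, and the threshold is an arbitrary assumption), not the paper's actual argument.

```python
import numpy as np

def kl(p, q):
    """KL(p || q) for discrete distributions on the same support."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def win_win_round(d0, dR, threshold, rng, n=1000):
    """Toy case split: if KL(d0, dR) is small, samples from d0 reweighted
    by dR/d0 stand in for samples from dR (the "simulate" case); otherwise
    the divergence itself certifies that boosting has made progress."""
    if kl(d0, dR) <= threshold:
        idx = rng.choice(len(d0), size=n, p=d0)
        weights = dR[idx] / d0[idx]  # importance weights
        return "simulate", weights
    return "progress", None
```

Either branch is a "win": the simulate branch inherits the guarantees of sequential boosting, the progress branch bounds how often a round can fail to simulate.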
Summary: The authors derive the bounds for Algorithm 1 in the paper in a very traditional learning-theory style. Strengths: I think a theoretical understanding of the algorithm is more important than experimental reports. This paper shows the bounds for a kind of parallel boosting algorithm. The proof structure of the algorithms is clear. The authors present their work clearly. Weaknesses: The most important problems for this work are the view of boosting and the applicability of Algorithm 1. 1. After the work of XGBoost, the goal of boosting is to minimize the loss of the model on the training dataset instead of combining weak learners. From this perspective, can we obtain a better bound or design a better parallel boosting algorithm? 2. I really like the proof work in this paper, but a fatal problem is that Algorithm 1 may not accelerate model training. For lines 10 to 18, the algorithm has to find $h^*$ in $H_{kR+r}$, and this process may be exhausting, which means the algorithm may cost more time than traditional boosting with the same computing resources. Technical Quality: 3 Clarity: 3 Questions for Authors: Please show the importance of Algorithm 1, or show that Algorithm 1 can accelerate model training with the same computing resources. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The same as the weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the effort invested in evaluating our submission. We were happy to see that the reviewer found the presentation of our arguments clear and values the theoretical nature of our work, which is its entire focus. We remark that with sufficient parallel computation, the time to find $h^\star$ in $H_{kR+r}$ may be reduced (that is, the time to completion, not the total work). In particular, a separate thread can evaluate the performance of each $h \in H_{kR+r}$ in parallel, and then the best-performing $h^\star$ can be selected.
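The parallel evaluation the rebuttal describes — one worker per hypothesis, then select the best — can be sketched as follows. This is a generic illustration (thread pool, accuracy as the score), not the authors' implementation; with enough workers the wall-clock time drops to roughly the cost of one evaluation while the total work is unchanged.

```python
from concurrent.futures import ThreadPoolExecutor

def accuracy(h, X, y):
    """Fraction of examples on which hypothesis h agrees with the labels."""
    return sum(h(x) == t for x, t in zip(X, y)) / len(y)

def best_hypothesis(hypotheses, X, y, workers=8):
    """Score every h in the class in parallel, then return the best one."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda h: accuracy(h, X, y), hypotheses))
    return hypotheses[max(range(len(scores)), key=scores.__getitem__)]
```

For CPU-bound scoring one would use processes rather than threads, but the structure of the argument — evaluations are independent, only the final argmax is sequential — is the same.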
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Meta-Reinforcement Learning with Universal Policy Adaptation: Provable Near-Optimality under All-task Optimum Comparator
Accept (poster)
Summary: This paper presents a bilevel optimization framework for meta-reinforcement learning (Meta-RL) named BO-MRL, which aims to enhance policy adaptation through a universal policy optimization algorithm. The framework is designed to improve data efficiency by implementing multiple policy optimization steps on a single data collection during task-specific adaptation. The authors provide theoretical guarantees, including upper bounds on the expected optimality gap over a task distribution, which measure the distance between the adapted policy and the task-specific optimal policy. Empirical validation is conducted on several benchmarks, demonstrating the superior effectiveness of the proposed algorithm compared to existing methods. Strengths: ### S1. Theoretical Contributions The paper provides a solid theoretical foundation for the proposed bilevel optimization framework, including convergence guarantees and upper bounds on the expected optimality gap. This strengthens the understanding of the method's performance and its theoretical underpinnings. ### S2. Empirical Validation The empirical results presented in the paper are comprehensive and robust, showing significant improvements over state-of-the-art methods on various benchmarks. This demonstrates the practical applicability and effectiveness of the proposed framework. ### S3. Novel Approach The introduction of a universal policy optimization algorithm within a bilevel optimization framework is a novel and effective approach to tackling the challenges in Meta-RL. This method addresses both the optimality and data-inefficiency issues commonly faced by existing methods. Weaknesses: ### W1. Complexity of Implementation The proposed framework, while theoretically sound and empirically validated, may be complex to implement in practice. The bi-level optimization and the need for hypergradient computation require careful tuning and expertise, which might limit its accessibility to a broader audience. ### W2.
Limited discussion on practical implications The paper could benefit from a more detailed discussion on the practical implications and potential limitations of the proposed method in real-world applications. This includes considerations such as computational resources, scalability to large-scale environments, and adaptability to varying task complexities. ### W3. Comparison with Broader Range of Methods While the paper compares the proposed method with several state-of-the-art Meta-RL algorithms, it would be beneficial to include a broader range of methods, especially those outside the immediate Meta-RL context, to provide a more comprehensive evaluation of its performance. Technical Quality: 3 Clarity: 3 Questions for Authors: ### Q1. Hyperparameter Sensitivity How sensitive is the proposed method to the choice of hyperparameters? ### Q2. Scalability Can the authors provide more insights into the scalability of the proposed method? How does the computational complexity scale with the number of tasks and the size of the state-action space? ### Q3. Practical Deployment What are the practical considerations for deploying the proposed method in real-world scenarios? Are there any specific challenges or requirements that practitioners should be aware of? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the theoretical limitations and the assumptions underlying their framework. However, a more detailed discussion on the potential negative societal impacts and how to mitigate them would be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for your time and effort in reviewing our work. Thanks for your suggestions to make our manuscript better. We address your concerns as follows. >**Weakness 1. Complexity of Implementation. The proposed framework, while theoretically sound and empirically validated, may be complex to implement in practice. The bi-level optimization and the need for hypergradient computation require careful tuning and expertise, which might limit its accessibility to a broader audience.** **Answer:** We propose a practical algorithm (Algorithm 2, line 628) to simplify the hypergradient computation in Propositions 1 and 2 (lines 199 and 212). In the simplified hypergradient computation, the inverse Hessian matrix is computed by the conjugate gradient method, which is implemented in many standard Python libraries, such as SciPy. The remaining parts of the gradient computation can be handled by the autograd function in PyTorch. The experiments in our paper adopt the conjugate gradient method, and the experimental results validate the computational efficiency of our algorithm. In the Half-cheetah experiment, the computation time of the hypergradient with the inverse Hessian for a three-layer neural network is about $0.3$ seconds per meta-parameter update, using only the CPU. This approach has demonstrated high efficiency across a wide range of applications, including several widely used RL algorithms, such as TRPO [44] and CPO [1]. The details are shown in Appendix C of [44]. In the simplest meta-RL method, MAML [13], the authors use TRPO to update the meta-parameter; as shown in Section 5.3 of [13], the inverse of the Hessian is computed in a similar way to ours. Therefore, the computational complexity of the hypergradient in our proposed method is comparable to that of many existing RL and meta-RL approaches, which have been shown to be efficient. >**Weakness 2. Limited discussion on practical implications.
The paper could benefit from a more detailed discussion on the practical implications and potential limitations of the proposed method in real-world applications. This includes considerations such as computational resources, scalability to large-scale environments, and adaptability to varying task complexities.** **Answer:** In the experiments for the proposed algorithm, we only use the CPU for meta-training and meta-testing. After the meta-model is trained, policy adaptation takes only about 2 seconds. Therefore, the computational resource requirements of the proposed algorithm are low. In the experiment section (Section 6, line 362), we conduct experiments on relatively simple grid-world environments (line 362; results shown in Figure 1) and also on complex high-dimensional locomotion tasks (line 373; results shown in Figure 2). The experimental results demonstrate the superior performance of the proposed algorithm, highlighting its good adaptability to varying task complexities. The proposed algorithm also scales to large environments and tasks; please refer to **the answer to Question 2** for more details. A potential limitation in practice is that designing the reward functions for multiple tasks can be challenging and requires substantial expert effort. >**Weakness 3. Comparison with Broader Range of Methods. It would be beneficial to include a broader range of methods, especially those outside the immediate Meta-RL context, to provide a more comprehensive evaluation of its performance.** **Answer:** The manuscript focuses on meta-RL and compares with the existing meta-RL algorithms [13,43,48]. It is typical for meta-RL papers to compare within the context of meta-RL. >**Question 1. Hyperparameter Sensitivity.
How sensitive is the proposed method to the choice of hyperparameters?** **Answer:** As the performance of RL algorithms can be sensitive to the choice of hyperparameters, as shown in [3.a], the proposed method likely shares this property. However, tuning the hyperparameters is easy. In particular, the hyperparameters that require tuning in the proposed method also appear in existing widely used meta-RL and RL algorithms, such as TRPO, and we can follow the previous standard choices for them. [3.a] Eimer, Theresa, et al. "Hyperparameters in reinforcement learning and how to tune them." International Conference on Machine Learning, 2023. >**Question 2. Scalability. Can the authors provide more insights into the scalability of the proposed method? How does the computational complexity scale with the number of tasks and the size of the state-action space?** **Answer:** Overall, the computational complexity of the proposed method is comparable to that of many existing meta-RL algorithms, including MAML [13]. More details about the computational complexity analysis are given in Appendices E and F (lines 633-694). In terms of scalability to a large number of tasks, the applied stochastic optimization algorithm, Adam, has shown good performance on huge amounts of data, e.g., millions of data points. In terms of scalability to large state-action spaces, we apply a neural network as the approximation function for the action policy, which can deal with large-scale problems. In the experiment, the dimension of the continuous state-action space of the Ant environment is 35, which is sufficiently large for most RL problems. >**Question 3. Practical Deployment. What are the practical considerations for deploying the proposed method in real-world scenarios? Are there any specific challenges or requirements that practitioners should be aware of?** **Answer:** Meta-RL algorithms have been widely applied to many real-world scenarios, such as robotics [6, 40].
In terms of real-world practice, our algorithm is similar to these meta-RL algorithms and has no extra requirement for practitioners. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: I appreciate the authors for the detailed responses. I still think this is a good submission worth an acceptance, so I maintain my assessment.
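The conjugate-gradient computation of the hypergradient's inverse-Hessian term, described in the response to Weakness 1 above, can be sketched as follows. The `hvp` callable is a stand-in for an autograd-supplied Hessian-vector product, and the damping term is an assumption added for numerical stability; this is a generic sketch, not the paper's Algorithm 2.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def inverse_hvp(hvp, g, dim, damping=1e-3):
    """Solve (H + damping * I) x = g with conjugate gradient.

    Only Hessian-vector products are needed, so the Hessian is never
    formed explicitly; hvp(v) would come from autograd in practice.
    """
    op = LinearOperator((dim, dim), matvec=lambda v: hvp(v) + damping * v)
    x, info = cg(op, g)
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")
    return x
```

This is the same trick TRPO-style methods use: the cost per CG iteration is one Hessian-vector product, so the memory footprint stays linear in the parameter dimension.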
Summary: This paper proposes an optimization framework to learn the meta-prior for task-specific policy adaptation. Strengths: 1. The proposed method claims to be data-efficient, requiring only one-time data collection. Also, this paper formulates the RL problem as a bilevel optimization problem. 2. The proposed method considers both unconstrained and constrained optimization problem cases. 3. Upper bounds on the optimality gap between the adapted policy and the optimal task-specific policy are provided. Weaknesses: 1. The proposed method heavily depends on the minimization problems of Eq. 1 or Eq. 2, which minimize the distance between the policy and the predefined policy and find the optimal policy for the additional task at the same time. My concern and question is: given the same problem, where we want to adapt a pre-defined policy $\pi$ to a new task $L$ while keeping the original task $J$, what is the performance difference if I just train $\pi$ to minimize $L+J$? 2. Can you plot the algorithm deployment performance for the example in Fig. 2? Sometimes, if you only do $\min_\theta ||\pi_\theta - K(x)||$, where $K(x)$ is a predefined controller, $\theta$ converges fast, but the achieved $\pi_\theta$ does not work well in some examples. Technical Quality: 3 Clarity: 3 Questions for Authors: see the weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: yes, the authors have provided them Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for your time and effort in reviewing our work. Thanks for your suggestions to make our manuscript better. We address your concerns as follows. >**Weakness 1. The proposed method heavily depends on the minimization problems of eq. 1 or eq. 2, which minimizes the distance between the policy and the predefined policy and finds the optimal policy for the additional task at the same time. My concern and question is that given the same problem that we want to adapt a pre-defined policy $\pi$ to a new task $L$ while keeping the original task $J$, what is the performance difference if I just train $\pi$ to minimize $L+J$?** **Answer:** The goal of the policy adaptation problem, i.e., Eq. 1 or Eq. 2 (lines 159 and 161), is **not** adapting a pre-defined policy $\pi$ to a new task $L$ while keeping the original tasks $J$. Minimizing $L + J$ is essentially multi-task learning. The motivation for multi-task learning is that there exists a common solution for the multiple tasks, and training on them together could benefit the training for each task. For example, the feature extraction layers for multiple image classification tasks are shared, and training these layers together could help extract better features for each task. In this manuscript, we consider meta-learning, where we aim to learn knowledge for a task distribution. The tasks in the task distribution have different goals, and there is no shared component that can be applied to all tasks. For example, the task of driving a robot north and the task of driving a robot south are related and follow a task distribution. However, the optimal policies for them have no shared component. If we minimize $L + J$ of two such tasks, the solution performs badly on either of the two tasks. Therefore, minimizing $L + J$ is not applicable to our problem. This manuscript studies meta-learning, which is different from multi-task learning.
Meta-learning aims to learn a meta-policy, such that the meta-policy can be adapted to tasks in a task distribution with a small amount of data. It trains a meta-policy during the meta-training. During the meta-test, the meta-policy is adapted to new tasks by the policy adaptation, Eq. 1 or Eq. 2 (lines 159 and 161). Therefore, the goal of Eq. 1 or Eq. 2 is adapting the meta policy to the new task $L$ in the meta-test, i.e., the adapted policy can minimize the loss of $L$ or is close to its minimum. During the meta-test, the original multiple tasks about $J$ are not used anymore, and the learned meta-policy $\pi_\phi$ serves as the prior knowledge to learn the task-specific policy for new tasks. Next, we would like to discuss why we cannot directly minimize $L$ without learning meta-policy $\pi_\phi$ and why the learned meta-policy $\pi_\phi$ is necessary. As we know, a fundamental difference between RL and supervised learning is that RL can not be solved on one-time collected data, i.e., the loss of $L$ cannot be minimized using the data collected on one policy. The RL algorithm minimizes $L$ by iterative data sampling and policy optimization. However, during the meta-test of meta-RL, we can only collect data on one policy (one-time data collection) on the new task of $L$ and it is impossible to minimize $L$. Therefore, we use the problem in Eq. 1 or Eq. 2 to approximate the minimization of $L$. Meta-RL is to learn how to better approximate the minimization of $L$, and the meta-policy $\pi_\phi$ in Eq. 1 or Eq. 2 is the knowledge that is learned by meta-RL to reduce the approximation error. >**Weakness 2. Can you plot the algorithm deployment performance for the example in Fig. 2? Since sometimes if you only do $\min_\theta ||\pi_\theta - K(x)||$, where $K(x)$ is a predefined controller, the $\theta$ converges fast, but the achieved $\pi_\theta$ does not work well in some examples.** **Answer:** In Fig. 
2 (line 359), we show the performance of adapted policies on the meta-test tasks. The x-axis is the adaptation iteration number that the meta-policy is adapted to the new tasks, and the y-axis is the average accumulated reward of the adapted policies. The meta-test occurs after the new tasks are given, i.e., it represents the deployment of the learned meta-policy to new tasks. Therefore, Fig. 2 can reflect the performance of $\pi_\theta$ beyond the convergence. Figure 5 in Appendix B (line 600) shows the convergence of meta-training. --- Rebuttal Comment 1.1: Comment: thank you, that answers my question.
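The rebuttal's point that one-time data collection forces an approximation to the minimization of $L$ can be made concrete with a toy sketch: several ascent steps on an importance-weighted surrogate built from a single batch, rather than one policy-gradient step per collection. This is a hypothetical two-armed-bandit illustration (Bernoulli policy with logit `theta`), not the paper's Eq. 1 or Eq. 2.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adapt_policy(theta, actions, advantages, logp_behavior, steps=10, lr=0.5):
    """Multiple optimization steps on one batch: ascend the
    importance-weighted surrogate E[(pi_theta / pi_behavior) * advantage]."""
    for _ in range(steps):
        p = sigmoid(theta)                     # probability of action 1
        logp = np.where(actions == 1, np.log(p), np.log(1 - p))
        ratios = np.exp(logp - logp_behavior)  # importance weights
        grad_logp = actions - p                # d log pi / d theta
        theta += lr * np.mean(ratios * advantages * grad_logp)
    return theta
```

The importance ratios let every step after the first reuse the same batch; a trust-region constraint (as in the TRPO-style surrogates the rebuttals cite) would keep the ratios from drifting too far from 1.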
Summary: The paper proposes a bilevel optimization algorithm for Meta-RL, which unlike MAML, implements multiple-step policy optimization on one-time data collection. In addition, the paper provides an upper bound on the expected optimality gap over the task distribution, that quantifies the model’s generalizability to a new task from the task distribution. Experiments show the advantages of the proposed framework over existing meta-learning approaches. Strengths: * The paper proposes a practical algorithm, supported by a theoretical upper bound of the optimality of the proposed algorithm. * The paper is well-written and easy to follow and understand. * The appendix is well-organized and contains all the proofs and discussions about complexity and its relation to MAML. * Experiments were performed to verify the theoretical results and show the advantage of the proposed algorithm over MAML. Weaknesses: * Comparison to state-of-the-art: the paper compares the proposed algorithm to MAML, EMAML, and ProMP, which were proposed 5 years ago. Since meta-RL is a very active research field, and even MAML has newer and more advanced variants, it is hard to be convinced that the paper compares the proposed algorithm to the most relevant approaches. In addition, the authors did not explain why they chose to compare to those specific baselines, and if the contribution of newer MAML variants can also be incorporated into their algorithm. * The related work section can be improved. Specifically, the section is a single short paragraph that covers only bilevel optimization algorithms for meta-RL. An extended related work section on meta-RL (even in the appendix) that covers the different types of meta-RL algorithms and places the paper with respect to the various approaches could help the reader understand the paper's contribution. Technical Quality: 3 Clarity: 3 Questions for Authors: * Please address the aforementioned weaknesses. 
* In Line 178: does NPG mean Natural Policy Gradient? If yes, maybe state it explicitly. * Is it possible to formulate a similar bound for other meta-RL algorithms, such as RL2 [1] or VariBAD [2]? * Lines 32-33 explain that even if the learned meta-parameter $\phi$ is close to the best one, the model adapted from the learned $\phi$ might be far from the task-specific optimum for some tasks, since the best meta-parameter is shared for all tasks and learned from the prior distribution, which can have high variance. The proposed algorithm was designed to adopt a stronger optimality metric, where the adapted model is compared against the task-specific optimal policy for each task. Could you provide a real experiment (even a synthetic experiment) that demonstrates this phenomenon? The existing experiments show that the proposed approach performs better than MAML, but it is still not entirely clear that this leads to the improved performance. A synthetic example can help fully explain this important failure case of existing meta-learning approaches. * Perhaps I missed the following point when reading the paper, but during task-specific policy adaptation, the algorithm performs multiple-step policy optimization on one-time data collection, which speeds up the learning process - should it decrease performance (theoretically) compared to one-step policy optimization on one-time data collection? [1] Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv:1611.02779, 2016. [2] Luisa Zintgraf, Sebastian Schulze, Cong Lu, Leo Feng, Maximilian Igl, Kyriacos Shiarlis, Yarin Gal, Katja Hofmann, and Shimon Whiteson. VariBAD: Variational Bayes-adaptive deep RL via meta-learning. Journal of Machine Learning Research, 22(289):1–39, 2021. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed the limitations of the proposed algorithm in section P.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for your time and effort in reviewing our work. Thanks for your suggestions to make our manuscript better. >**Weakness 1. Comparison to state-of-the-art: the paper compares the proposed algorithm to MAML, EMAML, and ProMP, which were proposed 5 years ago. Since meta-RL is a very active research field, and even MAML has newer and more advanced variants, it is hard to be convinced that the paper compares the algorithm to the most relevant approaches. In addition, the authors did not explain why they chose to compare to those specific baselines, and whether the contributions of newer MAML variants can also be incorporated into their algorithm.** **Answer:** Thanks for the suggestive comments. In **the global rebuttal**, we introduce the categorization of meta-RL and compare the advantages/disadvantages of the categories. Based on this, we justify choosing MAML, EMAML, and ProMP as the baselines. In this manuscript, we focus on the category of optimization-based meta-RL. It is typical in EMAML, ProMP, and [10, 1.a] that optimization-based methods are only compared with other optimization-based methods (due to their worse optimality than black-box methods in the in-distribution meta-test setting). So, in the experiments, we compare the proposed algorithm with the existing optimization-based meta-RL approaches, including MAML, EMAML, and ProMP. The experimental results show that the proposed method can outperform the baselines significantly. Moreover, we also achieve the state-of-the-art theoretical result over all optimization-based meta-RL papers, as shown in Table 1 (line 35). Recent optimization-based meta-RL papers, including new variants of MAML [10, 1.a], aim to solve meta-gradient estimation issues. They usually do not significantly outperform MAML, EMAML, and ProMP. In **the attached PDF file of the global rebuttal**, we include a recent baseline from [10] (2022) and compare their performances.
The results show that the proposed method can also outperform this baseline. [1.a] Tang, Yunhao. "Biased gradient estimate with drastic variance reduction for meta-reinforcement learning", 2022. >**Weakness 2. The related work section can be improved. Specifically, the section is a short paragraph that covers only bilevel optimization algorithms for meta-RL. An extended related work section on meta-RL that covers the different types of meta-RL algorithms and places the paper with respect to the various approaches could help the reader understand the paper's contribution.** **Answer:** Thanks for the suggestion. In **the global rebuttal** and **the answer to Weakness 1**, we discuss the categorization of existing meta-RL algorithms, compare the two categories, and the advantages of the method in the manuscript over the existing optimization-based meta-RL methods. These discussions make our contribution more clear. We will add them to the revised manuscript. >**Question 2. Is it possible to formulate a similar bound for other meta-RL algorithms, such as RL2 [1] or VariBAD [2]?** **Answer:** This manuscript focuses on the theoretical analysis of optimization-based meta-RL. To the best of our knowledge, almost all papers that work on the theoretical analysis of meta-RL algorithms [10, 38, 50, 52] focus on optimization-based meta-RL. From **the answer to Weakness 1**, we can see that the design of optimization-based methods is usually inspired by theoretical analysis, and the design of the black-box method is often more heuristic. As a result, it is challenging to derive optimality bounds for black-box meta-RL, such as RL2 and VariBAD. >**Question 3. Lines 32-33 explain that even if the learned meta-parameter is close to the best one, the model adapted from the learned might be far from the task-specific optimum for some tasks, since the best meta-parameter is shared for all tasks and learned from the prior distribution that can be with high variance. 
The proposed algorithm was designed to adapt a stronger optimality metric, where the model is adapted from the task-specific optimal policy for each task. Could you provide a real experiment (even a synthetic experiment) that demonstrates this phenomenon? The existing experiments show that the proposed approach performs better than MAML, but it is still not entirely clear that this leads to improved performance. A synthetic example can help fully explain this important failure case of existing meta-learning approaches.** **Answer:** We apologize for the confusion regarding the reason for the performance improvement of the proposed algorithm over MAML. The better performance of the proposed algorithm is not due to the use of a stronger optimality metric. The stronger optimality metric is only used to evaluate the proposed algorithm; it is not directly used in the design of the algorithm. Instead, the reason is that we design the policy adaptation problem (Problem (1), line 159) and solve it with multiple optimization steps. Specifically, MAML and its variants apply the policy gradient on the one-time data collection during the meta-test. Problem (1) (line 159) maximizes a surrogate function, which approximates the total reward function (as indicated in line 170) using one-time data collection. We compute the optimal solution of Problem (1) by multiple optimization steps. The objective function of Problem (1) is a better approximation of the total reward function than that of the policy gradient in MAML, and therefore achieves better performance than MAML. Moreover, since the objective of Problem (1) is a lower bound of the total reward function (stated in Lemmas 1 and 2, line 304), we can derive the optimality bound of the proposed algorithm under a stronger optimality metric. The theoretical analysis provides more insight into why the design of Problem (1) is good.
To improve clarity, we will swap the third paragraph and the fourth paragraph of the introduction, and add a necessary clarification to the modified manuscript. --- Rebuttal 2: Title: Supplementary answer for the reviewer's comment Comment: We thank the reviewer again for reviewing our work. Here, we have supplementary answers to the reviewer's questions. >**Question 1. In Line 178: does NPG mean Natural Policy Gradient? If yes, maybe state it explicitly.** **Answer:** Yes. Thanks for pointing it out. We will clarify it in the revised manuscript. >**Question 4. During task-specific policy adaptation, the algorithm performs multiple-step policy optimization on one-time data collection, which speeds up the learning process - should it decrease performance (theoretically) compared to one-step policy optimization on one-time data collection** **Answer:** In this manuscript, our algorithm performs (i) multiple-step policy optimization on one-time data collection, i.e., for each iteration of the meta-test, we collect data on the current policy and adapt the policy using multiple optimization steps. MAML performs (ii) one-step policy optimization on one-time data collection, i.e., for each iteration of the meta-test, it collects data on the current policy and adapts the policy using a one-step policy gradient. As indicated in lines 680-690, the computation times of (i) and (ii) are comparable for one iteration. On the other hand, the optimality of our algorithm is better than MAML for a single iteration, i.e., higher data efficiency of the proposed algorithm. So, our algorithm takes fewer iterations and less training time to reach a given optimality requirement, i.e., speed up the adaptation process. The speed-up is due to the higher data efficiency and the better optimality. --- Rebuttal Comment 2.1: Comment: I thank the authors for their detailed response. 
The categorization of existing meta-RL algorithms provided in the global rebuttal answer helps to clarify the differences between the various meta-RL algorithms and to position the proposed method relative to existing approaches. Adding this discussion to the paper would improve the contribution of the manuscript. The discussion explains that although optimization-based meta-RL methods achieve worse performance than black-box methods, they are more robust to sub-optimal meta-policies and can handle tasks outside the training distribution, compared with black-box methods (such as RL2 and VariBAD). Can you provide a reference that supports this claim? I understand that the motivation for black-box algorithms such as VariBAD or RL2 was to learn the prior distribution over tasks and theoretically should not work with test tasks outside this prior distribution, but has anyone performed an experiment that validates that optimization-based methods work better in such scenarios? I appreciate the added comparison to the recent baseline and the detailed answers to my questions. --- Rebuttal 3: Comment: Thanks for your reply. The categorization of meta-RL approaches and the comparison of the differences between the two categories are justified in [1]. The experiments conducted in [2,3] show that optimization-based meta-RL methods are more robust to sub-optimal meta-policies and can handle tasks outside the training distribution. [1] Beck, Jacob, et al. "A survey of meta-reinforcement learning", 2023. [2] Xiong, Zheng, et al. "On the Practical Consistency of Meta-Reinforcement Learning Algorithms", 2021. [3] Finn, Chelsea, et al. "Meta-Learning and Universality: Deep Representations and Gradient Descent Can Approximate any Learning Algorithm", 2018. --- Rebuttal Comment 3.1: Comment: Thanks for your reply. These references help to improve the clarity of the discussion above. 
I didn't find references [2,3] in the manuscript, and I think that it would be beneficial to add them. I raised my score since the authors answered all my concerns.
null
null
Rebuttal 1: Rebuttal: We are grateful and indebted for the time and effort invested by all reviewers to evaluate our manuscript, and for all the suggestions and reference recommendations to make our manuscript a better and stronger contribution. Please find below our detailed replies to all the reviewers' comments. In this global rebuttal, we would like to discuss the categorization of existing meta-RL algorithms and compare the advantages and disadvantages of these categories. >**Categorization of existing meta-RL.** As mentioned in the second paragraph of the introduction (line 19), meta-RL methods can be generally categorized into (i) optimization-based meta-RL and (ii) black-box (also called model-based or context-based) meta-RL. Optimization-based meta-RL approaches, such as MAML and its variants, usually include a policy adaptation algorithm and a meta-algorithm. During meta-training, the meta-algorithm aims to learn a meta-policy, such that the policy adaptation algorithm can achieve good performance starting from the meta-policy. The learned meta-policy parameter is adapted to the new task using the policy adaptation algorithm during the meta-test. Black-box meta-RL, such as RL2 and VariBAD, aims to learn an end-to-end neural network model. The model has fixed parameters for the policy adaptation during the meta-test, and generates the task-specific policy using the trajectories collected in the new task. The meta-RL algorithm in the manuscript is an optimization-based method. > > >Optimization-based meta-RL methods are typically less specialized to the training tasks and achieve worse performance than black-box methods. However, they are more robust to sub-optimal meta-policies and can handle tasks outside the training distribution, compared with black-box methods. In optimization-based meta-RL, the task-specific policy is adapted from a shared meta-policy over the task distribution. 
The learned meta-knowledge is not specialized for any task, and its meta-test performance on a task depends on a general policy optimization algorithm applied to new data from that task. In contrast, the end-to-end model in black-box meta-RL typically includes specialized knowledge for each task within the task distribution, and uses the new data merely as an indicator to identify the task within the distribution. As a result, the optimality of optimization-based methods is usually worse than that of black-box methods, especially when the task distribution is heterogeneous and the data scale for adaptation is extremely small. On the other hand, the policy adaptation algorithms in the meta-test of optimization-based methods can generally improve the policy and show convergence starting from any initial policy, not only the learned meta-policy. Therefore, they are robust to sub-optimal meta-policies and can deal with tasks that are outside the training task distribution. In contrast, due to the specialization of the learned model, black-box methods cannot be generalized outside of the training task distribution. Pdf: /pdf/11965c85188ef5945dc2b1872ffb997eb318469c.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
FFAM: Feature Factorization Activation Map for Explanation of 3D Detectors
Accept (poster)
Summary: This paper proposes a feature factorization activation map to explain 3D detectors. It uses non-maximum matrix factorization (NMF) to obtain a global concept activation map and then refine it with feature gradients of an object-specific loss. A voxel upsampling strategy is further proposed to upsample sparse voxels to align the granularity between the activation map and input point cloud. Both quantitative and qualitative results validate the efficacy of the proposed method. Strengths: - The basic idea is easy to follow, and the comparison with previous works (both in 2D and 3D) is clear. - The key motivation and contributions are clearly presented. - The content of methodology and implementation are well organized. - The visualization results are impressive. Both the quantitative and qualitative results convincingly support the efficacy of the proposed explanation method. Weaknesses: The only concern is about the application, or the value in applications, of this proposed method. I understand this is a good adaptation and attempt to apply such explanation methods on LiDAR-based 3D detectors, but it is still unclear how it can benefit the procedure of improving 3D detectors or downstream applications. I would be more interested in how it produces a stronger 3D detector or a 3D detector with better controllability and safety, especially for safety-critical scenarios like autonomous driving. In addition, there are also related areas or applications, such as 3D detection in indoor scenes (ScanNet, SUN RGB-D), etc, and camera-based/multi-modality 3D detection. I am also curious about the performance of such explanation methods when applied to those problems. How about the performance, and is there any new challenge? Technical Quality: 3 Clarity: 3 Questions for Authors: None. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your insightful feedback. Below please find our clarifications in response to your comments. **Q1.** The only concern is about the application, or the value in applications, of this proposed method. I understand this is a good adaptation and attempt to apply such explanation methods on LiDAR-based 3D detectors, but it is still unclear how it can benefit the procedure of improving 3D detectors or downstream applications. I would be more interested in how it produces a stronger 3D detector or a 3D detector with better controllability and safety, especially for safety-critical scenarios like autonomous driving. **Authors’ Reply:** We fully agree with your viewpoint. Our FFAM can be used to find the regions of interest of detectors on point cloud input. It also can provide visual explanation for different object attributes at a fine-grained level. In section 4.3, we utilize FFAM to reveal the detection mode of false positive predictions generated by detectors. There are three main observations in the experiment. First, we observe that the average saliency maps of false positives exhibit similarities to those of true positives. The detector predicts a false positive because it detects a similar pattern to that of a true positive. Second, false positives tend to be surrounded by more noise points, with a point density of approximately one-third of true positives. We believe noise and sparse point density may be significant factors contributing to the occurrence of false positives. Lastly, the ratio of car, pedestrian, and cyclist objects in true positives is approximately 36:5:2, while in false positives, it is 13:8:2. This suggests car objects are less prone to false positives compared to pedestrian and cyclist objects. We think that the above observations can help researchers design more efficient and reliable 3D object detectors. We believe there are more applications of FFAM that can be explored. 
In our future work, we will use FFAM to improve the performance (including accuracy and speed) of 3D detectors. **Q2.** In addition, there are also related areas or applications, such as 3D detection in indoor scenes (ScanNet, SUN RGB-D), etc, and camera-based/multi-modality 3D detection. I am also curious about the performance of such explanation methods when applied to those problems. How about the performance, and is there any new challenge? **Authors’ Reply:** In this paper, our main focus is on explaining LiDAR-based 3D detectors in outdoor scenes. For indoor scenes or other modalities as input, as long as the intermediate features of the network can be directly linked to the input, our FFAM can be applied effectively. We believe that the most challenging task would be explaining 3D detectors based on multi-view cameras. This is because the correspondence between the intermediate features of such detectors and the input is relatively intricate. Many of them employ learning-based methods to lift two-dimensional images into three-dimensional space. --- Rebuttal Comment 1.1: Title: Final Decision Comment: Thanks for the author's efforts in addressing my questions. Given the response, I can better understand the application and adaptation for other scenarios, but I think there are still limited new insights regarding how the proposed method can help improve current algorithms. The connection between the analysis and the proposed method is a little weak. Hence, I cannot raise my rating and would keep the original "borderline acceptance" recommendation. --- Reply to Comment 1.1.1: Comment: Thank you for your comments. The interpretability research on 3D object detection is still in its early stages of development. There is very little existing work in this area currently. The method we proposed has made significant progress compared to previous approaches. Our paper focuses more on the theoretical aspect, with relatively weaker applications. 
However, it still holds positive significance for advancing the interpretability of 3D object detection models.
Summary: The paper proposes a method called Feature Factorization Activation Map (FFAM) to provide visual explanations for 3D object detectors based on LiDAR data. This method addresses the interpretability issue in 3D detectors by using non-negative matrix factorization to generate concept activation maps and refining these maps using object-specific gradients. The approach is designed to handle the unique challenges of 3D point cloud data, such as sparsity and the need for object-specific saliency maps. The paper evaluates FFAM against existing methods and demonstrates its effectiveness through qualitative and quantitative experiments. Strengths: 1. Using non-negative matrix factorization to generate concept activation maps is novel and well-justified for the application. 2. The method is evaluated on multiple datasets and compared against state-of-the-art methods, demonstrating its superiority in producing high-quality visual explanations. 3. The paper provides a clear and detailed description of the methodology, including the feature factorization, gradient weighting, and voxel upsampling processes. 4. The proposed method has practical implications for improving the interpretability of 3D object detectors, which is crucial for applications in autonomous driving and robotics. Weaknesses: 1. The method involves several computationally intensive steps, such as non-negative matrix factorization and voxel upsampling, which may limit its applicability in real-time systems. 2. The paper focuses primarily on LiDAR-based 3D detectors. Discussing how the method could be adapted or extended to other types of 3D data or detection systems would be beneficial. 3. While the evaluation is comprehensive, it primarily focuses on two datasets (KITTI and Waymo Open). Additional datasets and scenarios could further validate the robustness and generalizability of the method. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. 
Real-Time Applicability: How does the computational overhead of FFAM compare to existing methods in real-time applications, especially in autonomous driving scenarios? 2. How sensitive is FFAM to the choice of detector? We noticed that two kinds of detectors were used in this paper. More comparison and analysis are necessary. 3. Can FFAM be extended to other 3D detectors or modalities, such as RGB-D sensors or radar data? If so, what modifications would be necessary? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: 1. FFAM requires access to the feature maps within 3D detectors, which may not always be possible, especially for proprietary or closed-source systems. 2. The method's scalability to large-scale or real-time applications is not fully addressed. The computational requirements may be prohibitive for some practical applications. 3. The method is tailored to LiDAR data and may not directly translate to other types of 3D data without significant modifications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your insightful feedback. Below please find our clarifications in response to your comments. **Q1.** Real-Time Applicability: How does the computational overhead of FFAM compare to existing methods in real-time applications, especially in autonomous driving scenarios? **Authors’ Reply:** Compared to the existing method OccAM, our approach has a significant advantage in speed. NMF and voxel upsampling are implemented as CUDA operators, which use the GPU to accelerate the process. The main overhead of our FFAM lies in gradient backpropagation. Therefore, the latency of FFAM depends on the size of the model being explained. Currently, the backpropagation latency of most 3D object detectors is around 100ms. On the other hand, the existing method OccAM is a perturbation-based explanation method, which requires extensive sampling of inputs and then processing them one by one. As a result, this process is very slow. Typically, using OccAM to obtain a visual explanation requires several minutes. However, visual explanation methods (including our FFAM and OccAM) belong to post hoc explanations of the model. They are not used in real-time applications of autonomous driving. Therefore, we did not compare the speed of these methods in the paper. **Q2.** How sensitive is FFAM to the choice of detector? We noticed that two kinds of detectors were used in this paper. More comparison and analysis are necessary. **Authors’ Reply:** FFAM is not specific to detector architecture. It can be widely applied to existing LiDAR-based 3D detectors. In Appendix A.2, we apply FFAM to some other state-of-the-art detectors, such as DCDet, PV-RCNN, and Voxel R-CNN. **Q3.** Can FFAM be extended to other 3D detectors or modalities, such as RGB-D sensors or radar data? If so, what modifications would be necessary? **Authors’ Reply:** FFAM can be extended to extensive 3D detectors. 
In this paper, our main focus is on studying point clouds as the input format. FFAM has not yet been applied to other modalities. But we believe that RGB-D and radar data can be converted into point cloud format, so these modalities should also be feasible for FFAM to process. **Q4.** FFAM requires access to the feature maps within 3D detectors, which may not always be possible, especially for proprietary or closed-source systems. **Authors’ Reply:** We concur with your observation regarding the limitation in FFAM. We also pointed out this limitation of FFAM in the conclusion section. We believe that FFAM can primarily be used to provide researchers with some insights to improve detectors. In addition, it can also be used to reveal the internal working mechanism of detectors to users, thereby improving their understanding and trust in detectors. --- Rebuttal 2: Comment: Dear reviewer v6wE, is there anything more you would like to ask of the authors, before the author-reviewer discussion period ends (tomorrow)? --- Rebuttal Comment 2.1: Title: Follow-up Comment Comment: Thank you for the detailed responses. The rebuttal has addressed most of my previous concerns, I would keep my initial rating as borderline accept. --- Reply to Comment 2.1.1: Comment: Thank you for reviewing our paper and providing valuable feedback. Do you have any other unresolved issues? If possible, could you consider adjusting your rating to more accurately reflect your views on our work? We would greatly appreciate your support and suggestions.
Summary: This paper addresses the challenge of explanation and interpretability in 3D detection methods. It introduces a Feature Factorization Activation Map (FFAM), which utilizes non-negative matrix factorization (NMF) and object-specific gradient weighting to generate global and object-specific activation maps at the voxel level. An up-sampling method is subsequently employed to produce per-point activation maps. Extensive quantitative and qualitative experiments demonstrate the effectiveness of the proposed FFAM in generating saliency maps for various points over the previous method. Strengths: - The topic is interesting and warrants investigation. As discussed in Section 4.3, the research can significantly enhance 3D object detectors, particularly in identifying false positive modes. - The quantitative results are convincing and relevant to the topic. - Overall, the paper is well-written and easy to comprehend. - The code is made available, promoting transparency and reproducibility. - The paper discusses the limitation of the proposed method, specifically the necessity of accessing the feature map. Weaknesses: - The rationale for using NMF is unclear. It appears to be a learning-based, PCA-like method to extract saliency from voxel features. How does it compare to the proposed method on the global activation map? - There is a lack of ablation studies, such as those examining the parameter 𝛾 - The gradient weighting method is intriguing and seems sensible. Is this method original to your work? - It would be beneficial to use 𝑊 as a weight in Equation 2. The current version is a little confusing. - In Section 4.1, it is claimed that the saliency map generated by FFAM is superior to occAM because it is more focused on the object. Please provide further validation on why a clearer activation map is considered better. Technical Quality: 2 Clarity: 3 Questions for Authors: Please check weaknesses. 
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: It would strengthen the paper if an example could be provided demonstrating how FFAM can help identify some non-trivial error modes and offer insights for improving the detector. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your insightful feedback. Below please find our clarifications in response to your comments. **Q1.** The rationale for using NMF is unclear. It appears to be a learning-based, PCA-like method to extract saliency from voxel features. How does it compare to the proposed method on the global activation map? **Authors’ Reply:** Non-negative matrix factorization (NMF) is a matrix factorization technique used to discover potential concepts in features. NMF approximately decomposes a non-negative matrix $V$ into the product of two non-negative matrices $H$ and $W$, i.e., $V \approx HW$. Since both $W$ and $H$ are non-negative, NMF is highly interpretable in many fields. For example, in facial recognition, the basis vectors in the $W$ matrix usually represent specific concepts such as the nose, eyes, and mouth [1]. In our work, we utilize NMF to uncover latent concepts within the voxel features of 3D detectors. Typically, voxel features with effective detection clues in 3D detectors contain richer semantic concepts. Therefore, we can sum the activation coefficients of the different concepts in the $H$ matrix to obtain a global activation map. The basis vectors obtained by PCA do not have clear semantic concepts; therefore, we did not compare our method with PCA-like methods. **Q2.** There is a lack of ablation studies, such as those examining the parameter $r$. **Authors’ Reply:** Due to page limitations in the main body of the paper, we have included the hyperparameter analysis and ablation experiments in the appendix. Please refer to Appendix A.1 and A.4 for details. **Q3.** The gradient weighting method is intriguing and seems sensible. Is this method original to your work? **Authors’ Reply:** As far as we know, we are the first to use gradients to refine a global activation map. The previous method closest to ours is ODAM. It also utilizes backward gradients to generate saliency maps. 
However, there are two main differences between our method and ODAM. First, the usage of gradients is different. ODAM multiplies the gradient map with intermediate feature maps, and then sums the values along the channel dimension to obtain the final saliency maps. In contrast, our FFAM utilizes the gradients to generate a weighting term as follows: $$ \omega = \sum_{k=1}^{d} \left| G_{\cdot k} \right|, $$ where $G_{\cdot k}$ refers to the $k$-th channel of the gradient map $G$, and $d$ denotes the number of channels. Second, ODAM is designed to generate visual explanations for image detectors, while our FFAM is used to explain LiDAR-based 3D detectors. **Q4.** It would be beneficial to use $W$ as a weight in Equation 2. The current version is a little confusing. **Authors’ Reply:** Equation 2 represents the process of solving the NMF. Since it is difficult to find an exact numerical solution for the non-negative matrix factorization of matrix $A$, only an approximate solution can be obtained. We attempt to find a matrix $\hat{A}$ that closely approximates $A$ while being the product of two non-negative matrices $H$ and $W$. **Q5.** In Section 4.1, it is claimed that the saliency map generated by FFAM is superior to OccAM because it is more focused on the object. Please provide further validation on why a clearer activation map is considered better. **Authors’ Reply:** In the context of visual explanation for 3D detectors, a clearer activation map is considered superior because it aids in better identifying the region of interest within the point cloud. We have conducted quantitative experiments to validate our method and previous explanation methods. As shown in Table 1 and Table 3, our FFAM achieves the best results on the VEA, PG, and enPG metrics. These metrics reflect the degree of focus of an explanation method on an object. Furthermore, our FFAM performs best on the Deletion and Insertion metrics, which are widely used to evaluate explanation methods. 
As shown in Figure 5, our method shows the fastest performance drop for Deletion and the largest increase for Insertion, indicating that the points highlighted in our saliency maps have a greater effect on detector predictions than those of the other methods. **Q6.** It would strengthen the paper if an example could be provided demonstrating how FFAM can help identify some non-trivial error modes and offer insights for improving the detector. **Authors’ Reply:** In section 4.3, we utilize FFAM to find the modes of false positives generated by a detector. There are three observations in the experiments. We believe these observations will provide some insights for researchers to improve 3D detectors. First, we observe that the average saliency maps of false positives exhibit similarities to those of true positives. The detector predicts a false positive because it detects a similar pattern to that of a true positive. Second, false positives tend to be surrounded by more noise points, with a point density of approximately one-third of true positives. We believe noise and sparse point density may be significant factors contributing to the occurrence of false positives. Lastly, the ratio of car, pedestrian, and cyclist objects in true positives is approximately 36:5:2, while in false positives, it is 13:8:2. This suggests car objects are less prone to false positives compared to pedestrian and cyclist objects. [1] Daniel D Lee and H Sebastian Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 1999. --- Rebuttal Comment 1.1: Comment: Thanks for your responses. It solves most of my concerns. However, considering the technical soundness of the work, I cannot raise my rating and will keep my previous rating, borderline acceptance. --- Reply to Comment 1.1.1: Comment: Thank you very much for recognizing our work and providing valuable feedback. We are glad to hear that our response has resolved most of your questions. 
May I ask what specific concerns you have regarding the technical soundness? We are willing to discuss further and are eager to make improvements based on your valuable feedback. --- Rebuttal 2: Comment: Dear reviewer bytz, is there anything more you would like to ask of the authors, before the author-reviewer discussion period ends (tomorrow)?
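The two formulas discussed in this rebuttal thread, the factorization of non-negative voxel features as $V \approx HW$ and the gradient weight $\omega = \sum_{k=1}^{d} |G_{\cdot k}|$, can be sketched in a few lines of NumPy. This is an illustrative sketch with random stand-in features and gradients, using classic multiplicative-update NMF, not the authors' implementation:

```python
# Illustrative sketch: (1) NMF of non-negative voxel features, V ~ H @ W, via
# Lee & Seung multiplicative updates, then (2) the gradient weight
# omega = sum_k |G[:, k]|. All arrays are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, d, r = 100, 16, 4                    # voxels, feature channels, concepts

V = np.abs(rng.normal(size=(n_voxels, d)))     # non-negative voxel features
G = rng.normal(size=(n_voxels, d))             # backward gradients for one object

H = np.abs(rng.normal(size=(n_voxels, r)))     # concept coefficients (init)
W = np.abs(rng.normal(size=(r, d)))            # concept basis vectors (init)
eps = 1e-9
for _ in range(200):                           # multiplicative updates keep H, W >= 0
    H *= (V @ W.T) / (H @ W @ W.T + eps)
    W *= (H.T @ V) / (H.T @ H @ W + eps)

global_map = H.sum(axis=1)                     # global concept activation map
omega = np.abs(G).sum(axis=1)                  # per-voxel gradient weight
object_map = omega * global_map                # object-specific activation map
```

Summing the coefficient matrix $H$ over concepts yields a global activation map as described in the reply to Q1, and the element-wise weighting by $\omega$ suppresses voxels with little gradient signal for the chosen object.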
Summary: The paper proposes a “FFAM” for visual visualization of 3D Detectors. It introduces a non-negative matrix factorization (NMF) to decomposing 3D features into the product of two non-negative matrices. Besides, an object-specific loss is utilized to generate the object-specific saliency maps. Finally, the voxel upsampling is used to recover the resolution of the activation maps. Strengths: 1. This work introduces NMF in explaining point cloud detectors and utilizes feature gradients of an object-specific loss to generate object-specific saliency maps. 2. A voxel upsampling strategy is proposed to upsample sparse voxels. Weaknesses: 1. The description of Non-negative Matrix Factorization (NMF) is unclear. When the point cloud is large in scale, NMF may become unstable and exhibit a long convergence time. 2. The core innovation of this paper lies in feature factorization. The authors directly chose NMF but did not provide a reason for this choice. For instance, Principal Component Analysis (PCA) can also achieve feature factorization. 3. The object-specific gradient is element-wise multiplied with the global concept activation map to obtain the specific activation map of the object. In other words, the object-specific gradient inhibits the activation that do not belong to the current object. Therefore, it seems reasonable to directly use the weighting of the object-specific gradient as the activation maps. 4. Since the gradient map G can already represent the current object, why not perform NMF on the gradient map and then use a method similar to obtaining a global activation map to derive the object activation map? 5. Object-Specific Gradient Weighting is a general module. Can it be applied to other methods of generating activation maps, such as OccAM? 6. Regarding voxel upsampling, the authors provide a limited description. 
I am curious about why they chose the Gaussian kernel and how it compares to other upsampling methods such as trilinear interpolation and transpose convolution. Technical Quality: 3 Clarity: 3 Questions for Authors: Please the weakness. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors mentioned the limitations and societal impact in the checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your insightful feedback. Below please find our clarifications in response to your comments. **Q1.** The description of Non-negative Matrix Factorization (NMF) is unclear. When the point cloud is large in scale, NMF may become unstable and exhibit a long convergence time. **Authors’ Reply:** In our experiments, Non-negative Matrix Factorization (NMF) maintains excellent stability and rapid convergence across different scales of point clouds, including those from the KITTI and Waymo Open datasets. The implementation of NMF is facilitated by well-established libraries, which can be seamlessly integrated with PyTorch code. **Q2.** The core innovation of this paper lies in feature factorization. The authors directly chose NMF but did not provide a reason for this choice. For instance, Principal Component Analysis (PCA) can also achieve feature factorization. **Authors’ Reply:** We agree with your perspective that Principal Component Analysis (PCA) and Non-negative Matrix Factorization (NMF) are both effective tools for feature decomposition and dimensionality reduction. However, the basis vectors derived from PCA cannot represent clear semantic concepts, and thus PCA is primarily used for data dimensionality reduction. In contrast, the basis vectors of NMF exhibit non-negativity and additivity, often representing specific concepts such as car doors and wheels. DFF [1] is the first work that employs NMF to localize semantic concepts within images. Inspired by DFF, we utilize NMF to reveal the latent semantic concepts embedded in the intermediate voxel features of 3D object detectors. Typically, voxel features that provide effective cues for detection in 3D detectors possess richer semantic concepts, which can be used to create saliency maps. **Q3.** The object-specific gradient is element-wise multiplied with the global concept activation map to obtain the specific activation map of the object. 
In other words, the object-specific gradient inhibits the activations that do not belong to the current object. Therefore, it seems reasonable to directly use the weighting of the object-specific gradient as the activation maps. **Authors’ Reply:** We agree with your point that the object-specific gradient can serve as an activation map. However, it only highlights the object-specific region in a point cloud and struggles to differentiate the importance of points within that region. NMF can help uncover the recognition pattern within the object-specific region. As shown in Figure 4, NMF assists in identifying the detector's distinct recognition patterns for various categories and object attributes. The quantitative results in Table 9 emphasize the enhanced performance achieved by integrating the object-specific gradient with NMF. This confirms the significant role of NMF in obtaining more nuanced and detailed visual explanations. **Q4.** Since the gradient map G can already represent the current object, why not perform NMF on the gradient map and then use a method similar to obtaining a global activation map to derive the object activation map? **Authors’ Reply:** The raw point features contain substantial semantics, which is beneficial for NMF to uncover latent concepts within these features. However, the gradient map G only contains backward gradients for a specific object. Consequently, employing NMF to extract concepts from gradient maps lacks a clear conceptual basis. **Q5.** Object-Specific Gradient Weighting is a general module. Can it be applied to other methods of generating activation maps, such as OccAM? **Authors’ Reply:** We believe that incorporating object-specific gradient weighting into other methods is feasible, but we have not yet tried combining the weighting module with other methods, because other methods usually have their own approaches to obtaining object-level visual explanations. 
For example, OccAM is a perturbation-based method that involves randomly masking the input point cloud to assess performance changes in the output. As it inherently serves as an object-level explanation method, there is no immediate necessity to integrate it with object-specific gradient weighting. Consequently, we have not attempted to combine these techniques. **Q6.** Regarding voxel upsampling, the authors provide a limited description. I am curious about why they chose the Gaussian kernel and how it compares to other upsampling methods such as trilinear interpolation and transpose convolution. **Authors’ Reply:** Given that voxels are sparsely scattered throughout 3D space, identifying all neighboring points for a given point to be interpolated presents a challenge. Consequently, trilinear interpolation is not well-suited for handling sparse voxels. On the other hand, transpose convolution, being a learning-based technique, is not the optimal choice within our framework. In contrast, our voxel upsampling method begins by searching for neighbors within a specified range and subsequently applies a Gaussian kernel to weight them. This makes it more adaptable to our FFAM. [1] Edo Collins, Radhakrishna Achanta, and Sabine Susstrunk. Deep feature factorization for concept discovery. In ECCV, 2018. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns. I raise my rating to borderline accept. --- Rebuttal 2: Comment: Dear reviewer 93Lo, is there anything more you would like to ask of the authors, before the author-reviewer discussion period ends (tomorrow)? --- Rebuttal 3: Comment: Dear reviewer 93Lo, do you have any further questions regarding my response?
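To make the NMF-based concept discovery from the replies to Q2 and Q3 concrete, here is a minimal pure-NumPy sketch using Lee-Seung multiplicative updates. The feature shapes, rank, and gradient map are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def nmf(V, rank, iters=300, eps=1e-9, seed=0):
    """Minimal Lee-Seung multiplicative-update NMF: V ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy stand-in for non-negative intermediate voxel features (e.g. post-ReLU):
# 200 voxels x 32 channels, built from 4 latent "concepts" so rank-4 NMF fits.
rng = np.random.default_rng(1)
feats = rng.random((200, 4)) @ rng.random((4, 32))

W, H = nmf(feats, rank=4)     # W: per-voxel concept activation maps, H: bases
grad = rng.random(200)        # hypothetical object-specific gradient magnitudes
object_map = W[:, 0] * grad   # element-wise weighting as described in the Q3 reply
rel_err = np.linalg.norm(feats - W @ H) / np.linalg.norm(feats)
```

The non-negativity of `W` and `H` is what makes each basis vector additively interpretable as a concept, which is the property the reply contrasts against PCA.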
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
FlashMask: Reducing the Complexity of Attention Computation through Sparse Mask Representation
Reject
Summary: This paper proposes a novel method to address the high computational and memory complexity of current large-scale transformers. By adopting a simple yet effective column-wise sparse representation of attention masks, the algorithm achieves reduced memory and computational complexity while maintaining the accuracy of attention computation. Strengths: 1. This paper investigates a topic of interest, given the current trend toward increasing context lengths in LLMs. 2. The method proposed in this paper is straightforward and easy to implement. 3. The paper is well-written and clearly presented. Weaknesses: 1. It is crucial to highlight the advantages of this method over related work to help readers fully understand its significance. However, in the subsection "Attention Optimization Techniques," the authors only mention the drawback of FlashAttention and discuss its relationship to their work. The introduction of other related works is confusing and makes it difficult to comprehend their relevance to this paper. The overall conclusion, "*Both of the previously discussed solutions either compromise precision or yield only marginal enhancements in efficiency. Conversely, our proposed FlashMask is capable of delivering exact computations.*" is general and non-specific. It is unclear which methods compromise precision and which ones only offer marginal improvements. 2. In the experiments, the baseline algorithms are limited to Vanilla Attention and FlashAttention. Are there more efficient Transformer algorithms that could be used for comparison? If not, the authors should explain the rationale behind the selection of these specific baselines. 3. As a non-expert in this field, I found the writing of this paper confusing. For instance, the initialism "HBN" is introduced without any explanation or context. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Has there been any prior work on efficient attention mask computation? 2. 
Why was the FA-Varlen method not included in the DPO and RM scenarios? Could the authors provide an explanation in their paper? 3. In Figure 4, the latency of both FA-Window and FlashMask is almost identical. If the authors aim to demonstrate the efficiency of FlashMask, could they explain the experimental results more clearly? 4. In Figures 3 and 5, it appears that the performance and efficiency of FA-Varlen are comparable to FlashMask in the SFT setting. Could the authors clarify this comparison? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review. ---------- **For each weakness you mentioned:** 1. It is crucial to highlight the advantages of this method over related work to help readers fully understand its significance. However, in the subsection "Attention Optimization Techniques," the authors only mention the drawback of FlashAttention and discuss its relationship to their work. The introduction of other related works is confusing and makes it difficult to comprehend their relevance to this paper. The overall conclusion, "Both of the previously discussed solutions either compromise precision or yield only marginal enhancements in efficiency. Conversely, our proposed FlashMask is capable of delivering exact computations." is general and non-specific. It is unclear which methods compromise precision and which ones only offer marginal improvements. **Reply:** Vanilla Attention: * Advantages: Simple implementation and easy to understand. It was initially proposed in the seminal paper **Attention is All You Need**. * Disadvantages: Slow computation speed and high memory usage, with a quadratic relationship to sequence length, resulting in a complexity of $O(N^2)$. Memory Efficient Attention: * Advantages: Lower memory usage with a complexity of $O(\sqrt{N})$, significantly reducing resource requirements. * Disadvantages: Slower computation speed compared to IO-aware FlashAttention. See the kernel latency in Figure 3 of the attached PDF file. FlashAttention: * Advantages: Fast computation speed with no loss in accuracy compared to Vanilla Attention. * Disadvantages: The official implementation only supports a limited set of masks and lacks the mask functionalities required for downstream tasks such as SFT, DPO, and RM. FlashAttention with DenseMask: * Advantages: Third-party implementations support DenseMask in FlashAttention, enabling various masks for downstream tasks like SFT, DPO, and RM. 
* Disadvantages: High memory usage with a quadratic relationship to sequence length, resulting in a complexity of $O(N^2)$. FlashAttention VarLen: * Advantages: Fast computation speed and support for variable-length sequences, suitable for training tasks such as SFT. * Disadvantages: Limited support for sparse computation modes, unable to support training tasks like DPO and RM. Other Approximate Algorithms (e.g., Reformer and Linformer): * Advantages: Achieve lower memory usage and faster computation speed through approximate sparse attention calculations. * Disadvantages: Model convergence performance cannot match that of Full Attention, leading to reduced accuracy. ---------- 2. In the experiments, the baseline algorithms are limited to Vanilla Attention and FlashAttention. Are there more efficient Transformer algorithms that could be used for comparison? If not, the authors should explain the rationale behind the selection of these specific baselines. **Reply:** As illustrated in the previous reply, Vanilla Attention and FlashAttention with DenseMask are the proper baselines for comparison. We have added another efficient Transformer algorithm, Memory Efficient Attention (MEA), to Figure 3 in the attached PDF. FlashAttention-DenseMask is the most competitive baseline for comparison. FlashAttention with VarLen is faster than FlashAttention-DenseMask, but it can only be used in SFT, and FlashMask matches its performance there. ---------- 3. As a non-expert in this field, I found the writing of this paper confusing. For instance, the initialism "HBN" is introduced without any explanation or context. **Reply:** We suppose that the "HBN" you mentioned is "HBM". Thanks for your suggestions. HBM is short for High Bandwidth Memory, which is the global memory of the GPU. ---------- **For each question you mentioned:** 1. Has there been any prior work on efficient attention mask computation? **Reply:** Please refer to our replies to the first and second weaknesses. 
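The $O(N^2)$ vs. $O(N)$ mask-storage gap cited in the comparison above can be made concrete with quick back-of-the-envelope arithmetic; the context length and dtype choices here are our own illustrative assumptions:

```python
N = 131_072                        # an illustrative 128K-token context
dense_mask_bytes = N * N           # one byte per entry for a dense bool mask, O(N^2)
column_mask_bytes = 2 * N * 4      # two int32 row-index vectors (start/end), O(N)

print(dense_mask_bytes / 2**30)    # -> 16.0  (GiB, for the dense mask alone)
print(column_mask_bytes / 2**20)   # -> 1.0   (MiB, for the column-wise form)
```

At this length the column-wise representation is smaller by a factor of 16384, which is why the dense-mask baseline runs out of memory long before the linear representation does.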
2. Why was the FA-Varlen method not included in the DPO and RM scenarios? Could the authors provide an explanation in their paper? **Reply:** Thank you for your suggestions. FA-Varlen cannot represent the sparse attention mask in the DPO and RM scenarios, as mentioned in our reply to the first weakness. We will provide a comprehensive explanation in the Camera-Ready version. 3. In Figure 4, the latency of both FA-Window and FlashMask is almost identical. If the authors aim to demonstrate the efficiency of FlashMask, could they explain the experimental results more clearly? **Reply:** Thank you for your advice. Figure 4 is intended to show that FlashMask speeds up not only the SFT, DPO, and RM scenarios but also sliding-window sparse attention mask training scenarios. FlashMask is a more general method than the existing techniques and can be used in many other NLP training tasks. 4. In Figures 3 and 5, it appears that the performance and efficiency of FA-Varlen are comparable to FlashMask in the SFT setting. Could the authors clarify this comparison? **Reply:** In fact, if the sparse attention mask can be represented by either the FlashMask or FA-Varlen method, the efficiency of the two methods is almost the same, as shown in Figures 3 and 5. However, FA-Varlen fails to represent the sparse attention masks in DPO and RM. Therefore, FlashMask is the more general method across NLP tasks, while the usage of FA-Varlen is limited to SFT. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response, most of my concerns are addressed during rebuttal and I keep my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for taking the time to review our paper and for your response to our rebuttal. We understand your decision to maintain the original score and respect your evaluation. 
We would like to further clarify some of the improvements and experimental results we mentioned in the rebuttal, which we believe demonstrate the strengths of our work. If there are any points that you feel were not fully addressed, we would be more than happy to engage in further discussion to ensure that you have a comprehensive understanding of our contributions. Thank you again for your time and valuable feedback. Best regards, Authors
Summary: This paper proposes FlashMask, which accelerates the masked attention mechanism that can reduce the original attention from O(N^2) to O(N) and simultaneously reduces the memory cost. Experimental results show that the proposed FlashMask significantly reduces training time without accuracy degradation. Strengths: + This paper provides a comprehensive study and analysis about the sparse attention, in terms of their efficiency. Also, this paper includes existing attention optimization like FlashAttention, explaining the motivation of the proposed FlashMask, which lies in the lack of optimization for sparse attention. + This paper proposes an optimization for column-based sparse attention, which significantly improves memory efficiency and reduces computational costs. + This paper provides a comprehensive complexity analysis, evaluation, and comparison with existing methods. It seems the authors make a lot of efforts on the proposed approach. Weaknesses: - Even though FlashMask achieves significant improvement in the memory efficiency of sparse attention, the key idea is similar to FlashAttention, but it is just for sparse attention mechanisms. Based on this fact, the novelty of this paper is not strong. I recommend the authors explain why the red part in the algorithm is designed and why it is unique for sparse attention. - The authors only present optimization for column-based sparse attention. The performance for other types of sparse attention is unknown. If the proposed approach can be applied to all sparse attention, the contribution of this paper is extremely great. However, the existing version is not comprehensive. - Based on the experiments, the practical latency is not significantly reduced as compared to other methods, even though the theoretical complexity is from N^2 to N. Besides, the authors do not provide results for accuracy. Technical Quality: 2 Clarity: 3 Questions for Authors: See Weaknesses. 
Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review. ---------- 1. Even though FlashMask achieves significant improvement in the memory efficiency of sparse attention, the key idea is similar to FlashAttention, but it is just for sparse attention mechanisms. Based on this fact, the novelty of this paper is not strong. I recommend the authors explain why the red part in the algorithm is designed and why it is unique for sparse attention. **Reply:** There are two contributions in this paper: (1) we proposed a column-based sparse attention mask representation, instead of using a large dense tensor to represent the sparse attention mask; (2) we proposed an efficient kernel implementation using our sparse attention mask representation, based on FlashAttention. We believe the novelty of this paper is that we identified a sparse attention mask representation common to most NLP tasks, and sped up the attention phase by combining this representation with FlashAttention. Our sparse attention mask representation is general across most NLP tasks, including SFT, DPO, RM, etc. As shown in Figure 1 (b) of the main text, consider the attention mask required for a long sequence composed of three sequences in the SFT scenario, with sequence lengths [4,3,3]. The FMS values are [4,4,4,4,7,7,7,10,10,10], and FME does not need to be set, defaulting to the maximum number of rows. For example, in the second column, the FMS value is 4, indicating that the elements in rows $[4,10)$ are masked. As shown in Figure 1 (c) of the main text, consider the attention mask required for a long sequence composed of three sequences in the bidirectional SFT scenario, with sequence lengths [4,3,3]. FlashMask uses two pairs of FMS and FME to describe the bidirectional scenario: the lower left part is denoted as $FMS_1$ and $FME_1$, and the upper right part as $FMS_2$ and $FME_2$. 
$FMS_1$ is [4,4,4,4,7,7,7,10,10,10], and $FME_1$ does not need to be set, defaulting to the maximum number of rows; $FMS_2$ does not need to be set, defaulting to 0; $FME_2$ is [0,0,0,0,4,4,4,7,7,7]. For example, in the fourth column, the $FMS_1$ value is 7, indicating that the elements in rows $[7,10)$ are masked, and the $FME_2$ value is 4, indicating that the elements in rows $[0,4)$ are masked. As shown in Figure 1 (d) of the main text, consider the attention mask required for one Query and two Answers in the DPO scenario, with the Query length being 4, Answer1 length being 3, and Answer2 length being 3. The FMS values are [10,10,10,10,7,7,7,10,10,10]. For example, in the zeroth column, the FMS value is 10, indicating that the elements in rows $[10,10)$ are masked, i.e., nothing below the diagonal is masked. ---------- 2. The authors only present optimization for column-based sparse attention. The performance for other types of sparse attention is unknown. If the proposed approach can be applied to all sparse attention, the contribution of this paper is extremely great. However, the existing version is not comprehensive. **Reply:** Thanks for your suggestions. In fact, it is hard to handle all kinds of sparse attention masks. If we use a large dense tensor to represent the sparse attention mask, the memory usage and HBM access costs are not acceptable; if we use a sparse-COO-like method to represent the sparse attention mask, it is hard to implement an efficient CUDA kernel as well. In this paper, we found that the sparse attention masks in most NLP tasks can be represented in a column-based way. Therefore, we can use the FMS and FME in our paper to represent the mask efficiently. We believe our method generalizes to most NLP tasks, such as SFT, DPO, RM, etc. ---------- 3. Based on the experiments, the practical latency is not significantly reduced as compared to other methods, even though the theoretical complexity is from N^2 to N. 
Besides, the authors do not provide results for accuracy. **Reply:** The theoretical complexity $O(N^2)$ corresponds to FA-DenseMask and Vanilla Attention in Figure 3. We provide a new curve in the attached PDF file (Figure 3) comparing the kernel latency of FA-DenseMask, Vanilla Attention, and our proposed FlashMask method. It shows that the complexity drops from $O(N^2)$ (FA-DenseMask, Vanilla Attention) to $O(N)$ (our FlashMask). We also conducted some extra experiments evaluating model quality in the attached PDF file (Figure 1 and Table 1). Due to time constraints, we have only supplemented the experiments in SFT and DPO scenarios, and the benchmark tests were conducted on a limited set of datasets to demonstrate the accuracy preservation of our method. We used the Huggingface LLaMA2-7B pre-trained model and conducted SFT training using Vanilla Attention, FlashAttention-DenseMask, and FlashMask, followed by DPO training. Both SFT and DPO use the Packing/InToken data training strategy. The SFT phase was performed on the "allenai/tulu-v2-sft-mixture" dataset, using the AdamW optimizer with beta1=0.9, beta2=0.999, a learning rate of 2e-05, an end learning rate of 1e-07, weight decay of 0.0, a total training step count of 12000, a warmup step count of 360, and a global train batch size of 16. The DPO phase was conducted on the "HuggingFaceH4/ultrafeedback_binarized" dataset, using the AdamW optimizer with beta1=0.9, beta2=0.999, a learning rate of 1e-06, an end learning rate of 1e-07, weight decay of 0.0, a total training step count of 1600, a warmup step count of 10, and a global train batch size of 16. We will provide a comprehensive comparison in the Camera-Ready version. The results show that the model accuracy with our method is the same as that of FA-DenseMask and Vanilla Attention. --- Rebuttal 2: Comment: As we near the author-reviewer discussion deadline, we seek your feedback on our rebuttal. 
Your insights have been crucial in improving our work, and we're grateful for the time and effort you've dedicated to our manuscript. Thank you for your valuable guidance. We look forward to your response and are ready to make further adjustments if needed. Best regards, Authors --- Rebuttal Comment 2.1: Comment: Thank the authors for the rebuttal. Unfortunately, my main concern is not well addressed, as this work is specific for the column-based sparse attention, which is not general enough. Besides, the implementation is based on existing FlashAttention. Overall, I keep my original rating.
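The column-wise FMS semantics spelled out in the rebuttal above (an FMS value of $k$ in column $j$ means rows $[k, N)$ are masked, on top of causal masking) can be sanity-checked with a small sketch. The decode helper below is our illustrative reconstruction of that rule, not the paper's kernel:

```python
import numpy as np

def decode_fms(fms):
    """Expand a column-wise FMS vector into a dense boolean 'allowed' mask:
    query row i may attend to key column j iff j <= i < fms[j]."""
    n = len(fms)
    allowed = np.zeros((n, n), dtype=bool)
    for j, start in enumerate(fms):
        allowed[j:start, j] = True    # rows [j, start) visible; [start, n) masked
    return allowed

def packed_causal_mask(lengths):
    """Reference block-diagonal causal mask for packed SFT sequences."""
    n = sum(lengths)
    allowed = np.zeros((n, n), dtype=bool)
    off = 0
    for length in lengths:
        allowed[off:off + length, off:off + length] = np.tril(
            np.ones((length, length), dtype=bool))
        off += length
    return allowed

# Figure 1(b) example: three packed sequences of lengths [4, 3, 3]
fms = [4, 4, 4, 4, 7, 7, 7, 10, 10, 10]
assert np.array_equal(decode_fms(fms), packed_causal_mask([4, 3, 3]))
```

Storing only the FMS vector (plus an optional FME vector) replaces the $N \times N$ dense tensor with $O(N)$ integers, which is the representation the rebuttal says the kernel consumes.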
Summary: The paper introduces FlashMask, an innovative algorithm designed to address the computational and memory challenges associated with conventional attention mechanisms in large-scale Transformers. FlashMask employs a column-wise sparse representation for attention masks, significantly reducing the computational complexity from quadratic to linear with respect to sequence length. The authors demonstrate FlashMask's effectiveness across various masking scenarios and training modalities, including Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Reward Model (RM). Strengths: This paper presents a novel solution to a well-known problem in the field of natural language processing, offering a practical method to reduce the computational burden of attention mechanisms in Transformers. The paper provides extensive empirical evidence to support its claims, including comparisons with state-of-the-art techniques like FlashAttention, demonstrating FlashMask's superiority in terms of speed and efficiency. FlashMask's performance across different masking scenarios and training modalities shows its versatility and robustness, indicating its potential applicability to a wide range of models and tasks. Practical Impact: The paper not only presents theoretical advancements but also demonstrates practical benefits, such as enabling the Weaknesses: The scaling ability of the proposed method deserves further verification on large-scale datasets. While the paper demonstrates FlashMask's effectiveness in specific scenarios, it may lack broader evidence on how it performs across different types of NLP tasks or diverse datasets. The paper could provide more detailed insights into how FlashMask handles different sparsity levels and the impact on various model sizes and complexities. 
Technical Quality: 3 Clarity: 2 Questions for Authors: See the weakness Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review. ---------- We are sorry that we did not clarify our key points previously. In the SFT/DPO/RM training scenarios, the sparsity of the attention mask usually arises naturally; it is not introduced to speed up training at the cost of accuracy. For example, as illustrated in Figure 1, the sparsity of the attention mask may come from: (1) a padding mask, (2) an InToken mask, and (3) a question-and-answering mask. We are not discussing how to design a new sparse attention mask that balances model accuracy and training efficiency. Instead, we take advantage of the sparse-attention property inherent to these NLP tasks to speed up the training process. The key points of our paper include: (1) we proposed an efficient sparse mask representation method which can be used in SFT, DPO, RM, and many other scenarios; (2) based on our sparse mask representation, we proposed an efficient kernel implementation to speed up the training process. Since we did not introduce extra approximate calculations (the sparsity comes from the NLP task itself), the model accuracy should be exactly the same with or without our method. We also conducted some extra experiments evaluating model quality in the attached PDF file (Figure 1 and Table 1). Due to time constraints, we have only supplemented the experiments in SFT and DPO scenarios, and the benchmark tests were conducted on a limited set of datasets to demonstrate the accuracy preservation of our method. We used the Huggingface LLaMA2-7B pre-trained model and conducted SFT training using Vanilla Attention, FlashAttention-DenseMask, and FlashMask, followed by DPO training. Both SFT and DPO use the Packing/InToken data training strategy. 
The SFT phase was performed on the "allenai/tulu-v2-sft-mixture" dataset, using the AdamW optimizer with beta1=0.9, beta2=0.999, a learning rate of 2e-05, an end learning rate of 1e-07, weight decay of 0.0, a total training step count of 12000, a warmup step count of 360, and a global train batch size of 16. The DPO phase was conducted on the "HuggingFaceH4/ultrafeedback_binarized" dataset, using the AdamW optimizer with beta1=0.9, beta2=0.999, a learning rate of 1e-06, an end learning rate of 1e-07, weight decay of 0.0, a total training step count of 1600, a warmup step count of 10, and a global train batch size of 16. We will provide a comprehensive comparison in the Camera-Ready version. The results show that the training loss and the benchmark scores are the same between our speedup method and the baselines. ---------- **The detailed replies are as follows:** 1. The scaling ability of the proposed method deserves further verification on large-scale datasets. **Reply:** In Sections 4.3 and 4.4, we conducted experiments to show the speedup and memory savings of our proposed method. The testing data included LongBench (an open-source benchmark for long-context understanding) and our synthetic data. As mentioned above, our method speeds up the training process without sacrificing accuracy. We believe the scaling ability of our method remains good on large-scale datasets. 2. While the paper demonstrates FlashMask's effectiveness in specific scenarios, it may lack broader evidence on how it performs across different types of NLP tasks or diverse datasets. **Reply:** Our proposed method performs exact computation when the NLP task itself uses a sparse attention mask. Therefore, different datasets do not affect the correctness of our proposed method. SFT, DPO, and RM are among the most important LLM downstream training tasks, and we believe our proposed method is a general speedup method for most NLP tasks. 3. 
The paper could provide more detailed insights into how FlashMask handles different sparsity levels and the impact on various model sizes and complexities. **Reply:** Our proposed method performs exact computation when the NLP task itself uses a sparse attention mask. Therefore, the model accuracy after using our method is unchanged, regardless of the sparsity level, model size, and complexity. --- Rebuttal Comment 1.1: Comment: As we near the author-reviewer discussion deadline, we seek your feedback on our rebuttal. Your insights have been crucial in improving our work, and we're grateful for the time and effort you've dedicated to our manuscript. Thank you for your valuable guidance. We look forward to your response and are ready to make further adjustments if needed. Best regards, Authors
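As a concrete instance of the "natural sparsity" argument above, the DPO mask from an earlier reply (Query length 4, two Answers of length 3, FMS = [10,10,10,10,7,7,7,10,10,10]) can be decoded and checked in a few lines. The decoding rule (column $j$ visible to query rows $[j, \mathrm{FMS}[j])$, on top of causal masking) is our reading of the rebuttal's examples, not code from the paper:

```python
import numpy as np

fms = [10, 10, 10, 10, 7, 7, 7, 10, 10, 10]   # DPO example from the rebuttal
n = len(fms)
allowed = np.zeros((n, n), dtype=bool)
for j, end in enumerate(fms):
    allowed[j:end, j] = True                  # causal start j, masked from row `end`

assert allowed[4, 0] and allowed[9, 0]        # both answers see the shared prompt
assert not allowed[7:, 4:7].any()             # Answer2 never attends to Answer1 keys
assert allowed[8, 7] and not allowed[6, 7]    # Answer2 attends only within itself
```

Because the two answer blocks share the prompt columns but mask each other, this pattern cannot be expressed as independent contiguous segments, which is why the rebuttal argues FA-Varlen does not cover the DPO and RM scenarios.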
Summary: This paper proposes FlashMask, a modification of FlashAttention with fixed masks. The paper shows speedup of FlashAttention when using sparse masks in the attention matrix. Strengths: FlashAttention is an important algorithm, and sparsity in the attention matrix is an important feature. Further study of these aspects is helpful. Weaknesses: The paper seems to make claims that are unsubstantiated by experiments. In the abstract and introduction, the paper claims speedup without sacrificing model quality. However, there is no experiment evaluating model quality in the experiments. This is a critical flaw. Further, the contribution of the paper is unclear. Block-sparsity is already supported in FlashAttention (see section 3.3 of FlashAttention). It is unclear how this paper is different. There are also more recent works such as "Fast Attention Over Long Sequences With Dynamic Sparse Flash Attention" (NeurIPS 2023), which seem to be strictly more expressive in features than this paper. Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The paper discusses superlinear scaling in sequence length as a limitation, but is lacking in discussion of model quality. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review. ---------- 1. The paper seems to make claims that are unsubstantiated by experiments. In the abstract and introduction, the paper claims speedup without sacrificing model quality. However, there is no experiment evaluating model quality in the experiments. This is a critical flaw. **Reply:** We are sorry that we did not clarify our key points previously. In the SFT/DPO/RM training scenarios, the sparsity of the attention mask usually arises naturally; it is not introduced to speed up training at the cost of accuracy. For example, as illustrated in Figure 1, the sparsity of the attention mask may come from: (1) a padding mask, (2) an InToken mask, and (3) a question-and-answering mask. We mainly focus on how to speed up the training process when using these kinds of natural sparse attention masks. Our baselines are the non-speedup methods using the same sparse attention mask. Therefore, we claim speedup without sacrificing model accuracy. **Due to time constraints, we have only supplemented the experiments in SFT and DPO scenarios, and the benchmark tests were conducted on a limited set of datasets to demonstrate the accuracy preservation of our method. We will provide a comprehensive comparison in the Camera-Ready version.** We used the Huggingface LLaMA2-7B pre-trained model and conducted SFT training using Vanilla Attention, FlashAttention-DenseMask, and FlashMask, followed by DPO training. Both SFT and DPO use the Packing/InToken data training strategy. The SFT phase was performed on the "allenai/tulu-v2-sft-mixture" dataset, using the AdamW optimizer with beta1=0.9, beta2=0.999, a learning rate of 2e-05, an end learning rate of 1e-07, weight decay of 0.0, a total training step count of 12000, a warmup step count of 360, and a global train batch size of 16. 
The DPO phase was conducted on the "HuggingFaceH4/ultrafeedback_binarized" dataset, using the AdamW optimizer with beta1=0.9, beta2=0.999, a learning rate of 1e-06, an end learning rate of 1e-07, weight decay of 0.0, a total training step count of 1600, a warmup step count of 10, and a global train batch size of 16. From the training loss curves in Figure 1 of the attached PDF, it is evident that FlashMask ensures convergence accuracy comparable to the baselines (Vanilla Attention and FlashAttention-DenseMask). Additionally, FlashMask and FlashAttention-DenseMask exhibit identical loss values. The evaluation metrics from the benchmark tables (Table 1 in the attached PDF) indicate that FlashMask achieves accuracy on par with FlashAttention-DenseMask. **Therefore, FlashMask, in comparison to FlashAttention-DenseMask, is an exact algorithm with no loss in accuracy.** ---------- 2. Further, the contribution of the paper is unclear. Block-sparsity is already supported in FlashAttention (see section 3.3 of FlashAttention). It is unclear how this paper is different. There are also more recent works such as "Fast Attention Over Long Sequences With Dynamic Sparse Flash Attention" (NeurIPS 2023), which seem to be strictly more expressive in features than this paper. **Reply:** SFT, DPO, and RM are very important scenarios in downstream NLP training tasks. Although block-sparsity is already supported in FlashAttention, it does not match the sparse attention mask patterns in SFT, DPO, and RM shown in Figure 1. The paper you mentioned ("Fast Attention Over Long Sequences With Dynamic Sparse Flash Attention") speeds up the training process when using QK-sparse attention and Hash-sparse attention, but it also fails to represent the sparse attention masks in SFT, DPO, and RM shown in Figure 1. 
**We proposed an efficient sparse mask representation method which can be used in SFT, DPO, RM, and many other scenarios, even including the QK-sparse/Hash-sparse attention in the mentioned paper. Thanks to our efficient representation, we proposed an efficient kernel implementation to speed up the sparse attention phase. Our approach is more general than the existing research works and can be used in most NLP training scenarios.** For example, as shown in Figure 2 of the attached PDF, FlashMask can represent QK-sparse and Hash-sparse masks. In the figure, $S_1$ denotes the starting row index of the mask in the lower left triangle, and $E_2$ denotes the ending row index of the mask in the upper right triangle. The diagonal elements should be considered part of the lower left triangle. In FlashMask, $E_1$ (representing the ending row index of the mask in the lower left triangle) does not need to be set and defaults to the maximum number of rows, while $S_2$ (representing the starting row index of the mask in the upper right triangle) defaults to 0. Here, $S_1$ and $S_2$ represent the $FMS$ in Algorithm 1, and $E_1$ and $E_2$ represent the $FME$ in Algorithm 1. For example, in Figure 2(a) (in the attached PDF file) for QK-sparse, in the second column, $S_1$ is 6, indicating that the elements in rows $[6,6)$ are masked, and $E_2$ is 3, indicating that the elements in rows $[0,3)$ are masked. In Figure 2(b) (in the attached PDF file) for Hash-sparse, in the first column, $S_1$ is 1, indicating that the elements in rows $[1,8)$ are masked, and $E_2$ is 1, indicating that the elements in rows $[0,1)$ are masked. In Figure 2(b) (in the attached PDF file) for Hash-sparse, in the second column, $S_1$ is 5, indicating that the elements in rows $[5,8)$ are masked, and $E_2$ is 2, indicating that the elements in rows $[0,2)$ are masked. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your rebuttal. 
I now better understand your contributions. It appears that one of the key observations is that column-wise padding for attention is useful for some situations (analogous to the row-wise "key padding mask" in BERT-style models). Unfortunately, it is now clear to me that the paper is quite poorly written for the contribution. The submitted draft of the paper does not make it clear that FlashMask is an optimization for specific use cases where the opportunity for per-column masking already exists. Even then, given the examples in the paper, I'm not convinced that they can't be covered by a block-sparse mask. I would recommend revising the paper for clarity and precision and re-submitting to a later conference. I will not be changing my score.
Rebuttal 1: Rebuttal: 1. The core contribution of this paper lies in proposing FlashMask, an extension of FlashAttention that accelerates attention with sparse masks, for downstream NLP tasks such as SFT, DPO, and RM. FlashMask introduces a column-based sparse mask representation and develops an efficient CUDA kernel implementation. FlashMask's mask representation method is not only applicable to common NLP downstream tasks such as SFT, DPO, and RM, but it can also express more customized masks, as shown in Figure 1 and Figure 2 of the main paper. Additionally, FlashMask can efficiently represent and implement the masks corresponding to the QK-sparse and Hash-sparse attention in papers such as "Fast Attention Over Long Sequences With Dynamic Sparse Flash Attention". This method not only achieves exact accuracy equivalent to the FlashAttention-with-Dense-Mask implementation (described as the baseline algorithm in our main paper), but also reduces the memory complexity of the mask from $O(N^2)$ to $O(N)$. In end-to-end training, SFT, DPO, and RM tasks can achieve more than a 2.4x speedup. 2. In the main paper previously, we did not provide model accuracy experiments but declared FlashMask as an **exact** algorithm in the Abstract and Introduction. During this rebuttal period, we supplemented model accuracy experiments to demonstrate that FlashMask is lossless in terms of accuracy. Due to time constraints, we have only supplemented experiments in SFT and DPO scenarios, and the benchmark tests were conducted on a limited set of datasets to demonstrate the accuracy preservation of our method. We will provide a comprehensive comparison in the Camera-Ready version. We used the Huggingface LLaMA2-7B pre-trained model and conducted SFT training using Vanilla Attention, FlashAttention-DenseMask, and FlashMask, followed by DPO training. Both SFT and DPO phases used the Packing or InToken data training strategy described in Figure 1(a) and Figure 1(d) in the main paper, respectively. 
The SFT training was performed on the "allenai/tulu-v2-sft-mixture" dataset, using the AdamW optimizer with beta1=0.9, beta2=0.999, a learning rate of 2e-05, an end learning rate of 1e-07, weight decay of 0.0, a total training step count of 12000, a warmup step count of 360, and a global train batch size of 16. The DPO training was conducted on the "HuggingFaceH4/ultrafeedback_binarized" dataset, using the AdamW optimizer with beta1=0.9, beta2=0.999, a learning rate of 1e-06, an end learning rate of 1e-07, weight decay of 0.0, a total training step count of 1600, a warmup step count of 10, and a global train batch size of 16. From the train loss curves in Figure 1 of the attached PDF, it is evident that FlashMask ensures convergence accuracy comparable to the baselines (Vanilla Attention and FlashAttention-DenseMask). Additionally, FlashMask and FlashAttention-DenseMask exhibit identical loss values. The evaluation metrics from the benchmark tables (Table 1 in the attached PDF) indicate that FlashMask achieves accuracy on par with FlashAttention-DenseMask. Therefore, FlashMask, in comparison to FlashAttention-DenseMask, is an exact algorithm that does not sacrifice model accuracy. Pdf: /pdf/bb5a2d45e6ee1da5a9a8641d3bcbecb0e244a2b8.pdf
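The column-wise mask representation described in the replies above ($S_1$/$E_1$ for the lower-left triangle, $S_2$/$E_2$ for the upper-right, i.e., the $FMS$/$FME$ arrays of Algorithm 1) can be sketched in plain Python. This is an illustrative reconstruction, not the authors' CUDA kernel; the function name and default handling are assumptions:

```python
def expand_flashmask(n_rows, s1, e1=None, s2=None, e2=None):
    """Expand column-wise FlashMask-style ranges into a dense boolean mask.

    For each column j, rows [s1[j], e1[j]) (lower-left part) and rows
    [s2[j], e2[j]) (upper-right part) are masked out (True = masked).
    Per the reply, e1 defaults to the number of rows and s2 defaults to 0.
    """
    n_cols = len(s1)
    e1 = e1 if e1 is not None else [n_rows] * n_cols
    s2 = s2 if s2 is not None else [0] * n_cols
    e2 = e2 if e2 is not None else [0] * n_cols
    mask = [[False] * n_cols for _ in range(n_rows)]
    for j in range(n_cols):
        for i in range(n_rows):
            if s1[j] <= i < e1[j] or s2[j] <= i < e2[j]:
                mask[i][j] = True
    return mask

# QK-sparse example from the reply: a column with S1 = 6, E2 = 3 masks
# rows [0, 3) and [6, 8), leaving rows 3-5 visible.
qk_col = [row[0] for row in expand_flashmask(8, s1=[6], e2=[3])]
```

Storing only a handful of integers per column is what reduces the mask memory footprint from $O(N^2)$ to $O(N)$; the dense expansion above exists purely to check the worked examples against Figure 2 of the attached PDF.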
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Universality of AdaGrad Stepsizes for Stochastic Optimization: Inexact Oracle, Acceleration and Variance Reduction
Accept (poster)
Summary: The paper explores the adaptive gradient methods for solving convex composite optimization problems. The authors propose two main algorithms, UniSgd and UniFastSgd, equipped with AdaGrad stepsizes. They establish efficiency guarantees under various conditions, demonstrating implicit and explicit variance reduction properties. The results are also extended to incorporate SVRG-type variance reduction, leading to faster convergence rates. Strengths: 1. This paper extends existing results by showing the universality of AdaGrad stepsizes in a more general setting, providing efficiency guarantees and state-of-the-art complexity bounds. 2. The proposed methods are adaptive, not requiring problem-specific constants, making them practical for real-world applications. 3. Numerical experiments validate the theoretical findings, showing the practical effectiveness of the proposed methods. Weaknesses: 1. Although the authors call their algorithm adaptive methods, the proposed algorithm still requires knowing the domain diameter, which is an obvious drawback. Also, the reliance on the boundedness of the domain might also limit the application of the methods in more general or practical scenarios. 2. The main body of this paper is difficult to understand. It seems that the algorithms are all rather complicated and the authors do not explain the methods clearly. Also, I think the authors should further emphasize the novelty and the challenge of this paper. 3. More extensive empirical evaluations across a wider range of practical problems on real-world datasets would strengthen the experiments part of this paper. 4. The authors claim in the abstract that “the main part can be accessed only via a (potentially **biased**) stochastic gradient oracle”, but it seems that the paper assumes an unbiased stochastic oracle. Maybe I missed something, but I do not find where we use the biased oracle. Can you explain this more clearly? 
Technical Quality: 3 Clarity: 2 Questions for Authors: See the weaknesses part. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time and efforts spent on reviewing this manuscript. We appreciate the feedback and would like to make some comments on the points you raised in your review. - [W1]: We agree that this is a drawback and it was explicitly written in the paper. Note, however, that $D$ is indeed the **only** parameter that the methods need to know. This is the price one pays for getting completely universal methods, automatically adapting to a large variety of other parameters (degree of smoothness $\nu$ and the corresponding smoothness constant $H_{\nu}$, oracle's variance $\sigma$, variance at the minimizer $\sigma_*$, oracle's inexactness $\delta$, etc.), while enjoying the optimal worst-case complexity estimates, typical for non-adaptive methods. We are not aware of any other algorithms with similar properties, especially, working in such a large number of situations and with such a unified treatment as in our work (basic methods and accelerated ones, with variance reduction and without it, etc.). While we do agree that it would be very nice to have completely parameter-free algorithms, it seems that this goal may not be achievable for stochastic optimization problems, at least not without additional, somewhat restrictive, assumptions. Please refer to our detailed reply to Reviewer PtE8. - [W2]: Thank you for the comment. We agree that it would be better to provide more explanations and intuitions. However, we are rather limited by the page limit and therefore must think twice if we want to add anything extra. Could you please clarify where exactly you would like us to elaborate, except better emphasizing the novelty and the challenge? - [W3]: Note that our paper is mostly theoretical, and we do not attempt to be exhaustive with experiments. But we will take your point into account and add more experiments in the revised version of the paper. - [W4]: Yes, of course. 
Our oracle $\hat{g}$ is assumed to be an unbiased estimate of $\bar{g}$, which is itself only an **approximation** to the true gradient $\nabla f$. Thus, $\hat{g}$ is, in general, a biased estimate of $\nabla f$: $\mathbb{E}\_{\xi}[\hat{g}(x, \xi)] = \bar{g}(x) \neq \nabla f(x)$. Some particular examples where we have biased gradients are mentioned in lines 114-122 and also in lines 186-194. For instance, if we work with finite-sum problems, $f(x) = \frac{1}{n} \sum_{i = 1}^n f_i(x)$, where each function $f_i$ represents another nontrivial optimization problem with a strongly convex objective, $f_i(x) = \max_{u_i} \Psi_i(x, u_i)$, whose solution $\bar{u}\_i(x)$ can be found with accuracy $\delta$, then the natural stochastic gradient $\hat{g}(x, \xi) = \nabla_x \Psi_{\xi}(x, \bar{u}_{\xi}(x))$ (with uniformly chosen index $\xi$) is a biased estimate of $\nabla f(x)$, and $\delta_f = \delta$. We hope we were able to address all your concerns and kindly ask you to increase your score. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. It addresses most of my concerns and I am happy to increase my score from 4 to 5. --- Reply to Comment 1.1.1: Comment: Thank you. Let us know please if you have any other questions that we can answer to resolve your remaining concerns.
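A minimal numeric sketch of this kind of bias, with toy assumed functions $\Psi_i(x, u) = xu - u^2/2$ (so the exact inner solution is $\bar{u}_i(x) = x$, $f(x) = x^2/2$, and $\nabla f(x) = x$):

```python
import random

# Toy illustration (assumed functions, not from the paper): each component
# f_i(x) = max_u Psi_i(x, u) with Psi_i(x, u) = x*u - u**2/2, so the exact
# inner solution is u_bar_i(x) = x and the true gradient is nabla f(x) = x.
DELTAS = [0.04, -0.01, 0.05, 0.02]  # per-component inner-solver errors, |.| <= 0.05

def stochastic_grad(x, rng):
    i = rng.randrange(len(DELTAS))   # uniformly chosen component index xi
    u_approx = x + DELTAS[i]         # inner problem solved only to accuracy delta
    return u_approx                  # grad_x Psi_i(x, u) = u, at u = u_approx

rng = random.Random(0)
x = 1.0
mean_grad = sum(stochastic_grad(x, rng) for _ in range(200_000)) / 200_000
# g_hat is unbiased for g_bar(x) = x + mean(DELTAS), yet biased for nabla f(x) = x:
bias = mean_grad - x                 # close to mean(DELTAS) = 0.025
```

The sketch mirrors the structure in the reply: $\hat{g}$ is an unbiased estimate of $\bar{g}$, which itself deviates from $\nabla f$ because the inner maximization problems are solved only approximately.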
Summary: This paper proves new convergence rates for stochastic gradient methods with AdaGrad step-sizes. The authors build on the fact that AdaGrad step-sizes adapt to both the smooth and non-smooth settings and extend these results to show convergence with biased stochastic gradient oracles on functions which are only approximately smooth. Because Hölder-smooth functions fall into this class, this paper shows that AdaGrad step-sizes lead to universal methods which only require knowledge of the diameter, $D$. As part of their analysis, the authors also show how to relax the bounded variance assumption and combine AdaGrad step-sizes with acceleration and variance reduction. Strengths: - The theoretical setting is extremely general, covering composite optimization with Hölder smooth functions and biased stochastic gradient oracles satisfying a relaxed variance bound. - The analysis covers SGD as well as accelerated SGD and (accelerated) SGD with variance reduction. - The synthetic experiments show that the proposed methods are faster than existing methods and adapt to different degrees of smoothness in the objective function. Weaknesses: - The contributions are fairly niche. Universality of (accelerated) SGD with modified AdaGrad step-sizes is known [14], Adagrad is known to converge without bounded variance [2], and AdaGrad step-sizes have been used with (accelerated) SVRG previously [14, 34]. Thus, this paper fits into a small gap by proving universality with a relaxed variance condition and covering both acceleration and variance reduction. 
- ~The actual degree to which bounded variance is relaxed is not clear since the connections between Assumption 3 and Assumption 6 are not developed.~ - ~Variance-reduced SGD with AdaGrad step-sizes (UniSvrg) only converges to a neighbourhood of the optimal solution even when using an unbiased stochastic gradient oracle.~ - Experiments are only for synthetic problems and do not show the actual relationship between oracle calls and optimization error. ### Detailed Comments **Note**: The authors have clarified the connection between Assumptions 3 and 6 as well as the role of $\delta_g$ in Theorem 10. **Assumption 3 vs Assumption 6/9**: An important and unaddressed issue is the comparison between Assumption 3 and Assumptions 6/9. Clearly Assumption 3 implies Assumptions 6/9 up to constant factors. But, in what circumstances do Assumptions 6/9 hold but the variance is not bounded, i.e. Assumption 3 fails? In comparison to this work, Nguyen et al. [2] give results for (strongly-convex) SGD that depend only on the stochastic gradient variance at the minimizer, without any bounded-variance assumption. Indeed, the typical assumption in this setting is only that $\mathbb{E}[\|g(x^*, \xi)\|_2^2]$ is finite. **Theorem 10**: I'm somewhat disturbed that SVRG with Adagrad step-sizes doesn't converge exactly to a minimizer $x^*$ under your assumptions. For example, suppose $\nabla f$ is $L$-Lipschitz and $\bar g = \nabla f$ so that $\delta_f$ is zero. In this setting, I expect variance-reduced methods like SVRG to converge exactly to a minimizer, but Theorem 10 shows UniSVRG converges to a neighbourhood of size $O(\delta_{\hat g})$. In comparison, standard SVRG, which does not require any variance bound (Allen-Zhu and Yuan, [1]), converges exactly. Can you please give sufficient conditions for $\delta_{\hat g}$ to be zero and explain how they compare to standard SVRG assumptions? Also, why are both Assumptions 6 and 9 required simultaneously for this theorem? 
They provide similar bounds on the variance of the gradient estimators, so I would have thought that Assumption 9 supplants Assumption 6. **Experiments**: I would have liked to see more realistic experiments. I understand that finding real-world problems for which the diameter $D$ is bounded can be difficult, but one synthetic regression problem is not sufficient to judge the performance of the methods analyzed in this paper. In comparison, Dubois-Taine et al. [14] include a large number of experiments on real-world data. In addition, I strongly dislike the choice to treat one mini-batch gradient computation as a single oracle call instead of $b$ calls. While the choice has no effect in Figure 1, it is actually quite misleading in Figure 2, where it makes the total SFO complexity of all the algorithms appear to decrease as batch-size increases. In reality, the SFO complexity should decrease until a threshold in $b$ and then begin to increase again. That is, there is some optimal batch size (usually less than $n$) which minimizes the total complexity. ### Minor Comments - "Gradient methods are the most popular and efficient optimization algorithms for solving machine learning problems." --- They are not the most efficient methods depending on the problem class and desired sub-optimality, so I wouldn't say this. - "However, the line-search approach is unsuitable for problems of stochastic optimization, where gradients are observed with random noise." --- you may want to mention that line-search works with stochastic gradients when interpolation is satisfied [1]. - So the difference between the results in Section 4 and [47] is that [47] considers a modification of the Adagrad step-size? You might want to state this modification so the difference is clear. - Contributions, Bullet 3: I don't like calling this variance reduction. 
It is well-known that SGD with a constant step-size depends only on the variance of the stochastic gradients at the minimizer [2], but no one would call SGD variance reduced. - Table 1: - Can you make the font-size larger? It's a little hard to read this table. - If you are reporting Big-O complexities, then can you please put them in Big-O notation? - You should explain in the caption why UniSgd and UniFastSgd have two convergence rates each (e.g. variance assumptions). - Line 117: If it's satisfied for any $\delta_f > 0$ then taking limits as $\delta_f \rightarrow 0$ would imply that it's also satisfied for $\delta_f = 0$. - Line 136: I would say the classical method is due to Nesterov [3], while you are referring to more recent acceleration methods for composite optimization. - Eq. (4): How do you solve this implicit equation, given that $\hat x_+$ depends on $\hat M_+$ through the stochastic gradient update? - Assumption 6: In line 126, you say that the oracle outputs are assumed to be independent. However, from the notation in Assumption 6, it seems that in this case the oracle is evaluated with the same randomness $\xi$. Indeed, this must be the case, else Assumption 6 reduces to Assumption 3. Maybe you can add a remark clarifying this in the text? - Theorems 4,7: You should comment somewhere in the text that these rates (according to Algorithm 1) are not anytime, but require knowledge of $N$. - Line 230: This sentence isn't finished. - Page 7: The spacing between the algorithm blocks is quite weird. You should probably fix this. - Theorem 11: I would prefer if you stated this without assuming $N \in \Theta(n)$. - Table 2, row 3: I think there's a typographic issue here since you appear to be adding the definition of $N_\nu(\epsilon)$ with the log. - Figures 1, 2: - The font sizes are much too small to read. As a rule of thumb, the size of text in a figure should be at least as large as the size of text in paragraph mode. 
- Figure 1: It would be nice if you reminded the reader that $q$ is the power in the test problem in Eq. (5). - Line 916: You should also reference SDCA [4] and SAG [5], which were very early (if not the earliest) variance reduction methods. ### References [1] Vaswani, Sharan, et al. "Painless stochastic gradient: Interpolation, line-search, and convergence rates." Advances in neural information processing systems 32 (2019). [2] Nguyen, Lam, et al. "SGD and Hogwild! convergence without the bounded gradients assumption." International Conference on Machine Learning. PMLR, 2018. [3] Nesterov, Yurii. "A method for solving the convex programming problem with convergence rate O (1/k2)." Dokl akad nauk Sssr. Vol. 269. 1983. [4] Shalev-Shwartz, Shai, and Tong Zhang. "Stochastic dual coordinate ascent methods for regularized loss minimization." Journal of Machine Learning Research 14.1 (2013). [5] Schmidt, Mark, Nicolas Le Roux, and Francis Bach. "Minimizing finite sums with the stochastic average gradient." Mathematical Programming 162 (2017): 83-112. Technical Quality: 4 Clarity: 3 Questions for Authors: To reiterate, my questions are: - Why doesn't SVRG with AdaGrad step-sizes converge exactly and under what conditions is $\delta_{\hat g} = 0$ satisfied? - Why are both Assumptions 6 and 9 required to show convergence of UniSVRG? - Under what conditions are Assumptions 6/9 weaker than Assumption 3? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The contributions fit into a fairly small gap given previous work and experiments are not actually sufficient to justify the methods. See main review for details. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive evaluation! ## Major remarks 1. It seems there is a certain misunderstanding of Ass. 6, which we hope to clarify. Basically, Ass. 6 is satisfied for finite-sum problems with (Hölder) smooth components (see also lines 186-194 in the paper for a more general example). The motivation is as follows. Ass. 6 is the generalization of $\mathbb{E}\_{\xi}[\\| g(x, \xi) - g(y, \xi) \\|^2] \leq 2 L \beta(x, y)$, where $\beta(x, y) = f(y) - f(x) - \langle \nabla f(x), y - x \rangle$, for which the main example is the finite-sum objective $f(x) = \frac{1}{n} \sum_{i = 1}^n f_i(x)$ with the usual stochastic oracle $g(x, \xi) = \nabla f_{\xi}(x)$ (for a uniformly chosen index $\xi$) and $L$-smooth components $f_i$. To generalize it to the case when $f_i$ are $(\nu, H)$-Hölder smooth, we need to add $\delta$: $\mathbb{E}\_{\xi}[\\| g(x, \xi) - g(y, \xi) \\|^2] \leq 2 L [\beta(x, y) + \delta]$; then, we can take an arbitrary $\delta > 0$ and let $L = [\frac{1 - \nu}{2 (1 + \nu) \delta}]^{\frac{1 - \nu}{1 + \nu}} H^{\frac{2}{1 + \nu}}$. This is almost our Ass. 6 except that we have the second moment instead of the variance $\mathbb{E}\_{\xi}[\\| [g(x, \xi) - g(y, \xi)] - [\nabla f(x) - \nabla f(y)] \\|^2]$ in the left-hand side (for simplicity, we ignore the possibility of using inexact gradients $\bar{g}$ and function values $\bar{f}$). Using the variance is better because it is always smaller than the second moment and can easily be reduced with mini-batching. 1. There is no problem with $\delta_g$ in Thm. 10. First, let us stress that Thm. 10 covers the standard setting of finite-sum problems with $L$-smooth components. In this case, $L_f = L_g = L$, $\delta_f = \delta_g = 0$, and we recover the classical result from (Allen-Zhu and Yuan, [1]), without any extra assumptions. 
However, it is important that our method is actually more powerful and converges in other regimes as well, e.g., for finite-sum problems with Hölder-smooth components. Indeed, as discussed before, for such problems, **$\delta_g$ can be an arbitrary (potentially very small) positive number** (and so can be $\delta_f$, see lines 116-117 in the paper). Choosing $\delta_f$ and $\delta_g$ carefully (by optimizing the rate from Thm. 10 while taking into account the fact that $L_f$ and $L_g$ also depend on these constants), we get the following convergence rate: $F_t \lesssim \frac{H D^{1 + \nu}}{(2^t)^{\frac{1 + \nu}{2}}}$ which goes to zero as $t \to \infty$. For more details, see Cor. 41. 1. The previous example shows that $\delta_f$ does not only measure the error between $\bar{g}$ and $\nabla f$ but is also related to the smoothness properties of the objective: even when $\bar{g} = \nabla f$, it is still meaningful to allow for $\delta_f > 0$ (e.g., to handle Hölder-smooth problems). 1. [Ass. 3 vs 6] Indeed, Ass. 6 is weaker than 3 because we can choose any $\delta_g > 0$ and let $L_g \sim \frac{\sigma^2}{\delta_g}$. Then, Thms. 4 and 5 become simple corollaries of Thms. 7 and 8. On the other hand, as already discussed above, Asm. 6 is satisfied for finite-sum problems with smooth components and the standard oracle, while Asm. 3 may be violated for this case (consider the quadratic function on the entire space); even if one defines $\sigma$ in Asm. 3 by looking only at $x$ from the feasible set, $\sigma$ will depend on the diameter of this set and may be potentially very big. We will add the corresponding comments. 1. [Asm. 6 vs 9] When we use exact gradients, $\bar{g} = \nabla f$, both are equivalent; otherwise, one does not seem to imply the other. For some reason, the current proof of Thm. 10 needs Asm. 9 at one place, namely, in lines 693-694, to get the Bregman distance term in Lem. 28, which can then be bounded via the function residual using Lem. 29. 
To remove Asm. 9, we would need to obtain a counterpart of Lem. 29 in terms of the approximate Bregman distance instead of the exact one, which we do not know how to do at the moment. We will clarify this. 1. [Experiments] We will take your point into account and add more experiments. Regarding the mini-batch size, we are not sure if you are correct. Consider, e.g., SGD with mini-batch size $b$ as applied to minimizing an $L$-smooth convex function. To reach accuracy $\epsilon$, it needs $N_b = \frac{L D^2}{\epsilon} + \frac{\sigma^2 D^2}{b \epsilon^2}$ oracle calls, and each such call requires $b$ stochastic gradients. If the computation is sequential, the total SFO complexity is then $b N_b = \frac{b L D^2}{\epsilon} + \frac{\sigma^2 D^2}{\epsilon^2}$, which is minimized for $b = 1$. To our knowledge, mini-batching is provably efficient only when we allow for parallel computations of stochastic gradients; then, the complexity becomes $N_b$ which indeed decreases with $b$ (however, $b$ cannot exceed the limits of our parallelism, e.g., the number of computing nodes). See also Sec. 6 in [54] for a similar point of view. 1. [W1] Please note that [2] considers another variance assumption, namely, $\\| g(x, \xi) - \nabla f(x) \\|^2 \leq \sigma_0^2 + \sigma_1^2 \\| \nabla f(x) \\|^2$, which is completely different from ours and is not guaranteed to be satisfied (with good constants) even for finite-sum problems with smooth components. Our assumption is actually much weaker and has never been explored in the literature on AdaGrad. Furthermore, for the accelerated method with AdaGrad step sizes, nothing was known except for the classical case when the variance is uniformly bounded. We therefore kindly disagree that our contributions are "fairly niche" / straightforward, including also the uniform extension of everything to Hölder-smooth problems. 
## Minor remarks Unfortunately, we do not have enough space to answer every remark but we will consider all of them and make the appropriate modifications. ## Conclusion We hope we could answer your questions, and kindly ask you to consider increasing your score to support our work. --- Rebuttal Comment 1.1: Comment: Thanks for your response. > 1. It seems there is a certain misunderstanding of Ass. 6, which we hope to clarify. What misunderstanding? Did I say something in my review which indicated I didn't understand Assumption 6? I just want you to give a concrete example in the paper. > 2. First, let us stress that Thm. 10 covers the standard setting of finite-sum problems with $L$-smooth components. Does it say this somewhere in the manuscript? I appreciate your general presentation, but it's quite typical to include corollaries showcasing how the theorems behave in simplified settings. I think adding this type of statement would greatly improve your paper. > 4. Asm. 3 may be violated for this case (consider the quadratic function on the entire space) Don't you assume a finite diameter $D$ in your analysis? In this case, I think finite-sum quadratics should be fine for Assumption 3, right? I would like you to provide an example of a function *in your setting* which satisfies Assumption 6, but doesn't satisfy Assumption 3. Note that I'm not particularly concerned about assuming $D$ is finite (unlike some of the other reviewers), but you shouldn't assume finite $D$ and then make comparisons on the full space. Moreover, since you assert $\sigma^2$ may be very big for finite $D$, can you provide a comparison between $\sigma^2$ and $\delta_g, L_g$? > 5. [Asm. 6 vs 9] ... which we do not know how to do at the moment That's unfortunate. I don't have a lot of insight into this problem, but it does seem strange to need both assumptions. > 6. [Experiments] Regarding the mini-batch size, we are not sure if you are correct. 
There are quite a few papers on this subject which support my comments. Please see Gower et al. (1) and Kento et al. (2). I don't particularly like the second paper, but it gets the point across. > 7. Please note that [2] considers another variance assumption... Sorry, it looks like I missed including the additional references for my review. I've edited the review to add them now. By [2], I was referring to the work by Nguyen et al., which is [2] in my list. Their variance assumption is what I would consider to be "variance at the minimum". Of course you will not agree that your contributions are fairly niche, but that is my opinion. [1] Gower, Robert Mansel, et al. "SGD: General analysis and improved rates." International conference on machine learning. PMLR, 2019. [2] Imaizumi, Kento, and Hideaki Iiduka. "Iteration and stochastic first-order oracle complexities of stochastic gradient descent using constant and decaying learning rates." Optimization (2024): 1-24. --- Reply to Comment 1.1.1: Comment: Dear Reviewer aWSb, Thanks for the discussion. > adding this type of statement would greatly improve your paper. We agree and will add it. > I would like you to provide an example ... This is exactly the example we mentioned the last time, namely, the quadratic function with the variance growing with $\\| x \\|$. In this case, $\sigma$ grows with the diameter $D$ and therefore may be arbitrarily large, in contrast to $\sigma_*$ which may not grow with $D$ at all. To be more precise, **here is one simple specific example**: $f(x) = \mathbb{E}\_L[F(x, L)]$, where $L$ is the random variable uniformly distributed on $[0, L_{\max}]$ and $F(x, L) = \frac{L}{2} \\| x \\|^2$. We want to minimize this function over the ball of radius $R$ centered at the origin, and use the standard oracle $g(x, L) = \nabla_x F(x, L) = L x$. 
Denoting $\bar{L} = \mathbb{E}[L] = \frac{1}{2} L_{\max}$ and $L_V^2 = \mathbb{E}[(L - \bar{L})^2] = \frac{1}{12} L_{\max}^2$, we get $f(x) = \frac{\bar{L}}{2} \\| x \\|^2$, and the variance is $\sigma^2(x) = \mathbb{E}\_L[\\| g(x, L) - \nabla f(x) \\|^2] = L_V^2 \\| x \\|^2$. Its maximal value over the feasible set is then $\sigma = \max \\{ \sigma(x) : \\| x \\| \leq R \\} = L_V R$, while $\sigma_* = \sigma(x^*) = 0$. For this example, our Asm. 1 is satisfied with $L_f = \bar{L}$, $\delta_f = 0$, while Asm. 6 is satisfied with $L_g = L_{\max}$, $\delta_g = 0$. The "classical" convergence rate (Th. 4) is thus $ O(\frac{L_f R^2}{k} + \frac{\sigma R}{\sqrt{k}}) = O(\frac{\bar{L} R^2}{k} + \frac{L_V R^2}{\sqrt{k}}) = O(\frac{L_{\max} R^2}{\sqrt{k}}) $. In contrast, the $\sigma_*$-rate (Th. 7) is $ O(\frac{(L_f + L_g) R^2}{k} + \frac{\sigma_* R}{\sqrt{k}}) = O(\frac{L_{\max} R^2}{k}) $, which is much faster. Please note that we are actually discussing now the general question why $\sigma_*$-bounds are better than the corresponding $\sigma$-bounds for smooth stochastic optimization problems. This question has already been addressed in many previous works and is not directly related to the specific adaptive methods we propose. The only important detail is that our "new" Asm. 6 indeed covers smooth stochastic optimization problems. > Please see Gower et al. (1) ... Thanks for the references. We do not mind adding extra experiments with another way of counting mini-batch computations (as you suggested in the first post) and will do that in the revised version. Note, however, that, from the theoretical point of view, the "optimal" mini-batch size suggested in [Gower et al., 1] does not give any nontrivial results: it is always comparable to either $b = 1$ or $b = n$. 
Indeed, consider the finite-sum minimization of $f(x) = \frac{1}{n} \sum_{i = 1}^n f_i(x)$ with $L$-smooth components $f_i$ and assume that $\frac{1}{n} \sum_{i = 1}^n \\| \nabla f_i(x) - \nabla f(x) \\|^2 \leq \sigma^2$ for any $x$. One of the standard examples considered in [Gower et al., 1] is the so-called $b$-nice sampling, meaning that the oracle uses mini-batching $g_b(x, \xi) = \frac{1}{b} \sum_{j = 1}^b \nabla f_{\xi_j}(x)$ with indices $\xi_j$ chosen uniformly at random from $\\{ 1, \ldots, n \\}$ without replacement. The variance of such an oracle is $\sigma_b^2 = \frac{n - b}{n - 1} \frac{\sigma^2}{b}$, so SGD needs $ N_b = O(\frac{L D^2}{\epsilon} + \frac{\sigma_b^2 D^2}{\epsilon^2}) = O(\frac{L D^2}{\epsilon} + \frac{n - b}{n} \frac{\sigma^2 D^2}{b \epsilon^2}) $ oracle calls to reach accuracy $\epsilon$. The total SFO complexity is then $b N_b = O(\frac{b L D^2}{\epsilon} + \frac{n - b}{n} \frac{\sigma^2 D^2}{\epsilon^2})$, which is exactly the same expression as we wrote in our previous reply, up to the factor $\frac{n - b}{n}$ which comes because we now use sampling *without replacement*. As we can see, the resulting expression is a linear function of $b$, so its minimum value is attained at one of the boundary points: either $b = 1$ (if $n \geq \frac{\sigma^2}{L \epsilon}$) or $b = n$ (otherwise). The "optimal" $b$ in [Gower et al., 1] appears to be more complicated simply because they first rewrite $b N_b = O(\max\\{\frac{b L D^2}{\epsilon}, \frac{n - b}{n} \frac{\sigma^2 D^2}{\epsilon^2}\\})$ and then minimize the maximum by solving $\frac{b L D^2}{\epsilon} = \frac{n - b}{n} \frac{\sigma^2 D^2}{\epsilon^2}$. Although the resulting solution could potentially improve some *absolute* constants, it will still result in the same SFO complexity of $O(\min\\{ \frac{L D^2}{\epsilon} + \frac{\sigma^2 D^2}{\epsilon^2}, \frac{n L D^2}{\epsilon} \\})$ as the naive approach selecting either $b = 1$ or $b = n$. 
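The linearity argument above can be checked numerically (all constants below are arbitrary illustrative values, not from the paper):

```python
# Total SFO cost of b-nice sampling from the derivation above:
# b * N_b = b*L*D^2/eps + ((n - b)/n) * sigma^2 * D^2 / eps^2,
# an affine function of b, hence minimized at an endpoint (b = 1 or b = n).
def total_sfo(b, n=1000, L=1.0, D=1.0, sigma=10.0, eps=1e-3):
    return b * L * D**2 / eps + (n - b) / n * sigma**2 * D**2 / eps**2

# Small n (here n = 1000 < sigma^2 / (L * eps) = 1e5): full batch b = n wins.
best_small_n = min((1, 2, 10, 100, 1000), key=total_sfo)
# Large n (here n = 1e6 >= sigma^2 / (L * eps)): b = 1 wins.
best_large_n = min((1, 10, 1000), key=lambda b: total_sfo(b, n=10**6))
```

In both regimes the minimizer is a boundary point, never an interior batch size, matching the claim that the "optimal" $b$ cannot beat the naive choice between $b = 1$ and $b = n$.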
> By [2], I was referring to the work by Nguyen et al. ... Their variance assumption ... Indeed, they work with the variance at the minimizer. However, our objection was that there are no other results in the literature showing that AdaGrad does adapt to the variance at the minimizer for convex finite-sum problems. Note that the work of Nguyen et al. studies only **non-adaptive** methods.
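The specific quadratic example given above ($F(x, L) = \frac{L}{2} \\| x \\|^2$ with $L$ uniform on $[0, L_{\max}]$) can be sanity-checked with a short Monte Carlo sketch (illustrative constants; one-dimensional for simplicity):

```python
import random

# Monte Carlo check of the example above: F(x, L) = (L/2) * x**2 with
# L ~ Uniform[0, L_MAX] and oracle g(x, L) = L * x. The reply's formula gives
# sigma^2(x) = Var(L) * x**2 = (L_MAX**2 / 12) * x**2, so the oracle variance
# grows with ||x|| (hence with the radius R) but vanishes at the minimizer x* = 0.
L_MAX = 4.0
rng = random.Random(1)

def oracle_variance(x, n_samples=200_000):
    L_bar = L_MAX / 2.0                    # E[L], so grad f(x) = L_bar * x
    total = 0.0
    for _ in range(n_samples):
        L = rng.uniform(0.0, L_MAX)
        total += (L * x - L_bar * x) ** 2  # || g(x, L) - grad f(x) ||^2
    return total / n_samples

var_at_boundary = oracle_variance(3.0)     # near (L_MAX**2 / 12) * 9 = 12
var_at_minimizer = oracle_variance(0.0)    # exactly 0: sigma_* = 0
```

This is exactly the gap the rebuttal exploits: the worst-case $\sigma$ scales with $R$, while $\sigma_*$ at the minimizer is zero, so the $\sigma_*$-dependent rate of Thm. 7 is much faster than the classical $\sigma$-dependent one.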
Summary: This paper demonstrates the universality of AdaGrad in stochastic optimization, presenting adaptive algorithms that converge efficiently without prior knowledge of problem-specific constants. The research contributes novel variance reduction techniques, theoretical proofs, and empirical evidence, showcasing robust performance across optimization scenarios. It advances the field by offering a versatile approach to stochastic optimization, hinting at potential extensions to more complex problem sets. Strengths: The paper's strength lies in its theoretical depth: the algorithms are designed to be universally applicable to a wide range of optimization problems, including those with Hölder smooth components, showcasing a high level of adaptability. The authors also illustrate the impact of the mini-batch size on the convergence of the proposed methods, which further helps to verify the theorems. Weaknesses: Rather than pointing out shortcomings, I would like to discuss some issues with the authors. Technical Quality: 3 Clarity: 3 Questions for Authors: The prior knowledge is also not required in UniXGrad[28] and AcceleGrad[32]. What is the difference between them and the proposed algorithm? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: It is mentioned on line 246 of page 7 that "Alternative accelerated SVRG schemes with AdaGrad stepsizes (3) were recently proposed in [34]; however, they seem to be much more complicated." I think it would be beneficial if the authors discussed the algorithm in [34] in detail to help readers understand the advantages of the algorithm in this paper more clearly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive evaluation of our work. Below you can find the answers to your questions / comments. > The prior knowledge is also not required in UniXGrad[28] and AcceleGrad[32]. What is the difference between them and the proposed algorithm? From the algorithmic perspective, all three methods (UniXGrad, AcceleGrad and our UniFastSgd) are different versions of Nesterov's accelerated gradient method. Our UniFastSgd is one of the standard versions known as the Method of Similar Triangles (see Section 6.1.3 in [44] and [31]). The name stems from the fact that the next point $x_{k + 1}$ is defined in such a way that the triangles $(x_k, v_k, v_{k + 1})$ and $(x_k, y_k, x_{k + 1})$ are similar (see the picture on page 1 in the PDF attached to the main rebuttal). In contrast, UniXGrad and AcceleGrad choose $x_{k + 1}$ as the result of the (projected) gradient step from $y_k$, which results in a slightly more complicated iteration with no guarantee of similar triangles. From the theoretical perspective, the convergence rate guarantees for UniXGrad and AcceleGrad were proved only under the uniformly bounded variance and for functions with either bounded gradients or Lipschitz continuous gradients. In other words, it is not known whether UniXGrad and AcceleGrad can provably adapt to the more general assumptions from our paper (such as Hölder smoothness, variance at the minimizer, etc.). It might be possible to extend our techniques to those methods as well, but we have not investigated this direction as we personally find the Method of Similar Triangles more elegant and simpler to work with. > It is mentioned on line 246 of page 7 that "Alternative accelerated SVRG schemes with AdaGrad stepsizes (3) were recently proposed in [34]; however, they seem to be much more complicated. 
" I think it would be beneficial if the authors discussed the algorithm in [34] in detail to help readers understand the advantages of the algorithm in this paper more clearly. Thank you for the suggestion. We agree and will elaborate on this in the revised version of the manuscript. Here are some of the reasons why our method is simpler and more elegant than the AdaVRAE algorithm from [34]: 1. Each epoch of our algorithm is essentially the standard Method of Similar Triangles with the only difference that one of the vertices ($\tilde{x}$) is fixed during the epoch (we mention this in lines 240-244; see also the picture on page 2 in the PDF attached to the main rebuttal). As a result, its geometry is much easier to understand than that of AdaVRAE, and we can readily apply the classical results for each Triangle Step (such as Lemma 24) without proving everything "from scratch". 2. Our algorithm uses a simple recurrent formula for the sequence $A_t$, namely, $A_{t + 1} = A_t + \sqrt{A_t}$, which can be easily analyzed using standard techniques (Lemma 32). In contrast, AdaVRAE uses quite complicated ad-hoc formulas which are impossible to even write on a single line. We hope we could answer your questions, and kindly ask you to consider increasing your score to support our work. --- Rebuttal Comment 1.1: Comment: Dear Reviewer soXZ, Thanks again for your feedback. Please let us know if you are satisfied with our reply and if your concerns are now resolved. We would be happy to provide more explanations if needed.
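For readers less familiar with the Method of Similar Triangles discussed above, here is a minimal deterministic sketch for an $L$-smooth convex function. The classical coefficient rule with a known $L$ is an assumption of this sketch (the paper's methods instead use adaptive AdaGrad-type stepsizes); the similar-triangles property is asserted explicitly at every iteration.

```python
import numpy as np

def mst(grad, x0, L, iters):
    """Method of Similar Triangles for an L-smooth convex f (deterministic sketch)."""
    x = v = np.asarray(x0, dtype=float)
    A = 0.0
    for _ in range(iters):
        # Classical coefficient rule: choose a with L * a^2 = A + a (L assumed known).
        a = (1.0 + np.sqrt(1.0 + 4.0 * L * A)) / (2.0 * L)
        A_new = A + a
        y = (A * x + a * v) / A_new            # y_k lies on the segment [x_k, v_k]
        v_new = v - a * grad(y)                # gradient step from v_k
        x_new = (A * x + a * v_new) / A_new    # x_{k+1} lies on [x_k, v_{k+1}]
        # Similar triangles: y_k and x_{k+1} divide their segments in the same
        # ratio a / A_{k+1}, so (x_k, v_k, v_{k+1}) ~ (x_k, y_k, x_{k+1}).
        t = a / A_new
        assert np.allclose(y - x, t * (v - x))
        assert np.allclose(x_new - x, t * (v_new - x))
        x, v, A = x_new, v_new, A_new
    return x

# Toy quadratic f(x) = 0.5 * sum(d_i * x_i^2), minimized at the origin; L = max(d).
d = np.array([1.0, 4.0, 9.0])
x_end = mst(lambda y: d * y, np.ones(3), L=9.0, iters=1000)
f_end = 0.5 * np.sum(d * x_end**2)
assert f_end < 1e-4  # consistent with the O(L R^2 / k^2) accelerated rate
```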
Summary: This paper studied how to apply AdaGrad-type stepsizes for stochastic convex optimization in a unified way. It proposed different algorithms and a general rule for the stepsize. Later, the authors presented different convergence rates under different settings, all of which match the existing best results. Finally, numerical experiments are given to show the better performance of these new algorithms. Strengths: The paper also extends the result to the Hölder smooth case. Weaknesses: 1. The major weakness is still Assumption 2. Though I understand this condition is widely used in the prior works with AdaGrad stepsizes, this doesn't mean we should always accept it. **i.** From the practical perspective, a bounded domain means one either has some prior information on the solution or puts some artificial constraints on the problem. However, neither of these may be realistic. A simple example is logistic regression with separable data (but this is unknown in advance). Then clearly, imposing a bounded domain can guarantee the existence of $x^*$. However, one can notice that a better solution always exists but not in the domain. **ii.** From the analysis perspective, with a bounded domain, one can immediately bound $\frac{M_k\\|x_k-x^*\\|^2-M_k\\|x_{k+1}-x^\*\\|^2}{2}\leq\frac{M_k\\|x_k-x^*\\|^2-M_{k+1}\\|x_{k+1}-x^\*\\|^2}{2}+O((M_{k+1}-M_{k})D^2)$ to get a telescoping sum, which significantly simplifies the analysis. In my experience, this is the main source of difficulty. Without a bounded domain, one has to pay more effort to control this term or may meet other difficulties if one chooses to divide by $M_k$ in the analysis. **iii.** More importantly, I think different works with AdaGrad-like stepsizes have been trying to relax it recently, as mentioned by the authors. **iv**. To summarize, under Assumption 2, the paper is more like a combination of different existing results. I cannot find anything particularly new in either the algorithms or the proofs. 
Even under Assumption 1 (or any inexact oracle), the proofs still follow the classical steps, and one only needs to put $\delta$ on the R.H.S. of every inequality. 2. If I am not wrong, all results (except Theorems 7 and 8) in the paper hold for any $F(u)$, not only $F^*$. If so, the authors can add a remark stating this to improve the paper. But feel free to correct me if I missed something. 3. For some plots, I cannot find the confidence interval. Please add it according to the checklist. Technical Quality: 2 Clarity: 2 Questions for Authors: See **Weaknesses**. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time and efforts spent on reviewing this manuscript. We appreciate the feedback and would like to make some comments on the points you raised in your review. ## Major remarks 1. Our assumption on the bounded feasible set is satisfied whenever one knows some upper bound $R$ on the distance $\\| x_0 - x^* \\|$ from the initial point to the solution. In this case, we can easily transform our initial problem into the one with the bounded feasible set of diameter $D = 2 R$ by adding the additional simple ball constraint $\\| x - x_0 \\| \leq R$. This is essentially the setting we consider. We agree that it would be highly desirable to develop a completely parameter-free method for stochastic optimization, which would not require the knowledge of any problem-dependent parameters and would automatically and efficiently adapt to all of them. However, according to several recent works (see [CH24,AK24,KJ24]), this seems too much to ask for. In contrast to the standard deterministic optimization, where one can compute exact function values and use line-search techniques by paying only an additive logarithmic term for not knowing the parameters of the problem, for stochastic optimization, one really needs some nontrivial knowledge of either the distance to the solution $R$, or the smoothness constant / oracle's variance, etc. Without this, the method would not be able to achieve a nearly optimal complexity. In other words, without assuming the knowledge of $R$, we would need to impose other restrictive assumptions such as the knowledge of the Lipschitz constant and/or oracle's variance. But then the algorithm will be tailored to a particular function class and will not be as universal as our methods (working **simultaneously** for each Hölder class and different variance assumptions). Therefore, it is quite debatable which assumption is actually better. 2. 
Note that problems with bounded domains arise quite commonly in machine learning problems. In fact, any $\ell_2$-regularized convex problem $\min_x [f(x) + \frac{\lambda}{2} \\| x \\|^2]$ ($P_{\lambda}$) (e.g., logistic regression) is equivalent to $\min_x \\{ f(x) : \\| x \\| \leq D \\}$ ($P'\_D$) for a certain $D$, so that there is a one-to-one correspondence between them. In practice, one usually selects the right regularization coefficient $\lambda$ by using the grid search, i.e., solving $(P\_{\lambda})$ for multiple values of $\lambda$ and checking the quality of the resulting solution $x_{\lambda}$. But this is exactly the same as doing the grid search over $D$, solving $(P'\_D)$ and checking the quality of the corresponding solution $x'\_D$. For an additional discussion of bounded domains, see also Ch. 5 in [SNW11] and p. 125 in particular. 3. We kindly disagree with your assessment that, under Assumption 2, our paper is just a trivial combination of different existing results. Consider, for example, the following two particular cases of (only a part of) our results: - Adaptive accelerated SGD for smooth problems with the bound via $\sigma_*$ (Theorem 8 for $\delta_f = \delta_{\hat{g}} = 0$). - Adaptive accelerated SVRG method for functions with Hölder-smooth components (last row in Table 2 or Corollary 42). In our opinion, both results are quite nontrivial: 1) The first (nonadaptive) accelerated method with the bound via $\sigma_*$ was suggested only recently, in [WS21]; the adaptive method was mentioned there as an open question. The algorithm was later revisited and improved in [IJLL23] (by properly separating the smoothness constants), however, the method was still non-adaptive. 2) We are aware of only one adaptive accelerated SVRG method, namely, AdaVRAE from [LNEN22], which was proven to work only for functions with Lipschitz gradient. 
However, as we explained in our reply to Reviewer soXZ, that algorithm and the corresponding proofs are rather complicated and it is extremely difficult to check if the method provably works for the more general class of Hölder-smooth problems. Could you please be more specific and explain in detail why you believe that the above two results are not particularly new and easily follow from the existing results in the literature (assuming our Assumption 2 holds)? Please provide some references. References: - [CH24] Y. Carmon, O. Hinder. The Price of Adaptivity in Stochastic Convex Optimization. arXiv:2402.10898, 2024. - [AK24] A. Attia, T. Koren. How Free is Parameter-Free Stochastic Optimization? ICML, 2024. - [KJ24] A. Khaled, C. Jin. Tuning-Free Stochastic Optimization. ICML, 2024. - [WS21] B. Woodworth, N. Srebro. An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning. NeurIPS, 2021. - [LNEN22] Z. Liu, T. Nguyen, A. Ene, H. Nguyen. Adaptive Accelerated (Extra-)Gradient Methods with Variance Reduction. ICML, 2022. - [IJLL23] S. Ilandarideva, A. Juditsky, G. Lan, T. Li. Accelerated stochastic approximation with state-dependent noise. arXiv:2307.01497, 2023. - [SNW11] S. Sra, S. Nowozin, S. Wright. Optimization for Machine Learning. MIT Press, 2011. ## Minor remarks - [W2] ($F(u)$ instead of $F^*$): Yes, this is indeed true, but we do not see how this could be useful for improving our results. We would appreciate it if you could elaborate. - [W3] (confidence interval): Thank you, we will add it. ## Conclusion We hope we were able to address your concerns and kindly ask you to reconsider your score. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. My comments are as follows: **To Major remarks 1.** This response is not convincing. 1. I understand the setting you are considering. But as I stated, and as commented by the authors, it is undesirable since it may not be practical in many cases. 2. 
The description of the lower bound is highly inaccurate. Note that the recent lower bounds developed for parameter-free algorithms only apply to classical stochastic optimization. Instead, the current paper considers the finite-sum case. I remark that a key assumption in these lower bounds for general stochastic optimization is the finite variance condition. However, this condition is not usually assumed in the finite-sum case. Hence, the results mentioned by the authors don't make too much sense in the current setting. As a simple example, one cannot apply the famous $O(1/\epsilon^2)$ lower bound in stochastic convex optimization to the finite-sum case since the latter admits a well-known faster upper bound $O(n+\sqrt{n/\epsilon})$. **To Major remarks 2.** Though, theoretically speaking, the authors' discussion holds, I am confused about what the authors want to convey. The problem in the paper is to solve $P'_D$ in your notation. However, it seems like the authors want to tell me that grid search on $\lambda$ for $P\_\lambda$ is reasonable, which I think is unrelated to the paper. Could you elaborate more on this point? **To Major remarks 3.** 1. As I stated earlier, the telescoping sum appears easily under a bounded domain (see the inequality mentioned in the review) and one only needs to keep every $\delta$ (in your notation) simply on the R.H.S. of every inequality. If the authors think my statement is not accurate, could you please let me know which step in the proof requires additional analysis of $\delta$? 2. Moreover, I would like to clarify that I only said the algorithm and the analysis are not new. I kindly remind the authors that I also pointed out that extending to the Hölder smooth case is a strength. In other words, the key issue in my opinion is that the analysis doesn't bring any new insights and hence is a trivial combination. 3. The description is also inaccurate for [LNEN22] as it contains two algorithms: AdaVRAE and AdaVRAG. 
Moreover, whether the proof is difficult to check or not is a subjective view. I think for a general reader who is not familiar with the optimization literature, your paper and [LNEN22] are equally difficult to go through. Actually, as far as I can see, once one puts $\delta$ on the R.H.S. of every inequality in the proof of [LNEN22], I cannot see any obvious obstacles to prevent their proof from working. If the authors think there are some barriers in their proofs, please write them down explicitly. **To Minor remark [W2].** Note that one reason that people study optimization errors for the finite-sum case is to understand the excess risk of ML algorithms, which can always be decomposed as follows: $\mathbb{E}\_{S,A}[F(x(A))-F(x^*)]=\mathbb{E}\_{S,A}[F(x(A))-F\_S(x(A))]+\mathbb{E}_{S,A}[F\_S(x(A))-F\_S(x^*)]$ where $F(x)=\mathbb{E}\_{z\sim P}[f(x,z)]$, $P$ is an unknown distribution, $x^*\in\mathrm{argmin}F(x)$, $S$ is a set of independent samples of $z$, $F\_S(x)=\frac{1}{|S|}\sum\_{z\in S}f(x,z)$, and $x(A)$ is the output of an algorithm $A$. Note that $x^*$ may not be the optimal solution of $F_S(x)$. As such, a bound on the optimization error for any reference point $u$ is more useful than $F_S(x(A))-F\_S^*$ where $F\_S^*=\inf F\_S(x)$. --- Reply to Comment 1.1.1: Comment: **Remark 1:** 1. Please note that **our paper does not consider only the finite-sum case**. Instead, we consider a general stochastic optimization problem in which the objective function can be of any form, as long as we are able to compute its stochastic gradients. In particular, it could be the "classical stochastic optimization" problem $\min_x \\{ f(x) = \mathbb{E}_{\xi}[F(x, \xi)] \\}$ for which the recent lower bounds developed for parameter-free algorithms do apply. We mentioned the finite-sum case simply because it is an important example (but not the only one). 1. 
The **lower bounds we were discussing do make sense even for finite-sum problems** in the important situation when $n$ is very large ($n \to \infty$). In this case, the $O(n + \sqrt{n / \epsilon})$ bound you mentioned is of no use. **Remark 2:** We were simply providing an example of an important family of applied problems which have bounded domain with known diameter. Essentially, the point was that, instead of the commonly used additive regularization for model selection, one may use the equivalent "ball regularization". **Remark 3:** 1. Even when $\delta_f = \delta_g = 0$, some of our results are still new, e.g., Th. 7 and 8. As we indicated in our previous reply, the adaptive accelerated SGD method with the $\sigma_*$-bound was considered an open question. **If you believe that Th. 8 and its proof containing the analysis of Alg. 2 are not new, could you please indicate the corresponding works with the same result?** 2. Regarding the addition of $\delta$ in the right-hand side of most inequalities, this is largely true. However, **this is not a drawback of our approach but instead a confirmation of its elegance**: we offer a reasonably simple analysis leading, in particular, to the state-of-the-art convergence rates for the Hölder-smooth problems. In contrast, the only other existing convergence analysis of AdaGrad methods for Hölder-smooth stochastic optimization problems from [RKWAC24] is completely different and more complicated: it does not use any $\delta$ and requires several rather technical lemmas to handle recurrent sequences involving the combination of several terms in different powers depending on $\nu$ (see Lemmas E.6-E.9 in their paper). Nevertheless, some care should be taken with "simply adding $\delta$ everywhere". One interesting example is Th. 8 which establishes the convergence rate of $O(\frac{L_f D^2}{k^2} + \frac{L_g D^2}{k} + k \delta_f + \delta_g + \frac{\sigma_* D}{\sqrt{k}})$. 
To get, from this result, the correct $ O(\frac{H_f(\nu) D^{1 + \nu}}{k^{\frac{1 + 3 \nu}{2}}} + \frac{H_{\max}(\nu) D^{1 + \nu}}{(b k)^{\frac{1 + \nu}{2}}} + \frac{\sigma_* D}{\sqrt{k}}) $ convergence rate (Cor. 40), it is very important to allow $\delta_f$ and $\delta_g$ (defined by our Asms. 1 and 6) to be different. If we were not careful and treated them as the same $\delta$ everywhere, we would end up with the slower rate of $ O(\frac{H_f(\nu) D^{1 + \nu}}{k^{\frac{1 + 3 \nu}{2}}} + \frac{H_{\max}(\nu) D^{1 + \nu}}{b^{\frac{1 + \nu}{2}} k^{\nu}} + \frac{\sigma_* D}{\sqrt{k}}) $ (where the second term does not even go to zero when $\nu = 0$). **One of the "nontrivial" insights of our work is the realization that such a separation is important.** 3. We did not consider AdaVRAG from [LNEN22] because its complexity is worse by an extra logarithmic factor. As we have already explained in our reply to Reviewer soXZ, **our UniFastSvrg algorithm is considerably simpler than AdaVRAE** and is based on the Method of Similar Triangles which is well-known in the optimization community. Consequently, its convergence analysis is much easier to follow: after applying the standard results on one triangle step (which is unrelated to SVRG methods at all), it only requires 3.5 pages to finish the proof for the accelerated SVRG (see Sec. A.4.2). Of course, we are not speaking here about "a general reader who is not familiar with the optimization literature" for whom any optimization paper would be very difficult. > Actually, as far as I can see, once one puts $\delta$ on the R.H.S. of every inequality in the proof of [LNEN22], I cannot see any obvious obstacles to prevent their proof from working With all due respect, it is impossible to verify such a claim without carefully checking every line of their 15+ page long proof. **Remark [W2]:** Thanks for the clarification. Note, however, that, for any point $u$, we have $F(x_k) - F(u) \leq F(x_k) - F^*$. 
Thus, proving $F(x_k) - F(u) \leq \epsilon$ for any $u$ is exactly the same as proving $F(x_k) - F^* \leq \epsilon$. **References:** - [RKWAC24] A. Rodomanov, A. Kavis, Y. Wu, K. Antonakopoulos, V. Cevher. Universal Gradient Methods for Stochastic Convex Optimization. ICML, 2024.
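The equivalence between additive regularization $(P_{\lambda})$ and "ball regularization" $(P'_D)$ invoked in Remark 2 above is easy to verify numerically. The least-squares instance below is an illustrative assumption; solving the ridge problem gives $x_{\lambda}$, and projected gradient descent on the ball of radius $D = \\| x_{\lambda} \\|$ recovers the same point.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 1.0

# (P_lambda): min_x 0.5*||Ax - b||^2 + (lam/2)*||x||^2 -- closed-form ridge solution.
x_lam = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)

# (P'_D): min_x 0.5*||Ax - b||^2  s.t.  ||x|| <= D, with the matched D = ||x_lam||,
# solved here by projected gradient descent.
D = np.linalg.norm(x_lam)
L = np.linalg.norm(A.T @ A, 2)  # smoothness constant of the unregularized objective
x = np.zeros(5)
for _ in range(20000):
    x = x - (1.0 / L) * (A.T @ (A @ x - b))
    nrm = np.linalg.norm(x)
    if nrm > D:
        x = x * (D / nrm)  # Euclidean projection onto the ball of radius D

# Same solution: the KKT multiplier of the active ball constraint equals lam.
assert np.allclose(x, x_lam, atol=1e-5)
```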
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable comments. We did our best to answer all the questions and will be happy to continue the discussion if needed. After reading the reviews, we had the impression that several aspects of our work were probably unnoticed or underappreciated (perhaps, due to the high level of generality of our presentation). Therefore, we would like to draw the reviewers' attention to the following few of our contributions which are particular cases of our general results but are nonetheless quite important in themselves: 1. Adaptive methods for minimizing finite-sum objectives $f(x) = \frac{1}{n} \sum_{i = 1}^n f_i(x)$ with $L$-smooth components $f_i$, whose convergence rate is expressed in terms of the variance $\sigma_*$ at the minimizer (Theorems 7 and 8 for $\delta_f = \delta_g = 0$). 1. Extensions of these algorithms to Hölder-smooth problems (first two rows in Table 2 or Corollaries 39 and 40). 1. Variance reduction methods for finite-sum problems with Hölder-smooth components. We are not aware of any other works that provably solve any of the aforementioned problems, even under the assumption that the distance to the solution is known, or the feasible set is bounded. Regarding the first contribution or, more generally, relaxing the assumption of uniformly bounded variance, we know only the works [FTCMSW22,AK23]. However, they use a different variance assumption, which is weaker than ours and is not guaranteed to hold for finite-sum problems with smooth components. Furthermore, the corresponding methods are not accelerated. The first non-adaptive accelerated method capable of solving finite-sum problems while enjoying the variance bound only at the minimizer was developed only recently in [WS21]; making this method adaptive was left as an open question. 
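The role of the variance at the minimizer $\sigma_*$ (as opposed to a uniform bound $\sigma$) is easiest to see in the interpolation regime. The overparameterized least-squares instance below is an illustrative assumption: every component $f_i$ is minimized at the same point, so $\sigma_* = 0$ while the variance at other points is not.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim = 5, 10                      # overparameterized: dim > n (interpolation regime)
A = rng.standard_normal((n, dim))
x_star = rng.standard_normal(dim)
b = A @ x_star                      # consistent system: a_i^T x_star = b_i for all i

def grads(x):
    """Row i is the gradient of f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
    return A * (A @ x - b)[:, None]

def variance(x):
    g = grads(x)
    return np.mean(np.sum((g - g.mean(axis=0)) ** 2, axis=1))

# Every component f_i is minimized at x_star, so the variance at the
# minimizer is sigma_*^2 = 0 ...
assert variance(x_star) < 1e-20
# ... while at other points the component gradients genuinely disagree,
# so no comparably small uniform bound sigma^2 can hold.
assert variance(np.zeros(dim)) > 1e-6
```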
As for the Hölder smoothness, it is important to mention that the very fact that stochastic AdaGrad methods, including the accelerated algorithm, provably work for the entire Hölder class (and not just for its extreme subclasses, those with bounded or Lipschitz continuous gradients, which was known before from [LYC18,KLBC19]) is very recent and was proved only in [RKWAC24]. The extensions of the corresponding results to $\sigma_*$-bounds and to explicit variance reduction are both highly nontrivial tasks. We would also like to point out that, for each of the above three contributions, we present not only the basic methods but also the accelerated ones, which is always rather challenging. With that being said, we hope the reviewers could take another look at our work and increase their scores. Finally, we attach the PDF containing the two pictures clarifying the geometry of our accelerated methods, as discussed with Reviewer soXZ. References: - [RKWAC24] A. Rodomanov, A. Kavis, Y. Wu, K. Antonakopoulos, V. Cevher. Universal Gradient Methods for Stochastic Convex Optimization. ICML, 2024. - [AK23] A. Attia and T. Koren. SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to Unknown Parameters, Unbounded Gradients and Affine Variance. ICML, 2023. - [FTCMSW22] M. Faw, I. Tziotis, C. Caramanis, A. Mokhtari, S. Shakkottai, R. Ward. The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance, COLT, 2022. - [WS21] B. Woodworth, N. Srebro. An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning. NeurIPS, 2021. - [KLBC19] A. Kavis, K. Levy, F. Bach, V. Cevher. UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization. NeurIPS, 2019. - [LYC18] K. Y. Levy, A. Yurtsever, V. Cevher. Online Adaptive Methods, Universality and Acceleration. NeurIPS, 2018. Pdf: /pdf/74a0039fa2c5ec8a5e20b1ea85ab37a21527837e.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
DAPE: Data-Adaptive Positional Encoding for Length Extrapolation
Accept (poster)
Summary: This paper proposes a simple learnable positional encoding called CAPE that boosts the length extrapolation performance of Transformer language models. Strengths: The empirical performance of CAPE is substantially better than previous positional encodings. I also think Figure 1 nicely demonstrates the flexibility of CAPE, i.e., CAPE can learn both local and anti-local attention heads. Weaknesses: The speed is unbearably slow in my opinion. While I appreciate the honesty of reporting speed differences in Table 1, the authors really need to figure out a way to improve CAPE's training and inference efficiency. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What are the potential ways to improve the speed of CAPE? Please be as concrete as possible. 2. Can the authors test CAPE using needle in a haystack? This way the readers can have a clearer picture of how CAPE is using long context information. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer QhvW, Thank you very much for appreciating our work. We will address your concerns below. **Q1: The speed of training** A1: We will answer the question in three parts: 1) **the potential ways to improve the speed of CAPE**; 2) **the additional training cost ratio gradually decreases with a larger model size**; 3) **CAPE can in fact speed up training**. Please refer to the more detailed answer to **CAPE computation cost** in the **Author Rebuttal by Authors**. **The potential ways to improve the speed of CAPE**: * **Reduce the size of $D_{CAPE}$**. The computation cost is $O(hN^2D_{CAPE})$, so halving $D_{CAPE}$ halves the CAPE computation cost. * **Algorithms that are more efficient than an MLP**. The MLP in CAPE can be replaced with more efficient operations, such as a sparse MLP. * **Sparsity**: we can make the MLP operation sparse to speed up CAPE. As shown in Figure 6, our CAPE works well with only $D_{CAPE}=4$, while Table 1 is measured with $D_{CAPE}=32$. * Pruning: prune the MLP to remove redundant parameters. * Dynamic sparsity: we could use dynamic sparse training methods, such as Sparse Momentum, Dynamic Sparse Reparameterization (DSR), Dynamic Sparse Training (DST), and so on. * **Data parallelism**: CAPE consists of fully-connected layers, so better data parallelism can significantly reduce the time. * **GPUs with high memory bandwidth**. CAPE reads/writes matrices of size $N^2$, so a GPU with higher memory bandwidth will help improve its speed. * **GPUs with high compute capability**. CAPE consists of a multi-layer perceptron, which is computationally dense, so a GPU with better compute capability can help speed it up. * Finally, as hardware continues to improve, the cost of CAPE will become acceptable. 
For example, with the development of GPUs, large models have gradually become accepted, while training a 175B model would have been unimaginable 10 years ago. **The additional training cost ratio gradually decreases with a larger model size, compared to the baseline Kerple.** The following is the time cost with a training length of 512 and micro_gpu_batch_size 1.

| Method | 350M Total | Ratio (vs. CAPE-Kerple) | 2.7B Total | Ratio (vs. CAPE-Kerple) | 6.7B Total | Ratio (vs. CAPE-Kerple) |
|------|------|------|------|------|------|------|
| RoPE | 210.01 | 0.9366 | 472.63 | 1.1187 | 635.57 | 0.8858 |
| T5's bias | 355.16 | 1.5839 | 537.62 | 1.2725 | 808.85 | 1.1273 |
| Alibi | 172.60 | 0.7697 | 325.95 | 0.7715 | 596.77 | 0.8317 |
| **Kerple** | 189.91 | **0.8469** | 370.32 | **0.8765** | 661.82 | **0.9224** |
| FIRE | 248.13 | 1.1066 | 432.63 | 1.0240 | 797.68 | 1.1118 |
| **CAPE-Kerple** | 224.22 | **1.0000** | 422.48 | **1.0000** | 717.46 | **1.0000** |

Apparently, as the model becomes larger, the additional computational cost of CAPE gradually decreases. Therefore, CAPE may be a good choice for extremely large language models. 
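To make the $O(hN^2D_{CAPE})$ count above concrete, here is a minimal NumPy sketch of a data-adaptive additive bias of the kind described (a two-layer LeakyReLU MLP applied across the head dimension at every query-key position). The exact architecture, shapes, and initialization are assumptions of this sketch, not the released code.

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def adaptive_bias(attn, static_bias, W1, W2):
    """attn, static_bias: (h, N, N). Returns a data-adaptive additive bias.

    At each (query, key) position the h attention logits and h static biases
    are concatenated and passed through a 2-layer MLP with hidden width
    D_CAPE, so the cost is O(h * N^2 * D_CAPE).
    """
    feats = np.concatenate([attn, static_bias], axis=0)        # (2h, N, N)
    hidden = leaky_relu(np.einsum('dc,cij->dij', W1, feats))   # (D_CAPE, N, N)
    return np.einsum('hd,dij->hij', W2, hidden)                # (h, N, N)

h, N, D_CAPE = 4, 16, 8
rng = np.random.default_rng(0)
attn = rng.standard_normal((h, N, N))
# e.g. an Alibi/Kerple-style static bias depending on the relative distance i - j
static_bias = -np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])[None] * np.ones((h, 1, 1))
W1 = rng.standard_normal((D_CAPE, 2 * h)) / np.sqrt(2 * h)
W2 = rng.standard_normal((h, D_CAPE)) / np.sqrt(D_CAPE)

bias = adaptive_bias(attn, static_bias, W1, W2)
assert bias.shape == (h, N, N)
# Data-adaptive: changing the attention logits changes the resulting bias.
bias2 = adaptive_bias(attn + 1.0, static_bias, W1, W2)
assert not np.allclose(bias, bias2)
```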
**Moreover, CAPE can indeed speed up training compared to the currently popular RoPE.**

| Evaluation | RoPE, Length 4096 & Batch 1 | Kerple, Length 512 & Batch 8 | CAPE-Kerple, Length 128 & Batch 32 | CAPE-Kerple, Length 512 & Batch 8 | CAPE-Kerple, Length 1024 & Batch 4 | CAPE-Kerple, Length 2048 & Batch 2 | CAPE-Kerple, Length 4096 & Batch 1 |
|------|------|------|------|------|------|------|------|
| 128 | 38.36 | 33.04 | 31.49 | 32.22 | 33.22 | 34.71 | 36.65 |
| 256 | 33.21 | 29.11 | 28.27 | 28.32 | 29.02 | 30.08 | 31.57 |
| 512 | 27.33 | 24.68 | 24.93 | 23.88 | 24.14 | 24.77 | 25.68 |
| 1024 | 25.49 | 23.82 | 24.31 | 22.62 | 22.62 | 23.09 | 23.80 |
| 2048 | 23.55 | 24.03 | 23.34 | 21.16 | 21.00 | 21.30 | 21.84 |
| 4096 | **24.58** | 30.76 | 24.38 | **21.79** | 21.34 | 21.45 | 21.83 |
| 8192 | 152.54 | 36.81 | 25.01 | 21.70 | 21.12 | 21.24 | 21.50 |
| Time Cost (ms) | **265.48** | 117.10 | 128.94 | **192.45** | 314.86 | 547.78 | 1217.34 |

With the same number of training tokens, CAPE with a training length of 512 and batch size of 8 achieves performance comparable to RoPE with a training length of 4096 and batch size of 1. Also, CAPE with a training length of 512 and batch size of 8 takes only 192.45 ms, while RoPE takes 265.48 ms. Therefore, CAPE could be a choice for speeding up training in the future. **Q2: How CAPE uses long context information, and the haystack test** A2: We analyze how CAPE uses long context information via visualization, as shown in Figure 1 and the Appendix. According to Figure 1 and the Appendix, CAPE not only helps the model pay attention to local information but also helps it look at information far away, with different heads having different functions. The haystack test requires more training (including pretraining and alignment) so that the model can follow the instructions to finish the test, and currently we do not have the resources to train such a model. However, in our CHE benchmark, there is one task named missing duplicate that may be similar to the haystack test. 
* Haystack test: * It works by embedding specific, targeted information (the “needle”) within a larger, more complex body of text (the “haystack”). * The goal is to assess an LLM’s ability to identify and utilize this specific piece of information amidst a vast amount of data. * CHE Benchmark Missing Duplicate Task: * The input is a binary string of the form $ww$ where $w$ is itself a binary string. **One token in this string has been hidden, and the network must find out which one it is**, by looking at the value at the same position but on the other side of the string. For instance, if $x = ab$_$aba$ (i.e., $w = aba$), then $y = a$. * The goal is also to assess a model's ability to identify and utilize this specific piece of information amidst a vast amount of data. * As shown in Table 2, Kerple only achieves 79.06% accuracy, while CAPE-Kerple achieves 87.56%. This suggests CAPE's potential on haystack-style tests. If there are any questions, please let us know. And if you think that we have addressed your concerns, could you please consider raising the score? Thank you very much for your support. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal. Comment: I like the additional details provided by the authors. I encourage the authors to continue pushing the efficiency of CAPE. I increased the score to 6. --- Reply to Comment 1.1.1: Title: Response to Reviewer QhvW Comment: Dear Reviewer QhvW, Thank you very much for your reply, for improving the score, and for your encouragement. We will continue to refine our work, focusing on both efficiency and effectiveness. We sincerely hope that our efforts will contribute to and inspire the entire research community.
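The missing duplicate task described above can be generated in a few lines; this is a sketch of the task format only, and the benchmark's exact tokenization may differ.

```python
import random

def missing_duplicate(length=6, seed=0):
    """Make one instance: the input is w + w with one position masked by '_';
    the target is the hidden symbol, recoverable from the mirrored position."""
    rng = random.Random(seed)
    w = ''.join(rng.choice('ab') for _ in range(length))
    s = w + w
    hide = rng.randrange(len(s))
    x = s[:hide] + '_' + s[hide + 1:]
    y = s[hide]
    return x, y

def solve(x):
    """Read the answer off the mirrored half, as the model is expected to do."""
    half = len(x) // 2
    i = x.index('_')
    return x[(i + half) % len(x)]

x, y = missing_duplicate()
assert solve(x) == y
```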
Summary: Considering that the fixed parameters of RoPE may lead to generalization issues, this paper introduces a dynamic position embedding method named CAPE, where the position encoding depends on the input context. Specifically, CAPE enables test-time adaptation to the input context by using a two-layer LeakyReLU neural network to parameterize the positional bias added to the vanilla attention module. By carefully selecting the hyper-parameters, CAPE can achieve better performance than previous methods.

Strengths:
1. The concept is intriguing. Making position embeddings data-dependent can potentially enhance the model's performance.
2. The authors conducted numerous experiments on small language models and offered valuable insights.
3. This paper includes several additional experiments in the appendix, which provide valuable insights.

Weaknesses:
1. CAPE introduces extra computation and can lower training and inference efficiency.
2. The choice of hyperparameters is significant for CAPE, as it is correlated with both model performance and efficiency.
3. There are no experiments on LLMs.

Technical Quality: 2
Clarity: 2

Questions for Authors:
1. Can CAPE adapt to LLMs, for example, Llama3-8B, with few post-training steps?
2. What is the context length boundary of CAPE? The experiments in this paper only demonstrate a maximum context length of 8192.
3. The authors claim that "CAPE is semantically dependent and adaptive." What does "semantically dependent" mean? If CAPE is used for a summarization task with different context lengths (perhaps 8192 or 4096), what does "semantic" refer to in the summarization task? Providing a few examples could help clarify the writing.
4. About motivation: Even though positional encoding (PE) is fixed during inference, attention still relies on the input context.
What if we consider a scenario where PE simply provides the context index to help the language model distinguish the positions of each token (static), while the attention module dynamically selects key information and provides semantic information to assist the language model? Why does CAPE need to incorporate such a dynamic selection mechanism, which is already learned by LLMs, into the PE?

Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: See the weakness and question parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Dear Reviewer NzBk,

Thank you very much for appreciating our work. We address your concerns below.

**Q1: The efficiency of CAPE**

A1: **As the model size increases, the ratio of additional computing cost decreases, compared to the baseline Kerple.** Moreover, CAPE can even speed up training, because a smaller training length with CAPE can still achieve good performance. We discuss this in **CAPE computation cost** in **Author Rebuttal by Authors**.

**Q2: The choice of $D_{CAPE}$**

A2: CAPE is relatively robust to the choice of $D_{CAPE}$, as shown in Figure 6. We find that a modest $D_{CAPE}$ (not a large one) is sufficient for satisfactory performance. For convenience, we copy the results (Arxiv dataset) below.

| | 512 | 1024 | 2048 | 4096 | 8192 |
|------|---------|---------|---------|---------|---------|
| $D_{CAPE}$ 4 | 4.54 | 4.27 | 4.38 | 4.28 | 4.06 |
| $D_{CAPE}$ 8 | 4.53 | 4.26 | 4.33 | 4.17 | 3.97 |
| $D_{CAPE}$ 16 | 4.52 | 4.24 | 4.26 | 4.08 | 3.86 |
| $D_{CAPE}$ 32 | 4.50 | 4.22 | 4.22 | 4.04 | 3.82 |
| $D_{CAPE}$ 64 | 4.50 | 4.21 | 4.22 | 4.04 | 3.85 |

**Q3: Experiments on LLMs**

A3: We further conduct experiments at the 2.7B and 6.7B model sizes, which show that CAPE still works well. We analyze the 2.7B and 6.7B results in **Result on Large Model Size 2.7B and 6.7B** in **Author Rebuttal by Authors**. We add the larger-model (2.7B and 6.7B) experiments below, with a micro batch size per GPU of 4 and training length 512 (Books dataset).
| Model size | Method | 512 | 1024 | 2048 | 4096 |
|-------|-------|-------|-------|-------|-------|
| 2.7B | RoPE | 21.01 | 25.00 | 48.13 | 160.59 |
| | RPE | 21.10 | 21.88 | 23.59 | 33.23 |
| | Kerple | 21.14 | 22.08 | 23.38 | 27.21 |
| | CAPE-Kerple | 20.52 | 21.01 | 20.23 | 19.57 |
| 6.7B | RoPE | 20.86 | 22.27 | 28.01 | 110.00 |
| | RPE | 20.79 | 21.60 | 22.32 | 26.31 |
| | Kerple | 20.71 | 21.57 | 22.07 | 24.48 |
| | CAPE-Kerple | 20.09 | 20.54 | 19.83 | 19.32 |

**Q4: Can CAPE adapt to LLMs, for example, Llama3-8B, with few post-training steps?**

A4: This comment is quite helpful. Although we train all transformer models with CAPE from scratch (which is the main reason we did not try very large transformer models, given our computation limitations), it is possible to train only the CAPE part (i.e., the introduced MLP) while freezing (or fine-tuning) all other parameters of the transformer model. This strategy would substantially accelerate training and require fewer post-training steps. We will discuss the possibility of adapting CAPE to LLMs with fewer post-training steps in the paper.

**Q5: What is the context length boundary of CAPE? In this paper, experiments only demonstrate a maximum context length of 8192.**

A5: We further validate CAPE at length 16384, which shows that CAPE still works well. At length 32768, the GPU reports out-of-memory. Therefore, we believe the context boundary of CAPE exceeds 16384, with a training length of 128.

| Method | 128 | 256 | 512 | 1024 | 2048 | 4096 | 8192 | 16384 |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Kerple | 31.96 | 29.02 | 29.70 | 42.74 | 56.24 | 73.59 | 87.03 | 93.38 |
| CAPE-Kerple | 31.44 | 28.25 | 24.93 | 24.33 | 23.29 | 24.32 | 24.93 | 25.33 |

**Q6: What does "semantically dependent" mean?**

A6: Thank you for pointing this out. "Semantically dependent" indicates that our position encoding values depend on the semantics (represented by the attention score in this paper).
For clarity of presentation, we will change this wording to "context-adaptive" or "contextually dependent", and we will also explain that the context in this paper refers to the attention score.

**Q7: Why does CAPE need to incorporate such a dynamic selection mechanism, which is already learned by LLMs, into the PE?**

A7: The reason is that CAPE is more expressive. Suppose that the "optimal" attention mechanism is composed of the key-query multiplication (denoted as $A(x)=XW_Q(XW_K)^T$) and an additive positional encoding bias (denoted as $B(x)$), i.e., $A_{optimal}(x)=XW_Q(XW_K)^T+B(x)$. Here, $x$ and $X$ are the input sequence and the corresponding token embeddings. In previous static PEs, $B(x)$ is set as a constant for all input sequences, i.e., $B(x) = B$, and the constant $B$ is optimized across all samples $\{x\}$ during training. However, our main contribution and claim is that the optimal positional encoding should vary for different sequences. Therefore, we propose the dynamic context-adaptive PE (CAPE), where $B(x)$ depends on both the sequence and the positional information. In contrast with static PE, the proposed CAPE adjusts dynamically with the input context and can be optimal for each input. From a higher-level perspective, a general fixed and static PE (even one learned from data) is an averaged optimal positional encoding over all training samples, while a dynamic PE is context-dependent and can be optimal for each sample. That is the core motivation for using a dynamic PE rather than a static PE. If the optimal solution is $B(x) = B$, then $f(XW_Q(XW_K)^T, B)$ can be reduced to zero, since $f(\cdot)$ is a universal approximator (a two-layer MLP with activation). Therefore, our CAPE is at least no worse than the baseline.

If there are any questions, please let us know. And if you think that we have addressed your concerns, could you please consider raising the score? Thank you very much for your support.
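As a sanity check on the reduction argument above, here is a minimal NumPy sketch. The array shapes, weight names, and the per-position two-layer MLP layout are illustrative assumptions, not the paper's exact implementation; it only demonstrates that when the second layer of $f$ is zero, the CAPE logits collapse to the static form $A + B$.

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    # Elementwise LeakyReLU activation
    return np.where(x > 0, x, slope * x)

def cape_logits(attn, bias, W1, b1, W2, b2):
    """Compute A + B + f(A, B), with f a per-position two-layer MLP.

    attn: (heads, L, L) query-key scores A; bias: (heads, L, L) static additive bias B.
    The MLP mixes the 2*heads channels [A; B] independently at every (i, j) position.
    """
    x = np.concatenate([attn, bias], axis=0)        # (2*heads, L, L)
    x = np.moveaxis(x, 0, -1)                       # (L, L, 2*heads)
    hidden = leaky_relu(x @ W1 + b1)                # (L, L, d)
    delta = np.moveaxis(hidden @ W2 + b2, -1, 0)    # (heads, L, L)
    return attn + bias + delta

rng = np.random.default_rng(0)
heads, L, d = 2, 4, 8
A = rng.normal(size=(heads, L, L))
B = rng.normal(size=(heads, L, L))
W1, b1 = rng.normal(size=(2 * heads, d)), rng.normal(size=d)

# With a zero second layer, f(A, B) == 0: CAPE reduces to the static PE A + B.
static = cape_logits(A, B, W1, b1, np.zeros((d, heads)), np.zeros(heads))
assert np.allclose(static, A + B)
```

With nonzero second-layer weights, the added term varies with the attention scores, which is the extra expressiveness claimed above.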
---

Rebuttal Comment 1.1:

Title: Response to Authors

Comment: I appreciate the authors' response, and I believe conducting experiments on LLMs is essential for this paper. I hope the authors can incorporate the results of these experiments in the revision. However, some questions still confuse me.

1. First, the formulation $A(x) = XW_{Q}(XW_{K})^{T} + B(x)$. Using RoPE as an example, which can be written as $A(x) = (Q+f(Q))(K + f(K))^{T} = QK^{T} + Qf(K)^{T} + f(Q)K^{T} + f(Q)f(K)^{T}$: is $B(x)$ equal to $Qf(K)^{T} + f(Q)K^{T} + f(Q)f(K)^{T}$, or is $B(x)$ equal to $f(Q)f(K)^{T}$? The authors mentioned that "B(x) is set as a constant for all input sequences"; however, for RoPE, $B(x)$ has $Q$ and $K$ terms (if $B(x)$ is the former formulation), which is input-dependent rather than a fixed term.
2. Can you explain why CAPE has better extrapolation than other RPE methods, e.g., YaRN, from an intuitive perspective?
3. Does the "better extrapolation than other RPE methods" in the paper mean that CAPE performs better than other RPE methods at the same extrapolation length (e.g., when YaRN and CAPE both scale the model context length to 8192, CAPE has better performance)? Or does CAPE potentially have a longer context scaling length than YaRN (as you know, YaRN has a limited scaling length)?

I hope the authors can respond to those questions.

---

Rebuttal 2:

Title: Response to Reviewer NzBk

Comment: Dear Reviewer NzBk,

Thank you very much for your reply. We answer your questions below.

**Q1: RoPE and its B(x)**

A1: **We claim that "B(x) is set as a constant for all input sequences" in the context of additive RPE.** Usually, RoPE is not considered an additive relative position encoding, as discussed in Section 2.2 of the FIRE paper [3].
We answer the question about RoPE in two parts: **1) the definition of additive RPE**; **2) a general view of $B(x)$ and RoPE**.

**The definition of additive RPE, which has the formulation $A(x) = XW_{Q}(XW_{K})^{T} + B$, where $B$ is induced by a position encoding function**

According to the FIRE paper [3], additive relative position encoding can be represented by the formulation $A(x) = XW_{Q}(XW_{K})^{T} + B$, where $B \in \mathbb{R}^{N \times N}$ is induced by a **position encoding function** $\mathbb{N}^2 \to \mathbb{R}$. More specifically, $B = g(D)$, where $g(\cdot)$ is a position encoding function and $D$ is the distance matrix. Usually, $D$ is the following (lower-triangular causal distances):

```python
D = [[0, 0, 0],
     [1, 0, 0],
     [2, 1, 0]]
```

Therefore, when we discuss $A(x) = XW_{Q}(XW_{K})^{T} + B$, $B$ should usually fulfill the requirements above. According to the FIRE paper [3], RoPE is not an additive relative position encoding, so RoPE does not have such a $B$. Hence, under the original definition, RoPE has neither $B$ nor $B(x)$. For our $B(x)$, we mainly focus on how to utilize the additive RPE matrix $B$ together with the attention score, so there should be an additive relative position encoding bias matrix $B$. Considering the definition above, we have the following:

* **The naive query and key implementation:** $XW_{Q}(XW_{K})^{T}$
* **The part that utilizes both the bias matrix $B$ and $XW_{Q}(XW_{K})^{T}$:** $B(x)$
* **The part that is fixed after training:** $B$
* For previous naive additive position encoding methods: $B(x)=B$
* For our CAPE: $B(x)=B+f(XW_{Q}(XW_{K})^{T}, B)$

**A general view of $B(x)$ and RoPE**

If we do not adopt the definition from FIRE [3] and instead make the definition of $B(x)$ more general, then we have the following.
* **The naive query and key implementation:** $XW_{Q}(XW_{K})^{T}$
* **The part remaining after removing the naive query-key term:** $B(x)$
* **The part that is fixed after training (for additive RPE):** $B$
* For previous naive additive position encoding methods: $B(x)=B$
* For our CAPE: $B(x)=B+f(XW_{Q}(XW_{K})^{T}, B)$
* For RoPE: $B(x)=Qf(K)^{T} + f(Q)K^{T} + f(Q)f(K)^{T}$

From this general additive RPE perspective, this may be why RoPE is better than its baseline, sinusoidal encodings.

**Q2: Why CAPE is better than other RPE methods, e.g., YaRN**

A2: Compared to other RPEs such as YaRN (also a great work), we further improve the strong existing methods Kerple and FIRE [3] via dynamic position encoding, whereas YaRN improves performance via sequence-length-related adjustment.

* **Our baselines are powerful.** Figure 1 of the FIRE paper [3] (on page 2) compares Kerple, FIRE, and other RPEs (including YaRN); the results show that Kerple and FIRE achieve relatively good performance.
* **CAPE addresses a limitation of the baselines Kerple and FIRE, whose position encodings are fixed after training.** As we claim in our paper, although additive RPE achieves good performance, its position encoding is fixed after training. Hence, we further improve additive relative position encoding by dynamically adjusting the position encoding via the attention score.
* Therefore, our method can achieve relatively better performance than other methods.

**Q3: The definition of "better extrapolation than other RPE methods"**

A3: Better extrapolation means that, under the same experimental setting (training length, training tokens, and so on), our proposed method achieves better performance at evaluation length $T_{eval}$, where $T_{eval}$ is larger than the training length $T_{train}$.
For example, if we train CAPE-Kerple with **a maximum training length of 1024**, we say that CAPE-Kerple has better extrapolation performance if it achieves better performance at evaluation length 8192. We follow the definition of length extrapolation from previous works [1][2][3].

**Thank you very much for your reply. If there is any other question, please let us know.**

References:

[1] Press, O., Smith, N., & Lewis, M. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. ICLR, 2022.

[2] Chi, T.-C., Fan, T.-H., Ramadge, P. J., & Rudnicky, A. KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation. NeurIPS, 2022.

[3] Li, S., You, C., Guruganesh, G., Ainslie, J., Ontanon, S., Zaheer, M., ... & Bhojanapalli, S. Functional Interpolation for Relative Positions Improves Long Context Transformers. ICLR, 2024.

---

Rebuttal 3:

Title: Re-Response to Authors

Comment: OK, I appreciate the authors' professional response. In your paper, on line 233, the authors mention, "As shown in Figure 4 and Table 5, CAPE consistently outperforms established baselines such as RoPE" (I know you mean Figure 2; please correct this in the final revision). It is surprising to me that additive position embedding works better than RoPE-based methods. I also read the general response by the authors; "Q2: Result on Large Model Size 2.7B and 6.7B (Reviewer tTna, Reviewer NzBk)" also surprised me, as it indicates that additive RPE can achieve much better results than RoPE under context scaling settings.

1. Can you explain why additive RPE is better than the RoPE-based model here, intuitively?
2. Are all the experiments in Figure 2 fair and consistent, including model size, training data, etc.?
3. From the results in Figure 2, it seems that the mainstream RoPE is not as effective as additive RPE. What do you think about this phenomenon?
Should the mainstream RPE now use additive RPE rather than cumulative RPE like RoPE?

I hope the authors answer the above questions, which are very important for my judgment of your work.

---

Rebuttal Comment 3.1:

Title: Response to Reviewer NzBk (Part 1/2)

Comment: Dear Reviewer NzBk,

Thank you very much for pointing out the issue on line 233; we have fixed it for the final revision. And thank you for your questions. We answer them below, step by step.

**Q1: Can you explain why additive RPE is better than the RoPE-based model here, intuitively?**

A1: Additive RPE has two useful attributes: 1) it can have explicit long-term decay (a local position pattern); 2) it can have an anti-local pattern. We explain each below.

**Explicit long-term decay (local position pattern)**

* The RoPE paper claims that long-term decay is important for long context, and RoPE achieves it via implicit long-term decay.
* Since additive RPE has the formulation $A(x)=XW_Q(XW_K)^T+B$, we can implement explicit long-term decay (a local position pattern) via a bias matrix $B$ with negative values.
* For example, $B(i,j) = -r_1\log(1+r_2|i-j|)$ (the logarithmic variant), where $r_1, r_2>0$ are learnable scalars.

**The anti-local pattern: emphasize far-away keys more**

* For long context, we cannot abandon long-distance information; otherwise attention degenerates into local attention. Therefore, an anti-local position pattern is important so that the model can attend to long-distance information.
* RoPE has long-term decay, so as distance increases, the weight of long-distance information becomes smaller. **Therefore, RoPE does not have an anti-local position pattern.**
* The additive RPE FIRE successfully achieves an anti-local pattern, as shown in Figure 4 of its paper.
* Our CAPE-Kerple also achieves an anti-local position pattern, as shown in Figure 1 and Appendix F. For example, as the distance increases, the bias value is non-decreasing.
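As a small, self-contained sketch of the logarithmic bias variant above ($r_1$ and $r_2$ are learnable scalars in the actual model; they are fixed here purely for illustration):

```python
import numpy as np

def kerple_log_bias(seq_len, r1=1.0, r2=1.0):
    """Additive bias B(i, j) = -r1 * log(1 + r2 * |i - j|).

    The penalty grows with |i - j|, giving the explicit long-term decay
    (local position pattern) discussed above.
    """
    idx = np.arange(seq_len)
    dist = np.abs(idx[:, None] - idx[None, :])  # pairwise token distances
    return -r1 * np.log1p(r2 * dist)

B = kerple_log_bias(5)
# Zero penalty on the diagonal; increasingly negative with distance.
assert B[0, 0] == 0.0
assert B[0, 4] < B[0, 1] < 0
```

An anti-local pattern would instead make the bias non-decreasing with distance; in CAPE, such an adjustment can be produced dynamically by the MLP when it is useful.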
Therefore, since additive RPE can have both local and anti-local position patterns, it may be better.

**Q2: Are all the experiments in Figure 2 fair and consistent, including model size, training data, etc.?**

A2: **We promise that the experiments are fair and consistent.** We follow the training protocol from Kerple, **and we only change the position encoding method across experiments.** The model size in Figure 2 is 125M, and the model size in Figure 4 is 350M. The experiment settings are shown in Appendix B, and we copy them here.

| | 125M | 350M |
|-----------------------------|-------------|------------|
| Training sequence length | 512 | 512 |
| Batch size | 32 × 8 | 32 × 8 |
| Number of iterations | 50k | 50k |
| Dropout prob. | 0.0 | 0.0 |
| Attention dropout prob. | 0.0 | 0.0 |
| Attention heads | 12 | 16 |
| Feature dimension | 768 | 1024 |
| Number of layers | 12 | 24 |
| Optimizer | Adam | Adam |
| Optimizer betas | [0.9, 0.95] | [0.9, 0.95] |
| Learning rate | 6e-4 | 3e-4 |
| Precision | float16 | float16 |

---

Rebuttal 4:

Title: Response to Reviewer NzBk (Part 2/2)

Comment: **Q3: From the results in Figure 2, it seems that the mainstream RoPE is not as effective as additive RPE. What do you think about this phenomenon?**

A3: The following is our personal opinion. RoPE is the mainstream method for three reasons: 1) the timing of RoPE versus additive RPE; 2) within the training length, RoPE has performance comparable to additive RPE; 3) the development of LLaMA.

**The timing of RoPE and additive RPE**

The RoPE paper has been on arXiv since 20 April 2021, predating ALiBi (27 Aug 2021), Kerple (20 May 2022), and FIRE (3 Oct 2023).
**Within the training length, RoPE has performance comparable to additive RPE.**

As shown in Figure 2 and Table 5, within the training length, RoPE achieves performance close to Kerple and FIRE. For example, with a training length of 512, RoPE achieves 4.5755 ppl, Kerple 4.5817, and FIRE 4.5741. Therefore, within the training length, the difference is not very large. Additive RPE shows its superiority in length extrapolation, a problem that has only recently become important. Since the community previously focused mainly on performance within the training length, RoPE was sufficient.

**The development of LLaMA**

* LLaMA uses RoPE. Therefore, anyone working on open-source LLMs tends to build on LLaMA and hence uses RoPE.
* Consequently, YaRN, CLEX, ChunkLLaMA, and other works that focus on length extrapolation or long context all focus on RoPE.
* Also, data currently matters more than architecture, so the architecture has not changed much from LLaMA to LLaMA 3.

Hence, the current mainstream position encoding method is RoPE.

**Q4: Should the mainstream RPE now use additive RPE rather than cumulative RPE like RoPE?**

A4: This is an interesting question, and we are also curious about it. Both additive PE and RoPE-based PE try to combine key-query similarity with the positional information of tokens, but they adopt different operations: addition and multiplication, respectively. In this paper, we developed an additive context-adaptive PE. It is hard to say which kind of PE (additive or RoPE-based) will ultimately be the mainstream (RoPE is widely recognized and used in the LLaMA models). Based on the current evidence and experimental results, additive RPE may be the better choice.

* FIRE has been shown to achieve better performance than RoPE, both within and beyond the training length.
* CAPE further improves performance within the training length: as shown in our paper (Figure 2 and Table 5), CAPE helps additive RPE achieve better performance within the training length.
* CAPE also improves length extrapolation: as shown in our paper, CAPE helps additive RPE achieve better performance beyond the training length.
* Therefore, based on the current evidence and experimental results, additive RPE may be better.

**Thank you very much for your constructive comments. If there is any further question, please let us know.**

---

Rebuttal Comment 4.1:

Title: Final Response to Authors

Comment: Dear Authors,

Glad to hear the authors' constructive responses, both the general and individual ones. The authors conducted numerous experiments during the discussion period and offered many insightful opinions and explanations about the work. I hope this paper can be accepted by NeurIPS, as it may contribute to the LLM community. By the way, I hope the authors make sure the final revision includes the modifications from the discussion phase, as they can better help readers understand the work and drive progress in the field of long-context models. I have no further questions and will raise my score from 6 to 8.

Best,
Reviewer NzBK

---

Reply to Comment 4.1.1:

Title: Response to Reviewer NzBk

Comment: Dear Reviewer NzBk,

Thank you very much for your reply, for your encouragement, and for your support. **We promise that our final revision will include the modifications from the discussion phase**, including but not limited to the experiments at larger model sizes, the context length boundary of CAPE, the revision of "semantically dependent", the discussion of RoPE, and so on. Again, thank you very much for your attention and support for this work, and have a good day.
Summary: The paper proposes context-adaptive positional encoding. It uses a two-layer MLP to non-linearly integrate positional bias information (which can be computed using prior methods like Alibi, FIRE, and Kerple) with query-key content-based dot-product values representing semantic relations across different heads, dynamically creating a contextual position-informed matrix that additively modulates the self-attention matrix.

Strengths:
1. The idea is reasonably motivated.
2. The empirical results are promising for length extrapolation in language modeling. The experiments on Chomsky-hierarchy tasks are a nice touch.

Weaknesses:
1. If I understand correctly, the original relative encodings (from Shaw et al. and then the one in Transformer-XL) could also count as contextual. There, the position-based additive values are computed from the dot product of queries and distance encodings (treated as keys), so the context can influence the distance-related bias through the query representations. This could be better discussed in the paper. I would also be curious how Transformer-XL-style relative encoding would perform on the language modeling datasets.
2. There is some hit in computational expense. Although it is manageable, the authors don't treat each head independently but use the concatenation of heads with a shared hidden state.

Technical Quality: 3
Clarity: 3

Questions for Authors: Minor suggestions:
* It would be good to refer to the appendix implementation section G when discussing the function f and the multi-head CAPE in the main paper.
* Perhaps it would be better to present the pseudocode with einops-style syntax.

Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3

Limitations: There isn't an explicit section purely for limitations, but there is a section on computational cost analysis, which may illuminate some limitations (and the authors treat that section as a limitations section in the checklist). Overall, it's mostly adequate.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Dear Reviewer CfZF,

Thank you very much for appreciating our work. We address your concerns below.

**Q1: The performance of Transformer-XL relative encoding**

A1: We show the performance of Transformer-XL below, which also exhibits strong length extrapolation. Note that "Relative" in Table 2 is exactly the Transformer-XL relative encoding. The experiment is conducted on a 125M model with training length 512 and batch size 32.

| Method | 512 | 1024 | 2048 | 4096 | 8192 |
|------------|-----------|-----------|-----------|-----------|-----------|
| RoPE | 19.75 | 261.39 | 411.24 | 635.80 | 762.86 |
| T5's bias | 19.67 | 19.45 | 33.41 | 141.94 | 347.36 |
| Transformer-XL | 19.40 | 19.23 | 19.17 | 21.21 | 23.23 |
| Alibi | 20.04 | 19.75 | 20.17 | 20.50 | 21.31 |
| Kerple | 19.83 | 19.20 | 20.49 | 28.33 | 40.95 |
| FIRE | 19.77 | 21.09 | 103.14 | 308.58 | 484.55 |
| CAPE-Kerple | **19.25** | **18.28** | **17.20** | **17.58** | **17.85** |

According to this experiment, Transformer-XL achieves good length extrapolation performance, which also suggests that the position encoding should interact with the attention/query/key to further improve performance.

**Q2: The computational expense**

A2: Thank you for your comment on the computational cost. Yes, we concatenate the attention and bias along the head dimension, and then use an MLP to process them and dynamically adjust the position encoding values. We further analyze the CAPE cost in **CAPE computation cost** in **Author Rebuttal by Authors**.

**Q3: Mentioning appendix implementation section G when discussing the function f and the multi-head CAPE in the main paper**

A3: Thank you very much for your suggestion. We will revise the paper with the following sentence:

**Original:** It then outputs $h$-dimensional vectors, where each element corresponds to the CAPE for the respective head.
**Revised:** It then outputs $h$-dimensional vectors, where each element corresponds to the CAPE for the respective head. We show the code implementation in Appendix G.

**Q4: Presenting the pseudocode with einops-style syntax**

A4: Thank you very much for your suggestion. We will add the einops-style syntax in Appendix G.

```python
import torch
import torch.nn as nn
from einops import rearrange, repeat


class CAPE(nn.Module):
    def __init__(self, head_number=12, mlp_width=12):
        """
        CAPE attention bias module.

        Args:
            head_number: number of attention heads.
            mlp_width: width of the MLP.
        """
        super(CAPE, self).__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * head_number, mlp_width),
            nn.LeakyReLU(),
            nn.Linear(mlp_width, head_number)
        )

    def forward(self, attention: torch.Tensor, bias: torch.Tensor):
        """
        Args:
            attention: attention scores q^T * k, shape [bsz, num_heads, seq_len, seq_len].
            bias: bias matrix, which can be generated by Alibi, Kerple, FIRE,
                or other additive position encodings; shape [1, num_heads, seq_len, seq_len].

        Returns:
            Attention with CAPE, shape [bsz, num_heads, seq_len, seq_len].
        """
        # Repeat the bias for the batch size
        bias_tile = repeat(bias, '1 h l1 l2 -> b h l1 l2', b=attention.shape[0])
        # Concatenate attention and bias along the head dimension
        attention_bias_concat = torch.cat((attention, bias_tile), dim=1)
        # Rearrange the dimensions for MLP processing
        attention_bias_concat = rearrange(attention_bias_concat, 'b h l1 l2 -> b l1 l2 h')
        # Apply the MLP
        attention_bias_concat = self.mlp(attention_bias_concat)
        # Rearrange back to the original dimensions
        attention_bias_concat = rearrange(attention_bias_concat, 'b l1 l2 h -> b h l1 l2')
        return attention + bias + attention_bias_concat
```

If there are any questions, please let us know. And if you think that we have addressed your concerns, could you please consider raising the score? Thank you very much for your support.

---

Rebuttal Comment 1.1:

Title: Rebuttal Response

Comment: Thank you for the rebuttal.
It generally addresses my initial concerns. The new experiments should improve the paper. I would encourage adding more discussion comparing with Transformer-XL as well. Unlike tTna, I also do not see any technical issue, and your rebuttal makes sense to me. I maintain my acceptance score for now. But two questions:

1. I noticed that AliBi is missing from the bigger LLM experiments you provided in the response. Is there a reason for that? Do you have any comments on it?
2. I may have missed it the first time around, but how does it stack up against something like xPos? Can you comment more on that method? I noticed that the xPos paper is in the references (42), but I couldn't find the context where it is cited (if at all).

---

Rebuttal 2:

Title: Author Response

Comment: Dear Reviewer CfZF,

Thank you very much for appreciating our work. We will follow your suggestion to add the new experimental results and discuss Transformer-XL in our paper, and we will update it as soon as we are allowed to revise the paper. We answer the two questions below.

**Q1: The Alibi experiments**

A1: The Alibi experiments are as follows.
**For model size 2.7B:**

| Model size | Method | 512 | 1024 | 2048 | 4096 |
|-------|-------|-------|-------|-------|-------|
| 2.7B | RoPE | 21.01 | 25.00 | 48.13 | 160.59 |
| | Alibi | 21.23 | 22.17 | 22.91 | 23.22 |
| | T5's bias (RPE) | 21.10 | 21.88 | 23.59 | 33.23 |
| | Kerple | 21.14 | 22.08 | 23.38 | 27.21 |
| | CAPE-Kerple | 20.52 | 21.01 | 20.23 | 19.57 |

**For model size 6.7B:**

| Model size | Method | 512 | 1024 | 2048 |
|-------|-------|-------|-------|-------|
| 6.7B | RoPE | 20.86 | 22.27 | 28.01 |
| | Alibi | 20.79 | 21.63 | 22.45 |
| | T5's bias (RPE) | 20.79 | 21.60 | 22.32 |
| | Kerple | 20.71 | 21.57 | 22.07 |
| | CAPE-Kerple | 20.09 | 20.54 | 19.83 |

The reason the Alibi results were initially missing: as the model size increases, we face some engineering challenges.

* Our implementation is based on the GPT-NeoX framework, which already implements the Alibi position encodings.
* You may notice that our maximum evaluation length is 8192 for 125M and 350M, while the maximum evaluation length is 4096 for 2.7B and 2048 for 6.7B. The reason is that the Alibi position encoding runs out of memory at evaluation length 8192 for the 2.7B model and at 4096 for the 6.7B model.
* Therefore, considering two aspects (1. evaluating at lengths as long as possible; 2. the most popular position encoding method being RoPE), we mainly presented the results of RoPE, T5's bias, Kerple, and CAPE-Kerple in the rebuttal so that the evaluation length could be longer (up to 4096) for both the 2.7B and 6.7B model sizes.
* If you would like to see more experimental results, please let us know. We will try our best to finish the experiments as soon as possible.

**Q2: Discussion of XPos**

A2: CAPE is designed for additive relative position encoding, while XPos is an improved version of RoPE (note: RoPE is a relative position encoding, but NOT an additive relative position encoding). Therefore, CAPE and XPos focus on different directions of position encoding.
**The difference between XPos and CAPE**

XPos:

* The implementation of RoPE: $f_q(q,n)=qe^{i\theta n}$
* The implementation of XPos: $f_q(q,n)=qe^{\xi n+i\theta n}$
* Clearly, XPos is an improved version of RoPE, and XPos reduces to RoPE when $\xi$ becomes zero.

CAPE:

* The implementation of additive relative position encoding: $A(x)=XW_Q(XW_K)^T+B$. Here $B$ is the bias matrix, which can come from Alibi, Kerple, FIRE, or other additive relative position encoding methods.
* The implementation of CAPE: $A(x)=XW_Q(XW_K)^T+B + f(XW_Q(XW_K)^T, B)$.
* Clearly, CAPE can be applied to any additive relative position encoding method.

**The performance of XPos and CAPE-Kerple**

The BiPE paper [1] conducted experiments with XPos on the Arxiv dataset. As shown in Figure 4 of the BiPE paper [1], with a training length of 1024, the perplexity of XPos increases quickly from about 6 ppl (at evaluation length 1024) to about 16 ppl (at evaluation length 6144), while our proposed method (CAPE-Kerple) decreases from 5.21 ppl (at evaluation length 1024) to 5.00 ppl (at evaluation length 8192) with a training length of 128. This suggests that our method should be better than XPos.

In summary, XPos is an improved version of RoPE (note that RoPE is not an additive relative position encoding), while CAPE generally improves the performance of additive relative position encodings, including Alibi, Kerple, FIRE, and so on.

**Finally, if you have any questions or would like to discuss anything, please let us know. We will try our best to share our opinions or conduct experiments.**

Reference:

[1] He, Z., Feng, G., Luo, S., Yang, K., He, D., Xu, J., ... & Wang, L. (2024). Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation. arXiv preprint arXiv:2401.16421.

---

Rebuttal Comment 2.1:

Comment: Thank you for the additional details. This could be included in the paper/appendix.
--- Rebuttal 3: Title: Response to Reviewer CfZF Comment: Dear Reviewer CfZF, Thank you very much for your reply, and thank you very much for your support. We promise that we will include the additional details/experiments in the paper/appendix.
Summary: This paper offers a doable solution for long context tasks. The paper introduces the Context-Adaptive Positional Encoding (CAPE) method to enhance transformer model adaptability and flexibility in processing long input lengths and contexts. To overcome the limitations of static positional encodings such as Absolute Positional Encoding and Relative Positional Encoding, the proposed method adjusts the positional encodings based on input context and learned fixed priors. Experimental evaluation on Arxiv, Books3, and CHE shows that the proposed method significantly improves model performance in length generalization. Strengths: 1. The long-context ability of LLMs is crucial and fundamental for advancements in the LLM domain, significantly impacting downstream tasks. 2. The proposed method is intuitive and straightforward to implement and understand. 3. The performance of the proposed method, as indicated by PPL, appears promising and is well-demonstrated in Figures 2, 3, and 4. Weaknesses: 1. The major concern is in the experimental evaluation part. The experiment is limited to PPL, which is insufficient for verifying long-context ability. 2. The experiments are limited to a very small LLM model - a 124M transformer, which is not enough to verify the proposed method, as large LLMs and small LLMs behave quite differently. 3. The proposed method is not convincing to me. The proposed method is mainly in Equations (2) and (3). In Equation (2), the proposed method does not seem promising. Considering the next step ( $h(*)$ ) in the neural network, it will be $ h(A_{\text{CAPE}}(X)) = h(XW_Q(XW_K)^\top + f(XW_Q(XW_K)^\top, B)) = g(XW_Q(XW_K)^\top, B) $. Could the author please elaborate on this? 4. Similarly, for Equation (3), $ A_{\text{CAPE}}(X) = XW_Q(XW_K)^\top + B + f(XW_Q(XW_K)^\top, B) $, it has a similar problem to Equation (2). Due to the major weakness in the experiment and the unreasonable proposal of the bias term, I would like to reject this paper. 
Technical Quality: 2 Clarity: 3 Questions for Authors: See Weaknesses Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer tTna, Thank you for the detailed review. We will address your concerns below.

**Q1: The major concern is in the experimental evaluation part.**

A1: We have presented experimental results besides PPL, as shown in Section 4.7 on Page 9 and Appendix D, Experiments on the Chomsky Hierarchy Evaluation Benchmark (the benchmark and its variants are used in previous works), which **use accuracy as the evaluation metric.** For convenience, we directly copy the results of the CHE Benchmark here. We train with 200K steps and length 40, while we test on length 500. The random accuracy is 50%, except for modular arithmetic simple, cycle navigation, bucket sort, solve equation, and modular arithmetic brackets, where it is 20%. $\dagger$ denotes permutation-invariant tasks, which are expected to be solved without positional information. The first seven method columns carry the sub-header **Randomized**; the last three columns (sub-header **CAPE (Ours)**) apply CAPE to Alibi, Kerple, and FIRE.

| Level | Task | Learned | sin/cos | RoPE | Relative | ALiBi | Kerple | FIRE | CAPE-Alibi | CAPE-Kerple | CAPE-FIRE |
|-------|------|---------|---------|------|----------|-------|--------|------|------------|-------------|-----------|
| R | even pairs | 50.04 | 91.27 | 99.98 | 96.60 | 73.52 | 57.50 | 73.86 | 99.99 | 99.58 | **100** |
| | modular arithmetic simple | 19.95 | 20.39 | 21.35 | 20.84 | 20.02 | 21.79 | 21.09 | 23.58 | **24.47** | 24.46 |
| | parity check$\dagger$ | 50.14 | 50.52 | 50.05 | 50.09 | 50.09 | 50.07 | **50.97** | 50.30 | 50.07 | 50.04 |
| | cycle navigation$\dagger$ | 24.97 | 25.37 | 27.63 | 26.95 | 24.64 | 29.47 | 28.41 | 22.99 | **34.53** | 27.54 |
| DCF | stack manipulation | 59.92 | 65.92 | 61.49 | 64.73 | 66.42 | 66.93 | 69.33 | 68.18 | **72.04** | 70.90 |
| | reverse string | 52.76 | 67.28 | 65.23 | 65.59 | 71.09 | 71.54 | 65.89 | 73.37 | 70.74 | **76.40** |
| | modular arithmetic brackets | 31.00 | 30.70 | 31.25 | 31.74 | 30.56 | 24.79 | 30.92 | 31.34 | **32.37** | 31.50 |
| | solve equation | 20.00 | 19.97 | 21.85 | **22.93** | 19.92 | 21.15 | 22.06 | 20.03 | 22.49 | 22.42 |
| CS | duplicate string | 52.77 | 65.44 | 64.97 | 67.66 | 65.13 | 66.72 | 69.03 | 70.84 | **72.95** | 72.71 |
| | missing duplicate | 50.38 | 49.78 | 63.37 | 72.34 | 74.21 | 79.06 | 79.27 | 83.41 | 87.57 | **89.17** |
| | odds first | 52.77 | 58.61 | 61.00 | 61.57 | 59.88 | 62.59 | 63.28 | 63.78 | **67.08** | 66.34 |
| | binary addition | 54.63 | 55.78 | 55.59 | 56.96 | 54.72 | 56.35 | 55.70 | 59.71 | **60.88** | 56.62 |
| | compute sqrt | 50.47 | 51.11 | 51.88 | 51.63 | 50.63 | 51.11 | 50.80 | 51.64 | 51.33 | **52.46** |
| | bucket sort$\dagger$ | 98.32 | 98.92 | 98.12 | 99.31 | 98.45 | 99.38 | **99.57** | 99.38 | 98.81 | 99.37 |

**Comparative performance improvements.** CAPE consistently enhanced performance across various tasks, especially on permutation-variant tasks. Specifically, CAPE improved upon Alibi's and FIRE's results in all 11 tested permutation-variant tasks. Similarly, it outperformed Kerple in 10 of these tasks.

**Q2: Experiments on larger model sizes**

A2: The following are the results of experiments on 2.7B and 6.7B models, with training length 512 and micro_gpu_batch_size 4. We further discuss the experiments on larger model sizes under **Result on Large Model Size 2.7B and 6.7B** in **Author Rebuttal by Authors**.

| Model size | Method | 512 | 1024 | 2048 | 4096 |
|-------|-------|-------|-------|-------|-------|
| 2.7B | RoPE | 21.01 | 25.00 | 48.13 | 160.59 |
| | RPE | 21.10 | 21.88 | 23.59 | 33.23 |
| | Kerple | 21.14 | 22.08 | 23.38 | 27.21 |
| | CAPE-Kerple | 20.52 | 21.01 | 20.23 | 19.57 |
| 6.7B | RoPE | 20.86 | 22.27 | 28.01 | 110.00 |
| | RPE | 20.79 | 21.60 | 22.32 | 26.31 |
| | Kerple | 20.71 | 21.57 | 22.07 | 24.48 |
| | CAPE-Kerple | 20.09 | 20.54 | 19.83 | 19.32 |

**Q3: Explanation of Equation 2 and Equation 3**

A3: The next step of our operation is $\mathrm{softmax}$.
Therefore, $h(A_{CAPE}(X))=\mathrm{softmax}(XW_Q(XW_K)^T+B+f(XW_Q(XW_K)^T, B))\neq \mathrm{softmax}(XW_Q(XW_K)^T+B)=h(A(X))$, where $A(X)$ is the naive Transformer attention score calculation without CAPE. Let us explain the attention module in the Transformer step by step:
* Step 1 (calculate the attention score from the query $XW_Q$ and key $XW_K$):
  * Previous implementation: $A_{score}=XW_Q(XW_K)^T+B$
  * CAPE implementation: $A_{score}=XW_Q(XW_K)^T+B+f(XW_Q(XW_K)^T, B)$
* Step 2 (apply softmax to the attention score, row-wise; therefore, the **next step is** $\textbf{softmax}$):
  * $A_{scoreSoftmax}=\mathrm{softmax}(A_{score})$
* Step 3 (combine $A_{scoreSoftmax}$ with the value $XW_V$ to get the embedding of each token):
  * $output=A_{scoreSoftmax}XW_V$

If we understand correctly, the neural network $h(*)$ mentioned in the comment corresponds to the feed-forward network (FFN) layer that follows the attention layer. We would like to clarify that the MLPs introduced in CAPE are not duplicated. In CAPE, these MLPs dynamically adjust the positional encodings based on context information (**CAPE: Step 1**). Subsequently, softmax operations are applied across the attention scores, row-wise. It is important to note that CAPE's adjustments, applied during the attention phase, do not directly alter the token values but act on the attention computation. In contrast, the FFN layer that follows attention (**FFN: after Step 3**) modifies each token through a nonlinear transformation. If there are any questions, please let us know. And if you think that we have addressed your concerns, could you please consider raising the score? Thank you very much for your support.

--- Rebuttal 2: Title: Kindly Remind of Discussion Period Comment: Dear Reviewer tTna, We would like to thank you again for your detailed reviews. We have updated the experiment results and the explanation of our method in the above response.
As the discussion period will be closed soon, we would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions. We would be happy to do any follow-up discussion or address any additional comments. Again, thank you very much for your attention to our work.
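The three attention steps in A3 above can be illustrated with a small NumPy sketch. Everything here is a toy stand-in: `B` is an Alibi-like bias and `f` is a hand-written adjustment playing the role of CAPE's learned MLP (the real $f$ is trained; this one is only illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable row-wise softmax (Step 2)
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cape_attention(X, W_Q, W_K, W_V, B, f):
    scores = (X @ W_Q) @ (X @ W_K).T     # Step 1: X W_Q (X W_K)^T
    scores = scores + B + f(scores, B)   # Step 1 (CAPE): bias B plus adjustment f
    A = softmax(scores, axis=-1)         # Step 2: row-wise softmax
    return A @ (X @ W_V)                 # Step 3: weight the values

rng = np.random.default_rng(0)
N, d = 5, 8
X = rng.normal(size=(N, d))
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))
# Alibi-like additive bias; toy adjustment in place of the learned MLP
B = -np.abs(np.arange(N)[:, None] - np.arange(N)[None, :]).astype(float)
f = lambda s, b: 0.1 * np.tanh(s) * b
out = cape_attention(X, W_Q, W_K, W_V, B, f)
print(out.shape)  # (5, 8)
```

Setting `f = lambda s, b: 0.0` recovers the plain additive-RPE score $XW_Q(XW_K)^T+B$, which is the comparison the rebuttal's $\neq$ statement is making.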
Rebuttal 1: Rebuttal: Dear all reviewers: We sincerely appreciate the reviewers' time and effort on the review. We first address some common questions, followed by detailed responses to each reviewer separately. We hope our responses clarify existing doubts. We would really appreciate it if Reviewer tTna could kindly reconsider the decision, provided that the main comments are well addressed.

**Q1: CAPE computation cost (Reviewer CfZF, Reviewer NzBk, Reviewer QhvW)**

**The additional training cost ratio gradually decreases with larger model sizes, compared to the baseline Kerple.**

Theoretical analysis:
* The cost of the feed-forward network is $O(Nd_{head}^2d_{hidden}^2)=aNd_{head}^2d_{hidden}^2$, where $a$ is a constant, $N$ is the sequence length, $d_{head}$ is the number of attention heads, and $d_{hidden}$ is the dimension for attention calculation.
* The cost of attention is $O(N^2d_{head}d_{hidden})=bN^2d_{head}d_{hidden}$, where $b$ is a constant.
* The additional cost of CAPE is $O(N^2d_{head}d_{cape})=cN^2d_{head}d_{cape}$, where $c$ is a constant.
* The cost ratio is $\frac{aNd_{head}^2d_{hidden}^2+bN^2d_{head}d_{hidden}}{aNd_{head}^2d_{hidden}^2+bN^2d_{head}d_{hidden}+cN^2d_{head}d_{cape}}=\frac{ad_{head}d_{hidden}^2+bNd_{hidden}}{ad_{head}d_{hidden}^2+bNd_{hidden}+cNd_{cape}}$.

Therefore, with fixed sequence length and $d_{cape}$, as the model becomes larger (with bigger $d_{head}$ and $d_{hidden}$), the additional cost ratio of CAPE becomes much smaller. Also, we have shown in Figure 6 that CAPE still works well with a very small $d_{cape}$, such as 4. The following is the time cost with a training length of 512 and micro_gpu_batch_size 1 on the Books3 dataset.
| Method | 350M Total | Ratio (vs CAPE-Kerple) | 2.7B Total | Ratio (vs CAPE-Kerple) | 6.7B Total | Ratio (vs CAPE-Kerple) |
|------|------|------|------|------|------|------|
| RoPE | 210.01 | 0.9366 | 472.63 | 1.1187 | 635.57 | 0.8858 |
| T5's bias | 355.16 | 1.5839 | 537.62 | 1.2725 | 808.85 | 1.1273 |
| Alibi | 172.60 | 0.7697 | 325.95 | 0.7715 | 596.77 | 0.8317 |
| **Kerple** | 189.91 | **0.8469** | 370.32 | **0.8765** | 661.82 | **0.9224** |
| FIRE | 248.13 | 1.1066 | 432.63 | 1.0240 | 797.68 | 1.1118 |
| **CAPE-Kerple** | 224.22 | **1.0000** | 422.48 | **1.0000** | 717.46 | **1.0000** |

Clearly, as the model becomes larger, the additional computational cost of CAPE gradually decreases. Therefore, CAPE may be a good choice for extremely large language models.

**Moreover, CAPE can indeed speed up training compared to the currently popular RoPE:**

| Evaluation | RoPE Length 4096 & Batch 1 | Kerple Length 512 & Batch 8 | CAPE-Kerple Length 128 & Batch 32 | CAPE-Kerple Length 512 & Batch 8 | CAPE-Kerple Length 1024 & Batch 4 | CAPE-Kerple Length 2048 & Batch 2 | CAPE-Kerple Length 4096 & Batch 1 |
|------|------|------|------|------|------|------|------|
| 128 | 38.36 | 33.04 | 31.49 | 32.22 | 33.22 | 34.71 | 36.65 |
| 256 | 33.21 | 29.11 | 28.27 | 28.32 | 29.02 | 30.08 | 31.57 |
| 512 | 27.33 | 24.68 | 24.93 | 23.88 | 24.14 | 24.77 | 25.68 |
| 1024 | 25.49 | 23.82 | 24.31 | 22.62 | 22.62 | 23.09 | 23.80 |
| 2048 | 23.55 | 24.03 | 23.34 | 21.16 | 21.00 | 21.30 | 21.84 |
| 4096 | **24.58** | 30.76 | 24.38 | **21.79** | 21.34 | 21.45 | 21.83 |
| 8192 | 152.54 | 36.81 | 25.01 | 21.70 | 21.12 | 21.24 | 21.50 |
| Time Cost | **265.48** | 117.10 | 128.94 | **192.45** | 314.86 | 547.78 | 1217.34 |

With the same number of training tokens, CAPE with a training length of 512 and batch size of 8 achieves performance comparable to RoPE with a training length of 4096 and batch size of 1. Moreover, CAPE with a training length of 512 and batch size of 8 only takes 192.45 ms, while RoPE takes 265.48 ms. Therefore, CAPE could be a choice for speeding up training in the future.
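The cost-ratio formula from the Q1 analysis above can be checked numerically. This is only a sketch: the constants $a$, $b$, $c$ are implementation-dependent and are set to 1 here for illustration:

```python
def cape_cost_ratio(N, d_head, d_hidden, d_cape, a=1.0, b=1.0, c=1.0):
    # (FFN cost + attention cost) / (FFN + attention + CAPE overhead),
    # following the rebuttal's per-layer cost model
    base = a * N * d_head**2 * d_hidden**2 + b * N**2 * d_head * d_hidden
    return base / (base + c * N**2 * d_head * d_cape)

# with d_cape fixed at 4, widening the model (bigger d_head, d_hidden)
# pushes the ratio toward 1, i.e. CAPE's relative overhead shrinks
print(cape_cost_ratio(N=512, d_head=12, d_hidden=64,  d_cape=4))  # 40/41 ≈ 0.976
print(cape_cost_ratio(N=512, d_head=32, d_hidden=128, d_cape=4))  # 288/289 ≈ 0.997
```

The particular head counts and widths are made-up illustrative values, not the paper's configurations; the monotone trend is the point.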
Finally, with developments in hardware, the cost of CAPE will become more manageable. For instance, the development of GPUs has led to the widespread acceptance of large models, something that would have been unimaginable 10 years ago for training a 175-billion-parameter model.

**Q2: Result on Large Model Size 2.7B and 6.7B (Reviewer tTna, Reviewer NzBk)**

A2: We add larger-model (2.7B and 6.7B) experiments in the following, with micro_gpu_batch_size 4 and length 512 on the Books3 dataset.

| Model size | Method | 512 | 1024 | 2048 | 4096 |
|-------|-------|-------|-------|-------|-------|
| 2.7B | RoPE | 21.01 | 25.00 | 48.13 | 160.59 |
| | RPE | 21.10 | 21.88 | 23.59 | 33.23 |
| | Kerple | 21.14 | 22.08 | 23.38 | 27.21 |
| | CAPE-Kerple | 20.52 | 21.01 | 20.23 | 19.57 |
| 6.7B | RoPE | 20.86 | 22.27 | 28.01 | 110.00 |
| | RPE | 20.79 | 21.60 | 22.32 | 26.31 |
| | Kerple | 20.71 | 21.57 | 22.07 | 24.48 |
| | CAPE-Kerple | 20.09 | 20.54 | 19.83 | 19.32 |

According to the results, the proposed CAPE still works well at both the 2.7B and 6.7B model sizes. At the 2.7B model size, RoPE achieves 21.01 at evaluation length 512 and 160.59 at evaluation length 4096, while our CAPE-Kerple achieves 20.52 and 19.57, respectively. Also, CAPE-Kerple achieves the best performance at both the 2.7B and 6.7B model sizes from evaluation length 512 to 4096. This suggests that our proposed CAPE has great scalability.
NeurIPS_2024_submissions_huggingface
2024
OwMatch: Conditional Self-Labeling with Consistency for Open-World Semi-Supervised Learning
Accept (poster)
Summary: The paper introduces OwMatch, a novel method for open-world semi-supervised learning (SSL). This approach incorporates conditional self-labeling and open-world hierarchical thresholding. Additionally, the paper provides theoretical analyses that demonstrate the unbiasedness and reliability of the label assignment estimator. Rigorous experimentation confirms that the proposed method achieves a significant improvement in performance. Strengths: 1. The paper provides adequate theoretical analyses for the proposed novel method. 2. The paper is well-presented with clear organization throughout. 3. The experiments are adequate and achieve better performance compared to existing methods. Weaknesses: 1. The paper exhibits limited innovation. The two strategies proposed—Conditional Self-labeling and Open-world Hierarchical Thresholding—are commonly employed in existing OwSSL (Open-world Semi-Supervised Learning) and GCD (Generalized Category Discovery) methodologies. Specifically, the Conditional Self-labeling strategy mirrors the optimal transportation label assignment method detailed in references [1-4], and the Hierarchical Thresholding closely resembles the Momentum Prior Update described in references [3-4]. 2. The method assumes that the number of unknown classes is pre-determined and fails to address the identification of unknown classes. The estimation of the number of unknown classes is a critical issue in both OwSSL and GCD fields. 3. The comparison with other algorithms is insufficient. Recent publications such as OpenLDN, TRSSL, and OpenCon from 2022 are included; however, more recent methods, particularly those utilizing pretrained ViT (Vision Transformer) architectures, should also be compared. Additionally, comparisons with methods addressing imbalance GCD are necessary. 
[1] Towards Realistic Semi-Supervised Learning [2] OpenLDN: Learning to Discover Novel Classes for Open-World Semi-Supervised Learning [3] ImbaGCD: Imbalanced Generalized Category Discovery [4] Bootstrap Your Own Prior: Towards Distribution-Agnostic Novel Class Discovery Technical Quality: 3 Clarity: 4 Questions for Authors: See weakness Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and valuable suggestions. They play a crucial role in improving our manuscript. **[W1]** We acknowledge the ubiquity of self-labeling (SL) and adaptive thresholding (AT) techniques, and we drew inspiration from the seminal contributions of TRSSL [6], OpenLDN [7], and BYOP [8]. We propose simple, intuitive, and effective modifications - transforming SL into **conditional** SL and AT into **hierarchical** AT - to ensure better compatibility with the considered settings, thereby enhancing state-of-the-art performance. Our concepts of **conditional** and **hierarchical** are different from existing works in OwSSL. Specifically, by incorporating labeled data into the online clustering process, we can theoretically ensure the unbiasedness of the optimal-transport-based label assignment and a lower variance in the test statistics. On the other hand, the hierarchical design stems from the empirical observation of a significant disparity in overall learning conditions between seen and novel classes. Although simple, this design is effective in mitigating the instability arising from the distinct learning dynamics of seen and novel classes. Extensive experiments validate the effectiveness of every modification. Moreover, to the best of our knowledge, this is the first work proposing the expectation of chi-square statistics (ECS) to evaluate the reliability of label assignment estimation. Thank you for your professional concerns. We will put more emphasis on how our approach differs from foundational techniques in subsequent manuscripts. **[W2]** Thanks for highlighting this important aspect. We conducted additional experiments mainly following the approaches of GCD and TRSSL. Specifically, to estimate the number of classes, $k$-means clustering is performed on representations of the entire dataset from a pre-trained ViT-B/16.
The optimal value of $k$ is determined by evaluating the clustering accuracy on the labeled samples calculated by the Hungarian algorithm. This accuracy serves as a scoring function, optimized using Brent's algorithm to find the $k$ that maximizes performance on the labeled data. The estimation result is shown in **Table 4** of Reference PDF, which illustrates that estimation comes close to the ground truth. We also evaluate the method's sensitivity to the class estimation error. As illustrated in **Figure 1** of Reference PDF, our method still achieves reasonable performance over a larger range of errors. **Table 4: Estimation of the number of novel classes.** | | CIFAR10 | CIFAR100 | ImageNet-100 | |--------------|---------|----------|--------------| | Ground Truth | 10 | 100 | 100 | | Estimation | 10 | 104 | 112 | | Error | 0% | 4% | 12% | **[W3]** Here, we compare recent GCD-related works. **Table 1** of Reference PDF shows that our method outperforms existing approaches in novel-class and all-class accuracy on ImageNet100. It's worth noting that GCD-related works typically employ pre-trained ViT-Base/16 as the backbone, which has over three times more parameters than our ResNet-50. Despite a more compact model, our approach achieves superior performance. We do not compare with imbalanced GCD methods since their experimental settings are different from ours. 
**Table 1: Comparison on GCD-related work: average accuracy on the ImageNet100 with 50% novel classes and 50% labeled data within seen classes.** | Method | Ref | Backbone | Seen Acc | Novel Acc | All Acc | |------------|---------------|-----------|-------|-------|------| | GCD [1] | (CVPR'22) | ViT-B/16 | 91.8 | 63.8 | 72.7 | | SimGCD [2] | (ICCV'23) | ViT-B/16 | 93.1 | 77.9 | 83.9 | | InfoSieve [3] | (NeurIPS'23) | ViT-B/16 | 84.9 | 78.3 | 80.5 | | CiPR [4] | (TMLR'24) | ViT-B/16 | 84.9 | 78.3 | 80.5 | | PromptCAL [5] | (ICCV'23) | ViT-B/16 | 92.7 | 78.3 | 83.1 | | OwMatch+ | OURS | ResNet-50 | 91.5 | **79.6** | **85.5** | [1] Vaze, Sagar, et al. "Generalized category discovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [2] Wen, Xin, Bingchen Zhao, and Xiaojuan Qi. "Parametric classification for generalized category discovery: A baseline study." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [3] Rastegar, Sarah, Hazel Doughty, and Cees Snoek. "Learn to categorize or categorize to learn? self-coding for generalized category discovery." Advances in Neural Information Processing Systems 36 (2024). [4] Hao, Shaozhe, Kai Han, and Kwan-Yee K. Wong. "Cipr: An efficient framework with cross-instance positive relations for generalized category discovery." arXiv preprint arXiv:2304.06928 (2023). [5] Zhang, Sheng, et al. "Promptcal: Contrastive affinity learning via auxiliary prompts for generalized novel category discovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [6] Rizve, Mamshad Nayeem, Navid Kardan, and Mubarak Shah. "Towards realistic semi-supervised learning." *European Conference on Computer Vision*. Cham: Springer Nature Switzerland, 2022. [7] Rizve, Mamshad Nayeem, et al. "Openldn: Learning to discover novel classes for open-world semi-supervised learning." *European Conference on Computer Vision*. 
Cham: Springer Nature Switzerland, 2022. [8] Yang, Muli, et al. "Bootstrap your own prior: Towards distribution-agnostic novel class discovery." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It has addressed some of my concerns, so I will raise my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your constructive feedback and suggestions for improving our work. We will incorporate these additions into the final version.
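The class-count estimation procedure from [W2] above (k-means over pre-trained features, each candidate $k$ scored by Hungarian-matched accuracy on the labeled samples) can be sketched as follows. This is a simplified illustration, not the authors' code: it grid-searches $k$ rather than using Brent's method, and the `KMeans`/feature inputs are placeholders:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_acc(y_true, y_pred, k):
    """Clustering accuracy after optimally matching cluster ids to labels."""
    cost = np.zeros((k, k))
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    row, col = linear_sum_assignment(-cost)  # Hungarian algorithm, maximizing
    return cost[row, col].sum() / len(y_true)

def estimate_num_classes(feats, y_labeled, labeled_mask, k_candidates):
    """Cluster ALL features for each candidate k; score k by the
    Hungarian-matched accuracy on the labeled subset (grid search
    stands in for the Brent optimization described in the rebuttal)."""
    from sklearn.cluster import KMeans  # assumed available
    best_k, best_acc = None, -1.0
    for k in k_candidates:
        pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
        acc = hungarian_acc(y_labeled, pred[labeled_mask], k)
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k
```

The scoring only uses labeled samples, so an overestimated $k$ is penalized when it splits seen classes across clusters, which is what lets the labeled accuracy act as a proxy for the true class count.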
Summary: This paper proposes OwMatch, a semi-supervised learning (SSL) algorithm for an open-world setup where unlabelled data might come from outside of the labeled class distribution. The authors combine consistency regularization and self-labeling techniques to address the main challenges in open-world SSL (OwSSL). In particular, they use a conditional self-labeling approach to improve the label assignment stage (i.e. computation of pseudo-labels) and a hierarchical thresholding scheme to weight the contribution of different samples depending on the state of the model (i.e. the rate at which each class is being learned). Additionally, the authors provide a theoretical analysis of the conditional self-labeling estimator. Strengths: - The paper is theoretically sound and the authors properly justify the proposed elements. - The experimental section is complete and provides insights into the contribution of the different elements of the model. Weaknesses: - The presentation and phrasing of some parts of the paper should be addressed. The explanation of the method is slightly confusing and difficult to follow in some parts. - The paper addresses confirmation bias but this is not introduced at any point. Similarly, the text refers to Figure 1 as an example of the mitigation of confirmation bias (line 48) but this is not explained or elaborated. I would suggest addressing this in the main text and writing a more comprehensive caption for Figure 1. - Line 138 refers to Figure 3 for an illustration of issues with self-labeling and confirmation bias. This is not clear. Whether this refers to Figure 1 or Figure 3, it needs some more explanation. - In general, it is a good practice to write self-contained captions for figures. These should explain the elements in the figure and highlight their relationship with the main text. - Captions of tables should also be reviewed to include details of the metrics being displayed in the table.
For better readability, I would suggest highlighting the relevant results. Technical Quality: 4 Clarity: 2 Questions for Authors: None Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 2 Limitations: The authors already mention the main limitations of the proposed approach: the method assumes that the class distribution is known a priori. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. With your help, our revised manuscript is now clearer and more readable. **[W1;4;5]** We accept your suggestions regarding improving the alignment between the chart explanations and the text and providing more detailed legends and scales in our figures. We will make the necessary revisions to address these concerns. **[W2]** We regret that the absence of a clear definition for "confirmation bias" may have confused readers. We will include a detailed explanation at the start of our revised document. In our context, confirmation bias means that the model is biased because it has been exposed only to instances from seen classes. During the prediction phase, a model with confirmation bias may predict an ambiguous sample as belonging to a seen class, even if it is likely from a novel class. This results in high recall but low precision for seen classes. **[W3]** We apologize for the error; the correct reference should be Figure 1(a). We will add further illustrations to clarify this figure.
Summary: This paper proposes OwMatch, a new approach for open-world semi-supervised learning (OwSSL). The key contributions are: (1) A conditional self-labeling method that incorporates labeled data into the clustering process to reduce confirmation bias and misalignment. (2) A hierarchical thresholding strategy to balance learning difficulties across different classes. (3) Theoretical analysis of the unbiasedness and reliability of the conditional self-labeling estimator. (4) Extensive experiments demonstrating state-of-the-art performance on multiple benchmarks. The method combines ideas from self-supervised and semi-supervised learning, using consistency regularization and self-labeling tailored for the open-world setting where unlabeled data may contain novel classes. Strengths: 1. The paper introduces a novel combination of conditional self-labeling and open-world hierarchical thresholding to address the challenges in OwSSL. Hierarchical thresholding is proposed to address the issue of different learning pace of seen and novel classes and seems to work well. 2. The paper is well-written and clearly structured, with detailed explanations of the methodology, theoretical analysis, and experimental results. The figures and tables effectively illustrate the key points and support the claims made by the authors. 3. The theoretical analysis provides a solid foundation for the proposed method, demonstrating its unbiasedness and reliability in label assignment. The empirical results show significant performance improvements across various datasets, indicating the method's robustness and effectiveness. Weaknesses: 1. The Open-world Semi-supervised Learning setting is very similar to or the same as Generalized Category Discovery, both assuming novel classes exist and part of the data has labels in seen classes. The paper lacks discussion and comparison with closely-related work in generalized category discovery, such as GCD[1], SimGCD[2], and BaCon[3]. 2. 
A critical ablation study is lacking/needed to verify the effectiveness of the proposed open-world hierarchical thresholding (OwAT). OwAT is an incremental adaptation to existing self-adaptive thresholding in FreeMatch[4] by considering the different group learning speeds in novel and seen classes. In the existing ablation study, it is shown that OwAT boosts model performance, but I think the previous self-adaptive thresholding contributes most of the improvement and it is unclear how much the proposed hierarchical design really helps. Besides, since the self-adaptive thresholding adjusts thresholds based on the learning status of different classes, I think it can automatically consider the different learning paces of novel and seen classes and assign different thresholds, which also questions the effectiveness of the proposed hierarchical design. 3. The proposed conditional self-labeling aims to incorporate labeled data into the online clustering process. But its goal and function seem to overlap with the supervised loss, which is used in this work. How is the performance (1) without the supervised loss, (2) without the conditional part of self-labeling, (3) without both? 4. Currently, the code is not available to verify the performance and ablation study. [1] Vaze, Sagar, Kai Han, Andrea Vedaldi, and Andrew Zisserman. "Generalized category discovery." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7492-7501. 2022. [2] Wen, Xin, Bingchen Zhao, and Xiaojuan Qi. "Parametric classification for generalized category discovery: A baseline study." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16590-16600. 2023. [3] Bai, Jianhong, Zuozhu Liu, Hualiang Wang, Ruizhe Chen, Lianrui Mu, Xiaomeng Li, Joey Tianyi Zhou, Yang Feng, Jian Wu, and Haoji Hu. "Towards distribution-agnostic generalized category discovery." Advances in Neural Information Processing Systems 36 (2023): 58625-58647. 
[4] Wang, Yidong, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu, Jindong Wang et al. "Freematch: Self-adaptive thresholding for semi-supervised learning." arXiv preprint arXiv:2205.07246 (2022). Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to questions in the weakness section. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations and potential impact are well discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful analysis; it has been instrumental in refining our research. We hope that the following responses can make our manuscript clearer and more persuasive. **[W1]** GCD and its related works indeed focus on a setting similar to OwSSL. However, it's worth noting that GCD-related works typically employ a pre-trained ViT-Base/16 as the backbone, which has over three times more parameters than our ResNet-50, so a direct comparison between these two types of methods is unfair. Here, we still compare against those GCD-related works. **Table 1** of the Reference PDF shows that our method outperforms existing approaches in novel-class and all-class accuracy on ImageNet100, despite using a more compact model. We do not compare with imbalanced GCD methods (BaCon [6]) since their experimental settings are different from ours.

**Table 1: Comparison with GCD-related work: average accuracy on ImageNet100 with 50% novel classes and 50% labeled data within seen classes.**

| Method | Ref | Backbone | Seen Acc | Novel Acc | All Acc |
|------------|---------------|-----------|-------|-------|------|
| GCD [1] | (CVPR'22) | ViT-B/16 | 91.8 | 63.8 | 72.7 |
| SimGCD [2] | (ICCV'23) | ViT-B/16 | 93.1 | 77.9 | 83.9 |
| InfoSieve [3] | (NeurIPS'23) | ViT-B/16 | 84.9 | 78.3 | 80.5 |
| CiPR [4] | (TMLR'24) | ViT-B/16 | 84.9 | 78.3 | 80.5 |
| PromptCAL [5] | (ICCV'23) | ViT-B/16 | 92.7 | 78.3 | 83.1 |
| OwMatch+ | OURS | ResNet-50 | 91.5 | **79.6** | **85.5** |

**[W2]** Thanks for pointing out our shortcomings in the ablation experiment. We conduct additional ablation studies comparing static thresholding, the class-wise adaptive thresholding used in FreeMatch [7], and our OwAT thresholding technique, as shown in **Table 2** of the Reference PDF. While FreeMatch has proven effective in closed-world Semi-SL, it faces challenges in open-world settings.
The significant disparity in overall learning conditions between seen and novel classes, as illustrated in Figure 1(b) of our manuscript, can lead to unstable global thresholds. A class-wise adaptive approach built on top of such thresholds may exacerbate this issue, resulting in suboptimal performance. We implement a hierarchical structure to mitigate the instability stemming from the distinct learning dynamics of seen and novel classes.

**Table 2: Comparison of static, FreeMatch's, and our OwAT thresholding techniques on CIFAR100 with a novel class ratio of 50%.**

| Method | Seen Acc | Novel Acc | All Acc |
|---------------|----------|-----------|---------|
| Stat thre - 0.7 | 80.1 | 59.4 | 69.6 |
| Stat thre - 0.8 | 79.8 | 63.9 | 71.7 |
| Stat thre - 0.9 | 80.2 | 62.8 | 71.3 |
| FreeMatch | **81.0** | 60.5 | 70.6 |
| OwMatch+ | 80.1 | **63.9** | **71.9** |

**[W3]** We appreciate your insightful question; our complementary results, presented in **Table 3** of the Reference PDF, demonstrate the following:
* Without the supervised loss: we observe a decrease in seen-class accuracy but maintain novel-class clustering performance.
* Without the incorporation of labeled data into clustering: seen-class accuracy remains high, but novel-class clustering accuracy declines.
* Without both: the model collapses in seen-class accuracy due to the absence of label information in the objective function.

Our objective integrates both components to achieve a balance between clustering and confidence. The supervised loss enhances seen-class accuracy through one-hot supervision, while the clustering objective with conditional self-labeling improves novel-class clustering accuracy by incorporating labeled data. This harmonious approach yields the best all-class accuracy while roughly maintaining both seen-class accuracy and novel-class clustering accuracy. Additionally, the parameters that balance the significance of each loss term are missing in equation (10). We will rectify this accordingly.
**Table 3: Comparison of different settings.** | Method | Seen Acc | Novel Acc | All Acc | |-----------------------------------------|----------|-----------|---------| | OwMatch+ | 80.1 | 63.9 | 71.9 | | w/o supervised loss | 76.8 | 64.4 | 70.6 | | w/o supervision within clustering loss | 80.3 | 61.2 | 70.7 | | w/o both | 0.62 | 45.1 | 40.0 | **[W4]** Under NeurIPS regulations, we are unable to share the code at this stage. However, we will make the code publicly available upon acceptance of the paper. [1] Vaze, Sagar, et al. "Generalized category discovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [2] Wen, Xin, Bingchen Zhao, and Xiaojuan Qi. "Parametric classification for generalized category discovery: A baseline study." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [3] Rastegar, Sarah, Hazel Doughty, and Cees Snoek. "Learn to categorize or categorize to learn? self-coding for generalized category discovery." Advances in Neural Information Processing Systems 36 (2024). [4] Hao, Shaozhe, Kai Han, and Kwan-Yee K. Wong. "Cipr: An efficient framework with cross-instance positive relations for generalized category discovery." arXiv preprint arXiv:2304.06928 (2023). [5] Zhang, Sheng, et al. "Promptcal: Contrastive affinity learning via auxiliary prompts for generalized novel category discovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [6] Wen, Xin, Bingchen Zhao, and Xiaojuan Qi. "Parametric classification for generalized category discovery: A baseline study." Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16590-16600. 2023. [7] Wang, Yidong, et al. "Freematch: Self-adaptive thresholding for semi-supervised learning." arXiv preprint arXiv:2205.07246 (2022). --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses and efforts to conduct additional experiments. 
I have adjusted my rating accordingly. --- Reply to Comment 1.1.1: Comment: We are very grateful for your valuable feedback to improve our ablation experiments and for kindly increasing the score. We will use these insights to refine the final version.
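The thresholding comparison in [W2] of this thread can be made concrete with a small numpy sketch contrasting a static confidence threshold with FreeMatch-style class-wise adaptive thresholds. The EMA statistics `tau_g` and `class_ema` are assumed inputs for illustration; this is a minimal sketch of the baselines being compared, not the authors' OwAT implementation.

```python
import numpy as np

def static_mask(probs, tau=0.95):
    # Keep only pseudo-labels whose max confidence clears a fixed threshold.
    return probs.max(axis=1) >= tau

def adaptive_thresholds(tau_global, class_ema):
    # FreeMatch-style class-wise thresholds: scale a global (EMA-tracked)
    # threshold by each class's relative average confidence.
    return (class_ema / class_ema.max()) * tau_global

# Toy batch of softmax outputs over 3 classes.
probs = np.array([[0.9, 0.05, 0.05],
                  [0.4, 0.35, 0.25],
                  [0.2, 0.2, 0.6]])
tau_g = 0.7                            # assumed EMA of mean max-confidence
class_ema = np.array([0.8, 0.5, 0.6])  # assumed per-class confidence EMA

static = static_mask(probs)                    # -> [False, False, False]
tau_c = adaptive_thresholds(tau_g, class_ema)  # -> [0.7, 0.4375, 0.525]
pred = probs.argmax(axis=1)
adaptive = probs.max(axis=1) >= tau_c[pred]    # -> [True, False, True]
```

With a static threshold the low-confidence class 2 sample is discarded, while the class-wise threshold admits it, which is the behavior the ablation above probes.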
Summary: The paper has a novel idea, clear problems, and rich experimental results. It is a good article. However, there are some problems that need to be further optimized. Strengths: The idea is clear and the problem is prominent. The experimental results prove the superior performance of the proposed method in open world scenarios. Weaknesses: Some results need to be supplemented and explained. Some proof of principle, theoretical or experimental, is required. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. References should not be used as sentence components, such as reference [31]. 2. I would like to know how the proposed method reduces the bias, not just the final result. Can the author provide some principle experimental analysis? 3. Maybe I overlooked it. I did not find any experimental results on Tiny ImageNet that are consistent with Table 1. 4. From the confusion matrix, at 100 iterations, for unseen data, only the sixth category can be predicted correctly, while the other categories are all wrongly predicted. This is different from what the author described. 5. Why does the prediction effect of seen classes become worse after adding the ConSL module? 6. The authors should try to align the experiments. The experimental data for each experiment is incomplete and is based on three of the datasets. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: As above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Your comments are appreciated, and they have been beneficial in enhancing our work. **[Q1;3;4]** Based on your suggestions, we will revise our manuscript to meet the appropriate writing requirements. First, we will correct the reference citations so that they are not used as sentence components. Second, we will move the manuscript's Table 8, which reports the results on Tiny ImageNet, from the appendix to the experimental section. Regarding the confusion matrix, we utilize the Hungarian algorithm to calculate clustering accuracy for novel classes, which matches discovered novel clusters to ground-truth labels through optimal assignment. Therefore, the samples do not need to align along the diagonal; the presence of five distinct clusters already indicates effective clustering of novel-class samples. To avoid confusion, we will reorder the confusion matrix to match the manuscript's description. **[Q2]** The main challenge in OwSSL is the tendency to predict samples from unseen classes as seen classes. Conditional self-labeling incorporates labeled data into online clustering as a prior to correct the unseen-class distribution. Based on this, the posterior prediction exhibits a weaker bias. To demonstrate its effectiveness, we compare the number of samples predicted for each class with the ground truth; an effective method should show a smaller gap between them. Figure 1(a) in the manuscript shows that with the conditional component, the number of samples predicted for each class is drawn closer to its ground truth for both seen and unseen classes. **[Q5]** The last two columns of the manuscript's Table 2 may seem to imply a decline in seen-class accuracy. However, this trade-off yields significant improvements in both unseen-class and all-class accuracy. Moreover, seen-class accuracy is computed only over samples whose ground truth belongs to seen classes, rather than over all classes. 
However, such a metric fails to capture the false assignment of novel-class samples. Consider an extreme model that assigns every sample to a seen class: while it minimizes the risk of losing any possible seen instance, it completely ignores the unseen set. This is not a preferred model/representation; an unbiased and generalized model should balance the prediction of all classes. As for the comparison of these two models, performance does not degrade but in fact becomes more accurate: while recall decreases slightly, precision improves. **[Q6]** The “-” in the tables indicates that the authors of these works have not publicly provided the relevant results. While we are confident in the performance of our method, we are concerned that a simple reproduction of their work on our part might not fully represent the potential of their methods. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. Figure 1(a) is still the final result; I want to know the principled explanation. --- Rebuttal 2: Comment: Thanks for your comment; here we provide a further principled explanation and analysis in two parts: **Additional experiment to illustrate how our model alleviates confirmation bias:** We use the Manhattan distance $\sum_i|a_i-b_i|$ as a metric to evaluate the confirmation bias between the predicted class distribution and the ground truth. The model initially exhibits a biased prediction trend, as previously discussed. During training, we employ conditional self-labeling to optimize self-label assignments, which have been shown theoretically to provide unbiased estimation and lower variability in test statistics (see Theorem 4.4 and Theorem 4.5). We then align the model's predictions with these optimized self-label assignments. The following **Table 5** demonstrates the debiasing over training. In the early epochs, the model's confirmation bias (first row) is significant, whereas the self-label assignment bias is relatively acceptable. 
As training progresses, the self-label assignments continue to guide the model, effectively reducing confirmation bias. This is reflected in the decreasing Manhattan distance between the model's predicted class distribution and the ground truth. **Table 5: The Manhattan distance (MD) evaluates confirmation bias. First, we consider the bias between the model's predictive class distribution and the ground truth. We also show the bias of the corresponding optimized self-label assignment compared to the ground truth.** | MD | Epoch 1 | Epoch 2 | Epoch 3 | Epoch 6 | Epoch 10 | Epoch 30 | Epoch 50 | |-------------|---------|---------|---------|---------|----------|----------|----------| | prediction | 0.4463 | 0.2939 | 0.2474 | 0.1753 | 0.1505 | 0.0904 | 0.0798 | | self-label | 0.1004 | 0.0754 | 0.0893 | 0.0613 | 0.0407 | 0.0255 | 0.0219 | **Further explanation of motivation**: While other self-labeling-based methods can also mitigate confirmation bias, our approach demonstrates superior performance. The primary reason is that our proposed conditional self-labeling technique provides unbiased estimates and lower variance in the test statistic. This allows the optimized self-label assignment to align more closely with the ground-truth distribution, ultimately guiding the model towards better accuracy. We hope these analyses help clarify the effectiveness of our proposed method; please let us know if you would like to see any specific experimental analysis.
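The Hungarian matching used in this thread to score novel-class clustering (response [Q1;3;4]) can be illustrated with a self-contained sketch; for a handful of classes, a brute-force search over permutations is equivalent to the Hungarian algorithm on the confusion matrix. The toy labels are hypothetical, chosen only to show that accuracy is invariant to cluster relabeling.

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(y_true, y_pred, num_classes):
    # Best accuracy over all one-to-one matchings of predicted clusters to
    # true labels; equivalent to Hungarian matching on the confusion matrix
    # (brute force over permutations is fine for a handful of classes).
    conf = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        conf[t, p] += 1
    best = max(sum(conf[t, perm[t]] for t in range(num_classes))
               for perm in permutations(range(num_classes)))
    return best / len(y_true)

# Toy example: the predicted cluster ids are a relabeling of the truth,
# with one mistaken sample, so accuracy is 5/6 after optimal matching.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [2, 2, 0, 0, 1, 0]
acc = clustering_accuracy(y_true, y_pred, 3)   # -> 0.8333...
```

This is why the rebuttal notes that samples need not lie on the diagonal of the confusion matrix: any consistent relabeling of clusters scores the same.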
Rebuttal 1: Rebuttal: We appreciate the reviewers' valuable feedback, which has significantly helped us improve our paper. Below, we provide detailed responses to each reviewer. Additionally, we've included all required tables and figures in a one-page Reference PDF. Pdf: /pdf/787ed6b7dd7cda4df3f32188437350afbb1ffc55.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Distributed Least Squares in Small Space via Sketching and Bias Reduction
Accept (poster)
Summary: Sketching is a technique from randomized numerical linear algebra which compresses an input matrix $A$ by multiplication with a random matrix $S$. Recent works [18] have shown how to characterize the bias in the sketched inverse covariance $(\tilde A^\top\tilde A)^{-1}$, where $\tilde A = SA$ for an input matrix $A$ and a sketching matrix $S$. This work extends this machinery to construct approximately unbiased estimators $\tilde x$ for the least squares regression problem $\min_x \|Ax-b\|$. As the authors show in Theorem 2, this gives an efficient distributed algorithm by averaging the unbiased estimator over $q = O(1/\epsilon)$ servers. The techniques introduced by the authors also improve prior results on bias-free estimation of the inverse covariance. Strengths: The authors introduce a new way to exploit unbiased estimators, which appears to be different from prior works, which incorporate debiasing in a second order optimization framework (e.g. https://arxiv.org/abs/2007.01327). The new technique simply averages local solutions, and conceptually seems simpler than prior techniques. Technically, the authors improve prior results on bias-free estimation of the inverse covariance by showing an improved moment bound on a certain random variable by carefully exploiting better bounds for smaller moments via a Holder’s inequality. Other new tools from random matrix theory are additionally needed to handle this change. These ideas, as well as their improvements to bias-free estimation of the inverse covariance, are interesting and may be useful for future results in this area. Weaknesses: The result and techniques seem to be a bit niche and incremental. The notion of distributed computation is different from ones that I see in other works in the sketching literature (e.g. https://arxiv.org/abs/1408.5823, https://arxiv.org/abs/1504.06729) where the input is partitioned across multiple servers. 
Instead, this work considers a setting where there is just one stream which contains the entire input, and the central server has access to $q$ machines that can each access this stream (see the Computational model section), which seems nonstandard. The number of servers needed is also rather large (e.g. $q = O(1/\epsilon)$), which also seems restrictive. I encourage the authors to include references that prove results in this setting if there are any others. The main technical novelty (line 275) also seems to be an improvement to a bound in the analysis of LESS embeddings from prior work, while the overall sketching framework is largely unchanged. Technical Quality: 3 Clarity: 3 Questions for Authors: - The authors may consider discussing the related work https://arxiv.org/abs/2203.09755 on distributed least squares and https://arxiv.org/abs/2007.01327 on debiasing approaches Other comments - Lemma 2 should have a probability statement (e.g., with probability at least $1-\delta$) Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors claim that the work does not have limitations in the checklist. Some of the points I raised in the weaknesses section could be discussed in the work, such as restrictions on the computational model and the number of servers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and questions. We are glad that the reviewer appreciates the simplicity of our algorithmic approach, and that the reviewer finds our technical contributions interesting and useful. Below, we provide clarifications on the reviewer's remarks, including on the distributed computation framework and the number of machines, as well as providing a comparison with the four references provided by the reviewer. We will be sure to include all of those details in the final version. If you think the responses adequately address your concerns, we encourage you to consider increasing your score. - **Novelty in our techniques.** Our main contribution is in the theoretical analysis, which includes several new ideas (including the higher-moments version of Restricted Bai-Silverstein inequality, as well as the careful use of Hölder's inequality in the analysis of the dominant term). In fact, these ideas are not niche, since they are relevant to many instances of RMT-style analysis (i.e., analysis relying on the Stieltjes transform of the resolvent matrix) for sparse sketching operators, which has been used for Newton Sketch [16], Randomized SVD [17], and Sketch-and-Project [21]. We chose to focus on least squares, as this gives the clearest computational improvements. - **Number of machines required.** The use of $O(1/\epsilon)$ machines in our main results comes from the small space complexity, namely $O(d^2\log nd)$ bits per machine, that we impose in our setting. However, if we relax this constraint, then there is a natural trade-off between the space complexity and number of machines required by our methods: If we allow $((1/\theta)d^2\log nd)$ bits per machine for some $\theta\in [\epsilon,1]$, then we only need $O(\theta/\epsilon)$ machines. 
Setting $\theta=\epsilon$, we recover the standard sketched least squares on a single machine, while $\theta=1$ recovers our main results, but of course we can freely interpolate between those two extremes. - **Comparison with [BP22, arXiv:2203.09755].** This work indeed considers essentially the same problem setting as we do. In fact their Algorithm 1 is the same basic distributed averaging procedure that we use, and their computational model described in Remark 2.3 (Option 2) matches our single-server multiple-machines setup (except that they allow random access instead of streaming access to the data). Our results can be viewed as a direct improvement over their guarantees for Gaussian sketches (e.g., their Theorem 2.2), since we obtain analogous guarantees for extremely sparse sketches which are far more computationally efficient. In fact, we already mention these Gaussian sketch guarantees as a baseline for our work (see Table 1 for a comparison), and we will certainly add the reference to this work in that context. - **Comparison with [DBPM20, arXiv:2007.01327].** This work also considers a very similar problem setting to us, except they focus on regularized least squares, and how the regularization parameter affects the bias of the sketched least squares estimator, with distributed averaging as a main motivation (their main results also assume that all workers have access to the centralized data for the purpose of sketching). Similarly as for the earlier reference, their theoretical results require expensive sketching methods (called "surrogate sketches"), which are based on Determinantal Point Processes. We use these sketches as one of our theoretical baselines (see Table 1 for a comparison), and we will certainly add this work as a reference. It is worth noting that our random matrix theory techniques can likely be used to extend the regularization-based debiasing techniques developed in that work to fast sparse sketching. 
We leave this as a promising direction for future work. - **Comparison with [BKLW14, arXiv:1408.5823] and [BWZ16, arXiv:1504.06729].** The distributed computation framework considered in those works partitions the data across multiple servers. Importantly, note that these works solve the task of Principal Component Analysis, which is different from our task of least squares regression, so these works are not directly comparable. Nevertheless, our methods and results can be naturally extended to the multiple-server setting, as we discuss below. The main reason we focused the paper on a more centralized computational model, with one server and multiple machines, is because this allowed us to obtain worst-case results which are independent of condition-number type quantities (such as the data coherence defined below). - **Results for data partitioned into multiple servers.** As mentioned above, our results can be extended to the setting where a dataset is partitioned into multiple servers, so that each one of the $q$ machines is accessing a separate chunk of the data. Suppose that a dataset $A,b$ of size $N\times d$ is partitioned uniformly at random into smaller chunks of size $n=N/q$, and each machine constructs an estimate $\tilde x_i$ based on a sketch of its own chunk. Then with the same computational guarantees on each machine as in Theorem 2, the averaged estimator $\tilde x=\frac1q\sum_i\tilde x_i$ will with high probability enjoy a guarantee of $||A\tilde x-b||\leq (1+\epsilon + \tilde O(\mu/n))||Ax^*-b||$, where $\mu=N\max l_i(A)$ is the coherence of the dataset (here, $l_i$ denotes the $i$th leverage score). The additional error term $\tilde O(\mu/n)$, which arises from the partitioning (e.g., see [43]), is often negligible compared to the sketching error $\epsilon$ when the chunk size $n$ is sufficiently larger than the sketch size. An analogous result can be obtained for Distributed Newton Sketch. We will add these claims to the final version. 
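The averaging of local sketched estimates described in this response can be sketched numerically as follows. A dense random-sign sketch stands in for the paper's sparse LESS embeddings (the sizes `n, d, q, m` are arbitrary illustrative choices), so this shows only the sketch-and-average step, not the efficient sparse-sketch construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, q, m = 2000, 10, 8, 200   # data rows/cols, machines, sketch size per machine

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)

def sketched_solve(A, b, m, rng):
    # One machine: apply a random-sign sketch S (a dense stand-in for a
    # sparse LESS embedding) and solve the small m x d least squares problem.
    S = rng.choice([-1.0, 1.0], size=(m, A.shape[0])) / np.sqrt(m)
    return np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

# Each of the q machines returns a local estimate; the server averages them.
x_avg = np.mean([sketched_solve(A, b, m, rng) for _ in range(q)], axis=0)
x_opt = np.linalg.lstsq(A, b, rcond=None)[0]

opt_res = np.linalg.norm(A @ x_opt - b)
avg_res = np.linalg.norm(A @ x_avg - b)
# avg_res should sit only slightly above opt_res, the optimal residual.
```

Averaging suppresses the variance of the individual sketched solutions, so the residual of the averaged estimator approaches the bias-limited floor the paper analyzes.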
--- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I appreciate the additional comments which show how the limitations on the number of servers can be addressed with a smooth trade-off, as well as allowing the data to be partitioned if the chunks are partitioned uniformly (although perhaps it still doesn't work if the data is partitioned adversarially, which is still a limitation in my mind). Thank you also for the explanation of connections with the prior work. I have increased my score, in recognition that there is a growing body of work on RMT-style arguments in the sketching literature, and thus this work may have a broad impact in many future works.
Summary: Sketched least squares involves estimating the term $(X^TX)^{-1}$, which has a high bias when the sketch matrix $S$ is not sub-Gaussian. This paper gives a sparse sketching method using a LESS embedding which runs in optimal space and current matrix multiplication time, where $S$ is sparse and constructed based on the leverage scores of the data matrix $A$ (Definition 2, $(s,\beta_1,\beta_2)$-LESS embedding). The paper also improves the sharpness of the probability bounds, which are applicable to similar problems in RMT using LESS embeddings. For $s=1$, nothing is different, but when $s > 1$, the bias bound is reduced. Strengths: Inversion bias for estimates of $(X^TX)^{-1}$ is a challenging problem in least-squares sketching. Sub-Gaussian sketches have high computational cost but low bias; other sketches have low computational cost but high bias. The sketch proposed in this paper minimizes the bias of the estimator, requires only one parallel pass over the data, and has a runtime of nnz($A$) + $\tilde{O}(d^2/\epsilon)$, which is much faster. Moreover, computing the estimate can be done in parallel, with only the final result needing to be averaged. The authors rigorously prove that their sketch fulfills the above criteria by using techniques from random matrix theory, and this is not trivial at all. I find the key contribution in the paper is proving these statements, and the authors did an excellent job here. The paper is extremely well written and easy to follow. Of particular note is the bias analysis for the least squares estimator (Section 4), which is concisely written, with the main ideas (proof sketch) given in the main paper and technical details in the appendix. The ideas given in the proof sketch can be used for similar problems, and are straightforward to understand. 
Moreover, the technical details come with sufficient exposition such that it is straightforward for a reader to understand the direction the proof is going (which is certainly much appreciated). Weaknesses: 1. It would be nice to have experiments with other sketching methods to (empirically) justify some statements on the bias, variance (although not stated) and computational time, e.g. estimators mentioned in Table 1. For example, are there sketching estimators with higher bias, but less variance? There are some minor typos, e.g. line 212 reference missing, and lines 282 to Equation (2) at the bottom of the page are missing a bracket for the numerous expectations (in contrast to lines 542 onwards in the appendix). In Appendix A, notation for concentration inequalities should be looked at and made consistent, e.g. Lemma 6 / Hölder's inequality should have a $\frac{1}{q}$, Lemma 7 ($\lambda$ max isn't defined), Lemma 8 / Azuma's inequality ($\lambda$, $m$ should be consistent), Lemma 10 is missing a bracket for $\mathbb E[x_i^2]$. The presentation of the proof for Theorem 5 was slightly jarring (due to Lemma 11, Lemma 12 appearing in the proof), but there also doesn't seem to be a good way to include them (since referring to the two lemmas requires the upper bounds, and flipping back a page is also inconvenient). Maybe a solution is to indent the Lemmas, or box them up? The dot before line 626 (after 72) should be removed. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. It would be nice to have experiments with other sketching methods to (empirically) justify some statements on the bias, variance (although not stated) and computational time, e.g. estimators mentioned in Table 1. For example, are there sketching estimators with higher bias, but less variance? 2. Despite being clear to read, I had to go back and forth a bit to find out what the novelty is. 
I appreciate the clarity and thoroughness of explaining the bounds on the bias and variance, runtime, but I would like it if Definition 1 & 2 came much earlier (or at least maybe an informal Definition 2 after line 85?) Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a careful read and detailed comments. We will address all of the typos and presentation suggestions in the final version. - **Additional experiments.** We include additional experiments with other sketching methods (all of which are more computationally expensive than the fast sparse LESSUniform method we used in the paper): Leverage Score Sampling, Gaussian and Subgaussian sketches, as well as Subsampled Randomized Hadamard Transform (see our general response and the PDF). We note that the fact that sketching methods tend to yield smaller least squares bias than uniform subsampling has been empirically observed in prior works [42], which is why we did not focus on this here. Our main contribution is to provide the first sharp theoretical characterization of this phenomenon for extremely sparse sketches. - **Are there sketching estimators with higher bias, but less variance?** By definition, the bias has to be no larger than the variance, but there can be cases where the two quantities are comparable in size (which implies that distributed averaging will not be effective). The main example here is i.i.d. sub-sampling, including leverage score sampling. There are theoretical lower bounds [18] which show that (in some cases) the bias of leverage score sampling may not be much smaller than the variance. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal, and for providing the experiments. I was thinking more of biased estimators with lower variance though. My score remains unchanged.
Summary: The paper studies the least squares regression task and improves the space and communication costs in a distributed setting to be independent of $\epsilon$. The key to achieving this is sketching the data in blocks whose $\epsilon$-dependence can be reduced to only $d$-dependencies by aggregating them. For the covariance $A^\top A$ this becomes only a $d \times d$ matrix, and the other required term $A^\top b$ can be handled by communicating and aggregating just the solutions, which have dimension $d$. Although the idea seems very simple described this way, the analysis seems highly non-trivial and has very interesting aspects and novel techniques (or at least new to me). It analyzes in this setting not only the standard least squares error but also a bias term, which allows for smaller sketch dimensions and only $1/\sqrt{\epsilon}$ dependencies. Strengths: * gives least squares sketching results with lower time, space, and communication complexities * interesting techniques * some further applications are given, though details are fully in the appendix * very good writing Weaknesses: * motivation of the model, which seems to be a niche setting * only modest improvements, though there seems to be no big gap that can be leveraged Technical Quality: 3 Clarity: 3 Questions for Authors: * I am slightly confused by the definition of the bias. Is $||AE(\tilde x)-b||$ standard in some literature? I would say this is the variance of the expected sketched estimator. Why would I be interested in this expected estimator, instead of the actual outcome after sketching? * I would rather define the bias as $||\tilde x -x^*||$ which I think does not allow for any improvements over standard sketching results, right? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: f Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback, as well as the comments and questions. We will address them in the final version. - **Definition of bias.** At a high level, we rely on the statistical definition of the bias of an estimator, which is (informally) the difference between the expectation of that estimator and its target quantity (the estimand). In the case of a least squares estimator $\tilde x$, the most standard statistical notion of bias would be $E[\tilde x]-x^*$. Since the estimator is multivariate, one will typically measure the amount of bias by taking the norm, i.e., $||E[\tilde x]-x^*||$ (as opposed to $||\tilde x-x^*||$ which is typically referred to as the estimation error). In our case, since we are interested in the least squares prediction vector, i.e. $A\tilde x$, as an estimator of the vector $b$, we compute the bias as $||E[A\tilde x]-b||=||A E[\tilde x]-b||$. This turns out to be the exact right notion for the purpose of distributed averaging and bounding the least squares loss. - **Motivation of the model.** Our model is motivated by the general distributed averaging framework (also known as model averaging, or bagging), which is widely used in many settings beyond least squares. In fact, our methods and results are applicable much more generally than the single-server multiple-machine computation model used in the paper. For example, they can be naturally extended to the multiple-server model [43], where the data is randomly partitioned into multiple chunks stored on separate servers, which is common in the literature (see response to Reviewer h7iG for details). The main reason we focused the paper on the single-server multiple-machine model is because this allowed us to obtain worst-case results that are independent of condition-number type quantities (those are unavoidable in the multiple-server model, given our other computational constraints). 
--- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I will keep my score.
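The bias definition discussed in this thread can be probed numerically. The Monte Carlo sketch below uses a Gaussian sketch, a baseline known to be nearly unbiased (not the paper's LESS construction); the problem sizes are arbitrary illustrative choices. It contrasts the bias $\|A\,\mathbb{E}[\tilde x]-b\|$, approximated by averaging many independent sketched solutions, with the loss of a single sketched estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m, trials = 1000, 5, 50, 400

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + rng.standard_normal(n)
x_opt = np.linalg.lstsq(A, b, rcond=None)[0]
opt_loss = np.linalg.norm(A @ x_opt - b)

# Monte Carlo approximation of E[x_tilde] for Gaussian sketch-and-solve.
estimates = []
for _ in range(trials):
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    estimates.append(np.linalg.lstsq(S @ A, S @ b, rcond=None)[0])
estimates = np.array(estimates)

# Excess loss over the optimum: one sketched estimate vs. the average.
single_excess = np.linalg.norm(A @ estimates[0] - b) - opt_loss
bias_excess = np.linalg.norm(A @ estimates.mean(axis=0) - b) - opt_loss
```

The excess loss of the averaged estimator (an empirical proxy for the bias) comes out far smaller than that of a single sketched solution, which is exactly why $\|A\,\mathbb{E}[\tilde x]-b\|$ is the right quantity for distributed averaging.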
Summary: This paper presents new techniques for distributed least squares regression using matrix sketching. The key contributions are: 1. A sparse sketching method that produces a nearly-unbiased least squares estimator in two passes over the data, using optimal space and current matrix multiplication time. 2. Improved communication-efficient distributed averaging algorithms for least squares and related tasks. 3. A novel bias analysis for sketched least squares, characterizing its dependence on sketch sparsity. This includes new higher-moment restricted Bai-Silverstein inequalities. The theoretical results are backed by experiments on real datasets showing the practical benefits of the approach. Strengths: 1. Provides a sparse sketching method that achieves near-unbiased least squares estimation in optimal space and current matrix multiplication time. 2. Achieves O(d^2 log(nd)) bits of space, which is optimal. Matches current matrix multiplication time O(d^ω), improving over previous approaches. 3. Introduces a new bias analysis for sketched least squares that sharply characterizes dependence on sketch sparsity. The techniques developed may be applicable to other sketching and randomized linear algebra problems. Weaknesses: 1. Experiments are conducted on only a few datasets. And it does not explore a wide range of problem sizes or distributed computing scenarios. 2. Primarily focused on least squares regression, with limited discussion of extensions to other problems. 3. Some of the theoretical results rely on assumptions (e.g., about leverage score approximation) that may not always hold in practice. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Are there natural extensions of this work to other loss functions beyond least squares? 2. Do you expect the techniques developed here to be applicable to other sketching problems beyond least squares? If so, which ones? Minor Comments: 1. 
The abstract could more clearly state the key theoretical results/bounds achieved 2. Some additional discussion of practical implications and potential applications would be valuable 3. A few typos noted (e.g. line 330) Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for comments and feedback. We will revise the abstract, and also expand our discussion of applications beyond least squares as outlined below. - **Extensions to other loss functions beyond least squares.** In addition to least squares, we provide a more broadly applicable result in Theorem 3, which in particular applies to minimizing general convex loss functions via the framework of Distributed Newton Sketch (see Corollary 1). Thus, our sketching methods and proof techniques are of broader interest to sketching-based optimization algorithms for a variety of loss functions, including the logistic loss, other generalized linear model losses such as the hinge loss, etc. - **Applications to other sketching problems.** Our theoretical analysis, which includes several new ideas (including the higher-moment version of Restricted Bai-Silverstein inequality, as well as the careful use of Hölder's inequality in the analysis), is relevant to many instances of RMT-style analysis (i.e., analysis relying on the Stieltjes transform of the resolvent matrix) for sparse sketching operators. This RMT-style analysis has been used in Newton Sketch [16], Randomized SVD [17], and Sketch-and-Project [21]. We chose to focus on distributed least squares, as this gives the clearest worst-case computational improvements. - **Distributed computing scenarios.** In fact, our methods and results are applicable much more generally than the single-server multiple-machine computation model used in the paper. For example, they can be naturally extended to the multiple-server model [43], where the data is randomly partitioned into multiple chunks stored on separate servers, which is common in the literature (see response to Reviewer h7iG for details). 
The main reason we focused the paper on the single-server multiple-machine model is because this allowed us to obtain worst-case results that are independent of condition-number type quantities (those are unavoidable in the multiple-server model, given our other computational constraints). - **Experiments.** Our main contribution is to provide the first sharp theoretical characterization of the least squares bias for extremely sparse sketches, and this is also where we focused our experiments. Nevertheless, we include additional experiments (see the general response and PDF) with other sketching methods (all of which are more computationally expensive than the fast sparse LESSUniform method we used in the paper): Leverage Score Sampling, Gaussian and Subgaussian sketches, as well as Subsampled Randomized Hadamard Transform. - **Assumptions about leverage score approximation.** Our main results, Theorems 2 and 3, do not require any assumptions related to leverage score approximation, as they include leverage score approximation as part of the algorithmic procedure. --- Rebuttal Comment 1.1: Comment: Thank you for your response on the questions! After reading the other reviews, I would like to keep my current score of 5.
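To make the Distributed Newton Sketch extension concrete, here is a minimal single-machine sketch of one sketched Newton step for the logistic loss. This is an illustrative toy with made-up dimensions, and a dense Gaussian sketch stands in for the sparse sketching operators analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5000, 5, 300            # data size, features, sketch size

X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

def logistic_loss(w):
    z = X @ w
    return np.mean(np.logaddexp(0, z) - y * z)

def newton_sketch_step(w, step=0.5):
    mu = 1 / (1 + np.exp(-X @ w))
    g = X.T @ (mu - y) / n                    # gradient of the average loss
    sq = np.sqrt(mu * (1 - mu) / n)           # Hessian square-root weights
    S = rng.normal(size=(m, n)) / np.sqrt(m)  # Gaussian sketch (stand-in)
    B = S @ (sq[:, None] * X)                 # sketch of the Hessian factor
    H = B.T @ B                               # approximate Hessian (d x d)
    return w - step * np.linalg.solve(H, g)   # damped sketched Newton step
```

Replacing the dense `S` with a sparse sketching operator is what makes forming the sketched Hessian factor cheap, which is the regime this analysis targets.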
Rebuttal 1: Rebuttal: Thanks to all reviewers for the positive feedback and comments. We responded to those comments in the individual responses to each reviewer. We also provided additional experimental results on four different sketching methods (included in the PDF), and discussed the implications of our theoretical results beyond least squares and in other distributed models. All of this will be included in the final version of the paper, alongside other reviewer suggestions. Here, we summarize the main takeaways. - **Theoretical implications beyond least squares.** Our main contribution is a set of new theoretical techniques (e.g., higher-moment Restricted Bai-Silverstein, Lemma 3) for analyzing sparse sketching methods, which goes far beyond least squares, as we showed in Theorem 3 and Corollary 1, with an application to general optimization over convex losses (e.g., logistic loss, GLMs, etc) via a variant of the Distributed Newton Sketch. These techniques have wide implications for the analysis of other sketching problems where Restricted Bai-Silverstein-type inequalities have been used, including Randomized SVD [17] and Sketch-and-Project [21]. - **Extensions to other computation models.** In fact, our methods and results are applicable much more generally than the single-server multiple-machine computation model used in the paper. For example, they can be naturally extended to the multiple-server model [43], where the data is randomly partitioned into multiple chunks stored on separate servers, which is common in the literature (see response to Reviewer h7iG for details). The main reason we focused the paper on the single-server multiple-machine model is because this allowed us to obtain worst-case results that are independent of condition-number type quantities (those are unavoidable in the multiple-server model, given our other computational constraints). 
- **Additional experiments.** The fact that sketching methods tend to yield smaller least squares bias than uniform subsampling has been empirically observed in prior works [42]. Our main contribution is to provide the first sharp theoretical characterization of this phenomenon for extremely sparse sketches, and this is also where we focused our experiments. Nevertheless, we include additional experiments (see the PDF) with other sketching methods (all of which are more computationally expensive than the fast sparse LESSUniform method we used in the paper): Leverage Score Sampling, Gaussian and Subgaussian sketches, as well as Subsampled Randomized Hadamard Transform (SRHT). These numerical results further support the claim that sketching enjoys small least squares bias. Pdf: /pdf/d8acae65df2f4668b245238cf177239e5dfee31f.pdf
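As a runnable toy version of the sketch-and-solve least squares setup discussed here (assumed dimensions, with a dense Gaussian sketch in place of the fast sparse LESSUniform method used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 10, 200           # tall problem, sketch size m << n

A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Exact least squares solution
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketch-and-solve: compress to an m x d problem, then solve exactly
S = rng.normal(size=(m, n)) / np.sqrt(m)
x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

res_full = np.linalg.norm(A @ x_full - b)   # optimal residual
res_sk = np.linalg.norm(A @ x_sk - b)       # slightly inflated residual
```

In the distributed setting, averaging `x_sk` across machines reduces variance but not bias, which is why a sharp characterization of the least squares bias matters.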
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Sequoia: Scalable and Robust Speculative Decoding
Accept (spotlight)
Summary: This paper introduces a speculative decoding method Sequoia, which uses a novel sampling and verification method that outperforms prior work across different decoding temperatures. The speedup of Sequoia is large. Strengths: 1. This paper discusses the proposed method in a detailed way. The algorithm is novel and achieves large speedups. 2. This paper has a thorough and sound evaluation section. Sequoia outperforms SpecInfer in various datasets and temperature settings. Weaknesses: 1. Sequoia shows linear speedup with the tree size growing exponentially. However, it is less useful in batch serving (e.g., vllm). Do you think this problem can be solved by integrating a more efficient draft token proposing method (e.g., Medusa/Eagle) with the proposed Sequoia? Technical Quality: 3 Clarity: 3 Questions for Authors: I notice that the speedup of speculative decoding decreases as the temperature grows (as in many previous works). Do you think this problem is solvable? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review. We are glad that the reviewer found our algorithm novel and the evaluation thorough and sound. We have tried to carefully address your questions. We hope the reviewer may consider raising their score in light of our response. ### Q1: Batch serving setting Thank you for this question. As you have correctly pointed out, Sequoia gives the largest gains for small batch sizes. This is because at small batch sizes, decoding is more memory-bound, and thus one can afford to verify a large tree without increasing verification time; by discovering the optimal structure for this large tree, Sequoia gives large speedups over baselines. Conversely, for larger batch sizes, the speculation budget is smaller, and the gains of Sequoia relative to other speculation tree structures (chains, k-ary trees, etc.) are much smaller—we show this in Figure 1(a). In a real inference system with continuous batching, one could use Sequoia to determine the optimal tree for any batch size (as in Liu et al, 2024 [1]), thus attaining strong performance across the board. Furthermore, as you correctly pointed out, Sequoia can be combined with more powerful draft models (like Eagle) to attain even larger speedups (discussed briefly in Section 3.1.1). [1] Xiaoxuan Liu, Cade Daniel, Langxiang Hu, Woosuk Kwon, Zhuohan Li, Xiangxi Mo, Alvin Cheung, Zhijie Deng, Ion Stoica, Hao Zhang. Optimizing Speculative Decoding for Serving Large Language Models Using Goodput. CoRR abs/2406.14066, 2024. ### Q2: Performance decreasing with temperature Thank you for your insightful question! In this work, we use sampling without replacement to make the performance more robust across temperatures. However, you are correct that a decrease is still observed as the temperature goes up. Generally, a high temperature means a higher degree of randomness, which is more difficult for draft models to guess. 
Improving the performance at higher temperatures is an interesting direction for future work—perhaps methods that better align draft and target models (e.g., via distillation [2]), or more advanced sampling algorithms [3, 4], could yield improvements here. [2] Zhou et al. DistillSpec: Improving Speculative Decoding via Knowledge Distillation. ICLR 2024. [3] Sun et al. Block Verification Accelerates Speculative Decoding. Efficient Systems for Foundation Models workshop, ICML 2024. [4] Qin et al. Multi-Token Joint Speculative Decoding for Accelerating Large Language Model Inference. Arxiv 2024. --- Rebuttal Comment 1.1: Comment: I thank authors for their detailed response. I will maintain my score.
Summary: The paper proposes SEQUOIA, an algorithm designed to improve the efficiency of serving large language models (LLMs) through scalable and robust speculative decoding. The SEQUOIA algorithm introduces a dynamic programming method to construct optimal token trees for speculation, enhancing scalability. Additionally, a novel sampling and verification method ensures robustness across various decoding methods (top-k sampling, top-p sampling, temperature, ...). Empirical results demonstrate a 4.04x speedup in a small model and a 9.5x speedup in a large model with offloading inference. Strengths: - The simplest tree structure is a single chain (i.e., list) and the most complicated tree structure is a k-ary full tree. SEQUOIA finds sweet spots between them, utilizing dynamic programming - SEQUOIA maintains a high hit-ratio across different sampling methods. This robustness makes speculative decoding more practical. - The paper presents extensive experimental results including ablation studies. Weaknesses: - Building the optimal tree is more time-consuming than previous approaches (k-ary tree or k independent sequences) - Due to this, the batch size that can benefit from speculation might be much smaller compared to other methods Technical Quality: 3 Clarity: 3 Questions for Authors: - In Figure 4, why does SpecInfer show the opposite trend compared to Sequoia & Top-k sampling? In other words, why does it show higher speedup when the temperature increases? - It seems that SEQUOIA needs to conduct the dynamic programming algorithm every iteration. How long does that process take? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No additional limitations exist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review. We are glad that the reviewer appreciates our dynamic programming based tree search algorithm as well as the robustness of the sampling and verification algorithms. We have tried to carefully address your questions. We hope the reviewer can consider raising their score in light of our response. ### Q1: Time cost for dynamic programming and building optimal tree Thank you for raising this concern. The time for building a Sequoia tree has two parts: - *Offline Dynamic Programming*: Fixing the draft/target model and the workload, our dynamic programming can be conducted before inference (or offline). It is a one-time preprocessing effort and does not introduce extra overhead during inference. Furthermore, our dynamic programming can run very fast, taking less than 20s to generate a tree as large as 1536, and less than 5s for trees of size 64 or 128. - *Tree building during inference*: The time for building the tree is determined by the inference cost of the draft model and the depth of the tree (as we analyzed in Section 4.1: Hardware Optimizer). In this sense, building a Sequoia-optimal tree of depth D should cost the same time as building a k-ary tree of depth D. ### Q2: Sequoia’s performance at various batch sizes Thank you for this question. As you have correctly pointed out, Sequoia gives the largest gains for small batch sizes. This is because at small batch sizes, decoding is more memory-bound, and thus one can afford to verify a large tree without increasing verification time; by discovering the optimal structure for this large tree, Sequoia gives large speedups over baselines. Conversely, for larger batch sizes, the speculation budget is smaller, and the gains of Sequoia relative to other speculation tree structures (chains, k-ary trees, etc.) are much smaller—we show this in Figure 1(a). 
In a real inference system with continuous batching, one could use Sequoia to determine the optimal tree for any batch size (as in Liu et al, 2024 [1]), thus attaining strong performance across the board. [1] Xiaoxuan Liu, Cade Daniel, Langxiang Hu, Woosuk Kwon, Zhuohan Li, Xiangxi Mo, Alvin Cheung, Zhijie Deng, Ion Stoica, Hao Zhang. Optimizing Speculative Decoding for Serving Large Language Models Using Goodput. CoRR abs/2406.14066, 2024. ### Q3: Different trends for SpecInfer, Top-K and Sequoia Thank you for raising this thoughtful question! In short, SpecInfer’s performance gets worse at low temperature, whereas Sequoia and Top-K sampling improve at lower temperatures, because these methods use different sampling and verification algorithms. SpecInfer uses sampling with replacement, Top-K uses top-k sampling and Sequoia uses sampling without replacement. As a result, at low temperatures SpecInfer is the only method that would repeatedly sample an incorrect token, leading to poor performance. We will now give a more detailed answer to your question, by first discussing how top-k sampling and SpecInfer sampling compare at both low and high temperatures, and then explaining how Sequoia is able to get “the best of both worlds” and perform well across both low and high temperatures. **In a low temp regime**, the token with the largest probability in the target model’s output will account for over 95% of the probability mass. This token is very likely to appear in the top-k tokens of the draft model (even if it is not the top-1 token). In this case, Top-K sampling can easily get accepted. However, for SpecInfer’s sampling with replacement, since the temperature is low, SpecInfer will keep drafting the same token. When the drafted token is not exactly the top-1 token of the target model, SpecInfer will get rejected. 
To summarize, in the low temp regime, Top-K requires that the top-1 token of the target model is among the top-k tokens of the draft model, while SpecInfer requires the top-1 of the target model to be exactly the same as the top-1 of the draft model, leading to worse performance. The Top-K method benefits from its cover property (Section 3.2.2). **In a high temp regime**, let’s consider an extreme case (temp $\rightarrow \infty$), i.e. the outputs of the target model and draft model are totally random. In this case, it’s nearly impossible for Top-K to get accepted, as its total acceptance chance is just K / V, where V is the vocabulary size and K is the number of proposed tokens. However, for SpecInfer, all the draft tokens will be accepted, since the target and draft token probabilities are the same (1/V). The SpecInfer method benefits from its optimal transport property (Section 3.2.2). Sequoia, with sampling without replacement, behaves more like Top-K at low temp and more like SpecInfer at high temp, as it has both the cover property and the optimal transport property. That is why Sequoia is able to perform well across a wide range of temperatures. The trend of the acceptance rates of the three methods is shown in Figure 3, which verifies the above claims.
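The two regimes above can be checked numerically. The following toy sketch (made-up logits and vocabulary size, not the actual models or verification algorithms) computes the cover probability of a top-K proposal and the classic single-draft acceptance rate $\sum_i \min(p_i, q_i)$ at a low and a high temperature:

```python
import numpy as np

rng = np.random.default_rng(0)
V, K = 50, 8  # toy vocabulary size and proposal width

def softmax(logits, T):
    z = np.exp((logits - logits.max()) / T)
    return z / z.sum()

target_logits = rng.normal(size=V)
draft_logits = target_logits + 0.5 * rng.normal(size=V)  # imperfect draft

for T in (0.1, 10.0):
    p = softmax(target_logits, T)     # target distribution
    q = softmax(draft_logits, T)      # draft distribution
    topk = np.argsort(q)[-K:]         # draft's top-K proposal (cover property)
    cover = p[topk].sum()             # chance the target token is covered
    overlap = np.minimum(p, q).sum()  # single-draft speculative acceptance
    print(f"T={T}: cover={cover:.3f}, overlap={overlap:.3f}")
```

In the uniform limit (temp $\rightarrow \infty$, so $p = q = 1/V$), `overlap` is exactly 1 while `cover` is exactly K/V, matching the two extreme cases described above.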
Summary: This paper proposed an improvement on tree-based speculative decoding methods to make the number of accepted tokens scale roughly logarithmically with the number of tokens generated by the draft model. The authors provided theoretical and empirical justifications for the tree construction and verification procedures. The experiments show this method has a good speedup over the naive method, and it can generate more tokens each time. Strengths: 1. The paper has a strong theoretical guarantee for the scalability. 2. The method works for different scenarios with different temperatures. 3. The dynamic programming problem can be pre-computed offline. Weaknesses: 1. The algorithmic difference from existing tree-based methods is marginal. 2. The speedups are compared against naive methods. The speedup over SOTA is not provided. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could the authors provide more reasons why they don't compare with the multiple-draft-model version of SpecInfer? 2. Could the authors provide more justification for the validity of the positional acceptance assumption? What's the impact of this assumption theoretically and empirically? What is the distribution being used if the acceptance probability does not depend on the token t? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are acknowledged but not fully addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review. We are glad that the reviewer found our work to have strong theoretical guarantees for scalability and good empirical results. Also thanks for noticing that our dynamic programming algorithm can be pre-computed offline. We have tried to carefully address your questions. We hope the reviewer can consider raising their score in light of our response. ### Q1: Differences between Sequoia and existing tree-based methods Thank you for raising this question. Sequoia is designed to be a scalable and robust speculative decoding method that can yield large speedups in settings where the speculation budget is large (e.g., small batch serving, and offloading). In these settings, we show how we can optimally construct very large trees (e.g., 128 tokens for on-chip, and 768 for offloading), and that the number of accepted tokens keeps growing as the tree size grows—in Figures 1, 4, and 6 (and Theorem 3.6) we show that the number of accepted tokens can grow roughly logarithmically with the number of speculated tokens in the tree. We believe that Sequoia could be even more powerful on future hardware—the gap between computation and memory bandwidth is getting wider (see Figure 1), thus allowing a larger speculation budget. Previous works like SpecInfer, Eagle, and Medusa apply shallow and small trees without studying the problem of scalability. By leveraging Sequoia’s tree search algorithm, our empirical results show that Sequoia scales well to arbitrary tree sizes. In the offloading setting on L40, which has a large budget, Sequoia can outperform SpecInfer by 51% on average. In our work, in addition to improving the structure of the tree, we show that an improved sampling and verification algorithm can outperform others (SpecInfer, TopK) in terms of scalability and robustness. 
As described in Figure 4 and Table 5 in the appendix, we can see that Sequoia achieves the largest speedups across all temperatures, attaining up to **1.65×** and **1.27×** speedup relative to SpecInfer and top-k sampling, respectively. ### Q2: Speedup over SOTA Thank you for raising this issue. In the last columns of Tables 1/2, we have presented the speedups of SpecInfer, which is a very strong baseline. In Table 1, we show that compared to SpecInfer, Sequoia attains speedups of 5% to 30% in the A100 on-chip setting (avg 22%). In Table 2, we show that Sequoia attains speedups over SpecInfer of 36% to 62% in the L40 offloading setting (avg 51%). More A100 on-device results can be found in Table 4 in the appendix. We will add relative speedups compared to SpecInfer in the revised version. In addition, since our method does not improve/train draft models, we only compare with draft-model-agnostic baselines. Draft-model improvement methods like Eagle, Glide, and Medusa are orthogonal to ours and can be combined to achieve better performance. ### Q3: Reasons for not comparing with multiple-draft version of SpecInfer Thank you for raising this question! We have the following three reasons. 1. *Draft model availability*: Often only a single draft model is available (e.g., Llama3-8B as a draft model for Llama3-70B), and it would thus require significant time/energy for practitioners to train additional draft models. 2. *Performance*: As compared in [SpecInfer](https://arxiv.org/pdf/2305.09781v3) (Appendix A, Table 4), most of the time one draft model outperforms multiple draft models. 3. *System*: SpecInfer serves each draft model on one GPU. However, every experiment in Sequoia is conducted on a single GPU, which means if we want to use the multiple-draft version, we need to run these draft models sequentially. This will further reduce the performance of the multi-draft version of SpecInfer. 
### Q4: Positional Acceptance Assumption **Validity of the positional acceptance assumption** The positional acceptance assumption states that the probability of a verification algorithm accepting a token $t$ which is the $k^{th}$ child of an already accepted token depends only on the value of $k$. This is a simplifying assumption, given that the probability of a specific token getting accepted additionally depends on the token and context. However, we can consider the average acceptance rates, across many tokens/contexts, for sampled tokens, as a function of their position $k$. In practice, we observe that this average is quite stable. For example (JF68M for Llama-2-7b, CNN, T=0.6, width=8), we show that when we measure the average acceptance rate vectors across 10 different groups of 200 prompts each, the variance across these groups is quite small. - Variance Vector: [0.0067, 0.0030, 0.0024, 0.0015, 0.0010, 0.0011, 0.0008, 0.0007] - Acceptance rate Vector: [0.5608, 0.1077, 0.0539, 0.0343, 0.0236, 0.0186, 0.0146, 0.0122] - Relative Error: [1.2%, 2.8%, 4.4%, 4.5%, 4.2%, 5.9%, 5.3%, 5.4%] **Impact of positional acceptance assumption** - *Theoretically*: we pointed out that, in tree-based speculation methods, the probability of getting accepted is a function of the position of the speculated token. As a result, the optimal tree cannot be a balanced one (e.g. k independent chains). This is also the intuition for why we need, and why we can, search for an optimal tree. - *Empirically*: With the expected acceptance probability for each position (i.e. our acceptance vector, vector P in Algorithm 1), we can pre-compute the expected accepted tokens to search for the optimal tree structure (Algorithm 1). For each experiment, we sample a subset of 200 sentences to calculate acceptance vectors and feed them into Algorithm 1 for tree searching.
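As an illustration of how an acceptance vector can drive tree search, here is a simplified budget-allocation dynamic program in the spirit of what Algorithm 1 describes (my own minimal recursion, not the paper's exact algorithm): given `p[k]`, the measured probability that the k-th child of an accepted node is itself accepted, it computes the maximum expected number of accepted tokens for a tree with `n` nodes.

```python
from functools import lru_cache

def optimal_tree_value(p, n):
    """Max expected accepted tokens for a speculation tree with n nodes,
    assuming the k-th child of an accepted node is accepted w.p. p[k]
    (positional acceptance), independently along each root-to-node path."""
    K = len(p)

    @lru_cache(maxsize=None)
    def best(budget):
        # Value of a subtree with `budget` nodes: its root counts as 1
        # accepted token, plus the best allocation of the rest to children.
        if budget == 0:
            return 0.0
        return 1.0 + alloc(budget - 1, 0)

    @lru_cache(maxsize=None)
    def alloc(budget, k):
        # Distribute `budget` nodes among child positions k..K-1; each
        # child's subtree value is discounted by its acceptance prob p[k].
        if budget == 0 or k == K:
            return 0.0
        out = alloc(budget, k + 1)             # give position k nothing
        for m in range(1, budget + 1):         # give m nodes to child k
            out = max(out, p[k] * best(m) + alloc(budget - m, k + 1))
        return out

    return best(n)
```

With `p = (0.6, 0.2)` and a budget of 3 nodes, the recursion puts both spare nodes under the first child (value 1.96) rather than spreading one node to each child (value 1.8), illustrating why the optimal tree under positional acceptance is unbalanced.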
Summary: In this paper, the authors propose a novel speculative decoding algorithm to accelerate LLM generation. By leveraging the positional acceptance assumption and dynamic programming, they can determine an optimal tree topology given the tree size and depth. The experiments demonstrate that the proposed method outperforms previous works across various settings. Strengths: 1. The paper is well-written and easy to follow. 2. The proposed method is supported by robust theoretical analysis and algorithm design. 3. Strong and comprehensive empirical results validate the proposed methods. Overall, this paper demonstrates novelty, soundness, and a significant contribution to the field. Weaknesses: 1. The experimental setup for offloading is not clearly explained. It is unclear if the draft model is placed on a GPU while part of the target model is on the GPU and the other part on the CPU. If this is the case, the speedup is understandable since the throughput of the target model would be very low, and the draft model can run very fast. 2. The comparison with the SpecInfer algorithm may not be fair. In Table 1, the size of the SpecInfer tree is 5x8=40, which is much smaller than the Sequoia tree. On the other hand, in Table 2, the size of the SpecInfer tree is 16x48=768, the same as the Sequoia tree. However, the length of SpecInfer would be very long (48) in this case, which seems impractical. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. What is the standalone throughput of the draft and target models under different configurations? This information would help readers understand where the speedup comes from. 2. How do you determine the best configuration of SpecInfer for comparison? 3. How accurate is the acceptance rate vector measured with 200 examples? 4. Should the dynamic programming algorithm also consider the standalone throughput of the target and draft models? 
For instance, if the draft model is exactly the same as the target model, each element in the acceptance rate vector would be close to one, as the draft model can generate the same output as the target model. In this case, the tree will keep growing, but the more tokens the draft model generates, the slower it will become. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Please refer to the previous sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review. We are glad that the reviewer found our work **easy-to-follow**, with **comprehensive empirical results as well as theoretical analysis**. We have tried to carefully address your questions. We hope the reviewer can consider raising their score in light of our response. ### Q1: Clarifying the offloading setting Thank you for pointing out our unclear description. In our offloading setting, we perform layer-wise offloading for the target model (the default setting of deepspeed-zero) and the draft model is on-device. So your understanding is correct: in this setting, the draft model is very fast (24.2ms) compared to the offloaded target model (5.7s). With Sequoia, we can accelerate offloading-based decoding from 5.7s/token to 0.6s/token, which is more tolerable for running a large model on a consumer GPU. Furthermore, the setting of offloading (a big gap between FLOPs and memory bandwidth) also simulates the trend of future hardware, as shown in Figure 1(a). ### Q2: Comparison with SpecInfer Thank you for your question about our SpecInfer baseline. 
Below we show, through 3 sets of experiments, that Sequoia outperforms SpecInfer when we perform thorough sweeps for the SpecInfer tree configuration: **(1) Sweep of SpecInfer tree structure** Here, we use JF68M as the draft model, and Llama2-7b as the target, and sweep a very wide range of SpecInfer tree structures (for both greedy and stochastic): Greedy Decoding (T=0.0): |Width/Depth| 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | |---------|----|----|----|----|----|----|----|----| |1 | N/A| N/A| N/A|3.09x|3.14x|2.75x|1.94x| 1.19x| |2 |N/A | N/A|2.95x|3.36x|3.46x|2.69x|1.74x| | |4 |N/A |2.40x|3.14x|3.46x|3.41x|2.47x| | | |8 |1.88x|2.44x|3.14x|**3.70x**|3.03x| | | | |16 |2.00x|2.55x|3.27x|3.14x| | | | | |32 |1.86x|2.57x|2.81x| | | | | | |64 |1.92x|2.22x| | | | | | | |128 |1.68x| | | | | | | | Stochastic Decoding (T=0.6): |Width/Depth| 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | |---------|----|----|----|----|----|----|----|----| |1 | N/A| N/A| N/A|2.08x|1.87x|1.48x|1.11x| 0.69x| |2 |N/A | N/A|2.14x|2.20x|1.89x|1.46x| 1.07x| | |4 |N/A |1.99x|2.30x|2.28x|1.95x|1.53x | | | |8 |1.73x|2.09x|2.42x|**2.42x**| 2.14x | | | | |16 |1.78x|2.07x|2.41x|2.18x| | | | | |32 |1.78x|2.08x|2.24x| | | | | | |64 |1.73x |2.04x | | | | | | | |128 |1.61x | | || | | | | Sequoia attains speedups of 4.04x (greedy) and 3.18x (stochastic), outperforming all tree configurations of SpecInfer, including 5x8 tree (3.45x greedy, 2.47x stochastic). 
**(2) More SpecInfer results for A100** For on-device settings, we add the results for the 8x8 tree and 16x8 tree (SpecInfer) as follows, Greedy Decoding (C4, T=0) |Draft|target| Tree Config | Sequoia | SpecInfer(5x8) | SpecInfer(8x8) | SpecInfer(16x8) | |----|----|----|----|----|----|----| |JF68M| Llama-2-7b| (128,10) |**4.04x**| 3.45x| 3.70x| 3.16x| |JF68M | Llama-2-13b | (64,9) | **3.73x** | 3.30x | 3.10x| 2.4x| |SL1.3B| Vicuna-33B | (64,6) | **2.27x** | 1.83x| 1.73x|1.45x| Stochastic Decoding (C4, T=0.6) |Draft|target| Tree Config| Sequoia | SpecInfer(5x8) | SpecInfer(8x8) | SpecInfer(16x8) | |----|----|----|----|----|----|----| |JF68M| Llama-2-7b| (128,7) | **3.18x**| 2.47x| 2.45x| 2.18x| |JF68M | Llama-2-13b | (64,7) |**3.19x** | 2.48x | 2.42x|1.81x| |SL1.3B| Vicuna-33B | (64,6) |**2.16x** | 1.64x| 1.52x| 1.32x | SpecInfer’s performance already degrades by enlarging the tree from 5x8 to 8x8. For SpecInfer, although the accepted tokens will marginally increase, the cost of verification/draft proposing will increase more. **(3) More SpecInfer results for L40 offloading** In Table 2 of the submission, we compare Sequoia trees of size 768 to SpecInfer trees of size 768, composed of 16 independent sequences of length 48 ("16x48"). Here we additionally compare to SpecInfer trees of shape 32x24 and 24x32: |Draft|target| Tree Config| Sequoia | SpecInfer(16x48) |SpecInfer(32x24)|SpecInfer(24x32)| |----|----|----|----|----|----|----| |Llama-2-7b| Llama-2-70b| (768, 18)| **8.4x** (9.91) | 5.2x (7.03)| 5.5x (6.82) | 5.2x (6.66) | We will include a larger sweep in the revised version (these experiments are time consuming and require specific CPU/PCIEs that are often not available on cloud servers). Thank you for understanding! ### Q3: Standalone throughput of the draft and target models Thank you for pointing out this missing part. We will include the numbers in the revised version. 
The standalone per-token latencies are: JF68M: 0.5ms; Llama-2-7b: 24.2ms; Llama-2-70b: 5.7s. Our system is implemented in Huggingface with CUDAGraph and Static KV Cache (not as optimized as frameworks such as vLLM and deepspeed). ### Q4: Accuracy of acceptance vector measurement Empirically, this measurement is accurate. Below, we show that when we measure the average acceptance rate vectors across 10 different groups of 200 prompts each, the variance across these groups is quite small: Setting: JF68M for Llama-2-7b, CNN, T=0.6, width=8 - Variance Vector: [0.0067, 0.0030, 0.0024, 0.0015, 0.0010, 0.0011, 0.0008, 0.0007] - Acceptance rate Vector: [0.5608, 0.1077, 0.0539, 0.0343, 0.0236, 0.0186, 0.0146, 0.0122] - Relative Error: [1.2%, 2.8%, 4.4%, 4.5%, 4.2%, 5.9%, 5.3%, 5.4%] ### Q5: Should the dynamic programming algorithm consider draft/target throughput? Yes. We discussed this in Section 4.1 (Hardware Aware Optimizer). In the case you mentioned, the optimizer will choose the shallowest tree (depth = 0) even if the acceptance rate is high, which means we do not use speculative decoding at all. --- Rebuttal Comment 1.1: Comment: Thank you for the responses. The authors have provided sufficient explanations and additional results, which have addressed most of my concerns. One additional limitation that comes to my mind is the context length: in this work, the context length is relatively short (128). It is unclear how it would perform with a longer context length (up to 8k). This could be explored in future work. --- Reply to Comment 1.1.1: Comment: Thank you for your response and insightful question! For long context serving, the memory bottleneck shifts from model parameters to KV cache. The increase in context length is **orthogonal** to Sequoia, since it barely increases the arithmetic intensity of the decoding process and thus does not reduce the speculation budget. A recent work, Sun et al. 2024 [1], discusses self-speculation for long contexts. 
We also conducted a simulation based on their methods, finding that Sequoia can help to accept about **30%** more tokens in their offloading setting for Llama-2-7B-128K (on PG19) than k-ary trees with a 256-512 speculation budget, demonstrating Sequoia's scalability in this scenario. Evaluating Sequoia on a wide range of contexts with various draft models is interesting future work. **Simulation Results** |Budget| 256 | 384| 512| |----|----|----|----| |Sequoia|15.2|16.1|16.6| |16-ary tree|12.1|12.4|12.6| The numbers denote #accepted tokens. Thank you once again for your feedback. [1] Sun, Hanshi, Zhuoming Chen, Xinyu Yang, Yuandong Tian, and Beidi Chen. TriForce: Lossless acceleration of long sequence generation with hierarchical speculative decoding.
Rebuttal 1: Rebuttal: We thank all the reviewers [**R1** (uMCB), **R2** (qoEg), **R3** (XEn9), **R4** (yRTd)] for their thoughtful and highly supportive feedback! We were glad that the reviewers found the work **novel and meaningful** [R1,R3,R4], believed our theoretical analysis was **detailed, robust and strong** [R1, R2, R4], felt the experimental results were **sound and showed good speedups** [R1, R2, R3, R4], and found the presentation **easy to follow** [R1]. We have updated the paper to incorporate constructive suggestions, which will be reflected in the revision. We summarize the major changes: 1. **Comparison with SpecInfer/Baselines** [R1, R2]: - We added relative speedup numbers over SpecInfer, achieving avg 22% for A100 and 51% for offloading. - We swept a wide range of tree configurations for SpecInfer for a fairer comparison. We added this part as an ablation. 2. **Positional Acceptance Assumption** [R1, R2]: To further clarify this assumption and its connection to our algorithm, we added a discussion about the acceptance rate vector we measured for each experiment and its variance (1~5%, indicating the measurement is accurate).
NeurIPS_2024_submissions_huggingface
2024
Moving Off-the-Grid: Scene-Grounded Video Representations
Accept (spotlight)
Summary: This work presents a video representation learning approach where tokens are decoupled from explicit grid locations in the video sequence. Rather than simply extracting patches to construct tokens and applying self-attention across blocks of frames, the proposed model predicts sub-pixel motion given a history of frames to advect latent features throughout the video. Once those grid-aligned features are advected to the next frame to their non-grid-aligned positions, cross attention can be applied between the grid-aligned observed features and the advected features in order to correct the result. The authors claim three contributions: * Introduction of MooG * Qualitative validation of the approach * Quantitative evaluation on a number of different downstream tasks Strengths: The idea seems like a fundamental one that the community will use. It applies to a wide variety of tasks. The concept is novel and general. There are a lot of great qualitative experiments that help me understand that this method is implicitly learning a tracking representation. Weaknesses: I had a hard time understanding the point of the corrector for a while. If the predictor functions as intended, shouldn't a reconstruction loss applied to D(P(z)) be enough? But after some more thought, I guess the intent isn't to build an open-loop video predictor (although I suppose the authors could evaluate it as such?), but instead the predictor is just a way to establish alignment between features in neighboring frames so that the cross attention of the corrector is more effective? I think the prior work could go farther back in history in order to frame this proposed prediction-correction framework as being related to old-school recursive filters. For example, if I were told how this design is similar to a Kalman filter, then I'd have a much clearer picture of what is trying to be done. I don't think Fig 1 does enough to clarify the system. 
I felt it was still necessary to read eqn 1 and 2 to understand what exactly was going on. Some ideas include: (1) include arrows going up in the direction of image -> encode -> state and state -> decode -> image. (I originally read this as left-to-right top-to-bottom and realized I went the wrong direction once I saw encode and decode -- arrows would prevent this). (2) Make the relationship between prediction and correction clear. It's currently presented as two colorful feature map sets, which doesn't communicate much. Once again, if it's established early on what the predict-correct framework looks like, then this is obvious, but I failed to grok it on the first read through. Technical Quality: 3 Clarity: 3 Questions for Authors: I didn't get a good impression of what kind of motion dynamics the predictor properly supported. It looked like there was some overshooting happening in the gifs, but it's not clear to me if this is a real issue. Can this be clarified? One ablation that would be useful is to change the backprop through time length. It would be great if there was evidence that only 2 or 3 frames were enough to get the predictor to estimate dynamics. Another question could be, what kind of dynamics are modeled with 2, 3, 4, 8, etc. frames and are there diminishing returns? Can this be clarified? Given that this model is deterministic and subject to over-blurring in the reconstruction task, have the authors explored other losses such as an LPIPS loss which might be more invariant to small misalignments? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: looks good to me Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive feedback. We were pleased to hear that you consider our contribution to be novel, general, and the ideas widely applicable. We appreciate that you recognize the strength of our qualitative experiments to help understand the method’s working. **“I had a hard time understanding the point of the corrector for a while. [...] the predictor is just a way to establish alignment between features in neighboring frames so that the cross attention of the corrector is more effective?”** Thank you for pointing out this confusion. We will clarify this in the updated version of the paper. Indeed, as is mentioned in the paper, the separation between the corrector and predictor is slightly artificial if the predictor is not unrolled in an open-loop prediction set-up. Since this is a deterministic model, unrolling in an open loop would produce blurry predictions quite quickly, but this is indeed an interesting future direction. The purpose of the corrector is to integrate new information into the state based on the current observation. However, if we were to decode from this state it could create a shortcut that the model could exploit (the model could learn to ignore the temporal prediction component, i.e. the alignment between features in neighboring frames, and just learn to auto-encode via the corrector). This is why we decode from the predicted state, which forces temporal prediction. For readout of down-stream tasks, we additionally make use of the corrected state, which combines both the prediction and the current observation. **“I think the prior work could go farther back in history in order to frame this proposed prediction-correction framework as being related to old school recursive filters. 
For example, I think if I were told how this is designed similar to a Kalman filter, then I'd have a much clearer picture of what is trying to be done.”** Thank you for pointing out the connection to Kalman filters and recursive filters. Indeed, there are some resemblances worth pointing out that we could comment on in the related work section. We will revise the text accordingly, which hopefully improves the overall presentation. **“I don't think Fig 1 does enough to clarify the system.”** Thank you for pointing this out. We intend for the revised draft to make the connection between the corrector and the predictor clearer. Further, we have updated the main model figure with this in mind, which can be found in the supplementary pdf attached to the general response. Please let us know if you have any suggestions for further changes. **“I didn't get a good impression of what kind of motion dynamics the predictor properly supported. It looked like there was some overshooting happening in the gifs, but it's not clear to me if this is a real issue. Can this be clarified?”** What the predictor would learn exactly is difficult to quantify as it will depend on many factors - model capacity, training data and so on. We do observe that when there is fast motion the model tends to lose tracking and assigns a new token to the moving element. This makes sense considering that the amount of uncertainty grows with faster motion. Whether this is a real issue will similarly depend on the nature of the down-stream tasks. Evidently, for our current evaluation spanning several standard computer vision tasks, this is not a major issue. However, for more complicated dynamics the predictor could benefit from a more specialized architecture. **“One ablation that would be useful is to change the backprop through time length. It would be great if there was evidence that only 2 or 3 frames were enough to get the predictor to estimate dynamics. 
Another question could be, what kind of dynamics are modeled with 2, 3, 4, 8, etc. frames and are there diminishing returns? Can this be clarified?”** Thank you for this suggestion. Generally speaking, we’d anticipate that a minimum of 3 frames is needed to make good predictions as acceleration information requires 3 different measurements - with 2 frames only velocity information is available. We agree that it would be interesting to ablate this hyper-parameter and we will commit to doing so in the updated version of the paper. Previously we have explored training with a probabilistic “stop-gradient” that activated at random times through the sequence to effectively train on 3-4 frames long sequences but not overfit to a specific length. However as the model development continued, we no longer found this to be needed. **“Given that this model is deterministic and subject to over-blurring in the reconstruction task, have the authors explored other losses such as an LPIPS loss which might be more invariant to small misalignments?”** Thank you for this suggestion. In fact, we have not explored LPIPS or other perceptual losses to mitigate this, though this would be an interesting direction for future work. Integration of perceptual losses into MooG is not straightforward as we decode a random subset of pixels at every time step for efficiency reasons, as pixels directly serve as queries in the transformer decoder. Future work could explore decoding (latent) patches instead of raw pixels to mitigate this issue, which would make the model compatible with perceptual losses. A related direction that is worth pursuing is to have the state be the outcome of a sampling process (eg. as in a diffusion model or VAE), which might also help address blurry predictions. --- Rebuttal Comment 1.1: Title: Concerns addressed Comment: My concerns were addressed in the rebuttal and I will keep the rating of weak accept.
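The recursive-filter analogy raised in this review thread can be made concrete with a toy alpha-beta tracker, the classic predict-correct recursion and a close cousin of the Kalman filter. This sketch is purely illustrative of the predict/correct split and of why decoding from the *predicted* state forces temporal prediction; it is not MooG's architecture, and all names and constants are invented:

```python
# Toy alpha-beta tracker illustrating the predict-correct split discussed
# above (NOT MooG's architecture; constants are invented for this sketch).
import random

random.seed(1)
alpha, beta = 0.5, 0.1         # correction gains for position / velocity
x_est, v_est = 0.0, 0.0        # estimated position and velocity
true_pos, true_vel = 0.0, 1.0  # ground-truth constant-velocity motion

pred_errors = []
for t in range(60):
    true_pos += true_vel                        # the world advances one step
    z = true_pos + random.gauss(0.0, 0.2)       # noisy observation ("frame")

    x_pred = x_est + v_est                      # PREDICT (cf. the predictor)
    pred_errors.append(abs(x_pred - true_pos))  # loss is on the prediction,
                                                # so auto-encoding z is useless
    r = z - x_pred                              # CORRECT (cf. the corrector)
    x_est = x_pred + alpha * r
    v_est = v_est + beta * r

# Early predictions are poor (velocity unknown); late ones track the motion.
print(round(pred_errors[0], 2), round(sum(pred_errors[-20:]) / 20, 2))
```

Because the error is measured on the prediction rather than on the corrected estimate, the loop cannot collapse into simply copying the current observation, which mirrors the shortcut argument made in the rebuttal.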
Summary: This work discusses Scene-Grounded Video Representations. Compared with current vision models that make each layer consist of tokens organized in a grid-like fashion, the authors introduce Moving Off-the-Grid (MooG), a self-supervised video representation model that proposes an alternative approach. The novelties are: Introducing Moving Off-the-Grid (MooG), a novel transformer-based recurrent video representation model designed to learn off-the-grid (OTG) representations using a straightforward next-frame prediction loss; illustrating how this representation enhances a range of downstream vision tasks, including point tracking, monocular depth estimation, and object tracking. Strengths: + This work allows individual tokens to consistently track elements of the scene through videos of arbitrary length and “anticipate” where a scene element will be observed next. + The authors designed MooG to process an unlimited number of video frames while consistently maintaining a scene-grounded representation of the video. + Tokens within the latent state are inherently associated with particular pixel positions. The model can obtain corresponding token representations based on different features of the image. Weaknesses: - In line 135, you said “It is, however, crucial that the image is decoded only from the predicted state—decoding the current frame from the corrected state reduces the problem to simple auto-encoding and hurts representation quality considerably.” So how do you prove the effectiveness of the Corrector module? - What’s the difference between MooG and Deformable DETR? Does the Corrector play the same role as the offset module in Deformable DETR? Besides, do you treat the states as the tokens? - It seems that you used transformers (attention modules) in the Corrector and Predictor; how does the algorithm perform in tracking tasks? What is its speed? Can it meet real-time requirements? How is the performance in the multi-object tracking field? 
- In table 3, the performance of TAPIR on Davis-full is better than MooG, so could you please provide more detailed explanations? Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the Weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do some analysis and the work will not have any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive comments. **“how to prove the effectiveness of the Corrector module?”** Thank you for your comment. Note that the corrector is the only part of the model that has access to the observation at the current time-step, i.e. to “correct” the prediction, and hence is the only channel for integrating new information. However, it also introduces a shortcut that bypasses the previous state when using the corrected state for reconstructing the observation. This is what necessitates both a predicted state (to be used for reconstruction that requires temporal prediction) and a corrected state (to be used for readouts that incorporate the current observation and is maximally informative). **“What’s the difference between MooG and Deformable DETR? Does the Corrector play a same role as the offset modal in Deformable DETR? Besides, do you treat the states as the tokens?”** One of the key differences between MooG and Deformable DETR is that Deformable DETR utilizes sparse spatial sampling of deformable convolution on the 2D feature map (with explicit multi-scale features), while MooG models the image as an off-the-grid set of latent representations. There is neither sparsity nor an offset in MooG’s corrector, whose purpose is to “correct” the state predicted from the previous timestep based on the current observation. The corrector cross-attends to the feature maps and updates the set of latent features through transformer layers directly. MooG is also easier to implement: it consists mainly of attention blocks, with no need for any specialized kernel (while Deformable DETR requires the specialized [deform_attn_cuda.cu](https://github.com/fundamentalvision/Deformable-DETR/tree/main/models/ops/src/cuda)). **“It seems that you used transformer (attention module) in Corrector and Predictor, and how does the algorithm perform in tracking tasks? What is its speed? 
Can it meet real-time requirements?”** MooG is a very lightweight model, totalling approximately only 35M parameters during training (and fewer during inference). In particular, the transformer modules you mention have only 2 or 3 layers each (predictor and corrector), while the conv-net has only 6 layers, which should be no problem to run in real time on modern hardware. Further, for readouts (like point/box tracking) the pixel decoder is not needed at inference time. The model performs well even when reducing the number of available tokens considerably (as shown in Figure 6 in Appendix C), which further improves speed. **“In table 3, the performance of TAPIR on Davis-full is better than MooG, so could you please provide more detailed explanations?”** Thank you for pointing this out. Indeed, because of the auto-regressive nature of the readout module, error accumulates when unrolling readouts over long sequences. This is not an issue with the base representation (as the corrector receives continuous observations), but we have observed how the read-out module ends up drifting eventually, i.e. as the initial conditioning signal (box or point tracks in the first frame) becomes less informative. We note that TAP-Net and TAPIR are domain-specific SOTA approaches that leverage specialized architectures, such as explicit cost volumes, to mitigate such issues for a particular domain. They further have access to both past as well as future states, while MooG is a causal model with access only to past frames, i.e. it can be used for online tracking. MooG learns representations that are useful for a variety of downstream tasks, and we have not incorporated domain-specific improvements in the readout decoder. One interesting direction for future work might be how to make MooG representations perform competitively on one single domain by incorporating more specialized components in the readout decoder. 
We will update the draft to give more context to this comparison and why we do not necessarily expect to beat domain-specific SOTA methods with MooG.
Summary: The authors present a self-supervised video representation learning strategy. A grid-structure-free feature extractor is trained using a next-frame prediction objective. A corrector module extracts per-frame features. A predictor module predicts the next frame's features. A decoder (with a suitable grid-free architecture) reconstructs frames. A sparse (for efficiency) reconstruction loss is applied to the decoder output as the learning signal. The method, MooG, appears to bind features to specific structures in the visual inputs and track these across frames. The authors present evaluations for tracking and depth estimation. Strengths: 1. Interesting and novel idea for self-supervised representation learning from videos 2. Clear explanation of methodology and thorough details Weaknesses: 1. **Evaluations:** Learned representations are evaluated on a) niche tasks, b) with only image SSL baselines. The proposed method learns from videos, unlike the baselines. Please compare a) on generic tasks (evaluate learned representations for classification, object detection, or segmentation) and b) against video SSL methods (on the identified tasks and generic ones). In fact, in the introduction, the authors argue that tasks like object detection and segmentation need the kind of off-the-grid architectures being proposed. Evaluation on such tasks will strengthen the authors' case. 2. **Prior Video SSL works:** It is unclear how these representations compare against video SSL methods [2,3,4,5]. Maybe try to apply some of these methods on the selected tasks to compare how MooG representations compare to them? Or evaluate MooG on tasks these are commonly used for. 3. **Related Works:** Consider discussing [1] which explores a similar idea of next-frame prediction in a latent space to learn good representations. Also [6] which learns local features using language and shows DINO-like grouping / tracking behaviour. 
[1] Sequential Modeling Enables Scalable Learning for Large Vision Models, CVPR 2024 [2] Time Does Tell: Self-Supervised Time-Tuning of Dense Image Representations, ICCV 2023 [3] Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video, ICLR 2024 [4] Self-supervised Video Transformer, CVPR 2022 [5] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training, NeurIPS 2022 [6] Perceptual Grouping in Contrastive Vision-Language Models, ICCV 2023 Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Ok Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive comments. We are pleased to find that you consider MooG to be an interesting and novel idea for self-supervised representation learning from videos, and that the presentation was clear to you. **“Please compare a) on generic tasks (evaluate learned representations for classification, object detection, or segmentation)”** Thank you for suggesting this comparison. MooG was designed with spatio-temporal grounding in mind, without placing a focus on high-level semantic features, and we chose evaluation domains that would demonstrate this most clearly. Keeping this in mind, we chose point tracking, box tracking, and depth estimation as low-level tasks that require little semantic understanding of a scene, which aligns well with our goal of learning scene-grounded, temporally-consistent representations. In preliminary experiments we observed that features learned by MooG are not well-suited for semantic tasks such as action classification in combination with linear readout heads. This is potentially due to the local nature of the representation or the simplicity of the readout setup we tried. We are happy to include a discussion of these preliminary observations in the paper and make suggestions for improvements for future follow-up work. **“It is unclear how these representations compare against video SSL methods [2,3,4,5]. Maybe try to apply some of these methods on the selected tasks to compare how MooG representations compare to them?”** Thank you for pointing this out. We have now added a comparison to VideoMAE v2 in the supplementary pdf attached to the main response, which is a more recent version of [5]. We consider representations from 3 [publicly available VideoMAE v2 checkpoints](https://github.com/OpenGVLab/VideoMAEv2/blob/master/docs/MODEL_ZOO.md): the ViT-small and ViT-base variants, which contain 22M and 83M parameters respectively, as well as a ViT-giant model (1B params). 
The smaller variants were obtained by distilling the predictions of the ViT-giant model. We note that MooG contains approximately 35M parameters, which includes the pixel decoder. There are other differences that further make it difficult to compare, for example MooG was pre-trained on MOVI-E, while the VideoMAE models were trained on a superset of all Kinetics videos. MooG was trained purely in a self-supervised manner, while the VideoMAE v2 teacher network was finetuned for action recognition, and the ViT-small and ViT-base models were initialized from finetuned networks too. Keeping these differences in mind, we observe how MooG performs significantly better than the ViT-small and ViT-base sized models that offer similar parameter counts. Compared to the teacher ViT-giant model that has 30x more parameters, MooG performs very well: it is better on MOVi points and Waymo boxes, but worse on MOVi depth and DAVIS points. Both methods perform the same on MOVi boxes. We are hopeful that scaling MooG to 1B+ parameters and larger scale pretraining may lead to further improvements, though we leave this for future work as mentioned in our limitations section. **“Related Works: Consider discussing [1] which explores a similar idea of next frame prediction in a latent space to learn good representations. Also [6] which learns local features using language which shows DINO like grouping / tracking behaviour.”** Thank you for pointing this out. We will make sure to contextualize our work further w.r.t. autoregressive next-frame prediction methods (e.g. regarding the reference you provided which uses a grid-based VQ-GAN representation [1]), and with perceptual grouping / object-centric methods in our related work section. 
In short: end-to-end object-centric grouping methods are indeed closely related (as alluded to in our related work section), but this line of work typically aims at grouping entire objects or well-defined parts into a single object – this includes methods such as GroupViT [7] and the paper you referred to [6]. For more general, fine-grained spatio-temporal tasks such as point tracking or monocular depth estimation, enforcing an object bottleneck (e.g. 1 token per object) may be too restrictive: we find that downstream task performance significantly increases by providing the model with more capacity in terms of number of “off-the-grid” vectors/tokens that it can use to encode the video. **References** [1] Bai et al., Sequential Modeling Enables Scalable Learning for Large Vision Models (CVPR 2024) [2] Salehi et al., Time Does Tell: Self-Supervised Time-Tuning of Dense Image Representations (ICCV 2023) [3] Venkataramanan et al., Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video (ICLR 2024) [4] Ranasinghe et al., Self-supervised Video Transformer (CVPR 2022) [5] Tong et al., VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training (NeurIPS 2022) [6] Ranasinghe et al., Perceptual Grouping in Contrastive Vision-Language Models (ICCV 2023) [7] Xu et al., GroupViT: Semantic Segmentation Emerges from Text Supervision (CVPR 2022) --- Rebuttal Comment 1.1: Title: Most concerns addressed; update to weak accept Comment: I thank the authors for the detailed rebuttal. The authors address most concerns here. I believe this paper will be valuable to the community, hence support accepting. Suggestions: Please update the paper with the explanation on why action recognition benchmarks is unfair to MooG. Highly recommend adding at least one line to the intro to explain this (while highlighting its unique strengths). 
It is useful to highlight the kinds of tasks these learned representations are ideal for, alongside limitations. Also, please mention this in detail somewhere in the paper. Note on comparison to video SSL methods: MAE-style pre-training is known not to produce strong grouping / tracking behavior (i.e. what DINO has). Consider evaluating against video SSL works building off DINO (e.g. [4] or preferably a similar more recent work). Maybe in later work or as an addition to the appendix.
Summary: This paper proposes a field-based method for video representation learning called MooG. Instead of propagating a discretized grid of features for every pixel location in the video, the method updates an arbitrary set of state tokens. These state tokens are used to parametrize a context dependent PerceiverIO-style “field”: a network that decodes the RGB values at a particular location (x, y) in the target frame conditioned on the “predicted” state tokens from previous frames and a position encoded (x, y) input. Since the state tokens parametrize a field, they are dubbed “off the grid”. The paper proposes to train such a model in a recurrent fashion using next frame prediction. Further, it describes methods to read-out the (potentially pre-trained) representations for downstream tasks. Results show qualitatively that the off-the-grid representations are “well behaved” through compelling visualizations and quantitatively work better than on-the-grid counterparts for a range of dense visual tasks, such as object tracking and depth prediction. Strengths: - The idea of combining recurrent state token estimation together with an off-the-grid field-like representation that is repeatedly queried across time and trained with next frame prediction is interesting and novel. - The methods section is simple and intuitive and the writing is clear and concise. At important steps (such as 130 – 132) where there could be room for confusion (eg. decoding from predicted tokens vs correct tokens), the details are highlighted to elucidate the key elements that make the method work. - A wide range of experiments are shown for three important dense prediction tasks: monocular depth estimation, points tracking and boxes tracking. Baselines are chosen appropriately, representing widely used and ubiquitous recent methods in the respective fields. 
Quantitative experiments show that MooG is a powerful representation learning method, and that it outperforms baselines in the frozen setting, which should be the dominant paradigm to evaluate a video pre-training model. - The qualitative visualizations are impressive and creative. That tracking (in the token attention sense) and segmentation features (in the PCA sense) emerge from using the limited token and off-the-grid inductive bias are exciting results. The paper shows many examples of these results. Weaknesses: - While appropriate baselines are chosen for all downstream evaluations, and comparison is made with on-the-grid representations that simply propagate features, baselines with architectures similar to MOOG have not been explored in a lot of detail. In the related work, the “off-the-grid” similarity between Perceiver IO and MooG is acknowledged, with the key difference being that MooG additionally implements recurrence of state tokens. One baseline that would be a closest comparison would be to train a Perceiver IO like model, except with spatio-temporal queries: in particular, this would entail conditioning on time and position. This could be trained simply like an auto-encoder (essentially reducing to Perceiver IO for video), or with next frame prediction. - Papers that use similar architecture but specialized for domain-specific video prediction tasks are missing from the related works. While MooG is a general purpose representation learning method, a thorough discussion of some domain specific models would be important to highlight key differences between those papers and the potential general purpose nature of MooG representations. Some of these papers are already cited (eg. CONDITIONAL OBJECT-CENTRIC LEARNING FROM VIDEO, Kipf et al. 2022), but are missing deeper discussion on fundamental differences to MooG. 
Technical Quality: 3 Clarity: 4 Questions for Authors: - Is the paper the first to show token binding to task related activities emerging unsupervised in a field-based transformer? - Are the baselines chosen for point tracking (TAP-Net, TAPIR) and for depth prediction (DPT, etc) the state of the art results in the domains? (Given MooG is a general purpose video representation learning method, I agree it would not be fair to compete against the best task-specific engineered methods, but a discussion on limitation and deficit to state of art (if any) would be useful). Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The limitations section is adequate and thorough (discussing disocclusions, occlusions, and alternate coarse / fine tasks possible on video), and future directions for improvement are also duly noted in Appendix A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive comments. We are pleased to hear that you found MooG interesting and novel, and recognized its considerable improvement over baselines in the frozen setting. We are glad that you found the writing and visualizations of high quality. **“One baseline that would be a closest comparison would be to train a Perceiver IO like model, except with spatio-temporal queries”** Thank you for suggesting this baseline, which is indeed a very sensible one. In fact it is close to one variant of the model we experimented with in the early phases of the project. At that time, we experimented with a non-autoregressive version of our model that mapped all encoded frames in parallel to a joint set of latent tokens (a single set for multiple time steps) and finally replicated this set across time steps with added temporal position encoding for decoding. Such an approach is in fact very similar to Perceiver IO for video (with some small design differences in encoder, decoder and position encoding). In preliminary experiments, we found that when evaluated on 8-frame DAVIS point tracking, both models performed roughly equally well. However, we faced challenges continuing the point tracking predictions of the parallel (Perceiver IO-style) model when moving beyond the 8-frame window that it was trained on. On the contrary, this was straightforward to achieve with MooG due to its autoregressive formulation. **“Papers that use similar architecture but specialized for domain-specific video prediction tasks are missing from the related works”** Thank you for pointing this out. We will deepen the discussion of prior works in the related work section with regard to domain-specific approaches (such as SAVi), and highlight similarities and differences to MooG where relevant. 
**“Is the paper the first to show token binding to task related activities emerging unsupervised in a field-based transformer?”** To the best of our knowledge, this paper is among the first to *explicitly study* this relationship in standard transformer architectures trained in a self-supervised fashion. That said, it is difficult to make a precise claim about this due to numerous (domain-specific) prior works such as SAVi and PARTS exploring architectures that display similar behavior at a coarser scale (i.e. object-level binding) and are at least “transformer-like”. It is also worth mentioning the recent body of work on 4D Gaussian splatting (e.g. Luiten et al., Dynamic 3D Gaussians, 2023), which aims to learn explicit 3D Gaussian representations that track elements of a dynamic scene, typically by using more explicit geometric constraints instead of generic transformer-based architectures. **“Are the baselines chosen for point tracking (TAP-Net, TAPIR) and for depth prediction (DPT, etc) the state of the art results in the domains? […] a discussion on limitation and deficit to state of art (if any) would be useful).”** TAPIR is very close to state of the art in the point tracking domain, and only outperformed by the very recent BootsTAP [1], which extends TAPIR by augmenting the pretraining dataset with unlabeled YouTube videos. The DPT architecture itself is commonly used in competitive monocular depth-estimation approaches such as the MiDaS line of work [2]. However, to achieve competitive performance, such approaches are usually pre-trained on multiple different depth estimation datasets and adopt more sophisticated losses to handle depth targets from varying sources in a single model. More recently, diffusion-based approaches have shown great promise for monocular depth estimation [3, 4], which move away from the DPT architecture. 
We agree that it would be useful to contextualize these baselines a bit better relative to SOTA in the field, and we will update the paper accordingly. [1] BootsTAP: Bootstrapped Training for Tracking-Any-Point [2] MiDaS v3.1 -- A Model Zoo for Robust Monocular Relative Depth Estimation [3] Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation [4] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. My questions and concerns have been adequately addressed, and I will maintain my score and recommendation for acceptance.
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful comments. We are pleased to hear that the reviewers find our paper to be well-presented (MZrN13, RPGH11, ykZT08), interesting (MZrN13, r1sm12), and novel (MZrN13, r1sm12, ykZT08). Reviewers MZrN13 and ykZT08 positively highlight the quality of the experimental evaluation. Reviewer ykZT08 further notes that the “idea seems like a fundamental one that the community will use”. We address the concerns raised by each reviewer individually. Further, the supplementary pdf includes several additional baselines and updated results: * We have included a comparison to DINOv1 and DINOv2 in the end-to-end setting, which was previously missing. It can be seen how both approaches perform considerably better than their frozen counterparts, yet MooG performs considerably better in both settings. * After submission, we discovered a bug in the “Grid Rec.” baseline where no gradients were being backpropagated into the encoder. After re-running, we now observe how “Grid Rec.” consistently outperforms the “Grid” baseline without recurrence, as one might expect. In the frozen setting (which is the main paradigm of interest), “Grid Rec.” continues to perform considerably worse than MooG. In the end-to-end setting, where we finetune the model on the downstream task of interest, we now observe how “Grid Rec.” and MooG perform comparably, suggesting that the added supervision can help balance out architectural differences. * Based on reviewer r1sm12’s suggestions, we have included a comparison to VideoMAE v2 [1]. We consider representations from 3 [publicly available VideoMAE v2 checkpoints](https://github.com/OpenGVLab/VideoMAEv2/blob/master/docs/MODEL_ZOO.md): the ViT-small and ViT-base variants, which contain 22M and 83M parameters respectively, as well as a ViT-giant model (1B params). The smaller variants were obtained by distilling the predictions of the ViT-giant model. 
We note that MooG contains approximately 35M parameters, which includes the pixel decoder. There are other differences that further make it difficult to compare, for example MooG was pre-trained on MOVI-E, while the VideoMAE models were pretrained on a superset of all Kinetics videos. MooG was trained purely in a self-supervised manner, while the VideoMAE v2 teacher network was finetuned for action recognition, and the ViT-small and ViT-base models were initialized from finetuned networks as well. Keeping these differences in mind, we observe how MooG performs significantly better than the ViT-small and ViT-base sized models that offer similar parameter counts. Compared to the teacher ViT-giant model that has 30x more parameters, MooG performs very well: it is better on MOVi points and Waymo boxes, but worse on MOVi depth and DAVIS points. Both methods perform the same on MOVi boxes. We are hopeful that scaling MooG to 1B+ parameters and larger scale pretraining may lead to further improvements, though we leave this for future work as mentioned in our limitations section. * Based on reviewer ykZT08's suggestion we have revised the main model figure, a draft of which is included. We will continue to make revisions based on reviewer input and any additional feedback is much appreciated. [1] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking Pdf: /pdf/40de86304bed6b6f3a130acd6f747cc0778bc2c4.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Reranking Laws for Language Generation: A Communication-Theoretic Perspective
Accept (spotlight)
Summary: This paper proposes a reranking principle for language generation from a communication-theoretic perspective. The paper conceptualizes the generator as a sender transmitting multiple descriptions of a message through parallel noisy channels. A receiver is designed to decode the message by ranking the descriptions and selecting the one found to be most reliable. Experiments show the effectiveness of the proposed method on a text-to-code generation task and a medical-domain machine translation task. Strengths: 1 This paper proposes a reranking principle for language generation from a communication-theoretic perspective. The motivation is interesting and the theoretic analysis seems reasonable. 2 The paper is well-written and easily readable. Weaknesses: 1 The related work analysis is not comprehensive. There are several ranking and reranking works in recommender systems, and none of them is mentioned or compared in this paper. 2 The experiments are not convincing enough. This paper only conducts two downstream experiments, i.e., code generation and machine translation. The results should be evaluated through more common and popular downstream tasks, such as QA (multiple-choice) scenarios. 3 There is only 1 baseline for the text-to-code generation task and 2 baselines for the machine translation task. In particular, the one baseline for the text-to-code generation task is majority voting, which is not representative enough. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weakness. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and suggestions. We are happy that you found our paper well written and the motivation and theoretical analysis interesting. We understand that your main concerns about our paper are related to our empirical validation—we address them below. We hope that this clarifies and alleviates your concerns. > “The related work analysis is not comprehensive. There are several ranking and reranking works in recommender systems, and none of them is mentioned or compared in this paper.” We welcome any suggestions you may have about related literature in reranking on recommender systems. Note, however, that the focus of our paper is very different: we are primarily concerned with reranking outputs of LLM generators, and we study how the quality of the combined system is affected by the number of generated hypotheses and what their asymptotic properties are. Most of the works we are familiar with in the context of recommender systems study a different problem: given a list of $N$ recommendations, reducing it to $K<N$ elements through reranking. > “The experiments are not convincing enough. This paper only conducts two downstream experiments, i.e., Code generation and Machine translation. The results should be evaluated through more common and popular downstream tasks, such as QA (question choice) scenarios.” Thank you for the suggestion. We have run additional experiments on mathematical and commonsense reasoning benchmarks, and we observed that the same trends hold also for these two tasks, validating our method on other domains. We hope that this alleviates your concerns regarding the experimental part of our paper. Please see the general response for more details. > “There is only 1 baseline for the text-to-code generation task and 2 baselines for the machine translation task. 
In particular, the one baseline for the text-to-code generation task is majority voting, which is not representative enough.” Could you please clarify what you mean by “baseline”? Please note that our main goal is not to compare any method to a specific baseline but rather to validate our theoretical analysis using perfect and imperfect rerankers. Besides, we want to highlight that majority voting can be seen as a particular case of MBR decoding (Bertsch et al., 2023). In our experiments on text-to-code generation, we use MBR-exec (Shi et al., 2022), which is based on execution match. We would like to note that reranking methods relying on execution-based metrics are widely used in code generation (see, e.g., Chen et al., 2023; To et al., 2024). To clarify, as explained to Reviewer aDZG, MBR-exec consists of (1) sampling programs from an LLM, (2) executing each program on one test case, and (3) selecting the program with the minimal execution-result-based Bayes risk. We use a 0/1 matching loss between execution results, and the Bayes risk of a program is defined as the sum of the losses between its execution result and those of the other sampled programs. Since we are comparing the execution results of different programs, the Bayes risk is minimal for the programs whose execution result is most frequent, hence the term “majority voting”. However, we understand that this term may be a bit misleading in this context and will update the paper accordingly. We will also use the additional page to include information on how MBR decoding works, instead of simply pointing to the papers. References: Chen et al., 2023. CodeT: Code Generation with Generated Tests. To et al., 2024. Functional Overlap Reranking for Neural Code Generation. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response! As my concerns are eliminated, I will raise my score.
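The MBR-exec selection rule described in the rebuttal above (execute each sampled program, dismiss failures, and pick the program whose execution result minimizes the 0/1-loss Bayes risk, i.e. the most frequent result, with ties broken by smallest sampling index) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the zero-argument "programs" and the `execute` callable are toy stand-ins.

```python
from collections import Counter

def mbr_exec_select(programs, execute):
    """Return the index of the selected program under MBR-exec-style
    voting: with a 0/1 matching loss over execution results, minimizing
    the Bayes risk is equivalent to picking the most frequent execution
    result; ties are broken by the smallest sampling index."""
    results = []
    for p in programs:
        try:
            results.append(execute(p))
        except Exception:
            results.append(None)  # programs that fail to execute are dismissed
    counts = Counter(r for r in results if r is not None)
    if not counts:
        return None  # every sampled program failed
    best_result = counts.most_common(1)[0][0]
    # smallest sampling index among programs sharing the majority result
    return results.index(best_result)

# toy "programs" represented by zero-argument callables
progs = [lambda: 4, lambda: 5, lambda: 4, lambda: 1 / 0]
chosen = mbr_exec_select(progs, lambda p: p())
print(chosen)  # prints 0: result 4 is most frequent, first seen at index 0
```

Note that `Counter.most_common` preserves first-insertion order among ties, so the smallest-index tie-breaking described in the rebuttal falls out naturally.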
Summary: A number of recent works in language generation can be framed as proposing two step methods, with a method to generate proposal strings, and another to rank these strings before choosing the best one to be output (this includes, e.g., MBR decoding). This paper analyses this practice with a communication-theoretic approach. They first assume that, given a query $q$, the goal of decoding is to generate a string $y$ in a set $\mathcal{X}(q)$. They then assume the generator’s output $p(y_{1:N} \mid q)$ can be decomposed as $p(x_{1:N} \mid q)p(y_{1:N} \mid x_{1:N})$, with $x_n \in\mathcal{X}(q)$ and $p(y_{1:N} \mid x_{1:N})$ representing some kind of noise perturbation. They then analyse the probability $p(rank(y_{1:N}) \notin\mathcal{X}(q) \mid q)$ under different assumptions, showing that for many this value goes to zero as $N$ goes to infinity. Finally, they run two experiments showing their theoretically derived predictions $p(rank(y_{1:N}) \notin\mathcal{X}(q) \mid q)$ seem to correlate with the empirical probability of decoding errors. Strengths: The paper is well written and in general easy to follow (although I think section 4 could use a bit more hand-holding). The paper provides an interesting theoretical analysis to a widely popular text generation framework. The paper then investigates whether these theoretical insights are reflected empirically in real decoding settings. Weaknesses: In general, I liked this paper. In my opinion however, its main weaknesses are: * limited empirical evaluation, with only two tasks, one generator model, and two reranking methods (besides an oracle ranker). * the evaluation also makes some (in my opinion) debatable claims. E.g., in line 236 the authors state “[...] the imperfect reranker with majority voting, which fits the data well, as shown by the red curve.”. However, analysing Fig 4 (top), I would argue that the solid lines do not capture the data behaviour that well. 
In fact, the model’s performance seems to be empirically close to convergence with N, but the solid lines go monotonically down. Maybe running this analysis for larger values of $N$ would show whether the data indeed fits the predictions (especially if the predicted power law would generalise to larger values of $N$ as fit in the current data, and without fitting it on the new results). Technical Quality: 3 Clarity: 3 Questions for Authors: In the code generation experiments, the paper says “we use only one test case for each problem (Shi et al., 2022), and select one candidate by taking a majority vote over the execution results, dismissing hypotheses that fail to execute on the test case.” If a single test case is used, how is majority voting performed exactly? More details here could be helpful. As a minor suggestion: I found the use of a “communication theoretical” framing here a bit distracting, and it seems to me it could be discarded with no significant change to the paper’s contributions. The authors, for instance, discuss error correcting codes early in the paper, but then they (admittedly) do not require generated strings to be error-corrected. (The selected string simply needs to be in an acceptable set: $\mathrm{rank}(y_{1:N}) \in \mathcal{X}(q)$.) Besides, the generator and ranker are framed as a sender and a receiver—with a noisy channel in between them—but no message is actually decoded by the receiving ranker. Alternatively, highlighting the role that a communication-theoretical framing has in the paper (and why it is needed) could be useful. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors properly discuss their analysis limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and suggestions. We are glad that you found our paper well written and easy to follow, and the theoretical analysis interesting. We address below your main concerns. > “limited empirical evaluation, with only two tasks, one generator model, and two reranking methods (besides an oracle ranker).” Thanks for pointing this out. While our focus was on machine translation and text-to-code generation, we have run additional experiments on mathematical and commonsense reasoning benchmarks and observed that the same trends hold also for these two tasks, validating our method on other domains. We hope that this alleviates your concerns regarding the limited empirical evaluation of our method. Please see the general response for more details. > “the evaluation also makes some (in my opinion) debatable claims (...) the authors state “[...] the imperfect reranker with majority voting, which fits the data well (...) analysing Fig 4 (top), I would argue that the solid lines do not capture the data behaviour that well (...) Maybe running this analysis for larger values of N would show whether the data indeed fits the predictions (...).” As discussed in the limitations section, while the experiments suggest a reasonable fit, you are right that the fit is not perfect and we will adjust the text accordingly. In fact, for large $N$, errors are rare events, and therefore prone to statistical inaccuracies (this is visible in the “steps” observed in the code generation plots). For text-to-code generation, in practice, most work does not use more than 200 samples due to the increased cost. For LLM-based machine translation, the work of Farinhas et al. (2023), which we use in our experiments in Section 5.2, suggests that using a smaller $N$ is enough. 
Even though this is not discussed in the paper, the cost of MBR decoding grows quadratically with the number of hypotheses $N$, making it impractical to try values higher than these. In any case, both results appear to be consistent with the new tasks we experimented on (as mentioned in the previous point). > “In the code generation experiments, the paper says “we use only one test case for each problem (Shi et al., 2022), and select one candidate by taking a majority vote over the execution results, dismissing hypotheses that fail to execute on the test case.” If a single test case is used, how is majority voting performed exactly? More details here could be helpful.” We agree more details will be helpful – due to space constraints we ended up trimming some details about specific reranking techniques such as MBR decoding or reranking based on quality estimation. In this particular experiment, we follow MBR-exec, an approach proposed by Shi et al. (2022) that consists of (1) sampling programs from an LLM, (2) executing each program on one test case, and (3) selecting the program with the minimal execution-result-based Bayes risk. We use a 0/1 matching loss between execution results, and the Bayes risk of a program is defined as the sum of the losses between its execution result and those of the other sampled programs (of course, the ground-truth program output is not used). This is described in detail in the original paper (see, e.g., their Section 3). We break ties by selecting the program with the smallest sampling index, corresponding to a random selection. For completeness, for machine translation we followed the exact same procedure as described in Farinhas et al. (2023). In this case, as described in L243-246, MBR decoding does not use “execution results” but is rather based on a utility function based on a reference-based metric (in our case, Comet-22). 
We agree that this information should be described in more detail, and we will add descriptions of all the methods that we used for text-to-code generation and machine translation in a dedicated section. Thank you for the suggestions! --- Rebuttal 2: Title: Response to Authors Comment: I thank the authors for their response. I still think that running this analysis for larger values of $N$ would be good (or improving how the paper assesses that the data fits its predictions), but I have increased my scores due to the extra experiments added. I think this is an interesting paper that should be accepted.
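The quadratic cost of MBR decoding mentioned in the rebuttal above comes from scoring every hypothesis against every other one with a utility function. A generic sketch (illustrative only; the `overlap` utility below is a toy stand-in for a learned metric such as Comet-22, and ties keep the first, i.e. smallest-index, hypothesis):

```python
def mbr_decode(hypotheses, utility):
    """Generic MBR decoding: return the hypothesis with the highest
    total utility against all other sampled hypotheses. The nested
    loop makes N*(N-1) utility calls, hence the quadratic cost in N."""
    best, best_score = None, float("-inf")
    for i, h in enumerate(hypotheses):
        score = sum(utility(h, other)
                    for j, other in enumerate(hypotheses) if j != i)
        if score > best_score:  # strict ">" keeps the smallest index on ties
            best, best_score = h, score
    return best

# toy utility: word overlap, standing in for a reference-based metric
def overlap(a, b):
    return len(set(a.split()) & set(b.split()))

hyps = ["the cat sat", "a cat sat", "dogs run fast"]
print(mbr_decode(hyps, overlap))  # prints "the cat sat"
```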
Summary: This paper proposes to regard generator-reranker LLMs, i.e., LLMs generating multiple outputs and then reranking them, as communication systems. The idea is to consider the outputs noisy with the objective for the reranker to find the less noisy one. Strengths: - the approach is very flexible. It doesn’t depend on a particular architecture and the outputs to rerank can be generated by multiple different models. - a sound parallel is made with communication theory which helps to understand why this approach works. - the approach is well formalized. I couldn’t find any error but be aware that I’m not very familiar with Zipf-Mandelbrot and Mallows model. - Two scenarios are taken into account: with and without an independence assumption - Experiments with machine translation are very relevant for generator-reranker LLMs Weaknesses: - Absence of analysis of the experiment results. This is a critical weakness of the paper. The experimental settings are described, and the results are given (plots), but without any comment. For the MT experiments, the authors wrote that they got some scores, and then we have the next section. Technical Quality: 4 Clarity: 4 Questions for Authors: Please comment on your results. What do you conclude from them? Why are they insightful (or not insightful)? How can they be used in future work? etc. The paper is really good but we miss some analysis. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and suggestions. We are happy that you found our approach to be well formalized and flexible, the parallel with communication theory useful, and the experiments relevant. We agree that the paper would benefit from more discussion about how our results can be useful in practice. Sections 6 and 8 already provide some information about this (e.g., the reranking laws allow us to predict how many hypotheses are necessary to achieve a desired error probability), but we will update the manuscript with a more specific analysis in Section 5. Additionally, we have run additional experiments on mathematical and commonsense reasoning benchmarks. Similar trends hold for the new experiments, which confirms the general applicability of our approach and further validates our theoretical model. Please see the general response for more details.
Summary: The paper provides a framework for understanding the theoretical properties of generator-reranker systems for language generation. It relates the reranking process to error correction during the decoding of messages in noisy channels, a concept that has been well-studied in communication theory. Explicitly, the paper conceptualizes an LLM generator as a sender transmitting messages through noisy channels, with the reranker acting as the receiver decoding these messages. This framing explains why 1) things like redundancy in the set of generated strings are helpful in generator-reranker systems, actually increasing the likelihood of an acceptable output and 2) increasing the number of options from the generator in the reranking process generally increases system performance. The paper makes several theoretical contributions, showing that when generator-reranker systems meet certain theoretical requirements, there is a guarantee of “error-free” performance. This property holds even when channel distributions are not independent, i.e., when the same model is used to generate possible solutions to the input. The paper provides some empirical verification of their proposed laws. Strengths: The paper is very well written. The math is clearly explained and sound; it provides a nice theoretical justification of why generator-reranker systems work well The topic is also very relevant, since LLM reliability (which is improved by the generator-reranker paradigm) is of utmost concern. The laws proposed in this paper also have practical use: they would allow practitioners to decide the number of strings needed from the generator system to achieve a certain accuracy, without lots of trial and error. Weaknesses: The applicability of the communication system framework to generator-reranker systems is somewhat questionable given that there is not the same binary notion of acceptable/unacceptable for language generation systems. 
Rather, we’re dealing with a continuous spectrum of quality and the appeal of the generator-reranker system is its ability to increase quality (perhaps amongst “acceptable” solutions) rather than move from the realm of unacceptable to acceptable answers. The impact of this difference between the theoretical framework and the evaluation of generation systems in practice isn’t really discussed. There are a few points that could be addressed to improve readability: * Some aspects of the abstract/intro are confusing because terms have not been defined and their equivalences in an LLM generator-reranker system have not yet been specified. For example, the reader won’t know what the implications of “channel distributions being statistically dependent” are (mentioned in the abstract) until after they’ve read through much of the paper. * There isn’t much intuition about what the R.V. X corresponds to in the generator-reranker system * Some more intuition behind the scale parameter (other than just the settings that it gives for its extreme values) would be helpful The computational experiments are not very comprehensive, exploring only two generation tasks (one generator model for each task). It is thus unclear how general their results are. Technical Quality: 4 Clarity: 4 Questions for Authors: * I didn't understand the (bolded) comment in lines 148-9. It makes it sound as though the quality of the reranker depends on the quality of the generator. Could this be clarified? * Minor style recommendation: Perhaps move the first sentence of 3.3 to right after providing the expression for the partition function. That feels like a more natural place to me. In footnote 1: equivalent class -> equivalence class Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors discuss most of the limitations present in their work. 
I would like to see an additional discussion of the implications of their results for continuous evaluation metrics (rather than binary acceptable/unacceptable) Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and suggestions. We are glad that you found our paper to be very well written, the math to be clear and sound, and our method to have practical use. We address below your concerns about our paper. > “there is not the same binary notion of acceptable/unacceptable (...) I would like to see an additional discussion (...) for continuous evaluation metrics” This is a very good suggestion. Our framework can indeed be extended to continuous evaluation metrics, although some concepts (e.g. the notion of “asymptotically error-free”) would need to be modified accordingly. We sketch below some ways in which this extension could be made: - We would need to posit a probability density for the continuous evaluation metric (instead of a Bernoulli error probability) for each hypothesis coming from the generator. In the simplest case, this could be a Gaussian distribution with some input-dependent mean and variance. For bounded evaluation metrics (e.g. between 0 and 1) other families would be more suitable (e.g. uniform or Beta distributions). - For a perfect reranker and independent hypotheses, the resulting output after reranking would be distributed according to the corresponding **extreme value distribution** (this models the distribution of the highest quality score among the $N$ hypotheses). Extreme value distributions are an important subject of study in order statistics. For example, for the Gaussian case above, we would obtain a Gumbel distribution, for uniform we obtain Beta, etc. The asymptotic case ($N \rightarrow \infty$) corresponds to one of Gumbel, Fréchet or Weibull families (this is a consequence of the Fisher–Tippett–Gnedenko theorem [1]). From the extreme value distribution, we can obtain the expected quality score or the probability of a quality score being below a threshold. - Unfortunately, the generalization to imperfect rerankers (e.g. 
Mallows or Zipf-Mandelbrot rerankers) seems much harder than in the binary case. In the paper, we opted for focusing on the binary acceptable/unacceptable case for three main reasons: (1) This case is simpler to analyze and to understand (particularly when rerankers are imperfect). (2) It is still highly relevant in practice – e.g., in code generation, as well as other tasks, the code either executes and gives the correct answer, or it doesn’t (regardless of its quality). (3) It would be very hard to cover both the binary and continuous cases in the right level of detail in a single 9-page paper, hence we decided to go deeper on the former and leave the latter for future work. Yet, we will use the additional page to add this discussion, as suggested. [1] David, Herbert A.; Nagaraja, Haikady N. (2004). Order statistics. John Wiley & Sons. p. 299. > “Some aspects of the abstract/intro are confusing because terms have not been defined and their equivalences in an LLM generator-reranker system have not yet been specified.” Thank you for pointing this out. The paragraph in L41-47 and Figure 1 provide a brief explanation of how an LLM generator-reranker system can be seen as a communication system, but we agree that the wording might not be easily understood for a first-time reader. We will improve the text and update the caption of Figure 1 to include more information, hopefully making it more clear. > “There isn’t much intuition about what the R.V. X corresponds to in the generator-reranker system”. In our setup, we assume that a sender transmits $N$ message descriptions in parallel through noisy channels, resulting in the generation of $N$ potentially corrupted hypotheses $y_i, i\in\{1,...,N\}$. The LLM generator consists of both the sender and the noisy channels, where the $y_i, i\in\{1,...,N\}$ are the $N$ hypotheses sampled from the model. 
For example, in a machine translation scenario, these $y_i$ correspond to $N$ alternative translations, each potentially containing errors. The message descriptions $x_1, ..., x_N \in \mathcal{X}(q)^N$ correspond to acceptable answers within the equivalence class $\mathcal{X}(q)$ (as explained in footnote 1), before being corrupted by the noisy channel. This will be clarified in the final version. > “Some more intuition behind the scale parameter (...) would be helpful” We agree that this can be further clarified in the paper. We mentioned the cases of a random reranker ($e^{-\lambda}=1$) and a perfect reranker ($e^{-\lambda}=0$). For a Mallows model, values of $e^{-\lambda}$ strictly between 0 and 1 correspond to imperfect rerankers that are better than random. The lower this value, the higher the quality of the reranker. Thus, $e^{-\lambda}$ works as an inverse measure of reranker quality. We will clarify. > “The computational experiments are not very comprehensive (...).” We agree that the paper becomes stronger if we report experiments in more tasks beyond machine translation and text-to-code generation. We have run additional experiments on mathematical and commonsense reasoning benchmarks, validating our method on other domains. Similar trends hold for the new experiments, which confirms the general applicability of our approach. We hope that this alleviates your concerns. Please see the general response and attached figure for details. (_continues in a follow-up comment_) --- Rebuttal 2: Comment: > “I didn't understand (...) lines 148-9 (...) the quality of the reranker depends on the quality of the generator.” Thank you for letting us know that you found this part unclear. We want to clarify that the quality of the reranker itself is independent of the quality of the generator. While both perfect and Mallows rerankers achieve exponentially decreasing error probabilities as the number of hypotheses $N$ increases, the exact rate of convergence is different. 
For the Mallows reranker, the rate of convergence also depends on the parameters of the Mallows model ($\lambda$). As shown by Eq. (2), $P_\mathrm{err}(N; q)$ decays exponentially as $\epsilon^N$, where $\epsilon$ is the probability of generating an unacceptable hypothesis. For a Mallows model, Proposition 1 shows that the error probability also decays exponentially, with $P_\mathrm{err}(N; q) = \mathcal{O}((e^{-\lambda}(1-\epsilon) + \epsilon)^N)$. While the convergence rate $e^{-\lambda}(1-\epsilon) + \epsilon$ is generally greater than $\epsilon$, it still ensures an exponentially decreasing error probability as $N$ increases. Thus, what we meant with the bolded comment in lines 148-9 is that Mallows rerankers behave asymptotically like perfect rerankers but with a higher effective error probability, due to the additional factor depending on $e^{-\lambda}$. That is, asymptotically, a bad (Mallows) reranker is equivalent to a perfect reranker with a worse generator. We will clarify this in the final version. > “move the first sentence of 3.3 to right after providing the expression for the partition function.” Even though this result is not used in the previous section, we agree with the recommendation. We will update the paper accordingly, keeping a reminder in Section 3.3. > “In footnote 1: equivalent class -> equivalence class.” Thanks for pointing this out, we will fix the typo.
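The error rates quoted in the rebuttal above — $\epsilon^N$ for a perfect reranker (Eq. 2) and $(e^{-\lambda}(1-\epsilon)+\epsilon)^N$ for a Mallows reranker (Proposition 1) — can be computed directly. A small numeric illustration (the values of $\epsilon$ and $\lambda$ below are made up, not taken from the paper's experiments):

```python
import math

def mallows_error_rate(N, lam, eps):
    """Proposition-1-style bound for a Mallows reranker:
    P_err(N) = O(rho^N) with rho = e^{-lambda}*(1 - eps) + eps, where eps
    is the probability of generating an unacceptable hypothesis and
    e^{-lambda} is an inverse measure of reranker quality."""
    rho = math.exp(-lam) * (1 - eps) + eps
    return rho ** N

# illustrative values only
eps, lam = 0.3, 1.5
for N in (1, 10, 50):
    print(N, mallows_error_rate(N, lam, eps))
# lam = 0 gives rho = 1 (random reranker, no decay); as lam -> infinity,
# rho -> eps and the perfect-reranker rate eps^N of Eq. (2) is recovered.
```

This makes the bolded claim concrete: asymptotically, a Mallows reranker behaves like a perfect reranker paired with a worse generator, whose effective error probability is $\rho$ rather than $\epsilon$.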
Rebuttal 1: Rebuttal: Dear reviewers, We appreciate the time and effort you have taken to review our paper and provide constructive feedback. We are pleased to see that our work has been positively received. The main weakness pointed out by the reviewers is that we illustrate our reranking laws on two applications only, code generation and machine translation. We picked these tasks due to their particularly challenging nature and the prevalent use of reranking techniques to improve model performance. However, we believe our approach is fully general and can be useful in other domains as well. Therefore, we followed the reviewers' suggestions and ran additional experiments on mathematical and commonsense reasoning benchmarks, as prior work (Wang et al., 2023) has shown that generating multiple hypotheses as an intermediate step is also advantageous in these scenarios. We used samples generated by Aggarwal et al. (2023) with code-davinci-002, a GPT-3-based model with 175 billion parameters (please refer to their Section 4 for more details; these samples were made publicly available by the authors at https://github.com/Pranjal2041/AdaptiveConsistency). We applied self-consistency over diverse reasoning paths (Wang et al., 2023), selecting the most frequent answer in the candidate set. We report results on SVAMP (Patel et al., 2021) and StrategyQA (Geva et al., 2021). The attached pdf includes plots similar to Figure 4 in our manuscript, showing the log failure rate as a function of N. We observed that the same trends hold also for these two additional tasks. We hope this alleviates the main concern raised by the reviewers. References: Wang et al., 2023. Self-Consistency Improves Chain of Thought Reasoning in Language Models. Aggarwal et al., 2023. Let’s Sample Step by Step: Adaptive-Consistency for Efficient Reasoning and Coding with LLMs. Patel et al., 2021. Are NLP Models really able to Solve Simple Math Word Problems? Geva et al., 2021. Did Aristotle Use a Laptop? 
A Question Answering Benchmark with Implicit Reasoning Strategies. Pdf: /pdf/3f09711e9fa624d87537cc4fc178f437d00920da.pdf
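The self-consistency selection rule used in the new experiments above (picking the most frequent final answer among sampled reasoning paths, per Wang et al., 2023) can be sketched as follows; the function name and inputs are illustrative, not the authors' code.

```python
from collections import Counter

def self_consistency(answers):
    """Select the most frequent final answer among N sampled
    reasoning paths (majority vote), as in Wang et al., 2023."""
    if not answers:
        raise ValueError("need at least one sampled answer")
    # Counter.most_common(1) returns [(answer, count)] for the mode;
    # ties are broken by first occurrence in the input.
    return Counter(answers).most_common(1)[0][0]

# Majority vote over 5 hypothetical sampled answers to a math problem
print(self_consistency(["42", "41", "42", "42", "17"]))  # -> 42
```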
NeurIPS_2024_submissions_huggingface
2024
Super Consistency of Neural Network Landscapes and Learning Rate Transfer
Accept (poster)
Summary: The paper investigates the loss landscape of the model with scaling width or depth, through observing the largest eigenvalue of the Hessian matrix and the NTK matrix. The authors show empirically that the loss Hessian evolves almost identically for different model sizes (which is named Super Consistency); however, the NTK accumulates finite-size effects over time. The authors also validate their empirical findings with theory in a two-layer NN with linear activations. Strengths: - The paper discusses an interesting phenomenon that the loss landscape gradually becomes stable, which might explain the transfer of the learning rate - The paper shows the NTK (lazy learning) behaves differently from the Hessian of the loss, suggesting the NTK is insufficient to explain the behavior - The paper gives an explicit evolution law in the 2-layer linear NN setting Weaknesses: - The authors do not conduct larger-scale experiments, since they need to compute the Hessian Technical Quality: 4 Clarity: 4 Questions for Authors: - I am confused about Line 209, which mentions that we can decompose H into G + R, where G = K^T K and the dynamics are mainly driven by G. The NTK and G share the same nonzero eigenvalues; doesn't this mean H and the NTK should have similar behavior? Or does feature learning happen in H, which makes the difference between the NTK and the Hessian? Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have fully addressed the limitations in their paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the strong overall score and for marking our paper “excellent” across all three evaluation axes. On the weakness: 1. **Larger scale experiment**: We managed to scale the Hessian computation up to 300 million parameters in a Transformer model trained on Wikitext with Adam (width scaling), surpassing the scale reached in the reference literature on the EoS (Cohen et al, 2021). Please consult Figure 1 in the one-page pdf attached. We find that Super Consistency still holds at this scale. See also the global response on this point. Questions: 1. **Hessian vs NTK dynamics**. Under $\mu$P, both the Hessian and NTK evolve from initialization, a distinctive feature of feature learning scaling limits. These two quantities are indeed related by the Gauss-Newton decomposition of the Hessian, which for MSE loss reads $H = K^T K + R$. Indeed, for MSE loss the dynamics of the NTK and Hessian are connected, in that the change of the Hessian is largely driven by the change in the NTK (Figure 5). However, notice that the matrix $R$ still plays a role and cannot be neglected when the network is far from convergence (more subtly, the interaction between $K$ and $R$ gives the self-stabilization property that preserves the edge of stability for $\lambda_{\max}$). Secondly, under cross-entropy loss, the first term of the decomposition is not $K^T K$ but $K^T D K$, where $D$ contains the second derivative of the loss for different datapoints. Thus, there is an additional term that potentially makes the dynamics of the Hessian different from the NTK. Indeed, this is what we observe in Figure 2, where the NTK eigenvalues accumulate significant finite-size effects, while the Hessian top eigenvalues are Super Consistent. We hope that the new experiments and clarification will make the Reviewer more confident about their evaluation. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response!
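The Gauss-Newton decomposition invoked in the rebuttal above ($H = K^T K + R$ for MSE loss) can be checked numerically on a toy model; the one-parameter model below is purely illustrative, not the paper's setup.

```python
import numpy as np

# Gauss-Newton decomposition for MSE loss, illustrated on a toy
# one-parameter model f_i(w) = x_i * w**2 (an illustrative choice).
rng = np.random.default_rng(0)
x = rng.normal(size=8)
y = rng.normal(size=8)
w = 0.7

f = x * w**2                 # model outputs
r = f - y                    # residuals
J = 2 * x * w                # per-example Jacobian df_i/dw
d2f = 2 * x                  # per-example second derivative d^2 f_i / dw^2

# H = J^T J + sum_i r_i * d2f_i  (the "K^T K + R" split in the rebuttal)
H_gauss_newton = J @ J + r @ d2f

# Check against a finite-difference second derivative of the loss
def loss(w):
    return 0.5 * np.sum((x * w**2 - y) ** 2)

eps = 1e-5
H_numeric = (loss(w + eps) - 2 * loss(w) + loss(w - eps)) / eps**2
assert np.isclose(H_gauss_newton, H_numeric, rtol=1e-3)
```

The residual term `r @ d2f` is the $R$ matrix of the decomposition: it vanishes at interpolation ($r = 0$) but, as the rebuttal notes, cannot be neglected far from convergence.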
Summary: The authors argue that the top eigenvalues of the loss Hessian stabilize throughout training under width and depth muP scaling. This phenomenon is called the Super Consistency of the loss landscape. The authors provide theoretical convergence guarantees and empirical experiments supporting their claims. The learning rate transfer under muP scaling also correlates with the super consistent landscape, i.e. optimization follows the trajectories of sharpness (top eigenvalue of the loss Hessian). Under NTK or other suboptimal scaling, the super consistency is also violated and thus learning rate transfer failed. The authors further show that the dynamics of the weights are fully specified by a set of governing equations and thus one can derive a theoretical edge of stability result under scaling. The phenomenon of progressive sharpness towards stability also happens along with the NTK evolution with finite-size effects, suggesting other factors contributing to the super consistency of the sharpness. **[raising score from 6 to 7 after rebuttal]** Strengths: 1. The observation about the super consistency of the loss landscape and its relation to learning rate transfer under muP scaling is quite an important contribution to the community. I’m not an expert in this field, but this result seems novel to me. 2. The paper is well-written and easy to follow. Weaknesses: The empirical evaluations are a bit limited. - First, a lot of the claims are made under the setting of ConvNet on CIFAR-10. It’s understandable from a compute perspective, but it’s not clear if super consistency scales to even larger models. - Second, the GPT-2 results on WikiText seem to break the super consistency, though it could be possible that it’s due to the Transformer block itself as the authors explained. So it’s not clear if the super consistency results hold across different models and modalities. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
How are you getting the eigenspectrum of the loss Hessian? It seems possible to get at least a few top eigenvalues using iterative Lanczos methods for models larger than the 124-million-parameter GPT-2 with an A100-80GB. 2. I don’t find enough evidence that the super consistent sharpness causally helps the learning rate transfer under muP scaling. In line 178, “optimization happens along trajectories of super consistent sharpness λ_max”. Is there evidence that the optimization does happen first along this direction, i.e. some kind of spectral bias during learning? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weakness and questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the importance and novelty of our results. Here we address the concerns: 1. **A more diverse set of experiments**: we would like to gently push back on this point. We have already performed experiments on CIFAR-10, ImageNet (vision), and WikiText (language) with two different architectures: ResNets and Transformers (including Vision Transformers). We have also performed experiments on several parametrizations, including NTK, $\mu$P and SP in width, and Depth-$\mu$P and residual networks without $1/\sqrt{L}$ scaling for depth. We have trained models with up to 96 layers in depth (e.g. Figure 19), and 8172 units in width. We have other interesting ablations, including batch size, data augmentation, and different optimizers (SGD, Adam, now including AdamW as well; see the one-page pdf included in the global response). 2. **New Evidence on Transformers at scale and convolutional networks**. We have performed several new experiments on *width* scaling of Transformers (with Adam) and convolutional networks (with Adam and AdamW). In contrast to depth scaling, where Super Consistency was not great due to the nature of the Transformer layer, in this case we observe Super Consistency to hold from very small to very large width Transformers (scaling up to about 300 million parameters for the largest model) and the learning rate to transfer significantly better. Please see Figures 1 and 2 of the one-page pdf. In particular, our results are based on Post-LN (like GPT-2) models trained on WikiText (with Adam) and ConvNets on CIFAR-10. We hope that this extra experimental evidence addresses the Reviewer's concerns about the presence of Super Consistency across more models and datasets. 3. 
**Absence of Super Consistency in Transformer experiment (with $\mu$P-Depth)**: as the Reviewer mentions, we do hypothesize that the observed breach of Super Consistency is due to the nature of the Transformer block design, having multiple layers per residual block. This breach is thus expected, and we have also reproduced it in the ResNet case with 2 layers per residual block, as in Figures 4, 19 (a,b), and 20 (a,b,c). Furthermore, the absence of Super Consistency correlates with worse learning rate transfer, and it is thus compatible with our claims. Questions: 1. **Hessian Computation**. We use PyHessian to compute the eigenvalues, which adopts the power iteration method to get the eigenvalue spectrum. We use a large fixed batch of 1024 datapoints for Hessian estimation. Models with more than 124 million parameters would certainly fit on a single A100 GPU, but the estimation becomes very slow and thus cannot be performed at the same frequency as for a smaller model. However, as shown in Figure 1 of the one-page pdf, we have run a Post-LN Transformer with up to about 300 million parameters and observed that Super Consistency still holds at this larger scale. 2. **SGD along Hessian directions and learning rate transfer**. Indeed, there is compelling evidence (Gur-Ari et al, 2018) that SGD happens in the subspace spanned by the top $k$ Hessian eigenvectors, where $k$ is the number of classes. In Figure 2 of the paper, we report how a subset of the top 10 eigenvalues behaves across time and model scale. In most of the other experiments of the paper, we restrict to the top eigenvalue $\lambda_{\max}$ for computational reasons and due to its importance in learning rate choice. Please see Appendix A for a thorough discussion on the importance of the sharpness in the choice of the learning rate, before and after the EoS era. Regarding the specific wording, by that sentence we meant that “$\lambda_{\max}$ is super consistent across the training trajectory”. 
We made this clear in the revised version of the paper. **New Evidence on Gradient-Hessian alignment.** We performed an experiment where we computed $g^T H g / \|g\|^2$, where $g$ is the weight gradient and $H$ is the loss Hessian, thus taking into account the curvature along the gradient direction. The results are in Figure 3. We observe that Super Consistency largely holds in this case as well. Finally, please notice that all our experiments show a clear correlation between sharpness Super Consistency (in general, of the top-$k$ eigenspectrum) and learning rate transfer. Due to the importance of the sharpness in determining the trainability of the model at large learning rates, our results provide strong empirical evidence for a potential explanation of the phenomenon of learning rate transfer. However, more work has to be done in neural network optimization to establish (i.e. prove) how the optimal learning rate depends on the Hessian spectrum (together with other quantities). --- Rebuttal Comment 1.1: Comment: Thank you for the additional experiments and clarifications. I'm raising my score to 7, though I still think the following two points can be addressed further to improve the paper, but it could be too demanding in the short time frame of the rebuttal: 1. empirically show that super consistency scales to larger models. This is possible via parallel model sharding across more GPUs from the system level. It's also possible to use more efficient numerical algorithms other than power iteration. I'm curious if super consistency holds or even improves as a function of model scale. 2. For the point "more work has to be done in neural network optimization to establish (i.e. prove) how the optimal learning rate depends on the Hessian spectrum (together with other quantities)", I believe this is worth investigating further, but I know I am asking for too much for the scope of this paper. 
--- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for the points they brought up, as well as for the score increase. Regarding the additional points: 1) We agree that it would be interesting to see how Super Consistency behaves at larger scales. In the final version of the paper, we will aim to scale up models further using the suggestions proposed by the reviewer. 2) We also believe that a very interesting future avenue would be to theoretically study the relationship between the optimal learning rate and the Hessian eigenvalue spectra. Understanding this dynamic could lead to better schedulers or even optimization algorithms that could speed up convergence in large models.
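The two quantities discussed in this thread, the top Hessian eigenvalue via power iteration (the method PyHessian adopts) and the gradient-direction curvature $g^T H g / \|g\|^2$, can be sketched as follows; an explicit symmetric matrix stands in for a real Hessian-vector product oracle, and all values are toy examples.

```python
import numpy as np

def top_eigenvalue(hvp, dim, iters=200, seed=0):
    """Power iteration for the dominant eigenvalue, given a
    Hessian-vector product oracle hvp(v)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        Hv = hvp(v)
        v = Hv / np.linalg.norm(Hv)
    return v @ hvp(v)  # Rayleigh quotient at convergence

A = np.diag([5.0, 2.0, -1.0])        # toy "Hessian" with lambda_max = 5
lam = top_eigenvalue(lambda v: A @ v, dim=3)
print(round(lam, 6))  # -> 5.0

# Curvature along the gradient direction, g^T H g / ||g||^2, as in the
# rebuttal's alignment experiment (g here is an arbitrary toy vector).
g = np.array([1.0, 1.0, 0.0])
print(g @ A @ g / (g @ g))  # -> 3.5
```

In practice the `hvp` oracle would be a Pearlmutter-style Hessian-vector product through autodiff, so the full Hessian is never materialized.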
Summary: This paper proposes the concept of super consistency, which describes the stable properties of the loss landscape during training. By analyzing the maximum eigenvalue of the Hessian matrix, it is found that the sharpness under the μP and Depth-μP frameworks remains super-consistent and stable near the threshold. In contrast, NTK and other frameworks show significant differences. Strengths: This paper proposes a new concept of "super consistency", which provides a new perspective for understanding the behavior of models of different scales. Through a large number of experiments, the learning rate transfer phenomenon under the μP and Depth-μP frameworks is verified, and the consistency of these frameworks on different tasks and datasets is demonstrated, including ResNets, Vision Transformers, and GPT-2. Weaknesses: This paper provides some theoretical analysis, mainly focusing on two-layer linear networks, and fails to fully verify the theoretical applicability in nonlinear networks and complex structures. Technical Quality: 3 Clarity: 3 Questions for Authors: Is it feasible to extend the results of this article to other algorithms, such as AdamW? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The theoretical setting is simple. The authors could consider trying to extend the two-layer linear network to a two-layer ReLU or deep linear network. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the careful assessment of our paper, and for highlighting the extensive suite of experiments that we ran, including different scaling regimes ($\mu$P, Depth-$\mu$P, SP, NTK, etc.) and architectures. On the weaknesses: 1. **Theoretical limitation**: please notice that our paper is the first that analyzes a two-layer network in this generality. The closest work to our paper is Kalra et al, 2023. In contrast to this work, we allow multiple datapoints instead of a single-datapoint study, and learning targets different from 0. Additionally, we use the standard definition of sharpness (largest eigenvalue of the loss Hessian) instead of the Hessian trace as a proxy. Extending our framework to nonlinear (and deeper) networks is certainly a fascinating avenue for future work. However, it will likely incur significant difficulties in computing the sharpness of deep nonlinear networks during training. Finally, please notice that our paper is mainly empirical, and the scope of the theory is mainly to provide intuition and justification of the empirical results. Under this perspective, the two-layer linear network is an interesting tractable model that expresses the phenomenology observed in practice for more complex networks. 2. **AdamW**: We have performed new width-scaling experiments with Adam and AdamW on 3-layer convolutional networks, extending Super Consistency to these settings. The weight decay scaling in AdamW follows the settings of Wang, X. and Aitchison, L., 2024 and Yang et al., 2022. Note that in the added experiments on convolutional networks, the learning rate transfers across width, and we have a Super Consistent sharpness evolution during training. We also believe that our results could be empirically extended to other optimization algorithms as well. See Figure 2 of the one-page pdf for our training runs on CIFAR-10. 
We note that a very interesting future work avenue would involve understanding the role of adaptivity in Super Consistency from a theoretical point of view. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the additional experiments and clarifications. Your response has addressed my concerns.
Summary: The authors conduct a series of experiments in which they investigate for which attributes of the neural network the gap between its infinite-width (or infinite-depth) value and the finite counterpart grows or shrinks during training. Among other attributes, they look at the training loss, largest loss Hessian eigenvalue, and largest NTK eigenvalue. They investigate how the results differ across parameterisations (such as muP, the neural tangent parameterisation, and some other unstable parameterisations with no well-defined infinite-width behaviour). Strengths: The asymptotic scale properties of neural network training are an important area of research. This paper investigates an interesting question: how do the properties of the loss landscape, such as the curvature along the training trajectory, evolve as one scales up the neural network size? Furthermore, the experimental results are interesting, and well-presented in the figures. I liked Figure 3 in particular, which most convincingly illustrates the claim that the gap between the finite-width and infinite-width attribute value shrinks/grows throughout training. In fact, I wish for every hypothesis investigated, a plot like that in Figure 3 was shown. Lastly, it appears the authors did proper ablations, investigating multiple datasets and models, to verify their empirical conclusions. Weaknesses: The paper falls short in presentation and the formalism. One of the largest issues is the definition of the term “super consistency”. Definition 3.1 has several issues: - The authors define $S_N(t)$ as a function of the predictor $f_t(x)$. This is pretty vague, and I am not sure it matches what the authors are trying to do. If the predictor $f_t$ is interpreted to be a function $f_t:\mathcal{X}\to\mathcal{Y}$ from some input space to some output space implied by the neural network architecture and weights at time $t$, then $S_N(t)$ cannot capture something like the curvature of the loss with respect to the weights. 
I'm convinced this hurdle can be overcome by carefully defining all objects, and what they are (e.g. a ‘predictor’). - If $S_N(t)$ depends on the weights (such as when considering the spectral norm of the loss Hessian), then it's a random variable. Hence, all the notions of distance and convergence need to be defined for random variables for the expressions in Definition 3.1 to make sense. - It's not clear the limit $S_\infty(t)=\lim_{N\to\infty}S_N(t)$ exists for many properties being considered. In fact, this already precludes the authors from talking about super-consistency of parameterisations that do not have well-defined infinite-width limit training dynamics (e.g. SP). - line 127 “if at any finite $N\geq N_0$:” – did the authors mean to say ‘there exists some $N_0$ such that for all $N\geq N_0$’? Otherwise this sentence doesn't make sense. - Does the symbol $\sim$ here represent asymptotic equivalence as $t\to\infty$? This is something that should have been defined. - “$\sim$ denotes the finite-time behaviour during training” is not a formal definition. I have no idea what it's meant to say. - Also, the whole condition of “$|S_N(t)-S_\infty(t)|\sim g(t)$ where $g(t)$ is a non-increasing function of time...” seems like it could have been equivalently stated much more simply as $|S_N(t)-S_\infty(t)|=\mathcal{O}(1)$ as $t\to\infty$. - This definition seems different from how super consistency was described in the abstract introduction: “certain [...] properties [... are largely independent] of the width and depth[...]. - We name this property super consistency of the landscape”. The concept of super consistency, as defined in Definition 3.1, is simple enough to express in two sentences, that I don't see a reason it can't be explained in the introduction and/or abstract properly. Later in the paper, the authors proceed to use ‘super consistency’ in a sense completely disjoint from that in Definition 3.1. 
On lines 168-159 they say: “The optimal learning rate is preserved across widths/depths, indicating very fast convergence with respect to the scaling quantity (i.e. it's super consistency).” Definition 3.1 has seemingly nothing to do with the speed of convergence in the scaling quantity (width/depth). In fact, I think the paper would have been stronger had the authors cut back on formalising things like “super consistency”, and just presented the empirical results for what they are. The term “super consistency” doesn't strictly seem necessary to convey the take-aways of the paper, and, at the moment, makes the paper more convoluted. Of course, a thorough formalism might be preferred, but in its current form it detracts from the paper. Others: - I find the phrase “optimization happens along trajectories of super consistent top Hessian eigenvalues” quite confusing. It took me a while to realise the authors are saying that the ‘Hessian eigenvalues are super consistent along the optimisation trajectory’, which is the way I'd recommend they phrase it throughout. - There are several typos throughout the paper. Sometimes, seemingly parts of mathematical expressions are missing (e.g. line 161). Technical Quality: 2 Clarity: 1 Questions for Authors: - Why are there two sets of lines for each width on Figure 1, one solid and one dashed? - Is Appendix G, which is meant to discuss “the effect of the lower order eigenvalues”, empty? - In Figure 2, the authors look at the 4th and 10th largest eigenvalue. How come the authors decided to look at a fixed N-th largest eigenvalue? Given the scaling in width, wouldn't it be more interesting to look at, say, the 4th and 10th percentile largest eigenvalue, given that the number of eigenvalues grows with depth/width? 
Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 3 Limitations: - Definition 3.1 concerns asymptotic properties in *both* width/depth and training time, which are then doubly difficult to establish with certainty with finite width, depth, and training time experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed feedback and for the further discussion that we anticipate. Also, we thank the reviewer for acknowledging certain strengths of the paper, such as the importance and validity of our findings in the context of understanding neural networks’ loss landscape at different scales. The reviewer’s main concern is regarding how Super Consistency is defined (Definition 3.1). The main purpose of putting forward the definition was to have an actionable measure of Super Consistency that would reflect the precise quantitative results of Figure 3, and it is not supposed to be used in a mathematical theory. This seems to be in line with the Reviewer’s suggestion to “cut back on formalising things like ‘super consistency’”. However, we do agree with the Reviewer that certain aspects of the presentation may seem to introduce mathematical formalism, which is not intended due to the experimental nature of the presented results. Thus, we decided to describe Super Consistency outside of a formal definition environment, taking into account the Reviewer's valuable feedback. We hope that the Reviewer agrees that this is a more sensible choice in the context of our paper. **To clarify the reviewer’s confusion on the concept of Super Consistency**: we started from the concept of Consistency in Vyas et al (2023), which refers to how certain aspects of the network (such as the logits and the loss function) are the same at different scales *early in training*. We extend this concept to Super Consistency by requiring that Consistency is maintained for a longer period of training time, as we state in the abstract (lines 8-9) and introduction (lines 34-36). In the revised version of the paper, we try to achieve this objective once again, hoping to resolve the confusion. 
For your reference, here is the new phrasing of Section 3, replacing Definition 3.1: *Super Consistency refers to when certain aspects of the loss landscape and of the predictor $S_N(t)$ (in this paper $S_N(t)$ refers to the NTK's and loss Hessian's eigenvalues or the loss itself) exhibit the following two properties:* *1. At large $N$, $S_N(t)$ does not deviate significantly from its limit $S_{\infty}(t) := \lim_{N\to\infty}S_N(t)$. This is what is referred to as consistency in Vyas et al, 2023.* *2. $S_N(t)$ does not accumulate significant finite size effects over time, i.e. the curves of $S_N(t)$ and $S_\infty(t)$ remain close over a sustained period of training.* *With respect to the experiment illustrated in Fig. 1, notice that the curves of the loss (center) at different widths show progressive and significant deviations, thus violating Super Consistency. On the other hand, the sharpness dynamics for $\mu$P qualitatively exhibit little-to-no deviations. Also, notice that we assume the existence of the limit $\lim_{N\to\infty}S_N(t)$. For those parametrizations (e.g. standard parametrization) that do not have a well-defined limit, $S_N(t)$ diverges at large $N$ and Super Consistency is trivially violated.* And a few lines later: *To give a quantitative measure to the finite-size accumulation property, we measure deviations over time by estimating the following quantity: $$g(t) := |S_{N}(t) - S_{\infty}(t)|.$$ When $g(t)$ is an increasing function (up to fluctuations), Super Consistency is violated.* The current phrasing simplifies the notation (addressing the first 5 points in the weakness Section), and clarifies the intended meaning of Super Consistency in the abstract and intro (addressing the 6th point). We hope that the clarifications on the goal of the definition, together with the new simplified version of the corresponding Section solve the Reviewer’s concerns about the formalization of the Definition and its underlying meaning. Others: 1. 
“I find the phrase “optimization happens along trajectories of super consistent top Hessian eigenvalues” quite confusing”. This wording was intended and comes from a bias toward related literature, such as “Optimization happens at the edge of stability” or “Gradient Descent Happens in a Tiny Subspace”. However, we agree that it might create slight confusion and have changed it according to the Reviewer’s recommendation. 2. The typo in line 161 has been fixed. *Questions:* 1. **Why are there two sets of lines for each width in Figure 1, one solid and one dashed?** Referring to the left column, different line styles indicate different learning rates; we plot 3 learning rates for $\mu$P and 2 learning rates for NTK so as not to make the plots too cluttered. 2. **Appendix G**. We thank the reviewer for pointing this out. We have now completed the section. 3. **How come the authors decided to look at a fixed N-th largest eigenvalue?** The reason for this is that earlier work established that SGD dynamics largely happen in the subspace spanned by the top $k$ Hessian eigenvectors, where $k$ is the number of classes. In our setting, $k$ is a constant independent of the width. Also, due to EoS-type results, the largest eigenvalue would be converging to $2/\eta$ regardless of the width. However, in the direction proposed by the Reviewer, we have performed a new experiment where we measure the curvature along the gradient direction, which is a more global measure in the sense that it takes into account the whole Hessian. Super Consistency still holds (see Figure 3 of the one-page pdf). **Other remarks:** 1. **More figures like Figure 3.** The Reviewer appreciated the experiments on the distance between finite and infinite models over time as in Figure 3, suggesting that more plots like that should be included for other experiments. Please notice that Figures 20 and 22 also show this kind of analysis for other experiments. 2. 
**On the time horizon.** We operate at large training time, but not to complete convergence (as we show in Appendix E.1); thus we avoided talking about time asymptotics of the sharpness dynamics. --- Rebuttal 2: Title: Rebuttal by Authors Comment: Finally, we thank the reviewer for their valuable and precise feedback, and we would like to gently push back on the score of 1 given to the paper presentation. The general consensus amongst the other reviewers is that the main body of work is well-written and well-presented, and we would kindly ask the reviewer to reconsider their score. --- Rebuttal Comment 2.1: Title: Response 1 Comment: Thank you for engaging on the points regarding presentation, and for trying to work with the feedback in the review to improve it. I'll try and go through the authors' rebuttals, and point out where I think issues still remain. > Super Consistency refers to when certain aspects of the loss landscape and of the predictor $S_N(t)$ (in this paper $S_N(t)$ refers to the NTK's and loss Hessian's eigenvalues or the loss itself) [...] I think that is an improvement. I'm familiar with the (Vyas et al. 2023) work. As far as I know, they don't have a formal or semi-formal definition of consistency, but my understanding is it just colloquially means that some quantity is close to the infinite-width counterpart, whenever that limit is well-defined (e.g. muP, NTP). The key interesting part of their paper is not that some quantities become consistent (that directly follows from what it means for a limit to exist), but that they do so at realistic widths and depths. I think the definition you ascribe to the term consistency: >  At large N, $S_N(t)$ does not deviate significantly from its limit. This is what is referred to as consistency in Vyas et al, 2023. is a non-sequitur. By definition, for **any** $S_N(t)$ that has a limit it is true that $S_N(t)$ does not deviate significantly from the limit at large $N$. 
I guess a way to phrase this that would make more sense would be to just say: > $S_N(t)$ has a limit. This is what is referred to as consistency in Vyas et al, 2023. But at that point you might as well just say that $S_N(t)$ has a limit, rather than that it is consistent. That being said, I think informally referring to some quantities as being “consistent” (meaning they are either “close” to one another, or are “close” to some limit), like is done in (Vyas et al. 2023), is perfectly clear, and doesn't need a formal or informal definition. The remainder of the changes also sounds great. I would maybe slightly reword certain parts: > When $g(t)$ is an increasing function (up to fluctuations), Super Consistency is violated. nit: I would change this to “when $g(t)$ increases over time (up to fluctuations)...” just because “increasing function” is a mathematically commonly used term that implies monotonicity. --- ### Clarifying the definition in the abstract: Even given the clarification, I still don't think the line in the abstract is particularly clear: > [...] **find that certain spectral properties under μP are largely independent of the width and depth of the network along the training trajectory. We name this property super consistency of the landscape.** I think a reader, after reading only the abstract, would have no idea that what you have in mind is what you later describe as super consistency in Section 3. “Properties being largely independent of size along the training trajectory” could just mean that they are consistent, or close to the limit at reasonable sizes. It's absolutely not clear that super consistency encompasses whether the gap grows or shrinks as the training progresses. Here is a suggested alternative: > we find that certain spectral properties under μP are largely independent of the width and depth of the network along the training trajectory, **and they become more consistent as the training progresses**. 
We name this property super consistency of the landscape. --- Reply to Comment 2.1.1: Title: Answer to Response 1 Comment: We thank the reviewer for the additional valuable feedback. Indeed some of the extra points (e.g. convergence of the Hessian spectrum) have been in our minds after the first rebuttal and we have been thinking about that experiment. 1. **Clarifying the definition in the abstract**: By *certain spectral properties under μP are largely independent of the width and depth.* we mean that they are consistent (i.e. the finite width object is “close” to the infinite width one). And by *along the training trajectory* we mean that it remains independent of the size along the training trajectory (i.e. **super** consistent). However, we agree that the fact that we often observe the sharpness curves getting even closer to their large width limit is not stressed by this phrasing. Thus, we agree that “[...] become more consistent as the training progresses” better captures this intuitive meaning and have updated the abstract accordingly. 2. **On the meaning of “Consistency”** We partially disagree on the interpretation of “Consistency” in the work of Vyas et al (2023). The Reviewer (we apologize if we misunderstood) interprets consistency in a similar way as having a well-defined limit. In our interpretation, Consistency refers to the fact that at realistic widths (the word realistic is crucial here) the object of interest is (informally) practically converged to its limiting object. This is a very important result, as it implies that the infinite width model (proxied by a large width model in their work) is a good model for finite-width neural networks at realistic scales. In this sense, the existence of the limit alone does not imply consistency of the dynamics. In fact, under the NTK parametrization, consistency of the dynamics is not observed despite having a well-defined limit. 
Quoting from Vyas et al, 2023: “*We stress that this observed **consistency** is a property of networks in mean field/μP parameterization but is not present in other parameterizations which also give an infinite width limit like NTK parameterization*". We are slightly modifying our phrasing from "*At large $N$, $S_N(t)$ does not deviate significantly from its limit*" to "*At **realistically** large $N$, $S_N(t)$ does not deviate significantly from its limit*" to match the wording and meaning of consistency in Vyas et al (2023). Finally, we modified the phrasing of $g(t)$ increasing with time to the Reviewer’s suggestion of “when $g(t)$ increases over time (up to fluctuations)...” --- Rebuttal 3: Comment: We deeply appreciate that the reviewer is now in favor of acceptance. To wrap it up, we will include the additional experiments on the hessian's eigenspectrum, and clarify the meaning of Super Consistency at all points in the paper according to the meaning and phrasing agreed here. We will also run a coord check experiment to verify that features updates are $\mathcal{\Theta}(1)$.
Rebuttal 1: Rebuttal: We thank the reviewers for their initial reviews and interesting comments. In particular, we summarize that all the reviewers have acknowledged the validity and importance of Super Consistency as a novel and important property for understanding the loss landscape of neural networks at different scales, in particular with relation to the phenomenon of learning rate transfer. We did not find any common weakness shared across the majority of reviewers. The main concerns are: Super Consistency under very large models (Rev. 47kG and ssWb) and other optimizers such as AdamW (Rev. 11Ly), and the definition of the term Super Consistency (Rev. F2zq). 1. **On larger scale experiments**. We would like to note that the main published work in this area that we take as reference (Cohen et al., 2021; Gur-Ari et al., 2018) trains smaller models, and on a less diverse set of tasks and architectures. In fact, we have experimented across several tasks (CIFAR-10, ImageNet (vision), and WikiText (language)) with two different architectures: ResNets and Transformers (including Vision Transformers). We have also performed experiments on several parametrizations, including NTK, $\mu$P and SP in width, and Depth-$\mu$P and residual networks without $1/\sqrt{L}$ scaling for depth. We have trained models with up to 96 layers in depth (e.g. Figure 19), and 8172 units in width. Thus, we believe that our claims remain valid to a sufficient scale that is compatible with previous work in this area. However, it remains interesting to see what would happen at an even larger scale. Thus, **we ran new experiments scaling up to about 300 million parameters**. See next: 2. **New experiments (Figures refer to the one-page pdf)**: + **Adam + Post-LN Transformers, width scaling (Figure 1)**. We have adapted the Hessian computation to the Adam case as in Cohen et al. (https://arxiv.org/pdf/2207.14484) (Eq. 2), where the largest eigenvalue of the preconditioned Hessian is computed: $\lambda_{\max}(P^{-1}H)$, where $P$ is Adam's diagonal preconditioner. This is important for width scaling (in contrast to depth scaling) because the preconditioner itself is width-dependent. Results are in the attached pdf (Figure 1). We show that Super Consistency holds for a Post-LN Transformer architecture trained on WikiText, where the largest network has about 300 million parameters. + **Alignment of gradient and top Hessian eigenvector (Figure 3)**. Rev. 47kG has raised a concern about how the Super Consistency of the first few eigenvalues can explain learning rate transfer. The fact that only the first eigenvalues are important is a finding of Gur-Ari et al., 2018, where it is shown that SGD happens in the space spanned by the top $k$ Hessian eigenvectors. However, we have performed an experiment where we compute $g^T H g / ||g||^2$, which is an unnormalized measure of alignment between the gradient and the Hessian, thus capturing the curvature along the gradient direction. We observe Super Consistency in two experiments using residual networks, both under $\mu$P (width scaling) and Depth-$\mu$P (depth scaling). This further strengthens the connection between the Super Consistency of the landscape and learning rate transfer, and we believe it resolves the reviewer's concern. + **Adam and AdamW experiments, width scaling (Figure 2)**. We have performed new width-scaling experiments with Adam and AdamW on 3-layer convolutional networks, extending Super Consistency to these settings. 3. **Definition of Super Consistency**. Rev. F2zq has expressed concerns about Definition 3.1. We would like to point out that we found it overcomplicated to aim for an airtight mathematical description of our observation. To enhance clarity, we opted instead for an actionable definition that would reflect the actual measurements of e.g. Figure 3. 
However, we agree with the reviewer that some parts of Definition 3.1 can be potentially confusing and can be interpreted as an attempt at rigorous formalization. We provide a revision below which considers all of the Reviewer's helpful comments. We are sorry the initial writing caused confusion, and we are positive our current description, now under no claim of "definition", better reflects our mental picture and resolves potential doubts. We have updated the paper by removing the Definition environment and partially rephrasing some parts. Please see the rebuttal to Rev. F2zq for details. Pdf: /pdf/3a67db8b3c4f9e5a9106342204e468452591841f.pdf
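For concreteness, the two spectral quantities discussed in the rebuttal — the largest eigenvalue of the preconditioned Hessian, $\lambda_{\max}(P^{-1}H)$, and the gradient-curvature alignment $g^T H g / \|g\|^2$ — can be sketched with power iteration on a toy quadratic. This is an illustrative sketch, not the authors' code: in practice the Hessian-vector product `hvp` would come from autodiff on the training loss, and the preconditioner diagonal from Adam's second-moment estimates.

```python
import numpy as np

def lambda_max_preconditioned(hvp, precond_diag, dim, iters=200, seed=0):
    """Estimate the dominant eigenvalue (in magnitude) of P^{-1} H by power iteration.

    hvp: callable v -> H @ v (in practice a Hessian-vector product via autodiff)
    precond_diag: diagonal of the preconditioner P (a stand-in for Adam's)
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = hvp(v) / precond_diag          # apply P^{-1} H to the current vector
        lam = float(np.linalg.norm(w))     # Rayleigh-style magnitude estimate
        v = w / lam
    return lam

def gradient_curvature(grad, hvp):
    """Unnormalized gradient/Hessian alignment g^T H g / ||g||^2."""
    return float(grad @ hvp(grad) / (grad @ grad))

# Toy quadratic loss: H fixed and symmetric, so the sketch is checkable by hand.
H = np.diag([4.0, 1.0, 0.5])
P = np.array([2.0, 1.0, 1.0])              # hypothetical preconditioner diagonal
hvp = lambda v: H @ v
print(lambda_max_preconditioned(hvp, P, 3))  # ≈ 2.0, i.e. 4.0 / 2.0
g = np.array([1.0, 0.0, 0.0])
print(gradient_curvature(g, hvp))            # curvature along e1
```

On the toy problem the preconditioned spectrum is diag(2, 1, 0.5), so the iteration converges to 2.0; for a real network one would only ever touch `hvp`, never materialize `H`.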
NeurIPS_2024_submissions_huggingface
2024
Discovering Sparsity Allocation for Layer-wise Pruning of Large Language Models
Accept (poster)
Summary: This paper introduces DSA, an automated framework for determining layer-wise sparsity in large language models (LLMs). The approach aims to enhance pruning techniques by using an evolutionary algorithm to discover optimal sparsity allocation functions, thereby improving model performance on various tasks. The proposed method can improve the performance of layer-wise pruning baselines on various datasets. Strengths: **S1.** This paper tackles an important problem in LLM efficiency by proposing a novel automated framework for sparsity allocation. **S2.** Extensive experiments demonstrate that DSA outperforms existing methods like SparseGPT and Wanda across multiple benchmarks and LLMs. **S3.** The approach is validated on diverse tasks, including arithmetic, knowledge reasoning, and multimodal tasks, showcasing its versatility and effectiveness. Weaknesses: **W1.** The novelty of the approach is somewhat limited as it combines existing techniques such as AutoML and evolutionary algorithms, which are already well-explored in other contexts. **W2.** Some existing works, such as "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" by Yi-Lin Sung et al., already address the same problem setting of adaptive sparsity allocation across layers, potentially diminishing the perceived novelty of this paper. **W3.** The methodology may require significant computational resources, which could be a limitation for practical applications. **W4.** The results on some benchmarks, while improved, may not be sufficiently groundbreaking to warrant acceptance in top-tier conferences like NeurIPS. Technical Quality: 2 Clarity: 2 Questions for Authors: **Q1.** Can you provide more insights into the computational cost of the DSA framework compared to traditional pruning methods? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors discussed some limitations of their method in the conclusion. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer VeXf,** Thanks for the valuable feedback. We have tried our best to address all concerns in the last few days. If the reviewer finds our response adequate, **we would appreciate it if the reviewer considers raising the score**. Please see our responses below one by one: > **Q1: Compare to existing AutoML techniques.** **A1:** In contrast to common AutoML and evolutionary algorithms, our method is specifically **tailored for sparsity allocation discovery in LLMs** and involves several novel components: **(1) Novel problem formulation:** We are the first to frame LLM sparsity allocation as an AutoML problem, opening new avenues for optimizing LLM efficiency. **(2) Tailored search space:** We introduce a distinctive search space customized for LLM sparsity allocation, combining pre-processing, reduction, transformation, and post-processing operations in novel ways, allowing for more nuanced and effective sparsity distributions. **(3) Function discovery and generalization:** Diverging from typical AutoML methods such as NAS and HPO that search for specific models or hyperparameters, our framework emphasizes generalized function discovery, identifying common patterns across LLMs and formulating interpretable sparsity allocations. **(4) Search acceleration:** We develop LLM-specific acceleration techniques to reduce search time, making our DSA practical for large-scale LLM optimization. > **Q2: Compare to ECoFLaP [ref1].** **A2:** We clarify that our novelty lies in being the **first automated search for adaptive sparsity methods**. Our work differs significantly from traditional adaptive pruning methods like ECoFLaP in several key aspects: **(1) Automation and adaptability:** We employ an automated search method that eliminates the need for expert design and adapts strategies to different models and tasks, whereas ECoFLaP relies on hand-designed hyperparameter tuning. 
**(2) Comprehensive search space:** Our comprehensive search space systematically maps ***element-wise scores → per-layer importances → sparsity ratios***. In contrast, ECoFLaP simply computes the keep ratio linearly during its two-stage pruning. **(3) Superior performance:** Our method obtains significant performance gains across various large language and multimodal models, demonstrating superior performance compared to ECoFLaP (see Table below). Table: Perplexity of Wanda, ECoFLaP, and our DSA with LLaMA 7B at 0.6 sparsity on WikiText2 | Dense | Wanda | ECoFLaP (first-order) | ECoFLaP (zeroth-order) | DSA (Ours) | | ----- | ----- | -------------------- | --------------------- | ---------- | | 7.26 | 10.68 | 10.16 | 9.83 | **9.15** | **We will augment this discussion and cite more Efficient AI studies [ref1]-[ref5] in the revision.** > **Q3 & Q5: About computational resources and cost compared to traditional methods**. **A3: Our responses are:** **(1)** Our method's main computational cost is in the initial search phase, taking about 0.5 days on LLaMA-1-7B. But the discovered allocation functions are transferable to other models **without additional cost**, so this one-time cost can be spread across multiple pruning runs. As discussed in the limitations section, we've developed acceleration techniques to address the search cost challenge common in AutoML methods. **(2)** Most conventional pruning methods also require time-consuming hyperparameter tuning, which may be less efficient than our well-optimized automated search process. **(3)** Applying our allocation function to prune the model proves highly efficient **(see the following Table for detailed pruning speeds)**. This efficiency stems from two key factors: **(a)** we utilize element-wise scores from the uniform pruning method directly, avoiding extra forward and backward computations; **(b)** we employ reduction operations to simplify computations linearly. 
*Table: Comparison of time overhead for computing the pruning metric when pruning LLaMA-1-65B to 50% sparsity.* | SparseGPT | BESA | ECoFLaP(Zeroth-order) | Wanda | OWL(Wanda) | DSA(Wanda) | | ------------ | --------- | --------------------- | ------------ | ----------- | ----------- | | 1353 seconds | 4.5 hours | 6.6 seconds | 5.6 seconds | 6.3 seconds | 6.5 seconds | > **Q4: About significance of results.** **A4: Our responses** are: **(1)** We would like to clarify that our method consistently achieves groundbreaking gains across multiple challenging tasks (reasoning and multimodal benchmarks). Notably, the LLaMA-1|2|3 models pruned by our DSA reach 7.48%|5.69%|14.14% gains over the state-of-the-art methods Wanda and SparseGPT. **(2)** To ensure fair comparisons, we follow standard settings like 50% sparsity, where baseline results are strong, making additional gains challenging. **(3)** Our experiments in Table 8 at higher sparsity levels (65%-80%) on LLaMA-1 show our method's ability to achieve substantial gains. **(4)** Results in the rebuttal PDF **(see our general response)** further demonstrate that our method can achieve substantial gains, with improvements ranging from **1% ~ 7% at 60% sparsity, 2% ~ 7.6% at 70% sparsity, and 1.4% ~ 4.2% at 2:4 sparsity.** > **References:** > > [ref1] ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models. In ICLR, 2024. > > [ref2] Training Neural Networks with Fixed Sparse Masks. In NeurIPS, 2021. > > [ref3] Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy. In ICLR 2024. > > [ref4] LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning. In NeurIPS 2022. > > [ref5] VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks. In CVPR 2022. **Finally,** we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. We are glad to discuss further comments and suggestions. 
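As a rough illustration of the pipeline described above (element-wise scores → per-layer importances → layer-wise sparsity ratios), the following sketch chains stand-ins for the pre-process, reduction, transform, and post-process operators. The specific operator choices and the budget rescaling here are our own hypothetical simplifications for illustration, not the paper's actual searched allocation function.

```python
import numpy as np

def allocate_sparsity(layer_scores, target_sparsity=0.5, eps=1e-8):
    """Hypothetical sketch of a scores -> importances -> ratios allocation pipeline.

    layer_scores: list of per-layer element-wise score arrays (e.g. Wanda scores).
    Returns one sparsity ratio per layer whose mean equals target_sparsity.
    """
    # Pre-process: a log-style squash of absolute scores (illustrative operator).
    pre = [np.log1p(np.abs(s)) for s in layer_scores]
    # Reduction: collapse each layer to one importance number (geometric mean here).
    imp = np.array([np.exp(np.mean(np.log(p + eps))) for p in pre])
    # Transform + post-process: standardize, then squash into (0, 1).
    z = (imp - imp.mean()) / (imp.std() + eps)
    keep = 1.0 / (1.0 + np.exp(-z))            # important layers keep more weights
    sparsity = 1.0 - keep
    # Rescale so the network-wide mean sparsity matches the target budget.
    sparsity *= target_sparsity / sparsity.mean()
    return np.clip(sparsity, 0.0, 1.0)

rng = np.random.default_rng(0)
scores = [rng.random((64, 64)) * (i + 1) for i in range(4)]  # toy layer scores
ratios = allocate_sparsity(scores, 0.5)
print(ratios, ratios.mean())   # mean ≈ 0.5; higher-score layers are pruned less
```

The point of the sketch is the shape of the search space: each stage (squash, reduce, transform, post-process) is one slot the evolutionary search could fill with a different operator.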
--- Rebuttal Comment 1.1: Comment: Thank you for your thorough response to my initial review, and some of my previous issues have been resolved. I would like to follow up with a few additional questions: - You mentioned that the optimal allocation function requires approximately 0.5 days to compute, while this allocation function can be transferred to other models. Could you provide more empirical evidence supporting this claim? (apologize if I missed). Given that different LLMs can vary drastically in structure, depth, etc, how does it affect the generalizability of the allocation function? Furthermore, is it possible to transfer the allocation function between different pruning sparsity levels? If not, would it be necessary to redo the search for each sparsity setting? - The omission of a highly relevant paper, "*ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models*" (ICLR, 2024), was a critical gap in the original submission. While I appreciate that you have now included an empirical comparison of your method with ECoFLaP, the current evaluation is limited to a single model, sparsity level, and dataset combination. Given the significance of this related work, I strongly encourage a more comprehensive and detailed comparison across various models, sparsity levels, and datasets to fully demonstrate the advantages of your approach. I apologize for the delay in my follow-up due to the heavy workload of the rebuttal process. I would greatly appreciate it if you could consider adding the additional experiments mentioned above. --- Rebuttal 2: Title: Look Forward to The Post-Rebuttal Feedback Comment: Dear Reviewer VeXf , We express our deepest appreciation for your careful and constructive feedback. Our rebuttal addresses your concerns comprehensively, and we welcome any additional questions. 
If our response has successfully addressed your concerns and clarified the significance of our work, we would be immensely grateful if you could reconsider your recommendation. **We promise to add citations to related studies [ref1]-[ref5] in the revision.** We have diligently integrated feedback from all reviewers and hope this is positively reflected in your evaluation. Thank you for investing your valuable time in reviewing our response. With profound respect and gratitude, Paper 66 Authors --- Rebuttal 3: Title: Request to review the rebuttal [Author-Reviewer discussion phase ending soon] Comment: Dear Reviewer VeXf , We would like to sincerely thank you again for your valuable feedback and insights, which have greatly improved our paper. We promise to thoroughly reflect all your comments in the final manuscript. As we are towards the end of the author-reviewer discussion period, we request you to please go through our rebuttal, and we would be immensely grateful if you could reconsider your recommendation. Best regards, Paper 66 Authors --- Rebuttal 4: Title: Additional Response: Part 1 Comment: **Dear Reviewer VeXf ,** We would like to sincerely thank you again for your constructive feedback and apologize for any unclear points in the initial responses. After receiving new comments, we have made our best efforts to augment experiments and clarifications as follows. 
Please see our responses below one by one: ------ > **Q6 & Q7: Generalizability of allocation function and transferability between sparsity levels.** **A6:** We would like to highlight that all of our performance gains across models (LLaMA-1|2|3, Mistral, Vicuna, and OPT) and tasks (reasoning and multimodal benchmarks) are **obtained by transferring the same allocation function searched on LLaMA-1-7B (see Equation 6), without additional individual searches:** **(1) About empirical evidence:** **(a)** As **detailed in Tables 2, 3, 4 & 5 [Lines 250-286]**, our searched allocation function demonstrates high generality and consistent improvements across various model sizes and architectures, including LLaMA-1, LLaMA-2, LLaMA-3, OPT, Mistral, and LLaVA (Vicuna-based) models. For zero-shot accuracy at 50% sparsity in Table 2, our searched allocation function shows significant gains, improving magnitude pruning by up to **14.14%, Wanda by up to 4.36%, and SparseGPT by up to 9.82% on LLaMA-3 8B**. On the MMLU task in Table 4, our searched allocation function shows **0.96% gains over OWL** for LLaMA-1 13B. In multimodal tasks in Table 6, our searched allocation function improves performance by up to **1.65% on LLaVA-1.5 7B and 1.98% on LLaVA-1.5 13B (SQA task) at 50% sparsity**. **(b)** We also directly transfer our searched allocation function to **LLaMA-3 70B** in the rebuttal PDF **(see our general response)**. Our searched allocation function shows **2.23% to 4.38% gains** across different pruning approaches, aligning with our effectiveness on multiple LLMs. **These results underscore the effectiveness and generality of our searched allocation function across various models and tasks.** **(2) Understanding generalizability:** Sufficient experimental evidence in **(1)** solidly demonstrates the generality of our searched allocation function across different models and tasks. 
We provide several aspects to understand its generality: **(a)** In the model sparsity area, **different LLMs share common sparsity laws** (e.g., important weights have salient gradients), resulting in sparsity methods with fixed metrics or functions that can be generalized to various models. For example, SparseGPT, Wanda, OWL, and ECoFLaP also apply the same metric across different tasks without changing their functions for models of different structure and depth. Our allocation function search space is **based on observed element-wise score distributions** in LLMs, as discussed in the **Introduction [Lines 62-78] and Figure 1 (left).** This observation aligns with OWL (see Figure 1 (middle)) and the sparsity law that the initial layers of LLMs are more important. For these reasons, our searched allocation function naturally follows the sparsity laws, allowing good generalizability to different LLMs. **(b)** The operators contained in our allocation function are **architecture-agnostic and serve to normalize and eliminate architecture variance**. For example, our pre-process operator standardizes inputs by normalizing scores across layers, ensuring consistent performance metrics by addressing scale variations. As shown in our **stability analyses [Lines 243-250] and theoretical understanding in Appendix B [Lines 554-571]**, our allocation function is stable and can well alleviate perturbations across various models. In addition, as detailed in the search robustness experiment (see Appendix C.1 [Lines 575-588]), our allocation functions searched with different initial seeds share similar performance and similar expressions. 
**(c)** **Based on (a) and (b), various LLMs share a common sparse allocation law, and our search space and operations are architecture-agnostic, stable, and robust to different search trials and models, resulting in good generality of our searched allocation function.** To further validate this, we re-search the allocation function on the OPT-6.7B model, which has a different size and architecture. Our directly transferred allocation function has a quite similar formulation and performance to the re-searched function, confirming its generality across LLMs to some extent. We will include this discussion in the revision. *Table: Mean accuracies (%) of transferred and re-searched allocation functions with SparseGPT (Uniform baseline 55.19) for OPT-6.7B at 0.5 sparsity* | Method | Detailed Allocation Functions | Mean accuracy (gain↑) | | ----------- | -------------------------------------- | --------------------- | | transferred | LOG_ABSLOG → GEOMETRIC_MEAN → COS → EXP | 57.85 (2.66↑) | | re-searched | LOG_ABSLOG → GEOMETRIC_MEAN → ACOS → SIGMOID | 57.96 (2.77↑) | --- Rebuttal 5: Title: Additional Response: Part 2 Comment: **(d)** **In our A5 for Reviewer of38**, we apply our allocation function to ConvNeXt in the Table below; it surpasses other methods, especially at higher sparsity levels, showing its generalizability to various models **(recognized by Reviewer of38, who improved their score)**. *Table: Accuracy (%) of Sparse ConvNeXt-Base on ImageNet-1K.* | Sparsity | 50% | 60% | 70% | | --------------- | --------- | --------- | --------- | | Wanda | 82.72 | 80.55 | 68.18 | | OWL + Wanda | 82.76 | 80.53 | 68.28 | | **DSA + Wanda** | **83.12** | **81.68** | **71.55** | **(3) About different sparsity levels:** **Yes.** We directly transfer the searched allocation function (see Equation 6) to different sparsity levels and achieve noticeable gains at 65%-80% sparsity on LLaMA-1 in Table 8. 
In the rebuttal PDF **(See our general response)**, our searched allocation function demonstrates gains ranging from **1.24% to 7.03% at 60% sparsity and 1.91% to 7.68% at 70% sparsity on LLaMA-2 and on LLaMA-3**. > **Q8: About more comparison with ECoFLaP across various models, sparsity levels, and datasets.** **A8:** Following the suggestion, we conduct more experiments to compare our DSA method with ECoFLaP across various models, sparsity levels, and datasets. The results, summarized in Table below, demonstrate the superior performance of our DSA method across different LLaMA models and sparsity rates. For the LLaMA-2-7B model, DSA achieves gains of **1.92% and 1.18% at 60% and 70% sparsity levels respectively, compared to ECoFLaP's gains of 0.77% and 0.52%**. Similar trends are observed for the LLaMA-2-13B model, where DSA outperforms ECoFLaP with improvements of 1.72% and **1.24% at 60% and 70% sparsity levels, surpassing ECoFLaP's gains of 0.86% and 0.65%**. The most significant improvements are seen in the LLaMA-3-70B model, where **DSA achieves remarkable gains of 2.23% and 2.34% at 60% and 70% sparsity levels, substantially outperforming ECoFLaP's gains of 0.71% and 0.68%**. These consistent improvements across different model sizes and sparsity levels highlight the effectiveness and generalizability of our DSA method, demonstrating its superiority over ECoFLaP in enhancing the performance of pruned language models. 
*Table: Mean accuracies (%) of our DSA on zero-shot task with 7 different datasets.* | Method | LLaMA-2-7B (60%) | LLaMA-2-7B (70%) | LLaMA-2-13B (60%) | LLaMA-2-13B (70%) | LLaMA-3-70B (60%) | LLaMA-3-70B (70%) | | ------------------- | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- | | Wanda | 36.08 | 60.90 | 41.46 | 72.00 | 40.51 | 40.44 | | **Wanda (ECoFLaP)** | 36.85 (0.77↑) | 61.42 (0.52↑) | 42.32 (0.86↑) | 72.65 (0.65↑) | 41.22 (0.71↑) | 41.12(0.68↑) | | **Wanda (DSA)** | **38.00 (1.92↑)** | **62.08 (1.18↑)** | **43.18 (1.72↑)** | **73.24 (1.24↑)** | **42.74 (2.23↑)** | **42.78 (2.34↑)** | ------ **Finally,** we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. **We promise to include these discussions and citations in the revision and sincerely hope that reviewer will consider improving the recommendation.** --- Rebuttal Comment 5.1: Title: Request to review the rebuttal [only a few hours left] Comment: Dear Reviewer VeXf, We greatly appreciate the time and effort you've invested in reviewing our paper. Your constructive feedback has been invaluable in enhancing the quality of our work. We've diligently addressed all your concerns in our point-by-point rebuttal. As we approach the conclusion of the author-reviewer discussion period, we kindly request that you review our additional responses. Other two reviewers have raised their scores after considering our rebuttals and revisions. In light of this, we sincerely hope you'll also reconsider your recommendation and potentially improve your score. Once again, we extend our heartfelt thanks for your time and expertise. Your insights have been crucial to the refinement of our paper. Best regards, Paper 66 Authors
Summary: This article introduces DSA (Discovering Sparsity Allocation), which is designed to automate the discovery of sparsity allocation schemes for layer-wise post-training pruning in large language models (LLMs). Strengths: 1. This paper presents a framework for automatically discovering effective sparsity allocation functions. 2. This paper demonstrates consistent performance improvements across various tasks and datasets. Weaknesses: 1. This article seems to combine OWL and Pruner-Zero. It uses the evolutionary algorithm to determine each layer's sparsity ratio. It's not quite innovative for me. 2. This article doesn't offer adequate evidence when the sparsity is higher than 50%. Table 12 shows the higher sparsity results on LLaMA1 7b. However, I think more evidence should be provided. **Minor:** L632: Typo of Erdős-Rényi. The caption of Table 8: There is no 8B model in the LLaMA-1 family. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. This article only uses DSA to determine the sparsity ratio of each layer, while Pruner-Zero searches for a pruning metric to prune the network. I am curious about the potential performance of DSA combined with Pruner-Zero. 2. Can DSA be integrated with the 2:4 constraint to achieve actual acceleration? 3. The article mentions in line 227 that DSA can find potential allocation functions in only 0.5 days. What model was searched in this 0.5-day period, and how large was it? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed the limitations, but I believe they should also mention that current techniques do not support the inference of unstructured sparsity, which prevents them from achieving actual acceleration. While some companies are working on this, their devices are not yet publicly available. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer gYqu,** Thanks for the valuable feedback. We have tried our best to address all concerns in the last few days. If the reviewer finds our response adequate, **we would appreciate it if the reviewer considers raising the score**. Please see our responses below one by one: ------ > **Q1: Compare to OWL and evolution method.** **A1:** Our method's novelty lies in being the **first automated search for adaptive sparsity allocation**, significantly differing from traditional adaptive pruning methods like OWL [14] and evolution methods like Pruner-Zero [37]: **(1) Compare to OWL:** **(a)** Our method revolutionizes the field with an automated search approach that removes the need for expert design. In contrast, OWL relies on hand-designed hyperparameter tuning and is limited by fixed-form constraints. **(b)** Our comprehensive search space systematically maps element-wise scores to sparsity ratios, whereas OWL only computes the keep ratio linearly from the outlier ratio, lacking our introduced nonlinear mapping operators. **(c)** Comparative experiments in Table 8 show that our method significantly outperforms OWL. **(2) Compare to evolution method: Our DSA differs from Pruner-Zero [37] in method type, search space, task, strategy, and input-output characteristics (see Table below).** **(a)** We uniquely frame LLM sparsity allocation as an AutoML challenge, opening novel avenues for enhancing LLM efficiency. **(b)** Our search space is customized for LLM sparsity allocation, integrating various operations in innovative ways. **(c)** We develop LLM-specific acceleration techniques like program checking, making our approach practical for large-scale LLM optimization. 
| Method | Types | Task | Search space | Input | Output | Strategy | | -------------- | --------------- | -------------------------------- | ----------------------------------------------------------- | ------------------------------ | ------------------------------ | -------------------------- | | Pruner-Zero | uniform | symbolic pruning metric | unary/binary operations | element-wise weights/gradients | element-wise score | symbolic regression | | **DSA (ours)** | **non-uniform** | **adaptive allocation function** | **pre-process/reduction/transform/post-process operations** | **element-wise score** | **layer-wise sparsity ratios** | **evolutionary algorithm** | Please note that the title and abstract of Pruner-Zero [37] became accessible in May 2024, **but its full paper was released (on arXiv and OpenReview) in June 2024, after the NeurIPS deadline.** We already discuss OWL and Pruner-Zero in the related work [lines 143-148] and will add more comparisons in the revision. > **Q2: More results for high sparsity ratios.** **A2: Our responses** are: **(1)** To ensure fair comparisons, we follow standard settings like 50% sparsity, where baseline results are strong, making additional gains challenging. **(2)** Our experiments in Table 8 at higher **sparsity levels (65%-80%)** on LLaMA-1 show our method's ability to achieve substantial improvements. **(3)** Following the suggestions, we provide 60% and 70% sparse experiments in the rebuttal PDF **(see our general response)**. Our method demonstrates **gains ranging from 1.24% to 7.03% on LLaMA-2-7B&13B and LLaMA-3-70B at 60% sparsity. At 70% sparsity, our improvements ranging from 1.91% to 7.68%** underscore the effectiveness of our DSA for high sparsity ratios. > **Q3: About typos.** **A3:** We appreciate the reviewer's corrections of the typos in Erdős-Rényi and LLaMA-1-7B and commit to fixing them in the revision. 
> **Q4: About potential combination with Pruner-Zero.**

**A4: Yes.** Following the suggestion, we conduct experiments combining our method with Pruner-Zero, yielding new state-of-the-art results. This successful integration is due to the orthogonal nature of the two methods: Pruner-Zero optimizes element-wise importance scoring, while our DSA specializes in adaptive layer-wise sparsity allocation.

*Table: Mean accuracies (%) at 0.5 sparsity on 7 zero-shot tasks.*

| Models | Pruner-Zero | **Pruner-Zero + DSA (Ours)** |
| --- | --- | --- |
| LLaMA-2-7B | 58.87 | **62.22** |
| LLaMA-2-13B | 64.83 | **67.05** |

> **Q5: About integration with the 2:4 constraint.**

**A5:** Following the suggestions, we evaluate our DSA under 2:4 sparsity on **LLaVA (See Table below) and LLaMA-2 (See our general response)**. These consistent gains **(2.2% ~ 2.8% on LLaVA and 1.4% ~ 4.2% on LLaMA-2)** across different model sizes and datasets suggest that our approach is more effective at maintaining performance under tighter sparsity constraints.

| 7B LLaVA-1.5 | VQAv2 | SQA | VQA | 13B LLaVA-1.5 | VQAv2 | SQA | VQA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Dense | 78.50 | 66.80 | 58.20 | Dense | 80.00 | 74.94 | 61.30 |
| Wanda (2:4) | 68.92 | 55.06 | 45.42 | Wanda (2:4) | 75.39 | 64.89 | 52.52 |
| **Ours (2:4)** | **71.18** | **57.44** | **48.25** | **Ours (2:4)** | **76.75** | **67.13** | **54.05** |

------

> **Q6: Details of the time cost.**

**A6:** As **detailed in the experiment [Lines 252-258]**, the 0.5-day search time mentioned is for the **LLaMA-1-7B model** on WikiText2, based on Wanda at the 50% sparsity setting. We will clarify this detail in the revision.

------

**Finally,** we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. We are glad to discuss further comments and suggestions.
--- Rebuttal 2: Title: Look Forward to The Post-Rebuttal Feedback Comment: Dear Reviewer gYqu, We are profoundly grateful for your thorough and constructive comments. We have addressed your concerns point-by-point in our rebuttal and welcome any further inquiries. If our response has successfully alleviated your concerns and highlighted the merit of our work, we would be deeply appreciative if you could reconsider your recommendation. We promise to thoroughly reflect all your comments in the final manuscript. We have conscientiously incorporated feedback from all reviewers and hope this is positively reflected in your assessment. Thank you for generously dedicating your time to review our response. With sincere gratitude, Paper 66 Authors --- Rebuttal 3: Title: Request to review the rebuttal [Author-Reviewer discussion phase ending soon] Comment: Dear Reviewer gYqu, Thank you again for spending your valuable time reviewing our paper. We have carefully considered and addressed your concerns. As we are towards the end of the author-reviewer discussion period, we request you to please go through our rebuttal. We appreciate your constructive and insightful comments and will definitely incorporate them into the final manuscript. Best regards, Paper 66 Authors --- Rebuttal Comment 3.1: Title: Request to review the rebuttal [only one day left] Comment: Dear Reviewer gYqu, Thank you for your time reviewing the paper. Your constructive feedback will help improve the quality of our paper. We have also addressed all your concerns in our rebuttal point-by-point. As we are towards the end of the author-reviewer discussion period, we request you to please go through our rebuttal, and we sincerely hope that the reviewer will consider improving the recommendation. We thank you again for your time! Best regards, Paper 66 Authors --- Rebuttal 4: Title: Official Comment by Reviewer gYqu Comment: Thanks for the authors' replies. Most of my questions have been addressed. 
One remaining question is about the running time. As the authors replied to Reviewer s93u: "For other different LLM models, we directly transfer this discovered allocation function without additional search." I'm wondering how this is realized in practice. Given a different model, for instance LLaMA-2-70B or the OPT model family, how can it be transferred to a new model that has a different number of layers or a different model structure?

---

Rebuttal 5: Title: Additional Response Comment: **Dear Reviewer gYqu,**

We would like to express our sincere gratitude for your time and valuable feedback on our paper. In our rebuttal, we have carefully considered and incorporated your suggestions, and we eagerly await any additional feedback you may have. Additionally, to further address your concerns regarding the **generalizability of the allocation function and its running time**, we clarify comprehensively as follows:

**(1) Practical implementation** of the searched allocation function: The searched allocation function comprises four operations: LOG_ABSLOG (pre-process), GEOMETRIC_MEAN (reduction), COS (transformation), and EXP (post-process). **(a)** The pre-process operation is element-wise and architecture-agnostic, similar to the metrics in SparseGPT or Wanda. The reduction operation, akin to OWL's outlier operator, condenses intra-layer scores into per-layer importances; it is independent of layer length and generalizes to various model structures. **(b)** As an alternative to OWL's linear expression, the transformation and post-process operations are non-parametric functions that convert per-layer importances into sparsity ratios. **They are implemented as "COS" and "EXP" using PyTorch's torch.cos and torch.exp functions and can handle inputs of different lengths (i.e., different layer counts), making them adaptable to diverse model sizes.** **(c)** In summary, our searched allocation function can be applied, like OWL, to models with different numbers of layers or different structures.
We have also provided the implementation code in the Supplementary Material and will open-source it upon acceptance. The core practical code of our searched allocation function is in the Supplementary Material's code\lib\autolayer.py file.

**(2) Understanding the generalizability:** The extensive **results in Tables 2, 3, 4 & 5 [Lines 250-286] and the rebuttal PDF (See our general response)** solidly demonstrate the generality of our allocation function across different models and tasks. We offer several perspectives on this generality: **(a) Different LLMs share common sparsity laws** (e.g., important weights have salient gradients), which is why sparsity methods with fixed metrics generalize to various models. For example, Wanda and OWL apply the same metric to different tasks without discussing whether their functions need to change for models of different structures and depths. Our allocation function search space is **based on observations of element-wise score distributions** in LLMs, as discussed in the **Introduction [Lines 62-78] and Figure 1 (left).** These observations align with OWL and with the sparsity law that the initial layers of LLMs are more important. Thus, our searched allocation function naturally follows these sparsity laws, allowing good generalizability to different LLMs. **(b)** The operators contained in our allocation function are **architecture-agnostic and serve to normalize away architecture variance**. For example, our pre-process operator standardizes inputs by normalizing scores across layers, addressing scale variations to ensure consistent behavior. As shown by our **stability analyses [Lines 243-250] and the theoretical understanding in Appendix B [Lines 554-571]**, our allocation function is stable and well tolerates perturbations across models.
As detailed in the search robustness experiment (see Appendix C.1 [Lines 575-588]), allocation functions searched with different initial seeds share similar performance and expressions. **(c) Based on (a) and (b), various LLMs share a common sparsity allocation law, and our search space and operations are architecture-agnostic, stable, and robust across search trials and models, resulting in the good generality of our searched allocation function.** To further validate this, we re-search the allocation function on the OPT-6.7B model, which has a different size and architecture. Our directly transferred allocation function has quite a similar formulation and performance to the re-searched function, confirming its generality across LLMs to some extent. We will include this discussion in the revision.

*Table: Mean accuracies (%) of transferred and re-searched allocation functions with SparseGPT (uniform baseline 55.19) for OPT-6.7B at 0.5 sparsity*

| Method | Detailed Allocation Functions | Mean accuracy (gain↑) |
| --- | --- | --- |
| transferred | LOG_ABSLOG → GEOMETRIC_MEAN → COS → EXP | 57.85 (2.66↑) |
| re-searched | LOG_ABSLOG → GEOMETRIC_MEAN → Sine → EXP | 57.96 (2.77↑) |

-----

**Finally,** we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. **We promise to include these discussions in the revision and sincerely hope that the reviewer will consider improving the recommendation.**

Best regards,

Paper 66 Authors

---

Rebuttal Comment 5.1: Title: Official Comment by Reviewer gYqu Comment: Thanks for your replies. Most of my concerns are addressed. I will raise my score.

---

Reply to Comment 5.1.1: Title: Many Thanks for the Increasingly Positive Assessment and Recognition of Our Work and Rebuttal Comment: Dear Reviewer gYqu, We are profoundly thankful for your thoughtful reconsideration of our work and rebuttal.
Your decision to raise the rating is not only deeply appreciated but also serves as a powerful source of motivation for us. Your recognition of our efforts is incredibly heartening and reinforces our dedication to excellence. We are committed to leveraging your valuable insights to further refine and elevate the quality of our paper. We cannot express enough gratitude for your constructive comments, the time you've invested, and your patience throughout this review process. Your expertise and guidance have been absolutely crucial in enhancing the depth and rigor of our research. We are truly indebted to you for your thorough evaluation and genuine engagement with our responses. Your significant contribution to the peer review process exemplifies the highest standards of academic integrity and collaboration. With heartfelt appreciation and warmest regards, Paper 66 Authors
Summary: This paper presents DSA, which maps layer importance to sparsity ratios and integrates the allocation function discovered by an evolutionary algorithm into various methods, resulting in significant performance improvements. Strengths: This manuscript is a qualified paper, i.e., the method seems technically sound and straightforward in principle. Empirical results demonstrate the strength of this approach. Weaknesses: All the experiments in this article are done with unstructured sparsity. It is well known that unstructured sparsity does not bring computational acceleration. Have the authors tried experiments with structured sparsity? It is well known that search algorithms are often very time-consuming, so for the 0.5-day figure mentioned at L227, on which model were the experiments conducted: 7B, 13B, or 70B? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see weaknesses Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer s93u,** Thanks for the constructive comments. We have tried our best to address all concerns in the last few days. If the reviewer finds our response adequate, **we would appreciate it if the reviewer considers raising the score**. Please see our responses below one by one:

------

> **Q1: About results on structured sparsity.**

**A1: Our responses** are: **(1)** Following the suggestion, we apply our non-uniform layer-wise sparsity allocation method to the structured pruning technique LLM Pruner [32], which directly accelerates pruned LLMs. The results in the Table below show that **our method performs well in structured pruning scenarios** and outperforms OWL.

*Table: Perplexity of structured pruning with LLaMA-7B on WikiText-2.*

| Pruning Method | Layerwise Sparsity | 20% | 40% | 60% | 80% |
| --- | --- | --- | --- | --- | --- |
| LLM Pruner | Uniform | 19.09 | 30.39 | 90.017 | 1228.17 |
| LLM Pruner | OWL | 18.57 | 28.65 | 76.99 | 321.64 |
| **LLM Pruner** | **DSA** | **17.85** | **26.98** | **68.82** | **202.42** |

**(2)** We would like to highlight that our method **supports and enhances 2:4 and 4:8 structured sparsity**, which is compatible with NVIDIA GPUs for hardware acceleration. The results presented in Tables 6 and 7, as well as in the rebuttal PDF **(See our general response)**, demonstrate the efficacy of our approach, showcasing **gains of 1.4% ~ 4.2% at 2:4 sparsity and ~1% gains at 4:8 sparsity.** These gains are promising advancements in this area **(recognized by Reviewers 1f5c, of38, gYqu, and VeXf)**. **(3)** Recent advances in GPU kernels such as NVIDIA cuSPARSE [ref1], Sputnik [ref2], and Flash-LLM [ref3] have rapidly added support for unstructured sparsity. This practical relevance extends beyond GPUs to non-GPU hardware such as CPUs (e.g., XNNPACK [ref4]) and specialized accelerators like FPGA accelerators.
Our method under unstructured sparsity also **achieves a 1.8x~3.7x speedup with the DeepSparse inference engine** for LLaMA-V2-7B-chat-hf, as presented in the Table below.

| Method | Unstructured Sparsity | Dense | 40% | 50% | 60% | 70% | 80% |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ours | Latency (ms) | 213.8 | 121.4 | 90.8 | 86.0 | 78.3 | 58.6 |
| ours | Throughput (tokens/sec) | 4.7 | 8.2 | 11.0 | 11.6 | 12.8 | 17.1 |
| **ours** | **Speedup** | **1.0x** | **1.8x** | **2.4x** | **2.5x** | **2.7x** | **3.7x** |

------

> **Q2: Details of the time cost.**

**A2:** As **detailed in the experiment [Lines 252-258],** the 0.5-day search time mentioned is for the **LLaMA-1-7B model** on WikiText2, based on Wanda at the 50% sparsity setting. **For other LLM models, we directly transfer this discovered allocation function without additional search.** We apologize for not stating this explicitly and will clarify this detail in the revision.

> **References:**
>
> [ref1] NVIDIA GPUs scalability to solve multiple (batch) tridiagonal systems: implementation of cuThomasBatch. In PPAM 2018.
>
> [ref2] Sparse GPU kernels for deep learning. In SC 2020.
>
> [ref3] Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity. arXiv:2309.10285.
>
> [ref4] Fast sparse convnets. In CVPR 2020.

**Finally,** we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. We are glad to discuss further comments and suggestions.

---

Rebuttal 2: Title: Look Forward to The Post-Rebuttal Feedback Comment: Dear Reviewer s93u, We are deeply thankful for your careful consideration and constructive feedback. Our rebuttal addresses your concerns in detail, and we are open to any additional questions.
If our response has successfully clarified the value of our work and addressed your concerns, we would be incredibly grateful if you could reconsider your recommendation. We promise to thoroughly reflect all your comments in the final manuscript. We have diligently incorporated feedback from all reviewers and hope this effort is evident in your evaluation. Thank you for your time and expertise in reviewing our response. With utmost respect, Paper 66 Authors --- Rebuttal 3: Title: Request to review the rebuttal [Author-Reviewer discussion phase ending soon] Comment: Dear Reviewer s93u, Thank you for your time reviewing the paper. Your constructive feedback will help improve the quality of our paper. We have also addressed all your concerns in our rebuttal point-by-point. As we are towards the end of the author-reviewer discussion period, we request you to please go through our rebuttal, and we would be truly grateful if you could reconsider your recommendation. We thank you again for your time! Best regards, Paper 66 Authors --- Rebuttal Comment 3.1: Title: Request to review the rebuttal [only one day left] Comment: Dear Reviewer s93u, Thank you for your time reviewing the paper. Your constructive feedback will help improve the quality of our paper. We have also addressed all your concerns in our rebuttal point-by-point. As we are towards the end of the author-reviewer discussion period, we request you to please go through our rebuttal, and we sincerely hope that the reviewer will consider improving the recommendation. We thank you again for your time! Best regards, Paper 66 Authors --- Rebuttal 4: Title: Request to review the rebuttal [only a few hours left] Comment: Dear Reviewer s93u, We greatly appreciate the time and effort you've invested in reviewing our paper. Your constructive feedback has been invaluable in enhancing the quality of our work. We've diligently addressed all your concerns in our point-by-point rebuttal. 
As we approach the conclusion of the author-reviewer discussion period, we kindly request that you review our responses. Two other reviewers have raised their scores after considering our rebuttals and revisions. In light of this, we sincerely hope you'll also reconsider your recommendation and potentially improve your score. Once again, we extend our heartfelt thanks for your time and expertise. Your insights have been crucial to the refinement of our paper. Best regards, Paper 66 Authors
Summary: This paper introduces DSA, an automated framework for discovering optimal sparsity allocation schemes for layer-wise pruning in LLMs. The proposed framework uses per-layer importance statistics and an evolutionary algorithm to explore effective allocation functions, which are then integrated into various pruning methods. Extensive experiments on challenging tasks demonstrate significant performance gains for models like LLaMA-1/2/3, Mistral, and OPT, achieving notable improvements over state-of-the-art models. Strengths: 1. The authors evaluated a wide range of models, from representative LLMs like LLaMA-1/2/3, Mistral, and OPT to the multi-modal model LLaVA. 2. This paper tackles an important problem: how to assign layer-wise pruning ratios for sparsity-based pruning. The authors have demonstrated strong results compared to state-of-the-art methods. Weaknesses: 1. A majority of the results in this paper are under the setting of 50% unstructured sparsity, which is a relatively low sparsity level. It would be good to demonstrate more results at higher levels of sparsity, e.g., 60% and 70%. 2. The authors describe the search space of the proposed algorithm as four transform operations in Equation 3. However, I am not sure this is the best way to find layer-wise sparsity. Is there any motivation for such a design of the search space? More specifically, why should we convert the element-wise scores to layer-wise scores first? I think some space should be used to discuss the motivation and insights behind Equation 3. 3. It would be good to show some numbers on the practical runtime speedup of dynamic layer-wise pruning as compared to layer-wise uniform pruning. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Are the top-performing allocation functions the same across pruning settings, e.g., sparsity ratios and models? More specifically, does the allocation function in Equation 6 generalize across LLMs? 2.
How does the proposed sparsity allocation method apply to neural networks beyond Transformers, e.g., convolutional neural networks? 3. The authors evaluated all the open-source LLMs in LLaMA and LLaMA-2. However, for LLaMA-3, only the 8B model is evaluated. Have the authors experimented with LLaMA-3-70B? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer of38,** Thanks for the constructive comments. We have tried our best to address all concerns in the last few days. If the reviewer finds our response adequate, **we would appreciate it if the reviewer considers raising the score**. Please see our responses below one by one:

------

> **Q1: More results at higher sparsity levels.**

**A1: Our responses** are: **(1)** To ensure fair comparisons, we follow standard settings like 50% sparsity, where baseline results are strong, making additional gains challenging. **(2)** Our experiments in **Table 8 at higher sparsity levels (65%-80%) on LLaMA-1** show our method's ability to achieve substantial improvements **(recognized by Reviewers s93u, gYqu, and VeXf)**. **(3)** Following the suggestions, we provide 60% and 70% sparsity experiments in the rebuttal PDF **(See our general response).** Our method demonstrates gains ranging from **1.24% to 7.03% at 60% sparsity and 1.91% to 7.68% at 70% sparsity**, underscoring our effectiveness at high sparsity ratios.

-----

> **Q2: About the motivation and insights behind the design of the search space.**

**A2: Our responses** are: **(1)** As discussed in the **Introduction [Lines 62-78] and Figure 1 (left),** our design is motivated by analysis of element-wise score distributions: the mean values of per-layer scores inspired the reduction and transformation operations. While basic reduction showed modest gains, applying transformation operations yielded more promising results. We also include pre-processing for score normalization and post-processing to enhance the function fit. **(2) Converting element-wise scores to layer-wise scores is important:** layer-wise scores provide a consolidated view of importance, reducing noise and yielding more stable ratios. By focusing on layer-wise metrics, critical layer information can be leveraged for better parameter retention. This conversion also improves computational efficiency by reducing complexity from the element level to the layer level.
**(3)** **As detailed in the Primary Operators section [Lines 184-197]**, the key insights of our design are: **(a) Pre-process:** standardizes inputs by normalizing scores across layers, addressing scale variations to ensure consistent metrics. **(b) Reduction:** condenses element-wise information by extracting representative values per layer, reducing computational complexity. **(c) Transformation:** models complex relationships with functions, enabling the representation of intricate patterns in layer importance. **(d) Post-process:** fine-tunes the allocation function for optimization, enhancing flexibility. **(4)** An ablation study **(See Table below)** shows that the reduction operation is the most influential, followed by transformation.

*Table: Perplexity with LLaMA-7B at 0.7 sparsity on WikiText2*

| DSA | DSA without Pre-process | DSA without Reduction | DSA without Transformation | DSA without Post-process |
| --- | --- | --- | --- | --- |
| 22.60 | 23.45 | 26.55 | 25.61 | 23.22 |

> **Q3: About practical runtime speedup.**

**A3:** Following the suggestions, we evaluate the runtime speedup **(See Table below)** using the DeepSparse inference engine on the LLaMA-V2-7B-chat-hf model. The findings indicate that our method exhibits a speedup comparable to uniform pruning.
| Method | Sparsity | Dense | 40% | 50% | 60% | 70% | 80% |
| --- | --- | --- | --- | --- | --- | --- | --- |
| uniform | Latency (ms) | 213.8 | 112.5 | 89.1 | 85.5 | 82.2 | 61.1 |
| uniform | Throughput (tokens/sec) | 4.7 | 8.9 | 11.2 | 11.7 | 12.2 | 16.4 |
| uniform | Speedup | 1.0x | 1.9x | 2.4x | 2.5x | 2.6x | 3.5x |
| ours | Latency (ms) | 213.8 | 121.4 | 90.8 | 86.0 | 78.3 | 58.6 |
| ours | Throughput (tokens/sec) | 4.7 | 8.2 | 11.0 | 11.6 | 12.8 | 17.1 |
| **ours** | **Speedup** | **1.0x** | **1.8x** | **2.4x** | **2.5x** | **2.7x** | **3.7x** |

> **Q4: About the generalization of allocation functions.**

**A4: Yes.** We directly transfer the top-performing allocation function in Equation 6 to different models and tasks without additional search. Our experiments show that this allocation function generalizes well across models and sparsity settings.

> **Q5: About applicability to non-Transformer networks.**

**A5:** We would like to highlight that **our method is architecture-agnostic and can be applied to various model types, including CNNs, because our search space is comprehensive and includes diverse types of operations that can be utilized across different model architectures.** To confirm this, we apply our allocation functions to ConvNeXt in the Table below. Our DSA surpasses other methods, especially at higher sparsity levels, showing its effectiveness with CNNs.

*Table: Accuracy (%) of sparse ConvNeXt-Base on ImageNet-1K.*

| Sparsity | 50% | 60% | 70% |
| --- | --- | --- | --- |
| Wanda | 82.72 | 80.55 | 68.18 |
| OWL + Wanda | 82.76 | 80.53 | 68.28 |
| **DSA + Wanda** | **83.12** | **81.68** | **71.55** |

> **Q6: About experiments on LLaMA-3-70B.**

**A6:** Following the suggestion, we perform experiments on LLaMA-3-70B in the rebuttal PDF **(See our general response)**.
Our method shows **2.23% to 4.38% gains** across different pruning approaches, consistent with its effectiveness on multiple LLMs. **Finally,** we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. We are glad to discuss further comments and suggestions.

---

Rebuttal 2: Title: Look Forward to The Post-Rebuttal Feedback Comment: Dear Reviewer of38, We extend our heartfelt thanks for your meticulous and invaluable comments. Our rebuttal addresses your concerns comprehensively, and we welcome any further inquiries you may have. If our response alleviates your concerns and clarifies the value of our paper, we would be truly grateful if you could reconsider your recommendation. We pledge to thoroughly incorporate all your astute observations in the final version. We have conscientiously integrated feedback from all reviewers and hope this is reflected favorably in your assessment. We are truly grateful for the time you've invested in reviewing our work. With sincere appreciation, Paper 66 Authors

---

Rebuttal 3: Title: Request to review the rebuttal [Author-Reviewer discussion phase ending soon] Comment: Dear Reviewer of38, Thank you for your time reviewing the paper. Your constructive feedback will help improve the quality of our paper. We have also addressed all your concerns in our rebuttal point-by-point. As we are towards the end of the author-reviewer discussion period, we request you to please go through our rebuttal, and we would be truly grateful if you could reconsider your recommendation. We thank you again for your time! Best regards, Paper 66 Authors

---

Rebuttal Comment 3.1: Comment: I would like to thank the authors for the response. My concerns are adequately addressed. Thus I have improved my score to 6.

---

Reply to Comment 3.1.1: Title: Many thanks for improving the score and recognition of our work and rebuttal Comment: Dear Reviewer of38, Thank you so much for the recognition of our responses.
We are glad to see that you have raised your score. We will make more efforts to improve our paper further. Many thanks for your constructive comments, time and patience. Best regards and thanks, Paper 66 Authors
Rebuttal 1: Rebuttal: # **General Response**

**Dear Reviewers, Area Chairs, Senior Area Chairs and Program Chairs,**

We sincerely thank all reviewers for their positive feedback and constructive comments. **In the initial reviews, three positive ratings were given.** Reviewers positively acknowledge **the novelty of the idea, the methodology employed, the extensive experiments conducted, the superior performance, and the good presentation of the paper**. More encouragingly, **Reviewers of38, 1f5c, VeXf, and gYqu** think our **novel automated framework tackles an important problem in LLM efficiency** for the community.

**[Important problem]:**
- **Reviewer VeXf:** "tackles an important problem in LLM efficiency"
- **Reviewer of38:** "tackles an important problem"

**[Novelty]:**
- **Reviewer 1f5c**: "proposes a novel method"
- **Reviewer VeXf**: "novel automated framework"
- **Reviewer gYqu**: "framework for automatically discovering effective sparsity allocation functions"

**[Theoretical soundness]:**
- **Reviewer s93u**: "technically sound and straightforward in principle"

**[Extensive experiments]:**
- **Reviewer of38**: "evaluated on a wide range of models"
- **Reviewer VeXf:** "Extensive experiments"

**[Superior performance]:**
- **Reviewer 1f5c:** "shows better performance than SOTA pruning methods"
- **Reviewer of38:** "demonstrated strong results compared to state-of-the-art methods"
- **Reviewer s93u:** "Empirical results demonstrate the strength of this approach"
- **Reviewer gYqu**: "demonstrates consistent performance improvements"
- **Reviewer VeXf:** "DSA outperforms existing methods"

**[Good presentation]:**
- **Reviewer 1f5c:** "insight of the paper is clear and easy to understand"
- **Reviewer s93u:** "This manuscript is a qualified paper"

In the past days, we carefully improved the experiments (using all computational resources we have), the clarifications, and the discussions of our work to address the concerns, the questions, and the requests of all
four reviewers. **In the attached rebuttal PDF, we provide detailed experimental results at higher sparsity ratios and 2:4 sparsity on LLaMA-2 and LLaMA-3 (mean accuracies are summarized in the Table below).** Our DSA method consistently boosts mean accuracies on seven zero-shot tasks across various LLaMA models and sparsity levels. The application of DSA leads to notable improvements in performance, **with improvements ranging from 2.34% to 7.03% in Magnitude pruning, 1.95% to 7.68% in SparseGPT, and 1.92% to 3.30% in Wanda**, demonstrating the effectiveness of DSA in enhancing model accuracy in different scenarios and with various pruning methods.

*Table: Mean accuracies (%) of our DSA on 7 zero-shot tasks.*

| Method | LLaMA-2-7B (60%) | LLaMA-2-7B (70%) | LLaMA-2-13B (60%) | LLaMA-2-13B (70%) | LLaMA-3-70B (60%) | LLaMA-3-70B (70%) | LLaMA-2-7B (2:4) | LLaMA-2-13B (2:4) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Magnitude | 35.61 | 50.81 | 38.38 | 51.16 | 55.86 | 38.76 | 45.58 | 49.89 |
| **Magnitude (DSA)** | **37.95 (2.34↑)** | **57.84 (7.03↑)** | **46.06 (7.68↑)** | **54.28 (3.12↑)** | **60.24 (4.38↑)** | **42.98 (4.22↑)** | **49.78 (4.20↑)** | **53.38 (3.49↑)** |
| SparseGPT | 43.61 | 60.68 | 48.76 | 70.14 | 65.03 | 43.22 | 50.94 | 54.86 |
| **SparseGPT (DSA)** | **45.56 (1.95↑)** | **61.31 (0.63↑)** | **50.04 (1.28↑)** | **72.12 (1.98↑)** | **67.34 (2.31↑)** | **45.73 (2.51↑)** | **52.66 (1.72↑)** | **56.35 (1.49↑)** |
| Wanda | 36.08 | 60.90 | 41.46 | 72.00 | 40.51 | 40.44 | 48.75 | 55.03 |
| **Wanda (DSA)** | **38.00 (1.92↑)** | **62.08 (1.18↑)** | **43.18 (1.72↑)** | **73.24 (1.24↑)** | **42.74 (2.23↑)** | **42.78 (2.34↑)** | **52.05 (3.30↑)** | **57.07 (2.04↑)** |

**Finally**, based on the constructive comments from all reviewers and our responses, **we will carefully revise the manuscript of our work**.
We hope our detailed responses help address the concerns, the questions, and the requests of all reviewers. Pdf: /pdf/191ab357554727107809f70f83d560ed317eceb2.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: This paper introduces a novel model pruning algorithm, DSA, which prunes unimportant model weights to increase model sparsity. Unlike previous pruning methods, which assign the same sparsity ratio to all layers, DSA calculates a sparsity allocation for each layer, achieving more adaptive per-layer pruning. The evaluation shows that DSA has strong empirical performance. Strengths: 1. The insight of the paper is clear and easy to understand. 2. The paper proposes a novel method to estimate the allocation budget for different layers. 3. The proposed method shows better performance than SOTA pruning methods. Weaknesses: 1. The presentation of the proposed method is quite confusing, especially Section 4: it is not clear why the allocation function uses this design. Why do we need the pre-process, reduction, transformation, and post-process steps? What are the insights behind these components? 2. The performance improvement is not as promising as the abstract claims, especially in structured pruning cases. Structured pruning is a more meaningful setting for model pruning, since it can be directly accelerated by hardware to achieve wall-clock speedup. In Tables 6 and 7, the performance improvement is around 1%. 3. The Wanda paper also reported 2:4 sparsity. How does DSA perform in a 2:4 sparsity setting? Technical Quality: 3 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer 1f5c,** Thanks for the valuable feedback. We have tried our best to address all concerns in the last few days. If the reviewer finds our response adequate, **we would appreciate it if the reviewer considers raising the score**. Please see our responses below one by one: ------ > **Q1: About motivation and insights of allocation function design.** **A1: Our responses** are: **(1)** Our allocation function design is **motivated by analyzing element-wise score distributions**, as discussed in the **Introduction [Lines 62-78] and Figure 1(left).** **(a)** We notice that mean, variance, and entropy values of per-layer element-wise scores can serve as allocation indicators, inspiring reduction operations. **(b)** While basic reduction of element-wise scores showed modest improvements, applying transform functions yielded more promising results, prompting the introduction of transform operations. **(c)** We include pre-process to normalize scores for fair comparison and post-process to further enhance function fit's upper bound. **(2)** The key insight of our four-component design is its flexibility in exploring diverse allocation functions tailored to each LLM's characteristics, while capturing complex, non-linear relationships between element-wise scores and sparsity ratios. **As detailed in the Primary Operators section [Lines 184-197]**, each component serves a specific purpose: **(a) Pre-process:** Standardizes inputs by normalizing scores across layers, ensuring consistent performance metrics by addressing scale variations. **(b) Reduction:** Condenses element-wise information by extracting representative values per layer through operations like variance or entropy, reducing computational complexity. **(c) Transformation**: Models complex relationships with functions like sine or exponential, enabling the representation of intricate patterns in layer importance. 
**(d) Post-process:** Fine-tunes the allocation function for optimization, enhancing flexibility. **(3)** Ablation study **(See Table below)** shows that the Reduction component is the most influential, followed by the Transformation component, while the Pre-processing and Post-processing components have a smaller impact on the overall DSA performance. *Table: Perplexity of LLaMA 7B at 0.7 sparsity on WikiText2* | DSA | DSA without Pre-process | DSA without Reduction | DSA without Transformation | DSA without Post-process | | ----- | ----------------------- | --------------------- | -------------------------- | ------------------------ | | 22.60 | 23.45 | 26.55 | 25.61 | 23.22 | ------ > **Q2: About performance improvements.** **A2: Our responses** are: **(1)** We would like to clarify that our method consistently demonstrates substantial enhancements **(1%-14%) across a spectrum of scales (7B~70B), models (LLaMA-1|2|3, Mistral, LLaVA, and OPT models), and complex tasks (reasoning and multimodal benchmarks)**, which robustly confirms its effectiveness. The magnitude of gains naturally varies depending on the model and task. For instance, in Tables 6 & 7, considering the **Dense LLaVA-1.5 Models ranging from 7B to 13B, we observe only a 1.5%** performance increase on VQAv2. Notably, our roughly 1% gains over Wanda, the current state-of-the-art pruning method, represent a promising advancement in this area **(Recognized by Reviewer of38, s93u, gYqu, and VeXf)**. **(2)** To ensure fair comparisons, we follow standard settings like 50% sparsity, where baseline results are strong, making additional gains challenging. **(3)** Our experiments in Table 8 at higher sparsity levels (65% ~ 80%) on LLaMA-1 show our method's ability to achieve substantial improvements. 
**(4)** Additional results in the rebuttal PDF **(See our general response)** further demonstrate that our method can achieve substantial gains, with improvements ranging from **1% ~ 7% at 60% sparsity, 2% ~ 7.6% at 70% sparsity, and 1.4% ~ 4.2% at 2:4 sparsity**. > **Q3: About performance in 2:4 sparsity setting.** **A3**: Following the suggestions, we evaluate our DSA under 2:4 sparsity on LLaVA (See Table below) and LLaMA-2 (See our general response). These consistent gains **(2.2% ~ 2.8% on LLaVA and 1.4% ~ 4.2% on LLaMA-2)** across different model sizes and datasets suggest that our approach is more effective at maintaining performance under tighter sparsity constraints. | 7B LLaVA-1.5 | VQAv2 | SQA | VQA | 13B LLaVA-1.5 | VQAv2 | SQA | VQA | | ------------ | ----- | ----- | ----- | ------------- | ----- | ----- | ----- | | Dense | 78.50 | 66.80 | 58.20 | Dense | 80.00 | 74.94 | 61.30 | | Wanda (2:4) | 68.92 | 55.06 | 45.42 | Wanda (2:4) | 75.39 | 64.89 | 52.52 | | **Ours (2:4)** | **71.18** | **57.44** | **48.25**| **Ours (2:4)** | **76.75** | **67.13** | **54.05** | **Finally,** we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. We are glad to discuss further comments and suggestions. --- Rebuttal 2: Title: Look Forward to The Post-Rebuttal Feedback Comment: Dear Reviewer 1f5c, We sincerely appreciate your thoughtful and constructive feedback. We have diligently addressed each of your concerns in our point-by-point rebuttal. We hope our response has alleviated your concerns and illuminated the value of our work. If so, we would be immensely grateful if you could reconsider your recommendation. We assure you that all your insightful comments will be meticulously incorporated into the final manuscript. We have earnestly integrated feedback from all four reviewers and hope this is evident in your evaluation. Thank you for dedicating your valuable time to review our response. 
With deepest gratitude, Paper 66 Authors --- Rebuttal 3: Title: Request to review the rebuttal [Author-Reviewer discussion phase ending soon] Comment: Dear Reviewer 1f5c, We sincerely thank you for your valuable feedback, which has provided our paper with deeper insights. We promise to thoroughly reflect all your comments in the final manuscript. As we approach the end of the author-reviewer discussion period, we kindly request that you go through our rebuttal, and we would be immensely grateful if you could reconsider your recommendation. Best regards, Paper 66 Authors --- Rebuttal Comment 3.1: Title: Request to review the rebuttal [only one day left] Comment: Dear Reviewer 1f5c, Thank you for your time reviewing the paper. Your constructive feedback will help improve the quality of our paper. We have addressed all your concerns in our rebuttal point by point. As we approach the end of the author-reviewer discussion period, we kindly request that you go through our rebuttal, and we sincerely hope that the reviewer will consider improving the recommendation. We thank you again for your time! Best regards, Paper 66 Authors
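The four-component allocation pipeline described in the authors' response A1 above (pre-process → reduction → transformation → post-process) can be sketched as follows. The specific operator choices here (max-normalization, variance, exponential) and all names are illustrative assumptions only, since DSA searches over many such operator combinations per model:

```python
import numpy as np

def allocate_sparsity(layer_scores, target=0.6):
    """Map per-layer element-wise importance scores to per-layer sparsity ratios.

    Illustrative pipeline only; the paper explores many operator choices.
    """
    # Pre-process: normalize scores to a common scale across layers
    norm = [np.abs(s) / (np.abs(s).max() + 1e-8) for s in layer_scores]
    # Reduction: condense each layer to one scalar indicator (here: variance)
    ind = np.array([s.var() for s in norm])
    # Transformation: non-linear mapping of indicators (here: exponential)
    t = np.exp(-ind)
    # Post-process: rescale so the mean per-layer sparsity matches the target
    ratios = t * (target / t.mean())
    return np.clip(ratios, 0.0, 1.0)

rng = np.random.default_rng(0)
scores = [rng.standard_normal(100) * (i + 1) for i in range(4)]
print(allocate_sparsity(scores))
```

The key point the ablation makes is that the reduction and transformation steps carry most of the benefit; the sketch shows how swapping those operators changes the per-layer budget while the post-process keeps the global sparsity fixed.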
null
null
null
null
null
null
Aligning to Thousands of Preferences via System Message Generalization
Accept (poster)
Summary: In this paper, the authors introduce a novel method to align LLMs with diverse user preferences without requiring continual retraining for each specific preference. The approach utilizes a unique system message protocol that guides LLMs to produce responses tailored to specific, nuanced user preferences. Strengths: 1. The idea is novel: adapting to different users' preferences by training on preference data with different system prompts. 2. It is intuitively useful research, effectively strengthening alignment by clarifying user identities beyond the alignment data themselves. 3. The paper is well-written and easy to follow. Weaknesses: 1. How many humans did you hire to perform the human evaluation? There is no description of this. If we want to demonstrate the model's ability to cater to a wide range of people with different backgrounds, then a similarly wide range of people should be employed for evaluation. 2. I feel there is no analysis showing that the model is not becoming verbose, as all of the benchmarks used prefer verbose responses. Technical Quality: 2 Clarity: 2 Questions for Authors: Is this a way to "hack" the Chatbot Arena? (in the right way lol) Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer bEUV, we deeply appreciate your valuable feedback. We provide responses to your comments in the weaknesses (W) and questions (Q) section. --- **Details of human annotation/evaluation (W1)** We employ 14 undergraduate students aged 20-24 from various majors, consisting of 6 males and 8 females. Hiring annotators with diverse backgrounds and preferences is an important aspect of human evaluation. However, in this study, human annotators assess the quality of the {system message, user prompt, response} set according to clear criteria, regardless of their personal preferences. They also evaluate which model better reflects a given preference in response generation. The Stage 1 and Stage 2 human evaluation process is detailed in Appendix F.1. Therefore, while individual preferences of the annotators are valuable, they do not apply in our specific context. --- **Analysis of response verbosity (W2)** In Appendix G.1 Figure 8, we plot the length distribution of responses and reference answers on Multifaceted Bench. It indeed shows that Janus’s responses are longer than other LLMs including Mistral 7B Instruct v0.2 and GPT 3.5 Turbo. Since the distribution of Janus responses and that of reference answers made by GPT-4-Turbo-0125 are similar, it can be seen as the result of supervised learning. Still, Table 3 shows that Janus outperforms models including the same Mistral 7B Instruct v0.2 and GPT 3.5 Turbo on AlpacaEval 2.0 when compared using the *length-controlled* (LC) win rate. The LC win rates are a debiased version of the win rates that control for the length of the outputs and are reported to improve leaderboard correlation [1]. Since Janus exhibits high performance consistently across benchmarks including length-controlled measures, we stress that verbosity does not play as a critical factor for evaluators to prefer Janus responses over those from other models. 
--- **Hacking the Chatbot Arena (Q1)** Could you please elaborate on your question regarding our method 'hacking' the Chatbot Arena? Are you suggesting that this hacking is a result of biases in the datasets or models, or are you referring to biases in the evaluation methods? --- [1] Dubois et al. Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators. arXiv 2024. --- Rebuttal Comment 1.1: Comment: Hi authors, Thanks for your responses! To clarify the question: can this method be used as a way to improve the models' ranking on chatbot arena simply by fitting the human preferences or arena users' preferences? --- Rebuttal 2: Comment: Dear reviewer bEUV, thank you for the clarification! Our training dataset and method do improve the model on preference-based benchmarks like AlpacaEval 2.0 (Table 3). However, our method is not centered on how to extract user preferences from the human population. Instead, we created training data simulating user preferences by distilling GPT-4 Turbo's knowledge, which itself was effective in achieving both personalization and general helpfulness. Also, since Chatbot Arena is updated via *real-time user votes*, we do not see it as feasible to fit the arena users' preferences. We emphasize that our method cannot be used to mine or fit existing users' preferences. The intended use of our method is to align to diverse individuals' preferences and generalize to unseen system messages. We hope this answers your question!
Summary: This paper addresses the issue that humans inherently have diverse values, while current LLMs are primarily aligned with general public preferences such as helpfulness and harmlessness. Previous work has trained new reward models (RMs) and LLMs for individual preferences, which is time-consuming and costly. The authors propose a new paradigm where users specify their values within system messages. Specifically, they create the Multifaceted Collection, containing 65k user instructions, each with 3 system messages, resulting in 192k combinations of fine-grained values with the assistance of GPT-4. They then train Mistral-7B-v0.2 into Janus-7B and test it on 921 prompts collected from 5 benchmarks. Compared to leading open-sourced and proprietary models using pairwise ranking human evaluation or GPT-4-as-a-judge, Janus-7B achieves a higher win rate while maintaining harmlessness. Strengths: The paper proposes a new paradigm to address the problem of aligning to fine-grained values without repeatedly training multiple RMs and LLMs. It also creates a large-scale preference dataset with fine-grained system prompts representing multiple alignment targets. Detailed experiments across many leading open-source and proprietary models validate the method's effectiveness. Weaknesses: While the paper is well-written and presents a solid analysis, several weaknesses need addressing: 1. The construction of the Multifaceted Collection is costly and not scalable. GPT-4 generates preference sets for each of the 65k instructions, converts each of the 198k preference sets into a system message, and crafts gold-standard multifaceted responses for each system message. 2. The role of system message generalization has not been clarified. What are the differences between varying the system messages for individualized alignment targets and simply using different instructions in the user prompt while maintaining the system message unchanged to reach the specified alignment targets? 
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What are the differences between varying the system messages for individualized alignment targets and simply using different instructions in the user prompt while maintaining the system message unchanged to reach the specified alignment targets? It seems that I can just use specialized instructions in user prompts to reach the same goal of aligning to individualized values. Can you comment on this? 2. There are several typos: - Line 136: 5 datasets, not 4 - Line 152: Appendix B.1? - Line 286: Appendix G.1? And so on Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The construction of the Multifaceted Collection is costly and not scalable. And the role of system message generalization is unclear. It is not explained how varying system messages for individualized alignment targets differs from using different instructions in user prompts while keeping the system message unchanged. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer k2dN, thank you for the important remarks that will help us better shape our contributions. We will address your concerns in weaknesses (W1, W2) and questions (Q1, Q2) below. --- **Redefining the scalability of our approach (W1)** Creating a synthetic dataset can indeed be costly, and it is challenging to consider the values of every individual in the world when generating data. However, this does not necessarily mean that the process is not scalable. According to previous studies [1, 2, 3], training with a sufficiently diverse and large dataset can enable language models to address problems requiring skills they have not explicitly learned. As shown in Table 2, all the preferences used in the benchmark are ones that Janus has not encountered during training. Nonetheless, Janus's superior performance compared to the baselines suggests that it possesses significant generalizability. We believe that our approach can overcome the scalability problem in individualized alignment by previous personalized RLHF approaches (Section 2). --- **Discussing the role of system messages for individualized alignment (W2, Q1)** The core of our methodology is to verbalize preferences in the model input and train a model on many of such instances to improve generalization. Our method is an extension of instruction tuning, where we *control preferences in the instruction* to give stronger signals to the model on fine-grained preferences than on general helpfulness and harmlessness dimensions. In this aspect, your comment on using different instructions in the *user prompt* to achieve individualized alignment is valid. Applying our training recipe using user prompts instead of system prompts would yield similarly strong results. Nevertheless, we tried to separate the preference-specific instructions in the system message, since there is much underexplored potential in it for reaching alignment targets. 
From an application standpoint, the best scenario would be when the user specifies their preferences by themself in the user prompt, but in reality, users would not take much effort to do so. A way to improve user experience would be the developers inferring preferences (possibly based on past conversation history), verbalizing them in the system message, and aligning the model response with the user’s *hidden* interests. A model can be trained to assign highest privilege to the system message for more granular preference steering [4]. However, there is little work that experiments with diverse system messages (Section 2), so we aimed to conduct a study on the benefits of varying system messages for the same user prompt. --- **Typos (Q2)** Thank you for the suggestions. We will go over the draft carefully and revise any inaccuracies. --- [1] Sanh et al.. Multitask Prompted Training Enables Zero-Shot Task Generalization. ICLR 2022. [2] Longpre et al.. The Flan Collection: Designing Data and Methods for Effective Instruction Tuning. ICML 2023. [3] Kim et al.. Prometheus: Inducing Fine-Grained Evaluation Capability in Language Models. ICLR 2024. [4] Wallace et al.. The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions. arXiv 2024. --- Rebuttal Comment 1.1: Comment: Thank you for your clear response. The role of system messages in individualized alignment can indeed be seen as a method to infer users' preferences. However, in practice, what specific system message would you use to better align with users' goals? Would it be based on past conversation history? If so, have you experimented with "developers inferring preferences (possibly based on past conversation history), verbalizing them in the system message"? Additionally, can system messages inferred from past conversation history be generalized using JANUS? 
My main concern is that aligning with varying preferences through system message generalization seems somewhat similar to using diversified instructions in instruction tuning, aiming to enable generalization to other instructions. --- Rebuttal 2: Title: Author Response to Reviewer k2dN (1/2) Comment: Dear reviewer k2dN, thank you for raising the discussion. We took your concerns carefully and will provide detailed comments below. First, as per your main concern on our method’s similarity with instruction tuning, we agree with the following statement: > aligning with varying preferences through system message generalization seems somewhat similar to using diversified instructions in instruction tuning, aiming to enable generalization to other instructions. System message itself is an instruction that the model should follow, so the basis of system message generalization is indeed in instruction tuning. With sufficiently many diversified system messages, our method takes advantage of the instruction tuning recipe to generalize to unseen system messages (i.e., individualized alignment targets). However, our focus on the *content* differs from previous instruction tuning works. While previous works scale inputs, input-free tasks, or tasks per input (three paradigms described in [1]) in instructions, **we aim to scale *meta-instructions*, instructions that guide how to respond to subsequent instructions**. Our motivation was that different people expect different responses on the same instruction, so such meta-instructions that set preferences for task execution should also be taught to the model. Including this in instruction tuning helps the model approach various types of prompts and dynamically adjust response strategies, as shown in Section 5.1 and 6.2. 
Method-wise, our hierarchical data generation strategy enables careful curation of those meta-instructions to specifically simulate user preferences instead of presenting irrelevant challenges or hinting solutions with respect to the instruction. To the best of our knowledge, **there is no work that shares a similar motivation with us or has publicly released a training dataset containing meta-instructions**. --- [1] Lou et al.. MUFFIN: Curating Multi-Faceted Instructions for Improving Instruction Following. ICLR 2024. --- Rebuttal 3: Title: Author Response to Reviewer k2dN (2/2) Comment: Lastly, we would like to elaborate on why we have set meta-instructions in the system message and how to further utilize them. While some meta-instructions can only be defined by a specific user message, it can also be more general. For example, the following system message in our dataset is associated with a user prompt giving a specific hyperbola equation, but is also applicable to other kinds of problems. ``` You are a mathematics coach, specializing in presenting complex mathematical concepts to users with an intermediate understanding. Your teaching method is known for its succinctness, offering clear, direct explanations and step-by-step walkthroughs without overcomplicating the information. To facilitate learning, incorporate visual aids like charts and diagrams, making abstract concepts tangible and easier to understand. Your approach always considers the users' math anxiety, striving to create an engaging, supportive environment that reduces stress and fosters confidence. Your aim is to make math accessible and less intimidating, ensuring users can grasp the essentials of mathematical theories quickly and effectively, with patience and creativity at the core of every interaction. ``` In this sense, **meta-instructions can be effective when set programmatically on relevant user messages in the form of system messages**. 
This (i) allows reflecting the users’ common needs on various instructions and (ii) lowers user burden. In practice, OpenAI has already shipped similar features like custom instructions [2] and memory controls [3] in ChatGPT; a user can specify their preferences in the custom instruction by oneself (e.g., *When I ask you for code, please just give me the code without any explanation on how it works.*) or ChatGPT will memorize the details itself from the conversations. When user preferences are gathered or summarized from interactions, it can be decided to be included as part of the system message without user specification for seamless chat experience. Exactly how to infer user preferences or how to manage user controls on system messages is application-dependent and beyond the scope of our work (noted in broader impact in Appendix I). In the line of works on dialogue summarization [4, 5], we look forward to future work to explore the challenges of inferring preferences from conversations. The focus of our work lies more in **representing user preferences in the form of meta-instructions** and **training models to follow meta-instructions**. Also, from a technical perspective, training meta-instructions in system messages with accompanying gold responses would make the model (with appropriate system tokens) better distinguish between meta-instructions and instructions and tailor responses to the higher-level guidance. We greatly value your feedback and will improve our writing to shape the storyline better. --- [2] OpenAI. Custom instructions for ChatGPT. OpenAI Blog 2023. [3] OpenAI. Memory and new controls for ChatGPT. OpenAI Blog 2024. [4] Zou et al.. Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and Context-Aware Auto-Encoders. AAAI 2021. [5] Zhao et al.. TODSum: Task-Oriented Dialogue Summarization with State Tracking. arXiv 2021. --- Rebuttal Comment 3.1: Comment: Thanks for your detailed response. 
Your responses solve my concern to some extent. I maintain my score.
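The system-message mechanism discussed in the thread above can be made concrete with a short sketch. The helper and the preference wording below are hypothetical, but the four dimensions (style, background knowledge, informativeness, harmlessness) follow the rebuttal; the `messages` structure is the standard chat format in which the preference-carrying meta-instruction occupies the system slot:

```python
# Hypothetical helper: verbalize preference dimensions into a system message.
# The dimension names follow the paper; the templates are illustrative only.
def build_system_message(style, knowledge, informativeness, harmlessness):
    return (
        f"You are an assistant whose answers are {style}. "
        f"Assume the user has {knowledge} background knowledge. "
        f"Responses should be {informativeness}. "
        f"Always remain {harmlessness}."
    )

messages = [
    {"role": "system", "content": build_system_message(
        style="succinct and step-by-step",
        knowledge="intermediate",
        informativeness="example-driven",
        harmlessness="supportive and non-judgmental")},
    {"role": "user", "content": "Explain what a hyperbola is."},
]
print(messages[0]["content"])
```

In the deployment scenario the authors describe, a developer would fill these slots programmatically (e.g., from inferred user preferences) so the same user prompt yields differently tailored responses without any user effort.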
Summary: This paper aims to align Large Language Models (LLMs) with individual user preferences at scale. The authors propose a paradigm where users specify their values within system messages to guide the LLM's generation behavior. To address the challenge of generalizing to diverse system messages, the authors create the MULTIFACETED COLLECTION, a dataset with 192k unique combinations of values and instructions. They train a 7B LLM called JANUS using this dataset and demonstrate its effectiveness in adhering to various user preferences without the need for retraining for each individual. The JANUS model outperforms other models like Mistral 7B Instruct v0.2, GPT-3.5 Turbo, and GPT-4 in benchmarks that assess response helpfulness. Strengths: 1. The creation of the MULTIFACETED COLLECTION, as well as the trained models, makes a good contribution to the community for studying diverse preference alignment. 2. The idea of utilizing system messages, while simple, is demonstrated as a scalable solution to the problem of individualized LLM alignment. 3. The paper shows that training with a diverse array of system messages not only supports personalized responses but also enhances alignment with general public preferences. Weaknesses: While widely adopted, the GPT4-based data synthesis process may introduce some artifacts. The authors should include quantitative and qualitative analyses to investigate the potential biases and representativeness in human preferences in the collected MULTIFACETED COLLECTION. Technical Quality: 4 Clarity: 3 Questions for Authors: How do you measure the diversity of user preferences in MULTIFACETED COLLECTION? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer uZmQ, we appreciate your thoughtful comments. Our responses to the issues raised in the weaknesses (W1) and questions (Q1) sections are detailed in the global responses (G2: Diversity, G4: Bias). --- **Diversity of User Preferences (Q1)** We measured the ROUGE-L score among user preferences in the training data and found an average similarity of about 0.2. Compared to previous works, this indicates sufficient diversity in the preferences present in our dataset. Furthermore, the model trained with this data, Janus, generates more diverse responses compared to baselines. This demonstrates the diversity of our dataset. --- **Bias and representativeness of human preferences (W1)** We tested Janus' performance on three social bias benchmarks and discovered that Janus exhibits a bias level comparable to LLaMA 3 8B Instruct across all tasks and surpasses Mistral 7B Instruct v0.2 in most tasks. No significant bias issues are observed in Janus relative to other models. We also followed experiments in previous work to measure the similarity between the human distribution and model distribution on survey questions and personality tests. Results using the Jensen-Shannon distance show that Janus exhibits a 2x smaller decrease in similarity compared to Mistral 7B Instruct v0.2. The similarity is especially evident in terms of entropy distributions. These findings suggest that Janus is reasonably calibrated to the human population and that Multifaceted Collection can be representative of diverse human preferences to a large degree. Please see more information in G2 and G4 of our global response. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I have read the author response, and I maintain my rating.
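The pairwise ROUGE-L diversity measurement mentioned in the rebuttal above can be sketched as follows. This is a minimal LCS-based ROUGE-L F1; the whitespace tokenization and the toy preference strings are illustrative, and a lower mean pairwise similarity indicates more diverse preferences:

```python
from itertools import combinations

def lcs_len(a, b):
    """Length of the longest common subsequence between two token lists."""
    dp = [0] * (len(b) + 1)
    for x in a:
        prev = 0
        for j, y in enumerate(b, 1):
            cur = dp[j]
            dp[j] = prev + 1 if x == y else max(dp[j], dp[j - 1])
            prev = cur
    return dp[-1]

def rouge_l_f1(a, b):
    """ROUGE-L F1 between two whitespace-tokenized strings."""
    ta, tb = a.split(), b.split()
    lcs = lcs_len(ta, tb)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(tb), lcs / len(ta)
    return 2 * p * r / (p + r)

def mean_pairwise_rouge_l(texts):
    """Average ROUGE-L F1 over all unordered pairs of texts."""
    pairs = list(combinations(texts, 2))
    return sum(rouge_l_f1(a, b) for a, b in pairs) / len(pairs)

prefs = ["prefers concise formal answers",
         "wants playful answers with many examples",
         "prefers detailed technical answers"]
print(round(mean_pairwise_rouge_l(prefs), 3))
```

Averaging this score over all preference pairs in a training set gives the single diversity number the authors report (about 0.2 on their data).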
Summary: This paper introduces JANUS, a novel approach to aligning large language models (LLMs) with diverse individual user preferences without retraining. The key contributions are the MULTIFACETED COLLECTION: A dataset of 192k diverse system messages reflecting varied user preferences, paired with 65k instructions. JANUS: A 7B parameter LLM trained on this dataset to generalize to unseen system messages at test time. Evaluation showing JANUS outperforms several larger models, including GPT-3.5 and GPT-4, in adhering to specified preferences. Strengths: 1. Using diverse system messages for alignment is novel and creative. It tackles the challenge of personalized alignment in a scalable way, avoiding the need to retrain models for each user. 2. The methodology is rigorous, with careful dataset construction and comprehensive evaluations against strong baselines. The use of human evaluation alongside automated metrics strengthens the results. 3. This paper is well-structured and clearly written. The approach and results are explained thoroughly, with helpful figures illustrating key concepts. It work has potentially high impact, addressing a crucial challenge in AI alignment. The ability to adapt to individual preferences without retraining could be transformative for deploying personalized AI systems at scale. Weaknesses: 1. The paper doesn't deeply examine potential risks of allowing such flexible preference specification, such as potential for misuse or unintended biases. - There's no discussion on how to handle conflicting or ethically questionable preferences. 2. While the overall approach is effective, it's unclear which components contribute most to the performance gains. For example, how much does the hierarchical nature of the preference augmentation contribute versus simply having a large number of diverse preferences? The impact of the number of unique system messages or instructions is not explored. 3. 
The study focuses on 7B parameter models, primarily comparing to other 7B models or larger. It's unclear how well the approach scales to smaller models or much larger models (e.g., 70B+). The comparison to LLaMA 3 8B Instruct is promising, but more exploration of scale effects would be valuable. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Have you explored any potential negative consequences of allowing such flexible specification of preferences through system messages? Are there safeguards to prevent misuse or handling of conflicting ethical preferences? 2. How does the performance of JANUS change as you vary the number of unique system messages in the training data? Is there a point of diminishing returns, and how does this relate to the hierarchical structure of your preference augmentation? 3. Have you tested the approach on models significantly larger or smaller than 7B parameters? How does the effectiveness of this method scale with model size, especially compared to the scaling behavior of traditional fine-tuning approaches? 4. How consistent is JANUS in maintaining the specified preferences across a long conversation or multiple diverse tasks? Have you evaluated this aspect, and if so, what metrics did you use? 5. The paper mentions outperforming larger models like GPT-3.5 and GPT-4. Have you analyzed why your approach seems to be more effective than simply scaling up model size for this task? Could this indicate a fundamental advantage of your method over scale alone? How did you ensure the quality and diversity of the generated system messages and reference answers in the MULTIFACETED COLLECTION? Did you implement any specific measures to mitigate potential biases introduced by using GPT-4-Turbo-0125 for generation? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer eXeU, we deeply appreciate your constructive feedback. Here we address the weaknesses (W) and questions (Q) that you raised on our work. --- **Discussing the potential risks of allowing flexible preference specification and available safeguards (W1,Q1)** We agree that training a model with flexible preference specifications may increase its vulnerability to misuse, such as jailbreaking. To address this, we incorporated safety considerations into our data generation process by including a harmlessness dimension in every system message. As discussed in G3, our method does not significantly compromise safety, as demonstrated through evaluations of both the dataset's safety and the model’s likelihood of generating toxic responses to harmful prompts. In future work, we plan to enhance our approach by filtering toxic data from the dataset, increasing the proportion of safety-related data in the training set, and utilizing AI moderation technologies like LLaMA-Guard and Prompt-Guard. --- **Demystifying the factors that contribute to performance gains (W2, Q2, Q5)** **(1) Number of training data** We conducted an ablation study to examine the effects of data scaling. Janus 7B was trained on a dataset of 197k instances, where each user prompt is paired with 3 different system messages and responses. To test the effects of scaling, we reduced the number of system messages and responses per user prompt to 2 and 1, resulting in datasets of 132k and 66k instances, respectively, and trained separate models. The results indicate that, across five benchmarks, the scaling effect is evident: increasing the amount of training data leads to higher scores. 
| \# system messages per instruction | \# total train instances | mf-AlpacaEval | mf-FLASK | mf-Koala | mf-MT-Bench | mf-Self-Instruct |
| --------------------------------- | ------------------------ | ------------- | -------- | -------- | ----------- | ---------------- |
| 1 | 66k | 4.415 | 4.017 | 4.358 | 4.083 | 4.025 |
| 2 | 132k | 4.427 | 4.052 | 4.39 | 4.1 | 4.01 |
| 3 | 197k (→ Janus) | 4.43 | 4.06 | 4.41 | 4.11 | 4.01 |

**(2) Hierarchical data generation strategy** We prompted GPT-4-Turbo-0125 to generate preferences freely (a preference set of four values or a single detailed preference description) and qualitatively compared them to our original hierarchically generated ones. On ten samples, we observed that free-form preference generation can create more topic-specific preferences, but oftentimes they deviated from what we expect preferences to be. Specifically, some generations included preferences irrelevant to the goal of the user instruction (e.g., a preference for nutritional benefits in a *math* problem illustrated with apples) or attempted to resolve the user request and hint at solutions (e.g., explicitly instructing correct implementations for a coding problem). These problems arise because the model needs to understand and elicit preferences from the human user side, not the assistant side. Our hierarchical data generation strategy allows sufficient control over synthesizing *what users would expect in the response*. Based on various existing literature (see Appendix A.2), we determined that preferences for any response differ along the style, background knowledge, informativeness, and harmlessness dimensions. Our method of providing the dimensions in context, coupled with manually crafted seed examples, is instrumental in obtaining high-quality, individualized preferences. Please see our global response for further verification of our synthetic data and method. 
--- **Effectiveness of our method as the model scales (W3, Q3)** We explored how variations in the type and size of the base model affect performance. Initially, we used Mistral 7B v0.2, but we also experimented with LLaMA models. The results in the table indicate that LLaMA 2 7B performs similarly to or slightly worse than Mistral 7B v0.2. However, increasing the model size from 7B to 13B (LLaMA 2 7B vs. 13B) results in a clear performance improvement. Notably, the latest model, LLaMA 3 8B, surpasses the larger LLaMA 2 13B in benchmark scores (LLaMA 3 8B vs. LLaMA 2 13B). Therefore, both model size and the capabilities of the base pre-trained model significantly impact performance when applying our method.

| Base pre-trained model | mf-AlpacaEval | mf-FLASK | mf-Koala | mf-MT-Bench | mf-Self-Instruct |
| ------------------------- | ------------- | -------- | -------- | ----------- | ---------------- |
| Mistral 7B v0.2 (→ Janus) | 4.43 | 4.06 | 4.41 | 4.11 | 4.01 |
| LLaMA 2 7B | 4.41 | 4.01 | 4.41 | 4.08 | 4.03 |
| LLaMA 2 13B | 4.5 | 4.3 | 4.5 | 4.23 | 4.08 |
| LLaMA 3 8B | 4.5 | 4.4 | 4.34 | 4.31 | 4.14 |

--- **Consistency of Janus in maintaining preferences in multi-turn scenarios (Q4)** To assess whether Janus effectively adheres to diverse preferences in multi-turn scenarios, we conducted an evaluation focusing solely on the multi-turn questions from MT-Bench, excluding single-turn questions. The results indicate that Janus consistently outperforms the baselines, Mistral 7B Instruct v0.2 and LLaMA 3 8B Instruct (6.8 vs. 6 and 6.6). This is consistent with the trends observed in Table 3, suggesting that Janus remains highly competitive in multi-turn settings.

| Models | Score [0, 10] |
| ------------------------ | ------------- |
| Mistral 7B Instruct v0.2 | 6 |
| LLaMA 3 8B Instruct | 6.6 |
| Janus 7B | 6.8 |

--- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. After reading the response, I think my current score is appropriate.
Rebuttal 1: Rebuttal: We provide extra analyses on Multifaceted Collection and Janus in this global response, which we believe will comprehensively address various concerns about our approach. Four aspects of our method are further investigated: quality (G1), diversity (G2), safety (G3), and bias (G4). We attach a **PDF containing three supplementary figures** below. --- **G1: Quality** While we have shown the effectiveness of our dataset through the fine-tuned model’s superior benchmark performances (Section 5.1-5.2) and ablation studies (Section 6.1-6.2), we additionally cross-checked the quality of the LLM-generated system messages. Specifically, we developed two criteria that a proper system message should adhere to: (1) relevance and specificity, and (2) coherence and naturalness. We created a scoring rubric with a scale of 1 to 5 for each criterion. Inspired by recent works that use LLM-as-a-Judge to assess the process of training data synthesis [1,2], we used LLaMA 3.1 8B Instruct to score a random 20k subset of system message-user instruction pairs. Results show an average of 3.74 on relevance and specificity and 4.01 on coherence and naturalness, with 68.8% and 85.6% of instances at or above a score of 4, respectively. This demonstrates the quality of the verbalized preferences, potentially revealing why our model is effectively steerable for pluralistic alignment. --- **G2: Diversity** We point to various pieces of evidence presented in our paper regarding the diversity of preferences in the Multifaceted Collection. - The number of individual preference values embedded in system messages is 797k, derived from 6k subdimensions (see Table 1). Since a single system message is a mixture of preferences from different dimensions, we assert that our dataset contains a wide range of multifaceted human preferences. Example sub-dimensions and keywords of preference descriptions are in Table 7. 
- We calculated the ROUGE-L similarities for every possible pair of *preference descriptions* associated with each instruction. The average ROUGE-L score across all dimensions is approximately 0.21, peaking at 0.25 (see Appendix B, Figure 4). Compared to results from previous studies creating synthetic datasets [3], this demonstrates significant diversity among preferences for the same instruction. - Furthermore, as illustrated in Section 5.1 and Figure 9, we measured the ROUGE-L scores between *responses* generated by language models when different system messages were presented for a single user instruction. Janus showed lower ROUGE-L scores than Mistral 7B Instruct v0.2 and GPT-4-Turbo. This also confirms that diversity is learnable from Multifaceted Collection. In addition, we test whether Janus exhibits less similarity to human populations than its base pre-trained model (Mistral 7B v0.2) and its post-aligned counterpart (Mistral 7B Instruct v0.2) do. Following [4], models were evaluated on GlobalOpinionQA and the Machine Personality Inventory (MPI), and we calculated the Jensen-Shannon distance between the human (US and Japan) and model distributions over answer choices. Echoing the findings of [4], Supplementary Figure 1 shows that aligned models including Janus become more distant from the human population after fine-tuning. Still, Janus diverges less from the pre-trained distribution than Mistral 7B Instruct v0.2 does. We also measured the entropy, and Supplementary Figure 2 visualizes that Janus is significantly closer to the pre-trained and human distributions than Mistral 7B Instruct v0.2 is. These experiments suggest that our training method can facilitate calibration to diverse individuals. --- **G3: Safety** To check the presence of unsafe content in our synthetic dataset, we evaluated the dataset's system messages, user prompts, and gold responses using a content safety classifier, Llama Guard 3 8B. 
99.2% of the 196,998 instances were classified as safe. Moreover, as presented in Table 4, we have tested Janus on RealToxicityPrompts, showing that Janus has a 5.2% and 5.7% lower probability of generating toxic text compared to Mistral 7B Instruct v0.2 and LLaMA 3 8B Instruct, respectively. This indicates that neither the dataset nor the model is exceptionally unsafe compared to others. --- **G4: Bias** Since it is difficult to directly expose biases in our dataset beyond the analyses above, we evaluated Janus and three baselines on three social bias benchmarks: Winogender, CrowS-Pairs, and BBQ, all in zero-shot. We include Gemma 2 9B IT as a baseline as it is a SOTA similar-sized model reported to have been extensively tested on bias benchmarks.

| Model | Winogender | CrowS-Pairs | BBQ (Ambig) | BBQ (DisAmbig) |
|-------|------------|-------------|-------------|----------------|
| | Acc ↑ | Likelihood Diff ↓ / % Stereotype ↓ | Acc ↑ / Bias score ↓ | Acc ↑ / Bias score ↓ |
| Mistral 7B Instruct v0.2 | 0.61 | 4.45 / 67.74 | 0.11 / 11.63 | 0.87 / 2.52 |
| LLaMA 3 8B Instruct | 0.64 | 4.05 / 64.52 | 0.08 / 12.99 | **0.88** / 1.98 |
| Gemma 2 9B IT | **0.68** | 5.44 / **62.43** | **0.42** / **7.62** | 0.86 / **1.33** |
| **Janus 7B** | 0.64 | **4.02** / 67.68 | 0.08 / 12.26 | 0.86 / 3.25 |

According to the results in the table above, Janus shows a degree of bias similar to that of LLaMA 3 8B Instruct across all tasks, and is better than Mistral 7B Instruct v0.2 except on BBQ. Overall, we do not see critical issues of bias in Janus compared to other models, and, we hypothesize, in Multifaceted Collection as well. Category-wise Winogender evaluation results are visualized in Supplementary Figure 3. --- [1] Yuan et al. Self-Rewarding Language Models. ICML 2024. [2] Xu et al. Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. arXiv 2024. [3] Honovich et al. Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor. 
ACL 2023. [4] Sorensen et al. A Roadmap to Pluralistic Alignment. ICML 2024. Pdf: /pdf/a02ea6e63caf9998782aba4ab26f0806a5886b06.pdf
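The pairwise ROUGE-L diversity check described in G2 can be sketched as follows. This is a minimal, self-contained LCS-based implementation for illustration only; the function names are our own, and the authors' actual evaluation presumably uses a standard ROUGE package rather than this sketch:

```python
from itertools import combinations

def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists (rolling-row DP)."""
    dp = [0] * (len(b) + 1)
    for tok in a:
        prev = 0  # dp value of the diagonal cell (previous row, previous column)
        for j, other in enumerate(b, start=1):
            cur = dp[j]
            dp[j] = prev + 1 if tok == other else max(dp[j], dp[j - 1])
            prev = cur
    return dp[-1]

def rouge_l_f1(ref, hyp):
    """ROUGE-L F1 between two whitespace-tokenized strings."""
    r, h = ref.split(), hyp.split()
    lcs = lcs_len(r, h)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(h), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

def avg_pairwise_rouge_l(texts):
    """Average ROUGE-L F1 over all unordered pairs; lower values indicate more diversity."""
    pairs = list(combinations(texts, 2))
    return sum(rouge_l_f1(a, b) for a, b in pairs) / len(pairs)
```

Applied per instruction over all associated preference descriptions, this is the kind of quantity that averaged to roughly 0.21 across dimensions in the rebuttal's diversity analysis.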
NeurIPS_2024_submissions_huggingface
2024
MIDGArD: Modular Interpretable Diffusion over Graphs for Articulated Designs
Accept (poster)
Summary: This paper presents a diffusion-based framework for generating articulated objects represented as part graphs. Different from prior work [35], which generates both the part shapes and graph structures simultaneously, this work introduces a two-stage strategy that first generates the structure with per-part conditions and then generates the part shapes. It also proposes a part latent encoding that enables multimodal conditions including text and images. Strengths: - The reparameterization of the Plucker coordinates is simple but more natural than that in [35], making the diffusion results better satisfy the constraints. - The designed node embedding with image and text features is not only a more detailed representation of part shapes, but also enables more flexible conditioned generation for different downstream applications. - The framework relaxes the constraints of training on canonical-posed shapes. This is an important relaxation for better usage of data. Weaknesses: - As also briefly discussed in the paper, generating the parts independently is a bit unnatural for this problem, especially given the importance of inter-part motions/relations in articulated objects. But overall I am OK with this point given all the node conditions in the framework. - Quantitatively, the improvements compared to [35] on unconditioned generation seem marginal. - The description "simulation-ready" sounds overclaimed to me. When talking about "simulation", I feel most people will expect physical viability and accuracy. But from the demonstration on the website, it seems that the parts are not well connected. There is also no explicit physics enforcement in the framework. Minor things: - I believe it would be better to swap the two sentences "This image latent is derived by" (line 148) and "...the structure generator denoises a latent representation of an image..." (line 150), so the readers know where the "image latent" in line 148 comes from. 
Technical Quality: 3 Clarity: 2 Questions for Authors: I am curious about some details: - Sec. 3.4 says "During training, the model uses renderings of the part mesh from a frontal view" (line 217). How is the frontal view chosen? For inference-time image-conditioned generation, do the images have to be in frontal view, or can they be of any view? Are the images in Suppl. Fig. 5 of "frontal view"? - I wonder how exactly the bounding boxes are represented and predicted. Because for articulated objects, the part poses change during part motions. [35] only uses the rest-state bounding boxes (together with the joint limits). In this work, is the rest-state bounding box viewed as having the identity pose matrix? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Limitations are well-discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Part-level generation** Part-level generation requires conditioning on the articulation graph, since the appearance of a part depends on its role within the graph. In future work, we aim to test further possibilities to condition the part generation on already existing parts, e.g. by framing it as a shape completion problem. However, we chose this approach since it also has several benefits: * As already noted by the reviewer, it allows for part-level control via text or images. * Learning to generate object parts is arguably simpler than learning to generate arbitrary objects. Generating a complex high-resolution object is still an open challenge in 3D generation, but generating basic components of objects such as wheels and boxes is feasible. **Unconditional generation** NAP is already a rather compact and powerful model, but the quality measurement metric (Instantiation Distance or ID) has limitations, as it mixes shape geometry with structure and motion evaluations: properties that should be measured separately. Since points are sampled randomly for the Chamfer Distance (CD), the metric ignores unrealistic states of small parts, such as the lid of the bottle in Fig 4. This is an issue that was also raised in CAGE. However, for a fair comparison to NAP's results, we used their evaluation protocol. We provide a qualitative comparison between NAP and MIDGArD in the attached PDF, Figure (B). To further quantify the improvement of our approach in terms of physical plausibility, we compute the distribution of joint types and compare the distribution among our generated objects with the ones from NAP and the real data. As shown in the main rebuttal section, our model learns to match the distribution well. Since we determine the joint ranges based on the type, this directly leads to improved motion plausibility. 
Finally, it is worth noting that one of our contributions is the higher level of control achieved with our approach, offering novel ways to guide the generation of articulated objects. Please refer to the attached PDF, specifically Figure (A) in the main rebuttal section. **"Simulation-ready" sounds overclaimed.** By "simulation-ready", we refer to our contribution of providing the possibility to export the result to MuJoCo, which is indeed a physically accurate simulation environment. Building a MuJoCo environment with multiple articulated assets will be straightforward with our codebase, which is why “simulation-ready” seemed to be a suitable term. Nevertheless, we do not want to neglect the huge effort involved with building full simulations, so we will weaken “simulation-ready” to “simulatable” in our revised manuscript. **It seems that the parts are not well connected** We believe the reviewer refers to the fact that screws and other connectors are not shown in the simulated videos. The connectors are omitted since they are not part of the ground-truth dataset. However, adding those would be straightforward based on the predicted edge features in the graph (joint type \& Plucker coordinates). The crucial part of the simulation, from our point of view, is the motion of the object parts, which is realistic in our videos. Please let us know if we have addressed your concerns or if any further clarifications would be useful. **There is no explicit physics enforcement.** We agree that there is no strict enforcement of physics in the framework; however, this is a rather difficult endeavor as it usually requires the interaction of the learning pipeline with a physics engine. Meanwhile, several components of our framework ensure physical plausibility: * Our bounding-box constrained generation approach (with subsequent part scaling and orientation) ensures that the parts fit within the object in a physically plausible way. 
* We enforce realistic kinematics by specifying the joint ranges based on the joint category. As Fig 4B shows, this works much better than the approach taken in NAP, i.e., predicting the joint range directly. **Frontal-view images** While we used a specific perspective for training and testing in the initial submission, we provide additional results here showing that our model generalizes to image inputs from other perspectives (see Fig. D in the main rebuttal section). The PartnetMobility dataset is already normalized such that all parts are shown from the front (if such asymmetry even exists). However, rendering with a camera angle of [0,0,0] is problematic since some objects are not recognizable from the front; e.g., a wheel may appear as a rectangle from the front. Therefore, our “frontal view” refers to a rendering from the axis angle $[\pi/6, \pi/12, 0]$, which usually shows the relevant part geometry. We've added this clarification in the manuscript. Due to the variety of parts in our training dataset, we hypothesize that our model should also generalize to images from other views. We tested this by conditioning the model on images from a randomly sampled perspective. **Are the images in Suppl. Fig. 5 of "frontal view"?** Yes, the images in Suppl. Fig. 5 are rendered with the same angle as the images used for training. **Bounding boxes** In the articulation graph, the part bounding box is represented by three components: 1) its center, 2) its size, and 3) its rotation. These three components are generated for each part in the structure generator. The shape generator, in turn, is conditioned on the desired size (2) and generates a centered object of the correct size. The rotation (3) is applied afterwards. Note that this approach allows generating parts in arbitrary orientations, in contrast to previous approaches (CAGE) that assume all parts have axis-aligned bounding boxes when the full object is in its resting state. 
--- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses. My concern about the evaluation and baseline comparisons is very well addressed. I will keep my positive rating. I would also encourage the authors to add these explanations (especially about the "frontal view") to the paper in their revisions. --- Reply to Comment 1.1.1: Comment: Thank you very much for your kind words and positive rating. We are glad to hear that your concerns about the evaluation and baseline comparisons have been well addressed. We will integrate this feedback to improve the clarity and comprehensiveness of our work.
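As background for the Plücker-coordinate discussion in this exchange: a 3D line is represented by a direction $d$ and moment $m$ subject to $|d| = 1$ and $d \cdot m = 0$, so an unconstrained 6-vector (e.g. a noisy sample during diffusion) can be re-projected onto the Plücker manifold. The sketch below is a generic illustration of that projection, under our own naming; it is not the paper's actual reparameterization:

```python
import math

def project_to_plucker(d, m):
    """Project an unconstrained 6-vector (d, m) onto the Plücker manifold:
    normalize the direction and remove the moment's component along it,
    enforcing |d| = 1 and d . m = 0."""
    norm = math.sqrt(sum(x * x for x in d))
    if norm == 0.0:
        raise ValueError("direction must be nonzero")
    d = [x / norm for x in d]                       # unit direction
    dot = sum(a * b for a, b in zip(d, m))
    m = [b - dot * a for a, b in zip(d, m)]          # moment orthogonal to d
    return d, m
```

Parameterizing or re-projecting on this manifold guarantees that every generated edge feature corresponds to a geometrically valid joint axis, rather than leaving the constraints to be satisfied only approximately by the network.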
Summary: This work tackles the problem of generating articulated 3D assets that are animatable. The authors mention that their generated shapes are directly compatible with existing physics simulation tools, i.e. MuJoCo. To this end, they first propose a structure generator, which conditionally or unconditionally generates an articulation graph encoding structural object information such as the kinematics attributes, i.e. the kinematic tree and joint types. Next, the multi-modal shape generator synthesizes a shape that follows the kinematic tree and size of the object while also allowing conditioning on multi-modal inputs such as text and images. Results demonstrate some improvement over the SOTA (NAP). Overall, I like the setting that the authors propose as well as their technical contribution to solve it. There are a few minor concerns about the presentation in terms of writing, clarity, and qualitative visualizations (see comments above). However, I believe those are minor. Thus, I recommend acceptance. Strengths: - Interesting setting with a lot of potential for future work. It seems only a very limited number of works (NAP) has explored this setting - Evaluation is to the best of my knowledge complete. - Well-written related work section and references seem to be complete - The paper is technically sound Weaknesses: - Writing: - I would limit the contribution bullets to the technical contribution of this work rather than focusing on results and open sourcing. - Clarity - I would introduce proper notations for all steps discussed in section 3.3 as I feel it is hard to understand the setting just from the textual descriptions, e.g. what is input, what is output. - I have a similar concern regarding section 3.4 - Video - I would have expected to also see some video results of the generated and animated objects. While this is not strictly required, it would have made the exposition much more complete. 
Technical Quality: 3 Clarity: 3 Questions for Authors: -- Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Limitations are sufficiently discussed in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive feedback that helps us improve the clarity and quality of our paper. **Writing (comment: I would limit the contribution bullets to the technical contribution of this work rather than focusing on results and open sourcing)**: Thank you for your suggestion. We fully agree and will replace bullet points 4-6 with the following technical contributions: * An approach for constrained shape generation within oriented bounding boxes that improves alignment of the kinematic links by 17\% on average. * A pipeline to create fully simulatable assets with an interface to MuJoCo **Notation and clarity (comment: I would introduce proper notations for all steps discussed in section 3.3 as I feel it is hard to understand the setting just from the textual descriptions, e.g. what is input, what is output. I have a similar concern regarding section 3.4)** Thank you for this suggestion to improve the clarity of the paper. We will rewrite this part to improve the description of the setting. Specifically, we will add that we train the model to learn the distribution of articulated object graphs by applying a denoising diffusion process: * Input: Noisy graph $\boldsymbol{\tilde{G}}_N = \left\lbrace \boldsymbol{\tilde{x}}, \boldsymbol{\tilde{e}} \right\rbrace$ with node and edge features as introduced in section 3.3. For conditional generation, the input is a partially noisy graph, where certain node or edge features are masked. 
* Output: Denoised articulation graph $\boldsymbol{G}_N = \left\lbrace \boldsymbol{x}, \boldsymbol{e} \right\rbrace$ Similarly, for section 3.4, we will add the following description: The aim of the shape generator is to learn the conditional probability distribution $P(z_i | a_i, b_i, d_i, r_i, t_i, g_i)$ for generating the latent representation $z_i$ of a part geometry's Signed Distance Function (SDF) using a diffusion model, conditioned on inputs $[a_i, b_i, d_i, r_i, t_i, g_i]$ as explained in section 3.3. This is achieved 1) by transforming $a_i$ and $b_i$ into a text description and encoding it with the BERT model, 2) by transforming the image (decoded $g_i$) via a ResNet, and 3) by applying constrained generation with $r_i$ and $t_i$ as explained in section 3.4 **Video (comment: I would have expected to also see some video results of the generated and animated objects. While this is not strictly required, it would have made the exposition much more complete.)**: We have provided videos in a repository here: https://anonymous.4open.science/r/MIDGArD-E1DE/README.md (see folder “gifs”). We apologize if the reference to this repository was unclear in the paper. Upon publication, we will provide a proper website with these examples in addition to the open-source code. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. After reading the rebuttal and other reviews, I am still convinced that this work is suitable for NeurIPS. Therefore, I keep my original rating. --- Reply to Comment 1.1.1: Comment: Thank you for your detailed response and positive evaluation; your support for our work's suitability for NeurIPS is greatly appreciated.
Summary: This paper proposes several interesting improvements over existing articulated object modeling and highlights higher-quality part generation. Specifically, an articulated object is parameterized as a graph where parts are nodes and joints are edges. This paper proposes a multi-modal part VAE for generating higher-quality and controllable shapes and improving physical awareness through a bounding box alignment. It also improves the kinematic structure generation by parameterizing directly on the Plücker manifold. With these technical contributions, the results show significant improvement over baselines, and the supplemental repo provides simulatable MuJoCo files, showing the practical value of this work. I recommend a clear acceptance of this paper. Strengths: - Good results, supp MuJoCo simulation: I really like the MuJoCo demo in the suppl repo, which is impressive. - Several technically solid and important improvements including the shape representation, Plücker manifold encoding, and bounding box alignment, etc. - Multimodal information and conditioning: I specifically like the text and image parts of the part shape encoding, which opens a lot of opportunities for conditioned generation and connection to LLMs. Weaknesses: - More conditioned generation results: while the part-level multimodal information is used in shape encoding, it would be better to show more conditioned generation results using this information. - Open source the part text and bounding box alignment data: it's not clear whether the shape part encoding training data will be publicly released or not. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weakness Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Adequately discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for highlighting the strengths of our approach. We are delighted that you find the MuJoCo simulations in the supplementary repository impressive and recognize the significance of our technical improvements, including shape representation, Plücker manifold encoding, and bounding box alignment. These were key motivations in our effort to advance the field of articulated object modeling. **Conditioned generation results: while the part-level multimodal information is used in shape encoding, it would be better to show more conditioned generation results using this information.** We acknowledge your request and have included additional conditioned generation results in the attached PDF, specifically Figure A ("Conditional generation"). This figure showcases our model's capability to generate consistent articulated assets based on supplied part features in the form of images and text. In addition to the experiment in Figure 5 of the original submission, this figure shows two settings: generating an articulated object solely based on image and text input, and generating the articulated object based on image, text, and bounding box input. The latter is comparable to the "Part2Motion" setup in NAP. However, in NAP one would have to provide the full geometry for each part, whereas we demonstrate the same capability based only on human-interpretable input features. Furthermore, we have extended our experiment on image-based conditioning (Figures 3 and 5 of the original manuscript) by another experiment using images from arbitrary viewpoints for conditioning (Figure D in the attached PDF). This shows the generalization capability of our approach. We hope these experiments match the reviewer's expectations, and we welcome further suggestions to improve our empirical results. **Open source: it's not clear whether the shape part encoding training data will be publicly released or not.** Thank you for your suggestions. 
We commit to open-sourcing the data (text, bounding boxes, etc.) and source code upon the paper's acceptance, facilitating reproducibility and further research in this area. This commitment aligns with our goal of promoting transparency and accessibility within the research community. --- Rebuttal Comment 1.1: Title: Keep my original positive score Comment: After reading the reviews and author response, the reviewer feels very good that the area of articulated object modeling is blooming. I believe this paper provides much more practical improvement over NAP. I keep my original score. --- Reply to Comment 1.1.1: Comment: Thank you for your encouraging feedback.
Summary: This work addresses the task of 3D asset generation for articulated objects. This work aims to enhance the prior approach by achieving three main objectives: 1) increasing the interpretability and controllability of the generation process; 2) generating more natural joint motions; and 3) reducing violations of physical constraints. For the first goal, they propose a framework, MIDGArD, that decomposes the generation into two sequential modules: 1) a structure generator that unconditionally generates a kinematic graph and attributes for each articulated part; 2) a shape generator to produce part SDFs that allows conditioning on the graph from the first module, an image, and a text description for each part. For the second goal, they introduce a representation for joint parameters that allows the diffusion process to operate on the Plucker manifold directly. For the third goal, they replace the AABBs with OBBs to bound the part shapes, which is claimed to achieve better part alignment. Strengths: - This work contributes to an increasingly important area and identifies three valuable perspectives for articulated object generation. - The proposed framework that separates the generation of the articulation structure and part geometry by using images as an intermediate proxy allows users to control the part shape more explicitly, compared with the prior work NAP [35], which requires modifying the latent code instead. - Both quantitative and qualitative evaluations are provided to demonstrate improved quality of data distribution modeling in the unconditional generation setting. Weaknesses: - The technical contribution is limited: - The network architecture of the structure generator is the same as NAP [35]. The contribution for this part is only the representation alteration for joint parameters and bounding boxes. These are reasonable changes for slightly better distribution modeling, but the insight is not particularly significant. 
- The shape generator is also adapted from an existing work SDFusion [7], where the benefits of detailed geometry modeling with TSDF and the flexibility of multimodal conditioning are just inherited from the original work. The only modification is an additional conditioning on a graph. However, how this graph condition affects the part geometry output and whether it is necessary remain unknown. Also, there is no strong/sufficient qualitative evidence to show a better geometry generation compared to NAP. - The experiments and evaluation are insufficient and unclear: - The side-by-side qualitative comparison with prior work (e.g. NAP [35], CAGE [43]) is missing to demonstrate the improvement in the claimed aspects. - Unfair comparison with NAP in the “reconstruction” setting in Table 1. According to the description in lines 280-282, NAP is only provided with graph topology and motion attributes for each node. In this case, there is no way for NAP to reconstruct the object with no shape information (it should have been given the latent code extracted from the ground truth part geometry). In contrast, MIDGArD is additionally provided with shape features extracted from the ground truth node images. - Lack of quantitative and qualitative results to support the argument of more “natural joint motion” and “less physical constraint violation” and how these improvements are correlated with the specific designs. Other ablation studies discussed in the manuscript are also generally hard to follow. - The related work section does not contextualize this work very well. Technical Quality: 1 Clarity: 2 Questions for Authors: - The argument in lines 47-48 “leveraging categorical embeddings for improving consistency across the graph” is unclear. What does consistency mean exactly? The ablation experiment to support this argument is missing. - Would it be possible to compare with CAGE [43] on certain aspects of the generation? 
E.g., in terms of the physical plausibility of objects at both the abstraction level (bounding boxes) and the mesh output. - The argument in line 33 “both NAP [35] and CAGE [43] provide no control on the part level due to their opaque encoding as a latent” is a bit misleading. What specific control on parts is missing from the prior work? Also, to correct the facts: the representation used in CAGE has no latent encoding for any attribute, while NAP encodes only the geometric feature as a latent and leaves the others in explicit form. - How robust and generalizable is the image control? Confidence: 5 Soundness: 1 Presentation: 2 Contribution: 1 Limitations: - Under the formulation of this work, the image control from users is not particularly practical: - An image of a single articulated part is not easy to find naturally. - The control is more of a post-editing. The user has to wait to see what object is generated and then find compatible part images to feed into the corresponding node in the graph. Otherwise, it won’t be matched with the articulation parameters and the part arrangement being generated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1.1** While our structure generator builds on NAP \cite{lei2023nap} and also applies a denoising diffusion process on the graph, the representation and generative process are fundamentally different: 1.) Our approach *modularizes* articulated asset generation. This fundamentally diverges from the approach taken in NAP, which aims to learn both shape and structure generation within one model. While both the shape generator and the structure generator build on previous work, they are substantially modified to enable their interplay for generating articulated assets. Only this modular approach enables fine-grained control on the object level (Table 1 and Fig. A in the attached PDF) and part level (Fig. 3, Fig. 5 appendix). 2.) The graph representation is different from NAP's representation. In the node features, the geometry encoding is removed, the image encoding and categorical variables are added, and bounding boxes are replaced by oriented bounding boxes. Only the part-existence indicator is the same. For edge features, the joint category is added and we directly denoise on the Plücker manifold. 3.) These changes enable more control and enhance intuitiveness for the user (see examples for conditional generation, Figure A in the attached PDF). **Weakness 1.2** We respectfully disagree. As outlined in Section 3.4, we do not merely supply the graph as another conditioning input, but also modify the SDFusion pipeline to generate only the *difference* to the bounding box (see Figure 2). Thus, the core innovation within the shape generation pipeline is the introduction of a bounding box constraint. To the best of our knowledge, this is novel. This constraint, combined with the use of oriented bounding boxes (OBBs), enables the generation of geometries that are more closely aligned with the articulation solutions proposed by the structure generator, as demonstrated in Table 2. 
The synergy between conditional graph generation and constrained 3D generation based on OBBs results in realistic articulated assets with appropriate dimensions, as shown in Fig. 4A. **Weakness 2.1** Per the reviewer's suggestions, we've added a side-by-side qualitative comparison with NAP (see general author rebuttal). Figure B in the attached PDF shows that our model 1) improves the quality of the geometry (see fan), 2) improves the physical plausibility (see laptop motion), and 3) generates more consistent and realistic shapes (see both globes). **Weakness 2.2** We acknowledge the concern raised regarding the fairness of the comparison and will clarify this in the revised manuscript. Our aim was to demonstrate the conditioning capabilities of our approach, which are not present in NAP. We do not see a feasible alternative that would ensure a fair comparison, but we are very open to suggestions for one that highlights MIDGArD's conditioning capability (in contrast to the lack of any text- or image-based conditioning in NAP) as one of the novelties and contributions of our work. Regarding the suggestion that NAP should have been given the latent code extracted from the ground truth part geometry: This would create an unfair advantage over our method, which relies solely on single-view images. Additionally, the practical applications where part geometry is readily available are quite limited; thus, the utility of such a capability is uncertain. Leaving this drawback aside, for a fair qualitative comparison one can consider the Part2Motion experiment in NAP (\cite{lei2023nap} Fig 5) as the counterpart to our results for text- and image-based conditional generation provided in the attached PDF Figure A. **Question 1** By "consistency across the graph," we refer to the semantic coherence of part relationships within the articulated structure. For instance, a "bottle" is unlikely to contain a "drawer". 
We found that adding categorical information to the graph improves the consistency. In MIDGArD-generated data, we found that our model generates 100% valid examples (see general author rebuttal). Qualitatively, this advantage of our approach is seen in the failure cases (attached PDF Fig. C). In NAP, the lack of categorical information leads to objects of mixed parts that do not fit together. In our approach, there may be issues with the geometry or kinematics, but the object parts remain consistent. **Question 2** A fair comparison with CAGE is not possible, because CAGE was designed for a very limited setting, i.e., (1) it provides shape geometry via \textit{part retrieval} instead of part generation; this method does not generalize and leads to failure cases where parts overlap and do not fit (see \cite{liu2023cage} Fig 10); (2) it assumes canonical poses of the objects and axis-aligned bounding boxes; thus, the experiments in CAGE are restricted to 8 categories from PartnetMobility; and (3) it requires actionable parts, e.g., doorknobs and handles, which were added by the authors of CAGE for only 8 categories. **Question 3** We have corrected and clarified the mentioned line. Indeed, for NAP we refer to the latent that encodes the geometry. For CAGE, it is rather the part-retrieval procedure that prevents control on this level. Our framework allows users to specify part-level details through images and text, e.g., requiring a lamp to consist of a "lamp stand" and a "lamp shade" and guiding the style of the lamp shade with an image. This control mechanism is not available in prior work. **Question 4** It generalizes. See general response for results. **Limitation 1** (1) One could leverage existing image segmentation pipelines to automatically extract relevant images from a single picture; (2) one image oftentimes suffices (e.g., sunglasses); and (3) we are planning to provide a library of part images to assist guidance. 
**Limitation 2** As shown in the attached PDF Figure A, the user can directly input images and text instead of post-editing. --- Rebuttal Comment 1.1: Comment: I appreciate the effort and detailed response provided in the rebuttal. At a high level, I agree with the authors that modularity is an effective strategy for enabling flexible control over part geometry. Although the current approach—providing images of each part rendered separately—feels somewhat unnatural, this work is trying to address an important problem and has the potential to inspire further exploration in this increasingly significant field. I appreciate the insights and the effort that went into this paper. However, I still have concerns that the experiments in the current state cannot fully support the central claims and effectiveness of the proposed components. - **About controllability**: I have concerns about the practicality and effectiveness of the proposed approach based on the limited results presented. - For image control: It would be more convincing to show qualitative results demonstrating how different image inputs affect variations in the final output while keeping other attributes fixed. Plus, the requirement of per-part rendering may limit the method's generalizability to real-world scenarios. While the authors suggest in the rebuttal that these images could be extracted from a single picture using image segmentation, there is no supporting evidence for this claim, making the viability of this approach uncertain. - For text control: The results provided do not sufficiently demonstrate cases where each part's geometry is edited using purely text-based inputs. The supplementary material only includes two examples of combined image and text control, with four additional examples in the rebuttal PDF. It remains unclear how the text input influences the process, and how diverse and compatible outputs can be generated with varying descriptions. 
- For graph control: there is no evidence presented on how graph-based control can be implemented or how effective and useful it might be. - **About representation** used in the structure generator: this work introduces several modifications to the representation used in NAP. However, I consider only two of them (using OBB and image latent) to be critical and well-supported contributions. Denoising joint parameters on the Plücker manifold is an interesting design, but the work lacks ablation studies or evidence to demonstrate its specific advantages. - **About the claim of “more natural joint motion” and “less physical constraint violation”**: Only three cases are presented for qualitative comparison with NAP—one in Fig. 3 of the main paper and the laptop and drawer examples in the rebuttal. This limited number of examples is insufficient to demonstrate improvement across a broader distribution, as it is unclear whether these cases were selectively chosen. - **Comparison with CAGE**: I disagree with the authors' assertion that this comparison is impossible. I believe it is a crucial comparison that should be conducted, at least qualitatively. - I understand that CAGE makes certain simplifications in its assumptions, which may work well for specific categories of objects. However, these categories represent only a subset of the objects considered in MIDGArD. It should be feasible to demonstrate certain aspects of the comparison for these categories. - On the geometry side, both NAP and CAGE can operate in a retrieval-based setting. My understanding is that it should be an alternative mode for MIDGArD, using an image as a retrieval proxy. In this context, it is important to compare both the retrieval and generation modes of MIDGArD with NAP and CAGE. The difference between generation and retrieval modes does not justify excluding such a comparison. Overall, I believe this is a valuable work in terms of its motivation and potential impact. 
Once the experiments are fully completed to substantiate all the contributions the authors claim, I would be inclined to support the acceptance of this paper. However, in its current form, I would suggest resubmission in a future round. --- Reply to Comment 1.1.1: Comment: We appreciate your detailed feedback and your recognition of our key contributions, which you have deemed "critical and well-supported." While we regret any oversight in our previous analysis, we would like to clarify that our response does indeed address most of the concerns you raised. It is possible that these points may not have been fully apparent, and we welcome this opportunity to elaborate further: - **Controllability - Image control:** The qualitative impact of varying image inputs while keeping other attributes fixed was precisely illustrated in Fig. 3 of the main paper as well as Fig. 5 of the supplementary material, and Fig. D of the rebuttal. These examples demonstrate the modularity and flexibility of our approach in controlling part geometries. We respectfully disagree with the claim that part-image conditioning is impractical, considering that (1) images are easier to handle than 3D representations, (2) our experiments show the shape generator works even without an input image condition (these results will be integrated into the revised version of the paper), and (3) modern segmentation methods can streamline the process (though this falls outside the scope of this paper). - **Controllability - Text control:** In addition to the text+image conditioning examples provided in our supplementary material, we have conducted further experiments where the shape generator is conditioned solely on textual descriptions. Our findings indicate that removing image conditioning does not degrade the quality of the generated objects. Variations in the text label (e.g., changing ''furniture'' to ''oven'') lead to corresponding changes in geometry. 
- **Controllability - Graph control:** We regret any confusion regarding graph conditioning. In the original manuscript, the term graph conditioning refers to the possibility of using any feature of the graph object produced by the structure generator as an additional conditioning mechanism for the shape generation process. During our experiments, we only used the node bounding box features, as stated in lines 332-333 of the article. Combined with our bounding box prior, this design choice is a core innovation that significantly improves the physical plausibility of generated objects, as evidenced by the results in Table 2. - **Representation:** Regarding denoising on the Plücker manifold: we only claim that it "enhances interpretability" (line 201 in the manuscript) and "eliminating the necessity for iterative projections" (line 212). These claims are supported *by definition*, since our method indeed eliminates the need for post-processing and makes the output directly interpretable as Plücker coordinates (see methods section). These benefits are inherent to the design and are thus supported by the methodology itself, requiring no additional empirical validation. As part of our ablation study, we evaluated the ID metrics—MMD, COV, and 1-NNA—comparing unconditional generation scenarios with and without Plücker manifold parameterization. The results are presented in the table below:

| Metric \ Method | Ours no-manifold | Ours + manifold | NAP |
|-----|-----|-----|-----|
| 1-NNA | 0.6221 | **0.5831** | **0.5831** |
| MMD | 0.0270 | **0.0264** | 0.0282 |
| COV | 0.4779 | **0.4857** | 0.4675 |

These results suggest improvements in terms of coverage and MMD, supporting our claim that diffusion over the Plücker manifold improves asset consistency. - **About the claim of ''more natural joint motion'' and ''less physical constraint violation'':** This was covered by experiments 3) and 4) in our general Author Rebuttal above, which provide *quantitative* evidence for these points. 
- **Comparison with CAGE:** We agree that a comparison with CAGE on a limited set of categories and with geometry retrieval could be feasible; however, our modular approach is specifically designed to enable fine-grained control and flexibility in part generation, which contrasts with CAGE’s retrieval-based method. Implementing a retrieval-based mode within our framework would necessitate significant modifications that would undermine one of our method’s key strengths—its ability to generate parts from scratch using multimodal inputs. We believe that such a comparison would not fairly represent the advantages of our method in terms of generalizability and user control. For the sake of transparency, we will explain in the revision why we did not compare with CAGE directly. While we are currently unable to include additional figures due to rebuttal constraints, we will incorporate the results into the revised manuscript. We hope this response addresses your concerns and clarifies the contributions of our work. We are committed to further improving our manuscript based on your valuable feedback and are confident that the additional evidence we plan to include will more clearly demonstrate the effectiveness and potential impact of our approach. Thank you again for your constructive feedback.
Rebuttal 1: Rebuttal: We would like to thank the editors and reviewers for the time they devoted to reviewing our paper and for their valuable feedback and constructive criticism. We have endeavored to address every suggestion and additional comment to the best of our abilities. Below, we provide a summary of our approach to the review: * **Additional Experiments and Comparative Analysis:** We provided additional experiment results, such as (1) a qualitative and quantitative side-by-side comparison to NAP [37], (2) additional conditional generation results, and (3) a quantitative analysis of the plausibility and consistency of the generated assets (see below). * **Clarifications and Revisions:** We have improved the clarity of our method and results thanks to the reviewers' thoughtful feedback, which we have thoroughly incorporated. Detailed descriptions of the data processing in the structure and shape generator are provided, along with empirical results highlighting conditional generation capability and qualitative improvements. * **Related Work:** We have revised the related work section to better contextualize our contributions within the existing literature and emphasize the novelty and impact of our work. We hope these revisions and clarifications address the reviewers' concerns and enhance the understanding of our contributions. We remain committed to further improving our manuscript and welcome any additional feedback. Best regards, Authors of the submitted paper **------------------- Additional experiments------------------** **1) Conditional generation capability** We provide additional results demonstrating the conditional generation capability of our approach in a "PartToMotion" setup where the model is provided with part features only (i.e. no joint data) and outputs consistent articulated assets (see attached PDF - Figure "conditional generation"). 
In contrast to NAP, our model can be guided with image and text input instead of requiring full geometries. **2) Side-by-side comparison** Figure B in the attached PDF provides a side-by-side comparison between our approach and NAP. Since the graphs generated by NAP do not include categorical information, we manually went through the generated data of NAP and MIDGArD and selected pairs of assets with the most similar appearance. Figure B shows that our approach 1) improves the quality of the geometry (see the fan and holes in drawer-meshes), 2) improves the physical plausibility (see laptop motion), and 3) generates more consistent and realistic shapes (see both globes). **3) Physical plausibility** We provide further results supporting the improvement in physical plausibility achieved with our method. In Figure 4, we have already shown qualitative examples where NAP yields unrealistic joint ranges. Our method alleviates such failure cases by introducing categorical joint labels in the articulation graph. Here, we support this observation with a quantitative analysis of the generated joint types. Specifically, we compare the distribution of joint types in the training data with those in samples generated by NAP and MIDGArD (400 samples each). To minimize the impact of objects having many joints of the same type, we count each joint type only once per object.

| | Screw | Revolute | Prismatic |
|---------------------|-------|----------|-----------|
| Training data | 0.06 | 0.62 | 0.32 |
| NAP-generated | 0.95 | 0.01 | 0.04 |
| MIDGArD-generated | 0.02 | 0.62 | 0.36 |

The results indicate that NAP predominantly produces screw joints, despite their low occurrence in the training data. Conversely, the objects generated by MIDGArD exhibit a joint type distribution similar to that of the training data, thereby enhancing the physical plausibility of the generated data. 
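For reference, the reported chi-square statistics can be recovered directly from the proportions in the table above; a minimal sketch in plain Python (assuming per-method counts are obtained as proportion × 400 samples):

```python
# Sanity check for the reported chi-square statistics: recover joint-type
# counts from the table proportions (400 generated samples per method) and
# compare each method against the training-data distribution.

def chi_square(observed, expected):
    """Pearson chi-square statistic over matched count lists."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

n = 400                                  # generated samples per method
train_props = [0.06, 0.62, 0.32]         # Screw, Revolute, Prismatic
expected = [p * n for p in train_props]

nap_counts = [0.95 * n, 0.01 * n, 0.04 * n]
midgard_counts = [0.02 * n, 0.62 * n, 0.36 * n]

print(chi_square(nap_counts, expected))      # ≈ 5618.7
print(chi_square(midgard_counts, expected))  # ≈ 12.7
```

The two printed values reproduce the $\chi^2$ statistics quoted in the rebuttal, up to rounding of the table proportions.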
The Chi-Square statistic, which measures how much the observed counts deviate from the expected counts, confirms the large difference with $\chi^2(\text{NAP}) = 5618$, whereas $\chi^2(\text{MIDGArD})=12.7$. NAP's high $\chi^2$ value indicates that there is a large difference between the observed and expected counts, whereas MIDGArD's low $\chi^2$ indicates that the observed data are close to the expected data. More intuitively, the maximal misalignment of our method in the three joint categories is 4%, compared to a maximal mismatch of 89% for NAP. **4) Consistency of parts within the object** By "consistency" we refer to the semantic coherence of part relationships within the articulated structure. For instance, a "bottle" asset is unlikely to contain a "drawer" body. We found that our approach of adding categorical information to the graph improves the consistency. Unfortunately, it is not possible to measure the consistency within NAP-generated objects due to the lack of categorical information. In MIDGArD-generated data, we can compute whether part-types (e.g., "leg") occur in conjunction with a fitting asset type (e.g., "table"), or whether they are mixed (e.g., a "leg" being part of a "laptop"). Assuming that every part-asset-type combination occurring in the training dataset is valid, our model generates 100% valid examples. In other words, none of the generated graphs is a mixture of parts that typically belong to different asset types. Qualitatively, this advantage of our approach is visible in the failure cases (see attached PDF - Figure C). In NAP, the lack of categorical information leads to objects of mixed parts that do not fit together. In our approach, there may be issues with the geometry or kinematics, but the object parts remain consistent. **5) Generalisation to images from various perspectives**: We show in the attached PDF - Figure D that our conditional part-generation generalizes to images from different viewpoints. 
These results will be included in the revised manuscript. Pdf: /pdf/101a3fb1324c798bb2a76ebdbbe19281bcd9b931.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Decision-Making Behavior Evaluation Framework for LLMs under Uncertain Context
Accept (poster)
Summary: This paper proposes a comprehensive framework to evaluate LLM's decision-making behavior under uncertain contexts. By leveraging computational models from behavioral economics, as well as the experiment paradigms, the authors investigated the risk preferences, probability weighting, and loss aversion of LLMs from the parameters in the models. Finally, by injecting demographic information into LLMs, the authors found significant differences in those parameters measured from behaviors, suggesting potential biases from demographic information (e.g., age, education, or gender) on decision-making tendencies. Strengths: - This paper advances the evaluation framework of LLM's decision-making behavior under uncertainty, borrowing the models from behavioral economic theories to quantitatively model the behavior of LLMs. By leveraging risk preference, probability weighting and loss aversion, the authors have a deeper investigation on risky decision-making behaviors. - This paper also investigates demographic influences on the LLMs' behaviors, suggesting potential biases in generating behaviors under different demographic settings. Weaknesses: - __Lack of human data__. As the authors have mentioned in the limitation section, it would be fun to investigate the LLM vs. Human behavior at the parameter level of computational models. These parameter differences might explain why humans and LLMs are different or similar on a deeper level. - __Lack of model selections__. The authors ONLY use one model (TCN model) to evaluate the LLM's behaviors. In computational cognitive sciences and psychology (Wilson, Collins, 2019), it is common to propose multiple cognitive models to test which model might best explain the behaviors of humans (and of machines). 
The authors could strengthen the model evidence by proposing multiple competing models, comparing them, showing goodness of fit, and possibly running simulation and parameter recovery to show the robustness of the currently selected model. There might be better models that explain the behaviors, in which case the attribution of behaviors may change. - __More investigations on LLMs are recommended__. The authors list investigating multiple LLMs in this framework as one of the contributions. The selected LLMs are the best-known commercial ones (GPT-4, Claude, and Gemini). These model comparisons make some sense, but two of the models are not open-source and their training data are not transparent. Comparing LLMs with a more transparent basis would strengthen the contribution of the paper. For example, the authors could investigate the relationship between model size and the parameters measured from the behaviors. Choosing models within the same family and trained on similar datasets would make comparisons more informative. Examining prompt engineering, the style of instruction tuning, or the model structure (e.g., encoder-decoder vs. decoder-only) could also make this framework more informative. Technical Quality: 4 Clarity: 4 Questions for Authors: - One interesting finding from the paper is that human studies may not find significant differences in risk preferences in sexual-minority groups, but LLMs do respond differently under these settings. There might be two completely different reasons: first, the authors may not have full knowledge of the literature in that field, and the paper cited may be the only study reporting non-significant results. The other reason would be intriguing: studies on this specific topic are limited, but humans generally exhibit such effects, which are overlooked in empirical studies. 
However, LLMs are more likely to learn such effects from massive datasets and thus exhibit them. So there might be a reverse approach: understanding human behaviors (especially in overlooked fields) from LLMs' behavior. The authors are not required to formally give an answer to this question, but I find such potential an interesting topic for public discussion. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: As indicated above, the major limitation of the paper is that it does not provide deeper insight into human cognitive or LLM mechanistic findings, though this framework is valuable and more comprehensive than previous work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thoughtful review and the detailed feedback provided. Below, we address each of the concerns and questions raised in the weaknesses section. 1. Lack of Human Data: We agree that including more human data would significantly enhance our study. The bottom row in Table 6 shows human parameter data retrieved from one of the original studies that applied the same evaluation model to human subjects. We acknowledge that we did not emphasize human data in this work due to the complexity and variability inherent in human studies. Human data are influenced by numerous factors, including cultural, socioeconomic, and environmental contexts. Additionally, existing human studies often do not represent the entirety of the human population. For instance, some studies might use college students as samples, while others use rural households, each occurring in different countries and involving people with diverse backgrounds. Integrating such diverse datasets requires the involvement of sociological and economic theories capable of accounting for these variations and ensuring that the resulting analysis is both representative and meaningful. Moreover, ethical considerations are crucial, especially when studying minority groups. We are currently collaborating with behavioral economics experts on ongoing research to compare LLM and human data in an apples-to-apples manner. We hope to complete this work in the near future and believe it will add significant value to our study. 2. Lack of Model Selection: For model selection, we conducted a comprehensive literature review. In the domain of decision-making behavior under risky contexts, the foundational theories are Expected Utility Theory (EUT) and Prospect Theory (PT). Various models based on these frameworks have been developed for human subjects. While previous studies have used Prospect Theory models, we discussed their limitations in our paper, particularly the issue of pre-assumptions. 
The TCN model combines elements of both EUT and PT without relying on these pre-assumptions, making it particularly suitable for testing LLMs. This was the primary reason for our adoption of this model. Although our current paper is limited by space constraints, we see great value in your suggestions and will include results for additional models in the final manuscript if accepted, and will consider them in our future work. We are actively developing new models that may better suit the evaluation of LLM decision-making processes. If our paper is accepted, we will also include a section on future work to discuss these potential directions, integrating more evaluation models and data to enhance the robustness and utility of our framework. 3. Lack of LLM Variety: We consider your suggestions regarding this truly valuable. The ideas of investigating the relationship between model size and behavior parameters, comparing models within the same family and trained on similar datasets, and examining factors like prompt engineering and model structure are all meaningful and important research questions to be explored. While the primary goal of this study is to propose a framework for evaluating decision-making behavior in LLMs, rather than to exhaustively test all models, we are excited to incorporate these directions in future work to enhance the robustness and informativeness of our framework. 4. Response to the Question: Thank you for your interest in this finding regarding the differences in risk preferences between LLMs and human studies. This approach of trying to understand human behaviors from LLM behaviors could indeed be intriguing and valuable for public discussion. We touched on this topic slightly in our discussions and did our best to review relevant literature. Studying minority groups is always challenging in the social sciences, given contextual constraints and biases such as sample-size limitations and societal prejudices. 
If it is evident that LLMs have the ability to serve as a supplemental tool, they could help overcome some of these challenges by identifying patterns and preferences that are difficult to detect in traditional studies. Our work aims to serve as an evaluation framework for both researchers and users to at least reveal some information and subsequently infer human characteristics. Leveraging the vast datasets behind LLMs, we hope users can gain a deeper understanding of human behaviors, particularly in under-researched areas, and enrich the field of social sciences. Thank you again for your kind review. We hope our work can be accepted and become a useful evaluation tool for all end-users and researchers. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. I think the comments addressed my concerns well and engaged positively in the discussion. My original evaluation is to have this paper presented at NeurIPS, and thus I will maintain my current evaluation, given the constraint of the overall scope. However, I do think this paper has a lot of interesting points and may inspire more valuable research in the future. --- Reply to Comment 1.1.1: Comment: We appreciate your positive evaluation and are glad our rebuttal addressed your concerns. Thank you for your thoughtful feedback and support. We look forward to contributing to further research in this area.
Summary: This paper presents a novel framework for evaluating the decision-making behavior of large language models (LLMs) under uncertain contexts, grounded in behavioral economic theories. The authors conducted experiments to estimate risk preference, probability weighting, and loss aversion for three commercial LLMs: ChatGPT-4.0-Turbo, Claude-3-Opus, and Gemini-1.0-pro. The study further explores the impact of embedding socio-demographic features on the decision-making process of these models, revealing significant variations and potential biases. The paper concludes with a call for the development of standards and guidelines to ensure ethical and fair decision-making by LLMs. Strengths: 1. Introduces a comprehensive framework for evaluating LLMs' decision-making behavior, which is the first application of behavioral economics to LLMs without preset behavioral tendencies. 2. Provides empirical evidence of LLMs' tendencies towards risk aversion and loss aversion, with a nuanced approach to probability weighting, offering insights into their alignment with or divergence from human behavior. 3. Conducts further experiments with socio-demographic feature embeddings, uncovering disparities in decision-making across various demographic characteristics, which is crucial for understanding potential biases in LLMs. Weaknesses: 1. The paper does not provide a detailed analysis of the potential causes behind the observed variations in LLM behavior when demographic features are introduced, which could be crucial for understanding and mitigating biases. 2. The study's focus on three commercial LLMs may limit the applicability of the findings to some open source LLMs like LLaMA. 3. 
The authors should discuss references on chain-of-thought (CoT) prompting and reasoning in LLMs, such as "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" (https://arxiv.org/abs/2201.11903), "Automatic Chain of Thought Prompting in Large Language Models" (https://arxiv.org/abs/2210.03493), and "Keypoint-based Progressive Chain-of-Thought Distillation for LLMs" (https://arxiv.org/abs/2405.16064). These works could provide additional context and depth to the understanding of LLM decision-making processes. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the Weakness. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thorough review and insightful comments. Below, we address each of the questions raised in the weaknesses section. 1. Regarding the Analysis of Potential Causes for Observed Variations: As mentioned in our discussion section (lines 308-313), we have identified several potential reasons for the observed discrepancies between human and LLM behaviors from ethical and sociological perspectives: (1) Flaws in Human Studies: Studies on minority groups often face privacy challenges in obtaining representative samples, leading to potential biases and flawed understandings. (2) LLM Misunderstanding: LLMs might possess biases from data and training, which could affect their decision-making behavior. We focused our discussion on the data and training perspective because we believe that whichever algorithm an LLM uses to update its parameters will inevitably introduce bias as long as the training data is not "perfectly just". We suggested that if LLMs could reflect real-world trends more accurately with broader, cleaner data and careful training, they might be valuable for social studies, but further research is needed to address human-LLM discrepancies. While exploring the algorithmic details of model architecture or training datasets is beyond the current paper's scope, we recognize the importance of algorithm-level analysis and will add to our discussion section a recommendation for future research to study the underlying model architectures and training data. Recent algorithm-level studies, such as Meade et al. (2021), Bender et al. (2020), Zhang et al. (2024), Liu et al. (2024), and Guo et al. (2022), as well as the papers related to Chain-of-Thought (CoT) prompting (Wei (2022), Zhang (2022), Feng (2024)), provide foundational insights into addressing biases in LLMs.
These works suggest techniques like adversarial debiasing, data augmentation, and fairness constraints, which could be explored in future studies to mitigate biases in LLMs. Our aim is to provide an evaluation tool for the behavioral aspects of LLMs in specific contexts, and we hope that our work provides a robust foundation for such evaluations, facilitating a deeper understanding and mitigation of biases in LLM decision-making behaviors. Understanding the exact causes of LLM behavior remains a complex question; it also cannot be done without evaluating a multitude of models with a foundational framework like ours. 2. Regarding the Inclusion of Commercial LLMs: Thank you for your suggestion to include more models, especially open-source models; we also believe this is important work for this research area. In future work, we have carefully planned to include more models and to test various dimensions such as architecture, model size, and training-dataset differences. We specifically chose to evaluate these three commercial LLMs due to their uniqueness and closed-source nature. This choice underscores the need for a reliable and stable evaluation framework to ensure these models behave as intended by their developers. Our framework serves as the first step toward this goal. Nevertheless, comparing LLMs with more transparent/open-sourced bases would indeed strengthen the contribution of our paper. For example, investigating the relationship between model size and behavior parameters, or examining prompt engineering and instruction-tuning techniques, could provide more detailed insights. While these extensions are beyond the scope of our current study, they represent valuable directions for future research. 3. Regarding References to Chain-of-Thought Prompting and Reasoning: Thank you for the suggestions.
We will incorporate discussions of the related works on CoT prompting and reasoning in our revised manuscript if accepted. These references could indeed provide additional context and depth to our understanding of LLM decision-making processes and would make our work more comprehensive. References: • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2020). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). • Guo, Y., Yang, Y., & Abbasi, A. (2022). Auto-Debias: Debiasing Masked Language Models with Automated Biased Prompts. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). • Meade, N., Poole-Dayan, E., & Reddy, S. (2021). An empirical survey of the effectiveness of debiasing techniques for pre-trained language models. arXiv preprint arXiv:2110.08527. • Liu, T., et al. (2024). LIDAO: Towards Limited Interventions for Debiasing (Large) Language Models. arXiv preprint arXiv:2406.00548. • Zhang, Y. F., et al. (2024). Debiasing large visual language models. arXiv preprint arXiv:2403.05262. • Wei, Jason, et al. (2022). "Chain-of-thought prompting elicits reasoning in large language models." Advances in neural information processing systems 35: 24824-24837. • Zhang, Zhuosheng, et al. (2022). "Automatic chain of thought prompting in large language models." arXiv preprint arXiv:2210.03493. • Feng, Kaituo, et al. (2024). "Keypoint-based Progressive Chain-of-Thought Distillation for LLMs." arXiv preprint arXiv:2405.16064 .
Summary: In this paper, the authors model the decision-making behavior of certain open-source LLMs on lottery selection tasks. They further extend this work by repeating a similar analysis after priming the models with demographic information. Results indicate that the different LLMs differ in their risk profiles as inferred by the parameters of the evaluative model. Some prominent differences also emerge on demographic-primed LLMs. The overarching theme that comes through is the need for caution when using LLMs in practice, especially as an analogue to human decision-making. **Conclusion:** Overall, while the paper considers an interesting problem, the results don’t appear to yield any general insights, and the methodology seems to make assumptions that may be difficult to justify. The paper can still be an alright addition to the conference since it’s timely and touches on an important problem. Strengths: 1. It’s clearly written, provides ample context and the ideas are developed in a streamlined fashion. 2. The problem considered is pertinent and the approach seems to be well motivated by a long history of modeling human behavior. 3. The analysis makes sense — studying the distribution of pertinent parameters conditioned on different agents, and demographics to derive insights about the underlying behavior. Weaknesses: 1. It’s unclear whether the TCN model is actually apt in this setting. For example, it seems to assume deterministic values of parameters like sigma, alpha and lambda, and then tries to infer them based on the LLMs decisions. However, what if these parameters are themselves random variables? Does it make sense to assume the LLM has a fixed risk profile across trials? As an analogy, assuming the coin has a fixed bias p, and estimating it via repeated tossing experiments when p is sampled from a uniform distribution each trial. 2. The utility function was a little unclear to me (line 135). Why does it not cover the entire (x,y) domain? 
w(p) simplifies to 1/p for alpha = 1. Should it be p? As a tangentially related point, consider numbering the equations. 3. In Step 4 (section 4), can we be certain the estimation algorithm actually converges? This relates to 1, so that the convergence might not be guaranteed if the latent parameters are not fixed. 4. While the qualitative results are sensible — due diligence is needed before LLMs can be entirely trustworthy, the quantitative results don’t impart any deep or actionable insights. Observations like a given version of a pre-trained model tends to show risk aversion doesn’t quite generalize. Technical Quality: 3 Clarity: 4 Questions for Authors: NA Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your careful review and valuable feedback. Your comments are very insightful and have provided us with the opportunity to clarify our work. Before addressing the specific questions, we would like to emphasize that we recognize that LLMs are machines, not humans. However, by considering the set of queries of LLMs as analogous to a population of humans, we can begin to evaluate their behavior by adapting well-established models tested by human experiments. This approach represents the initial phase of our studies, laying the groundwork and finally moving towards the development of models specifically for LLMs. 1. Regarding the Aptness of the TCN Model: The TCN model referenced in the paper was originally designed to evaluate human decision-making behavior, assuming each individual has fixed parameters (sigma, alpha, lambda). In human studies, these parameters are considered deterministic for each individual; while across a population, they are treated as random variables due to inter-individual variability. Our work treats LLMs as analogous to human populations, where each LLM exhibits overall tendencies. When applied to LLMs, each query to a specific LLM can be viewed as one interaction with a population of subjects. These subjects in the population share one distribution of the parameters. For each interaction, we assume fixed parameters specific to that interaction. This assumption applies to each particular interaction rather than the entire set of interactions or “population”. Therefore, while these parameters are fixed for a particular interaction, they may vary across different interactions, akin to different individuals in a population study. By conducting multiple rounds of experiments (e.g., 300 times), we aim to capture the population’s overall tendencies. 
The mean and standard deviation of these parameters across interactions provide an estimation of the general tendencies and variability of the LLMs’ behaviors. Since the study of LLMs' decision-making behavior is still in its infancy, there is a lack of literature addressing the granularity of such studies and whether LLM behavior should be modeled with fixed biases or as random variables; recent studies using simpler models have not settled this question either, as discussed in lines 105-112 of our paper. We also recognize the need for more comprehensive modeling, potentially involving hierarchical models. We have submitted the raw data and plan to release it, enabling other researchers to further explore the possibility of other distributions and refine models to better understand LLM behavior. 2. Regarding the Utility Function: (1) Domain of (x,y): The utility function in fact covers all possible cases of (x, y); it determines which value should be x or y based on their signs and magnitudes, ensuring that all combinations of positive and negative values are appropriately handled. The detailed logic is as follows. Denote the two outcomes O1 and O2 with abs(O1) > abs(O2): if O1*O2 > 0, then x = O1 and y = O2; if O1*O2 < 0, then x = min(O1, O2) and y = max(O1, O2). Our utility function is adapted from the framework in the TCN work, which has been extensively validated with human subjects. This adaptation ensures our methodology is grounded in rigorously tested principles of behavioral economics. (2) Typo in the Utility Function: You are correct; upon cross-checking our mathematical derivation and code script (available in the supplementary materials), we found a typo in the equation between lines 135 and 136, where we missed a “-” in w(p). This typo led to confusion regarding the utility function. We apologize for this error and appreciate you pointing it out. We will correct the associated mathematical work throughout. 
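A minimal Python sketch of the (x, y) case logic described in the rebuttal above (illustrative only; the helper name `assign_xy` is ours, not from the authors' actual code):

```python
def assign_xy(a, b):
    """Assign the (x, y) roles of two lottery outcomes per the stated rule:
    order by magnitude first; if the signs match, x is the larger-magnitude
    outcome; if the signs differ, x is the loss and y is the gain."""
    o1, o2 = (a, b) if abs(a) >= abs(b) else (b, a)  # ensure abs(o1) >= abs(o2)
    if o1 * o2 > 0:                      # same sign: x = larger-magnitude outcome
        return o1, o2
    return min(o1, o2), max(o1, o2)      # mixed signs: x = loss, y = gain
```

For example, outcomes (10, -5) map to x = -5 and y = 10, while (-10, -5) map to x = -10 and y = -5.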
Please note that our code and the results presented in the study are based on the correct form of the utility function, so the typo in the manuscript did not affect our computational results. (3) Numbering Equations: Thank you for suggesting numbering the equations. We will incorporate this into our revised manuscript to enhance clarity. 3. Regarding the Convergence of the Estimation Algorithm in Step 4: Each LLM interaction is treated as an independent trial with fixed parameters, as discussed in our response to the first question. In this setup, the parameters are treated as constants within one particular interaction but vary across different interactions, similar to how characteristics vary among individuals in human population studies. Given this setup, the convergence of the estimation algorithm in Step 4 is facilitated by systematically narrowing the intervals for each parameter through the iterative solution of groups of inequalities. We have three groups of inequalities and three parameters, which allows us to obtain an interval for each of the three parameters. Iteratively solving for the intervals ensures that the algorithm converges: the estimates are gradually refined until the intervals are sufficiently narrow, so that our parameter estimates are robust and reliable across different experimental iterations. 4. Regarding the Depth of Quantitative and Qualitative Insights: In our work, the quantitative estimation of parameters, followed by regressions to determine the correlations between embedded demographic features and decision-making behavior, provides the foundation for claiming qualitative observations of a given version of a model. The observations are specific to the models, but the method of drawing qualitative observations from the quantitative parameters is generalizable. Nevertheless, we acknowledge that finer insights from quantitative results will be beneficial to obtain more detailed qualitative observations. 
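As a toy illustration of the interval-narrowing idea described in the rebuttal above (a hypothetical one-parameter version; the actual algorithm jointly solves three groups of inequalities over sigma, alpha, and lambda):

```python
def narrow_interval(observations, lo=0.0, hi=2.0):
    """Each observation is read as an inequality on a single parameter:
    (threshold, True) means parameter < threshold, and (threshold, False)
    means parameter >= threshold. The interval [lo, hi] shrinks
    monotonically as inequalities accumulate."""
    for threshold, below in observations:
        if below:
            hi = min(hi, threshold)   # upper bound can only decrease
        else:
            lo = max(lo, threshold)   # lower bound can only increase
        if lo > hi:
            raise ValueError("inconsistent observations")
    return lo, hi
```

Because `lo` never decreases and `hi` never increases, the interval width is non-increasing, so with informative observations the estimate converges to a narrow range.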
We plan to enhance our analysis granularity in our future work. --- Rebuttal Comment 1.1: Title: Update after authors' rebuttal Comment: I appreciate the authors taking the time to offer clarification and answer a couple of the questions I had. I agree that even though the authors only consider a few popular models, their methodology generalizes even if the specific findings around a given LLM may not. I continue to believe that this is a nice interdisciplinary paper, and I am maintaining my acceptance decision along with the score. --- Rebuttal 2: Comment: Dear Reviewer, Thank you for your positive feedback and for recognizing the generalizability of our methodology. We appreciate your support and are glad that our clarifications were helpful. Thank you for maintaining your acceptance decision!
Summary: This paper proposes a framework to evaluate the decision-making behavior of LLMs. It focuses on ChatGPT4Turbo, Claude3Opus and Gemini1pro. The results show that LLMs exhibit patterns similar to humans. Impacts of socio-demographic features are also analyzed and the results show that different LLMs can vary from each other. Strengths: - The problem in consideration is important. The findings appear interesting. Weaknesses: - While the problem is interesting, the technical contribution of the paper seems limited. The measures chosen are also not fully justified. It would be important to highlight the novelty of the results. - Following the above comment, the examination does not appear comprehensive. The reviewer worries that the questions may only cover a small set of problems. As a result, the conclusions drawn might contain significant randomness. Technical Quality: 2 Clarity: 2 Questions for Authors: - The problem in consideration is important. The findings appear interesting. - While the problem is interesting, the technical contribution of the paper seems limited. The measures chosen are also not fully justified. It would be important to highlight the novelty of the results. - Following the above comment, the examination does not appear comprehensive. The reviewer worries that the questions may only cover a small set of problems. As a result, the conclusions drawn might contain significant randomness. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please see the above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the time and effort you have taken to review our paper and provide constructive feedback. Below, we address each of your comments and concerns: Clarification of Summary: Our paper presents a framework for evaluating the decision-making behavior of LLMs under uncertainty grounded in behavioral economics theories. The three models we evaluated—ChatGPT4Turbo, Claude3Opus, and Gemini1pro—serve as examples of the framework's application on commercial LLMs. To clarify, our results do not simply show that LLMs exhibit human-like patterns; they reveal significant variations across LLMs and from humans in the three-dimensional parameters we evaluated. Furthermore, our framework uncovers ethical implications and biases in LLMs, such as increased risk aversion for attributes of sexual minority groups or physical disabilities in Claude3Opus. Response to Weaknesses and Questions: Our work provides a novel and comprehensive approach to evaluating LLMs' decision-making behavior. We also believe the measures we chose are justified, as they align with current practices in evaluating human decision-making behaviors. Highlighting the novelty of our results, our framework uncovers biases and ethical implications that were previously unexplored in LLM evaluations. Specifically: (1) About Technical Contribution: We consider our technical contributions to be multifaceted: 1. This is the first work to propose a framework for evaluating LLM decision-making behavior under economic uncertainty without pre-required assumptions. This framework provides a standardized, valid method for future research. 2. Our technical contribution includes a comprehensive analysis of existing models' mathematical pre-assumptions, ensuring our chosen model is the most appropriate for creating this baseline framework. 3. We conduct an evaluation of three state-of-the-art commercial LLM models: ChatGPT4Turbo, Claude3Opus, and Gemini1pro. 
This evaluation demonstrates the versatility and applicability of our framework across different leading models. We also analyze how socio-demographic characteristics influence LLM decision-making compared to humans, revealing significant insights into biases and ethical implications. To summarize, inventing a new model for LLMs is premature without first establishing a reliable baseline framework. While the development of new models for LLMs is ongoing (and we are also involved in it), without a baseline like the one in this paper, these new models would lack foundation and comparability. (2) About Measurement Justification: For measurement selection, as discussed in the related work section (especially lines 102-116), we conducted a comprehensive literature review before deciding to use the TCN model. In the domain of decision-making behavior under risky contexts, foundational theories are Expected Utility Theory (EUT) and Prospect Theory (PT). Various models based on these frameworks have been developed for human subjects. While previous studies have used PT models, we discussed their limitations in our paper, particularly the issue of pre-assumptions. The TCN model combines elements of both EUT and PT without relying on these pre-assumptions, making it particularly suitable for testing LLMs. This was the primary reason for our adoption of this model. (3) About Scope of Examination: The study of decision-making behavior is a substantial and significant area in both behavioral economics and psychology. It encompasses a wide range of applications, from financial decisions to healthcare choices, and is crucial for understanding human behavior in uncertain contexts. For instance, pioneering works by Kahneman and Tversky (1979) and Thaler (2015) illustrate the vast importance and impact of decision-making studies on economics and psychology. 
These studies have provided foundational insights into how individuals evaluate risk and uncertainty, influencing a multitude of fields and practices (Kahneman & Tversky, 1979; Thaler, 2015; Camerer, 2003; Camerer & Hogarth, 1999; Rabin, 2000; Bavel et al., 2020). As LLMs are increasingly involved in decision-making processes, we believe it is time to propose a framework for evaluating them, which is of great importance. Given these examples, we hope it is clear that the scope of decision-making behavior is far from narrow and holds substantial relevance in various critical fields. The settings in this paper are well-established and widely utilized in behavioral studies, linking closely to many practical applications, and covering an exhaustive range of scenarios is beyond the scope of establishing the framework. Further, to address concerns about statistical randomness, we conducted each experiment for every LLM under every setting 300 times. This approach ensures a large enough sample size to mitigate the effects of randomness and variability. In behavioral economics and psychology, similar sample sizes are often used to draw robust and reliable conclusions. Final Remarks: We appreciate the opportunity to clarify our work and address your concerns. We hope this response provides a clearer understanding of our contributions and the scope of our study. We believe our work represents a significant step forward in the evaluation of LLMs and their decision-making behaviors. References: 1. Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. 2. Thaler, R. H. (2015). Misbehaving: The Making of Behavioral Economics. 3. Camerer, C. F. (2003). Behavioral Game Theory: Experiments in Strategic Interaction. 4. Camerer, C. F., & Hogarth, R. M. (1999). The Effects of Financial Incentives in Experiments: A Review and Capital-Labor-Production Framework. 5. Rabin, M. (2000). Risk Aversion and Expected-Utility Theory: A Calibration Theorem. 6. 
Bavel, J. J. V., Baicker, K., Boggio, P. S., et al. (2020). Using social and behavioural science to support COVID-19 pandemic response. Nature Human Behaviour, 4(5), 460-471.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models
Reject
Summary: The paper introduces generalized instruction tuning (GLAN), a taxonomy-based approach for synthesizing instruction tuning data. GLAN generates synthetic instruction data from a pre-curated taxonomy of human knowledge and capabilities and aims to create a diverse and broad-ranging instruction dataset. Strengths: 1. Comprehensive Coverage of Evaluation: The paper presents extensive experiments demonstrating that GLAN outperforms various popular instruction-tuned LLMs across multiple dimensions, including mathematical reasoning, coding, logical reasoning, and general instruction following. 2. Minimization of Human Involvement: The generation process significantly reduces human involvement, requiring human verification only at the taxonomy construction stage. This makes the approach scalable and less labor-intensive. 3. Customizability and Extensibility: The taxonomy-based approach allows for easy customization and extension. New fields or skills can be incorporated by simply adding new nodes to the taxonomy. Weaknesses: 1. While the paper addresses generalization, there is a risk that the generated synthetic data might overfit to the taxonomy's structure, potentially missing out on more nuanced, real-world instructions. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Are there any measures in place to ensure the generated synthetic data's diversity and prevent redundancy? 2. What is the whole taxonomy of the human knowledge and capabilities? And can each task (e.g., gsm8k, arc) be categorised into any sub category? 3. Are the effects of each category orthogonal to each other? i.e., ablating data from a child category does not affect tasks in another child category. It would be beneficial if the authors could provide some preliminary results. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thorough evaluation and insightful feedback on our submission. --- > While the paper addresses generalization, there is a risk that the generated synthetic data might overfit to the taxonomy's structure, potentially missing out on more nuanced, real-world instructions GLAN is customizable. One can always introduce rare fields/disciplines to the taxonomy to capture nuanced, real-world instruction data as needed. Sometimes, the real-world instructions might be too difficult to "summarize" (into a certain discipline/field) manually. One workaround is to prompt GPT-4 with some of your instructions to generate the field or discipline names. The most frequently occurring field/discipline names can then be incorporated into the taxonomy. --- > Are there any measures in place to ensure the generated synthetic data's diversity and prevent redundancy? Yes. As described in lines 161 to 172 (Section 2.4), we incorporate randomly sampled class sessions and key concepts into our instruction generation prompt to ensure the diversity of our generated instructions (also see the full prompt in Table 6 of Appendix A.4). For each discipline, we have approximately 4 million unique combinations (of randomly sampled class sessions and key concepts), and in total (126 disciplines) we have over 500 million such unique combinations, which guarantees that the 10 million instructions we generate exhibit significant diversity. --- > what is the whole taxonomy of the human knowledge and capabilities? As described in line 186 (Section 3.1), we uploaded the taxonomy of human knowledge and capabilities used in this work as supplementary material. Below is an example of a leaf node (i.e., the "History" discipline), where the parent node is the "Humanities" field and the grandparent node is the "academic" category. ``` {"topic": "History", "meta_topic": "academic, Humanities"} ``` --- > And can each task (e.g., gsm8k, arc) be categorised into any sub category? 
It depends. Specifically, gsm8k aligns well with the mathematics discipline in our taxonomy. In contrast, tasks like ARC and MMLU span multiple disciplines, corresponding to various nodes within our taxonomy. It is challenging to determine which specific discipline would most benefit the reasoning-related BBH task. However, after training on the 10 million generated instruction data, we observed a notable improvement in the reasoning capabilities of LLMs. --- > Are the effects of each category orthogonal to each other? i.e., ablating data from a child category does not affect tasks in another child category. It would be beneficial if the authors could provide some preliminary results. According to our ablation experiments on the Mathematics discipline, the answer is yes. We use 15K data generated from the Mathematics discipline (from our taxonomy) and 60K data generated from all the other disciplines (also from our taxonomy). We run experiments in three settings (i.e., only with the 15K math data, only with the 60K data in other disciplines and combining the 15K and the 60K data) and results are as follows.

| | GSM8K | HumanEval | BBH |
|-----------------------|-------|-----------|------|
| math 15k | 60.7 | 35.9 | 57.7 |
| others 60k | 31.3 | 41.5 | 58.8 |
| math 15k + others 60k | 61.5 | 40.2 | 58.7 |

The results from the GSM8K benchmark indicate that the mathematical capabilities are predominantly derived from the data generated within the Mathematics discipline. When combined with the 60K data from other disciplines, the mathematical performance remains mostly unchanged. Similarly, the coding and reasoning capabilities appear to be derived from the data of other disciplines (see results on HumanEval and BBH), and the same trends are observed when this data is combined with the math data. Thank you for bringing this interesting question to us. We will include the above explanations and results into our revised manuscript. 
---

All the results and discussions mentioned above will be included in our updated manuscript. Thank you again for your constructive reviews.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' responses; most of my concerns have been properly addressed. Thus I would like to maintain my score of 7 (Accept).

---

Reply to Comment 1.1.1: Comment: Thank you again for your thorough review and for taking the time to carefully read our responses. We are glad that most of your concerns have been addressed, and we appreciate your continued support for our work.
Summary: This paper introduces GLAN, a general and scalable method for instruction tuning of Large Language Models (LLMs). GLAN employs a top-down approach to generate high-quality instruction tuning datasets. Experiments across various benchmarks demonstrate that GLAN performs comparably to other existing methods.

Strengths:
1. This paper focuses on the alignment of Large Language Models, which is a trendy and important topic. If the dataset is released, it will be beneficial for the community.
2. This method is easy to follow. The process is highly scalable, leveraging LLMs like GPT-4 for generating instructions on a massive scale. GLAN allows for easy customization. New fields can be added by incorporating new nodes into the taxonomy.

Weaknesses: The novelty is limited, as similar top-down designs have been utilized in many previous works. Besides, the main experimental results in Table 1 appear mediocre compared to other methods.

Technical Quality: 3
Clarity: 3
Questions for Authors: Refer to the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our submission and providing constructive feedback.

---

> The novelty is limited as similar top-down designs have been utilized in many previous works.

Regarding "similar top-down designs in previous works", could you please share specific references or examples? This would help us further clarify the distinctions between GLAN and those approaches. Besides, the key contributions of GLAN include:

- **Scalability and Independence from Pre-Existing Data**: As described in Section 2 (see lines 82-95), GLAN enables LLMs such as GPT-4/GPT-4o and GPT-3.5/GPT-4o-mini to generate a vast amount of instruction-response data from scratch, exceeding 500 million data points (i.e., 500 million unique combinations of knowledge points), without relying on pre-existing data samples. This approach contrasts with previous methods like Evol-Instruct and Self-Instruct, which depend on pre-existing data samples (see lines 29-37). For instance, WizardCoder [1] utilizes 20K provided samples, resulting in a final data size of 80K after applying the Evol-Instruct method.
- **Broad Coverage Across Disciplines**: GLAN spans a wide range of domains, currently covering 126 disciplines. This extensive coverage ensures that the generated data is diverse and comprehensive. For a detailed overview of the 126 disciplines, please refer to Table 12 in the appendix or the supplementary materials.
- **Easy Customizability**: As detailed in lines 74-77, GLAN offers easy customization options, allowing users to add new fields, disciplines, or subjects into the taxonomy without disrupting previously generated content. This flexibility ensures that GLAN can be adapted to various needs and updated as new knowledge areas emerge.

---

> Besides, the main experimental results in Table 1 appear mediocre compared to other methods.
Without utilizing task-specific training data for these tasks, we achieved the highest or second-highest results as shown in Table 1 (see the “GLAN” row). Specifically, our approach yielded the best results for MBPP, BBH, ARC-E, ARC-C, and MMLU, and secured the second-best results for HumanEval, GSM8K, and MATH (also see lines 249 to 258). These results demonstrate that after instruction tuning, GLAN excels across multiple dimensions, from mathematical reasoning and coding to general reasoning and academic exams, with a systematic data generation approach. Beyond these general capabilities demonstrated on general benchmarks, GLAN also excels at instruction following, as shown in Table 4 and Table 5.

---

Thanks again for your review!

[1]. Luo, Z. et al. “WizardCoder: Empowering Code Large Language Models with Evol-Instruct”. 2023

---

Rebuttal Comment 1.1: Comment: Dear Reviewer koAL, Since we are approaching the deadline of the discussion period, we were wondering if you have had the chance to review our response to your comments. We would like to kindly inquire about the extent to which we have successfully addressed the concerns outlined in your review. We greatly value your feedback and would appreciate any further questions or comments you might have. Thank you for your time and consideration. Sincerely, All Authors
Summary: This paper proposes a generalized way of creating instruction data. The high-level motivation is to take inspiration from how human curricula are designed as a taxonomy of subjects, and to use the same structure to prompt an off-the-shelf LLM to create data. GLAN does not need seed examples or a pre-built taxonomy like prior work. Human verification is also performed after the taxonomy is built to weed out unimportant or inaccurate divisions. The overall process is: high-level taxonomy -> subjects -> syllabus -> instructions.

Strengths:
1. Overall strong performance: Extensive experiments show GLAN's effectiveness in various tasks, outperforming or matching state-of-the-art models in several benchmarks (Table 1).
2. Figure 2 on scaling properties of GLAN: I found this figure quite interesting. It suggests a log-linear scaling trend in performance as GLAN data is scaled up. This is quite promising.
3. Section 3.5 on task-specific overfitting: Another great analysis section that discusses how GLAN does not particularly overfit to the training data. This ensures that the synthetic data remains generalizable across different domains.
4. Modularity of the pipeline: The modular nature of the GLAN pipeline allows for easy customization and extension by incorporating new nodes into the taxonomy without re-generating the entire dataset.

Weaknesses:
1. No use of actual human curriculum: The paper set the expectation in the abstract of using, or being strongly inspired by, human curriculum. I was disappointed that the method does not utilize existing human curriculum structures, potentially missing out on years of insights in developing the same. Generating synthetic data, and in this case entire taxonomies, from pre-existing models can lead to extremely large amounts of bias.
I would have much rather seen the authors delegate only lower-level questions to LLMs than high-level abstractions, which would lead to a trickle-down effect on every single node in the taxonomy. This study, in my opinion, is incomplete without using either human-generated taxonomies and/or a comparison of how different the taxonomies are.
2. Computation cost not compared: The paper does not provide a comparison of computational costs with similar methods, such as WizardLM. For instance, GLAN training required approximately 8 days using 32 A100 GPUs to generate 10 million instructions, but no direct comparisons are made to illustrate the efficiency or cost-effectiveness relative to other approaches.
3. The method is limited by the performance of GPT-3.5/4: The quality of the generated taxonomy and syllabus heavily depends on the capabilities of the underlying LLMs used in the process, namely GPT-3.5 and GPT-4. In general, GLAN does not inform how we can improve capabilities of models beyond GPT-4. It also does not consider the cost of generating 10 million instructions.
4. High variability in results (Table 2): There is significant variability in GLAN's performance across different categories, with particularly weaker results in humanities and social sciences compared to STEM fields. The authors should address this, also discuss the document proportion of each taxonomy, and potentially see if there is a correlation between the data size and performance.

Technical Quality: 3
Clarity: 3
Questions for Authors: **Did you perform an ablation to verify this hypothesis?**: "For instance, the subject calculus is both in physics and mathematics. We do not de-duplicate those subjects, since it may reflect their importance in human knowledge." Please see Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please see Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your detailed review and the valuable suggestions provided for improving our work.

---

> No use of actual human curriculum

We did not use the human curriculum explicitly to build our taxonomy because: 1) Initially, we intended to automate the whole generation process and found GPT-4 is good at listing the taxonomy structure of the human curriculum. 2) We have a verification process for the taxonomy (see Section 2.1 and Section 3.1). In fact, all of our annotators searched for existing human curriculum taxonomies for verification. That said, we actually leverage the human curriculum implicitly, and our taxonomy overlaps significantly with existing human curricula. 3) There are numerous different versions of the human curriculum on the web, and determining which version to use is challenging. We instead generate the taxonomy using GPT-4 (we assume GPT-4 has read all these versions and can produce a version of above-average quality) and then verify each node inside it.

We also did not use human-created syllabi, because the syllabi developed within human curricula are numerous, and collecting them all while ensuring high quality and comprehensiveness is impractical for us. We will add these discussions to our updated manuscript.

---

> Computation cost not compared: The paper does not provide a comparison of computational costs with similar methods, such as WizardLM. For instance, GLAN training required approximately 8 days using 32 A100 GPUs to generate 10 million instructions, but no direct comparisons are made to illustrate the efficiency or cost-effectiveness relative to other approaches.

For the **computational cost, API cost, and GLAN’s differences from other approaches**, please refer to the **Author Rebuttal** addressed to all reviewers.
---

> The method is limited by the performance of GPT-3.5/4: The quality of the generated taxonomy and syllabus heavily depends on the capabilities of the underlying LLMs used in the process, namely GPT-3.5 and GPT-4. In general, GLAN does not inform how we can improve capabilities of models beyond GPT4. But also, does not consider the cost of generating 10 million instructions.

- **Model Dependency**: GLAN is not limited to GPTs. It can be applied to any strong closed-source (e.g., Claude, Gemini) or open-source (e.g., Llama-3.1 70B/405B, Nemotron-4, Mistral Large 2, etc.) LLMs. We chose GPT-3.5/4 for our experiments to best demonstrate the effectiveness of our method at the time of writing.
- **Improving Model Capabilities Beyond GPT-4**: This is still an open research problem, focusing on how a language model can self-improve with its own generations. Solely leveraging GLAN may not directly enhance capabilities beyond those of GPT-4. But we believe the diverse instructions GLAN can produce across a wide range of domains and tasks can at least help answer the question of where to improve during model self-improvement.
- **Cost of Generating Instructions**: Please refer to our **Author Rebuttal** at the top.

---

> High variability in results (Table 2): There is significant variability in GLAN's performance across different categories, with particularly weaker results in humanities and social sciences compared to STEM fields. The authors should address this, also discuss the document proportion of each taxonomy, and potentially see if there is a correlation between the data size and performance.

- **High variability in results**: We carefully examined the disciplines in our taxonomy (especially those related to MMLU subjects). We found there are 19 disciplines related to MMLU, and only 6 of them are STEM disciplines. In our final experiment, we generated almost the same number of examples for each of these 19 disciplines.
Consequently, we have more non-STEM data than STEM data. Therefore, as mentioned in lines 264 to 269, the strong STEM results may be due to CoT reasoning.
- **Ablation on duplicated subjects**: We did not perform an ablation on duplicated subjects for the following reasons. 1) The duplication is not very severe in our view. We have in total 15,751 subjects, and 7,030 of them are unique. Since we repeat the subject generation for each discipline 10 times (described in line 192), the duplication of subjects here is reasonable. 2) We manually inspected the most frequent subjects, and they looked reasonable to us.

---

Thanks again for your review! All the discussions and analysis above will be added to our updated manuscript.

---

Rebuttal Comment 1.1: Comment: Dear Reviewer 7Qp1, Since we are approaching the deadline of the discussion period, we were wondering if you have had the chance to review our response to your comments. We would like to kindly inquire about the extent to which we have successfully addressed the concerns outlined in your review. We greatly value your feedback and would appreciate any further questions or comments you might have. Thank you for your time and consideration. Sincerely, All Authors

---

Rebuttal 2: Comment:

> Initially, we intend to automate the whole generation process and found GPT-4 is good at listing the taxonomy structure of human curriculum.

"found GPT-4 is good at listing the taxonomy": is this based on what metric? Was there a scientific study to test this?

> our taxonomy overlaps significantly with existing human curriculum.

Is there a scientific analysis of this claim?

> computation cost comparison.

I was looking for a training compute cost comparison, since all baselines may have been trained for different durations, which can be a big confounder in the "perceived quality" of the data.
Overall, I found the answer to "high variability" quite unconvincing, and a reluctance to scientifically examine the phenomenon that is underneath the variability (data density, etc.).

---

Rebuttal Comment 2.1: Comment: Thank you for your thoughtful and detailed feedback. We greatly appreciate the time and effort you have taken to engage with our work and provide valuable insights.

> a more scientific analysis of "found GPT-4 is good at listing the taxonomy structure" and "our taxonomy overlaps significantly with existing human curriculum"

To quantify the overlap, we calculated the Overlap Coefficient (Szymkiewicz-Simpson coefficient; see the equation below) between our taxonomy and a "standard curriculum" (https://en.wikipedia.org/wiki/Outline_of_academic_disciplines):

$$O(A, B) = \frac{|A \cap B|}{\min(|A|, |B|)}$$

The Overlap Coefficient is 90 / 126 ≈ 71.43%. This high overlap coefficient supports our claim that “our taxonomy overlaps significantly with existing human curriculum” and is also a strong indicator that “GPT-4 is good at listing the taxonomy structure of human curriculum”. We also find that recent studies [1][2] have demonstrated the strong capabilities of GPT-4 in generating taxonomies. We have uploaded our taxonomy as supplementary material (please refer to line 186 in Section 3.1). Besides, if you are aware of a more appropriate standard curriculum for comparison, we would appreciate your suggestions.

> computation cost comparison.
The training cost (measured in FLOPs) for different methods is shown below.

| **Methods** | **FLOPs** |
|-------------------|-------------------|
| Orca 2 [3] | $$9.5 \times 10^{20}$$ |
| MetaMath [5] | $$2.5 \times 10^{19}$$ |
| WizardLM v1.2 [6] | $$\geq 3.2 \times 10^{19}$$ |
| WizardMath v1.1 [7] | – |
| GLAN | $$6.3 \times 10^{20}$$ |

According to [6], the training cost for WizardLM v1.0 is $3.2 \times 10^{19}$ FLOPs, while the training cost for WizardLM v1.2 remains unknown, though we expect that at least the same number of examples were used. The exact number of training examples for WizardMath is not disclosed in [7], and it also involves a PPO training stage with limited technical details provided. Please refer to Table 1 for their performance comparisons. Despite utilizing fewer computational resources, our results surpass those of Orca 2, and we consistently outperform MetaMath and WizardLM across all tasks presented in Table 1. Also note that GLAN data aims to enhance a model's capabilities across a wide range of tasks (without relying on seed examples), whereas the data generated by previous methods such as MetaMath, WizardLM, and WizardMath are focused on improving performance in specific tasks. Therefore, to achieve the same level of performance on a particular task, our method usually needs to generate more data and hence incurs a higher training cost.

> Overall, found the answer to "high variability" quite unconvincing, and a reluctance to scientifically examine the phenomenon that is underneath the variability (data density, etc.).

The high variability in humanities and social sciences performance is not due to data density but to the inference strategy (CoT vs. non-CoT). We observed no correlation between data size and performance. Specifically, there are 19 disciplines related to MMLU subjects, and only 6 of them are STEM disciplines. We generated almost the same number of examples for each of these 19 disciplines.
We have more non-STEM data than STEM data, with the non-STEM data being 2.16 times greater. We believe that Chain-of-Thought (CoT) reasoning contributes to the lower performance on these questions. In our earlier experiments using the MMLU benchmark, we evaluated both CoT and non-CoT settings. While the overall performance was superior with CoT (leading to its adoption), we noticed that CoT was more effective for STEM questions, whereas non-CoT proved advantageous for humanities and social sciences questions. This might be because CoT aids multi-step reasoning in STEM multiple-choice questions, while humanities and social sciences questions involve more memorization and single-step reasoning, where CoT may introduce additional errors (also see lines 264 to 269).

All discussions and results will be added to our revised manuscript. Thank you again for your valuable feedback.

## References

[1]. Gunn, M. et al. “Creating a Fine Grained Entity Type Taxonomy Using LLMs.” 2024.
[2]. Lee, M. et al. “Human-AI Collaborative Taxonomy Construction: A Case Study in Profession-Specific Writing Assistants.” 2024.
[3]. Mitra, A. et al. “Orca 2: Teaching Small Language Models How to Reason.” 2023.
[4]. Taori, R. et al. “Stanford Alpaca: An Instruction-following LLaMA Model.” GitHub repository, 2023.
[5]. Yu, L. et al. “MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models.” ICLR 2024.
[6]. Xu, C. et al. “WizardLM: Empowering Large Language Models to Follow Complex Instructions.” 2023.
[7]. Luo, H. et al. “WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct.” 2023.
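The Overlap Coefficient used in the taxonomy comparison above is straightforward to compute over two sets of discipline names. Below is a minimal sketch; the discipline names in the example are placeholders, not entries from the actual taxonomy.

```python
def overlap_coefficient(a, b):
    """Szymkiewicz-Simpson overlap coefficient: |A intersect B| / min(|A|, |B|)."""
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Toy example with placeholder discipline names:
glan_taxonomy = {"History", "Mathematics", "Physics", "Law"}
wiki_outline = {"History", "Mathematics", "Chemistry", "Law", "Biology"}
score = overlap_coefficient(glan_taxonomy, wiki_outline)  # 3 shared / min(4, 5) = 0.75
```

With 90 shared disciplines and the 126-node taxonomy as the smaller set, this reproduces the rebuttal's figure of 90/126, roughly 71.4%.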
Summary:
## Overall summary
- This paper introduces GLAN, a method for enhancing LLMs by generating synthetic instruction data using a taxonomy of human knowledge and capabilities. GLAN constructs this taxonomy by decomposing knowledge into fields and disciplines, leveraging LLMs to generate a comprehensive syllabus for each subject.
- GLAN’s scalable and customizable framework allows for easy integration of new fields or skills, highlighting its potential for ongoing improvement and adaptation.
## My opinion of the paper
- I think this is a really interesting approach to generate data that can allow LLMs to be potentially smarter. However, I am wondering if there are newer topics, for example (within the medical area, we have the new topic called "Covid-19".) Since GLAN is very dependent on LLMs, the main area of concern would be ensuring that the LLMs that GLAN depends on remain updated.

Strengths:
## Originality
- The approach is quite interesting. The authors made use of real-life scenarios, which is to use the structure of human education systems to build the taxonomy. This approach mimics the systematic acquisition of knowledge and skills in education, providing a framework for generating instruction data.
## Clarity
- A pseudo algorithm is provided and the figures are easy to understand.
## Significance
- By creating a general and scalable method for instruction tuning, GLAN has the potential to improve the performance of LLMs across a wide range of tasks and domains.

Weaknesses:
## Quality
- While the paper claims scalability, there is limited discussion on the computational resources required for generating the synthetic data at scale. Practical constraints related to computational costs and time could be a potential weakness. It was mentioned in the checklist that it is very computationally expensive to repeat experiments.
Technical Quality: 3
Clarity: 4
Questions for Authors:
- Line 269: Why do you say that the errors on humanities and social science questions are due to CoT? Could it be due to a lack of knowledge, because the model was not trained on that knowledge?
- Figure 2: Any reason why, for the HumanEval and BBH datasets, the scores dropped even though the GLAN data size increased?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Indicated in the appendix (do consider placing it in the main paper), but the computation cost mentioned in the checklist is not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and the thoughtful comments on our submission.

---

> However, I am wondering if there are newer topics, for example (within the medical area, we have the new topic called "Covid-19".) Since GLAN is very dependent on LLMs, the main area of concern would be ensuring that the LLMs that GLAN depends on remains updated.

Actually, we find in our generated dataset there are 38,749 instructions and 28 syllabi containing the keyword “covid-19”. We believe the GPT-4 we used in experiments is aware of the Covid-19 topic. However, there is no syllabus or discipline exactly focusing on “Covid-19”. A quick fix is to instruct a newer GPT-4 (i.e., gpt-4o) using the prompts described in Sections 2.2 and 2.3 (also see Table 6) to generate a syllabus or subject list on “Covid-19”. The generated syllabus looks good to us, and we show part of it below due to space limitations.

```
### College Level Syllabus on COVID-19

#### **Introduction**
This course provides an in-depth study of COVID-19 …

#### **Class Details**

**Class 1: Introduction to Coronaviruses**
- **Description**: Overview of coronaviruses, …
- **Knowledge Points**:
  1. Structure and classification of coronaviruses.
  2. History of coronavirus outbreaks.
  3. Transmission mechanisms.
  4. Symptoms and disease progression.
  5. Comparison with other respiratory viruses.
- **Learning Outcomes & Activities**:
  - Understand the basic virology of coronaviruses.
  - Activity: Research and present a comparison between SARS, MERS, and COVID-19.

---

**Class 2: Virology of SARS-CoV-2** …
**Class 3: Epidemiology of COVID-19** …
**Class 4: Clinical Presentation and Diagnosis** …
```

Thanks to the “easy customization” feature of GLAN (lines 74 to 77), we only need to generate instructions for the newly added “Covid-19” topic without re-generating the entire dataset.
---

> While the paper claims scalability, there is limited discussion on the computational resources required for generating the synthetic data at scale. Practical constraints related to computational costs and time could be a potential weakness. It was mentioned in the checklist that it is very computationally expensive to repeat experiments.

For the **computational cost** and **API cost**, please refer to the **Author Rebuttal** at the top. Besides, we believe there are two types of scalability: (1) the ability to generate a large number of diverse data points and (2) the ability to generate each data point at a low cost. In Checklist 7 (Experiment Statistical Significance), we noted, “We did not include error bars in the experiments due to the high computational demands.” This statement does not contradict GLAN's scalability, because generating an additional 10 million data points to compute error bars is indeed computationally expensive. In fact, the scalability of GLAN is precisely why it is challenging to repeat the experiments. Our objective is to enhance the general capabilities of large language models (LLMs). Achieving this goal appears to necessitate a large dataset, which in turn results in high computational/API costs.

---

> Line 269: Why do you say that the reason why errors coming from humanities and social science questions is due to CoT? Could it be due to the lack of knowledge, because it was not trained on that knowledge?

The lower performance on humanities and social science questions is unlikely to stem from a lack of knowledge. The scope of knowledge for the generated questions is predominantly governed by the generated syllabi. Therefore, it is implausible that GPT-4 would create syllabi for humanities and social sciences with significantly narrower knowledge coverage compared to those in STEM disciplines. We believe CoT contributes more to the lower performance on humanities and social science questions.
In our earlier experiments using the MMLU benchmark, we evaluated both the w/ CoT and w/o CoT settings. While the overall performance was superior w/ CoT (thus leading to its adoption), a nuanced observation emerged: CoT was more effective for STEM questions, whereas the absence of CoT proved advantageous for humanities and social science questions. This may be because “CoT may help the multi-step reasoning in STEM multi-choice questions, while humanities and social sciences questions involve more memorization and single-step reasoning, where CoT may introduce additional errors.”

---

> Figure 2: Any reason why for HumanEval and BBH datasets, the scores dropped even though the GLAN data size increase?

The score drop happens within the 50K to 1M range. The main reason is probably the relatively small average number of data points per discipline. We have 126 disciplines in total, and on average we have:
- 2K examples per discipline for a total of 200K examples,
- 4K examples per discipline for a total of 500K examples,
- 8K examples per discipline for a total of 1M examples.

We do observe a significant leap in performance from 1M to 10M examples on HumanEval and BBH, when the number of data points per discipline is large enough. We will move the computational/API cost related discussions from the checklist/appendix to the main paper.

----------

Thanks again for your review! All discussions and results above will be added to our revised manuscript.

---

Rebuttal Comment 1.1: Title: Reply to Authors' Rebuttal
Comment: Thank you for your clarifications! Regarding the first point, I apologize for the confusion. Covid-19 is indeed already known to GPT-4, as its training data extends until October 2023. I was referring to scenarios involving new topics that emerged after October 2023, though I can't think of specific examples at the moment. Would such scenarios be handled as outlined in Sections 2.2 and 2.3, as you have explained here?
---

Reply to Comment 1.1.1: Comment: Thank you for your insightful question and for bringing up this important point. Regarding new topics that emerged after October 2023, it is still possible to handle these scenarios with some modifications to the underlying LLM or the prompts described in Sections 2.2 and 2.3. Here are three methods to ensure our approach remains up-to-date. In the following, we refer to the prompts in Sections 2.2 and 2.3 as "GLAN prompts":

- **Using Retrieval-Augmented Generation (RAG)**: You can leverage an existing LLM integrated with web search capabilities (e.g., Perplexity AI API https://docs.perplexity.ai/reference/post_chat_completions) or implement RAG using GPT-4 function calls combined with the Google Search API. In this approach, the GLAN prompts remain unchanged. The RAG model first retrieves documents relevant to the new topic and then integrates these documents with the GLAN prompts. This method is very likely to be effective because new topics are typically related to pre-existing knowledge within the LLM. With the retrieved documents, the LLM is still likely to generate a reasonable subject list or syllabus (i.e., breaking down the new topic given docs introducing it).
- **Prepending Documents of the New Topic to Our Prompts**: Conduct a search for the new topic using a search API (e.g., Google Search API) and prepend the top returned documents and "Based on the context above, your task is as follows." to the GLAN prompts. It may also be necessary to prepend these documents to the instruction generation prompt in Section 2.4 to ensure that the LLM comprehends any new terms. This method is similar to the RAG approach but provides more transparency to API users regarding the integration of search results with prompts.
- **Adopting Newer LLMs**: A more straightforward solution is to wait for the release of newer models (in three months or less) by leading LLM companies (e.g., OpenAI, Anthropic, Meta, Mistral), which will likely include training on documents covering the new topic. Once these models are available, they can be adopted.

We appreciate your question, as it helps broaden the scope of our method. Thank you again for your valuable feedback!
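The "prepending documents" option described in the reply above can be sketched in a few lines. This is a hypothetical helper, not the authors' code: `augment_glan_prompt` and the placeholder documents are invented names, and only the bridging sentence is the one quoted in the rebuttal.

```python
def augment_glan_prompt(glan_prompt, retrieved_docs):
    """Prepend retrieved documents about a new topic to a GLAN prompt.

    Hypothetical helper: the bridging sentence below is the one proposed in
    the rebuttal; the retrieval step (e.g., a web search API) is assumed to
    have already produced `retrieved_docs`.
    """
    context = "\n\n".join(retrieved_docs)
    bridge = "Based on the context above, your task is as follows."
    return f"{context}\n\n{bridge}\n\n{glan_prompt}"

# Example usage with placeholder documents:
prompt = augment_glan_prompt(
    "Generate a college-level syllabus on the topic below.",
    ["Doc 1: background on the new topic.", "Doc 2: recent developments."],
)
```

The same wrapping could be applied to the instruction generation prompt of Section 2.4, as the reply suggests, so the LLM sees definitions of any new terms before generating instructions.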
Rebuttal 1: Rebuttal: We thank the reviewers for your valuable feedback; in this general rebuttal, we address common concerns and questions raised.

**Regarding Computational Cost**

“8 days using 32 A100 GPUs” is the cost of fine-tuning Mistral on the 10 million instructions we generated. We do not know the computational cost of generating these 10 million instructions, as the model architectures and parameters for GPT-4 and GPT-3.5 are not disclosed.

**Regarding API Cost**

We estimate the API cost for data generation, which amounts to approximately 360K USD when using GPT-4 and GPT-3.5 (for response generation), based on data from context.ai and the official OpenAI API pricing. It is important to note that a team within our organization supports these GPT API calls, and we believe the actual cost is substantially lower than 360K USD. As of today, we recommend using GPT-4o and GPT-4o-mini (for response generation) to reproduce the data, reducing the cost to approximately 66K USD. This recommendation is based on our findings that GPT-4o outperforms GPT-4 in many tasks and GPT-4o-mini consistently outperforms GPT-3.5. Furthermore, by leveraging Mistral Large 2 and Mixtral 8x7B, the cost can be further reduced to around 42K USD. Notably, the API costs have decreased significantly since last year:

- GPT-4-0613: 30/60 USD per million input/output tokens
- GPT-4-Turbo-1106: 10/30 USD per million input/output tokens
- GPT-4o: 5/15 USD per million input/output tokens
- GPT-4o-2024-08-06: 2.5/10 USD per million input/output tokens
- GPT-4o-mini: 0.15/0.60 USD per million input/output tokens

We anticipate that the API costs will continue to decrease in the future, making the application of GLAN more feasible.

**Differentiating GLAN from Other Approaches**

This work does not aim to achieve SOTA results on existing benchmark tasks using minimal resources.
Intentionally generating data similar to target tasks (e.g., paraphrasing an existing training set) is perhaps the most cost-effective method to improve on these target tasks. However, in the long run, this class of methods leads to overfitting on these tasks. Today's LLMs (even 7B models) are capable of solving many different tasks, and existing benchmark tasks are only a small subset of them. Our method, GLAN, aims to enhance the capabilities of LLMs across a wide range of tasks (not just the tasks with good evaluations), and we do not use training data from the target tasks at all. Our assumption is that all instruction data for different tasks can be generated using the same method; if we can perform well on known tasks (with good evaluations), we can probably also do well on tasks that still lack evaluations. Results in Sections 3.3 and 3.6 demonstrate that we did reasonably well on existing known tasks. In short, GLAN aims to enhance capabilities across a wide range of tasks, whereas previous methods such as WizardLM aim to enhance capabilities on one or a few tasks. To achieve the same level of performance on a particular task, our method needs to generate more data, which is the price to pay for cross-task generalization.
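As a rough sanity check of the fine-tuning figure in the rebuttal above ("8 days using 32 A100 GPUs"), one can compare the standard 6·N·D model-FLOPs approximation against the raw hardware budget. All concrete numbers here are our own assumptions, not figures from the rebuttal: roughly 7B parameters for Mistral, an assumed ~1,500 tokens per instruction, and an assumed 312 TFLOPS bf16 peak per A100.

```python
def training_flops(n_params, n_tokens):
    """Standard 6*N*D approximation of dense-transformer training FLOPs."""
    return 6 * n_params * n_tokens

# Assumed numbers (not from the rebuttal): ~7B parameters,
# 10M instructions at ~1,500 tokens each.
model_flops = training_flops(7_000_000_000, 10_000_000 * 1_500)

# Raw hardware budget: 32 A100s for 8 days at an assumed 312 TFLOPS bf16 peak.
peak_flops = 32 * 8 * 86_400 * 312e12

# Implied utilization is well under 1, so the reported wall-clock is plausible.
utilization = model_flops / peak_flops
```

Under these assumptions the model-FLOPs estimate lands around 6.3e20, a small fraction of the peak hardware budget, which is consistent with an 8-day fine-tuning run at modest utilization.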
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States
Accept (poster)
Summary: In this paper, a decoupled decoding process integrating intention and future state querying is proposed for the motion prediction task. Through a hybrid design with attention and Mamba, the proposed DeMo framework achieved strong performance on the AV2 and NuScenes benchmarks. Strengths: 1. A novel decoupled query design for intentions and future states. 2. A good trial in leveraging the Mamba structure for efficient state-wise decoding. 3. Solid performance on prediction benchmarks. Weaknesses: 1. An overclaim of SOTA performance: for instance, the LOF [1] method showcases much better performance on AV2 compared with the proposed DeMo. I think a thorough comparison and corrections should be conducted. 2. Heavy computational load for long-horizon state queries. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the computational cost, such as FLOPs and training time, of DeMo compared to other methods? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and the suggestions for improvement. Below are our responses to the reviewer’s comments: **Q1: Claim for SOTA performance.** Thank you for your attention. Regarding the contemporaneous work LOF [1]: it was released on arXiv on June 20, 2024, after we submitted our paper to NeurIPS in May 2024. We will add a discussion on LOF to provide a thorough comparison in the revised paper. **Q2: Heavy computational load for long-horizon state queries.** To address this issue, we propose balancing efficiency and performance by utilizing one state query to represent several time steps, as shown in the left part of Table 6. This can effectively reduce the computational load. **Q3: Computational cost compared to other methods.** Thank you for this valuable suggestion. We have added a table below to further compare computational cost. The experiments are conducted on Argoverse 2 using 8 NVIDIA GeForce RTX 3090 GPUs. |Method|FLOPs|Training time|Memory|Parameters|Batch size| |:---:|:---:|:---:|:---:|:---:|:---:| |SIMPL [2]|19.7 GFLOPs|8h|14G|1.9M|16| |QCNet [3]|53.4 GFLOPs|45h|16G|7.7M|4| |DeMo (Ours)|22.8 GFLOPs|9h|12G|5.9M|16| > [1] FutureNet-LOF: Joint Trajectory Prediction and Lane Occupancy Field Prediction with Future Context Encoding. arXiv preprint: 2406.14422, 2024. > [2] SIMPL: A Simple and Efficient Multi-agent Motion Prediction Baseline for Autonomous Driving. IEEE Robotics and Automation Letters, 2024. > [3] Query-centric trajectory prediction. CVPR, 2023. --- Rebuttal Comment 1.1: Comment: Thanks very much for perfectly addressing my concerns. I would like to raise the Soundness and Contribution scores while maintaining the general rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Rxa7 We appreciate the reviewer's time for reviewing and thanks again for the valuable comments and the positive score! Best wishes Authors
Summary: The manuscript presents a novel decoupling method for motion forecasting tasks, where the directional intentions are predicted first and the dynamic states following the predicted direction are predicted accordingly. The proposed solution is easy to follow, the model size is small, and the experimental results are convincing, achieving (as the reviewer assumes) first place in the Argoverse 2 dataset challenge. Strengths: 1. Simple model design with good performance 2. Extensive experiments and convincing results. Weaknesses: 1. Some unclear details about the loss function and model design. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The loss function is simply a sum of all loss terms. Is it possible to have imbalanced losses, or in other words, one loss dominating the training? 2. Is there a typo at Line 154? $Q_a$ or $Q_s$? 3. Between Lines 135-136, the authors said that $T_s$ and $T_f$ can differ. In this case, is the prediction conducted multiple times to fill the $T_f$ steps? If so, can the directional intention change in this process? If this is possible, can the model handle it? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and the suggestions for improvement. Below are our responses to the reviewer’s comments: **Q1: About loss function.** Thank you for your careful consideration of our work. As indicated in the right part of Table 5 in our paper, each loss component in our model contributes effectively to the training process, ensuring that no single loss term dominates the optimization. This approach aligns with recent works, such as QCNet [1] and MTR [2], which also employ a simple sum of all loss terms. **Q2: A Typo at Line 154.** Yes, we apologize for the typo. It should be $Q_s$. We have revised it in the manuscript. **Q3: About $T_s$ and $T_f$ between Lines 135-136.** If $T_s$ and $T_f$ differ, it results in the cases mentioned in the left part of Table 6, where we reduce the number of state queries to make the model more efficient. For example, in Argoverse 2, with $T_f=60$ and $T_s=30$, this corresponds to the case in the third row of the left part of Table 6, where one state query represents two contiguous time steps. There is no need for multiple predictions, and the directional intentions remain unchanged. > [1] Query-centric trajectory prediction. CVPR, 2023. > [2] Motion transformer with global intention localization and local movement refinement. NeurIPS, 2022. --- Rebuttal Comment 1.1: Comment: The authors have addressed most of my concerns. I will maintain my current score. --- Rebuttal 2: Comment: Dear Reviewer V1LD We appreciate the reviewer's time for reviewing and thanks again for the valuable comments. Best wishes Authors
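For readers who want to see how such an unweighted sum of loss terms looks in practice, here is an illustrative numpy sketch of a state-query regression term plus a winner-take-all mode term. The helper names (`total_loss`, `smooth_l1`) are hypothetical; this follows the SmoothL1/cross-entropy style described in these rebuttals, not the authors' actual code:

```python
import numpy as np

def smooth_l1(pred, gt, beta=1.0):
    # Huber-style Smooth L1, averaged over all elements.
    d = np.abs(pred - gt)
    return float(np.mean(np.where(d < beta, 0.5 * d * d / beta, d - 0.5 * beta)))

def total_loss(y_state, trajs, probs, y_gt):
    # y_state: (T, 2) single trajectory decoded from state queries -> L_ts
    # trajs: (K, T, 2) mode trajectories, probs: (K,) their softmax scores -> L_m
    l_ts = smooth_l1(y_state, y_gt)
    fde = np.linalg.norm(trajs[:, -1] - y_gt[-1], axis=-1)
    best = int(np.argmin(fde))                       # winner-take-all mode selection
    l_m = smooth_l1(trajs[best], y_gt) - np.log(probs[best] + 1e-9)
    return l_ts + l_m                                # plain unweighted sum
```

Because the regression terms share the same scale (displacement errors in meters under Smooth L1), a plain sum is often stable in practice, which is consistent with the ablation the authors cite.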
Summary: DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States introduces a state of the art model architecture for motion forecasting (predicting the future trajectories of road actors for the purpose of autonomous driving). The authors make two notable contributions. First they provide an alternative to the typical one-query one-trajectory paradigm of current high performing models. Instead they decompose queries into "mode queries" which attempt to capture directional intentions and "state queries" which attempt to capture the dynamic properties of a trajectory. The authors also replace the sequence processing portion of the encoder (typically cross attention layers in current models or RNNs in older models) with Mamba blocks. The authors include numerous ablations demonstrating that each of these contributions plays a substantive role in the performance of their model. The authors present results on two significant motion forecasting benchmarks. Strengths: This is a very strong and well-presented manuscript. Some strengths include: ### Presentation - The copy is excellent. The paper is well written and largely quite clear. - Tables are excellent. - Visual examples (figures 3-5) are clear and compelling. ### Results The results are presented on two (of the three) most important motion forecasting benchmarks for self-driving. In both cases DeMo is clearly SOTA. They additionally present ensemble results. ### Analysis The authors conduct an extensive set of ablation experiments. They conduct the expected ablations (disabling various parts of their architecture). Additionally, they explore the impact of: layer types (RNN vs Uni MB vs Bi MB), the number of layers, the number of state queries, the auxiliary losses. Weaknesses: This is already quite a solid manuscript, however I hope that the authors can selectively address a few of the weaknesses and questions below to arrive at an even better paper. 
### Results While AV2 and NuScenes are both important benchmarks, WOMD (Waymo) is probably the most important (or co-most-important) benchmark in this space right now. It would have been nice to see results on WOMD. There are tools like https://github.com/vita-epfl/UniTraj now to make this easier. ### Analysis - While Tables 4 and 5 clearly demonstrate that the query decomposition and auxiliary losses contribute significantly to the model performance, there is little evidence to support the motivating intuitions. For example, the introduction of state queries and the associated auxiliary loss is supposed to produce better dynamics. Can we come up with a metric to measure that? Can we find ways to demonstrate that our decomposition works as we expect, i.e., that directional information is predominantly stored in mode queries while dynamics are encoded in state queries? - I would have liked to see the authors get an RNN-based model to successfully converge. (The GRU result is disappointing). ### Reproducibility The authors promise to release code (which would entirely alleviate my concern here); however, I do not feel that the model is reproducible from the manuscript alone. There is very little information about the auxiliary losses. Additionally, there is no supplementary diagram or table with an explicit architectural description (layer sizes, normalization, etc). Technical Quality: 3 Clarity: 3 Questions for Authors: - Can you provide a precise formulation of the auxiliary losses? - Can you report training time? - How does the model in ID3 (Table 4) work? Are the decoupled queries just concatenated and fed directly to the MLP decoder? - How does the model size vary across your ablations (Table 4)? I.e., why should I ascribe the performance improvements to architectural choices and not just growing model capacity? - For inference speed can you report 99th percentile (or similar) rather than mean? The AV industry is primarily concerned with worst-case performance. 
- How does the multi-agent setup work? Do I re-encode the scene in each agent-centric frame? - Line 488 -- the text in this paragraph makes it sound like ZOH is a continuous->discrete transformation. I might be wrong here, but I typically think of it as a digital->analog transformation. Are we really doing something like the inverse of ZOH? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and the suggestions for improvement. Below are our responses to the reviewer’s comments: ## _Response to Weaknesses._ **1. Results: Results on WOMD (Waymo).** We provide results on WOMD using the settings in UniTraj [1], as shown below. The results of other methods are also from UniTraj. |Method|$minFDE_6$|$minADE_6$| |:---:|:---:|:---:| |MTR|1.78|0.78| |Wayformer|1.65|0.73| |AutoBot|1.65|0.73| |DeMo (Ours)|1.59|0.75| **2. Analysis: Better measure query decomposition.** Thank you for this valuable suggestion. We measure the outputs of state queries and mode queries with $minADE$ and $minFDE$, as shown below. We can see that the $minADE_1$ and $minFDE_1$ of the trajectories from state query outputs are better than those from mode query outputs. This means state dynamics are encoded in state queries. Additionally, there are six output trajectories from mode queries, indicating that directional information is predominantly stored in mode queries. The final outputs take advantage of the strengths of both. We will add this analysis to the revised paper. |Method|$minFDE_1$|$minADE_1$|$minFDE_6$|$minADE_6$| |:---:|:---:|:---:|:---:|:---:| |state query outputs|3.84|1.52|-|-| |mode query outputs|4.12|1.63|1.31|0.67| |final outputs|3.93|1.54|1.24|0.64| **3. Analysis: About RNN-based model and the GRU result.** As shown in the left part of Table 5, we conducted an ablation study on an RNN-based model to compare GRU with Mamba. If we directly replace Mamba with GRU to process state queries, it is difficult to achieve convergence, leading to rather poor results, as shown below. Perhaps RNN-based models can process state queries in some other way; this could be a research problem on its own. ||$minFDE_6$|$minADE_6$|$MR_6$| |:---:|:---:|:---:|:---:| |GRU|1.842|0.923|0.274| **4. Reproducibility: Architectural description.** Thank you for your question. The layer sizes are provided in Table 7 in the Appendix. 
For normalization, we use nn.LayerNorm, and for activation, we use nn.GELU. Additional details can be found in the implementation section at Lines 184 and 459. ## _Response to Questions._ **1. A precise formulation of the auxiliary losses.** As in Line 143, we use an MLP to decode state queries into a single future trajectory $Y_f$ and calculate the loss with ground truth $Y_{gt}$ to obtain $L_{ts}$. $L_{ts} = {\rm SmoothL1}(Y_f, Y_{gt})$ As in Line 150, we use MLPs to decode the future trajectories $Y_f$ and probabilities $P_f$. So $L_m$ is shown below: $Y_{best}, P_{best} = {\rm SelectBest}(Y_f, Y_{gt})$ $L_{m} = {\rm SmoothL1}(Y_{best}, Y_{gt}) + {\rm CrossEntropy}(P_f, P_{best})$ **2. Training time.** About 9 hours in total. Our settings are as mentioned in Line 184 and Line 463. **3. The model in ID3 (Table 4).** Yes, you are right. Decoupled queries are just concatenated and fed directly to the MLP decoder, and we use two auxiliary losses to optimize the two types of queries. **4. Model size variation across ablations (Table 4).** The table below shows the model size variations across our ablations in Table 4. Additionally, we perform an ablation on the depth of Attention and Mamba layers, as shown in the right part of Table 6. We can see that even a single layer can achieve decent performance. This indicates that the performance improvements are due to architectural choices rather than merely increasing model capacity. |ID|model size| |:---:|:---:| |1|2.0M| |2|2.0M| |3|2.1M| |4|5.8M| |5|5.9M| **5. For inference speed.** Thank you for this valuable suggestion. In most scenarios ($>$ 90\%), the inference speed is between 25 and 45 ms. In some complex scenarios, it can go up to 70 ms, while in some easy scenarios, it is only 20 ms. **6. Multi-agent setting.** We use queries for each agent to predict their trajectories in the ego agent's coordinate system, so we do not re-encode the scene in each agent-centric frame. This approach avoids making the model costly. 
**7. For Line 488 about ZOH in Mamba.** ZOH can discretize continuous signals; the specific formula can be found in the Mamba [2] paper. > [1] UniTraj: A Unified Framework for Scalable Vehicle Trajectory Prediction. arXiv preprint:2403.15098, 2024. > [2] Mamba: Linear-Time Sequence Modeling with Selective State Spaces. arXiv preprint:2312.00752, 2023.
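For reference, the ZOH discretization mentioned here maps the continuous SSM $x'(t) = Ax + Bu$ to a discrete recurrence. Below is a minimal numpy sketch of the standard formula for the diagonal-$A$ case used in Mamba-style models (illustrative, not the authors' implementation):

```python
import numpy as np

def zoh_discretize(A, B, delta):
    """Zero-order hold: x'(t) = A x + B u  ->  x_k = Ab * x_{k-1} + Bb * u_k.

    A, B: (d,) parameters of a diagonal SSM; delta: step size.
    """
    Ab = np.exp(delta * A)
    # (delta*A)^{-1} (exp(delta*A) - I) * delta*B simplifies elementwise to:
    Bb = (Ab - 1.0) / A * B
    return Ab, Bb
```

This direction is continuous-to-discrete: holding the input $u$ constant over each interval of length `delta` (the "zero-order hold" assumption) yields the exact discrete transition above, which is why the Mamba literature describes ZOH as a discretization rule.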
Summary: The paper presents DeMo, a novel framework for motion forecasting in autonomous driving systems. DeMo decouples the motion forecasting task into two distinct components: mode queries for capturing directional intentions and state queries for modeling dynamic states over time. This separation allows DeMo to separately optimize for multi-modality and dynamic state evolution, leading to a more comprehensive representation of future trajectories. The framework employs a combination of Attention and Mamba techniques for global information aggregation and state sequence modeling. Extensive experiments on the Argoverse 2 and nuScenes benchmarks demonstrate that DeMo achieves state-of-the-art performance in motion forecasting. Strengths: The overall idea behind DeMo is reasonable and technically sound. Additionally, the experiments are comprehensive, demonstrating the results of DeMo across two different datasets, Argoverse 2 and nuScenes. Weaknesses: There are two major weaknesses: 1. Technical Descriptions Lack Clarity: The technical explanations of the methods and algorithms used in DeMo are not sufficiently clear, making it challenging for readers to fully understand the proposed techniques and their implementations. 2. Unconvincing Contributions: The claimed contributions of the paper are not entirely convincing. For instance, the paper doesn’t compare DeMo to models that have superior performance on existing leaderboards, which calls the novelty of its improvements into question. Additionally, there are previous works (e.g., Motion Mamba by Zhang et al.) that have already combined Mamba and Transformer techniques for time series motion data, which undermines the claim that DeMo introduces a novel approach. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In the first line of Eq. (2), what do the symbols {$\{ t_1, t_2, ...t_{T_s} \}$} represent, and how do $T_s$ and $T_f$ differ from or relate to each other? 
How are these time values obtained and utilized in the model? 2. For the motion model $Q_m$, what specific features are considered for different motion models? The paper does not clearly specify the features used in these motion models. 3. In Section 3.4, the loss function $L_{ts}$ is described as related to "intermediate features of time state." What exactly are these intermediate features, and what is their role in the model? This concept is not clearly defined in the paper. 4. Table 1 does not include baseline methods such as SEPT and SEPT++ that have better performance and earlier submission dates than DeMo. Similarly, Table 3 omits models like QCNet, which outperform DeMo. Why are these models not included in the comparisons? 5. The paper lacks detailed information about the model’s parameters and settings. Can you provide more specific details on the parameters used for DeMo? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The technical descriptions lack clarity and the overall contribution is limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and the suggestions for improvement. Below are our responses to the reviewer’s comments: ## _Response to technical descriptions._ **Q1: About Eq. (2) and related symbols meaning.** For the meaning of {$t_1,…,t_{T_s}$}: as in Line 135, these represent real-time differences of temporal states. For example, in Argoverse 2, we aim to predict future trajectories over 60 time steps. Therefore, if we use 60 state queries, $T_s=60$, and {$t_1,…,t_{T_s}$} corresponds to {$1,2,…,60$}. For $T_f$ and $T_s$: As stated in Line 114, $T_f$ refers to the future time steps. As stated in Line 135, $T_s$ represents the state steps used for initializing state queries. As stated in Line 233, in our default setting, $T_s=T_f$. We utilize one state query to represent several time steps in the left part of Table 6. In these cases, $T_s$ differs from $T_f$. **Q2: For the motion mode $Q_m$.** As stated in Line 146 and Line 151, we have $K$ mode queries to represent $K$ trajectories. We decode the $K$ future trajectories and their corresponding probabilities, using losses to optimize specific features for different motion modes, as done in QCNet [1], MTR [2], and other works. **Q3: About loss function $L_{ts}$ and intermediate features.** As stated in Line 143 and shown in Figure 2, we use an MLP to decode state queries into a single future trajectory $Y_f$ and calculate the loss with ground truth $Y_{gt}$ to obtain $L_{ts}$. Thus, the state queries $Q_s$ serve as intermediate features, similar to the mode queries $Q_m$. The role of these two types of intermediate features is to form the hybrid queries $Q_h$ to decode the final outputs. **Q4: About detailed information about the model’s parameters and settings.** - Training settings: Line 184, Line 459 (Appendix). - Dataset and metric settings: Line 175, Line 180. - Model size (5.9M) and inference speed (38ms): Line 242. 
- The number of layers in each component: Table 7 (Appendix). ## _Response to contributions._ **Q5: Table 1 and Table 3 do not include baseline methods such as SEPT, SEPT++, QCNeXt.** For Table 1, our method (ranked 3rd) is better than SEPT [3] (ranked 4th) on the official leaderboard of Argoverse 2. SEPT is a self-supervised method utilizing all sets (including the test set) for pretraining. It is orthogonal to ours, and we believe that SEPT can also be integrated into our method for further improvements. Additionally, SEPT++ has not been released yet. For Table 3, our model is not specifically designed for the multi-agent setting, unlike QCNeXt [4], which is a method specifically designed for competition. We apologize for the insufficient comparison, which we have addressed in the revised paper. **Q6: Compare with previous works (e.g. Motion Mamba) for time series data.** Although both Motion Mamba [5] and our method utilize Mamba modules, the motivation and tasks are completely different. Our critical idea is decoupling motion forecasting, and Mamba is an effective tool to implement this aim. Additionally, the structures are also completely different. We will add a discussion to highlight the distinctions and contributions of our work compared to Motion Mamba in the revised manuscript. > [1] Query-centric trajectory prediction. CVPR, 2023. > [2] Motion transformer with global intention localization and local movement refinement. NeurIPS, 2022. > [3] Sept: Towards efficient scene representation learning for motion prediction. ICLR, 2024. > [4] Qcnext: A next-generation framework for joint multi-agent trajectory prediction. arXiv preprint:2306.10508, 2023. > [5] Motion mamba: Efficient and long sequence motion generation with hierarchical and bidirectional selective ssm. arXiv preprint:2403.07487, 2024. 
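As background for the $minADE_K$/$minFDE_K$ numbers quoted throughout these rebuttals, the metrics can be computed with a short helper. This is an illustrative numpy sketch with a hypothetical name (`min_ade_fde`), not the official benchmark code:

```python
import numpy as np

def min_ade_fde(trajs, gt):
    # trajs: (K, T, 2) predicted modes; gt: (T, 2) ground-truth trajectory.
    dists = np.linalg.norm(trajs - gt[None], axis=-1)    # (K, T) per-step L2 errors
    return dists.mean(axis=1).min(), dists[:, -1].min()  # minADE_K, minFDE_K
```

Taking the minimum over the $K$ modes means a multi-modal predictor is only penalized for its best hypothesis, which is why $minADE_6$/$minFDE_6$ are much smaller than the single-mode ($K=1$) variants in the tables above.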
--- Rebuttal 2: Comment: Dear Reviewer pCB4, We appreciate your time reviewing, and we would really like to have a further discussion with you to see whether our response resolves your concerns. We have addressed all the thoughtful questions raised by the reviewer (e.g., technical descriptions and contributions), and we hope that our work’s impact and results are better highlighted with our responses. It would be great if the reviewer could kindly check our responses and provide feedback with further questions/concerns (if any). We would be more than happy to address them. Thank you! Best wishes, Authors --- Rebuttal Comment 2.1: Comment: Thank you for the reply, and I appreciate the additional information, which does help clarify some points for the reader. Regarding the methods on the leaderboard, I suggest including all relevant methods rather than selectively comparing with those that perform worse than the proposed model. Explaining why certain methods outperform the proposed model would not necessarily diminish the overall contribution of the paper. Additionally, it would be beneficial to address why one model performs better than another under different circumstances, as well as the limitations of the proposed method compared to others. Given the added details and improved clarity, I am inclined to raise my score. --- Reply to Comment 2.1.1: Comment: Dear Reviewer pCB4 We appreciate the reviewer's time for reviewing and thanks again for the valuable comments and the improved score! We will revise and refine the paper as suggested. Best wishes Authors
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Local Anti-Concentration Class: Logarithmic Regret for Greedy Linear Contextual Bandit
Accept (poster)
Summary: This paper analyzes a Greedy bandit algorithm, in the context of a linear bandit problem where (1) a regression parameter is unknown and fixed for the experiment and (2) the $K$ arms from which the decision-maker can choose are sampled at the beginning of each step $t$, from a fixed context distribution. Recent works have shown that in similar problems exploration is not necessary, and greedy algorithms are sufficient to achieve satisfying theoretical guarantees. This observation is interesting because (1) some standard methods like UCB can be costly in linear bandits, and (2) in some settings exploration might be controversial, since it might imply not acting optimally with the knowledge at hand, with consequences for living subjects. The authors first propose a generic condition on the context distributions, called the LAC condition, under which they analyze Greedy. They then describe the two main statistical challenges needed for the analysis, before presenting poly-logarithmic regret guarantees under bounded context distributions. Finally, they showcase experiments that validate their approach, advocating for the use of Greedy to tackle the linear bandit problem that they consider. Strengths: First, for the reasons mentioned in the previous paragraph, I believe that Greedy algorithms work in some learning settings and understanding their properties is an interesting research question. It is true that, with the development of powerful tools like optimism or Thompson Sampling, we tend to jump on these methods to tackle every bandit problem without necessarily questioning whether their exploration mechanism is necessary or not. Hence, this line of work is, in my opinion, a useful reminder that simple methods should be tried first before moving to those more elaborate tools. 
Then, the authors put a lot of effort into describing and providing intuitions on the two statistical challenges posed by their problem, namely the positivity of the eigenvalues of the design matrix (i.e., the implicit exploration of greedy), and the fact that randomly sampling contexts guarantees relatively large sub-optimality gaps at each time step. These two challenges are the key ingredients to derive the poly-logarithmic regret guarantees for the greedy algorithm proposed by the authors. Furthermore, the LAC condition is well-described and rather intuitive. Weaknesses: Edit: after reading the rebuttal and discussing with other reviewers I decided to increase my score ______________________________ In my opinion, the first weakness is the setting itself. Other works in the literature seem to tackle two settings: the one presented in the paper, and another one which actually seems to be the main setting of the other works, in which the arms are fixed but the regression parameter is sampled at each round. This second setting looks more natural to me: the arms are fixed but their performance varies according to external environmental factors, eventually making each of them optimal in some context. I can see the applications of this setting, but I cannot see the applications of the setting presented in the paper. Then I believe that, although the main paper is not very technical, the presentation makes it difficult to grasp the main technical insights of the analysis. Many arguments are discussed but not properly sketched, and it is hard to precisely correlate the two « challenges » with the actual regret analysis. I decided to check the proof of the regret bound in the appendix, and then things got even worse: the appendix is so badly organized that it requires at least 4 screens to check any single result, due to no clear proof scheme and various technical results spread everywhere. 
Overall, after considerable effort I did not succeed in understanding precisely how each result is used in the regret analysis. I would suggest that the authors re-organize the paper by properly writing out the regret analysis, pointing out along the way how the main technical results are used; these results would themselves be properly proved in dedicated sections. In the appendix there should not be proof sketches but rigorous proofs, and discussions about intuitions should be clearly separated from the proofs. Furthermore, the unbounded context case seems to be tackled in an extremely complicated way. I do not get the point of this; it does not even seem important in the main paper, and maybe tackling the bounded case only would be sufficient and would remove some confusion. Second, I do not get why the authors do not eventually use the proof for bounded contexts with $x_{max}=O(polylog(T))$, which is eventually true with high probability with light-tailed contexts. Overall, it seems to me that the paper requires significant re-writing to (1) make the proofs rigorous and easy to check (or even just such that an expert non-author reader might be able to check them), (2) clarify the contributions by focusing on precise interesting cases, (3) more precisely link the « challenges » with the regret analysis. Furthermore, the setting appears artificial in the current version of the paper. Technical Quality: 2 Clarity: 1 Questions for Authors: * The LAC condition implies a mode of the context distribution in zero, and some flatness around $0$. I am wondering how instrumental is this property to guarantee the implicit exploration of Greedy and the not-too-small sub-optimality gap. For instance, with a Gaussian in mind, do the guarantees generalize if we assume the existence of a mode and shift the condition to be on the norm of $x-a$ for some vector $a$? 
The adaptation does not seem direct to me because sampling around $0$ facilitates sampling diverse directions, making the gap emerge. * Could you provide some justification for the setting, i.e., some applications where it would make sense to assume that $K$ arms are drawn i.i.d. from a context distribution at each time step, with independence between steps? * What is special about unbounded contexts that makes it relevant to keep the distinction in the paper? * The dependency in $t$ in challenge 2 looks fishy to me: I don’t see why the probability bound should depend on $T$ (or 1): the probability only depends on the context distribution and K, so the bound should only depend on $\epsilon$. Furthermore, by independence the same bound should hold for all $t$. Am I missing something? In addition, assuming the result is true, how is the $1/\sqrt{T}$ shaved off in the analysis? Because I would assume that the regret analysis would consist in multiplying the error by $T$ in the case where the event does not hold. * Additional related question: what $\epsilon$ is used in the analysis? Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper and for the comments. However, we believe there is a fundamental disagreement between the reviewer's comments and the focus of our study. We hope to remedy this through open-minded discussion. We strongly believe and remain very confident that our work presents significant value and very important results that advance the existing knowledge about greedy algorithms in contextual bandits. --- **[W1]** With all due respect, we are very concerned that the reviewer does not even agree with the stochastic context setting and suggests a new setting (the reviewer thinks it "looks more natural") where features are fixed but the parameter is sampled. Then, this leads to the reviewer's conclusion that our contribution should be somehow discounted, which we believe is an unfair evaluation. This is problematic because **all of the existing greedy contextual bandit literature so far has been proposed under stochastic contexts, focusing on how such stochasticity in context allows for greedy algorithms to achieve sub-linear regret** [8, 16, 25, 28]. In fact, the previous literature focused only on very few distributions (e.g., Gaussian and uniform), as we state in the paper; our work significantly expands this set while achieving much smaller regret. Such a comment not only **rejects the entire literature on greedy contextual bandits** but also **raises concerns about whether our work can be adequately evaluated compared to the existing results** if the problem setting itself is disregarded. Our work significantly expands the set of context distributions admissible for greedy bandit algorithms, which has been an open question among many researchers. There is a rich history of contextual bandits with stochastic contexts even beyond the greedy bandit literature [5, 8, 15, 16, 25, 28]. 
We hope for a fair re-evaluation based on the established and widely accepted problem setting compared to the relevant literature within the field. Please do not take this personally; we ask that the reviewer put themselves in the authors' shoes and consider whether there is any meaningful feedback to be gained from a comment that discredits the problem itself and disregards the entire history of research on greedy contextual bandits (and whether the authors' hard work can be evaluated fairly under it). This is the main problem setting that all relevant literature has been working on. We respectfully but strongly dispute this point. --- **[W2]** Firstly, we strongly disagree with the reviewer's statement that "the main paper is not very technical." We are unsure how our results and analysis could be perceived as "not very technical" to begin with. While we appreciate the feedback on improving the presentation in the appendix and will incorporate appropriate edits, we feel that the assertion that our paper is "not very technical" is unfounded. Throughout our paper, we present rigorous analysis and stronger results than what has previously been known in the literature. Despite the disagreement, we are more than happy to provide a proof sketch; some details are already presented in Appendix D, but here we present more thorough arguments. We first demonstrate how our two challenges can lead to logarithmic regret. *Challenge 1: $\sqrt{t}$-rate $\ell_2$ concentration* The first challenge controls the $\ell_2$ statistical resolution of the estimator. Under Challenge 1, \begin{align*} |X_{a(t)}(t)^\top (\hat{\theta}_{t-1} -\theta^\star)| \leq cx_{\max}\frac{\sqrt{d}}{\sqrt{ \lambda_\star t}}, \quad |X_{a^\star(t)}(t)^\top (\hat{\theta}_{t-1} -\theta^\star)| \leq cx_{\max}\frac{\sqrt{d}}{\sqrt{ \lambda_\star t}} \end{align*} hold with high probability. Here, $c$ is a $\tilde{O}(1)$ constant. 
However, this resolution alone is insufficient for logarithmic regret; it only yields an $O(\sqrt{T})$ regret bound. *Challenge 2: Logarithmic regret* With the help of Challenge 2 (the margin condition), we can obtain a logarithmic upper bound on the expected regret. When the greedy policy selects $a(t)$, it means that \begin{align*} X_{a(t)}(t)^\top \hat{\theta}_{t-1} \geq X_{a^\star(t)}(t)^\top \hat{\theta}_{t-1} \end{align*} and, by the definition of the optimal arm, \begin{align*} X_{a(t)}(t)^\top \theta^\star \leq X_{a^\star(t)}(t)^\top \theta^\star \end{align*} holds. Under Challenge 1, we get \begin{align*} \operatorname{reg}'(t):= X_{a^\star(t)}(t)^\top \theta^\star- X_{a(t)}(t)^\top \theta^\star \leq 2cx_{\max}\frac{\sqrt{d}}{\sqrt{ \lambda_\star t}}. \end{align*} Next, we define the event $E$ as the event on $\mathbf{X}(t)$ that regret occurs, i.e., $\operatorname{reg}'(t) > 0$. Under the event $E$, the suboptimality gap of $\mathbf{X}(t)$ satisfies \begin{align*} \Delta(\mathbf{X}(t)) \leq \operatorname{reg}'(t) \leq 2cx_{\max}\frac{\sqrt{d}}{\sqrt{ \lambda_\star t}} \end{align*} and, by Challenge 2, we get \begin{align*} \mathbb{P}[\operatorname{reg}'(t) >0] \leq 2c x_{\max}C_{\Delta} \frac{\sqrt{d}}{\sqrt{ \lambda_\star t}} + \frac{1}{\sqrt{T}} \leq 3cx_{\max}C_{\Delta} \frac{\sqrt{d}}{\sqrt{ \lambda_\star t}}. \end{align*} Combining the two, we can bound the expected regret as \begin{align*} \operatorname{reg}(t) &= \mathbf{E}[\operatorname{reg}'(t)] \leq 3cx_{\max}C_{\Delta} \frac{\sqrt{d}}{\sqrt{ \lambda_\star t}} \times 2cx_{\max}\frac{\sqrt{d}}{\sqrt{ \lambda_\star t}}\\ &= 6c^2x_{\max}^2 C_{\Delta} \frac{d}{\lambda_\star t}. \end{align*} Using this argument, the only remaining goal is to bound the two constants in Challenges 1 and 2. **[Bounded and unbounded contexts]** Please see the [Q3] part. 
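The two-challenge argument above can also be checked numerically. Below is a minimal simulation sketch of a greedy linear contextual bandit with a ridge estimator; it is not the paper's code, and the dimension, horizon, number of arms, noise level, and Gaussian context distribution are our own illustrative choices:

```python
import numpy as np

def greedy_linear_bandit(T=2000, d=5, K=10, noise=0.1, seed=0):
    """Purely greedy linear contextual bandit with i.i.d. Gaussian contexts.

    Each round draws K fresh context vectors, pulls the arm maximizing
    x^T theta_hat, and re-estimates theta by ridge regression
    (regularizer = identity, for invertibility in early rounds).
    Returns the cumulative regret trajectory.
    """
    rng = np.random.default_rng(seed)
    theta_star = rng.normal(size=d)
    theta_star /= np.linalg.norm(theta_star)

    A = np.eye(d)              # ridge-regularized Gram matrix
    b = np.zeros(d)            # sum of chosen contexts times rewards
    regret = np.zeros(T)
    for t in range(T):
        X = rng.normal(size=(K, d))           # stochastic (LAC) contexts
        theta_hat = np.linalg.solve(A, b)
        a = int(np.argmax(X @ theta_hat))     # greedy arm choice, no exploration
        reward = X[a] @ theta_star + noise * rng.normal()
        A += np.outer(X[a], X[a])
        b += X[a] * reward
        regret[t] = np.max(X @ theta_star) - X[a] @ theta_star
    return np.cumsum(regret)

cum_regret = greedy_linear_bandit()
print(cum_regret[-1])
```

With Gaussian contexts the cumulative regret curve flattens quickly, consistent with the sub-linear (poly-logarithmic) bound sketched above.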
--- Rebuttal 2: Title: Answers to Reviewer's Questions Comment: **[Q1]** **No, your first sentence (premise) in the question is incorrect:** The LAC condition **does not "imply a mode of the context distribution in zero."** It seems there might be a misunderstanding of the LAC condition's properties. If you need any clarification, please feel free to let us know. Regardless, if you are still interested in a shifted mean, it turns out that this is not an issue at all. Our regret bound still holds. For example, for the Gaussian with a shifted mean, please see Appendix C.1. Here is a simple explanation. Let $f(x)$ be the density of the mean-zero contexts $\mathbf{X}(t)$, with LAC function $L(\cdot)$. Then, the shifted contexts with mean $\mu$ have density $g(x) = f(x + \mathbf{\mu})$, and observe that \begin{align*} \lVert \nabla \log g(x) \rVert_{\infty} &= \lVert \nabla \log f(x + \mu) \rVert_{\infty} \\ &\leq L( \lVert x + \mu\rVert_{\infty}) \\ &\leq L( \lVert x \rVert_{\infty} +\lVert \mu\rVert_{\infty}). \end{align*} Hence the density $g(x)$ is LAC with function $L'(x) = L (x + \lVert \mu \rVert_{\infty})$, and since $\lVert \mu\rVert_{\infty} = O(1)$, it has the same rate. **[Q2]** We believe **there is a misunderstanding in the comment here**. We do NOT assume i.i.d.-ness of the arms at all. We only assume independence of the entire arm set over time (note the clear difference between i.i.d.-ness of each arm and mere independence of the arm set over time!), and hence we allow arms to be dependent on each other within a given arm set. A rich body of linear contextual bandit studies assumes stochastic contexts and independence of the contexts (often even stronger i.i.d. contexts) over time [5, 8, 15, 16, 25, 28]. In particular, all of the relevant greedy contextual bandit literature assumed even i.i.d.-ness (often i.i.d. 
for each arm), which is stronger than our problem setting that only requires independence of the arm set (not of each arm, and not i.i.d.-ness). We emphasize again that $X_i(t), X_j(t)$ can be correlated for $i, j \in [K]$. Hence, we strongly believe the setting is well justified, and we ask that our work be evaluated fairly within it (note again that our problem setting is even weaker than the previous settings in greedy contextual bandits). **[Q3]** We are happy to address this question. The reason we present both bounded and unbounded contexts (particularly why we include analysis for unbounded contexts) is that previous works in greedy contextual bandits largely utilized unbounded noise (e.g., Gaussian) in their feature setups. Hence, for fair comparison with the existing results [25, 28, 29], we include the unbounded case as well. There is also a technical reason to separately consider the two cases. Boundedness assumptions affect the regret bound differently. Hence, we would like to use an appropriate soft boundedness for unbounded contexts. If we were to assume the conventionally accepted $\ell_2$ boundedness for light-tailed unbounded distributions, there could be a dimension dependency in the high-probability bound. Generally, this is a $\tilde{O}(\sqrt{d})$ factor, which is often ignored but can be significant. However, since we assume $\psi_1$ or $\psi_2$ norm boundedness for unbounded contexts, the bound is dimension-free and we obtain a tighter result in a more transparent manner. **[Q4-5]** Please see the answer to the W2 part; we believe the proof sketch there will help address your questions. We would be happy to answer any further questions. --- Rebuttal Comment 2.1: Comment: Thank you very much for your in-depth response. After reading it and discussing with other reviewers, I acknowledge that * My concern about the setting is an opinion and should not be a motive for rejection. * My technical questions are answered, thank you. 
Furthermore, it seems that some reviewers have been able to check in detail the theoretical results, leading them to strongly support the paper. In that case, I am happy to increase my score and recommend acceptance. However, I still believe that some rewriting might be beneficial to improve the clarity of the final version of the paper. --- Reply to Comment 2.1.1: Title: Thank you Comment: Dear Reviewer Fhbd, Thank you for your open-mindedness in recognizing our work and for the increased score. We will make sure to improve our writing, particularly in the appendix, for the final version.
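As a numerical aside on the $\tilde{O}(\sqrt{d})$ dependency discussed under [Q3] above: the $\ell_2$ norm of a standard Gaussian context vector concentrates around $\sqrt{d}$, which is the source of that dimension factor under plain $\ell_2$-boundedness. A quick Monte Carlo check (sample sizes and dimensions are our own illustrative choices, not from the paper):

```python
import numpy as np

# Monte Carlo check: E||x||_2 for x ~ N(0, I_d) grows like sqrt(d),
# so the ratio mean(||x||_2) / sqrt(d) approaches 1 as d grows.
rng = np.random.default_rng(0)
ratios = {}
for d in (2, 50, 500):
    norms = np.linalg.norm(rng.normal(size=(10_000, d)), axis=1)
    ratios[d] = norms.mean() / np.sqrt(d)
    print(d, round(ratios[d], 3))
```

This is why assuming $\psi_1$/$\psi_2$ norm boundedness per coordinate, rather than a raw $\ell_2$ bound, avoids carrying the $\sqrt{d}$ factor through the high-probability argument.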
Summary: This paper aims to expand the range of distributions that can be used efficiently in exploration-free greedy linear contextual bandits. For this purpose, a new condition called Local Anti-Concentration (LAC) is introduced. It is claimed that different distributions from the exponential family satisfy this property and that they do not require the margin property to achieve $O(\mathrm{poly}\log T)$ regret. Strengths: For the Gaussian, the authors show that the LAC condition holds. Hence, they show an improvement in the regret without assuming any margin condition. Weaknesses: 1. 89, 264: It's unclear what the entries of this Gram matrix represent. 2. 24-25: It's not explained why healthcare and clinical domains might find exploration infeasible or unethical. 3. 57: The specifications of the margin condition are not explained. 4. Even though there are experiments in the paper, in the guidelines the answer to question 5 is marked as N/A. It's unclear whether the code belongs to the authors or another researcher and hence, the answer. If the code belongs to the authors, the answer should have been YES or NO. 5. Lines 63 to 67 are repeated word for word in 106 to 109, except for the addition of the word "bandit". Please delete one of these sections. 6. 120: It is not clear what D and B are supposed to be. The authors can add the word "sets" to be more clear. 7. 526-540: Appendix C.1 is not written extensively; the proof is provided only for the Gaussian distribution. This is the main contribution of the paper as stated in 91-92 and used in different places such as 311. Without the proof, it's unclear how well the rest of the statements for the distributions hold. 8. 490: There is a typo as these algorithms are stated as linTS and linUCB instead of LinTS and LinUCB. 9. 114: There is another typo: "unecessary" should be "unnecessary". 10. 352: Experiment results are not discussed extensively, making it hard to understand how the experiments support the claims. 
Technical Quality: 2 Clarity: 2 Questions for Authors: - 352: LinTS and LinUCB algorithms are shown in the figures in the Experiments section. However, it is unclear what these algorithms are. Were they designed by the authors or are they the work of other researchers? In 1444, they are stated as existing bandit algorithms. If so, please provide the appropriate references. - 246: What do the authors mean by if two challenges are "satisfied"? Do you mean "overcome"? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: - Technical contribution: The most significant proof, which is the proof that the given distributions satisfy the LAC condition, was left out. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and for the opportunity to discuss our work with you. Most of your feedback appears to be clarifications and suggestions for stylistic edits or minor typos, which we appreciate. However, none of your comments seem critical enough to warrant a rating of "Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility, and/or incompletely addressed ethical considerations." Hence, we sincerely ask for a re-evaluation of our work. We strongly believe that the significance and impact of our results are very high, significantly expanding what had previously been known and doing so at a very timely moment. And, please let us know if you have any remaining questions. --- **[W1]** The definition of the Gram matrix is $\Sigma(t):=\sum_{s=1}^t X_{a(s)}(s)X_{a(s)}(s)^\top$ and is stated in Algorithm 1. Readers of the linear bandit literature are familiar with this term, but we will provide the formal definition of the Gram matrix earlier in the text. **[W2]** In healthcare, if a healthcare provider treats a patient with what appears to be a sub-optimal (non-greedy) treatment just for the purpose of exploration, to learn the effect of a new treatment rather than for the benefit of the particular patient being treated, it would clearly be unethical or often not feasible in actual healthcare practice (beyond clinical trials). This aspect has been discussed in various previous works, including [8]. **[W3]** The exact definition of the margin condition is stated in Challenge 2 (line 266). **[W7]** (LAC examples) First, we prove results in line 195. For exponential, Laplace, and Student's t, we prove results for the 1-dimensional distributions. However, using Proposition 1, we can extend these to multi-dimensional contexts. * Exponential: $\nabla \log f(x) = -\lambda $ and hence $| \nabla \log f(x) | \leq \lambda$. 
* Uniform: $\nabla \log f(\mathbf{x}) = \mathbf{0}$ and hence $||\nabla \log f(\mathbf{x}) ||_\infty = 0$. (multi-dimensional) * Laplace: $f(x) = \frac{1}{2 b} \exp \left(-\frac{|x-\mu|}{b}\right)$ and then $|\nabla \log f ({x})| = \frac{1}{b}$ (for $x \neq \mu$). * Student's t: $f(x)= \Gamma\big(\frac{\nu+1}{2}\big) / \sqrt{\nu \pi} \Gamma\big(\frac{\nu}{2}\big) \cdot \big(1+\frac{x^{2}}{\nu}\big)^{-(\nu+1)/2}$, and then $|\nabla \log f(x)|=\big|\nabla \frac{\nu +1}{2}\log\big(1 + \frac{x^2}{\nu}\big)\big| = \frac{(\nu+1)|x|}{\nu + x^2}\leq C(\nu)$ for some constant $C(\nu)>0$. **[W10]** Our experiment demonstrates that a greedy algorithm clearly outperforms other widely used algorithms (LinUCB and LinTS) that balance exploration and exploitation for common distributions. It appears that for those distributions, any type of exploration is unnecessary -- greedy algorithms suffice, which is the main assertion of this paper. Due to space constraints, the discussion is provided in Appendix N. **[W4-6, 8-9]** Thank you for pointing out the typos and for the stylistic suggestions. We will incorporate the appropriate edits suggested by the reviewer. --- **[Q1]** LinUCB and LinTS are the most widely used linear contextual bandit algorithms. The LinUCB algorithm follows the paper by Abbasi-Yadkori et al. (2011), and the LinTS algorithm follows the paper by Agrawal and Goyal (2013) to conduct the experiments. **[Q2]** We mean that if the two challenges regarding the context distribution are satisfied, i.e., if the two key constants exist, logarithmic regret can be achieved. However, to obtain an exact upper bound, it is essential to bound the two constants rather than merely prove their existence, and we discuss this throughout the whole paper. **[Details for two challenges and regret]** The following briefly describes how the two challenges lead to logarithmic regret. *Challenge 1: $\sqrt{t}$-rate $\ell_2$ concentration* The first challenge controls the $\ell_2$ statistical resolution of the estimator. 
Under Challenge 1, $$ |X_{a(t)}(t)^\top (\hat{\theta}_{t-1} -\theta^\star)| \leq c x_{\max} \frac{\sqrt{d}}{\sqrt{ \lambda_{\star} t}}, \quad |X_{a^\star(t)}(t)^\top (\hat{\theta}_{t-1}-\theta^\star)| \leq c x_{\max} \frac{\sqrt{d}}{\sqrt{ \lambda_{\star} t}} $$ hold with high probability. Here, $c$ is a $\tilde{O}(1)$ constant. However, this resolution alone is insufficient for logarithmic regret; it only yields an $O(\sqrt{T})$ regret bound. *Challenge 2: Logarithmic regret* With the help of Challenge 2 (the margin condition), we can obtain a logarithmic upper bound on the expected regret. When the greedy policy selects $a(t)$, it means that \begin{align*} X_{a(t)}(t)^\top \hat{\theta}_{t-1} \geq X_{a^\star(t)}(t)^\top \hat{\theta}_{t-1} \end{align*} and, by the definition of the optimal arm, \begin{align*} X_{a(t)}(t)^\top \theta^\star \leq X_{a^\star(t)}(t)^\top \theta^\star \end{align*} holds. Under Challenge 1, we get \begin{align*} \operatorname{reg}'(t):= X_{a^\star(t)}(t)^\top \theta^\star- X_{a(t)}(t)^\top \theta^\star \leq 2cx_{\max}\frac{\sqrt{d}}{\sqrt{ \lambda_\star t}}. \end{align*} Next, we define the event $E$ as the event on $\mathbf{X}(t)$ that regret occurs, i.e., $\operatorname{reg}'(t) > 0$. Under the event $E$, the suboptimality gap of $\mathbf{X}(t)$ satisfies \begin{align*} \Delta(\mathbf{X}(t)) \leq \operatorname{reg}'(t) \leq 2cx_{\max}\frac{\sqrt{d}}{\sqrt{ \lambda_\star t}} \end{align*} and, by Challenge 2, we get \begin{align*} \mathbb{P}[\operatorname{reg}'(t) >0] \leq 2c x_{\max}C_{\Delta} \frac{\sqrt{d}}{\sqrt{ \lambda_\star t}} + \frac{1}{\sqrt{T}} \leq 3cx_{\max}C_{\Delta} \frac{\sqrt{d}}{\sqrt{ \lambda_\star t}}. \end{align*} Combining the two, we can bound the expected regret as \begin{align*} \operatorname{reg}(t) &= \mathbf{E}[\operatorname{reg}'(t)] \leq 3cx_{\max}C_{\Delta} \frac{\sqrt{d}}{\sqrt{ \lambda_\star t}} \times 2cx_{\max}\frac{\sqrt{d}}{\sqrt{ \lambda_\star t}}\\ &= 6c^2x_{\max}^2 C_{\Delta} \frac{d}{\lambda_\star t}. 
\end{align*} Using this argument, the only remaining goal is to bound the two constants in Challenges 1 and 2. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for all their clarifications. I have a few clarifying questions before I reassess my score. 1. Equation (3) suggests that the probability that the minimum sub-optimality gap is small is also small. Specifically, the probability that the gap falls below $\varepsilon$ is proportional to $\varepsilon$. The proportionality constant, $C_{\Delta}(t)$, has been further bounded in Theorem 3, and a growth rate of $O(\log K)$ has been established. I suspect that this is a weak bound. Here's a counterexample: Assume $d=1$, and randomly sample $K$ points uniformly from $[0,1]$. The suboptimality gap would be proportional to the gap between the first and second order statistics $X_{(1)} - X_{(2)}$, where we have defined $X_{(1)} > X_{(2)} > \cdots > X_{(K)}$. Now, as $K$ increases, I think the minimum gap should shrink (the larger the number of contexts, the smaller the sub-optimality gap). Hence, for a large number of contexts, say $O(1/\varepsilon)$, the probability in (3) may no longer be small, thus violating the LAC property. Is my understanding correct? To verify my hypothesis, I suggest the authors run a simple simulation: Sample $K$ points between $[0,1]$ uniformly at random. On the x-axis, plot $K$, and on the y-axis, plot $X_{(1)} - X_{(2)}$. The authors can use $K = 5$, $100$, $500$, $1000$. 2. Additionally, I have a clarification question. Is the expectation in the diversity constant equation (Equation 2) with respect to contexts $X(t)$, or both $X(t)$ and $a(t)$? Also, I believe the second $X_{a(t)}(t)$ should have a transpose sign. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your feedback and your willingness to reassess your score. We are more than happy to provide the responses to your questions. --- **[1]** It is a great question. 
First, following the setup you mentioned, let $Z := X_{(1)} - X_{(2)}$. If the density of $Z$ is upper bounded by some constant $M$, then for any $\epsilon > 0$, $$ \mathbb{P}[0 \leq Z \leq \epsilon] \leq M\epsilon $$ holds. We emphasize again that $M$ is independent of the choice of $\epsilon$; it depends only on the context distribution and $K$. In the proof of regret analysis, we choose an arbitrary $\epsilon$. Hence, we want this inequality to hold for all $\epsilon$. Then, for a fixed $K$, our next question becomes: "What is the upper bound of $M$?" Our margin constant $C_\Delta$ is closely related to the maximum density by the above argument, and we bound the maximum density of the suboptimality gap of contexts throughout the paper. (Our definition of Challenge 2 includes an additional term $\frac{1}{\sqrt{T}}$ due to some technical reasons, but it is weaker than the above inequality). For a 1-dimensional uniform distribution, it is widely known that $M \asymp K$ (you can refer to any textbook on extreme value theory, such as [12]). However, for a $d$-dimensional uniform distribution within a ball, $M$ **decreases** rapidly as $d$ increases, which is a beneficial effect of high dimensionality. For details, please see Appendix I.8, where this matter is explicitly discussed. For Gaussian or other light-tailed distributions, $M$ exhibits a logarithmic dependency on $K$. Some asymptotic results for $M$ with respect to $K$ are proven in [12], and we provide a non-asymptotic bound that maintains this logarithmic dependency. This corresponds to the unbounded contexts case in our paper, and all of our results match the known asymptotic results. Regarding the experiment you mentioned, the difference between order statistics $X_{(1)} - X_{(2)}$ (two extremes) is well-known, with its asymptotic distribution understood. 
As stated earlier, for the 1-dimensional uniform distribution, the corresponding density bound $M$ grows proportionally to $K$, while for Gaussian or light-tailed distributions it grows proportionally to $\log K$. For the $d$-dimensional uniform distribution, it necessarily depends on both $K$ and $d$, and decreases with $d$. Given this solid theoretical foundation, we expect experimental results to align with these predictions. However, if you would like additional experimental results, we can provide them. We have also adjusted our approach to derive a non-asymptotic bound that applies to the bandit problem, and our results remain valid under this scenario. --- **[2]** The expectation is taken only with respect to $\mathbf{X}(t)$. At time $t$, for any fixed history $H_{t-1}$, we perform the greedy policy with the estimator $\hat{\theta}_{t-1}$. Hence, when the entire set of contexts $\mathbf{X}(t) = (X_1(t), \dots, X_K(t))$ is revealed, $a(t)$ is determined immediately given the history $H_{t-1}$. Then the exact statement is: "At time $t$, for any history $H_{t-1}$, $$ \mathbb{E}_{\mathbf{X}(t)} \left[X_{a(t)}(t) X_{a(t)}(t)^\top\right] \succeq \lambda_\star I_d $$ holds with some $\lambda_\star > 0$." In the proof, we proved a stronger statement: "For any greedy policy with any $\theta \in \mathbb{R}^d$, $$\mathbb{E}_{\mathbf{X}(t)} \left[X_{a(t)}(t) X_{a(t)}(t)^\top\right] \succeq \lambda_\star I_d$$ holds with some $\lambda_\star > 0$." And yes, there should be a transpose sign. Thank you for pointing this out. We hope our answers provided clarification. If you have any questions, please let us know.
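The simulation proposed in the comment above is straightforward to run; a minimal sketch (the values of $K$ are the reviewer's, the trial count is our own illustrative choice) is:

```python
import numpy as np

def mean_top_gap(K, n_trials=2000, seed=0):
    """Monte Carlo estimate of E[X_(1) - X_(2)] for K i.i.d. Uniform[0,1] draws."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n_trials, K))
    X.sort(axis=1)
    # Gap between the largest and second-largest value in each trial.
    return float(np.mean(X[:, -1] - X[:, -2]))

for K in (5, 100, 500, 1000):
    print(K, mean_top_gap(K))  # shrinks roughly like 1/(K+1)
```

For uniform order statistics the spacing $X_{(1)} - X_{(2)}$ has expectation exactly $1/(K+1)$, matching both the reviewer's intuition that the gap shrinks with $K$ and the authors' statement that the density bound $M$ grows proportionally to $K$ in the 1-dimensional uniform case.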
Summary: The paper proposes a novel condition for context distribution, called *Local Anti-Concentration (LAC)*. Under LAC, the authors prove the regret of greedy algorithms for stochastic contextual linear bandits is $\mathcal{O}(\mathrm{poly} \log T)$, without additional margin assumption. The efficacy of the greedy approach for various distributions is validated numerically as well. Strengths: - Clearly, very well-written - Clear-cut contributions that significantly improve greedy algorithms for contextual linear bandits, including a provably larger class of context distributions that can achieve $\mathcal{O}(\mathrm{poly} \log T)$ regret without additional margin condition and other technical contributions. - Numerical verification Weaknesses: - The authors should provide (possibly not-to-far-fetched) distributions in which the LAC fails. - The initial parameter $\theta_0$ is mentioned briefly but not much discussed. What is the dependency of the regret on $\theta_0$? How was $\theta_0$ chosen for the experiments? In practice, should one choose $\theta_0$ randomly, or is it okay to fix it? I feel this plays an important role, as the initial "exploration" (til the diversity becomes positive) heavily depends on $\theta_0$. **(Minor) Typography suggestions** - Some of the sentences in Section 1.2 overlap with those in the paragraph above Section 1.1. I think it would be more appropriate if Section 1.2 is absorbed into the beginning of Section 1, and Section 1.1 becomes a paragraph (\paragraph{..}) - The pseudocode in Algorithm 1 seems wrong...? the while loop should be an If-Else. - In pg. 6, "Consider the desnity" -> "Consider the density" - If accepted, the authors should include the full regret bound from Appendix B.3 in the main text. Technical Quality: 4 Clarity: 4 Questions for Authors: - Is LAC necessary for the polylog(T) regret of the greedy algorithm, or at least close to it? In other words, if the distribution is not LAC, does greedy always fail? 
I would be curious to see the numerical performance of greedy algorithms for distributions that are not LAC. - Does this work for generalized linear bandits as well? How about generally structured bandits (although this seems quite unlikely) and kernelized linear bandits (this, I have no idea)? - There was an interesting paper [1] in which, for linear bandits with a rich (continuous) action set, sublinear regret *implies* a lower bound on the minimum eigenvalue of the design matrix. Of course, in this case, the margin doesn't make sense, and there is a $\Omega(d\sqrt{T})$ regret lower bound, but as this paper and [1] are conceptually similar (to my eyes), would the greedy algorithm achieve $O(d\sqrt{T})$ regret? Even if not, it would also be nice to include some discussions regarding this in the paper. - (minor) [1] https://proceedings.mlr.press/v206/banerjee23b.html Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for recognizing the value of our results. We appreciate your feedback and are happy to provide our responses to your comments. --- **[W1]** We are happy to address your comment and assure you that this should not be considered a weakness. Context distributions with discrete support, particularly fixed contexts, do not satisfy the LAC condition. However, for such distributions, the greedy algorithm fails (i.e., greedy algorithms can incur linear regret) in the worst case. To the best of our knowledge, the LAC condition is the most inclusive condition for greedy algorithms to succeed. This is a crucial finding! For further details, please read Appendix M, which contains a detailed discussion on this matter. We would be more than happy to elaborate on this. More details are also discussed in the answer to **[Q1]**. --- **[W2]** Our algorithm and regret bound are valid regardless of the choice of $\theta_0$ as long as it is bounded (and we have the freedom to choose a parameter with an adequate bound). Note that we can show that the minimum eigenvalue of the Gram matrix formed by the greedy policy's selections increases linearly with time for any $\theta_0$. Hence, we can derive the same regret bound. We will include this discussion in the revision. --- **[Q1]** LAC is a sufficient condition, not a necessary condition. However, we do not know of any distributions (yet) that do not satisfy LAC but allow poly-logarithmic regret for greedy algorithms. There are good examples of distributions that are not LAC where greedy algorithms fail (or cannot achieve poly-logarithmic regret). Firstly, it is well known that the greedy algorithm can fail in the worst case when contexts are fixed. Another example where LAC does not hold is a context distribution supported in a low-rank space. 
In this case as well, it is known that logarithmic regret is impossible [25], and, to the best of our knowledge, greedy algorithms are not known to succeed there. It is very important to note that our work significantly expands what has been known to be admissible for greedy algorithms. LAC is the most general condition currently known to allow greedy algorithms to achieve poly-logarithmic regret. --- **[Q2]** Yes, our analysis extends to the GLM bandit as well for regular link functions, as long as the link function has a bounded first derivative, which is commonly assumed in the GLM bandit literature [14, 23]. As long as the concentration of the estimator is controlled by the Gram matrix, most of the analysis is similar. In the case of the kernelized bandit, it can be expressed in the form of a linear contextual bandit through the RKHS formulation. However, it usually assumes eigenvalue decay, so the minimum eigenvalue can become arbitrarily small. Hence, it might be difficult to apply the same analysis, but it would be an interesting future direction. Until now, it was not even known what was possible for linear contextual bandits; extension to other parametric bandits or kernelized bandits would be interesting. --- **[Q3]** We appreciate your pointer to this related work. While the paper shares some common points in addressing the minimum eigenvalue of the Gram matrix, our analysis significantly differs in that one of its key ingredients is ensuring that the minimum eigenvalue of the Gram matrix increases linearly (whereas in theirs it is not). Furthermore, ensuring the margin condition from scratch is a key step that is not addressed in the reference you provided. However, analyzing the (though not logarithmic but still sublinear) performance of the greedy algorithm within the setup of the referred paper would be an interesting direction. --- Rebuttal 2: Comment: Thanks for the detailed response, which has addressed all my questions. 
I will retain my score. But please do make sure to take the other reviewers' concerns and suggestions on the paper's organization into account, including - a table of contents (a suggestion of mine that I forgot to mention) - significantly reorganizing the appendix so that readers can easily locate the main proof - typos - etc. --- Rebuttal Comment 2.1: Title: Thank you Comment: Reviewer qUQr, Thank you very much for your continued support and for recognizing the value of our work. We will definitely incorporate your feedback, along with the other reviewers' suggestions, to improve our presentation in the appendix (including adding a table of contents at the beginning of the appendix). If you have any questions in the meantime, please feel free to reach out to us!
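To make the [W1]/[Q1] discussion above concrete: the classic failure mode, where the greedy algorithm locks onto a suboptimal arm under fixed contexts but succeeds under stochastic Gaussian (LAC) contexts, can be sketched as follows. All parameter values ($\theta^\star$, horizon, noise level, the two-arm basis contexts) are our own illustrative choices, not from the paper:

```python
import numpy as np

def greedy(contexts_fn, theta_star, T=2000, noise=0.05, seed=0):
    """Greedy ridge-regression bandit; returns the cumulative regret trajectory."""
    rng = np.random.default_rng(seed)
    d = len(theta_star)
    A, b = np.eye(d), np.zeros(d)
    cum, out = 0.0, []
    for _ in range(T):
        X = contexts_fn(rng)                          # (K, d) arm features
        a = int(np.argmax(X @ np.linalg.solve(A, b))) # greedy arm choice
        r = X[a] @ theta_star + noise * rng.normal()
        A += np.outer(X[a], X[a])
        b += X[a] * r
        cum += np.max(X @ theta_star) - X[a] @ theta_star
        out.append(cum)
    return np.array(out)

theta_star = np.array([0.4, 0.6])             # arm 2 is optimal
fixed = lambda rng: np.eye(2)                 # fixed basis contexts: greedy can lock in on arm 1
gauss = lambda rng: rng.normal(size=(2, 2))   # stochastic Gaussian (LAC) contexts

r_fixed = greedy(fixed, theta_star)
r_gauss = greedy(gauss, theta_star)
print(r_fixed[-1], r_gauss[-1])
```

With the fixed basis contexts, the second coordinate of the estimator is never updated, so greedy keeps pulling the suboptimal arm and regret grows linearly; with Gaussian contexts, the randomness supplies free exploration and regret stays nearly flat.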
Summary: The paper addresses the problem of linear contextual bandits with randomly generated contexts from a distribution $f$. The goal is to determine under which conditions on $f$ a greedy algorithm (outlined in Algorithm 1) achieves reasonable regret. By introducing the notion of Local Anti-Concentration (LAC) in Definition 1, the authors demonstrate that if $f$ satisfies the LAC condition, then the greedy algorithm achieves poly-logarithmic regret. The LAC condition encompasses a wide range of distributions, such as Gaussian, uniform, Laplace, and Student’s t-distribution, making the results of the paper general and applicable across several frameworks. Strengths: The paper is well-written and addresses a very interesting problem. It makes a significant contribution by covering a large family of distributions for the contexts and demonstrating that, for this wide range, a greedy algorithm can achieve poly-logarithmic regret. The main results, Theorems 3 and 4, are noteworthy and can be of independent interest. Weaknesses: The only weakness I see is the lack of discussion about the optimality of the achieved regret concerning the parameters of interest, i.e., $d$ (dimension of the context), $K$ (number of arms), and $\alpha$ (the parameter associated to the LAC condition). While the authors addressed this by stating, "Our focus here is not solely on attaining the sharpest regret bounds, although achieving poly-logarithmic regret is highly favorable," it would have been beneficial to discuss this matter in more detail. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I would like to know the authors' thoughts on the necessity of the LAC condition. While the paper's focus is on showing that the LAC condition is sufficient, I am curious if there are significant distributions that do not satisfy the LAC condition but for which the greedy approach still achieves poly-logarithmic regret. 2. 
In line 269, the authors mention, "Eq.(3) is a relaxed version of the margin condition" and "The aforementioned existing literature explicitly assumes the condition to hold." Does this mean that the margin condition imposed in previous works can be dropped and relaxed to provide a bound on Eq.(3)? 3. I would appreciate it if the authors addressed my question in the weaknesses section. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for recognizing the value of our results. We appreciate your feedback and are happy to provide our responses to your comments. --- **[W1]** To validate the optimality of contextual bandit algorithms under stochasticity, we need to derive proper lower bounds. However, existing studies ([7, 8, 5, 15]) that have examined poly-logarithmic regret under margin conditions (similar to our paper) have not discussed lower bounds or optimality either. One caveat is that since the greedy algorithm is not adaptive, the dimensionality and $K$ dependence can vary depending on context distributions. In such cases, deriving distribution-dependent lower bounds would be meaningful to determine optimality. However, we conjecture that such an analysis would be quite challenging. Nonetheless, it would be an interesting future direction to provide a lower bound under the margin condition, not just for greedy bandit algorithms but for linear contextual bandits in general. --- **[Q1]** In short, to the best of our knowledge, the distributions that satisfy LAC are currently the only ones proven to be admissible for greedy algorithms! Typical cases where LAC is not satisfied include: 1. Fixed or discrete context distributions. 2. Low-rank or nearly low-rank contexts. In case 1, it is known that the greedy algorithm fails for fixed contexts. In case 2, it is known that logarithmic regret is impossible [25]. Additionally, for distributions with double exponential density (e.g., Gumbel distribution), LAC is not satisfied. However, with slight modifications, it might be possible to show a sublinear regret bound, but we are unsure how tight the bound can be (we do not know whether poly-logarithmic regret would be possible). It is very important to note that our work significantly expands what has been known to be admissible for greedy algorithms.
--- **[Q2]** Previous literature **assumes** the existence of a constant satisfying the margin condition and regards it as a **fixed** constant. However, we **derive** the upper bound of the margin constant from the LAC density. We emphasize again that we **do not assume** the existence of the margin constant. For example, many previous studies [7, 8, 5, 15] assume that there exists $C_\Delta$ satisfying: $$ \mathbb{P}[X_{a^\star}(t)^\top \theta^\star - \max_{i \neq a^\star} X_{i}(t)^\top \theta^\star \leq \varepsilon] \leq C_\Delta \varepsilon. $$ With this margin assumption, one can achieve logarithmic regret. However, our paper calculates and estimates the margin constant $C_\Delta$ from scratch, only assuming the densities of contexts are LAC. Also, note that if the above holds, then our equation of the margin condition (equation (3)) holds directly. Hence, our version of the margin condition is weaker. More importantly, we **do not even impose such an assumption** to start with! Instead, we prove that distributions in LAC automatically satisfy the margin condition which is a significant contribution. --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgment Comment: I would like to thank the authors for the rebuttal. For now, I will maintain my current score. I plan to discuss the paper with the other reviewers and look forward to the author's discussions with them as well. I will update my score accordingly. --- Reply to Comment 1.1.1: Title: Thank you Comment: Reviewer Lk9D, Thank you very much for recognizing the value of our work and for your support. If you have any questions in the meantime, please feel free to reach out to us!
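The greedy procedure under discussion can be sketched end to end. The following toy simulation is an illustrative assumption throughout (dimension $d = 2$, ridge regularization, noise scale, closed-form solve), not the paper's Algorithm 1: it runs an exploration-free ridge-regression bandit on i.i.d. Gaussian contexts, an LAC-type distribution.

```python
import random

def greedy_linear_bandit(theta_star, T=2000, K=5, lam=1.0, seed=0):
    """Exploration-free greedy algorithm for a 2-dimensional linear
    contextual bandit: maintain a ridge estimate of theta and always pull
    the arm with the highest estimated reward. Contexts are i.i.d.
    Gaussian, so the greedy rule can succeed without explicit exploration.
    Returns cumulative regret. (A sketch of the setting, not the paper's
    implementation.)"""
    rng = random.Random(seed)
    a11, a12, a22 = lam, 0.0, lam      # ridge Gram matrix (2x2)
    b1, b2 = 0.0, 0.0                  # sum of reward-weighted contexts
    regret = 0.0
    for _ in range(T):
        ctx = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(K)]
        det = a11 * a22 - a12 * a12    # closed-form 2x2 ridge solve
        th = ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
        means = [x * theta_star[0] + y * theta_star[1] for x, y in ctx]
        a = max(range(K), key=lambda i: ctx[i][0] * th[0] + ctx[i][1] * th[1])
        r = means[a] + 0.1 * rng.gauss(0, 1)   # noisy observed reward
        x, y = ctx[a]
        a11 += x * x; a12 += x * y; a22 += y * y
        b1 += r * x; b2 += r * y
        regret += max(means) - means[a]
    return regret
```

With stochastic Gaussian contexts, cumulative regret stays small relative to the horizon, in line with the poly-logarithmic behavior the paper proves under LAC.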
NeurIPS_2024_submissions_huggingface
2024
DeBaRA: Denoising-Based 3D Room Arrangement Generation
Accept (poster)
Summary: The authors present DeBaRA, a diffusion-based framework for indoor scene generation, and a self-score evaluation strategy to select conditioning input. They demonstrate the effectiveness of their approach on scene synthesis and several downstream tasks. Strengths: + The empirical results are promising. Weaknesses: + Insignificant Architectural Difference: The architectural differences between DeBaRA and other diffusion-based baselines (e.g., DiffuScene) are not substantial. The claim that the results benefit from these architectural differences is not sufficiently supported by experimental evidence. To be honest, we **do not** care about final performance, but about the impact of the differences. + Lack of Implementation Details and Ablation Studies: The paper does not provide sufficient implementation details, making it difficult to reproduce the results. Additionally, ablation studies are missing, which are crucial to understanding the contributions of individual components of the model. + Missing Experiments: The paper claims advantages in downstream tasks like rearrangement and completion but lacks comparative experiments with baseline methods (ATISS, LEGO-Net, and DiffuScene) both qualitatively and quantitatively. + Unclear Writing and Missing Sections: The writing is unclear in several parts of the paper. Additionally, **Section 4.4 ("Additional Results") is missing**, which further reduces the clarity and completeness of the work. + Typing Errors: There are numerous typing errors throughout the paper, such as "biaises" instead of "biases" in Line 39. These errors affect the readability and professionalism of the manuscript. Technical Quality: 2 Clarity: 1 Questions for Authors: + The inference time is reported to be significantly faster than DiffuScene (about 100 times faster). Are these advantages due to the use of EDM over DDPM? Please elaborate on how the choice of EDM contributes to the improved inference time.
Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer `sfxx` for their time and feedback. We address the reported weaknesses and questions in the following response: > Insignificant Architectural Difference: The architectural differences between DeBaRa and other diffusion-based baselines (e.g., DiffuScene) are not substantial. First, we would like to emphasize that although we do not consider our neural network architecture to be a key contribution, there are some significant differences between ours and that of DiffuScene [1]. Specifically, unlike DeBaRA, [1] adopts a U-Net backbone with 1D convolutions. It is not conditioned on the floor plan of the room and therefore performs *unbounded scene synthesis*. Our architecture features fixed positional encoding modules, linear layers, a Transformer encoder as well as a PointNet feature extractor and is therefore close to that of LEGO-Net [2] (which is not a Diffusion model), as deliberately stated in our main submission (L146). More fundamentally, however, and as mentioned in the summary above as well as in the paper, **we consider our key contributions to lie in**: 1. a continuous-time score-based model for indoor layout generation, whose output domain has been simplified to learn unconditional and class-conditional densities of object bounding boxes, expressed in a common 3D coordinate space (**Section 3.2**). It is trained following 2. a novel Chamfer objective that is permutation-invariant by design (**Section 3.3**). 3. an original Self Score Evaluation (SSE) procedure to optimally select conditioning inputs from external sources, leveraging density estimates provided by the pretrained model, allowing our method to be the first to unify the use of an LLM and of a specialized diffusion model in the context of 3D scene synthesis (**Section 3.4**).
These are the key novelties of our work, which do not exist in DiffuScene or any other prior approach, and which allow us to achieve state-of-the-art capabilities in 3D layout generation (**Table 1**), 3D scene synthesis (**Table 2**) and scene re-arrangement (`PDF` **Table 3**), while adopting a backbone that is significantly more lightweight than previous methods, which further enables real-time (< 1s) efficient sampling (**Table 3**). > The paper does not provide sufficient implementation details, making it difficult to reproduce the results. Implementation details are extensively described throughout the paper and submitted appendix, notably: - **Section A.1**. Denoiser training parameterization. - **Section A.2**. EDM sampling procedure (Algorithm 2) and hyperparameters (L487-488). - **Section B.1**. Network architecture with positional encoding formula and number of frequencies, number of layers of linear modules, their activations, dropout rate, output dimensions as well as details on the Transformer implementation with number of encoder layers, their hidden dimensions, number of heads and token masking strategy. - **Section B.2**. Details on the training protocol, including the number of epochs, batch size, optimizer, learning rate, learning rate schedulers, and data augmentation routine. - **Section B.3**. Details on the (re)implementation of baseline methods. - **Section B.4**. Link to the model and implemented prompting strategy in our LLM-guided scene synthesis pipeline. - **Section 3.4**. Pseudo-code for SSE (Algorithm 1). - We follow the data preprocessing from ATISS [3], as stated L224. We also include additional implementation details in our general response, but would be happy to answer any remaining implementation-related questions.
> The paper claims advantages in downstream tasks like rearrangement and completion but lacks comparative experiments with baseline methods Including additional experimental results against established baselines on downstream tasks is a valuable suggestion. We include **quantitative and qualitative experimental results against LEGO-Net** (which itself outperforms ATISS [3] on the scene re-arrangement task) in **Table 3** and **Figure 1** of the rebuttal `PDF`, as stated in our general response. > Unclear writing and typing errors We thank reviewer `sfxx` for reporting a typographical error and a missing reference to the table and figure of Section 4.4 (making the dedicated section appear empty in the paper). This issue, along with other minor typos, has been **corrected in the current version of the manuscript**. We will also make sure to incorporate **every** additional comment in the final version. > Please elaborate on how the choice of EDM contributes to the improved inference time. As stated in our manuscript (L474-476), EDM proposes a 2nd order Runge-Kutta stochastic sampler (Algorithm 2) that provides a **favorable trade-off** between generation quality and number of function evaluations (NFE), as verified throughout the quantitative evaluations of the seminal paper [4]. Our implementation uses 50 sampling steps, which results in an NFE of 101. On the other hand, DiffuScene uses ancestral DDPM sampling with 1000 steps / function evaluations. Additionally, **our original design choices enable our transformer-based backbone to feature around 7 times fewer parameters** (Table 3), which has a direct impact on inference time. [1] DiffuScene: Denoising Diffusion Models for Generative Indoor Scene Synthesis, Tang et al., 2024.
[2] Lego-net: Learning regular rearrangements of objects in rooms, Wei et al., 2023 [3] ATISS: Autoregressive Transformers for Indoor Scene Synthesis, Paschalidou et al., 2021 [4] Elucidating the Design Space of Diffusion-Based Generative Models, Karras et al., 2022 --- Rebuttal 2: Title: Response to the author rebuttal Comment: Thank you for your response. While some of your points partially address my concerns, I am still inclined to recommend rejecting this paper. + Novelty: Using EDM to replace DDPM for indoor scene modeling does not represent a significant contribution and offers little new insight into the field of indoor scene generation. + Experiments: Since the focus is on the training framework, its effectiveness is not validated on a sufficiently large-scale dataset. To my knowledge, the synthetic data (3D-FRONT) used contains only around 5k scenes for training (with subsets like library being even smaller), which is relatively small compared to current image datasets. + Comparisons: The lack of comparison with many autoregressive model works, such as COFS [1], and scene-graph-based methods like SceneHGN [2] and GRAINS [3], is concerning. + Evaluation Metrics: The metrics used in this study, such as FID, KL, and CAS, seem rather generic. They might not fully capture the essential aspects of scene generation, including diversity, complexity, symmetry pattern discovery, object interaction, and object concurrences. + Rendering Quality: The quality of the rendered indoor scenes does not appear to meet the standards expected by artists. For better examples, the authors may refer to [2, 3]. Questions: + It would be insightful to investigate whether this diffusion-based method can produce more diverse scenes, compared to autoregressive-based methods. + The evaluation protocol is not clear. How many generated/GT scenes do you use for evaluation? References: [1] Para et al., "COFS: Controllable Furniture Layout Synthesis," SIGGRAPH 2023.
[2] Gao et al., "SceneHGN: Hierarchical Graph Networks for 3D Indoor Scene Generation with Fine-Grained Geometry," TPAMI 2023. [3] Li et al., GRAINS: Generative Recursive Autoencoders for INdoor Scenes, TOG 2019. --- Rebuttal Comment 2.1: Title: Response to reviewer comment (2/2) Comment: To summarize the key points: 1. We evaluate our approach on the same standard dataset as done in recent state-of-the-art methods [1,2,3]. 2. The baselines that we compare against supersede the methods suggested by the reviewer. Nevertheless, for completeness, in the final version we will be happy to add comparisons against methods that can be evaluated on our test set. 3. We use the same presentation pipeline as in recently published works, ensuring consistent comparisons. We thank reviewer `sfxx` for their constructive feedback. We are confident that all of the requested changes can be addressed in a minor revision. Furthermore, we remain confident of the contribution of our work, as also confirmed by, e.g., reviewer `Fsdh`. [1] DiffuScene: Denoising Diffusion Models for Generative Indoor Scene Synthesis, Tang et al., in CVPR 2024. [2] Lego-net: Learning regular rearrangements of objects in rooms, Wei et al., in CVPR 2023. [3] COFS: Controllable Furniture Layout Synthesis, Para et al., in SIGGRAPH 2023. [4] Generalization in diffusion models arises from geometry-adaptive harmonic representations, Kadkhodaie et al., in ICLR 2024. --- Rebuttal 3: Title: Response to reviewer comment (1/2) Comment: We thank reviewer `sfxx` for increasing their initial scores and for taking the time to engage in the discussion period. **We want to highlight that most of the reviewer's additional concerns and questions were not raised in their initial review**. While we would have been happy to address them in our rebuttal, we do so in the following response: > Novelty We refer the reviewer to the list of contributions mentioned in our rebuttal.
Although we do not consider the use of EDM alone to be one of our major contributions, we don't employ it as a simple *drop-in* replacement for DDPM and discuss the relevance of this design choice in the context of indoor scene synthesis in our main submission, L132-139. > Experiments We evaluate our method on the standard dataset for 3D indoor scene synthesis, **which is the one used in the most recent baselines** [1, 2], including the COFS [3] paper mentioned in the reviewer's comment. **We also want to emphasize that we actually view the performance of our method** (demonstrated by our qualitative and quantitative evaluations), **and its ability to generalize to complex / unseen floor plans** (as shown in Figure 2 of our rebuttal `PDF`) **as additional strengths in the light of the limited number of training samples**. Finally, **generative diffusion models** are known to **scale favorably** to larger training datasets [4]. > Comparisons As stated in our main submission L219, we chose to evaluate DeBaRA against established baselines from **different model *families*** (ATISS, i.e., autoregressive ; DiffuScene / LEGO-Net, i.e., denoising-based ; LayoutGPT, i.e., LLM-based). We can read in the COFS paper: "*Our model thereby **extends the baseline ATISS** with new functionality while **retaining all its existing properties and performance***", which can be verified in their experimental evaluations. On the other hand, **our method largely outperforms ATISS**, as quantitatively verified in **Table 1** and **Table 2** of our submission, and observed in **Figure 3**. The GRAINS method **only supports rooms having four walls** in its predicted layouts, while an important feature of our approach is that it takes into consideration complex (i.e., non-square) input floor plans. As a result, it is not applicable to a significant number of scenes from our test set.
In contrast to DeBaRA, SceneHGN proposes to generate scenes at **the object part-level**, which requires a custom dataset. However, we are willing to evaluate the 3D layout generation capabilities of the method in the revision of our manuscript. > Evaluation Metrics Unlike what is stated in the reviewer's comment, KL Divergence is not measured in our study as object's semantic categories are not part of DeBaRA's prediction space. For the same reason, measuring object concurrences may not be a relevant addition to our paper. Also note that **FID** and **KID** are known to evaluate **both** the **diversity** and the **fidelity** of the generated content. Non-diverse generation results wouldn't comprehensively capture the distribution of real scenes, which would directly penalize the FID and KID scores. Additionally, a better **SCA** score reflects more plausible layouts, as they are *harder* to distinguish from *real* ones. The superiority of denoising-based approaches in capturing symmetry / alignment patterns compared to e.g., autoregressive methods in the context of scene synthesis has been extensively studied by previous work [1, 2]. We also include in our submission's appendix additional indicators measuring the **validity** of generated layouts w.r.t. the provided floor plan (Table 5). > Rendering Quality Our scene renderings, with objects colored according to their semantic categories, have been included to help readers appreciate the quality, diversity and validity of generated layouts, while easily **distinguishing** different objects. We do not claim our renderings to be at the level of those produced by an artist. We used the rendering pipeline provided by the official implementation of DiffuScene (which builds upon the one of ATISS). Our rendering quality is therefore on par with the one of this recently published work. However, also note that our method could be employed along more advanced rendering engines. 
We will follow your suggestion and include additional qualitative results featuring **textured objects and floors**, which can be obtained using our current rendering pipeline. > It would be insightful to investigate whether this diffusion-based method can produce more diverse scenes Please see our previous answer regarding FID / KID. > How many generated/GT scenes do you use for evaluation? As stated in our main submission, evaluation metrics are computed across each **test** subset (L244), following the splits described L225. For each test subset, we generate the same number of scenes as the number of *real* ones.
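The NFE accounting discussed in the rebuttal follows from the structure of second-order samplers. Below is a minimal deterministic Heun sampler in the EDM style: a sketch only, since EDM's actual Algorithm 2 additionally injects stochastic noise at each step, and the `denoise` interface is an assumption.

```python
def heun_sample(denoise, x, sigmas):
    """Deterministic 2nd-order (Heun) sampler in the EDM style for the
    probability-flow ODE dx/dsigma = (x - D(x, sigma)) / sigma, where
    `denoise(x, sigma)` (assumed interface) predicts the clean sample.
    Each step costs two denoiser calls except the final step to sigma=0,
    which is why N steps cost roughly 2N function evaluations (NFE)."""
    nfe = 0
    for s, s_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoise(x, s)) / s              # ODE slope at sigma = s
        nfe += 1
        x_euler = x + (s_next - s) * d           # Euler predictor
        if s_next > 0:
            d2 = (x_euler - denoise(x_euler, s_next)) / s_next
            nfe += 1
            x = x + (s_next - s) * 0.5 * (d + d2)  # trapezoidal corrector
        else:
            x = x_euler                          # no correction at sigma=0
    return x, nfe
```

With a toy denoiser that always returns 0.0 (data concentrated at the origin), the ODE contracts `x` toward 0 over the sigma schedule, and the NFE counter makes the roughly-two-evaluations-per-step cost explicit.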
Summary: This paper studies 3D room arrangement/layout generation. It proposes DeBaRA, a diffusion-based generative model, which can generate layouts given the list of furniture and the floor map of the room. The proposed method provides good results and is able to work in various scenarios: layout generation, LLM-guided text-to-scene generation, scene completion, etc. Strengths: - The proposed method is a novel diffusion-based generative model, which is effective in the aimed task. - The results generated are high-quality both qualitatively and quantitatively. - An LLM-guided alternative pipeline is provided to simplify the input format, which adds more functions to the model. Weaknesses: - There is no ablation study of the design choices. It is unclear whether and how each design choice and component helps the results. - It would be better if the ablation results could be provided for the following designs: EDM v.s. DDPM/DDIM, designed 3D spatial objective v.s. simple MSE, different classifier-free guidance scales, with v.s. without SSE. - Some of the designs are not clearly described in the paper. Besides, some of the assumptions are not very clear. - I wonder whether adding noise to the scene will cause the noisy layout to contain overlapped objects, objects of unreasonable sizes/rotations, etc. I did not see any of these in Fig.1. It might be better if a visualization of the process of the scene being denoised could be provided. - I wonder how the size is determined by the generative model. For example, a sofa can be long or short, a table can be large or small, and a cabinet can be as tall as the wall or as small as a bedside table. How will the model decide the sizes of these objects? Can the user indicate the rough size of the object? - I wonder if it is assumed that most objects should be located in a rigid way, with each edge parallel to one wall? These are observed in most of the results of the proposed model in Figs. 3~6.
If so, will these limit the diversity of generation results? - It seems that all the floor maps are relatively simple. I wonder how the model can work in more complicated floor maps, e.g., a floor with many rooms, a round room, or a triangular room. - In contrast to the claim in Q4 of the Checklist, many implementation details are not revealed, e.g., training settings (e.g., lr and iterations) and classifier-free guidance scales. - (Minor) The reason why this diffusion model can work might be that it can be regarded as an extension of some point cloud diffusion models, which also directly apply denoising on coordinates or other explicit representations. However, this direction was not discussed in the related work. - (Minor) There are some minor math presentation issues. For example, multiple $min$s should be corrected to $\min$s (e.g., L166, L178). Technical Quality: 3 Clarity: 4 Questions for Authors: Please see "Weaknesses". Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors mentioned some limitations in the paper and did not indicate the societal impacts. One possible societal impact might be that "the generated layouts may lead to unsafe constructions, and therefore the model should give warnings about actually using it for constructing real-world rooms". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer `Fsdh` for their time and positive feedback. We address the reported concerns in the following response: > It would be better if the ablation results could be provided for the following designs: EDM v.s. DDPM/DDIM, designed 3D spatial objective v.s. simple MSE, different CFG scales We provide an **ablation study** of these design choices in **Table 1** and **Table 2** of the rebuttal `PDF`. Note that as we don't apply *CFG* at sampling time, we chose to ablate the use of *conditioning dropout* on the input object categories during training. > Some of the designs are not clearly described in the paper One general note to clarify some of the following concerns is that unlike previous work, **we don't prevent unwanted behaviors** such as object collisions, out-of-bounds or misaligned elements **using additional loss terms** [1] or **rigid / hard-coded rules**. Instead, we adopt a purely *data-driven* approach to capture complex patterns solely from the training layouts. Remarkably, results reported in **Table 5** of our submission indicate that DeBaRA largely outperforms previous methods at respecting the indoor floor plan (i.e., keeping objects within its *bounds*). > I wonder if it is assumed that most objects should be located in a rigid way that each edge is parallel to one wall? It is not. The 3D-FRONT dataset that we use for training predominantly contains rooms in which objects are aligned with walls and don't have many *exotic* angles (i.e., different from 0° or 90°). We will be happy to include a **statistical analysis** of the training and generated data in the revision of our manuscript to quantitatively support our observations. Additionally, in **Figure 6** (right), we perform scene completion by adding a *bookshelf* (pink) and a *coffee table* (blue). We repeat the experiment ten times and report the denoising object trajectories, intermediate and final positions (colored and black dots respectively).
This allows us to observe the variety of predicted layouts. Notably, we can see that the *bookshelf* ends up in various different positions, always next to a wall. > I wonder how the size is determined by the generative model. [...] How will the model decide the sizes of these objects? During the denoising process, **coarse spatial attributes** are typically determined during the **early iterations** (i.e., high noise levels, injection of *fresh* noise), while **precise / fine-grained features** are set in the **late time steps** (low noise levels, no injection of *fresh* noise). Notably, the added stochasticity in the early denoising steps helps better explore the space of possible features. These general assessments on diffusion sampling have been explored by other work in the context of image generation [2]. They can be qualitatively observed in **Figure 3** of the rebuttal `PDF`. > Can the user indicate the rough size of the object? **Yes**, DeBaRA can be used to sample layouts from specified (i.e., fixed) spatial features such as object dimensions, as described in Section 3.5, using the binary mask $\mathbf{m}$ of L209. If users want to input the *rough* size of objects (i.e., instead of an exact one), $\mathbf{m}$ can be *relaxed* (i.e., set to $\mathbf{0}$) in the late iterations to let the model adjust fine-grained dimensions. The denoising time step from which $\mathbf{m}$ is relaxed can be set depending on the precision of user-defined sizes. **We find this reviewer's suggestion to be a very practical and intuitive use of our method that further highlights its versatility, and we will be happy to include this in the final version**. > It might be better if a visualization of the process of the scene being denoised could be provided. Injecting noise may produce such invalid intermediate 3D configurations in early time steps (i.e., when spatial features are far from their final values).
However, these phenomena will tend to diminish during the denoising process, thanks to the decreasing noise schedule. These can be better observed in **Figure 3** of the rebuttal `PDF`. > I wonder how the model can work in more complicated floor maps The 3D-FRONT dataset mostly contains relatively *simple* (i.e., single room, square, rectangular) floor maps both for training and evaluation. As a result, we haven't encountered any *round* floor map in our exploration of the test set. Consequently, we manually designed such out-of-distribution floor shapes (round, triangular) and report DeBaRA's generation in **Figure 2** of the rebuttal `PDF`. This demonstrates the robustness of our method to unseen rooms. > Many implementation details are not revealed, e.g., training settings (e.g., lr and iterations) and CFG scales. Please note that these details are extensively **described throughout the submitted appendix**. For instance, we can read in Section B.2, L520-522 that we trained our models for 3000 epochs, with a batch size of 128 using the AdamW optimizer and learning rate $1e^{-4}$. During training, we use a conditioning dropout rate of 0.2 but didn't find the need to amplify the strength of the conditioning using any classifier-free guidance scale during sampling. > it can be regarded as an extension from some point cloud diffusion models Thank you, methods leveraging diffusion models to generate point clouds [3] or other geometric representations involving 3D coordinates [4] are a relevant addition to our manuscript's references. We thank again reviewer `Fsdh` for their detailed and constructive feedback as well as for their insightful suggestions that will improve the quality of our paper. [1] DiffuScene: Denoising Diffusion Models for Generative Indoor Scene Synthesis, Tang et al., 2024. [2] Exploring Diffusion Time-steps for Unsupervised Representation Learning, Yue et al., 2024. [3] Diffusion Probabilistic Models for 3D Point Cloud Generation, Luo et al., 2021.
[4] BrepGen: A B-rep Generative Diffusion Model with Structured Latent Geometry, Xu et al., 2024. --- Rebuttal Comment 1.1: Comment: I sincerely thank the authors for their detailed and well-organized rebuttal. All my concerns are addressed. I would like to increase my rating from 6 to 7. I would suggest the authors revise their paper according to the rebuttal.
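The masked conditional sampling described in the rebuttal (clamping user-specified spatial features via a binary mask $\mathbf{m}$ and relaxing it at low noise levels so the model can fine-tune roughly specified attributes) can be sketched as follows; `step_fn`, the flat feature layout, and the relaxation threshold are all illustrative assumptions, not the paper's implementation:

```python
def masked_sampling(step_fn, x_init, x_user, mask, sigmas, relax_below=0.1):
    """Sketch of conditional sampling with a binary mask: entries where
    mask == 1 are re-clamped to the user-provided values `x_user` after
    every denoising step; once the noise level drops below `relax_below`,
    the mask is relaxed (treated as all zeros) so the model is free to
    adjust fine-grained values. `step_fn(x, sigma, sigma_next)` is an
    assumed one-step denoiser update returning the next feature list."""
    x = list(x_init)
    for s, s_next in zip(sigmas[:-1], sigmas[1:]):
        x = step_fn(x, s, s_next)
        active = mask if s_next >= relax_below else [0] * len(mask)
        x = [u if m else v for v, u, m in zip(x, x_user, active)]
    return x
```

Setting `relax_below=0` recovers a hard constraint (the masked features stay exactly at the user's values until the end), while a positive threshold implements the "rough size" behavior suggested by the reviewer.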
Summary: This paper proposes a method for room layout generation given objects and a floor plan using score-based EDM. First, three encoders are used to encode objects, the floor plan and the noise respectively. These encoded latents are then given as input to a noise-based scene encoder which decodes into the output latents. The output latents are then finally decoded to their respective categories and the decoded output is pushed to be closer to the input as measured by a proposed semantic-aware Chamfer distance loss. Once trained, the model is able to generate novel layouts and scenes, in addition to being able to perform completion, rearrangement and retrieval. Strengths: 1) Qualitatively, the layouts generated seem far more coherent and spatially aligned than prior work. 2) Quantitative results provide further evidence that this model performs well. 3) The paper is well written and easy to follow. Weaknesses: 1) While using a permutation-invariant Chamfer loss is intuitive, I believe it should still be ablated w.r.t. just the standard Chamfer loss. Technical Quality: 2 Clarity: 3 Questions for Authors: N/A Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer `xNz4` for their time and feedback. We address the reported weakness in the following response: > While using a permutation invariant Chamfer loss is intuitive, I believe it should still be ablated w.r.t just the standard chamfer loss Evaluating the impact of our semantic-aware objective against a standard Chamfer loss is a relevant suggestion. The advantage of our novel formulation is quantitatively verified in **Table 1** of the rebuttal `PDF`. It is also evaluated against a *simple* MSE objective.
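The idea of a semantic-aware, permutation-invariant Chamfer objective can be illustrated with a small sketch in which nearest-neighbour matching is restricted to objects of the same category. This is a hypothetical reconstruction of the idea, not the paper's exact loss:

```python
def chamfer(src, dst):
    """Symmetric Chamfer distance between two non-empty sets of vectors:
    each element is matched to its (squared-distance) nearest neighbour
    in the other set, so the measure ignores element ordering."""
    def one_way(a, b):
        return sum(
            min(sum((u - v) ** 2 for u, v in zip(p, q)) for q in b) for p in a
        ) / len(a)
    return one_way(src, dst) + one_way(dst, src)

def semantic_chamfer(preds, gts):
    """Class-aware variant: matching is restricted to objects of the same
    semantic category, making the loss invariant to object ordering within
    each class. `preds` and `gts` are lists of (category, features) pairs.
    Classes present in only one set are skipped here for simplicity."""
    total = 0.0
    for c in {c for c, _ in preds} | {c for c, _ in gts}:
        p = [f for cc, f in preds if cc == c]
        g = [f for cc, f in gts if cc == c]
        if p and g:
            total += chamfer(p, g)
    return total
```

Because matching happens per class, permuting the order of predicted objects leaves the loss unchanged, which is the property the reviewer's ablation targets.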
Rebuttal 1: Rebuttal: # Response to all reviewers We would like to thank reviewers for their time and insightful feedback and are pleased that they recognized our submission to propose a **"novel diffusion-based generative model, which is effective in the aimed task**" with an "**alternative pipeline which adds more functions to the model**" (`Fsdh`), in a way that is **"well written and easy to follow"** (`xNz4`). Our experimental evaluations have also been appreciated, as we read that **"results generated are high-quality both qualitatively and quantitatively / seem far more coherent and spatially aligned than prior work"** (`Fsdh`, `xNz4`). Before addressing reviewers' common concerns and questions, we wish to briefly highlight our main contributions: 1. a lightweight score-based model trained to learn the class-conditional and unconditional densities of 3D layouts in bounded indoor scenes using a novel 3D spatial objective. 2. a novel Self Score Evaluation (SSE) procedure to optimally select conditioning inputs from external sources using density estimates provided by the pretrained model. 3. a flexible sampling method to perform multiple downstream tasks from partial features (e.g., scene completion) and/or intermediate noise levels (e.g., scene re-arrangement, object retrieval). All these contributions are key to achieving the state-of-the-art performance exhibited by our framework. Our method is also the first to unify the use of a specialized diffusion model and a separately trained LLM in the context of 3D scene synthesis. ## 1. Ablation study `sfxx`, `Fsdh`, `xNz4`: Reviewers unanimously suggested that additional ablations of our design choices would further clarify our contributions. Please note that in **Table 2** of our main submission and as described L259, we compare a scene synthesis set-up in which the input object semantics are selected from a set of LLM-generated ones, **either randomly** (*LLM*) or **by applying SSE** (*LLM + SSE*).
Additionally, **Table 3** reports the impact of applying SSE on the generation time. **These results quantitatively measure the individual impact of SSE in our scene synthesis pipeline**. We provide a study to evaluate the role of **other individual components** in the attached `PDF`: - `Fsdh`: Our 3D spatial objective v.s. simple MSE (`PDF` Tab 1.) - `xNz4`: Our 3D spatial objective v.s. standard Chamfer loss (`PDF` Tab 1.) - `Fsdh`: Use of conditioning dropout during training (`PDF` Tab 1.) - `Fsdh`, `sfxx`: Different sampling strategies (DDPM, EDM) (`PDF` Tab 2.) ## 2. Additional experimental results We follow the reviewers' suggestions and provide new results to support the performance and versatility of our method. - `sfxx`: We include quantitative (`PDF` Tab 3) and qualitative (`PDF` Fig 1) experimental evaluation against LEGO-Net [1] on **scene re-arrangement**, i.e., recovering a close *clean* layout configuration from a *messy* / perturbed one. We use the authors' implementation [2], in the *grad without noise* setting (which is the best performing in the original paper), on the 3D-FRONT *living rooms* test subset and with a perturbation level of $0.25$. Results highlight that **our method is able to recover more realistic arrangements**, while being **closer to their initial configurations**. This is remarkable as the LEGO-Net baseline has been specifically trained to perform this task. - `Fsdh`: Additional qualitative results on **complex floor plans** can be observed in (`PDF` Fig 2), further highlighting the robustness of our method and its consideration for the conditioning input. - `Fsdh`: As suggested, we provide additional visualization of the iterative denoising process over time (`PDF` Fig 3). ## 3.
Implementation details `sfxx`, `Fsdh`: We would like to emphasize that the vast majority of our implementation details are provided in the **supplementary materials** of our submission, notably in **Section A** (training parameterization and sampling hyper parameters) and **Section B** (network architecture, training protocol, baselines (re)implementation and LLM prompting). Minor details which were omitted are as follows: - Our *Shared Object Decoder* linear layers have respective output dimensions 512, 128 and 8. - We use a popular PyTorch implementation [3] of PointNet for our floor plan feature extractor. - Our linear learning rate warmup uses a start factor of 0.1 and is active for the first 50 iterations. Then, the cosine annealing schedule is set to reach a minimum learning rate value of $10^{-8}$ after 2200 epochs. - We implemented SSE using $T=100$ trials and with noise levels sampled as in training (L459-460). ## 4. Text body and typos - `Fsdh`: We find the reviewer's proposal to mention potential **societal impacts** of our method to be an excellent addition. We will also add that robust privacy measures should systematically be implemented alongside our method in order to avoid any unauthorized replication of personal spaces. - `sfxx`: As reported, missing references to **Table 3** and **Figure 6** in the text body make **Section 4.4** appear empty in the submitted paper. We will clarify our manuscript by adding a concise textual introduction and analysis of these results. Notably, we can see in **Table 3** that our lightweight architecture is bridging the gap with autoregressive methods in terms of inference efficiency. **Figure 6** shows the denoising process of predicted bounding boxes being progressively *unshadowed* (left). It also allows us to observe the variety of predicted layouts by plotting intermediate and final positions (colored and black dots respectively) of objects that are added to a scene over multiple trials (right).
- `sfxx`: We fixed the mentioned **typo** along with other ones identified during post-submission readings of our manuscript. [1] Lego-net: Learning regular rearrangements of objects in rooms, Wei et al., 2023 [2] https://github.com/QiuhongAnnaWei/LEGO-Net [3] https://github.com/fxia22/pointnet.pytorch Pdf: /pdf/fd21fac76fb88860579b34f09bd15dcaec5af52c.pdf
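The learning-rate schedule described in the implementation details above (linear warmup with start factor 0.1 over 50 iterations, then cosine annealing down to $10^{-8}$ by 2200 epochs) could be sketched as a pure-Python function. The base learning rate and the use of a single common step unit for both phases are illustrative assumptions, not details from the rebuttal:

```python
import math

def lr_at(step, base_lr=1e-4, warmup_steps=50, start_factor=0.1,
          total_steps=2200, eta_min=1e-8):
    """Linear warmup from start_factor * base_lr to base_lr over warmup_steps,
    then cosine annealing from base_lr down to eta_min at total_steps."""
    if step < warmup_steps:
        frac = step / warmup_steps
        return base_lr * (start_factor + (1.0 - start_factor) * frac)
    t = min((step - warmup_steps) / (total_steps - warmup_steps), 1.0)
    return eta_min + 0.5 * (base_lr - eta_min) * (1.0 + math.cos(math.pi * t))
```

In PyTorch this roughly corresponds to chaining `LinearLR` and `CosineAnnealingLR` via `SequentialLR`.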
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
EM Distillation for One-step Diffusion Models
Accept (poster)
Summary: This work proposes EM Distillation which uses the idea of expectation maximization (EM) to distill a pretrained diffusion model. A naive adaptation of the EM algorithm can be computationally expensive as it requires sampling from the teacher diffusion model (which can be slow). This work proposes an alternate approach to avoid sampling from the pretrained diffusion model. The idea is to run MCMC on the joint distribution $p(x,z)$ to generate samples. To further simplify Langevin dynamics, the paper uses the reparametrization trick and then performs Langevin updates on Gaussian noise $\epsilon$ as well as the latents z (which come from the prior $p(z)$). As each step of Langevin dynamics adds noise, this adversely affects training due to large variance. The paper suggests getting rid of this additional noise to stabilize training (called noise cancellation in the paper). The proposed method generalizes previously proposed Variational Score Distillation-based methods such as Diff-Instruct and seems to perform well on conditional image generation tasks. Strengths: 1. The qualitative results for 1-step generation with EMD are impressive (Figures 6-14 in the appendix). The quantitative metrics seem comparable or better on datasets like ImageNet-64x64/128x128, MS-COCO etc. 2. The proposed method can generate high quality images in one step. Further, the method also shows improved diversity of generated images for the same prompt. This indicates that the method indeed results in improved mode coverage. 3. I like the idea of using stochastic Langevin updates to get better mode coverage but later getting rid of the added noise to aid smoother training. Weaknesses: 1. This method needs more calls to the teacher model compared to the baseline methods like Diff-Instruct and DMD. The corresponding training overhead, both in terms of additional training time and compute, should be discussed in the paper. 1. The additional computational overhead needs additional clarity.
For instance, from Algorithms 1 and 2, it seems that this method differentiates through the Langevin update steps. If K=16 for instance, this would mean that the computational graph will be $16\times$ larger compared to the baseline $K=1$ like Diff-Instruct, as it would require backprop through the generator network $K$ times. 2. The paper unfortunately has some significant typos. Some equations in Section 3.2 are missing some multiplicative factors (See more in the questions below). As a result, I’m not sure if the experimental observations made from the experiments in Section 3.2 are correct. I hope the authors can clarify my questions below so that I can adjust my score accordingly. 3. This method introduces additional hyper-parameters but the corresponding ablation studies for hyper-parameter sensitivity are missing. For instance, how does the performance of EMD vary with K, the steps of MCMC, during training? Currently, the paper mostly considers two cases K=1 and K=16. It is also unclear how $t^\star$ is selected and how it is used. It appears suddenly in the main text in Section 5.2. (See more questions below) Technical Quality: 2 Clarity: 2 Questions for Authors: 1. The expectation in Eq 5 seems incorrect. $\epsilon$ is undefined. I think the expectation should be w.r.t $p(t), p(z), x_0 \sim g_\theta(z)$, and $p(x_t|x_0)$, i.e., $E_{p(t), p(z), x_0 \sim g_\theta(z), x_t \sim p(x_t|x_0) }[\cdot]$. 2. There seems to be a parameter $\alpha_t$ missing in equation 7. $\nabla_z \log p_\theta(x|z) = \dfrac{\alpha_t}{\sigma_t^2}(x_t - \alpha_t g_\theta(z))^\top \nabla_z g_\theta(z)$. Also, why is score $\nabla_z \log p_\theta(z)$ simplified to $z$ in this equation? 3. I find line 142 confusing. $\epsilon$ is again undefined here but from Algorithm 1, it seems to be an i.i.d. sample from $\mathcal{N}(0, I)$. If so, how can $\alpha g_\theta(z) + \epsilon$ be a deterministic transformation? Is $\epsilon$ always fixed for a given $z$?
This seems like the reparametrization trick used in VAEs but that still doesn’t make this transformation deterministic. Also, if $p_\theta(x_t|z) = \mathcal{N}(\alpha_t g_\theta(z), \sigma_t^2I)$, then shouldn’t this transformation be $x_t = \alpha_t g_\theta(z) + \sigma_t \epsilon$? 4. By noise cancellation, does it mean that we collect all terms $\sqrt{2 \gamma} n^i$ from equation 16 for K steps, and then subtract them after K steps? 5. How sensitive is the training and final performance of the model to the specific choices of $K, \gamma_e$ and $\gamma_x$? 6. What is $\lambda^\star$ and how is its value determined? [Table 6, hyperparameters] How sensitive is the final performance to the choice of this hyperparameter? 7. What does $\epsilon$-correction without $z$-correction mean? Is $g(z)$ fixed for all MCMC steps and only $\epsilon$ updated? (This is used in Table 1). Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The paper discusses its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
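For illustration, the reparametrized draw discussed in question 3 can be sketched as below. The cosine noise schedule and the identity generator are stand-in assumptions for this sketch, not the schedule or generator used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_sigma(t):
    # Illustrative variance-preserving schedule: alpha_t^2 + sigma_t^2 = 1.
    return np.cos(np.pi * t / 2), np.sin(np.pi * t / 2)

def sample_x_t(g_theta, z, t):
    """Draw x_t ~ N(alpha_t * g_theta(z), sigma_t^2 I) via reparametrization:
    x_t = alpha_t * g_theta(z) + sigma_t * eps with eps ~ N(0, I)."""
    alpha_t, sigma_t = alpha_sigma(t)
    eps = rng.standard_normal(np.shape(z))
    return alpha_t * g_theta(z) + sigma_t * eps, eps
```

The map is deterministic only as a function of the pair (z, eps); randomness enters solely through eps, which is what makes the transformation differentiable with respect to the generator.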
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback! It is very encouraging to know the reviewer liked the idea of combining Langevin updates with noise reduction, as well as our experimental results. **[Computational overhead]** See Global Response **[Typos]** We sincerely apologize for the typos in the submitted version. Here are some clarifications: 1. We should have put the definition of $\epsilon\sim\mathcal{N}(0, I)$ and $x_t = \alpha_t g_\theta(z) + \sigma_t \epsilon$ earlier such that the expectation in Eq (5) is a result of reparametrization. $x = \alpha g_\theta(z) + \epsilon$ in L142 is a typo (as it misses a $\sigma$), the correct one is in L177. 2. In Eq (7), missing the parameter $\alpha_t$ is a typo. We have double-checked our implementation. Thanks for pointing it out. **[Noise cancellation]** Yes, you are completely right. We stated this in L521-533: “Empirically, we find book-keeping the sampled noises in the MCMC chain and canceling these noises after the loop significantly stabilizes the training of the generator network.” We will move these lines to the main text in the revised version. **[Hyperparameters of MCMC (K, stepsize)]** See Global Response for the ablation of MCMC steps. As for the step size, we found $\gamma_\epsilon\in[0.3^2, 0.4^2]$ and $\gamma_z\in[0.003^2, 0.004^2]$ are generally good for the 3 tasks we experimented with. We reported the best configuration in the manuscript. **[t\* and \lambda\*]** We respectfully argue that t\* does not suddenly appear in Section 5.2. In Section 3.1, L113-116, we provide an introduction of t\* under the context of the diffusion denoiser, i.e. the x-prediction function. The intuition is that by choosing the value of t\*, we choose a specific denoiser at that noise level. When parametrizing t, the log-signal-to-noise ratio $\lambda$ is more useful for designing noise schedules, which are specified by a strictly monotonically decreasing function $f_\lambda$ [1].
Due to the monotonicity, $\lambda^*$ is an alternative representation for t\* that actually reflects the noise levels more directly. (During the rebuttal, we found another typo in Tables 6 and 7: they report -logSNR. We will correct them in the revision.) The PDF in the global response provides the denoiser generation at the 0th training iteration for different $\lambda^*$. When $\lambda^*=0$, the generated images are no different from Gaussian noise. When $\lambda^*=-6$, the generated images have more details than $\lambda^*=-10$. In the context of EMD, these samples help us understand the initialization of MCMC. According to our experiments, setting $\lambda^*\in[-3, -6]$ results in similar performance. For the numbers reported in the manuscript, we used the same $\lambda^*$ as the baseline Diff-Instruct on ImageNet-64 and only did a very rough grid search on ImageNet-128 and Text-to-image. [1] Kingma et al. "Variational diffusion models." NeurIPS 2021. **[What does eps-correction without z-correction mean?]** $\epsilon$-correction without $z$-correction is to fix $g(z)$ and only update $\epsilon$. It is not a theoretically rigorous algorithm. The reason we include this is that we need a baseline to show the marginal benefit of the update on $z$. The context here is that when the step sizes in $z$ and $\epsilon$ are not aligned, the performance of the joint update can be worse than the update on $\epsilon$ only. In Table 1, we showed that the reparametrized sampling eases the burden of co-adjustment of step sizes in the data and latent spaces. --- Rebuttal Comment 1.1: Comment: Thanks for answering my questions. I am satisfied with the response and am increasing my score. --- Reply to Comment 1.1.1: Comment: Thank you very much!
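To make the book-keeping described above concrete, here is a toy numpy sketch of a short-run Langevin chain in which the injected noises are accumulated and subtracted after the loop, run against a standard-normal target; the target, score, step size, and step count are illustrative assumptions only, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # Score of a toy standard-normal target: grad log p(x) = -x.
    return -x

def langevin_noise_cancelled(x, steps=16, gamma=0.01):
    """Run `steps` Langevin updates, book-keeping each injected noise term,
    then subtract the accumulated noise after the loop."""
    noise_sum = np.zeros_like(x)
    for _ in range(steps):
        n = rng.standard_normal(x.shape)
        x = x + gamma * score(x) + np.sqrt(2 * gamma) * n
        noise_sum += np.sqrt(2 * gamma) * n
    return x - noise_sum  # noise cancellation
```

The subtraction leaves the mean of the update untouched (the injected noises are zero-mean) while removing most of the injected variance, matching the stabilization argument quoted from L521-533.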
Summary: The paper proposes EM Distillation (EMD), a maximum likelihood-based approach that distills a diffusion model to a one-step generator model with minimal loss of perceptual quality. Notably, in EMD, the generator parameters are updated using samples from the joint distribution of the diffusion teacher prior and inferred generator latents by MCMC sampling. The method forms an extension of VSD and Diff-instruct. The empirical results are good. Strengths: - The resulting methodology becomes an extension of VSD and Diff-instruct, which is interesting and novel. - The EMD method holds the flexibility to trade off training efficiency and final performance by adjusting the MCMC steps. - The empirical results of the paper are strong. Weaknesses: - The noise cancellation trick is not well justified in theory, although with empirical evidence. - It seems that there are more hyper-parameters (for the MCMC steps) to tune. I wonder about the cost of doing so. Do you have any guidance on tuning them? - Is the trick of tuning t* used by previous works on text-to-image diffusion distillation? What is the result of doing so for prior works like DMD? Technical Quality: 3 Clarity: 3 Questions for Authors: See above Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback! We are very glad to see the reviewer appreciated EMD’s flexibility to trade off training efficiency and final performance by adjusting the MCMC steps. Below we respond to questions and concerns: **[Theoretical justification for noise cancellation]** We agree that there is no rigorous proof for a guaranteed variance reduction at this point. But we also want to highlight that even in the worst case, noise cancellation won’t introduce biases, since canceling noises whose mean is 0 won’t affect the mean of the gradients. **[More hyper-parameters (for the MCMC steps) to tune]** See Global Response for ablation of MCMC steps. As for the step size, we found $\gamma_\epsilon\in[0.3^2, 0.4^2]$ and $\gamma_z\in[0.003^2, 0.004^2]$ are generally good for the 3 tasks we experimented with. We reported the best configuration in the manuscript. **[Tuning t\* for prior works on text-to-image like DMD?]** While the DMD paper didn’t report the numbers for the Diff-Instruct baseline in their setting of text-to-image, we tuned the t\* for Diff-Instruct on text-to-image and obtained results better than those reported in DMD (FID 10.96 vs 11.49). We don’t have the resources to tune DMD on text-to-image generation during the limited rebuttal time. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the rebuttal. I recommend the authors clarify these things in the revision: 1. Regarding noise cancellation, though the mean is not affected, the variance can also significantly affect the optimization. So, more analyses are required. 2. What is the performance of the baselines combined with some of the tricks used in the paper? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for this comment. We will include the following clarification in the revision: 1.
While the noise cancellation technique improved performance on three distinct tasks (ImageNet-64, ImageNet-128, Text-to-image), the theoretical analysis of its variance reduction effect indeed needs more investigation. 2. As stated in L210-211 and L266-267, the EMD-1 baseline is the Diff-Instruct baseline with all the "tricks" (same $t^*$, same learning rate, etc.). We will reiterate it in the revision.
Summary: This paper introduces a novel distillation method for converting a diffusion process into a one-step generator. The theoretical foundation is closely tied to the Expectation Maximization (EM) algorithm. The authors aim to minimize the forward Kullback-Leibler (KL) divergence between the target generator distribution and the one-step sampler's output distribution. This minimization is approached using an EM-like algorithm. In the expectation step, Markov Chain Monte Carlo (MCMC) sampling is used to sample from the joint distribution of noise and the generated image, followed by an MCMC correction to adjust towards the distribution of noise and the target image. The estimated gradients from this process are then used to update the generator. Simultaneously, an auxiliary diffusion model is trained to approximate the score of the output distribution. The authors draw an interesting connection between their proposed EM distillation algorithm and previous score distillation-based methods. The final model is evaluated on both class-conditional and text-conditional image generation tasks. Strengths: S1. The theoretical framework is both novel and robust, showcasing an innovative adaptation of the EM algorithm for diffusion distillation. The paper clearly delineates the connection with previous methods based on reverse KL minimization. S2. The writing is clear, with intuitive mathematics and excellent presentation. S3. A wide range of design choices, such as reparameterized sampling and noise cancellation in gradient estimation, are well-founded and enhance performance. Weaknesses: W1. The performance improvement over baseline approaches (such as 1-step EMD used in VSD, DiffInstruct, DMD, Swiftbrush) is minimal, particularly for text-to-image synthesis. There appears to be a significant gap between the distilled models and the original diffusion teacher. W2. 
While the forward KL divergence is theoretically mode-covering, it still has significantly worse recall compared to the teacher model or trajectory-preserving approaches like the consistency model. Could the authors comment on why this is the case? W3. It seems that the code will not be released. While this is not a critique, providing additional details, such as pseudocode for the MCMC correction, would be helpful for reimplementation. Technical Quality: 3 Clarity: 3 Questions for Authors: All my concerns are detailed in the weakness section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. It is our honor to know the reviewer liked the framework, exposition and technical inventions in our paper. One small correction we want to make in the summary is that in the expectation step, Markov Chain Monte Carlo (MCMC) sampling is initialized with samples from the joint distribution of noise and the generated image, as the initial sampling of z and x does not involve MCMC. We respond below to several questions and concerns: **[Performance]** We want to respectfully argue that on ImageNet-64, where the metric is more reliable, EMD improves the FID significantly from 3.1 of the 1-step baseline to 2.2. For text-to-image generation, we only use the zero-shot MSCOCO FID as a proxy for the evaluation following works in the literature. However, the metric might be less reliable, as the teacher model’s distribution may be different from the data from MSCOCO. Concurrent to our work, [1] studies some optimization techniques in the framework of DMD to reduce the bias in the student score model. We would like to try these techniques in future work to see if they improve the MCMC correction in EMD. [1] Yin et al. "Improved Distribution Matching Distillation for Fast Image Synthesis." arXiv 2024. **[Worse recall compared to the teacher model or trajectory-preserving approaches?]** This is a good question, which we discussed a bit in L235-236: “A larger number of Langevin steps encourages better mode coverage, likely because it approximates the mode-covering forward KL better.” The short-run MCMC sampler only produces an approximation of the posterior mean, so it is possible that there are still modes missing. EMD is better understood as an interpolation between mode-seeking and mode-covering KL. It achieves high perceptual quality and avoids significant mode seeking. **[Code release and pseudocode for the MCMC correction]** We plan to release code for ImageNet-64 by the camera-ready deadline.
The pseudocode for MCMC correction is provided in Algorithm 2. We will be happy to polish it in the revised version. --- Rebuttal 2: Comment: I thank the authors for their response and will maintain my original accept rating. Additionally, I believe the doubling of runtime is not a significant issue, provided that it results in notable improvements in quality. Further enhancements in text-to-image quality and recall would be interesting future directions. For the pseudocode, I was referring to some pytorch-style pseudocode that can be directly transferred, e.g., the one in MoCo Algorithm 1 [1]. But of course, it is even better to have the full ImageNet code release. [1] He, Kaiming, et al. "Momentum contrast for unsupervised visual representation learning." CVPR. 2020. --- Rebuttal Comment 2.1: Comment: Thank you very much for your very supportive comments! We will include pytorch-style pseudocode in the revised version.
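In the spirit of the MoCo-style pseudocode the reviewer mentions, one could imagine the EMD training loop sketched roughly as below. This is purely illustrative, non-runnable pseudocode reconstructed from the descriptions in this thread (K Langevin steps on (eps, z), post-loop noise cancellation, alternating generator and student-score updates); all names and helper functions are hypothetical, not the authors' code:

```python
# Illustrative pytorch-style pseudocode for one EMD training run (hypothetical names).
# g: one-step generator, s_phi: student score net, s_teacher: frozen teacher score.
for _ in range(num_iterations):
    z, eps = randn(B, d_z), randn(B, d_x)          # initialize latents and noise
    noise_eps, noise_z = 0, 0
    for k in range(K):                             # E-step: short-run Langevin on (eps, z)
        x_t = alpha_t * g(z) + sigma_t * eps       # reparametrized sample
        grad_eps, grad_z = joint_score(x_t, z, s_teacher)   # scores of the teacher joint
        n_eps, n_z = randn_like(eps), randn_like(z)
        eps = eps + gamma_eps * grad_eps + sqrt(2 * gamma_eps) * n_eps
        z = z + gamma_z * grad_z + sqrt(2 * gamma_z) * n_z
        noise_eps += sqrt(2 * gamma_eps) * n_eps   # book-keep injected noises
        noise_z += sqrt(2 * gamma_z) * n_z
    eps, z = eps - noise_eps, z - noise_z          # noise cancellation after the loop
    # M-step: move g toward the corrected samples (score-difference gradient)
    loss_g = generator_loss(g, z, eps, s_teacher, s_phi)
    loss_g.backward(); opt_g.step(); opt_g.zero_grad()
    # keep the student score model matched to the current generator
    loss_s = denoising_score_matching(s_phi, g(randn(B, d_z)))
    loss_s.backward(); opt_s.step(); opt_s.zero_grad()
```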
Summary: This paper introduces the EM Distillation (EMD) method, which efficiently distills diffusion models into a one-step generator model. It utilizes a maximum likelihood approach grounded in Expectation-Maximization (EM) and maintains good image generation quality. The method incorporates a reparametrized sampling scheme and a noise cancellation technique, enhancing the stability of the distillation process. EMD demonstrates good FID scores relative to existing one-step generative models on ImageNet-64/128 and exhibits capabilities in distilling text-to-image diffusion models. Strengths: 1. The paper is well-written and easy to understand. 2. It demonstrates the effectiveness of the proposed method on a large scale through text-to-image experiments. 3. Although the methodology builds on existing concepts, its application of latent variable models and the EM algorithm provides a novel perspective on the distillation problem. Weaknesses: 1. The requirement for at least K steps of MCMC sampling significantly increases training costs, in contrast to other methods such as SDS, VSD, SiD and consistency distillation (CD), which generally require only one-step sampling for distillation. 2. While the performance with a large number of MCMC steps is robust, it still does not achieve state-of-the-art results when compared to SiD, particularly with a smaller number of steps. Missing Comparisons: SiD [1] derives a new distillation method originating from the Fisher divergence, reaching FID 1.52 on ImageNet-64x64. [1] Zhou, Mingyuan, et al. "Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation." Forty-first International Conference on Machine Learning. 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why model the joint distribution of (x, z)? Will this bring any benefits? 2. Without MCMC sampling on $z$, the loss function reduces to the VSD loss.
VSD doesn't require more than one-step MCMC. Does this mean that involving $z$ in the sampling makes the algorithm slower? Then what is the meaning of it? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback! We feel very encouraged to receive the recognition that EMD provides a novel perspective on the distillation problem. Below we respond to your questions and stated weaknesses: **[Overhead for using MCMC in training]** See Global Response **[Comparison with SiD]** Thanks for bringing this insightful paper to our attention. We weren't aware of it at the time of submission and will definitely include this citation in the revised version. We think it is a very interesting future direction to see if the objective decomposition techniques proposed in SiD can be incorporated into EMD. **[Why model the joint distribution of (x, z)? Why run the “slow” MCMC sampling on (x, z)?]** When viewing the 1-step generator model as a latent-variable model, the generative modeling problem naturally becomes “learning a joint distribution of latent z and data x such that the marginal distribution of x matches”. Then it naturally leads us to the EM framework that matches the marginal distribution with mode-covering forward KL. We include the ablation of different MCMC steps K in Fig. 3(c)(d) and discuss it in L231-236. The results show that both FID and VGG-Recall improve monotonically as the number of Langevin steps increases. We also provide a possible explanation that a larger number of Langevin steps encourages better mode coverage, likely because it approximates the mode-covering forward KL better. --- Rebuttal Comment 1.1: Comment: I personally don't think using MCMC with a few steps for approximating such a high-dimensional joint distribution is a good choice. Maybe removing the accumulated noise, which makes the denoising process more like an ODE, helps here. Comparison to the consistency trajectory model is also missing, which reaches 1.98 FID on ImageNet (vs 2.2 FID in this paper) with one-step generation. Performance-wise, the results are not stunning, and the computational cost is more than doubled.
The cost issue will be even worse in large-scale models, which will limit the scaling up of the method. Given the efforts, I increased my score but kept it borderline. --- Reply to Comment 1.1.1: Comment: Thanks for raising the score! For the MCMC, we hope the demonstration of 300 steps of update in Fig. 1 helps make it more convincing. EMD with fewer steps of updates can be viewed as approximately amortizing these long sampling chains. We will include the result of the consistency trajectory model (CTM) in the revised version. We also hope the reviewer will notice EMD-16's slightly better VGG-Recall (0.59 vs 0.57) despite the FID gap.
Rebuttal 1: Rebuttal: # Global Response # We would like to thank all reviewers for their careful and helpful feedback! Specifically, we want to express our appreciation to reviewer eeRQ for recognizing the motivation of using forward KL, to reviewer 3oLm for identifying the novel perspective that EMD offers on the distillation problem, to reviewer p5As for the encouraging comments of our theoretical framework being novel and robust, writing being clear and intuitive, design choices being well founded and effective, to reviewer fdee for the particular interest in EMD’s flexibility to trade off training efficiency and final performance, and to reviewer MtDz for liking the core designs in our technical contribution. We would like to clarify and address several common concerns here: **[Ablation on MCMC steps]** In the submitted version, we included the ablation of different MCMC steps K in Fig. 3(c)(d) and discussed it in L231-236. Both FID and VGG-Recall show clear improvement monotonically as the number of Langevin steps increases. We also provide a possible explanation for why a larger number of Langevin steps encourages better mode coverage as it approximates the mode-covering forward KL better. **[Computation overhead]** First, despite EMD being more expensive per training iteration compared to the baseline approach Diff-Instruct, we find the performance gain of EMD cannot be realized by simply running Diff-Instruct for the same amount of time or even longer than EMD. Second, the additional computational cost that EMD introduces is moderate even with the most expensive EMD-16 setting. See below for a detailed time analysis. Finally, for text-to-image generation, EMD-8 and EMD-1 take 3h50min and 2h14min, respectively, to converge to the lowest FID. In response to reviewers eeRQ’s, 3oLm’s and MtDz’s requests, here we report a quantitative measurement of the computation overhead.
Since it is challenging to measure each Python method’s wall-clock time in our infrastructure, we instead logged the sec/step for experiments with various algorithmic ablations on ImageNet-64.

| Algorithmic Ablation | sec/step |
|:---------------------------------|:------------------:|
| Student score matching only | 0.303 |
| Generator update for EMD-1 (joint sampling of eps and z) | 0.303 |
| Generator update for EMD-2 (joint sampling of eps and z) | 0.417 |
| Generator update for EMD-4 (joint sampling of eps and z) | 0.556 |
| Generator update for EMD-8 (joint sampling of eps and z) | 0.714 |
| Generator update for EMD-16 (joint sampling of eps and z) | 1.111 |
| EMD-16 (student score + generator update w/ joint sampling) | 1.515 |
| Baseline Diff-Instruct (student score + generator update) | 0.703 |

So EMD-16 only doubles the wall-clock time of Diff-Instruct when taking all other overheads into account. Pdf: /pdf/dcf3e7d383e096afbbaa1770e2753baaf3b5d5fb.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper distills the diffusion model into a single-step generator through the forward KL (mode-covering) divergence. Apart from previous reverse KL divergence, it requires the joint samples z, x from student distribution. To do this, the paper utilizes an MCMC method to obtain the samples (z, x). Strengths: 1. Implementing the forward KL divergence is important because it can leverage the good statistical properties of MLE. 2. The empirical results that compare EMD-1 vs EMD-16 show the direct benefit of the proposed method. In particular, the improvement in recall in Table 2 aligns with the motivation of the forward KL divergence. Weaknesses: 1. There are two sources of error. The first is the approximation of the student score function through another neural network. The second is the discretization error of MCMC sampling. Can you justify the effects of these two errors on training, both theoretically and empirically? 2. The training cost seems extremely expensive. The method requires 1) a separate approximation of the student score and 2) 16 MCMC steps per iteration. The expensive cost itself is a drawback, but the paper also does not analyze the costs rigorously. You must add the breakdown of training costs (approximated student score training, MCMC, student loss computation, student model back-propagation) in one iteration. You must add performance per iteration (e.g., x-axis: iteration / y-axis: FID). 3. Add all the metrics (NFE, FID, Prec, Rec, IS) to Table 2 and Table 3. 4. The forward KL divergence often fails to achieve better fidelity compared to the reverse KL divergence. Did you see any similar situations? If not, you'd better discuss the reason. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. What if the number of MCMC steps becomes different? (e.g., 8, 4, 2) 2. Can you extend the method to f-divergences? 3. Will you release the code? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We appreciate the acknowledgment of our motivation in using the forward KL and the corresponding validation in the ablation between EMD-16 and EMD-1. However, we would like to point out a misunderstanding in the summary “Apart from previous reverse KL divergence, it requires the joint samples z, x from student distribution”: the joint sampling is towards the target distribution of the teacher, not the student. Below is our point-by-point response. **[Approximation errors in student score and MCMC]** This is a valid concern, and below we discuss the two errors separately. The approximation error of the student score is unfortunately a universal issue in the family of distribution matching methods, including VSD, Diff-Instruct, DMD, and EMD. Due to the generality of this issue, we believe it is worth dedicated research. Concurrent to our work, [1] finds that more optimization steps lead to a better approximation. Another concurrent work mentioned by reviewer 3oLm, SiD, investigates a better way to decompose the distribution divergence. We are happy to mention this issue in the limitation section, provide pointers to these works in the revised version, and explore in future work whether these techniques are transferable to our method. Empirically, we find that the discretization error in MCMC is not problematic, and the short-run MCMC that we use in EMD can instead be motivated as interpolating between the Moment-Matching Estimate and the Maximum Likelihood Estimate [2]. In Fig. 3cd, we empirically show that this error can be reduced by running longer MCMC chains. [1] Yin et al. "Improved Distribution Matching Distillation for Fast Image Synthesis." arXiv 2024. [2] Nijkamp et al. "Learning non-convergent non-persistent short-run MCMC toward energy-based model." NeurIPS 2019. **[Computation overhead]** See Global Response. **[Performance per iteration (e.g. x-axis: iteration / y-axis: FID)]** We did include this curve in Fig.
3b. **[Adding all the metrics for Table 2 and Table 3]** Thanks for the suggestion. We will add the following numbers to Table 2 in the revised draft: EMD-16 Prec. 0.7559, IS 68.31; EMD-1 Prec. 0.7579, IS 62.43. For Table 3, unfortunately, we could not find baseline Prec. and Rec. to compare with. **[Forward KL divergence often fails to achieve better fidelity compared to reverse KL divergence]** This is a good question. Empirically, we observe that EMD still gives high per-sample fidelity while alleviating the mode-coverage problem of Diff-Instruct, which leverages the reverse KL. A possible explanation is that the short-run MCMC sampler only produces an approximation of the posterior mean and is still far from fully mixing. Therefore, EMD is better understood as an interpolation between the mode-seeking reverse and mode-covering forward KL, which maintains high perceptual quality while avoiding significant mode seeking. **[What if the MCMC steps become different? (e.g. 8, 4, 2)]** See Global Response. **[Can you expand the method to f-divergence?]** Upon the reviewer’s request, we reviewed the literature and found an alpha-EM framework [3] that generalizes EM to the alpha-divergence, another special case of the f-divergence. We are happy to iterate with the reviewer on other types of f-divergence you deem interesting. [3] Matsuyama, Yasuo. "The α-EM algorithm: surrogate likelihood maximization using α-logarithmic information measures." IEEE Transactions on Information Theory 49.3 (2003). **[Will you release the code?]** We plan to release code for ImageNet-64 by the camera-ready deadline. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I read all the reviews and rebuttals. In my opinion, the performance gain of EMD is not enough to justify roughly twice the computational cost. I think MCMC is not a good way to implement the forward KL in a distillation scenario. For example, if you adopt an auxiliary density-ratio estimator (a.k.a. discriminator) between the student and teacher, you can implement the forward KL without MCMC. I want to keep my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the comments. Could we ask for a clarification on how to implement the forward KL with a discriminator? It would be very helpful if the reviewer could provide some pointers to existing works.
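As a concrete illustration of the short-run MCMC discussed in this thread, below is a minimal sketch of unadjusted Langevin dynamics on a toy 1-D Gaussian target with a known score function. All names and constants are illustrative assumptions, not taken from the paper; the point is only that a 16-step chain started far from the mode drifts toward it without fully mixing, matching the "interpolation" intuition in the rebuttal.

```python
import numpy as np

def gaussian_score(x, mu=2.0, sigma=1.0):
    # Score d/dx log N(x; mu, sigma^2) of a toy target distribution.
    return -(x - mu) / sigma**2

def short_run_langevin(x0, score_fn, n_steps=16, step_size=0.1, seed=0):
    # Unadjusted Langevin dynamics:
    #   x_{t+1} = x_t + (eps / 2) * score(x_t) + sqrt(eps) * noise_t
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        x = (x + 0.5 * step_size * score_fn(x)
               + np.sqrt(step_size) * rng.standard_normal(x.shape))
    return x

# A 16-step chain started at 0 drifts toward mu = 2 but remains biased:
# short-run MCMC does not fully mix.
samples = short_run_langevin(np.zeros(10_000), gaussian_score, n_steps=16)
print(samples.mean())
```

Running longer chains (larger `n_steps`) moves the sample mean closer to the target mode, mirroring the reduced discretization error reported in Fig. 3cd.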
Loki: Low-rank Keys for Efficient Sparse Attention
Accept (poster)
Summary: This method proposes a PCA-based attention score approximation for top-k attention. PCA is performed on an offline dataset, and the PCA vectors are stored for inference. Strengths: This work easily builds a QK approximator for top-k attention selection without gradient-based training, using only simple PCA. Weaknesses: You have to store the PCA vectors and perform the projection of K/V during attention. This projection matrix should be held in a threadblock on the GPU to minimize GPU memory access, but it might be huge if the hidden size of Q/K/V is large. Until now, though, most LLMs use less than $256$, so this should be fine. (NOTE: I need to check the implementation to see whether this is correct.) Technical Quality: 4 Clarity: 4 Questions for Authors: 1. The .gz file seems broken. Can you upload the file again in another format? I want to look into the source code for details. 2. Fig. 1 might mislead readers into thinking that this attention mechanism uses low-rank PCA attention only. Can you add to the figure, using a diagram, that this method performs a top-k operation on top of the approximated scores? (I think text alone is not sufficient.) 3. The top-k operation is usually very slow on GPUs, due to synchronization and the exchange of top-k values via global memory. I think a per-kernel latency breakdown should be presented in this paper. (Fig. 6 only shows a single sequence length on the left.) 4. Some figures are not sized properly (e.g., Fig. 6). Can you adjust the size of the figures to avoid stretching the text? 5. I think there should be a plot showing the latency-performance trade-off (ms vs. accuracy). Can you add this plot using some downstream tasks? 6. In Fig. 4, they evaluate downstream tasks, but every task has a quite short sequence length. Can you try LongBench (https://github.com/THUDM/LongBench)? Question 5 should be addressed using LongBench rather than short-sequence tasks. I have a concern about the performance evaluation. The only metrics used are perplexity, HellaSwag, TQA, Winogrande, ARC, and MMLU, which involve relatively short sequences compared to long-context LLMs such as Qwen2 and Phi3. I suggest adding an evaluation on long context. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: PCA should be performed during the training stage, and the PCA projection is required for top-k attention selection. This may lead to additional effort to optimize the GPU kernels at many scales, and it is sometimes impossible if the device cannot hold the PCA projection matrix in shared memory. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Title: I fixed the .gz Comment: I found out that the `tar.gz` file seems to have been forcibly renamed to `.gz` by OpenReview. Don't worry about my question 1; I resolved it myself just now... If possible, I will look into it during the discussion period. Thank you for your great work, and sorry for my mistake. --- Rebuttal 2: Rebuttal: We thank the reviewer for their feedback and appreciate that they recognize the simplicity of the approach to get a $Q.K^T$-approximator without any gradient-based training. ### **Weaknesses** **You have to store the PCA vectors and perform the projection of K/V during attention. This projection matrix should be held in a threadblock on the GPU, to minimize GPU memory access. But this matrix might be huge if the hidden size of Q/K/V is large. But until now, most LLMs use less than $256$, therefore, should be fine. (NOTE: I need to check the implementation whether this is correct or not)** We would like to note here that the current implementation of PCA-TopK is not a fused implementation like Flash Attention. We discuss the possibility of developing a fused method in our response to Reviewer vRA7 (Question 2). In the current implementation, the projection to PCA space happens in a separate kernel. This projection operation is implemented as a standalone matrix multiplication using PyTorch. We do agree with the reviewer that a more efficient implementation would keep the PCA vectors in shared memory to minimize GPU memory access, and that is possible since the per-head PCA transformation is only $D \times D$ dimensional, where typically $D$ = 128 (sometimes 256) for current LLM models. ### **Questions** **The .gz file seems broken. Can you upload the file again in another format? I want to look into the source code for details.** Based on the comment submitted, we assume this is resolved for now. If needed, we can submit an anonymized link to our code as an official comment to the ACs following NeurIPS guidelines. **Fig. 1.
might lead to the misunderstanding that this attention mechanism uses low-rank PCA attention only. Can you add to the figure, using a diagram, that this method performs a top-k operation on top of the approximated scores? (I think text alone is not sufficient)** We will update the figure in the subsequent version of the paper. **The top-k operation is usually very slow on GPUs, due to synchronization and the exchange of top-k values via global memory. I think a per-kernel latency breakdown should be presented in this paper. (Fig. 6 only shows a single sequence length on the left.)** Yes, we agree that the Top-K operation is slow, and while it is possible to fuse the Top-K operation with other matmuls, it still involves synchronization among threads. Currently, we are looking at ways to bypass the Top-K operation itself using a score thresholding mechanism (where the threshold is sampled offline). We leave that to future work. Figure 2 (Left & Mid) in the rebuttal pdf shows the kernel-wise breakdown for more prompt and generation lengths. It shows that the Top-K operation is almost as costly as the matmuls. A custom (or fused) kernel or a thresholding strategy might help alleviate this cost. **Some figures are not sized properly (e.g., Fig. 6). Can you adjust the size of the figures to avoid stretching the text?** We will fix the figures in the subsequent version of the paper. **I think there should be a plot showing the latency-performance trade-off (ms vs. accuracy). Can you add this plot using some downstream tasks?** Figure 1 (Top-Right) in the rebuttal pdf shows a latency-accuracy trade-off curve evaluated on LongBench [1] for the Llama2-7b-Chat model (sequence length of 4096). We can see that the settings with $k_f = 0.25$ and $k_f = 0.125$ perform much better than vanilla attention. $k_f$ has a larger impact on the performance than $d_f$, which is supported by our theoretical speedup analysis in the paper.
Overall, the configurations $(k_f = 0.25, d_f = 0.25)$ and $(k_f = 0.125, d_f = 0.5)$ give a good accuracy-performance trade-off for LongBench tasks as well. **In Fig. 4, they evaluate downstream tasks, but every task has a quite short sequence length. Can you try LongBench (https://github.com/THUDM/LongBench)? Question 5 should be addressed using LongBench rather than short-sequence tasks.** Figure 1 (Top-Left and Bottom) in the rebuttal pdf shows the performance of PCA-TopK on LongBench [1] tasks for the Llama-2-7B-Chat model. We have included a discussion on the same in the Global Rebuttal. ### **Limitations** **PCA should be performed during the training stage, and the PCA projection is required for top-k attention selection. This may lead to additional effort to optimize the GPU kernels at many scales, and it is sometimes impossible if the device cannot hold the PCA projection matrix in shared memory.** For clarification, our current method does not require PCA to be performed during the training stage. We compute the PCA transforms by evaluating the trained model over a calibration dataset, storing the generated keys, and then computing the PCA. We agree that an approach that utilizes our low-dimensional observation during training or fine-tuning will be interesting to study and may improve model performance. We also agree that additional effort is required to optimize the GPU kernels further. ---- ### **References** [1] Bai, Yushi, et al. "LongBench: A bilingual, multitask benchmark for long context understanding." arXiv preprint arXiv:2308.14508 (2023). --- Rebuttal 3: Comment: Thank you for the detailed and kindly written rebuttal. Sorry for my late reply. I am happy that most of my concerns are resolved, and I want to increase the rating. I am pretty confident that this paper is of acceptable grade. I hope the other reviewers will respond soon. - I briefly looked into the Triton code; it is simple and worth understanding for future researchers.
I hope the readability (adding comments, changing variable names from A, B, etc. to something semantically meaningful) will be improved in the public release. I think the modularity of this code is generally good. - However, the hook function is implemented in a forward-overriding style, and I do not like this style... I hope there is a better way, because this kind of approach will break if the `transformers` framework changes its internal API. - Since the PCA projection is in a separate kernel, this should be improved as well. - I am very grateful for the latency breakdown plot in the PDF. Among all the other papers I reviewed at this NeurIPS, no other author actually made this chart. I truly think this analysis is critical for finding further bottlenecks of the method in future research. - I acknowledge that the practical impact of this work is clear: it reduces the memory footprint of K-token reads via low-rank projection. - I hope the researchers will find a way to do PCA-TopK in a fused way (like Flash Attention, e.g., Flash-PCA-TopK). I think we have two approaches: **(1)** ML perspective: change the algorithm, e.g., replace top-k with a thresholding function (these papers may be helpful for the thresholding approach [1]), or perform top-k hierarchically [2]. **(2)** Systems perspective: preserve the algorithm as much as possible, but change the implementation, e.g., implement the top-k operation using buckets (partitioning may also be possible). - **I think memory reduction is not very critical for this kind of work**. So, I respectfully disagree with aGJV's weakness. I understand why the reviewer points to memory reduction because, often, memory consumption limits research scale. However, I think the memory access footprint is quite underestimated here. The important point of the whole reduction is reducing the K memory **read** footprint.
This means we can effectively reduce the latency and total throughput of K reads, which is the most significant part of attention. This means we do not need high-bandwidth memory such as HBM3e, and we can just use GDDR6, which is much more cost-effective. Moreover, many sparse attention works [2] struggle to reduce the memory throughput of K tokens because individual K tokens are already quite large (128 * 2 bytes per token, and there are 32 heads; usually 8KB, which is 2 pages of VM). So I really love the reduction of the memory footprint, and this makes effective KV-cache offloading possible, as shown in [2, 3]. I hope the other reviewers can acknowledge these practical benefits. It may be good to show how many memory reads actually happen during PCA-TopK vs. FlashAttention using `nsight-compute` [4] in future research. I expect that to minimize actual memory reads, we should be careful with the memory addressing and formatting of the KV-cache tensor. Again, thank you for your wonderful work, and I hope this kind of work (improving the attention mechanism) continues. [1] https://arxiv.org/pdf/2107.00910 [2] https://arxiv.org/abs/2406.09827 [3] https://arxiv.org/html/2406.19707v1 [4] https://developer.nvidia.com/nsight-compute --- Rebuttal Comment 3.1: Comment: We thank the reviewer for their extremely positive comments and the corresponding score increase. We greatly appreciate the insightful suggestions and hope to incorporate them in future work to further improve our method.
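The thread above discusses replacing the top-k kernel with a score threshold sampled offline. Below is a minimal sketch of that idea with purely hypothetical names and a synthetic score distribution; it is not the authors' implementation, only an illustration of why thresholding removes the need for a sort/top-k kernel at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

def calibrate_threshold(score_batches, k, quantile=0.1):
    # Offline: record the k-th largest score for each calibration query and
    # keep a conservative (low) quantile of those as the online threshold.
    kth_scores = [np.partition(s, -k)[-k] for s in score_batches]
    return float(np.quantile(kth_scores, quantile))

def select_by_threshold(scores, tau):
    # Online: a plain comparison mask, no sort / top-k kernel required.
    return np.nonzero(scores >= tau)[0]

calibration = [rng.standard_normal(1024) for _ in range(64)]
tau = calibrate_threshold(calibration, k=128)
selected = select_by_threshold(rng.standard_normal(1024), tau)
print(len(selected))  # roughly k, usually a bit more due to the low quantile
```

The trade-off is that the number of selected tokens becomes variable, so kernel launch parameters can no longer be fixed to exactly k.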
Summary: This paper reveals that key vectors lie in a significantly lower-dimensional space. Inspired by this finding, the author approximates the computation of the original attention score using PCA, then selects the top-k keys based on the approximate attention scores. Experiments across different models and datasets show that PCA-TopK can achieve speedups of up to 40% with minor reductions in generation quality. Strengths: 1. The author discovers that the key vectors in multi-head attention lie in a lower-dimensional space, which may inspire future work on sparse attention. 2. The experiments across various models and datasets indicate that PCA-TopK can achieve speed improvements with only minor reductions in generation quality. Weaknesses: 1. There are some typos (lines 44, 150) in this article, and some figures are unclear with text overlaps (Figures 2, 6). 2. There are few sparse-attention baselines in the experiments. How does the PCA-TopK method compare to SPAR-Q Attention? What is the trade-off curve between their acceleration ratio and effect? Technical Quality: 3 Clarity: 2 Questions for Authors: 1. The analysis that key vectors lie in a lower-dimensional space is interesting. Do value vectors and query vectors share similar characteristics? What about the vector after merging multi-head attention? 2. Given the differences in Rank@90 across layers, what is the impact of varying the policy depending on the layer? 3. If post-training with PCA-TopK, will it yield better performance? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and appreciate that they recognize the potential impact of our low-dimensional observation and the strength of our evaluation. ### **Weaknesses** **There are some typos (lines 44, 150) in this article, and some figures are unclear with text overlaps (Figures 2, 6).** We will fix these issues in the subsequent version of the paper. **There are few baselines about Sparse Attention in the experiments. How does the PCA-TopK method compare to SPAR-Q Attention? What is the trade-off curve between their acceleration ratio and effect?** SPAR-Q incurs significant overhead as it needs to store the keys in KV-Cache twice – once in a row-major and once in a column-major format. This is required to get performant kernels for their implementation. Our method does not incur such a massive memory overhead (the only overhead we have is of storing the PCA projections, which are comparatively much smaller). Additionally, our $Q.K^T$ kernels are more performant than SPAR-Q as compared in Appendix E, Figure 13 (in our paper). We have not evaluated SPAR-Q ourselves. Their paper does not have a trade-off curve between acceleration ratio and effect. The only commonly evaluated dataset they have is TriviaQA [1] where PCA-TopK also achieves good performance (Figure 4 in our paper). Their attention latency results can be found in Figure 8 of *their* paper while Figure 9 shows end-to-end performance but evaluated on CPU. We will include end-to-end quantitative comparison with their method in subsequent versions of the paper. ### **Questions** **The analysis that key vectors lie in a lower-dimensional space is interesting. Do value vectors and query vectors share similar characteristics? What about the vector after merging multi-head attention?** Figure 3 in the rebuttal pdf shows the dimensionality analysis for query and value vectors for Llama2-7B and Llama3-70B models. 
It can be seen that while query vectors exhibit low dimensionality similar to key vectors, value vectors have a significantly higher rank. We will include this analysis for more models in the updated supplementary material. **Given the differences in Rankl@90 across layers, what is the impact of varying the policy depending on the layer?** Figure 2 (Right) in the rebuttal pdf shows the evaluation of varying the $d_f$ parameter per layer based on the explained variance of the PCA components for Llama3-8B. We compare two policies: (1) Fixed $d_f$ for all layers (set to either 0.25 or 0.5), (2) Variable $d_f$ per layer. For the variable policy, we select the $d_f$ of a layer based on the explained variance threshold (ranging from 0.5 to 0.8). The compression ratio is defined as the average of $d_f/D$ across all layers, where $D$ is the full dimensionality of the vectors. We can see that using the variable policy does not show any benefit over the fixed policy. This indicates that different layers may require different variance thresholds as well, and therefore, tuning the variable policy is key to getting gains. Our evaluation on Llama2-13B also shows the same trend (plot not included due to space constraints). **If post-training with PCA-TopK, will it yield better performance?** PCA-TopK is a post-training method and only requires the PCA transforms to be calculated on a set of keys generated (in inference mode) over some calibration dataset. If the reviewer is suggesting adding a fine-tuning step for better performance, we think that is a good suggestion. In that paradigm, we also feel it might be more optimal to keep the model fixed and train a good transformation that minimizes the difference between the reduced dimensional and original attention scores. ---- ### **References** [1] Joshi, Mandar, et al. "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension." arXiv preprint arXiv:1705.03551 (2017). 
--- Rebuttal Comment 1.1: Comment: Thanks for the responses! I will keep my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response.
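For readers following the discussion, the PCA-TopK idea described in the review summary (fit a PCA projection on calibration keys offline, score tokens cheaply in the reduced dimension, then run exact attention only over the selected top-k) can be sketched in a few lines. This is a hedged NumPy illustration with a synthetic low-rank key distribution, not the authors' Triton implementation; all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d_f, n_ctx, k = 64, 16, 512, 64
mix = rng.standard_normal((8, D))

def make_keys(n):
    # Synthetic keys lying (almost) in an 8-dim subspace, mimicking the
    # low intrinsic dimensionality of attention keys observed in the paper.
    return rng.standard_normal((n, 8)) @ mix + 0.01 * rng.standard_normal((n, D))

# Offline: PCA projection fitted on calibration keys.
calib = make_keys(4096)
_, _, Vt = np.linalg.svd(calib - calib.mean(0), full_matrices=False)
P = Vt[:d_f].T                                   # D x d_f projection matrix

# Online: cheap approximate scores in d_f dims, exact attention on top-k only.
q, K, V = rng.standard_normal(D), make_keys(n_ctx), rng.standard_normal((n_ctx, D))
approx = (q @ P) @ (K @ P).T                     # d_f-dim dot products
topk = np.argpartition(approx, -k)[-k:]
s = q @ K[topk].T / np.sqrt(D)
w = np.exp(s - s.max())
out = (w / w.sum()) @ V[topk]                    # attention over k tokens only

exact_topk = np.argpartition(q @ K.T, -k)[-k:]
print(len(set(topk.tolist()) & set(exact_topk.tolist())) / k)  # high overlap
```

When the keys really are low-dimensional, the reduced-dimension ranking recovers almost the same token set as exact scoring, which is the crux of the compute-accuracy trade-off discussed above.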
Summary: The paper introduces a method for approximating attention in LLMs, with the benefit of improved inference efficiency. The insight is to focus on the dimensionality of the key vectors computed in the attention block. Principal component analysis reveals that the keys lie in a low dimensional space. This gives rise to the proposed method PCA-TopK, which uses PCA to compute approximate attention scores in a reduced dimension, and then selects the top-k tokens based on these scores, and compute the equivalent of full attention only for the selected tokens. The method is evaluated on multiple LLMs from the Llama family, Pythia, Mistral, Mixtral, Phi, and various datasets. The results show comparable performance to full attention while having significant speedups. Strengths: The study of low intrinsic dimensionality of attention keys across multiple models and datasets is insightful. Theoretical support is provided through lemmas and proofs. The evaluation includes several models, tasks and datasets, showing it could be generalized. The authors developed optimized Triton kernels for practical implementation. Weaknesses: The method does not seem to reduce the memory usage. Technical Quality: 3 Clarity: 3 Questions for Authors: For some models the pre-rotary PCA transforms outperform the post-rotary ones, which is intriguing. The authors acknowledge that they do not have a clear explanation, however if more understanding was developed it would be great to include it in the paper. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and appreciate that the reviewer finds our observation of the low intrinsic dimensionality of keys insightful and supported by theoretical analysis and extensive evaluation. ### **Weaknesses** **The method does not seem to reduce the memory usage** We agree that our method does not reduce memory usage. It is designed as an efficient Top-K token selector without deleting tokens from the KV-Cache (deleting tokens can incur a significant accuracy penalty, as seen with H2O). Our method can complement token-eviction methods like H2O by selecting Top-K tokens from a reduced KV-Cache. ### **Questions** **For some models the pre-rotary PCA transforms outperform the post-rotary ones, which is intriguing. The authors acknowledge that they do not have a clear explanation, however, if more understanding was developed it would be great to include it in the paper.** As noted, pre-rotary transforms outperforming post-rotary transforms is an intriguing observation for which we currently lack a clear explanation. We still have not been able to develop a concrete understanding of why this is the case. A naive intuition we have is that when computing the PCA transform over the post-rotary keys, it captures the distribution of token representations occurring at specific positions in the calibration dataset. During inference, the same token can appear at any other position. The pre-rotary transform captures the distribution of only tokens with significantly less positional information so it can generalize better. While Lemma 4.2 (in the paper) proves that using the post-rotary transform is a good approximation, it does not show that it is the best approximation due to the variational upper bound formulation. Hence, the pre-rotary transform might provide a better approximation.
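A Rank@90-style statistic like the one discussed in these reviews can be computed from a matrix of key vectors via the cumulative explained variance of its principal components. Here is a short sketch on synthetic low-rank data; the function name and the data are hypothetical, not from the paper.

```python
import numpy as np

def rank_at(X, threshold=0.90):
    # Smallest number of principal components whose cumulative explained
    # variance reaches `threshold` of the total variance of X.
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    explained = s**2 / (s**2).sum()
    return int(np.searchsorted(np.cumsum(explained), threshold) + 1)

rng = np.random.default_rng(0)
# Synthetic "keys": 128-dim vectors lying mostly in a 20-dim subspace.
X = rng.standard_normal((2000, 20)) @ rng.standard_normal((20, 128))
X += 0.05 * rng.standard_normal(X.shape)
print(rank_at(X))  # well below the ambient dimension of 128
```

Applied per layer and head to stored calibration keys, this is the kind of analysis that reveals the low intrinsic dimensionality the paper builds on.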
Summary: This paper proposes a sparse attention mechanism for large language models by leveraging the low-dimensionality of key vectors in the attention block. The approach ranks and selects tokens in the KV-cache based on attention scores computed in the reduced dimensional space, leading to significant speedups in attention computation without major sacrifices in model quality. Strengths: Empirical Validation: Extensive evaluations demonstrate significant speedups with minimal accuracy degradation. Theoretical Soundness: The use of low-rank keys is backed by robust theoretical analysis, enhancing the credibility of the approach. Weaknesses: Implementation Complexity: The method's complexity might pose challenges for achieving the theoretical speedups without specialized knowledge. Memory Footprint Consideration: While the approach excels in computation, it does not address memory footprint reduction, a limitation when compared to other sparse attention techniques. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Model Class and Size Performance Variance: How does Loki's performance vary with different model classes and sizes not included in the paper? 2. Integration with High-Efficiency Attention Mechanisms: How does the PCA-TopK method align with or complement high-efficiency attention mechanisms such as Flash Attention or Ring Attention? Can it be orthogonally integrated with these techniques, or are there specific challenges that need to be addressed? 3. Comparison with Other Sparsity Strategies: The paper does not compare with attention sparse strategies beyond H2O, such as SnapKV[1]. Could the authors discuss the positioning of PCA-TopK relative to these methods and possibly include comparative analysis in future work? 4. Figure Clarity: Regarding Figure 6 (left), it appears there might be a compression issue. Could the authors verify the image quality and ensure that it is legible in subsequent versions of the paper? 5. 
Information Loss Management: How does PCA-TopK handle the potential loss of information when reducing the dimensionality of key vectors, especially in the context of Rotary Positional Embeddings (RoPE) which increase dimensionality? 6. Combination with Compression Techniques: How does PCA-TopK interact with other model compression techniques, and is there a combined approach that could yield better results? 7. Handling of Longer Sequences: How does Loki handle sequence lengths longer than those in the evaluation datasets? 8. Quantization Integration: Can the PCA-TopK mechanism be integrated with other optimization techniques like quantization? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and appreciate that the reviewer recognizes the strength of our extensive empirical evaluation backed by robust theoretical analysis. ### **Weaknesses** **Implementation Complexity**: We agree that achieving speedups with our method requires custom Triton kernels. We will open-source our kernels for wider use. Integrating with techniques like Flash Attention would involve a more complex implementation, as is the case with most modern attention methods. **Memory Footprint**: We agree that our method does not reduce memory usage. It is designed as an efficient Top-K token selector without deleting tokens from the KV-Cache (which can incur a large accuracy penalty as seen with H2O). Our method can complement token-eviction methods like H2O by selecting Top-K tokens from a reduced KV-Cache. ### **Questions** **Model Class and Size Performance Variance: How does Loki's performance vary with different model classes and sizes not included in the paper** Appendix B, Figure 11 and Tables 3 & 4 (in the paper) show perplexity and downstream evaluation of our method on models of different model classes covering dense and MoE models. Further, for each class, we show results for models of different sizes (7B, 8B, 13B, 70B, 8x7B, 8x22B). Appendix A, Figure 7 (in the paper) shows the dimensionality analysis on an even larger set of model classes and sizes. **Integration with High-Efficiency Attention Mechanisms: How does the PCA-TopK method align with or complement high-efficiency attention mechanisms such as Flash Attention or Ring Attention? Can it be orthogonally integrated with these techniques, or are there specific challenges that need to be addressed?*** Integrating PCA-TopK with Flash Attention (FA) or Ring Attention is theoretically possible, but performance challenges, especially with the Top-K operation, must be addressed to achieve speedups. 
While we haven't resolved all integration issues, we have a rough algorithm for FA integration: 1. Load the query into shared memory 2. Additional Loop: - Load the first $d_f$ dimensions of the keys into shared memory - Compute the approximate scores for each block of keys and update top-k indices 3. Original FA Loop: Load blocks of top-k keys (based on the indices computed) with full dimensionality and follow the FA algorithm. **Comparison with Other Sparsity Strategies: The paper does not compare with attention sparse strategies beyond H2O, such as SnapKV[1]. Could the authors discuss the positioning of PCA-TopK relative to these methods and possibly include comparative analysis in future work?** Methods like H2O and SnapKV [1] delete tokens to save KV-Cache memory. In contrast, PCA-TopK selects the Top-K tokens from the entire KV-Cache to reduce memory bandwidth without deleting tokens. Hence, PCA-TopK is orthogonal to H2O/SnapKV. A combined strategy could involve deleting tokens with H2O/SnapKV, and then selecting Top-K tokens from the retained cache. We will include a comparison of standalone PCA-TopK with other methods apart from H2O in future work. **Figure Clarity: Regarding Figure 6 (Left), it appears there might be a compression issue. Could the authors verify the image quality and ensure that it is legible in subsequent versions of the paper?** We will fix Figure 6 (Left) in the subsequent versions of the paper. **Information Loss Management: How does PCA-TopK handle the potential loss of information when reducing the dimensionality of key vectors, especially in the context of Rotary Positional Embeddings (RoPE) which increase dimensionality?** PCA-TopK is an approximate method and there is information loss when reducing the dimensionality. We try out different values of $d_f$ (reduced dimensionality) and find a good tradeoff between compute performance and accuracy. We also pick the best transform between post-rotary and pre-rotary. 
Figure 2 in the paper shows that RoPE indeed increases the dimensionality but it is still low enough that settings like $d_f = 0.25$ (25% dimensionality) work well enough for a good compute-accuracy tradeoff (Figure 3, 6 in the paper). **Combination with Compression Techniques: How does PCA-TopK interact with other model compression techniques, and is there a combined approach that could yield better results?** We discuss how our method can be used with token-eviction methods as a response to Question 3. PCA-TopK can theoretically be used with quantization, as PCA-TopK reduces dimensionality while quantization reduces bits per value. Model pruning [2] reduces model parameters, which is orthogonal to selecting tokens from the KV-Cache. We are unsure if pruning affects the low dimensionality of key vectors and leave that analysis for future work. We have not quantitatively analyzed combining PCA-TopK with other approaches and are unsure of the practical performance. **Handling of Longer Sequences: How does Loki handle sequence lengths longer than those in the evaluation datasets?** Figure 1 in the rebuttal pdf shows the evaluation of PCA-TopK on the LongBench [3] long-sequence benchmark for the Llama-2-7b-Chat model. We have included a discussion on the same in the Global Rebuttal. **Quantization Integration: Can the PCA-TopK mechanism be integrated with other optimization techniques like quantization?** We discuss how our method can be used with quantization as a response to Question 6. ----- ### **References** [1] Li, Yuhong, et al. "Snapkv: Llm knows what you are looking for before generation." arXiv preprint arXiv:2404.14469 (2024). [2] Han, Song, et al. "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding." arXiv preprint arXiv:1510.00149 (2015). [3] Bai, Yushi, et al. "Longbench: A bilingual, multitask benchmark for long context understanding." arXiv preprint arXiv:2308.14508 (2023). 
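The two-phase scoring used in the FA-integration sketch above (approximate scores from the first $d_f$ fraction of key dimensions, then exact attention over only the selected tokens) can be illustrated with a minimal NumPy sketch. This is our own hedged illustration, not the paper's Triton implementation; the function name `pca_topk_attention`, the assumption that keys are already in a variance-ordered PCA basis, and all shapes are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_topk_attention(q, K, V, d_f=0.25, k_f=0.25):
    """Two-phase Top-K attention sketch. Assumes K is already expressed in
    a PCA basis whose leading dimensions carry most of the variance, so
    approximate scores can use only the first d_f fraction of dimensions."""
    n, d = K.shape
    d_red = max(1, int(d * d_f))           # reduced key dimensionality
    k = max(1, int(n * k_f))               # number of tokens kept

    # Phase 1: cheap approximate scores from the leading dimensions only.
    approx = K[:, :d_red] @ q[:d_red]
    topk = np.argsort(approx)[-k:]         # indices of the Top-K tokens

    # Phase 2: exact softmax attention restricted to the selected tokens.
    scores = K[topk] @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[topk]

q = rng.standard_normal(64)
K = rng.standard_normal((128, 64))
V = rng.standard_normal((128, 64))
out = pca_topk_attention(q, K, V)          # attends to 32 of 128 tokens
```

A handy sanity check on the sketch: with `d_f=1.0` and `k_f=1.0` it reduces to ordinary full softmax attention.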
--- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, and I will increase my rating to 6. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response and the score increase.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback. ### **PCA-TopK's performance on tasks with longer contexts** We ran the LongBench [1] long-sequence benchmark for the Llama2-7B-Chat model with PCA-TopK and compared its performance with Full Attention in Figure 1 of the rebuttal pdf. Figure 1 (Top-Left) shows that PCA-TopK performs well on all LongBench task categories for $(k_f = 0.25, d_f = 0.25)$, with Multidoc-QA having the most significant accuracy drop from ~22% to ~18%. Figure 1 (Bottom) shows a finer-grained evaluation on all LongBench tasks with two configurations: $(k_f = 0.25, d_f = 0.25)$ and $(k_f = 0.125, d_f = 0.5)$. For this particular model, we see that, on average, the post-rotary transform is better than the pre-rotary one, and $(k_f = 0.25, d_f = 0.25)$ is the better configuration. These observations are consistent with the evaluation of the tasks in the paper. To examine the accuracy vs. performance trade-off, we plot overall LongBench accuracy along with attention latencies (computed in a micro-benchmark) for different configurations of $k_f, d_f$ in Figure 1 (Top-Right). Attention times were computed with a prompt length of 3500 and a generation length of 512 to match LongBench's values for the Llama2-7B-Chat model. Due to the slow cache updates in HuggingFace, we lack an end-to-end framework but plan to integrate our attention method with an inference framework like vLLM soon. Nonetheless, our method shows potential for up to 40% attention speedups with minimal accuracy degradation in long and short-context tasks. We will include other models supported by LongBench in the camera-ready version of the paper, if accepted. ### **Memory usage** We acknowledge the limitation pointed out by the reviewers that our method does not reduce memory usage. Our method efficiently selects Top-K tokens from the KV-Cache without deleting tokens. As we have demonstrated with H2O, deleting tokens can have significant accuracy penalties. 
PCA-TopK can be used in conjunction with memory reduction methods such as token-eviction or quantization, and a combined approach may lead to better memory, performance, and accuracy trade-offs. ### **Dimensionality of query/value vectors and Variable $d_f$ policy** We analyzed the dimensionality of query and value vectors (Figure 3 in the rebuttal pdf), finding that while query and key vectors share the low-dimensional observation, value vectors do not. These findings can inspire further research into LLM properties and sparsity exploitation for improved performance. We also experiment with a variable policy for setting $d_f$ per layer instead of a fixed policy across all layers. For the variable policy, we set $d_f$ of every layer based on an explained variance threshold (varied from 0.5 to 0.8). Figure 2 (Right) in the rebuttal pdf shows the results of this evaluation on Llama3-8B, illustrating that the variable policy is no better than the simple fixed policy. We also evaluate on Llama2-13B and see a similar trend (plot not included in the pdf due to lack of space). ---- ### **References** [1] Bai, Yushi, et al. "Longbench: A bilingual, multitask benchmark for long context understanding." arXiv preprint arXiv:2308.14508 (2023). Pdf: /pdf/46d6ee6f03345427c876103c57cf3109288abc10.pdf
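The variable-$d_f$ policy described above (setting each layer's reduced dimensionality from an explained-variance threshold) could be sketched as follows. The function `df_from_variance` and the synthetic low-rank keys are illustrative assumptions on our part, not the authors' code.

```python
import numpy as np

def df_from_variance(keys, threshold=0.8):
    """Pick the reduced dimensionality for one layer as the smallest number
    of principal components whose cumulative explained variance reaches
    `threshold` (the variable policy discussed above; names are ours)."""
    X = keys - keys.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)   # singular values of centered keys
    var = s ** 2
    cum = np.cumsum(var) / var.sum()
    return int(np.searchsorted(cum, threshold) + 1)

rng = np.random.default_rng(1)
# Synthetic low-rank keys: most variance concentrated in 8 directions,
# mimicking the low-dimensionality observation for key vectors.
keys = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 64))
d_f = df_from_variance(keys, threshold=0.8)
```

Raising the threshold keeps more components per layer, which trades compute for accuracy in the same way as raising a fixed $d_f$.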
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs
Accept (poster)
Summary: The paper proposes a problem specification (OPTO) and a solution (optimizer OptoPrime) as well as a software framework (Trace) for agentic program optimization. The paper demonstrates high performance of the bundle above in solving 5 benchmarks. The paper demonstrates the ability to optimize 3 types of parameters: numeric, text prompts, and python code. The paper proposes Trace as an analogue of imperative-style AutoDiff engines like PyTorch for agentic programs relying on (black box) LLM inferences. The evaluated benchmarks cover a reinforcement learning setting (Battleship, Traffic control and Robot manipulation), scalar regression (Numerical optimization) and question answering (classification and more in BigBenchHard). Strengths: [1] The authors do a great job in demonstrating the operational performance of Trace on 5 examples: Battleship, Numerical optimization, Traffic control, BigBenchHard, and Robot manipulation. [2] Great to see Trace compared with both traditional optimizers and the recent LLM-based (OPRO) in Figure 5b. [3] It is nice to see that the proposed method is designed to generalize to several types of payload (demonstrated 3: numeric values, text and code). [4] Experiments in “5.3 Unifying Prompts and Functions Optimization” neatly showcase joint optimization of prompts and code which is a clear novelty, even though one may argue that a text prompt and a text of a piece of code are both textual information. Good to see that the optimization works even without the history like Yang 2024 (OPRO) or population like Pryzant 2023 (ProTeGi). [5] The learning seems very sample-efficient with the reported scores achieved in just several single-sample iterations. [6] After running exp/run_prompt_bigbench.py I confirm the reproducibility of the results on BBH. Weaknesses: [1] The paper does not compare with “TEXTGRAD: Automatic Differentiation via Text” (arXiv:2406.07496v1) which is a concurrent work. 
[2] The narrative goes from describing a piece of software (Trace) to the problem specification (OPTO) and then to the algorithmic solution (OptoPrime). I would expect the problem specification and the proposed algorithm to be explained first while the software details are discussed later or even in the Appendix. [3] The experiment in 5.3 confirms that the proposed method works in principle, however due to the small scale of the optimized program (3 parameters), there is no clear signal that Trace will scale up for 10s or 100s of trainable parameters. [4] The definition of a trace (line 126) is not given. I am familiar with the verb “to trace” with respect to a symbolic program, however it is unclear what “a trace” means. From a random web page (https://www.ituonline.com/tech-definitions/what-is-an-execution-trace/): Definition: Execution Trace An execution trace is a record of the sequence of operations executed by a program during its run. This trace includes function calls, variable values, and the flow of control among other details. This definition is not mathematically clear. [5] The work lacks the analysis of Figure A.12 where the learned prompt template does not make much sense. [6] Running the baseline exp/run_prompt_bigbench_dspy.py fails. [7] The paper does not demonstrate the optimization success in agentic programs with memory, i.e. when the optimized class like Predict below has internal state that changes from call to call: ``` @trace_class class Predict(LLMCallable): ``` Technical Quality: 4 Clarity: 4 Questions for Authors: [1] On lines 79-80 you claim that “Remarkably, there is no mention of Battleship nor details on how the functions reason and act should behave or adapt in Fig. 2a”. However in Appendix H, Iteration 0 (Initialization) you provide relatively detailed instructions: “Given a map, analyze the board in a game. On map, O denotes misses, X denotes successes, and . denotes unknown positions.”. 
Since the Battleship game is a very well-known one, ChatGPT could reproduce the code memorized from the public repositories. How do you assess the risk of this type of leak? [2] Where can an example of prompt optimization of an LLM agent as per line 767 be found? [3] I appreciate quoting “Complete” on Figure 2a as the example is not clear without `__call__()` and `select_coordinate()` that can only be found in the supplementary materials in exp/battleship_exp.py -> Policy2. I suggest replacing it with “An excerpt from”. [4] In Figure 3 it is not clear how g1, g2 and g3 can be different since the connectivity of the graph is defined by the optimized torch-style module, specifically a chain of 2 optimizable nodes Reason and Act in the example. [5] It would be valuable to know the USD cost of OpenAI API usage per experiment. [6] On the technical side, the Python version is not mentioned, and datasets and ray are not on the requirements list. However I appreciate providing OAI_CONFIG_LIST_sample. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: [1] The authors honestly admit that one of the main limitations of the current approach is LLM context length. Indeed, scaling is limited because the proposed algorithm packs the entire trace into a single context for LLM inference inside OptoPrime. [2] Scaling of the proposed framework to more learnable parameters and more sophisticated agentic programs is yet to be demonstrated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and questions. # Comparison with TextGrad Please note that we submitted Trace to NeurIPS on May 15 2024, and TextGrad was uploaded to arXiv and Github on June 11 2024. So a direct comparison was impossible. # Narrative We considered the narrative you suggested and decided that we needed a very strong motivation for an entirely new approach to optimization (Section 1.3). We found that showing the Trace software in action first was the fastest way to demonstrate that this kind of optimization (OPTO) is feasible, practical and powerful. # Limitation on Scalability We agree that scalability is a limitation of OptoPrime, as discussed in Sec 6 and Sec 7. We wish to clarify that this limitation pertains specifically to OptoPrime and not to Trace itself. Trace is designed to scale efficiently to large graphs with minimal overhead (see line 242) and can handle non-textual nodes. We are inspired by the historical development of back-propagation, starting from optimizing networks with 10s of neurons (in the Rumelhart et al paper in 1986) to billions today. We anticipate better and scalable algorithms for OPTO developed in the future. # Definition We will include a clear definition of an execution trace. The execution trace is defined as the sequence of operations and their execution results invoked when computing the output from a set of inputs. This execution trace can be expressed as a computational graph defined in Preliminary. We will clarify that the DAG g is the computational graph defined above. # Analyzing learned prompts We will highlight the learned prompts in the revised Sec 5. One lesson from several automatic prompt engineering works is that human intuition is not a good guide for prompt engineering. 
Seemingly innocuous (and perhaps unreasonable) changes to a prompt template (like that in Figure A.12) can have large effects on an LLM’s behavior, thus motivating the need for their automatic optimization e.g. via Trace. # Bugs in Running DSpy baseline Thank you for running our code. Unfortunately, the code uploaded to OpenReview included a stale version of the DSPy script. We apologize for the oversight. We provide a snippet of our correct setup in the one-page pdf. We will update the supplementary material. # Optimizing stateful agents Optimizing stateful agents is similar to optimizing recurrent neural networks; we need to explicitly represent the state as inputs to the learned functions (discussed in line 193). Our experiments on Meta-World and Battleship are examples of stateful problems. In these problems, the environment is stateful and that state is returned as input to the functions that Trace is learning; these graphs are similar to those of optimizing an agent with an internal state. We hope these experiments are sufficient to address the reviewer’s concern. # Battleship game We independently found this error after the paper submission. The exact function definition used in the experiments is given in Appendix H, which is consistent with the code we submitted. You are right that Figure 2a over-simplifies important details about the function docstring and the current text in line 79 is misleading. We will change it to “Remarkably, there is no mention of Battleship environment APIs nor details on how the functions reason and act should behave or adapt in Fig. 2a”. We agree that GPT4 has likely been trained on code about the battleship game. But it does not know the API of the Battleship game (because we coded it up from scratch). If GPT4 did know how to solve the problem we presented, it could have solved it in the first iteration after one update, but that is not what we observed (Figure 1). 
This gap between the performance after one update and after multiple updates indicates the need for learning from feedback and interactions. We will use “excerpt” in the revision to clarify the code snippets in Figure 2. # LLM Prompt Optimization Example An example of an LLM agent as per line 767 can be found in a new experiment that we conducted for the [virtualhome](http://virtual-home.org/) environment. This experiment is included in the one-page PDF for the rebuttal, including the code and figure. Virtualhome is a collaborative environment that requires two agents to work together to solve household tasks. Trace is asked to optimize and update a specific part of the prompt, which is the plan for future actions. Prior work (Guo et al., "Embodied LLM Agents Learn to Cooperate in Organized Teams", 2024) forces agents to have a round of conversation before they start the task. We show that Trace allows agents to have naturally emerging pro-social behaviors for some tasks (such as “putting plates into the dishwasher”), but not others (such as “reading a book”). # Graphs in Fig 3 Figure 3 is an illustration of a general OPTO problem setup. In the Battleship example, you are correct that the graph is the same in every iteration. But, for a general optimization problem, the graph structure can be different, e.g. different parameters changing program flow during the forward pass, or simply because the execution is stochastic. Meta-World is an example where the graph structure can be different across iterations. The graph is a chain describing the multi-step interactions with the environment, and each episode ends either when the robot successfully solves the problem or when timeout happens. Therefore, iterations with successful episodes can have a shorter chain than those that fail and time out. # Token Cost The cost depends on the graph size and the tokens required to describe the problem. 
Running OptoPrime with GPT-4-Turbo for the most complex experiment in the paper, MetaWorld, costs <$30 USD for one task (over 10 seeds, 30 iterations). Costs of other experiments are a fraction of this. # Technical Details We currently require Python>=3.8, and we will update the setup.py to add the missing dependencies. Thank you for your feedback and running the code! --- Rebuttal Comment 1.1: Comment: Thank you for addressing the weaknesses and questions. I intend to keep my score.
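For readers unfamiliar with execution traces, the definition promised in the rebuttal above ("the sequence of operations and their execution results invoked when computing the output from a set of inputs", expressible as a computational graph) can be made concrete with a toy sketch. The `Node` class below is a hypothetical, PyTorch-style minimal recorder of our own, not the actual Trace API.

```python
class Node:
    """Minimal, hypothetical recorder of an execution trace (not the real
    Trace API): every operation stores its inputs and result, so the full
    computation is recoverable afterwards as a DAG of Node objects."""

    def __init__(self, value, op=None, parents=()):
        self.value, self.op, self.parents = value, op, tuple(parents)

    def _wrap(self, other):
        return other if isinstance(other, Node) else Node(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Node(self.value + other.value, "add", (self, other))

    def __mul__(self, other):
        other = self._wrap(other)
        return Node(self.value * other.value, "mul", (self, other))


def trace(node, depth=0):
    """Render the trace rooted at `node` as indented 'op -> result' lines."""
    label = node.op or f"input({node.value})"
    lines = ["  " * depth + f"{label} -> {node.value}"]
    for p in node.parents:
        lines.extend(trace(p, depth + 1))
    return lines


x, y = Node(3), Node(4)
out = (x + y) * 2          # the DAG is built while the program executes
```

As in the rebuttal's description, the graph is constructed dynamically during execution, so different inputs or control flow can yield different graph structures.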
Summary: This paper proposes an end-to-end optimization framework, Trace, for the automatic design and updating of artificial intelligence systems. Trace is based on Optimization with Trace Oracle (OPTO), treating the computational workflow of AI systems as a graph of neural networks, which can be updated via backpropagation. Additionally, this paper introduces OptoPrime, a general optimizer based on large language models (LLM), as a specific implementation of Trace. In the experimental section, the paper compares Trace with the state-of-the-art LLM optimizer OPRO across various tasks, including numerical optimization, traffic control, and robotic control. The results demonstrate the superior performance of Trace. The primary contribution of this work lies in modeling computational workflows as OPTO problems and designing Trace + OptoPrime to address these issues. Strengths: 1. Clarity of Writing: The structure of the article is clear, the language is fluent, and it is easy to understand. 2. Novelty: This paper introduces a novel end-to-end optimization framework, Trace, which views the computational process as a graph and utilizes execution trace information to optimize parameters. Compared to traditional black-box optimization methods, this approach offers higher efficiency and greater interpretability. Weaknesses: 1. Limited Scalability: The OptoPrime optimizer mentioned in the paper has some scalability limitations, such as difficulty in handling parameters or nodes that cannot be represented textually, and challenges in dealing with computational workflows that contain a large number of nodes. 2. The graph for Trace requires manual design, lacking automated methods. 3. Limited Experimental Improvement: The experimental section shows limited improvement, with the differences between Trace and OPRO not being particularly significant. 
Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Difference Between Graph Optimization and Individual Node Optimization: Graph optimization considers the entire computational workflow as an interconnected system, optimizing parameters in a holistic manner, which can lead to more coordinated and efficient results. In contrast, optimizing individual nodes treats each component in isolation, potentially missing out on interactions between nodes. However, the paper lacks experimental evidence to support the effectiveness of Trace in performing graph optimization over individual node optimization. 2. Discrepancy Between Ablation Studies in Figure 5 and Figure 6: The conclusions drawn from the ablation studies in Figure 5 and Figure 6 differ, raising questions about consistency. 3. Length of Trace's Prompts Compared to OPRO and Token Efficiency: The prompts used by Trace are longer than those used by OPRO. It is important to quantify this difference in length to assess its impact on token efficiency. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and questions. # Limited Scalability We agree that scalability is a limitation of OptoPrime, as discussed in Section 6 (Limitations) and Section 7 (Conclusion) where we note the current focus on textualizable problems. However, we wish to clarify that this limitation pertains specifically to OptoPrime and not to Trace itself. Trace is designed to scale to large graphs with minimal overhead (see line 242) and can handle non-textual nodes. OptoPrime's limitation arises from converting the propagated Trace graph into a single query that fits in the context window for a current-gen LLM. As its name suggests, OptoPrime is a first step in solving OPTO problems. We are inspired by the historical development of back-propagation, starting from optimizing networks with 10s of neurons (in the Rumelhart et al paper in 1986) to billions today. We anticipate better and scalable algorithms for OPTO developed in the future, such as using multi-agent workflows to scale to large graphs and VLMs to interpret multi-modal parameters and feedback. # Lacking an automated method We wish to clarify that Trace constructs the computational graph automatically (line 183) while the program executes (dynamically like in PyTorch), rather than requiring users to pre-define it or pre-compile it (statically like in Tensorflow 1). This means that the created graph changes automatically based on the workflow process, with different inputs potentially resulting in different graph structures. In Section 3.1, we discuss how to abstract the workflow using @bundle operators. This abstraction is different from manually defining the graph, which may be the source of the confusion. The @bundle decorator is an optional feature that allows users to simplify the workflow’s text representation (e.g. to reduce input token costs), enabling LLMs to better understand and optimize the workflow. 
# Limited Experimental Improvement We respectfully disagree with the comment on the performance gap between OptoPrime and OPRO. In almost all experiments (Figure 1, 5b, 6a, 6b), OptoPrime is a significant improvement over OPRO, e.g. 2x-4x improvement in success or rewards. Only the experiment in Figure 6c shows the different algorithms’ performance is within the error margin. # Difference between Node and Graph Optimization Consider the following example that shows why optimizing over a graph is better than optimizing only individual nodes.

```python
@bundle()
def function1(x):
    return x > 0

@bundle()
def function2(y):
    return y % 2 == 0

def xor_test(x, y):
    return function1(x).neq(function2(y))

input1 = node(3, trainable=True); input2 = node(4, trainable=True)
xor_test(input1, input2).backward(feedback="Find a set of inputs to make the return True.")
```

When we optimize an individual node, we only see that node's input, the function output, and the feedback. We do not see the other inputs and how they can affect the outcome. When we optimize for the full graph however, we can see that each input only partially affects the outcome, and we need to jointly optimize both inputs to achieve a desired outcome. # Discrepancy between Fig 5 and Fig 6’s Conclusions In all the experiments, OptoPrime with memory (denoted as Trace) performs better than OptoPrime without memory (denoted as Trace NoMem) and OptoPrime with memory but with the execution trace info removed from the prompt (denoted as Trace Masked). We noticed that the current writing is not clear about what each method means, which may cause confusion. Across the different ablations and Figure 5 and 6, we consistently see that memory improves performance, and masking the execution trace information hurts performance. We will better clarify the ablations in the revision. # Token Efficiency Thank you for the excellent suggestion, we will include token counts for the OPRO and OptoPrime prompts in the paper. 
Here are the statistics for the prompts at the first iteration of optimization (note OPRO's token usage grows with iterations):

| Domain | OPRO | OptoPrime |
| -------- | ------- | ------- |
| Numerical Opt | 175 | 918 |
| BigBench-Hard | NA | 1883 |
| Traffic Opt | 198 | 1679 |
| MetaWorld | 470 | 7101 |
| Battleship | 437 | 1305 |

We can see that indeed OptoPrime consumes significantly more tokens than OPRO. However, we observe consistently that even allowing 7-10x more iterations of OPRO so as to equalize token costs, the OPRO performance plateaus to a worse level than OptoPrime (e.g. Figure 1: OPRO at Iter 7 vs. OptoPrime at Iter 2; Figure 5b: OPRO at Iter 50 vs. OptoPrime at Iter 5; Figure 6b: OPRO at Iter 30 vs. OptoPrime at Iter 10, etc.). OPRO is suboptimal not due to a token limit but instead a lack of information, which is captured and represented using Trace.
Summary: The paper introduces Trace, a novel optimization framework that instantiates the concept of Optimization with Trace Oracle (OPTO). In Trace, computational workflows are treated as dynamic graphs, and rich information, including intermediate results, processing details and the computational graph, is used as feedback for optimization instead of traditional gradients. The framework includes a general-purpose optimizer, OptoPrime, to solve the OPTO problem. In the experiment section, Trace is shown to be comparable to the first-order gradient optimizer on a numerical optimization task, and to outperform baseline LLM-based methods across a wide range of tasks including hyper-parameter tuning, robot controller design, etc. Additionally, Trace offers a Python interface that can integrate seamlessly with PyTorch. Strengths: 1. I think using execution traces instead of gradients for optimization is quite innovative. This allows for the optimization of workflows that are non-differentiable or have dynamic computation graphs. 2. This framework can be applied to a wide range of tasks, including robotics and AI systems. 3. Trace has shown superior performance compared to other LLM-based optimizers and has demonstrated results comparable to those of traditional gradient-based optimization methods. 4. The Python interface for Trace simplifies integration with existing codebases. Weaknesses: As someone outside of this field, I find this paper to be quite impressive. I did not identify any specific weaknesses. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Could Trace be adapted for use with other programming languages, and what would such an adaptation entail in terms of architectural changes? Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and question. Yes, in principle, Trace can indeed be adapted for use with other programming languages. The core design of Trace is based on the primitives node and @bundle, which define the nodes and operators, respectively, for the directed acyclic graph (DAG) abstraction of the traced computational process. Once the DAG is created in any programming environment, the algorithms used in Trace can be applied. In our current implementation, we overloaded Python’s magic methods to seamlessly integrate with existing Python code. This approach may not be feasible in other programming languages due to the limitations of operator overloading, which could result in a less clean interface. Nonetheless, by building a set of operators using the idea of @bundle, we can create DAGs to abstract the computational process in other languages. One example demonstrating the feasibility of such an adaptation is the C++ versions of AutoDiff libraries like PyTorch (which are also based on DAGs). Therefore, we believe that the DAG-based design that Trace employs can be effectively adapted to other programming languages. However, we acknowledge that developing Trace libraries for other languages can require non-trivial engineering. We hope that the impressive results demonstrated in our paper will inspire future development to adapt Trace to a broader range of programming environments. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response and congratulations on your excellent work!!
null
null
Rebuttal 1: Rebuttal: Thank you for reviewing this paper. This PDF contains figures and codes of the new virtual home experiments and a correction on the submitted code for running DSpy baseline, which is for addressing Reviewer GPUE's questions. Virtualhome is a collaborative, stateful environment that requires two LLM agents to work together to solve household tasks. Trace is asked to optimize and update a specific part of the prompt, which is the plan for future actions. Prior work (Guo et al., "Embodied LLM Agents Learn to Cooperate in Organized Teams", 2024) forces agents to have a round of conversation before they start the task. We show that Trace allows agents to have naturally emerging pro-social behaviors for some tasks (such as “putting plates into the dishwasher”), but not others (such as “reading a book”). Pdf: /pdf/d6695c02cfbbb5e192b901e228ea34475b457929.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
3-in-1: 2D Rotary Adaptation for Efficient Finetuning, Efficient Batching and Composability
Accept (poster)
Summary: This paper proposes a 2D rotation adaptation method called RoAd for efficiently fine-tuning large language models (LLMs). RoAd achieves parameter-efficient fine-tuning by rotating representations. Experimental results demonstrate that RoAd performs excellently across multiple benchmarks, reducing the number of training parameters and computational overhead. Strengths: 1 The method proposed in this paper is simple but efficient. 2 This method performs well on small-scale language models, achieving better performance with fewer or comparable parameters. Weaknesses: 1 The paper claims in the abstract that one of the main scenarios it addresses is multitasking. However, the authors mainly illustrate this through qualitative experiments in section 4.3, which seems unconvincing. It is suggested that the authors refer to ATTEMPT [ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts] and compare it in the same scenario to fully demonstrate RoAd's advantages in multitask learning. 2 There is some confusion about the logical flow of the paper, especially in chapter 2. It seems that sections 2.2 and 2.3 are not closely related to section 2.1. 3 The method's improvement on large-scale language models is relatively limited. As shown in Table 3, RoAd2/4 performs the same as LoReFT when using more parameters. This needs to be explained. 4 It is suggested that the authors add a main diagram to describe the method. 5 Figure 3 Middle shows that as the generated tokens increase, RoAd's throughput decreases rapidly, while LoRA does not show this trend. Will RoAd's throughput be lower than LoRA's when generating longer texts? Furthermore, what are the respective parameter amounts for LoRA and RoAd here? 6 This paper seems to have similarities with SSF [Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning], as both methods fine-tune by adjusting output representations. 
Technical Quality: 3 Clarity: 2 Questions for Authors: see Weaknesses. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: In the Limitations section, the authors mentioned scalability and training parameters. Since the primary comparison in this paper is with LoRA, could the authors reduce LoRA's parameter count to the same level as the method proposed in this paper and then conduct the relevant comparative experiments? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review. We are really encouraged by the highlights: 1. Our proposed method, RoAd, is **simple but efficient**. 2. RoAd **performs excellently across multiple benchmarks, reducing trainable parameters and computational overhead**. 3. RoAd **performs well** on small-scale (100M-13B) LMs, **better performance with fewer or comparable parameters**. $~$ --- We address your concerns below: > **W1. Compare RoAd to ATTEMPT for demonstrating its multitasking ability.** Thank you for this valuable suggestion. Please refer to **Table A.3 (Row 11-13)** in the uploaded PDF for detailed results. **Summary: RoAd exhibits strong multitasking capabilities, surpassing ATTEMPT with a similar number of trainable parameters.** - **Setup:** Adopting the setup of ATTEMPT, we concatenate all GLUE tasks for finetuning. For each task, RoAd uses a unique $R$ for each linear layer. The first half of the blocks in $R$ are shared across tasks, while the second half are updated only when the corresponding task is encountered. $~$ > **W2. Confusion about the logical flow of the paper. I.e. Subsections in §2 are not closely related.** Thank you for the opportunity to clarify our motivation. Section 2 serves as a background introduction: - §2.1: Discusses related PEFT methods. - §2.2: Highlights the batching challenge in efficiently deploying multiple adapters for various tasks/purposes/users. - §2.3: Highlights the challenge of interpretability, which is crucial for LLMs but not necessarily tied to PEFT. Existing works often focus on one or two of these challenges. However, our proposed method, RoAd, tackles all three (PEFT, batching, and interpretability) with a unified approach. $~$ > **W3. Relatively limited improvement on large-scale LLMs.** E.g. in Table 3, RoAd$_{2/4}$ performs the same as LoReFT while using more parameters. We politely disagree with this assessment. 
To clarify, below we summarize the performance on large-scale LLMs from our paper, including LoReFT and RoAd's variant with a similar number of trainable parameters. **Summary: Overall, in seven out of eight results (Table 2, 3, 4, and 13 in paper), RoAd demonstrates superior performance with the minimal level of trainable parameters. While RoAd shows comparable performance to LoReFT on commonsense reasoning, it significantly outperforms LoReFT on knowledge-intensive tasks, i.e. arithmetic reasoning.** For commonsense, since its domain is closely aligned with pretraining data, slight adaptation is good enough. $~$ Table R.1. Accuracy on commonsense. | Model | Method | #Params (%) | Avg. | | :--- | :--- | ---: | ---: | | LLaMA-7B | LoReFT | 0.03 | **80.2** | | | RoAd$_2$ | 0.04 | **80.2** | | - | - | - | - | | LLaMA-13B | LoReFT | 0.03 | 83.3 | | | RoAd$_2$ | 0.03 | **83.8** | Table R.2. Accuracy on arithmetic. | Model | Method | #Params (%) | Avg. | | :--- | :--- | ---: | ---: | | LLaMA-7B | LoReFT | 0.03 | 42.6 | | | RoAd$_1$ | 0.02 | **44.0** | | - | - | - | - | | LLaMA-13B | LoReFT | 0.03 | 49.6 | | | RoAd$_1$ | 0.02 | **51.9** | $~$ > **W4. Add a main diagram to describe the method**. Thank you for this great suggestion. We add an overview diagram in **Figure A.1(c)**. $~$ > **W5. Batching efficiency of RoAd for longer (>4K) generation.** Thank you for your insightful suggestion. We extend the generated length to 8K (as our GPU resources do not support lengths beyond this) and present the results in **Figure A.1(b)**. **Results: While RoAd's throughput decreases more sharply than LoRA's with an increasing sequence length, it remains significantly higher than LoRA's.** RoAd's throughput can be seen as the upper limit of LoRA's throughput for batching, as RoAd functions like LoRA with a rank size of 1. LoRA's rank here is 8, corresponding to approximately 0.20% parameters. 
This is a moderate setting; for our finetuning experiments on commonsense and arithmetic reasoning tasks, LoRA's rank is 32, while RoAd's trainable parameters are equivalent to LoRA with a rank size of 1 (about 0.03%). $~$ > **W6. Clarification of the similarity with SSF.** Thank you for bringing this related work to our attention; we will include it in §2.1. **Summary: RoAd's methodology differs significantly from SSF and demonstrates superior performance.** - **Difference**: To illustrate, let's assume the hidden size is two, i.e., $h = [h_1, h_2]$. SSF adapts $h$ as $z = \gamma \odot h + \beta = [\gamma_1 h_1 + \beta_1, \gamma_2 h_2 + \beta_2]$, showing no interaction between $h_1$ and $h_2$. In contrast, RoAd rotates $h$ as $z = Rh$, where $R$ is a 2D rotation matrix. In this way, RoAd promotes interaction between $h_1$ and $h_2$. Our pilot studies (§3.1) indicate that such rotation is more crucial than merely adjusting the magnitude. - **Result**: We reproduce SSF on two benchmarks, as detailed in Table A.4 and A.5 (Row 3-4) of the uploaded PDF. With a similar number of trainable parameters (0.03%), RoAd$_2$ significantly outperforms SSF: 80.2 vs. 78.2 and 83.8 vs. 82.3 for commonsense tasks, and 46.2 vs. 35.9 and 52.2 vs. 42.9 for arithmetic tasks. $~$ > **Limitation: Reduce LoRA's parameter count to the same level as RoAd's.** Here we set LoRA's rank to 1, so its trainable parameters match RoAd$_2$'s. We show the results in **Table A.4 and Table A.5 (Row 5)**. **Result: RoAd$_2$ significantly outperforms LoRA, 80.2 vs 74.2 on commonsense tasks, 46.2 vs 41.7 on arithmetic tasks.** You can refer to our response to Reviewer zyNQ in W2 for the results of scaling up RoAd's parameters to the same level as LoRA's, if you are interested. --- $~$ Thank you for the thoughtful suggestions. We have incorporated the new results in the updated version. Please refer to the general rebuttal block for more new results, if you are interested. 
If these revisions address your concerns, we kindly request a reconsideration of the scores. Should you have any further questions, we are happy to assist. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It addressed most of my concerns, and I am willing to increase my score. --- Reply to Comment 1.1.1: Title: Thank you for your increased score Comment: Dear Reviewer iQCQ, We are very encouraged by your increased score. We really enjoy the discussion, and thank you for your suggestions about: 1. Adding quantitative evaluation for RoAd's multitasking ability; 2. Adding an overview diagram; 3. Further evaluating RoAd's batching efficiency for sequence length > 4K; 4. Comparing RoAd to LoRA with the same level of trainable parameters. We believe these suggestions make our work more solid and strong. Best! --- Rebuttal 2: Title: Thanks again for your positive feedback Comment: Dear Reviewer iQCQ, We sincerely thank you for your positive feedback and the time you dedicated to reviewing our rebuttal. It brings us great joy to learn that our response has addressed your concerns and contributed to increasing the score from 4 to 5. As the score is still borderline, we are wondering if there are any major concerns regarding our current revision. It would be our great pleasure to provide further clarifications and results to address any additional doubts. Your suggestions really help a lot to improve our work and make the justification of our method more complete. Once again, we would like to express our appreciation for your valuable comments during the reviewing process. Best regards!
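The SSF-versus-rotation contrast from W6 in the rebuttal above can be sketched in NumPy. This is an illustrative sketch, not the authors' implementation: the pairing of adjacent dimensions and the per-pair scale parameter are assumptions based on the rebuttal's description of the 2D rotation matrix $R$ with parameters $\theta_i$ and $\alpha_i$.

```python
import numpy as np

def ssf_adapt(h, gamma, beta):
    # SSF: per-dimension scale and shift -- no interaction across dimensions.
    return gamma * h + beta

def road_adapt(h, theta, alpha):
    # RoAd-style adaptation: group dimensions into pairs (h1, h2), (h3, h4), ...
    # and apply a scaled 2D rotation to each pair, so the two coordinates mix.
    pairs = h.reshape(-1, 2)
    c, s = alpha * np.cos(theta), alpha * np.sin(theta)  # one (theta, alpha) per pair
    out0 = c * pairs[:, 0] - s * pairs[:, 1]
    out1 = s * pairs[:, 0] + c * pairs[:, 1]
    return np.stack([out0, out1], axis=1).reshape(-1)

d = 8
h = np.arange(1.0, d + 1)
z_ssf = ssf_adapt(h, gamma=2.0 * np.ones(d), beta=np.zeros(d))          # scaling only
z_road = road_adapt(h, theta=np.full(d // 2, np.pi / 6), alpha=np.ones(d // 2))
```

With $\alpha = 1$ the per-pair rotation is orthogonal, so it changes only the angular component of each pair while preserving the norm, matching the pilot-study observation that direction matters more than magnitude.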
Summary: This paper introduces a novel method for parameter-efficient finetuning, RoAd. By employing a straightforward 2D rotation to adapt LLMs, this paper addresses the challenges of existing parameter-efficient finetuning method for LLMs. Experiments on downstream tasks demonstrate the effectiveness of the proposed method. Strengths: 1. This paper proposes a novel method for parameter-efficient finetuning which efficiently adapts LLMs using a minimal number of trainable parameters. 2. The method enhances both batching efficiency and composability. 3. Comprehensive experiments on the GLUE benchmark, eight commonsense reasoning tasks and four arithmetic reasoning tasks are conducted to show the efficacy of the method. Weaknesses: 1. In the results of Table 2, RoAd shows different performance on base and large model, what could be the reason? Why the RoAd(fc1) of large model with less parameters shows better average accuracy? Why the full FT setting shows even lower accuracy? 2. If the proposed RoAd maintains the same quantity of parameters of existing method like LoRA, could the accuracy be further improved? 3. How about the RoAd2 and RoAd4 on the held-out GLUE development set? Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time, effort, and thorough review. We appreciate the positive feedback and are encouraged by your highlights: 1. We propose a **novel** PEFT method that efficiently adapts LLMs with **a minimal number of trainable parameters**. 2. Our method, RoAd, **enhances both batching efficiency and composability**. 3. **Comprehensive experiments on three benchmarks (20 tasks in total) show the efficacy of RoAd**. $~$ --- We address your concerns below: > **W3. What are the results of RoAd$_2$ and RoAd$_4$ on GLUE?** We add the new results from RoAd$_2$ and RoAd$_4$ on the GLUE benchmark in **Table A.3 (Row 1-10)** of the uploaded PDF. For ease of comparison, we also provide the results from the best baseline (Full FT) and RoAd$_1$ from the paper. **Result: Overall, more trainable parameters (from RoAd$_1$ to RoAd$_4$) offer better performance on RoBERTa-base, while the performance on RoBERTa-large is saturated, with all RoAd variants performing similarly best.** $~$ > **W1. What are the reasons for the unexpected results in Table 2?** Thank you for this insightful question. We appreciate the opportunity to further explain the results. Our explanations are based on **Table A.3 (Row 1-10)**, where the tasks are arranged from low-resource (left) to high-resource (right). **1. Why does RoAd$_1$(fc1) on RoBERTa-large with fewer parameters show better average accuracy than RoAd$_1$?** We believe that RoAd$_1$(fc1) is comparable to, rather than better than, RoAd$_1$ on RoBERTa-large, as the difference is only 0.1. From the results of Full FT, RoAd$_2$ and RoAd$_4$, we observe that their average performance is very similar, indicating performance saturation. Therefore, increasing the number of trainable parameters does not significantly affect the performance. **2. Why does Full FT underperform RoAd?** Previous research [R1, R2, etc.] has demonstrated that PEFT methods can outperform Full FT. 
We attribute this to two factors: - Model Capability: A powerful LLM benefits more from slight adaptations via PEFT. Full FT changes more parameters, risking catastrophic forgetting. - Task Nature: Less knowledge-intensive tasks, particularly those sharing the same domain as pretraining data, are better suited for PEFT. For knowledge-intensive tasks like code and math, more trainable parameters (Full FT) may be beneficial. Our Table A.3 supports this: - On RoBERTa-base, Full FT excels in high-resource tasks (QNLI, QQP, MNLI) due to their larger training samples (knowledge-intensive) and RoBERTa-base's lower power. - On RoBERTa-large, Full FT and RoAd perform similarly on these high-resource tasks, as RoBERTa-large's higher capability makes slight adaptations sufficient. **3. Why does RoAd show different trend on the base and large model?** This trend aligns with the second point. On RoBERTa-base, which is less powerful, more trainable parameters enhance performance until saturation. On RoBERTa-large, a more powerful model, slight adaptations with fewer trainable parameters suffice for the GLUE tasks. $~$ > **W2. Can even better results be obtained for scaling up RoAd's trainable parameters to the same level as LoRA's?** Thank you for this excellent suggestion! **Summary: RoAd exhibits impressive scalability. Increasing its trainable parameters leads to notably improved results.** - Experimental setting: To increase the number of trainable parameters in RoAd, we combine it with LoRA due to the limited number of $\theta_i$ and $\alpha_i$ in $R$. The combination is represented as $Z = (RW + BA)^TX$, where $R$ is the rotation matrix from RoAd, and $A$ and $B$ are from LoRA. We vary the number of trainable parameters by adjusting the LoRA rank. In this experiment, we only combine RoAd$_1$ with LoRA, excluding RoAd$_2$ and RoAd$_4$, as their main design purpose is to increase the number of trainable parameters. 
- Results: As demonstrated in Table R.1 and Table R.2, increasing the number of trainable parameters to the same level as LoRA's yields significantly better results. This shows RoAd's excellent scalability when combined with LoRA. $~$ Table R.1: Average accuracy on eight commonsense reasoning tasks. Detailed numbers for RoAd$_1$ + LoRA are in Table A.4 (Row 1-2) of the uploaded PDF. | Model | Method | #Params (%) | Avg. | | :--- | :--- | :---: | :---: | | LLaMA-7B | LoRA | 0.83 | 74.7 | | | LoReFT | 0.03 | 80.2 | | | RoAd$_4$ | 0.08 | 80.2 | | | RoAd$_2$ | 0.04 | 80.2 | | | RoAd$_1$ | 0.02 | 79.2 | | | RoAd$_1$ + LoRA | 0.84 | **82.2** | | --- | --- | --- | --- | | LLaMA-13B | LoRA | 0.67 | 80.5 | | | LoReFT | 0.03 | 83.3 | | | RoAd$_4$ | 0.07 | 83.7 | | | RoAd$_2$ | 0.03 | 83.8 | | | RoAd$_1$ | 0.02 | 83.0 | | | RoAd$_1$ + LoRA | 0.68 | **85.4** | Table R.2: Average accuracy on four arithmetic reasoning tasks. Detailed numbers for RoAd$_1$ + LoRA are in Table A.5 (Row 1-2) of the uploaded PDF. | Model | Method | #Params (%) | Avg. | | :--- | :--- | :---: | :---: | | | LoRA | 0.83 | 46.9 | | | RoAd$_4$ | 0.08 | 45.8 | | LLaMA-7B | RoAd$_2$ | 0.04 | 46.2 | | | RoAd$_1$ | 0.02 | 44.0 | | | RoAd$_1$ + LoRA | 0.84 | **50.0** | | --- | --- | --- | --- | | | LoRA | 0.67 | 51.1 | | | RoAd$_4$ | 0.07 | 52.3 | | LLaMA-13B | RoAd$_2$ | 0.03 | 52.2 | | | RoAd$_1$ | 0.02 | 51.9 | | | RoAd$_1$ + LoRA | 0.68 | **55.1** | $~$ [R1] LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, etc. [R2] Compacter: Efficient Low-Rank Hypercomplex Adapter Layers, Rabeeh Karimi Mahabadi, etc. --- $~$ Thank you for your thoughtful suggestions. We have incorporated the new results in the updated version on our side. In the general rebuttal block, we summarize all new results in our newly uploaded PDF, and highlight some for your easy choice, if you are interested in our response to other reviewers. 
If these revisions address your concerns, we kindly request a reconsideration of the scores. Should you have any further questions, we are happy to assist.
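The RoAd$_1$ + LoRA combination $Z = (RW + BA)^T X$ described in W2 of the rebuttal above can be sketched as follows. This is an illustrative NumPy sketch under assumed shapes, not the authors' code; the block-diagonal $R$ is applied to consecutive row pairs of $W$ without ever being materialised.

```python
import numpy as np

def rotate_rows(W, theta, alpha):
    # Compute R @ W, where R is block-diagonal with scaled 2D rotations acting
    # on consecutive row pairs of W, without building the d x d matrix R.
    d = W.shape[0]
    Wp = W.reshape(d // 2, 2, -1)
    c = (alpha * np.cos(theta))[:, None]
    s = (alpha * np.sin(theta))[:, None]
    top = c * Wp[:, 0] - s * Wp[:, 1]
    bot = s * Wp[:, 0] + c * Wp[:, 1]
    return np.stack([top, bot], axis=1).reshape(d, -1)

def road_plus_lora(X, W, theta, alpha, A, B):
    # Combined update from the rebuttal: Z = (R W + B A)^T X.
    return (rotate_rows(W, theta, alpha) + B @ A).T @ X

rng = np.random.default_rng(0)
d, k, r, n = 8, 4, 2, 3                      # hidden, output, LoRA rank, batch
W = rng.standard_normal((d, k))
X = rng.standard_normal((d, n))
A = rng.standard_normal((r, k))
B = np.zeros((d, r))                         # B = 0 at init, as in LoRA
Z = road_plus_lora(X, W, np.zeros(d // 2), np.ones(d // 2), A, B)
```

At initialisation ($\theta = 0$, $\alpha = 1$, $B = 0$) the layer reduces to the frozen forward pass $W^T X$, so finetuning starts from the pretrained behaviour.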
Summary: This paper proposes a parameter-efficient finetuning method named RoAd, to address two challenges of current methods. The first challenge is the efficient deployment of LLMs equipped with multiple task- or user-specific adapters. The second one is the interpretability of LLMs. RoAd employs a straightforward 2D rotation to adapt LLMs. Experiment results consistently show that RoAd surpasses other PEFT methods. Strengths: - This paper motivates well on the two challenges of current PEFT methods. - RoAd achieves impressive results, surpasses other PEFT methods in various tasks. - The authors perform the insightful pilot study and make interesting observations on the key factor influencing the adaptation of LLMs. Weaknesses: - Evaluation of efficiency results for batching could be improved. The proposal of RoAd is well motivated by the overhead of batch matrix multiplication in current methods [1, 54] (Page 2, line 36). However, the authors only compare with LoRA in the evaluation of throughput of batching. It would be better if the authors can compare with [54] in this evaluation. - Novelty and advantage over OFT [41] need clarification. As RoAd can be considered as a specialized case of OFT with w = 2 (Page 3, line 91), it is important to clarify the technical novelty of RoAd over OFT. In the current form of the paper, the reader may consider RoAd as a special case of OFT without much technical novelty over OFT. - RoAd fails to consistently outperform other PEFT methods on arithmetic reasoning tasks. Specifically, the results of RoAd on LLaMA-7B is worse than LoRA and AdapterP. Such results make the statement "consistently surpasses" in Page 2, line 53, seem like a bit of an overclaim. Technical Quality: 3 Clarity: 4 Questions for Authors: - Page 7, line 247, "Figure 4" should be "Table 4". Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time, effort, and thorough review. We appreciate the positive feedback and are encouraged by your highlights: 1. Our work is **well-motivated** on two challenges of existing PEFT methods. 2. Our method, RoAd, achieves **impressive results, surpassing other PEFTs in various tasks**. 3. We offer an **insightful pilot study** and **interesting observations** on the key factor of LLM adaptation. $~$ --- We address your concerns below: > **W1. Compare RoAd's batching efficiency to FLoRA [54]**. Thank you for the insightful suggestion. We include the new results in **Figure A.1(b)**. **Results: RoAd consistently demonstrates higher throughput compared to FLoRA across various generated lengths.** - Experimental Setup: Since FLoRA did not open-source its code, we reproduced it ourselves using Transformers and PEFT, the same libraries we use for LoRA and RoAd. The rank size for both FLoRA and LoRA is set to 8, and the number of requests (i.e. batch size) is 8. This setting stays the same for the batching experiments in the paper. We further extend the generated length from 4K to 8K. $~$ > **W2. Clarification of RoAd's novelty and advantage over OFT [41].** Thank you for the opportunity to clarify the novelty and advantage of our work. **Summary: RoAd is simpler, more finetuning-efficient, offers additional functionalities, and achieves better finetuning results.** - **Methodology Similarity**: Orthogonal finetuning is a widely used method [26, 27, 41, 28, etc.], and can generally be expressed as $z = (RW)^Th$. - **Methodology Difference**: OFT constructs $R$ in a block-wise manner and uses Cayley parameterization to maintain orthogonality. In contrast, RoAd employs inherently orthogonal 2D rotation matrices, making it simpler and more straightforward to apply. - **Finetuning Efficiency**: As shown in Table 12, OFT's reliance on Cayley parameterization results in higher time and memory usage compared to RoAd. 
Despite having a similar number of trainable parameters, OFT requires 40GB of GPU memory, whereas RoAd only requires 23GB. Additionally, RoAd's finetuning time is approximately 50 times shorter than OFT's. - **Additional Functions**: RoAd not only excels in finetuning efficiency but also features highly efficient batching due to element-wise multiplication. This is not available in OFT, which has batching latency similar to LoRA's. - **Result**: As demonstrated in Table 2, RoAd significantly outperforms OFT with fewer trainable parameters. OFT with 0.1\% trainable parameters achieves 82.3 on GLUE, while RoAd$_1$ (fc1) with 0.03\% trainable parameters offers 85.1. $~$ > **W3. Mild overclaim of the results, since there is an outlier on the arithmetic reasoning task for LLaMA2-7B.** Thank you for your valuable feedback. We apologize for the oversight. In seven out of eight results (as shown in Tables 2, 3, 4, and 13), RoAd demonstrates superior performance with the minimal level of trainable parameters, with the exception of the arithmetic reasoning task on LLaMA2-7B. To make the statement accurate, we have revised Line 53 to state: "Seven out of eight benchmarks indicate that RoAd outperforms other PEFT methods while maintaining a significantly reduced scale of trainable parameters (< 0.1%), as partially depicted in Figure 1." $~$ > **Q1. Typo** Thank you for this detailed suggestion. We have corrected this typo in our updated version. --- $~$ Thank you for your thoughtful suggestions. We have incorporated the new results in the updated version on our side. In the general rebuttal block, we summarize all new results in our newly uploaded PDF, and highlight some for your easy choice, if you are interested in our response to other reviewers. If these revisions address your concerns, we kindly request a reconsideration of the scores. Should you have any further questions, we are happy to assist. 
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing my concerns. I will keep my original rating of weak accept. I agree that this is a technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. --- Rebuttal 2: Title: Thank you for your feedback Comment: Dear Reviewer UyLU, Thank you for your feedback. We really enjoy the discussion, and thanks again for your suggestions about: 1. Comparing RoAd to FLoRA for the batching efficiency; 2. Clear clarification of RoAd's novelty and advantages over OFT; 3. Typos and more accurate statement. We believe these suggestions make our work more solid and strong. Best!
Summary: This paper proposes a novel parameter-efficient fine-tuning method called RoAd, aimed at addressing the challenges of efficient deployment of LLMs that require multiple adapters for distinct needs and enhancing the interpretability of LLMs. The motivation behind this approach stems from the observation that the fine-tuning process primarily affects the angular components of representations rather than their magnitude. Thus, RoAd introduces a 2D rotational approach to representations to achieve parameter-efficient adaptation. Experimental results demonstrate that RoAd achieves a superior accuracy-efficiency trade-off compared to baseline approaches. Strengths: - Developing effective approaches to adapt the LLM to downstream tasks with better scalability and performance is an important problem. The achieved performance of this paper looks promising to me. - The paper layout is clear and the writing is easy to understand. - The observation that during tuning, the directional change is much more significant than the magnitude change is an interesting observation and may further motivate some follow-up research. Weaknesses: - For the observation section, it would be beneficial to include results from more recent autoregressive language models, such as Llama2, to ensure that the observation is consistent and generalizable across more commonly used models. - Another concern regarding the observation section is that the metrics used for magnitude and angular change are not identical. As a result, the claim that angular change is larger than magnitude change may be significantly influenced by the chosen metric and scaling. The authors should further justify the validity of this observation when using different metrics. 
- The authors should consider further benchmarking the proposed method on generative tasks with instruction tuning and evaluate it on more challenging benchmarks, such as Alpaca-Eval (https://github.com/tatsu-lab/alpaca_eval) and MT-Bench (https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge). Technical Quality: 3 Clarity: 3 Questions for Authors: - Can the proposed method be applied to vision language models? Does the observation still stand for vision language models? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have mentioned the potential limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time, effort, and thorough review. We appreciate the positive feedback and are encouraged by your highlights: 1. We develop **effective approaches with better scalability and performance** on an important problem. And the achieved **performance is promising** to you. 2. The paper **layout is clear** and the **writing is easy to understand**. 3. We offer an **interesting observation** that may further **motivate some follow-up research**. $~$ --- We address your concerns below: > **W1. Does the observation from Llama2 stay the same as RoBERTa?** Thank you for your valuable suggestion. We finetune LLaMA2-7B using LoRA on the GSM8K training set and analyze the magnitude and angle changes of the test set samples, as shown in **Figure A.1(a)** of the uploaded PDF. **Summary: LLaMA2-7B exhibits a similar pattern to RoBERTa-base, with more significant changes in angle rather than magnitude.** - **Experimental Setup:** When directly measuring $\Delta D$ and $\Delta M$, we observe $\Delta D = 1$ and $\Delta M = 0$ for almost all samples, indicating no change in both magnitude and angle. This is because LLaMA2-7B has a much larger hidden size compared to RoBERTa-base (4096 vs. 768) and is more powerful. Finetuning only slightly adapts most dimensions, with significant changes in a limited number of dimensions. Therefore, we apply t-SNE to reduce the dimensions before calculating $\Delta D$ and $\Delta M$. - **Detailed Observations:** - Many samples remain close to $\Delta D = 1$ and $\Delta M = 0$, showing no significant change in magnitude or angle. - For samples with changed representations, $\Delta M$ is small (mostly < 0.2), while $\Delta D$ shows a significant change, ranging from 1 to around -0.75. - There are a few outliers with larger changes in magnitude. $~$ > **W2. Is the observation robust across different metrics?** Thank you for the opportunity to clarify our pilot studies (Section 3.1). 
- In our first study, we measure both $\Delta D$ and $\Delta M$ to demonstrate that finetuning primarily affects the angle, as indicated by larger $\Delta D$ values compared to $\Delta M$. Both $\Delta D$ and $\Delta M$ are normalized metrics, with $\Delta D \in [-1, 1]$ and $\Delta M$ representing relative magnitude changes. - Acknowledging that our finding might be influenced by the choice of metrics and potentially perceived as subjective, we conduct a second study to disentangle these two factors during finetuning. The results of this second study are consistent with the first, confirming that angle information plays a more crucial role than magnitude in finetuning. These studies collectively ensure that our observations are robust and supported by practical experiments. If there are specific metrics you would like us to consider, please let us know, and we would be happy to provide additional analyses using those metrics. $~$ > **W3. Further benchmark RoAd, like with AlpacaEval, MT-Bench.** Thank you for this great suggestion. We benchmark RoAd using AlpacaEval2.0, and the results can be found in **Table A.1** of the attached PDF. Due to time and resource constraints, benchmarking with MT-Bench is on our to-do list. **Summary: RoAd demonstrates superior performance compared to all baselines, while utilizing the least number of trainable parameters.** - **Experimental Setup:** We finetune LLaMA2-7B with two instruction-tuning datasets and evaluate the model using AlpacaEval2.0. This evaluation employs GPT-4 to assess the responses generated by the finetuned model against those produced by Text-davinci-003. We do not choose GPT-4 as the reference model because GPT-4 is far more powerful than LLaMA2-7B; a proof-of-concept experiment with LoRA shows a win rate < 5\%. $~$ > **Q1. Apply RoAd to vision language models, and its observation.** Thank you for your suggestion. 
We finetune LLaVA-1.5-7B using RoAd, and the results are presented in **Table A.2** of the uploaded PDF. **Summary: RoAd achieves the same average performance as LoRA with only 1/4 of its trainable parameters.** - **Experimental Setup:** [R1] requires 4.61% trainable parameters for LoRA on this task, while most tasks with LoRA in our paper need < 1%, showing that this task is knowledge-intensive. Therefore, we need to scale RoAd's trainable parameters. For this purpose, we combine it with LoRA due to the limited number of $\theta_i$ and $\alpha_i$ in $R$. The combination is represented as $Z = (RW + BA)^TX$, where $R$ is the rotation matrix from RoAd, and $A$ and $B$ are from LoRA. We adjust the LoRA rank to vary the number of trainable parameters. We combine RoAd$_1$ with LoRA, but not RoAd$_2$ or RoAd$_4$, as their primary design purpose is to increase the number of trainable parameters. - **Results:** - With only 0.08% trainable parameters, RoAd$_4$ already achieves 96.9% (66.4/68.5) of the accuracy of LoRA with 4.61% trainable parameters. By combining RoAd$_1$ with LoRA, we achieve the same performance as LoRA with just 1/4 of its trainable parameters. This demonstrates RoAd's excellent scalability when combined with LoRA. Further promising scaling results can be found in our response to Reviewer zyNQ in Weakness 2, if you are interested. - The observations are very similar to our response to Weakness 1. We also need to apply t-SNE to reduce the dimensions. $~$ [R1] Visual Instruction Tuning, Haotian Liu, etc. --- $~$ Thank you for your thoughtful suggestions. We have incorporated the new results in the updated version by our side. In the general rebuttal block, we summarize all new results in our newly uploaded PDF, and highlight some for your easy choice, if you are interested in our response to other reviewers. If these revisions address your concerns, we kindly request a reconsideration of the scores. 
Should you have any further questions, we are happy to assist. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed responses and have raised my score accordingly. --- Rebuttal 2: Title: Thank you for your raised score Comment: Dear Reviewer vHYZ, We are very encouraged by your increased score. We really enjoy the discussion, and thank you for your suggestions about: 1. Observation from Llama2; 2. Further benchmarking RoAd with AlpacaEval and MT-Bench, and on visual instruction tasks. We believe these suggestions make our work more solid and strong. Best!
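One plausible formulation of the $\Delta D$ / $\Delta M$ metrics discussed in the rebuttal above — consistent with the description that $\Delta D \in [-1, 1]$, with $\Delta D = 1$ and $\Delta M = 0$ meaning no change — uses cosine similarity for the angular change and relative norm change for the magnitude. The exact definitions in the paper may differ; this is an assumption for illustration.

```python
import numpy as np

def delta_d(h_before, h_after):
    # Angular change as cosine similarity: 1 means the direction is unchanged.
    return np.dot(h_before, h_after) / (
        np.linalg.norm(h_before) * np.linalg.norm(h_after))

def delta_m(h_before, h_after):
    # Relative magnitude change: 0 means the norm is unchanged.
    nb = np.linalg.norm(h_before)
    return abs(np.linalg.norm(h_after) - nb) / nb

h0 = np.array([1.0, 0.0])
h_rotated = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])  # pure rotation
h_scaled = 2.0 * h0                                           # pure scaling
```

Under this reading, a pure rotation leaves $\Delta M$ at 0 while moving $\Delta D$ below 1, and a pure scaling does the opposite — exactly the separation the pilot study relies on to argue that finetuning is dominated by angular change.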
Rebuttal 1: Rebuttal: Here we summarize the new results in the uploaded PDF, you can selectively read them if you are interested. We highlight some for your easy choice. | Table or Figure | Content | Where for details (i.e. response to which point of which reviewer) | | :--- | :--- | :--- | | Table A.1 | **Further benchmark RoAd with AlpacaEval2.0** | W3 of Reviewer vHYZ | | Table A.2 | **Further benchmark RoAd with visual instruction tuning** | Q1 of Reviewer vHYZ | | Table A.3 (Row 1-10) | RoAd$_2$ and RoAd$_4$ on GLUE | W3 of Reviewer zyNQ | | Table A.3 (Row 11-13) | RoAd's multitasking ability | W1 of Reviewer iQCQ | | Table A.4 and Table A.5 (Row 1-2) | **RoAd's scalability** | W2 of Reviewer zyNQ | | Table A.4 and Table A.5 (Row 3-4) | New baseline, SSF | W6 of Reviewer iQCQ | | Figure A.1 (a) | Pilot study on LLaMA2-7B | W1 of Reviewer vHYZ | | Figure A.1 (b) | More batching efficiency results | W1 of Reviewer UyLU and W5 of Reviewer iQCQ | | Figure A.1 (c) | **An overview diagram** | W4 of Reviewer iQCQ | Pdf: /pdf/dff6ef72f8b65f8a8b99d5df08af7ee0cca07edf.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Everyday Object Meets Vision-and-Language Navigation Agent via Backdoor
Accept (poster)
Summary: The paper proposes a novel backdoor attack paradigm, termed IPR Backdoor, for Vision-and-Language Navigation (VLN) agents. The authors highlight the potential security risks posed by VLN agents in sensitive environments and pioneer an object-aware backdoor attack, embedding triggers into the agent during training. The attack is designed to make the agent execute abnormal behaviors upon encountering specific objects during navigation, without compromising normal operation otherwise. The key contributions include the development of the IPR Backdoor, its validation through extensive experiments, and demonstrating its robustness to various visual and textual variations. Strengths: 1. The author proposes a new backdoor attack method on VLN tasks, which performs well. 2. The paper is well-written, with a clear presentation of the motivation behind the study and a comprehensive description of the method design. 3. The paper provides extensive information in the appendix. Weaknesses: 1. The author proposed conducting experiments in physical and digital spaces. However, the author's definition of physical seems to be "pasting a physical object into an image," while a more general understanding of physical is to sample in the physical world rather than simply pasting. 2. Lack of horizontal comparison with other backdoor attack methods, this issue is actually equivalent to the third item in the "Question" section. I hope the author can reply well in the rebuttal. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. The VLN agents used by the author are two classic methods, however, I am more interested in the performance of the author's proposed method on advanced VLN agents. (like the VLN-GOAT) 2. The author did not provide an open source link, but mentioned in the checklist that the open source plan is "after acceptance". This is acceptable. Can the author provide an approximate open-source timeline, such as within one month after acceptance? 3. 
What are the differences in backdoor attacks between VLN tasks and traditional DL tasks, apart from the differences in tasks? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses#1: The author proposed conducting experiments in physical and digital spaces. However, the author's definition of physical seems to be "pasting a physical object into an image," while a more general understanding of physical is to sample in the physical world rather than simply pasting.** Response: Thank you for your question. In the validation environments (as shown in Figure 9), the triggers exist in these environments rather than being attached, making them more consistent with real-object-triggered backdoor attacks. In the training environments (as shown in Figure 3), since the attacker may only have acquired an image of the attacked scene containing the object trigger (please see L147-L148 of the manuscript), the real object trigger does not exist in the training environments. Therefore, during training, we use the attacker's pre-obtained image of the trigger to create poisoned scenes by pasting it into the training environments to train the backdoored VLN agent. We will provide more detailed explanations in the revision. **Weaknesses#2: Lack of horizontal comparison with other backdoor attack methods, this issue is actually equivalent to the third item in the "Question" section. I hope the author can reply well in the rebuttal.** Response: Thank you for your valuable suggestions. We provide the following comparisons in terms of core problems and performance, method design, attack setting, and research objectives. Core Problems and Performance: Existing backdoor attack methods primarily focus on designing loss functions to encourage predictions of the target label. When such methods are simply transferred to a VLN agent, for example by using the see2stop loss alone, two core problems arise: "difficulty in directly aligning poisoned features with abnormal behavior semantics" and "the navigation-oriented reward function weakening backdoor attack capability". 
Experiments show that using only the see2stop loss results in a backdoor attack success rate (Att-SR) of 75%. Furthermore, for your reference, combining the see2stop loss with the navigation-oriented reward results in an Att-SR of 0%. In contrast, our method achieves a 100% Att-SR. Method Design: We propose the first universal paradigm for backdoor attacks on VLN agents in the physical space: the IPR paradigm. In the imitation learning stage, we use the see2stop loss to establish the basic mapping from trigger to abnormal behavior. Considering the multi-modality and continuous decision-making characteristics of VLN tasks, we introduce the tailored anchor loss, consistency loss, and backdoor-aware reward during the pretraining and reinforcement learning stages to enhance and maintain the mapping capability from trigger to abnormal behavior. Our experiments demonstrate the significant effectiveness and robustness of our customized method. Attack Setting: Unlike backdoor attacks on traditional tasks, we explore the use of real object triggers to induce abnormal behavior in multimodal robots via backdoor attacks in the physical space. This setting offers greater stealth and deployment potential. Additionally, the perception and processing of multimodal information and continuous decision-making add more complexity and challenges to the attack process. Research Objectives: Compared to backdoor attacks on traditional tasks, robotic abnormal behavior is more closely associated with privacy and property security. Beneficial applications can effectively prevent robots from entering sensitive areas, thereby protecting privacy and property. Conversely, malicious attacks pose security risks to human and robotic systems. Our research could effectively promote studies on robotic security defenses. We hope this work can inspire more urgent and interesting explorations of robot security. We will include the detailed comparisons in the revision according to the suggestion. 
**Questions#1: The VLN agents used by the author are two classic methods, however, I am more interested in the performance of the author's proposed method on advanced VLN agents. (like the VLN-GOAT)** Response: Thank you for your question. In this paper, we select two classic methods, RecBert and HAMT, as their model architectures form the foundation for subsequent methods like VLN-GOAT. Their performance is therefore highly relevant to a wide range of VLN agents. Experiments show that both fundamental methods achieve excellent backdoor attack and navigation capabilities, fully validating the effectiveness and robustness of our approach. Additionally, in our backdoor attack paradigm, we model the trigger-to-abnormal-behavior mapping as a cross-modal visual-language mapping from the trigger to the abnormal behavior's description text. Models with stronger cross-modal alignment capabilities are expected to perform well under our backdoor attack paradigm. Due to time constraints, we will provide a detailed analysis and comparison in the revision. **Questions#2: The author did not provide an open source link, but mentioned in the checklist that the open source plan is "after acceptance". This is acceptable. Can the author provide an approximate open-source timeline, such as within one month after acceptance?** Response: Thank you for your question. We have organized the code and will make it open source within 2-4 weeks after the paper's acceptance. **Questions#3: What are the differences in backdoor attacks between VLN tasks and traditional DL tasks, apart from the differences in tasks.** Response: Thank you for your question. Please refer to the response to Weaknesses#2. --- Rebuttal Comment 1.1: Comment: The rebuttal solves most of my concerns, so I have decided to raise my score to 6. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your efforts and recognition! We will further improve the revision based on your valuable suggestions. 
If you have any further questions, we are more than willing to discuss!
Summary: This paper explores the security risks of Vision-and-Language Navigation (VLN) agents, which can be integrated into daily life but may threaten privacy and property if compromised. The author addresses this overlooked issue by introducing an object-aware backdoored VLN agent. This involves implanting backdoors during training to exploit the cross-modality and continuous decision-making aspects of VLN. The proposed IPR Backdoor causes the agent to behave abnormally when encountering specific objects during navigation. Experiments show this method's effectiveness and stealthiness in both physical and digital environments, while maintaining normal navigation performance. Strengths: 1. The novelty: The author addresses a timely and intriguing topic, focusing on the backdoor vulnerabilities of Vision-and-Language Navigation models. 2. The presentation is clear and straightforward. 3. The evaluation is logical and effectively supports the main claims of the paper. Weaknesses: 1. Action space. At Line 166, the current action space is based on the current state. Why is this the case? Typically in RL, the action space is fixed and does not change when the state changes. If the action space is not fixed, how is it trained in this paper? 2. Poisoning of training data. What is the impact of the poisoning ratio? 20% is a pretty high poisoning rate for a backdoor attack. It would be better to show the Att-SR for different ratios, to understand how practical the attack might be. 3. Missing related work. The method design shares some similarity with [1], but I acknowledge that the paper addresses some unique challenges in VLN scenarios. Also, the related work section should introduce backdoor defense techniques on multi-modal models [2][3], and discuss potential defenses for backdoor attacks on VLN models. 4. Minor: - Line 66, 'pertaining' --> 'pretraining' - In Figure 3, the input of Consistency Loss should be CleanEncoder(CleanInput) and BackdooredEncoder(CleanInput). 
The color of arrows seems incorrect. [1] BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning, https://arxiv.org/abs/2108.00352 [2] Detecting Backdoors in Pre-trained Encoders, https://arxiv.org/abs/2303.15180 [3] SSL-Cleanse: Trojan Detection and Mitigation in Self-Supervised Learning, https://arxiv.org/abs/2303.09079 Technical Quality: 3 Clarity: 3 Questions for Authors: Please see Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The target action is only limited to STOP. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses#1: Action space. At Line 166, the current action space is based on the current state. Why is it the case? Typical in RL, the action space is fixed and does not change when the states change. If the action space is not fixed, how is it trained in this paper?** Response: Thank you for your question. In the VLN setting, the agent's action space includes its adjacent candidate navigable points and the stop action (please see L138-L139). As the agent navigates, it encounters different points, each with a varying number of adjacent candidate navigable points, making its action space dynamic. RL with a dynamic action space is common practice for VLN agents. Specifically, during RL training, whenever the agent reaches a point, it calculates the probability for each candidate action and selects the one with the highest probability as its next action (either reaching the selected point or executing the stop action). If the action is chosen correctly, the agent receives a positive reward; otherwise, it receives a negative penalty. **Weaknesses#2: Poisoning of training data. What are the impact of poison ratios? 20% is a pretty high ratio, for the poisoning rate of backdoor attack. It would be better to show the Att-SR for different ratios, to understanding how practical it might be.** Response: Thank you for your suggestion. Following your valuable advice, we design an analysis experiment for different poisoning rates (5%, 10%, 15%, 20%). The experiment shows that with a poisoning rate of 5%, our method achieves an attack success rate (Att-SR) of 100% in the imitation learning (IL) setting and 94% in the IL + reinforcement learning (RL) setting, while maintaining high navigation performance (IL: 56.62%; IL+RL: 66.09%). As the poisoning rate increases (10%, 15%, 20%), our method steadily achieves a 100% Att-SR and high navigation performance (IL: 56.43%, 56.18%, 56.09%; IL+RL: 65.51%, 66.23%, 66.18%). 
This further validates the effectiveness of our method, demonstrating strong performance across various poisoning rates. We will discuss this in the revision. **Weaknesses#3: Missing related work. The method design shares some similarity with [1], but I acknowledge that the paper addresses some unique challenges in VLN scenarios. Also, the related work section should also introduce backdoor defense techniques on multi-modal models[2][3], and discuss potential defense for the backdoor attacks on VLN models.** Response: Thank you for recommending the two interesting papers. We apologize for the oversight and promise to include relevant discussions and citations in the revised manuscript. **Weaknesses#4: Minor typos.** Response: Thank you for your suggestion. We will carefully revise the typos accordingly and double-check our manuscript. **Limitations: The target action is only limited to STOP.** Response: Thank you for your suggestion. Exploring more complex and customized abnormal behaviors is a meaningful task, included in our future research plans (as stated in the Limitations section). Additionally, following the suggestion, beyond the STOP action mentioned in the paper, we further explore the abnormal behavior "go towards...". Specifically, we set the action description text to "go towards yoga ball" for visual encoder pre-training. We design loss and reward functions based on see2stop loss, consistency loss, and backdoor-aware reward to encourage the agent to trigger abnormal behavior upon detecting the trigger (yoga ball), moving towards the trigger's direction. Experiments show that during the imitation learning phase, the agent achieves 55.21% (baseline: 55.51%) navigation success rate (SR) and 97% attack success rate (Att-SR). With reinforcement learning, the agent achieves a 64.66% (baseline: 65.90%) SR and a 98% Att-SR, validating the effectiveness of our IPR backdoor attack paradigm with high navigation and backdoor attack performances. 
--- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal; I raise my score to 6. I suggest the authors include these discussions in the next version. --- Reply to Comment 1.1.1: Comment: Thank you once again for your recognition and valuable insights. We will ensure to include the discussed points in the next version of our manuscript.
Summary: This work proposes a backdoor attack for the Vision-and-Language Navigation (VLN) task. It works by embedding a natural or digital image as a trigger into the scene representation during agent rollout, and training the agent to stop while preserving normal navigation capability using novel loss choices. The method achieves good attack performance (near 100% Att-SR) while preserving most navigation performance under various settings: different triggers, different agents, and visual or textual variations. Strengths: 1. This work uses a naturally existing object as the attack trigger, which provides high stealthiness and could be highly practical for deployment. 2. It is one of the few attack works in the VLN space, which, given the high deployment potential of VLN agents, could be important. 3. An interesting anchor objective is designed to align poisoned features with the textual anchor of "Stop" so as to indirectly align with the stop action through multimodal representation learning during VLN training. 4. Comprehensive evaluations are conducted, including loss ablations, physical and digital versions of the attack on two VLN agents, and robustness to visual and textual variations. All studies provide solid evidence of the effectiveness of the proposed attack. Weaknesses: Despite situating the backdoor attack in the VLN task, the method is not too different from image-classification-based backdoor attacks, and producing the stop action is similar to label prediction. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. I am not entirely clear what data format the trigger is; based on Sec 3.3, they seem to be images, but in that case, how are the different views of an object trigger displayed in Figure 9 obtained? 2. Why is the backdoor-aware reward beneficial to retaining VLN performance according to Table 1? Do you have a rationale? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors state the limitation is that the attack currently only directs the agent to stop; more complex actions could be future work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses#1: Despite situated the backdoor attack in VLN task, the method is not too different from image classification based backdoor attack, and produce stop action is similar to label prediction.** Response: Thank you for your question. Label Prediction Differences: Image classification involves selecting the most appropriate label from given labels, while action prediction for a VLN agent involves choosing the next action from a candidate action space (L138-L139). This requires understanding multimodal information and making sequential decisions, which is more complex and challenging. Method Differences: Backdoor attack methods for image classification [1,2,3] mainly focus on designing the loss function to encourage prediction of the target label. Simply applying these methods to VLN agents, such as using only the see2stop loss, faces two core challenges: "the difficulty in aligning poisoned features with abnormal behavior semantics" and "the navigation-oriented reward function weakening backdoor attack capability". For your reference, using only the see2stop loss results in a backdoor attack success rate (Att-SR) of 75%, while using the see2stop loss with the navigation-oriented reward results in an Att-SR of 0%. To address this, we propose a general paradigm for backdoor attacks on VLN agents, considering the multimodal and sequential decision-making characteristics of the VLN task. In addition to the see2stop loss in the imitation learning stage, we introduce the tailored anchor loss, consistency loss, and backdoor-aware reward in the pretraining and reinforcement learning stages for the VLN agent. This distinguishes our method from image-classification-based backdoor attack methods. Furthermore, achieving a 100% Att-SR in backdoor attacks on VLN agents validates the effectiveness and robustness of our customized approach. References: [1] Gu T, Dolan-Gavitt B, Garg S. Badnets: Identifying vulnerabilities in the machine learning model supply chain[J]. 
arXiv preprint arXiv:1708.06733, 2017. [2] Bagdasaryan E, Veit A, Hua Y, et al. How to backdoor federated learning[C]//International conference on artificial intelligence and statistics. PMLR, 2020: 2938-2948. [3] Yuan Z, Zhou P, Zou K, et al. You Are Catching My Attention: Are Vision Transformers Bad Learners under Backdoor Attacks?[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 24605-24615. **Questions#1: I am not super clear what data format the trigger is, based Sec 3.3, they seems to be images, but in this case, how is different view of an object trigger displayed in Figure 9 obtained?** Response: Thank you for your question. Figure 9 shows the validation environments, where the triggers exist in these environments rather than being attached, making them more consistent with real-object-triggered backdoor attacks. Therefore, different views of the object triggers can be obtained by changing the viewing angle. Section 3.3 (e.g., Figure 3) describes the training environments. Since the attacker may only have acquired one image of the attacked scene containing the object trigger (please see L147-L148 of the manuscript), the real object trigger does not exist in the training environments. Therefore, during training, we use the attacker's pre-obtained image of the trigger to create poisoned scenes by pasting it into the training environments to train the backdoored VLN agent. We will provide more detailed explanations in the revision. **Questions#2: Why is backdoor aware reward beneficial to retrain VLN performance according to Table 1, do you have a rationale?** Response: Thank you for your question. When a trigger is present in the current scene, if the agent successfully performs an abnormal behavior, the backdoor-aware reward encourages the agent by providing a positive reward. 
If the agent fails to perform the abnormal behavior, the backdoor-aware reward discourages the agent from executing the current action by providing a negative penalty. When the trigger is not present in the current scene, the backdoor-aware reward encourages the agent to follow the instructed trajectory and reach the destination. Therefore, the backdoor-aware reward is beneficial for the VLN agent's performance in both backdoor attack and navigation. **Limitations: Authors state the limitation is the attack currently only direct agent to stop, more complex actions could be future work.** Response: Thanks for your discussion. Although limited by time, beyond the STOP action mentioned in the paper, we further explore the abnormal behavior "go towards...". Specifically, we set the action description text to "go towards yoga ball" for visual encoder pre-training. We design loss and reward functions based on the see2stop loss, consistency loss, and backdoor-aware reward to encourage the agent to trigger the abnormal behavior upon detecting the trigger (yoga ball), moving in the trigger's direction. Experiments show that during the imitation learning phase, the agent achieves a 55.21% (baseline: 55.51%) navigation success rate (SR) and a 97% attack success rate (Att-SR). With reinforcement learning, the agent achieves a 64.66% (baseline: 65.90%) SR and a 98% Att-SR, validating the effectiveness of our IPR backdoor attack paradigm with high navigation and backdoor attack performance. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response to my comments. My questions are well addressed. --- Reply to Comment 1.1.1: Comment: Thank you for your kind feedback. We are glad that our responses addressed your questions satisfactorily. If you have any further questions, we are more than willing to discuss them.
Summary: The paper addresses the security threats posed by malicious behaviors in Vision-and-Language Navigation (VLN) agents. The authors introduce a novel object-aware backdoor attack paradigm, termed the IPR Backdoor, tailored specifically for VLN's cross-modality and continuous decision-making characteristics. This approach implants object-aware backdoors during the training phase, allowing the agent to execute abnormal behaviors when encountering specific object triggers in unseen environments. Strengths: The paper introduces a unique approach to addressing security concerns in VLN by leveraging object-aware backdoors, which is a novel concept in this field. The experiments are comprehensive and demonstrate the robustness and effectiveness of the proposed method across various scenarios. The paper is well-written, with clear explanations and logical organization. The use of visual aids effectively supports the textual content. Weaknesses: The paper could explore more complex and customized abnormal behaviors beyond the STOP action. While the method is validated on several VLN agents and triggers, additional diverse datasets and more varied environments could further strengthen the findings. The impact of the proposed method on the computational overhead is not thoroughly discussed, which could be important for practical implementations. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors may add more discussion on the societal impacts since this is an adversarial setting. Flag For Ethics Review: ['Ethics review needed: Safety and security'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses#1:The paper could explore more complex and customized abnormal behaviors beyond the STOP action.** Response#1: Thank you for your suggestion. Exploring more complex and customized abnormal behaviors is a meaningful task, included in our future research plans (as stated in the Limitations section). Additionally, following your suggestion, beyond the STOP action mentioned in the paper, we also further explore the abnormal behavior "go towards...". Specifically, we set the action description text to "go towards yoga ball" for visual encoder pre-training. We design loss and reward functions based on see2stop loss, consistency loss, and backdoor-aware reward to encourage the agent to trigger abnormal behavior upon detecting the trigger (yoga ball), moving towards the trigger's direction. Experiments show that during the imitation learning phase, the agent achieves 55.21% (baseline: 55.51%) navigation success rate (SR) and 97% attack success rate (Att-SR). With reinforcement learning, the agent achieves a 64.66% (baseline: 65.90%) SR and a 98% Att-SR, validating the effectiveness of our IPR backdoor attack paradigm with high navigation and backdoor attack performances. **Weaknesses#2:While the method is validated on several VLN agents and triggers, additional diverse datasets and more varied environments could further strengthen the findings.** Response#2: When validating the navigation and backdoor attack capabilities of the VLN agent, we have used the standard dataset (R2R) and environment (Matterport3D) settings of the VLN task. Additionally, we validate the agent in different visual environments (image pre-processing, unseen environments with attached triggers) and text variants (goal-oriented instruction, "Pass" related phrase emphasis, RxR-like instructions). The VLN agent consistently demonstrates excellent backdoor attack capability. 
Furthermore, based on your valuable suggestion, we select 100 house images from the BnB Dataset [1] and randomly replace the agent's normal navigation views with these images after attaching triggers. This is to verify the agent's sensitivity to the trigger's environment. Experiments show that our agent still achieved a 100% backdoor attack success rate (Att-SR). Additionally, we create a disruptive instruction dataset by randomly reordering words in each R2R instruction. The agent still achieves a 100% Att-SR on this dataset. Both experiments further strengthen our findings, demonstrating that our method exhibits excellent robustness and is insensitive to diverse house environments and datasets. [1] Guhur P L, Tapaswi M, Chen S, et al. Airbert: In-domain pretraining for vision-and-language navigation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 1634-1643. **Weaknesses#3:The impact of the proposed method on the computational overhead is not thoroughly discussed, which could be important for practical implementations.** Response#3: Thank you for your suggestion. In Supplementary Material Section A.5, it is mentioned that the average training time is about 6500 minutes on a single NVIDIA V100 GPU. Specifically, during the training phase, compared to the baseline, our method requires an additional 1200 minutes due to the extra design in the pretraining stage. During the inference phase, our backdoor attack model does not incur any additional computational overhead compared to the baseline model since the model structure and parameter count remain unchanged, which is significant for real-world applications and deployment. We will add more detailed descriptions in the revised version. **Limitations: The authors may add more discussion on the societal impacts since this is an adversarial setting.** Response#4: Thank you for your valuable suggestion. The potential societal impacts include both positive and negative aspects. 
1) Positive impact: This technology can effectively prevent robots from entering security-sensitive areas, thereby protecting privacy and property. 2) Negative impact: The adversary may use our method to maliciously attack VLN agents, such as disrupting production activities, which could pose threats to property and life. This necessitates targeted defense technologies to prevent potential harm, which will be a focus of our future research. We will add this discussion to the revision.
Rebuttal 1: Rebuttal: Dear Chairs and Reviewers, We deeply appreciate your management of this paper and the valuable time you dedicated to offering insightful comments. Our sincere gratitude also goes to all the reviewers for recognizing the importance of our work: 1. The topic is novel, timely, and intriguing. 2. The method is unique, interesting, robust, and effective. It shows high stealthiness and high practical deployment potential. 3. The experiments are comprehensive and provide solid evidence, logically and effectively supporting the main claims. 4. The presentation is clear, straightforward, and well-written, with a logical organization. Visuals effectively support the text, and the appendix contains extensive information. We have meticulously addressed all the concerns raised by the reviewers. For detailed information on these specific concerns, please refer to the Rebuttal Section. If you have any further questions, we are more than willing to discuss them further. Best wishes, Paper1123 Authors
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Multi-turn Reinforcement Learning with Preference Human Feedback
Accept (poster)
Summary: The paper presents a novel mirror-descent-based policy optimization algorithm for multi-turn preference-based RL in the tabular setting. It proves the convergence to Nash equilibrium and evaluates the algorithm's performance in the Education Dialogue environment, where a teacher agent guides a student. Strengths: - The paper is well-written and easily understandable. - The work extends RLHF to the multi-turn setting. - The proofs provided are comprehensive, and the section on mirror descent policy optimization is particularly interesting. Weaknesses: - The experiment employs a small model, which may limit its generalizability to larger-scale models used in current LLMs. - The evaluation solely relies on reward-based judgments, and it would be beneficial for the authors to consider incorporating GPT-4 or Gemini to assess win rates compared to other baselines. Technical Quality: 3 Clarity: 3 Questions for Authors: - Have you considered extending your algorithm to incorporate decoder-only models like 7B Llama or Mistral? Since decoder-only architectures are prevalent in mainstream LLMs, this could be a valuable addition. - Reward reliability is a concern, even when produced by off-the-shelf LLMs. I'm curious if there are any discussions in the paper regarding rewards in the multi-turn RLHF setting, as this would be a significant contribution. - I couldn't find any explanation regarding the absence of a reward model in your settings. Could you provide some clarification? This deviates from the typical formal RLHF pipeline. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This work only utilizes relatively small T5-based models and prompt-based environments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for evaluating our work. We would like to point the reviewer to the general response regarding their concerns with the size of the models and using ChatGPT/Gemini for evaluations. In short, we conducted additional experiments with T5-XL (3B) that solidify the conclusions of our original experiments, and added evaluations using Gemini-Ultra that again reach similar conclusions to the original evaluations in the paper. We plan to add additional experiments with T5-XXL (11B) for the camera ready version. Regarding the questions asked by the reviewer: 1. Performing additional experiments with decoder-only models is indeed valuable, but this is left for future work since obtaining SOTA models is not the purpose of our experiments. The experiments in the paper validate our theoretical findings and the efficacy of our preference-based multi-turn approach. Please see additional explanations in the general response regarding T5-large models. 2. Regarding a discussion on the rewards, we kindly ask the reviewer to read the general response regarding the soundness of our evaluations and the new evaluations that we now added. 3. Regarding the absence of reward, we note that the data is inherently preference-based so there is no reason to assume that a reward exists. Moreover, as shown by the NashMD paper [1], there are situations that cannot be captured by a reward, only by a direct preference model. For further explanations, we kindly refer the reviewer to the general response about motivation and evaluations. [1] Nash Learning from Human Feedback, Munos et al. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' comprehensive response. I would like to keep the current score. Thanks! --- Reply to Comment 1.1.1: Comment: Thank you for reading our response. Since we performed additional experiments on larger models and added new evaluations based on your suggestions, have we answered all of your concerns? 
If so, we kindly ask that you consider raising your score accordingly. We would like to add that we are currently running experiments with Gemma models.
Summary: This paper views the problem of RLHF for LLM fine-tuning from a multi-turn interaction perspective, which is natural and promising. This problem is important and interesting to study. A formulation of multi-turn preference-based RL is given. Based on the formulation of the task and existing methods for the single-turn setting, a series of methods is proposed, including a Bradley-Terry (BT) model-based RLHF method, namely Multi-turn RLHF, and two Nash-MD algorithm-based methods, namely MTPO and MTPO-$\tau$. Proofs of the convergence of the proposed methods are presented. The effectiveness of the proposed methods based on the multi-turn setting against conventional methods based on the single-turn setting is evaluated on the Education Dialogue task. Additional experiments on the Car Dealer task further demonstrate the effectiveness of the MTPO method. Strengths: 1. The perspective of viewing the LLM alignment problem at the conversation level is itself novel and worthy of study. I believe methods derived from this perspective will play an essential role in LLM alignment. 2. The motivation and the potential advantage of considering LLM alignment from a multi-turn perspective are well explained. 3. This paper is written clearly and is easy to follow. Weaknesses: Many long-standing challenges in conventional RL research may occur in the setting of multi-turn preference-based RL, such as credit assignment, sparse rewards, and the trade-off between exploration and exploitation. The current experimental results are not sufficient to show the effectiveness of the proposed methods: 1. Models: The paper only reports evaluation results using one LLM, i.e., the T5-large encoder-decoder model (770M parameters). Though it is unnecessary to evaluate the methods using large-scale LLMs in a research paper, reporting results based on diverse small-scale models, e.g., ~2B parameters, should be practically doable and would make the argument more convincing.
It is noted that the authors discussed this as a known limitation; however, I believe more extensive experiments on diverse models are needed to support the claimed contributions of this paper. 2. Tasks: Only one self-constructed task is used to compare the proposed methods against methods in the conventional single-turn setting. Multi-turn conversation is ubiquitous in application scenarios of LLMs. It would be interesting to see the performance of the proposed methods on other tasks. Minor suggestions: 1. The concept of "anchor policy" should be explained in more detail to make it easier to understand the problem formulation and methods. 2. The derivation of Equation (2) from Equation (1) is missing. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Single-turn RLHF vs. multi-turn RLHF - Is it correct that multi-turn RLHF uses a subset of the training data that is used for single-turn RLHF? It would be interesting for the readers to know more about the training data used for each method. 2. Efficiency of different methods - Training efficiency is a crucial factor for RL-based methods. The multi-turn setting could potentially make the RL training less efficient as it only uses sparse conversation-level feedback. It would be interesting to see the training time before reaching convergence for different settings and methods. 3. Generalizability - The paper evaluates the benefit of adopting the multi-turn setting and the effectiveness of the proposed methods (multi-turn RLHF, MTPO, MTPO-τ) on the self-designed Education Dialogue task. Could you discuss other tasks that could benefit from multi-turn RLHF? Can tasks involving multi-step reasoning benefit from the multi-turn setting? 4. MTPO (online oracle) vs. MTPO (preference data) in the Car Dealer task - Could you please present a possible explanation of why MTPO (preference data) outperforms MTPO (online oracle)?
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations of the paper are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully assessing our work. We kindly point the reviewer to the general response regarding their concerns with the size of the models: we conducted additional experiments with T5-XL (3B) that solidify the conclusions of our original experiments. We plan to add additional experiments with T5-XXL (11B) for the camera-ready version. Regarding tasks, we agree that it would be very interesting to test our algorithms on other benchmarks. Specifically, we see tool use and long-term reasoning as particularly important and interesting tasks. However, when writing this paper, we could not find much prior alignment work for the multi-turn setup in natural language, and therefore there was no usable multi-turn task. Instead, we could only find reward-based tasks like the LMRL-GYM, which the car dealer experiment is based upon, in which the goal is the usual RL goal of improving a reward, and not improving the alignment w.r.t. preference data. This has led us to develop a new task for this setting, Education Dialogue. We believe that this task is an important contribution by itself and will allow future research on this topic, and we will be happy if other researchers create new relevant tasks for it. Thanks for the minor suggestions; we will clarify these in the final version. 1. The concept of an anchor policy is widespread in the alignment literature, and refers to regularizing towards the initial supervised policy so that the learned policy does not diverge too far from the good properties of the base model. We will make sure this is clearly stated. 2. Regarding the derivation of eq. (2) from eq.
(1): Thanks for the comment; it is quite straightforward and we will make sure a rigorous derivation appears in the appendix. It is well known that, for any policy $\pi'$, the solution of $\pi^* = \arg\max_{\pi} \langle Q(\cdot), \pi(\cdot) \rangle - \mathrm{KL}(\pi \,\|\, \pi')$ is $\pi^*(\cdot) \propto \pi'(\cdot) \exp(Q(\cdot))$ (this is commonly used in mirror-descent-based algorithms, as well as in DPO). From here, we note that it can be shown with a simple algebraic manipulation that the optimization problem in eq. (1) is equivalent to the above optimization problem when $\pi'=\pi_k^\mu$, the geometric mixture $\pi_k^\mu \propto \pi_k^{1 - \alpha \eta_k} \mu^{\alpha \eta_k}$. The derivation is concluded by plugging this into the solution presented above. Regarding the questions asked by the reviewer: 1. We would like to clarify that single-turn RLHF uses feedback for each individual turn, which does not exist (or is very hard to define) in tasks that are inherently multi-turn. This is validated in our experiments, in which single-turn baselines perform significantly worse than the multi-turn algorithms. Specifically, we see that even if we artificially create single-turn feedback, it is hard to define it in a way that aligns with the overall preference (which is the alignment goal in this setting). Moreover, unlike trajectory-level feedback, turn-level feedback depends on the policy that continues the conversation after the specific turn, and therefore becomes biased once the policy changes through the learning process. 2. We agree that training time is an important question, but it is not the focus of this paper and we leave it to future research. We focus on whether multi-turn RL improves upon single-turn RL in tasks that are inherently multi-turn. Both our theory and our experiments corroborate that a multi-turn approach is crucial for obtaining a better policy.
Depending on the application and the resources, different projects may choose whether they are able to apply it or not. 3. Regarding different tasks/applications for multi-turn RL, please see the second paragraph. 4. Note that the difference in performance between MTPO (preference data) and MTPO (online oracle) is very small, which leads us to believe that they are actually roughly equal in performance. A sensible explanation is therefore that both of them converged to the optimal policy (or at least some local optimum). Thus, MTPO (online oracle) cannot outperform MTPO (preference data) even though it observes better feedback. --- Rebuttal 2: Title: Thanks for the responses Comment: Thanks for your responses. Some of my concerns were addressed; however, there are still some comments and questions remaining. > The concept of an anchor policy is widespread in the alignment literature I agree that regularizing a training policy towards an existing policy is a common practice in off-policy RL, offline RL, and the alignment literature. However, the term "anchor policy" may not be widely recognized as standard terminology. It would be helpful for readers if you explained it or provided appropriate references when introducing it for the first time, to maintain the reading flow. > single-turn RLHF uses feedback for each individual turn, which does not exist (or is very hard to define) in tasks that are inherently multi-turn. I agree with this. Could you please provide more details on how you collected single-turn feedback in your practical experiments? Specifically, how did you "artificially create" single-turn feedback? > We agree that training time is an important question, but this is not the focus of this paper and we leave this to future research. Could you please include this comparison, e.g., in a single sentence, in the camera-ready paper if it is accepted? This would help readers better understand the overall task setup and the proposed method for multi-turn RLHF.
--- Rebuttal Comment 2.1: Comment: We thank you for reading our responses; here are answers to your questions: 1. We are familiar with this term [1,2,3], and thought it was widespread, but after your comment we surveyed many papers and observed that it is indeed not as common as we thought. Thanks for mentioning this! We agree that explaining the term anchor policy, the reference policy used for regularization, will help the readers, and we will incorporate it in the final version along with the above references that use this term. 2. The description of how we created single-turn feedback can be found in Section 5, in the single-turn baselines paragraph. We now provide a detailed explanation, which we will also include in the final version to improve clarity: \ In order to run a single-turn algorithm we need to provide preference feedback between two immediate generations at each specific turn. To create this data, we generate partial conversations (up to a random turn number i) using the SFT policy, and at this turn we generate two independent answers (again using the SFT policy). Now, to get the preference feedback we employ two different methods: * Single-turn-reward: We use a modified preference prompt (it can be found in Appendix C and we also add it below) in which the model is asked to evaluate the responses by their effect on the overall conversation. * Single-turn-value: We use the SFT policy to continue the two conversations until the end, and then use our original prompt to get the preference. 3. We will include a comment in the final version on the running time of the different algorithms. Specifically, in our experiments, multi-turn algorithms are slower than their single-turn alternatives by a factor of 2. However, it is important to emphasize that running single-turn algorithms longer does not yield better policies. Instead, this leads to worse policies due to overfitting.
[1] PERL: Parameter Efficient Reinforcement Learning from Human Feedback, Sidahmed et al. [2] Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback, Roit et al. [3] WARP: On the Benefits of Weight Averaged Rewarded Policies, Ramé et al. **Modified prompt:** You are an expert at assessing teachers. Here is an interaction between a teacher and a student. # Interaction: {conv} # Here are two possible responses by the teacher: # Response 1: {resp1} # Response 2: {resp2} # A good interaction between a teacher and student is characterized by several key elements other than whether the student was able to understand the topic. The teacher should present information clearly and enthusiastically, encouraging questions and active participation. Students should feel comfortable asking for clarification, offering their own insights, and respectfully challenging ideas. Assuming that the teacher and student continue the interaction with one of these responses, which response will lead to a better interaction (do not let the order interactions affect your answer)? Output 1 or 2.
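For completeness, the eq. (1)→(2) argument sketched in the thread above can be written out explicitly. This is an editor's sketch in the rebuttal's notation (per-state, with an unscaled $Q$), not text from the paper:

```latex
% KL-regularized policy improvement over the probability simplex \Delta:
\pi^\star \;=\; \arg\max_{\pi \in \Delta}\; \langle Q(\cdot), \pi(\cdot) \rangle \;-\; \mathrm{KL}\!\left(\pi \,\|\, \pi'\right)

% Lagrangian with multiplier \lambda for the constraint \sum_a \pi(a) = 1:
\mathcal{L}(\pi,\lambda) \;=\; \sum_a \pi(a)\,Q(a) \;-\; \sum_a \pi(a)\log\frac{\pi(a)}{\pi'(a)} \;+\; \lambda\Big(\sum_a \pi(a) - 1\Big)

% Stationarity, \partial \mathcal{L} / \partial \pi(a) = 0:
Q(a) \;-\; \log\frac{\pi(a)}{\pi'(a)} \;-\; 1 \;+\; \lambda \;=\; 0
\quad\Longrightarrow\quad
\pi^\star(a) \;\propto\; \pi'(a)\, e^{Q(a)}
```

Substituting the geometric mixture $\pi' = \pi_k^\mu \propto \pi_k^{1-\alpha\eta_k}\,\mu^{\alpha\eta_k}$ then gives the closed form the rebuttal references for eq. (2).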
Summary: This paper aims to propose a new method for reinforcement learning with preference data for multi-turn conversations. The proposed method is based on the assumption that reaching the Nash equilibrium of the current and another policy can lead to good optimization. The proposed method is presented primarily as an extension of a prior work called "Nash learning from human feedback" to the multi-turn setting. Strengths: * This paper provides multiple theoretical discussions to lay the ground for the formulations. * The proposed method may be helpful in the scenario where we only have one overall reward for a dialogue session with multiple turns. Weaknesses: 1. The definition of Nash equilibrium in the paper may not be correct (e.g., in L55-56, "a policy preferred over any other policy"). The standard definition of a Nash equilibrium is rather that no agent can get a better reward by just changing its own action; there could still be a globally better policy. I also did not find this paper citing any game theory or original Nash equilibrium papers. 2. This paper could be better motivated. I still don't know why we need Nash equilibrium and the proposed method for multi-turn natural language generation (NLG). The proposed method in this paper is not specially designed for multi-turn but only extends a general method for NLG training to the multi-turn setup, which is often done and can be done for most general methods. 3. This paper does not compare against, or even discuss, the long line of multi-turn RL research in NLG, thus not showing innovations compared to prior works. For example, Li, Jiwei, et al. "Deep Reinforcement Learning for Dialogue Generation." Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2016. 4. I'm not sure why the paper highlights and introduces the Contextual MDP (CMDP).
First, the CMDP may not be practical or useful in multi-turn conversations (the key topic discussed in this paper), since the conversation history is already considered part of the state x. The paper's method derivation also skips the context space. Therefore, I don't see the necessity of using the CMDP. Second, if I do not understand it wrongly, the CMDP is based on prior work (Contextual Markov Decision Processes, https://arxiv.org/abs/1502.02259). However, this paper again does not cite the reference. 5. This paper could be organized better. Old knowledge and newly proposed ideas are mixed together, making it hard to identify which parts the paper really proposes and which just restate other works' contributions. 6. As the update is based on the Nash equilibrium reached by the current learning policy taking the prior iteration's policy as the opponent, doesn't it mean the policy is learned to be better than its prior iteration? And therefore, can it be reduced to optimizing every turn (while considering all the prior conversation history) with each turn's rewards, while constraining the policy to not diverge largely from the prior iteration ($\pi_t$) and the base policy ($\mu$)? From the paper's description of the experiments, I cannot determine the input, output, and rewards used for every baseline. I'm particularly interested in the performance of optimizing every turn ($y_h$) with its own complete history ($x_h$) and a trained or supposed reward for each of them ($r_h$), using an optimization method that regularizes the policy difference between each iteration, e.g., TRPO or PPO. Technical Quality: 2 Clarity: 1 Questions for Authors: As listed in weaknesses. Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The paper has discussed a proper amount of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reviewing our work. In the following we answer your concerns: 1. Our definition of Nash equilibrium is correct, and this is in fact the most natural definition that arises from our multi-turn preference model. Note that only a pure Nash equilibrium satisfies "one agent cannot get a better reward by just changing its action". Generally, a Nash equilibrium is defined as a set of players' **strategies** for which no agent can increase their *payoff* by unilaterally changing their own strategy. This holds beyond simple constant-sum matrix games, and extends to the concepts of subgame perfect Nash equilibrium in extensive-form games or Markov perfect equilibrium in stochastic games, which are similar to our formulation. Importantly: (i) even in matrix games, strategies are not necessarily "pure", meaning that a strategy does not equal an action. Instead, in RL terminology, **strategy = a (stochastic) policy**, and in our multi-turn case, the policy is a function of a state, $\pi(a|s)$. (ii) In alignment in general, and particularly in our multi-turn case, the *payoff* of the game is not the "reward" but the **preference** of one policy against another. All in all, this makes our definition quite natural: a good **policy** is one that is **preferred** over any other policy. Notably, there does not exist a preference model for individual actions (only for entire trajectories). Finally, due to the structure of the regularized preference objective (anti-symmetric), the Nash equilibrium has both agents following the same policy, and thus we can express it in terms of a single policy such that no other policy is preferred over it. We will be happy to provide additional clarification, and we will add this discussion together with relevant game theory citations in the final version. 2. Regarding motivation: We disagree that the paper is not well-motivated.
Multi-turn interactions are extremely important for LLM alignment because we use LLMs in a multi-turn conversational manner. Moreover, the motivation for exploring Nash equilibria in this setting is exactly the same as in the single-turn case. Just as in NashMD and IPO: since the data itself is preference-based, there is no reason to assume the existence of a reward model. Therefore, both our work and NashMD/IPO give stronger theoretical guarantees on the policy that they converge to. Moreover, both NashMD and our MTPO algorithm are shown to be superior to RLHF baselines in practical experiments. 3. Regarding comparison to the NLG literature: While our paper investigates LLM alignment, which is not built upon the literature on dialogue NLG, we do agree with the reviewer that this literature should be referenced. We will add these references and a relevant discussion in the camera-ready version. 4. Regarding contextual MDPs: The single-turn alignment problem is usually described as a contextual multi-armed bandit problem (e.g., Section 3 of the NashMD paper). Therefore, it is natural and rigorous to treat the multi-turn problem as a contextual MDP. In our formalization we used this description, but as is often done in the single-turn case, we omit the context in our analysis for brevity. Importantly, the problem is indeed a contextual MDP and we will be happy to add the citation. 5. We disagree with the claim that old knowledge and newly proposed ideas are mixed together in the paper. We kindly ask the reviewer to explicitly point us to the places where this happens, so we can improve the presentation of the paper. Nonetheless, for better readability, we will include the organization of the paper in a paragraph in the introduction. In short, the preliminary section (Section 2) is the only place where previous knowledge is presented, while all the following sections present our novel ideas. 6.
If we understood the reviewer's intention correctly, your suggestion is essentially what we refer to as the single-turn experiments, which perform markedly worse than our multi-turn variants. Similarly to your suggestion, **there already exists a KL-penalty** to the previous policy $\pi_k$ and the anchor $\mu$ in *both* our theoretically grounded MTPO algorithm (eq. 1) and our theoretically proven Multi-turn RLHF (Section 4, Multi-turn RLHF paragraph). The former is a result of the fact that both of these algorithms are based on mirror-descent policy optimization, similarly to TRPO and PPO [2, 3]. This term is aimed at increasing learning stability (as in TRPO/PPO). For our single-turn experiments, we "suppose" a turn-level reward/preference signal using two methods: (1) using a prompted model; (2) rolling out two samples of the conversation until their end and querying the trajectory-level preference model. Our multi-turn algorithms essentially replace this supposed reward signal with a correct notion of a preference/reward-based $Q$-function, which is one of the key contributions of our work. [1] Nash Learning from Human Feedback, Munos et al. [2] Adaptive Trust Region Policy Optimization: Global Rates and Faster Convergence to Regularized MDPs, Shani et al. [3] Mirror Descent Policy Optimization, Tomar et al. --- Rebuttal Comment 1.1: Comment: Dear Reviewer AyR4, We hope this message finds you well. We kindly want to check if you had a chance to review our rebuttal, and whether you have any further questions or comments we can address to help with your evaluation. Sincerely, The authors --- Rebuttal 2: Comment: Thank you for the responses and the newly provided experiments. While some of my concerns remain, I will increase my score from 4 to 5. Below are some follow-ups on the unaddressed concerns. * Regarding motivation, my question is, "Why use Nash equilibrium for the multi-turn scenario?"
The multi-turn scenario is important for sure, and the exploration of Nash equilibrium LMs is interesting, too. But such an exploration was already made in the NashMD paper (Munos et al. 2023). Does it mean the motivation of this work is extending the same idea from single-turn cases to multi-turn cases? * Regarding comparison to NLG works: This paper works on LLM alignment, which is primarily within the conversation generation area. I do not see a reason not to discuss or compare against them. * The current writing of this paper makes it difficult to find out which parts are new. I see that not only in Section 2 but even in Sections 3 and 4, much information and many equations are from or derived from (Munos et al. 2023). * About the experiments, I haven't seen a response to my question about the input, output, and rewards used for every baseline. Also, after reading the response to the suggested experiment, I'm still confused about how the current "single-turn experiment" in the paper optimizes: does it use a single turn to optimize the model one at a time? Or does it accumulate rewards for all turns and optimize together? Meanwhile, the experiment I asked about uses each turn with constrained optimization. But from my understanding of the response, the authors do not tackle this, but mention a different thing. --- Rebuttal Comment 2.1: Comment: Thank you for carefully reading the paper and participating in the discussion; your comments are valuable and will surely help us improve the final version. We hope the following responses answer your follow-up questions, and we will be happy to provide further clarifications. * The primary motivation for our work is the multi-turn RL setting, which was not investigated before with a theoretically grounded approach in the LLM alignment literature, and which the reviewer agrees is of significant importance. Within the multi-turn setting, it is natural to explore algorithms that converge to a Nash equilibrium, since Munos et al.
(Nash-MD) and Calandriello et al. (Online-IPO) showed that they outperform RLHF algorithms in the single-turn setting. Importantly, extending the results of Munos et al. to the multi-turn setting is a very difficult task. It builds on the vast RL literature that extends algorithms from multi-armed bandits to Markov decision processes (MDPs), and also requires crucial novel ideas that do not appear in the literature, such as the preference-based Q-function. Using these, we are able to prove that even in the much more complex multi-turn setting, there exists an algorithm which both converges to a Nash equilibrium and outperforms RLHF algorithms (both single-turn and multi-turn) in practice. * We share the same feelings as the reviewer towards the importance of conversation generation, as reflected in our choice to design our benchmark as an education-based conversation generation task. In turn, we agree that it will improve our work to include a discussion of how our approach differs from, or can be used as a complementary approach to, techniques presented in relevant NLG works, specifically ones that are based on RL. If the reviewer has any more references besides the seminal "Deep Reinforcement Learning for Dialogue Generation", we would be happy to include them in the discussion that we will add to the final version. \ We feel it is important to note again that our multi-turn setting is more general than conversation generation, and captures other important tasks. One example is multi-turn tool use: In this task, the agent gets a single query from a user, and is allowed to repeatedly use different tools through API calls and responses. The interaction ends when the agent decides it has enough context to respond to the user, and generates a final user-facing response. This whole process consists of $N$ turns, where there are $N-1$ consecutive tool calls, and a final generation turn.
The goal in this case is the alignment of this final user-facing response, so the preference feedback is only given at the end of the process. Importantly, it is hard to acquire local feedback (rewards or preferences) for each of the tool calls, and the alignment is only captured w.r.t. the final response. In this scenario, our multi-turn alignment algorithm MTPO allows the propagation of the final alignment signal to improve the intermediate tool calls, towards the goal of making the overall user-facing generation better. We will include a similar discussion to make the motivation and scope of our work clearer. * All the derivations in Sections 3 and 4 are new. Naturally, they are motivated by the work of Munos et al. on the single-turn setting and by the vast literature on mirror descent, but the extension of these to the multi-turn setting (and the definition of this setting) are not trivial and require delicate derivations and proofs. Note that some definitions in Section 3, which might look similar to objects previously defined in Munos et al., are actually entirely new. For example, the regularized preference model that we present in Section 3 is a generalization of the regularized preference model of Nash-MD, with a significant difference: the preference $\mathcal{P}(\pi \succ \pi')$ is the expected preference, over the whole multi-turn decision process, of following the policy $\pi$ vs. $\pi'$ in the environment with transition probabilities $p$. In the opening paragraph of this section, you can see that $\mathcal{P}(\pi \succ \pi') = \mathbb{E} \left[ \mathcal{P}(x_{H+1} \succ x'_{H+1}) \mid \pi, \pi', p \right]$. Importantly, the expectation is taken over the whole MDP, meaning the expectation over the possible multi-turn interactions given an agent policy $\pi$ (e.g., the teacher) and a transition model $p$ of the environment (e.g., the student).
Moreover, the definition of $KL_p$ is again the expected KL over a multi-turn trajectory, which is different from the single-turn KL of Munos et al.; however, we prove a strong connection between them in Lemma 3.1. We kindly ask the reviewer to give specific pointers to information which is taken from previous work in these sections. --- Reply to Comment 2.1.1: Comment: * See the following detailed description of our single-turn baselines, which we will add in the final version of the paper: In order to run a single-turn algorithm we need to provide preference feedback between two immediate generations at each specific turn. To create this data, we generate partial conversations (up to a random turn number i) using the SFT policy, and at this turn we generate two independent answers (again using the SFT policy). Now, to get the preference feedback we employ two different methods: * Single-turn-reward: We use a modified preference prompt (it can be found in Appendix C and we also add it below) in which the model is asked to evaluate the responses by their effect on the overall conversation. * Single-turn-value: We use the SFT policy to continue the two conversations until the end, and then use our original prompt to get the preference. To sum up, the input to the single-turn algorithm is a dataset that consists of partial conversations. For each partial conversation, there are two immediate responses and a preference between them. The algorithm uses this dataset to learn a reward model (RLHF) or a preference reward model (NashMD). Then, the RL part is done just like in standard RLHF: sample a partial conversation from the dataset, generate one/two immediate responses using the policy, get a reward/preference from the model, and perform a policy gradient step. ### Modified prompt: You are an expert at assessing teachers. Here is an interaction between a teacher and a student.
# Interaction: {conv} # Here are two possible responses by the teacher: # Response 1: {resp1} # Response 2: {resp2} # A good interaction between a teacher and student is characterized by several key elements other than whether the student was able to understand the topic. The teacher should present information clearly and enthusiastically, encouraging questions and active participation. Students should feel comfortable asking for clarification, offering their own insights, and respectfully challenging ideas. Assuming that the teacher and student continue the interaction with one of these responses, which response will lead to a better interaction (do not let the order interactions affect your answer)? Output 1 or 2.
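The single-turn data-construction procedure described in the thread above can be sketched in code. This is an editor's toy sketch: `sft_policy`, `judge_preference`, and `H` are illustrative stand-ins for the authors' SFT model and LLM judge, not their actual implementation.

```python
import random

H = 4  # total number of agent turns in a conversation (assumed)

def sft_policy(history):
    """Toy stand-in for sampling one turn from the SFT policy."""
    return f"turn-{len(history)}-{random.randint(0, 9)}"

def judge_preference(conv_a, conv_b):
    """Toy stand-in for the trajectory-level judge: 1 if conv_a is preferred, else 2.
    In the thread above, this role is played by the preference prompt."""
    return 1 if len("".join(conv_a)) >= len("".join(conv_b)) else 2

def make_single_turn_example(seed=0):
    """One 'single-turn-value' example: roll a partial conversation to a
    random turn i, branch into two independent responses, continue both
    branches to the end with the SFT policy, then query the trajectory judge."""
    rng = random.Random(seed)
    random.seed(seed)  # keep the toy policy deterministic as well
    i = rng.randrange(H)
    history = []
    for _ in range(i):                      # partial conversation up to turn i
        history.append(sft_policy(history))
    resp1, resp2 = sft_policy(history), sft_policy(history)  # two candidates
    conv1, conv2 = history + [resp1], history + [resp2]
    while len(conv1) < H:                   # continue both branches to the end
        conv1.append(sft_policy(conv1))
    while len(conv2) < H:
        conv2.append(sft_policy(conv2))
    return {"history": history,
            "responses": (resp1, resp2),
            "preferred": judge_preference(conv1, conv2)}

example = make_single_turn_example()
```

For the "single-turn-reward" variant, the judge would instead be called on `(history, resp1, resp2)` directly, without continuing the branches to the end.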
Summary: The authors propose reinforcement-learning methods for multi-turn interactions using preference feedback. They present a mirror-descent-based policy optimization algorithm and prove its convergence to a Nash equilibrium. Strengths: - The authors extend the RLHF paradigm to the multi-turn setting for an agent with multiple interactions with an external (stochastic) environment. - In their feedback, they consider the entire multi-turn conversation and not just a single turn, which enables them to capture the long-term effect of individual actions and evaluate the conversation more accurately. - The authors create and publicly release the Education Dialogue data. Weaknesses: - No human evaluation is performed, and the high-capacity LLM evaluation might not entirely reflect human preferences in a complicated conversation. Also, the prompt used for LLM evaluation only asks which conversation is better in general and doesn't compare different aspects of a good and valid conversation, such as consistency, compliance with the roles specified in the prompt, etc. - Limited experiments, only using small T5 models and simple configurations. Technical Quality: 3 Clarity: 2 Questions for Authors: Refer to weaknesses. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The current evaluation is limited and can be extended to other models and tasks to show the effectiveness of the approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to evaluate our work. Here are the responses to your comments: Human evaluation: We refer the reviewer to the discussion in the main response on human evaluation and alignment. We briefly repeat the main message there: The main goal of our experimental section is to test our algorithms against other algorithms in how well they align with preference data. Unlike single-turn benchmarks, which are based on data with real human preferences, our Education Dialogue domain is based on Gemini-Ultra curated preferences. As a result, evaluating this alignment using the same Gemini-Ultra model, as done in our newly presented experiments, is true to our goal of understanding how to align to preference data. While human evaluation is always interesting, here it is actually just a proxy to alignment with the data. Regarding the prompt for LLM evaluation: We agree that in a real dialogue system, it is important to collect diverse data from raters that account for many attributes of what it means to be a good conversation. However, the goal of this paper is not to create a real dialogue system, but to develop *better algorithms* to be used for alignment. We believe that our grounded theory and experimental results suggest that MTPO should be heavily considered when trying to create such a real dialogue system or any other multi-turn domain. It is important to point out that other algorithm-focused works (RLHF/DPO/IPO/NashMD) are also using a single-preference signal that tries to holistically capture the task (e.g., summarization). This is again due to the fact that these papers are not intended to create a SOTA summarization model, but rather to provide new algorithms to train one. Regarding model sizes: We added experiments with T5-XL (3B) that solidify the conclusions of our original experiments, and we plan to add additional experiments with T5-XXL (11B) for the camera-ready version.
We refer the reviewer to the general response for a detailed discussion of our results. In light of the above, we kindly request the reviewer to reconsider their score based on both the significance of our theoretical contribution and the new experiments showing that our results extend to larger models. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. After reading the rebuttal, I decided to increase my score by +1.
Rebuttal 1: Rebuttal: We thank the reviewers for the time and effort put into the reviews. The following address points that appeared in multiple reviews. # Paper Scope: It appears to us that the reviewers have dedicated most of their attention to the experimental results, potentially overlooking the paper’s main contributions, i.e., introducing fundamental and grounded novel algorithmic concepts and a new multi-turn setting for LLM alignment. We emphasize that the purpose of our experiments is to validate our approach and theoretical findings in a practical setup, rather than to obtain SOTA. Concretely, we kindly ask the reviewers to re-consider their judgment of the paper in light of the following main contributions: 1. We introduce the multi-turn setting for LLM alignment, which judges agents based on their overall communication capabilities instead of just their ability to provide an immediate useful response. This is crucial for enhancing LLM capabilities in scenarios, such as planning and complex tool-use. Moreover, we mathematically formalize this setting and define the novel objective of finding the policy that is best aligned with human preferences. 2. We identify a novel preference-based non-symmetric Q-function as the fundamental mathematical object that allows us to efficiently solve the multi-turn objective. Furthermore, we use this Q-function to come up with novel algorithms that are theoretically grounded and practical to implement. We also extend (and analyze) RLHF to the multi-turn setting. 3. Theoretically, we prove that our algorithms converge to the Nash equilibrium of the multi-turn setup (while clearly single-turn approaches fail to do so). 4. Empirically, we show that multi-turn approaches beat single-turn baselines. 5. We create a new preference-based multi-turn experimental environment. 
Since we are not aware of any such environment, this important contribution gives the community a benchmark to further explore multi-turn alignment algorithms and improve LLMs' multi-turn capabilities in the future. # On (Human) evaluation and Alignment The ultimate goal of learning from preferences is human alignment, and our multi-turn preference-based algorithm is explicitly designed to improve the alignment of LLMs. However, collecting quality human data is a very long and expensive process. This is especially true when one needs to evaluate the difference between long dialogues. While recommended when training real-world systems, in academic settings, it has become standard practice to use a highly capable LLM (Gemini/ChatGPT) as a proxy for human alignment (see DPO, RLAIF, NashMD, IPO). Additionally, unlike some of the above works that train on data collected from real human preferences but evaluate performance with an LLM as a judge, in our work, the training preference data itself is generated by an LLM. Therefore, the true goal in our curated environment is to align the model with the preference of this highly capable LLM rather than a human rater. Following the reviewers’ recommendations, we validated our results using the same Gemini Ultra model used to generate the data, reaching the same conclusions (see below) as with the prompted T5-xxl model used in the paper. This is indeed a more truthful measure for the model’s alignment. We wish to thank the reviewers for helping improve the paper; we will include these new and strong results in the final version.
**Evaluation with a Gemini-Ultra judge** ||SFT|ST RLHF Reward|ST RLHF Value|MT RLHF|MTPO|MTPO-tau| |:------------------|:-----:|:--------------:|:-------------:|:-------:|:-----:|:--------:| |SFT|-|0.206|0.286|0.164|0.125|0.086| |ST RLHF Reward|0.794|-|0.479|0.452|0.447|0.277| |ST RLHF Value|0.814|0.521|-|0.467|0.438|0.320| |MT RLHF|0.836|0.548|0.533|-|0.419|0.288| |MTPO|0.875|0.553|0.562|0.581|-|0.305| |MTPO-tau|0.914|0.723|0.680|0.712|0.695|-| # On the usage of the T5-large model: We would like to point the reviewers’ attention to several important points regarding the use of T5-large models: 1. The multi-turn setting is more computationally demanding than single-turn, making the choice of smaller models especially reasonable in the context of an academic paper. Due to the repeated interactions, multi-turn dialogue is a much longer NLG task than common single-turn benchmarks like OpenAI TL;DR, Anthropic-HH, and XSUM, with an overall generation length roughly 20 times longer. Therefore, while we agree with the reviewers that comparing our algorithm on larger and more capable models is valuable, this was not feasible in this kind of academic research from both time and cost perspectives. 2. Our goal is to compare different algorithmic concepts to validate the benefits of our novel approach, and not to establish SOTA results. Thus, our comparison is valid and fair, as all baselines use the exact same model. While scaling up model sizes provides better performance, this is orthogonal to what our experiments aim to test. 3. Well-known concurrent works with similar experimental goals use the same T5-large models: The papers NashMD, IPO, online IPO, RLAIF, and more, use an even simpler experimental setup and have recently been published in top-tier conferences. This shows that T5 models are regarded as common practice for testing fundamental algorithmic concepts in academic contexts. 4. Per the reviewers' request, we are currently running experiments with larger models.
We note that these experiments take time to run but initial results on T5-XL (3B, see below) show similar gains for our multi-turn approach, and we will include the final results in the final version. **Evaluation of T5-XL (3B) using a Gemini-Ultra judge** ||SFT|ST RLHF Reward|MT RLHF|MTPO-tau| |-------------------|:-----:|:--------------:|:-------:|:--------:| | SFT |-|0.295|0.101|0.041| | ST RLHF Reward|0.705|-|0.180|0.069| | MT RLHF|0.899|0.820|-|0.139| | MTPO-tau| 0.959 | 0.931| 0.861|-| * **MTPO-tau L vs. MT RLHF XL:** 0.525
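Win-rate tables like the two above can be aggregated from individual judge outcomes with simple bookkeeping; a hypothetical sketch (not the authors' evaluation code), including the sanity check that opposite entries sum to 1 when there are no ties:

```python
import numpy as np

def win_rate_matrix(outcomes, n_policies):
    """Aggregate pairwise judge outcomes into a win-rate matrix like the
    tables above. `outcomes` holds (i, j, i_won) tuples, one per judged
    pair of conversations from policies i and j (hypothetical bookkeeping).
    Entry [i, j] is how often i beat j; untested pairs stay NaN.
    """
    wins = np.zeros((n_policies, n_policies))
    counts = np.zeros((n_policies, n_policies))
    for i, j, i_won in outcomes:
        counts[i, j] += 1
        counts[j, i] += 1
        wins[i, j] += i_won
        wins[j, i] += 1 - i_won
    with np.errstate(invalid="ignore"):  # 0/0 for unjudged pairs -> NaN
        return wins / counts

# With no ties, opposite entries sum to 1, matching the reported tables
# (e.g. SFT vs. ST RLHF Reward: 0.206 + 0.794 = 1).
```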
NeurIPS_2024_submissions_huggingface
2024
Uncertainty-based Offline Variational Bayesian Reinforcement Learning for Robustness under Diverse Data Corruptions
Accept (poster)
Summary: The authors introduce TRACER, a new Bayesian methodology to capture the uncertainty via offline data for robustness against all types of data corruptions. An appealing feature of TRACER is that it can distinguish corrupted data from clean data using an entropy-based uncertainty measure. Experiments are provided to prove the effectiveness of such a methodology. Strengths: The paper states the problem very clearly and convinces the reader that it is a relevant one. Compelling method to address such a problem. Weaknesses: The authors missed some important references: [1,2] introduce credal Bayesian deep learning, which is a robust way of performing Bayesian inference via Bayesian Neural Networks (where posterior and predictive distributions are approximated via variational inference), taking into account different types of uncertainty (using so-called credal sets, i.e. closed and convex sets of prior and likelihood probabilities), and quantifying them using entropy-based uncertainty measures. In the future, the authors may also look into [3], which seems like a compelling work for the research venue explored in the paper. [1] https://arxiv.org/abs/2302.09656 [2] https://link.springer.com/chapter/10.1007/978-3-031-57963-9_1 [3] https://arxiv.org/abs/2308.14815 Technical Quality: 3 Clarity: 3 Questions for Authors: I'd like to politely ask the authors to expand the related work with the above references [1,2]. In addition, would it be fair to say that, on page 3, the expectation that finds $\pi^\star$ is an integral taken with respect to the joint probability given by all the sources of randomness $P_0, \pi(\cdot \mid s_t), \rho(\cdot \mid s_t,a_t), P(\cdot \mid s_t,a_t)$? In equation (1), the closing parenthesis ) should go after $a_0=a$. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are explicitly acknowledged by the authors in the conclusion. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s insightful and constructive feedback. We have carefully addressed these concerns and accordingly revised the manuscript. These comments have not only facilitated significant improvements in our manuscript but have also inspired us for further in-depth studies in our future research. - **Q1**. The authors missed some important references [1,2]. In the future, the authors may also look into [3], which seems like a compelling work for the research venue explored in the paper. - **A1**. Thank you for pointing out these crucial references. The Shannon entropy for the measures of aleatoric and epistemic uncertainties discussed in these works [1,2] provides important insight and support for our method. We will study these references thoroughly and consider incorporating their findings into our future research. We also appreciate the suggestion to explore the recent work in [3], which could further enhance our understanding and methodology. We will include these additions in our revised manuscript. [1] Credal Bayesian Deep Learning. 2023. [2] A Novel Bayes' Theorem for Upper Probabilities. Epi UAI 2024. [3] Distributionally Robust Statistical Verification with Imprecise Neural Networks. 2023. - **Q2**. In addition, would it be fair to say that, on page 3, the expectation that finds $\pi^*$ is an integral taken with respect to the joint probability given by all the sources of randomness $P_0$, $\pi (\cdot | s_t)$, $\rho (\cdot | s_t, a_t)$, $P(\cdot |s_t, a_t)$? - **A2**. Yes. On page 3, the expectation that determines $\pi^*$ involves an integral taken with respect to the joint probability distribution influenced by all sources of randomness, including $P_0$, $\pi(\cdot|s_t)$, $\rho(\cdot|s_t,a_t)$, and $P(\cdot|s_t,a_t)$ [1,2]. [1] Trust Region Policy Optimization. ICML 2015. [2] Distributional reinforcement learning with quantile regression. AAAI 2018. - **Q3**. 
In equation (1), the closed parenthesis ) should go after $a_0=a$. - **A3**. Thank you for catching the typo in Eq. (1). We will carefully correct this in our revision. --- Rebuttal Comment 1.1: Title: Thank you! Comment: Thanks for your answers; I'm happy to keep my score. --- Reply to Comment 1.1.1: Title: Thanks for your kind support! Comment: Dear Reviewer vWB5, Thanks for your kind support and further improvements you have suggested for our manuscript! We are committed to incorporating all your valuable suggestions in the final version, if accepted. Thank you again for your valuable comments and guidance. Best, Authors
Summary: This paper presents a novel approach called TRACER, aimed at addressing the challenge of learning robust policies from offline datasets that are subject to various data corruptions. The key contribution of this work lies in the integration of Bayesian inference to capture uncertainty within the offline data. TRACER models data corruptions as uncertainty in the action-value function and approximates the posterior distribution of this function. Furthermore, it employs an entropy-based uncertainty measure to distinguish between corrupted and clean data, thereby regulating the loss associated with corrupted data to enhance robustness in clean environments. Experimental results indicate that TRACER outperforms SOTA methods in handling both individual and simultaneous data corruptions. Strengths: - The studied problem is important, and the paper offers a reasonable solution. - Theoretical guarantees are provided. - The empirical results show considerable improvement over prior work. Weaknesses: - Though I like the insight from Variational Bayesian inference, my major concern is that the method is quite complicated, incorporating numerous components and designs such as the distributional value function, variational inference, and entropy-weighted loss function. This complexity makes this method less preferable in practice. - Moreover, the results in the main tables (Table 2 and Table 3) appear similar to those of RIQL, raising concerns about the necessity of the included components. - Another issue is the readability and clarity of the writing. For example, Equations 10 and 11 lack necessary explanation and insight. Additionally, the implementation details for $\phi_a, \phi_b, \phi_c$ are missing, leaving readers unsure of how the method works in practice. Furthermore, Equation 10 seems to include $\theta$ as an input unnecessarily, as the right side does not depend on $\theta$. - The analysis of the experiments is not thorough. 
For instance, the accuracy of the measurement for corrupted data decreases during training in Figure 3, which is not discussed and seems unreasonable. Technical Quality: 3 Clarity: 2 Questions for Authors: - The proposed method is complicated. Can this method be simplified? - The results in the main tables (Table 2 and Table 3) appear similar to those of RIQL, raising concerns about the necessity of the included components. - Another issue is the readability and clarity of the writing discussed above. - Why does the accuracy of the measurement for corrupted data decrease during training in Figure 3? - Moreover, I suggest the authors provide a computational cost comparison with prior work. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: This paper acknowledges its limitations; however, I identified an additional concern regarding the complexity of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
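The entropy-weighted loss the reviewer refers to can be illustrated with a simplified numpy sketch: per-sample losses are down-weighted when the predicted action-value distribution has high entropy, i.e. is treated as more likely corrupted. This is an assumption-laden illustration, not TRACER's actual loss; `temperature` is a hypothetical knob, and categorical distributions stand in for the learned distributional critic:

```python
import numpy as np

def shannon_entropy(p, axis=-1, eps=1e-12):
    """Entropy of categorical distributions along `axis`."""
    p = np.clip(p, eps, 1.0)
    return -(p * np.log(p)).sum(axis=axis)

def entropy_weighted_loss(per_sample_loss, pred_dists, temperature=1.0):
    """Down-weight samples whose predicted action-value distribution has
    high entropy (treated as more likely corrupted). A simplified sketch of
    the down-weighting idea, not TRACER's implementation.
    """
    h = shannon_entropy(pred_dists)
    w = np.exp(-h / temperature)       # higher entropy -> smaller weight
    w = w / w.sum() * len(w)           # renormalize to mean weight 1
    return (w * per_sample_loss).mean()
```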
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful and constructive comments and suggestions. We respond to each comment as follows and sincerely hope that our responses could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to the comments and enhancing our submission. - **Q1**. The proposed method is complicated. Can this method be simplified? - **A1**. Thanks for your meaningful comment. - On top of the actor-critic architecture based on IQL, our approach TRACER adds just one ensemble model $(p_{\varphi_a},p_{\varphi_r},p_{\varphi_s})$ to reconstruct the data distribution and replaces the function approximation in the critic with a distribution estimate (see Figure G1 in the PDF of our global response). Therefore, the network structure is not complicated, while significantly improving the ability to handle both simultaneous and individual corruptions. - It is worth noting that our primary aim is the development of corruption-robust offline RL techniques for diverse corruptions. To the best of our knowledge, this study introduces Bayesian inference into corruption-robust offline RL for the first time, prioritizing novelty and robust performance in challenging scenarios with simultaneous corruptions. - We thank you for reminding us of the importance of the balance between complexity and usability. In future work, we plan to refine TRACER by estimating uncertainty directly within the representation space. Thus, we can simply learn action-value functions without using distributional RL. This advancement will potentially broaden the applications of TRACER in real-world scenarios. - **Q2**. The results in the main tables (Table 2 and Table 3) appear similar to those of RIQL, raising concerns about the necessity of the included components. - **A2**. Thanks for the insightful comment. Please refer to A3 in our global response. 
- **Q3**. Another issue is the readability and clarity of the writing. For example, Equations 10 and 11 lack necessary explanation and insight. Additionally, the implementation details for $\varphi_a$, $\varphi_r$, $\varphi_s$ are missing, leaving readers unsure of how the method works in practice. - **A3**. Please refer to A2 in our global response. - **Q4**. Furthermore, Equation 10 seems to include $\theta$ as an input unnecessarily, as the right side does not depend on $\theta$. - **A4**. Here is our explanation. - The goal of Eq. (10) of our main text is to optimize $(\theta, \varphi_a,\varphi_r,\varphi_s)$, thus minimizing the difference between $p_{\varphi_a}(A|D_\theta,S,R,S')$ and $\pi_{\mathcal{B}}(A|S)$, $p_{\varphi_r}(R|D_\theta,S,A)$ and $\rho_{\mathcal{B}}(R|S,A)$, and $p_{\varphi_s}(S|D_\theta,A,R)$ and $p_{\mathcal{B}}(S)$. More details are in A2 of the global response. - In Eq. (10), the input of each $(\mu_{\varphi}, \Sigma_{\varphi})$ includes $D_\theta$ with the parameter $\theta$. - **Q5**. The analysis of experiments is not thorough. Why does the accuracy of the measurement for corrupted data decrease during training in Figure 3? - **A5**. Thanks for your meaningful comment. - We apologize for any confusion caused by the unclear description and presentation of the validation experiments in Figure 3 of our main text. To clearly illustrate the measurement accuracy of agents learned by TRACER with respect to uncertainty during training, we further conduct validation experiments on Walker2d-Medium-Replay-v2. Specifically, we evaluate the accuracy every 50 epochs over 3000 epochs. For each evaluation, we sample 500 batches to compute the average entropy of corrupted and clean data. Each batch consists of 32 clean and 32 corrupted data. We illustrate the curves over three seeds in Figure G3 of the PDF of our global response, where each point shows how many of the 500 batches have higher entropy for corrupted data than that of clean data. 
- Figure G3 illustrates an oscillating upward trend of TRACER's measurement accuracy using entropy under simultaneous corruptions, demonstrating that using the entropy-based uncertainty measure can effectively distinguish corrupted data from clean data. - **Q6**. Moreover, I suggest the authors provide the computational cost comparison with prior work. - **A6**. Please refer to A1 in the global response. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: My concerns have mostly been addressed, so I have decided to raise my score. The added diagram and clarification on the writing greatly improve understanding. The authors are expected to include these in the revised paper. --- Reply to Comment 1.1.1: Title: Thanks for your kind support and for helping us improve the paper! Comment: Dear Reviewer wmJN, Thanks for your kind support and for helping us improve the paper! We are committed to including all additional diagrams and clarifications and will incorporate your suggestions in the final version, if accepted. Thank you again for your valuable comments and guidance. Best, Authors
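The batch-level measurement-accuracy protocol described in A5 (500 batches, 32 clean plus 32 corrupted samples each, counting batches where the corrupted side has higher mean entropy) can be sketched with synthetic entropy values standing in for TRACER's outputs:

```python
import numpy as np

def batch_accuracy(entropy_clean, entropy_corrupt, batch=32, n_batches=500, rng=None):
    """Fraction of sampled batches in which the mean entropy of corrupted
    data exceeds that of clean data, mirroring the protocol described in the
    rebuttal (500 batches of 32 clean + 32 corrupted samples). The entropy
    arrays here are synthetic stand-ins, not TRACER outputs.
    """
    rng = np.random.default_rng(rng)
    hits = 0
    for _ in range(n_batches):
        c = rng.choice(entropy_clean, size=batch, replace=False)
        k = rng.choice(entropy_corrupt, size=batch, replace=False)
        hits += k.mean() > c.mean()
    return hits / n_batches

# Assumed synthetic setting: corrupted samples have higher entropy on average.
gen = np.random.default_rng(0)
h_clean = gen.normal(1.0, 0.5, size=5000)
h_corrupt = gen.normal(1.5, 0.5, size=5000)
acc = batch_accuracy(h_clean, h_corrupt, rng=1)
```

Averaging over a batch before comparing is what makes the measure reliable: even with heavily overlapping per-sample entropy distributions, the batch means separate cleanly.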
Summary: The paper introduces TRACER, a robust offline reinforcement learning (RL) algorithm designed to address the challenges posed by data corruptions in offline datasets. The corruptions can take the form of state, action, reward, and dynamics corruption. The proposed methodology uses Bayesian inference to model and mitigate the impact of corrupted data on the action-value function. By leveraging an entropy-based uncertainty measure, TRACER can distinguish between corrupted and clean data, thereby reducing the influence of the former and enhancing the robustness of the learned policies. The paper provides a thorough evaluation on MuJoCo and CARLA offline RL datasets. Strengths: 1. The paper proposes an interesting method of down-weighting the contribution of corrupted data while learning the action-value function from the offline dataset, using entropy as an uncertainty measure. 2. The paper uses a Bayesian approach to learn the action-value function from the data, which aids learning from uncorrupted elements in the dataset. 3. The authors provide extensive experimental validation across multiple tasks and corruption types, showing mostly consistent performance improvements over other methods. The paper also reports ablation studies of using entropy, showing potential improvement from the proposed methodology. Weaknesses: 1. It is not clear how the final policy is extracted after learning $D_{\theta}$. 2. The paper relies on many hyperparameters. What kind of kernel is used in the Gaussian distribution learning in Eq 10? Is there an upper bound on what amount of corruption the method will be able to handle? 3. While the authors consider individual corruption of one element at a time, data collected from a real-world system can have simultaneous corruptions. For example, a corrupted action due to adversarial noise will lead the system to a different state, causing simultaneous corruptions. 4. 
Notational clarity: The paper introduces a lot of notation, which is difficult to keep track of, especially in the following contexts: a) The paper introduces $L_Q(\theta_i)$, $L_V(\psi)$, and $L_\pi(\phi)$ in Eqs. 4, 5, and 6 but does not discuss how they relate to $D_\theta$. Is the θ used for learning the Q function the same as the one for D? b) The paper introduces a new notation $q^\pi$ in Eq 18, called the value distribution. Is this the same as the action-value distribution? RIQL already proves the robustness of the IQL policy with respect to data corruption. What is the objective of Theorem A.3? It would be helpful to add a discussion on the theoretical analysis. Also, providing a table summarizing the notation in the appendix would enhance the readability of the text. Technical Quality: 3 Clarity: 2 Questions for Authors: 1) How relevant is the reward corruption, given that offline RL is already robust to noise in the reward function, as it mimics the actions for datasets with length bias? Please refer to ref 1. 2) The authors aim to leverage all elements in the dataset as observations. Do the authors assume that complete traces of the system for which the offline policy is being learned are available? In situations of dynamics corruption these traces might differ from the real system. For example, s_t in the corrupted dataset may not be observed in the real system. How are such cases handled? 3) The method learns the action-value function D from the data while reducing the influence of corrupted data. What amount of clean data needs to be present in the dataset for this method to work? 4) What is the reason for not using projected gradient descent for the reward corruption? Also, could the authors provide some understanding of why the method outperforms RIQL under certain settings and why it does not in some cases? 5) Please also respond to the weaknesses. 1. Li, Anqi, et al. "Survival instinct in offline reinforcement learning." Advances in Neural Information Processing Systems 36 (2024). 
Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors discuss the limitation of handling realistic noise in the conclusion. However, another limitation is the dependence on clean observations being present in the dataset when some elements are corrupted. The authors do not discuss any negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
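Several of the reviewer's questions concern how the scalar action value relates to the learned distribution $D_\theta$. In quantile-based distributional RL (e.g. implicit quantile networks), the action value is recovered from quantile values at sampled fractions; a minimal sketch, with `quantile_fn` standing in for a learned quantile head (this is illustrative of the general technique, not the paper's implementation, and it averages the sampled quantiles as common implementations do):

```python
import numpy as np

def q_from_quantiles(quantile_fn, n=64, rng=None):
    """Estimate a scalar action value as the average of N sampled quantiles
    of the return distribution, in the spirit of implicit quantile networks.
    `quantile_fn(tau)` is a stand-in for a learned quantile head D^tau.
    """
    rng = np.random.default_rng(rng)
    taus = rng.uniform(0.0, 1.0, size=n)
    return quantile_fn(taus).mean()

# For a uniform(0, 1) return distribution the quantile function is the
# identity, so the estimate should approach the true mean 0.5.
q = q_from_quantiles(lambda t: t, n=100_000, rng=0)
```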
Rebuttal 1: Rebuttal: We appreciate your insightful comments. We respond to each comment as follows and sincerely hope that our responses could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to the comments and enhancing our submission. - **Q1**. (1) The paper introduces $L_Q(\theta_i)$, $L_V(\psi)$, and $L_\pi(\phi)$ in Eqs. 4, 5, and 6 but does not discuss how they relate to $D_\theta$. Is the θ used for learning the Q function the same as the one for D? (2) How is the final policy extracted after learning $D_\theta$? - **A1**. We apologize for any confusion caused by unclear descriptions in our main text. 1. Relation between $Q$, $D$, and $\theta$. - We use **the same $\theta$** to learn $D$ and then estimate $Q$. Specifically, following [1], we have $$Q_{\theta_i}(s,a) = \sum_{n=1}^N D_{\theta_i}^{\tau_n}(s,a,r).$$ See Lines 167-172 of the main text for more details regarding the notations. Similarly, we also have $$V_{\psi}(s) = \sum_{n=1}^N Z_{\psi}^{\tau_n}(s).$$ - With the relation between $Q$ and $D$, $V$ and $Z$, we use Eq. (4) to derive $\delta$ in Eq. (12) and Eq. (5) to derive Eq. (13) in our main text. 2. Policy Learning. - Following the weighted imitation learning in RIQL and IQL, we use Eq. (6) in our main text to learn the policy. Details for notations are shown in Lines 122-123 of the main text. [1] Implicit quantile networks for distributional reinforcement learning. - **Q2**. The paper relies on many hyperparameters. - **A2**. Compared to RIQL, one of the SOTA methods, our approach TRACER only introduces additional hyperparameters associated with the approximation of action-value distributions: (1) the number $N$ of samples $\tau$; (2) a linear decay parameter $\beta$ used to trade off the losses $\mathcal{L}\_{\text{first}}$ and $\mathcal{L}\_{\text{second}}$. - **Q3**. What kind of kernel is used in the Gaussian distribution of Eq. 10? - **A3**. 
We do **not** use a Gaussian kernel in Eq. (10). See the explanation of Eq. (10) in A2 of our global response for more details. - **Q4**. Is there an upper bound on what amount of corruption the method can handle? - **A4**. Yes. Please refer to A4 in our global response. - **Q5**. While the authors consider individual corruptions, data in the real world can have simultaneous corruptions. - **A5**. We specifically design TRACER to handle the challenging simultaneous corruptions. Results in Table 1 of the main text show that TRACER outperforms several SOTAs in **all tasks under simultaneous corruptions**, achieving an average gain of ${\bf +21.1\\%}$. Note that "mixed corruptions" in the main text refers to simultaneous corruptions (see Line 233 in the main text). - **Q6**. The paper introduces $q^{\pi}$ in Eq 18. Is this the same as the action-value distribution? - **A6**. No. The notation $q^\pi$ in Eq. (18) of the Appendix denotes the value distribution, consistent with $p(\cdot|s)$ used in Line 170 of the main text, which is an expectation of the action-value distribution $\mathbb{E}_{a\sim \pi, r\sim \rho} [p\_\theta(\cdot|s,a,r)]$. - **Q7**. RIQL already proves the robustness of the IQL policy against data corruption. What is the objective of Theorem A.3? - **A7**. Please refer to A4 in our global response. - **Q8**. Providing a table summarizing the notation in the appendix would enhance readability. - **A8**. Thanks for the insightful comment. We will include this table in our revision. - **Q9**. How relevant is the reward corruption, given that offline RL is already robust to noise in the reward function? - **A9**. While offline RL is robust to small-scale random reward corruptions [1], it tends to struggle with large-scale reward corruptions (see Page 21 in RIQL). [1] Survival instinct in offline reinforcement learning. - **Q10**. Do the authors assume that complete traces of the system are available? - **A10**. No. 
We employ the commonly used offline RL setting [1], where the offline dataset consists of shuffled tuples and agents only use individual tuples rather than complete traces. [1] Off-policy deep reinforcement learning without exploration. - **Q11**. In situations of dynamic corruption these traces might be different than the system. For example, $s_t$ in the corrupted dataset may not be observed in the real system. How are such cases handled? - **A11**. Thanks for the insightful comment. In a scenario where an element (e.g., state) may be corrupted, TRACER captures uncertainty by using (1) other elements and (2) correlations between all elements and action values (see Lines 53-60 of the main text). - **Q12**. What amount of clean data needs to be present in the dataset for this method to work? - **A12**. Please refer to A4 in our global response. Results in Table G4 show that while TRACER is robust to simultaneous corruptions, its performance depends on the extent of corrupted data it encounters. - **Q13**. What is the reason for not using projected gradient descent for reward corruptions? - **A13**. Following RIQL, the objective of adversarial reward corruption is $\hat{r} = \min_{\hat{r} \in \mathbb{B}(r, \epsilon)} \hat{r} + \gamma \mathbb{E}[Q(s', a')]$. Here $\mathbb{B}(r, \epsilon)=\\{\hat{r}\mid |\hat{r} - r| \leq (1+\epsilon)\cdot r_{\max} \\}$ regularizes the maximum distance for rewards. Thus, we can directly compute $\hat{r} =-\epsilon\times r$ without using projected gradient descent. - **Q14**. Could the author provide some understanding of why the method outperforms RIQL under certain settings and why it does not in some cases? - **A14**. Please refer to A3 in our global response. - **Q15**. Limitations. - **A15**. We look forward to developing corruption-robust offline RL with large language models, introducing the prior knowledge of clean data against large-scale or near-total corrupted data. 
Moreover, we plan to add potential negative societal impacts in our revision. --- Rebuttal Comment 1.1: Title: Table of Notations used in Appendix for Q8. Comment: For Q8, we provide the detailed table summarizing the notations used in our Appendix. See Table G8 as follows. Table G8. Notations in our Appendix. | Notations used in our Appendix | Descriptions | | ------------------------------ | ------------------------------------------------------------ | | $\mathbb{\zeta}$ | Cumulative corruption level. | | $\mathbb{\zeta}_i$ | Metric quantifying the corruption level of rewards and next states (transition dynamics). | | $\mathbb{\zeta}_i^{'}$ | Metric quantifying the corruption level of states and actions. | | $\pi_{b}(\cdot\|s)$ | The behavior policy that is used to collect clean data. | | $\pi_{\mathcal{B}}(\cdot\|s)$ | The behavior policy that is used to collect corrupted data. | | $\pi_{E}(\cdot\|s)$ | The policy that we want to learn under clean data. | | $\tilde{\pi}_{E}(\cdot\|s)$ | The policy that we are learning under corrupted data. | | $\pi_{\text{IQL}}(\cdot\|s)$ | The learned policy using IQL's weighted imitation learning under clean data. | | $\tilde{\pi}_{\text{IQL}}(\cdot\|s)$ | The learned policy using IQL's weighted imitation learning under corrupted data. | | $d^{\pi}(s,a)$ | The probability density function associated with policy $\pi$ at state $s$ and action $a$. | | $W_1(p,q)$ | The Wasserstein-1 distance that measures the difference between distributions $p$ and $q$. | | $q^{\pi}(\cdot \| s)$ | The value distribution of the policy $\pi$. | | $Z^{\pi}(s)$ | The random variable of the value distribution $q^{\pi}(\cdot\|s)$. | | $\epsilon_1$ | The KL divergence between policies $\pi_{E}$ and $\pi_{\text{IQL}}$, representing standard imitation error under clean data. | | $\epsilon_2$ | The KL divergence between policies $\tilde{\pi}\_{E}$ and $\tilde{\pi}\_{\text{IQL}}$, representing the standard imitation error under corrupted data. 
| | $\hat{\varsigma}_n$ | The midpoint of the action-value distribution $p_{\theta}$. | --- Rebuttal 2: Title: Response to Additional Comments Comment: Thank you for your valuable comments. We respond to each of your comments as follows. **Q16**. (1) How does $\tilde{\pi}_{\text{IQL}}$ help to prove the upper bound for TRACER? (2) How do you equate $Z^\pi(s)$ with $D^\pi (s,a)$ in steps 24, 25, and 26 in Theorem A.3? (3) What is $D^\pi (s,a)$, as $D$ was previously defined as $D(s, a, r)$? - **A16**. - For (1), as TRACER directly applies the weighted imitation learning technique from IQL to learn the policy, we can use $\tilde{\pi}_{\text{IQL}}$ as the policy learned by TRACER under data corruptions, akin to RIQL. In Theorem A.3 of the Appendix, the major difference between TRACER and IQL/RIQL is that TRACER uses the action-value and value distributions rather than the action-value and value functions in IQL/RIQL. Therefore, we further prove an upper bound on the difference in value distributions of TRACER to show its robustness. - For (2), we apologize for the missing reference in steps 24, 25, and 26 of Theorem A.3 (see Lemma 6.1 of [1]). We follow [1] to provide the detailed derivation below. - For any policies $\tilde{\pi}$ and $\pi$, we have $$\begin{align} Z^{\tilde{\pi}}(s) &= \sum_{t=0}^\infty \gamma^t \mathbb{E}_{(S_t,A_t)\sim P,\tilde{\pi}} \left[R(S_t, A_t) \mid S_0 = s\right]\\ &= \sum_{t=0}^\infty \gamma^t \mathbb{E}_{(S_t,A_t)\sim P,\tilde{\pi}} \left[R(S_t, A_t)+Z^{\pi}(S_t)-Z^{\pi}(S_t) \mid S_0 = s\right]\\ &= \sum_{t=0}^\infty \gamma^t \mathbb{E}_{(S_t,A_t,S_{t+1}) \sim P,\tilde{\pi}} \left[R(S_t, A_t)+\gamma Z^{\pi}(S_{t+1})-Z^{\pi}(S_t) \mid S_0 = s\right] + Z^{\pi}(s)\\ &= Z^{\pi}(s)+\sum_{t=0}^\infty \gamma^t \mathbb{E}_{(S_t,A_t)\sim P,\tilde{\pi}} \left[D^{\pi}(S_t,A_t,R_t)-Z^{\pi}(S_t) \mid S_0 = s\right]\\ &= Z^{\pi}(s)+ \frac{1}{1-\gamma} \mathbb{E}_{(s,a)\sim d^{\tilde{\pi}},\tilde{\pi}} \left[D^{\pi}(s,a,r)-Z^{\pi}(s)\right].
\end{align}$$ Thus, we can derive step 25 from step 24. - For (3), we apologize for any confusion caused by the notations. We will replace the notation $D^\pi (s,a)$ with $D^\pi (s,a,r)$ in Theorem A.3 of our revision. [1] Approximately optimal approximate reinforcement learning. **Q17**. (1) What is the value of $c$ used in experiments? (2) What is the effect on performance of varying $c$? - **A17**. - For (1): - For each experiment with a random seed under individual corruptions in Tables 2 and 3 of our main text, we randomly select $c\% = 30\%$ of transitions from the offline dataset. Within these selected transitions, we replace one element per transition with a corrupted element. - In Table 1 with simultaneous corruptions, we also apply $c\% = 30\%$ but extend the corruption process across all four elements of the offline dataset. Specifically, we randomly select $30\%$ of transitions and corrupt one element in each selected transition. Then, we repeat this step four times until all elements are corrupted. Therefore, approximately $76.0\%$ of the data in the offline dataset is corrupted, calculated as $1 - (1 - c)^4$. In Table G4 of the attached PDF of our global response, we evaluate TRACER using different values of $c\%$, including $10\%, 20\%, 30\%, 40\%,$ and $50\%$. These rates correspond to approximately $34.4\%, 59.0\%, 76.0\%, 87.0\%$, and $93.8\%$ of the data being corrupted. - For (2): - Results in Table G4 show that while TRACER is robust to simultaneous corruptions and significantly outperforms RIQL, its performance depends on the extent of corrupted data it encounters, degrading as corruption increases. We hope our responses adequately address your concerns. If you have further concerns, please let us know and we will continue actively responding to your comments, enhancing our submission. We would deeply appreciate it if you could raise your score based on these revisions.
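The cumulative corruption rate $1 - (1 - c)^4$ quoted in the response above can be checked with a short script; this is an editorial sketch, not the authors' code, and the function name is our own:

```python
# Sketch: verify the cumulative corruption rate 1 - (1 - c)^4 from the
# rebuttal, where c is the per-element corruption rate and 4 is the number
# of transition elements (state, action, reward, next state) corrupted in turn.

def cumulative_corruption_rate(c: float, n_elements: int = 4) -> float:
    """Fraction of transitions with at least one corrupted element."""
    return 1.0 - (1.0 - c) ** n_elements

for c in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"c = {c:.0%} -> {cumulative_corruption_rate(c):.1%} of transitions corrupted")
```

For $c = 0.3$ this reproduces the $\approx 76.0\%$ figure stated in the rebuttal.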
--- Rebuttal Comment 2.1: Title: Response Comment: Thank you for addressing my concerns. I have raised my score. I would urge the authors to include the notation description, additional results and clear explanation of Theorem A3 in the revised version. --- Reply to Comment 2.1.1: Title: Thanks for your kind support and for helping us improve the paper! Comment: Dear Reviewer QZTE, Thanks for your kind support and for helping us improve the paper! We are committed to including all these additional results, notation descriptions, and detailed explanations and will incorporate your suggestions in the final version, if accepted. Thank you again for your valuable comments and guidance. Best, Authors --- Rebuttal 3: Title: Thanks again for your continued support! Comment: Dear Reviewer QZTE, Thanks again for your continued support and the further improvements you have suggested for our manuscript. We sincerely appreciate your insightful feedback, which has significantly helped us in refining the explanations and enhancing the theoretical derivations. We remain committed to incorporating all your valuable suggestions into the final version, if accepted. We are deeply grateful for your thoughtful comments and for the confidence you have shown in our work by raising your score. Best, Authors
Summary: This paper seeks to conduct reinforcement learning from corrupted offline data. More specifically, they propose the TRACER algorithm, which uses Bayesian inference to calculate the uncertainty in estimating the action-value function. The authors conduct experiments with diverse corruptions on CARLA and Mujoco environments. Strengths: - This paper considers an important problem. - Their proposed algorithm is interesting and performs well. Weaknesses: - The contribution compared to other baselines (RIQL) is not really clear. In the introduction the authors claim that their method can handle simultaneous corruptions and RIQL cannot. However, no experiments are done in this setting. - The experimental settings are fairly limited: only three Mujoco environments and one CARLA experiment are considered. - Different perturbation levels are not shown. - The authors do not discuss the computational cost of their method. Technical Quality: 3 Clarity: 3 Questions for Authors: - Can you clarify the contribution of TRACER compared to RIQL? It seems like it is an orthogonal approach, but the two algorithms are not really compared in detail. - Can you show the hyperparameter tuning results? Is TRACER stable under different hyperparameter settings? How does it compare to RIQL in this regard? - How does the computational cost of TRACER compare to the other baselines? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do discuss their method's limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful and constructive comments and suggestions. We respond to each comment as follows and sincerely hope that our responses can properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to the comments and enhancing our submission. - **Q1**. No experiments for simultaneous corruptions. - **A1**. We apologize for the confusion caused by unclear descriptions in our main text. Specifically, the experiments presented in Table 1 on Page 7 use simultaneous corruptions. The term "**mixed corruptions**" in the caption of Table 1 refers to simultaneous corruptions (see the explanation on Line 233 of Section 4 in the main text). In our revision, we will replace "mixed corruptions" with "simultaneous corruptions" to avoid any ambiguity. - **Q2**. The contribution compared to other baselines (RIQL) is not really clear. - **A2**. Thanks for the kind and insightful comment. - **Relation between TRACER and RIQL**. Our approach TRACER applies the weighted imitation learning from RIQL for policy improvement (see Eq. (6) on Page 4 of the main text). This allows us to build and expand upon RIQL. - **Contributions of TRACER**. We present TRACER's major contributions compared to corruption-robust offline RL methods [1,2] (especially RIQL): 1. To the best of our knowledge, TRACER introduces Bayesian inference into corruption-robust offline RL for **the first time**. Thus, it can capture uncertainty caused by diverse corrupted data to **simultaneously** handle the corruptions, unlike other corruption-robust offline RL methods that primarily focus on individual corruptions. 2. TRACER can **distinguish corrupted data from clean data** using an entropy-based uncertainty measure. Thus, TRACER can regulate the loss associated with corrupted data to reduce its influence.
In contrast, existing corruption-robust offline RL methods lack the capability to identify which data is corrupt. 3. Based on Tables 1, 2, and 3 of the main text, TRACER significantly outperforms existing corruption-robust offline RL methods across a range of **both individual and simultaneous** data corruptions. [1] Corruption-robust offline reinforcement learning with general function approximation. NeurIPS 2023. [2] Towards robust offline reinforcement learning under diverse data corruption. ICLR 2023. - **Q3**. The experimental settings are fairly limited. - **A3**. Thanks for your insightful comment. - Our experiments using Mujoco and CARLA datasets are consistent with standard practices in corruption-robust offline RL. Existing methods [1,2] often select these datasets to assess their effectiveness. - Based on the corruption-robust offline RL methods, we further conduct experiments on two AntMaze datasets and two additional Mujoco datasets, presenting results under random simultaneous corruptions in Table G5 of the PDF of our global response. - **Settings**. Each result represents the mean and standard error over four random seeds and 100 episodes in clean environments. For each experiment, the methods train agents using batch sizes of 64 for 3000 epochs. Building upon RIQL, we apply the experiment settings as follows. 1. For the two Mujoco datasets, we use a corruption rate of $c=0.3$ and scale of $\epsilon=1.0$. Note that simultaneous corruptions with $c=0.3$ implies that approximately $76.0\\%$ of the data is corrupted. 2. For the two AntMaze datasets, we use the corruption rate of 0.2, corruption scales for observation (0.3), action (1.0), reward (30.0), and dynamics (0.3). - **Results**. The results in Table G5 show that TRACER significantly **outperforms** other methods in **all these tasks** with the aforementioned AntMaze and Mujoco datasets. [1] Corruption-robust offline reinforcement learning with general function approximation. NeurIPS 2023. 
[2] Towards robust offline reinforcement learning under diverse data corruption. ICLR 2023. - **Q4**. Different perturbation/corruption levels are not shown. - **A4**. Thanks for the kind and insightful comment. - **Setting**. Building upon RIQL, we extend our experiments to include Mujoco datasets with different corruption levels, using different corruption rates and scales. We report the average scores and standard errors over four random seeds in Figure G2 of the PDF of our global response, using batch sizes of 256. - **Results**. Figure G2 in the PDF shows that TRACER significantly outperforms other algorithms in **all tasks** under **random simultaneous corruptions**, achieving an average score improvement of ${\bf +33.6\\%}$. - **Q5**. Computational cost of TRACER. - **A5**. Please refer to A1 in our global response. - **Q6**. Can you show the hyperparameter tuning results? - **A6**. Thank you for the meaningful comment. - **Setting**. We conduct hyperparameter tuning experiments for both TRACER and RIQL, varying values of $\kappa$ in the Huber loss and $\alpha$ for action-value functions. We report results in Tables G6 and G7 of the PDF of our global response. Note that except the first column in Table G6, we use batch sizes of 64 and a learning rate of 0.0003 over four random seeds on Hopper task under adversarial simultaneous corruptions. - **Results**. Tables G6 and G7 reveal that TRACER is more stable than RIQL and consistently outperforms RIQL, achieving an average performance gain of ${\bf +43.3\\%}$ under adversarial simultaneous corruptions. Moreover, we find that both TRACER and RIQL reach their respective highest performance at $\kappa=0.1$ and $\alpha=0.25$, and TRACER still achieves a substantial performance gain of ${\bf +24.7 \\%}$ compared to RIQL. --- Rebuttal Comment 1.1: Title: Thanks for your kind support and for helping us improve the paper! Comment: Dear Reviewer V2Jx, Thanks for your kind support and for helping us improve the paper! 
We are committed to including all these additional results and detailed explanations and will incorporate your suggestions in the final version, if accepted. Thank you again for your valuable comments and guidance. Best, Authors
Rebuttal 1: Rebuttal: # Global Response We would like to thank reviewers for their insightful comments. We respond to the collective feedback below and hope that our responses can adequately address these general concerns. If so, we would deeply appreciate it if reviewers could raise the score. If not, please let us know your further concerns, and we will continue to refine our submission in response to the comments. - **Q1** for **#R V2Jx and wmJN**. Computational cost comparison. - **A1**. We provided the average training duration of our approach TRACER for Halfcheetah, Walker2d, and Hopper in Section B.5 of our Appendix. - To compare the computational cost, we report the average epoch time on Hopper in Table G1 of the PDF, where results of baselines (including DT [3]) are from [1]. The formula for computational cost is: $$\frac{\text{avg epoch time of RIQL in [1]}}{\text{avg epoch time we run RIQL}} \times \text{avg epoch time we run TRACER}.$$ Note that TRACER requires a long epoch time for two main reasons: 1. Unlike RIQL and IQL, which learn one-dimensional action-value functions, TRACER generates multiple samples for the estimation of action-value distributions. Following [2], we generate 32 samples of action values for each state-action pair. 2. TRACER uses states, actions, and rewards as observations to update models, estimating the posterior of action-value distributions. In future work, we plan to improve TRACER’s computational efficiency by optimizing the code to estimate the posterior in parallel using various observations. [1] Towards robust offline reinforcement learning under diverse data corruption. [2] Implicit quantile networks for distributional reinforcement learning. [3] Decision transformer: Reinforcement learning via sequence modeling. - **Q2** for **#R QZTE and wmJN**. Explanation for Eqs. (10) and (11). - **A2**. We apologize for any confusion caused by unclear descriptions for Eqs. (10) and (11). - The goal of **Eq.
(10)** is to estimate $\pi_{\mathcal{B}}(A|S)$, $\rho_{\mathcal{B}}(R|S,A)$, and $p_{\mathcal{B}}(S)$ using $p_{\varphi_a}(A|D_\theta,S,R,S')$, $p_{\varphi_r}(R|D_\theta,S,A)$, and $p_{\varphi_s}(S|D_\theta,A,R)$, respectively. We model all these distributions as Gaussian distributions, and use the mean $\mu_{\varphi}$ and standard deviation $\Sigma_{\varphi}$ to represent the corresponding $p_{\varphi}$. **For implementation**, we employ MLPs to output each $(\mu_{\varphi},\Sigma_{\varphi})$ using the corresponding conditions of $p_{\varphi}$. Then, based on the KL divergence between two Gaussian distributions, we can derive Eq. (10). - The goal of **Eq. (11)** is to maximize the likelihoods of $D_\theta$ given samples $\hat{s} \sim p_{\varphi_s}$, $\hat{a} \sim p_{\varphi_a}$, or $\hat{r} \sim p_{\varphi_r}$. Thus, with $(s,a,r)\sim \mathcal{B}$, we propose minimizing the distance between $D_\theta (\hat{s},a,r)$ and $D(s,a,r)$, $D_\theta (s,\hat{a},r)$ and $D(s,a,r)$, and $D_\theta (s,a,\hat{r})$ and $D(s,a,r)$, where $\hat{s} \sim p_{\varphi_s}$, $\hat{a} \sim p_{\varphi_a}$, and $\hat{r} \sim p_{\varphi_r}$, thus deriving Eq. (11). - **Q3** for **#R QZTE and wmJN**. Results comparison between TRACER and RIQL. - **A3**. Thanks for the insightful comment. - **Simultaneous Corruptions**. Our method TRACER is specifically designed to address simultaneous corruptions for robustness. Results in Table 1 of the main text show that TRACER outperforms RIQL across **all tasks** under simultaneous corruptions, achieving an average gain of ${\bf +21.1\%}$. This is because TRACER captures uncertainty from offline data under simultaneous corruptions. Thus, it can use uncertainty to distinguish corrupted data from clean data and then reduce the influence of corrupted data. - **Individual Corruptions**. In Tables 2 and 3 of the main text, we adhered to commonly used settings for individual corruptions in corruption-robust offline RL.
We directly followed hyperparameters from RIQL (i.e., $\kappa$ for the Huber loss, the ensemble number $K$, and $\alpha$ in action-value functions). Results show that TRACER outperforms RIQL in **18 out of 24** settings, demonstrating its robustness even when aligned with RIQL’s hyperparameters. Further, we explore hyperparameter tuning, specifically of $\kappa$, on Hopper to improve TRACER's performance. This results in TRACER outperforming RIQL in **7 out of 8** settings on Hopper, up from 5 settings (see Tables G2 and G3 in the PDF). The further improvement highlights TRACER’s potential to achieve greater performance gains. - Based on Table G2, we find that TRACER requires a low $\kappa$ in the Huber loss, which applies an L1 penalty to large errors. Thus, TRACER can linearly penalize corrupted data and reduce its influence on the overall model fit. - **Q4** for **#R QZTE**. (1) RIQL already proves the robustness of IQL. What is the objective of Theorem A.3? (2) Is there an upper bound on what amount of corruption the method will be able to handle? - **A4**. 1. Theorem A.3 in our Appendix builds upon RIQL and extends it to TRACER. Specifically, RIQL first proves an upper bound on **the distance in value functions** that IQL can learn under clean and corrupted data. Then, we provide Theorem A.3 to prove an upper bound on **the difference in value distributions** that TRACER can learn under clean and corrupted data. This theorem not only supports the robustness claims of TRACER but also provides a guarantee of how TRACER's performance degrades as data corruption increases. 2. Yes. Theorem A.3 shows that the higher the scale of corrupted data, the greater the difference in action-value distributions and the lower TRACER's performance. We also evaluate TRACER across various corruption levels. Table G4 in the PDF shows that while TRACER is robust to simultaneous corruptions, its scores depend on the extent of corrupted data it encounters.
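The behavior of the Huber loss with a small threshold $\kappa$, discussed in the responses above, can be illustrated with a minimal sketch (editorial illustration, not the authors' implementation): for residuals larger than $\kappa$ the penalty grows only linearly, which is why a low $\kappa$ limits the influence of corrupted data.

```python
# Sketch of the standard Huber loss with threshold kappa: quadratic for
# small residuals, linear (L1-like) for residuals larger than kappa.

def huber_loss(residual: float, kappa: float) -> float:
    a = abs(residual)
    if a <= kappa:
        return 0.5 * a ** 2          # quadratic region
    return kappa * (a - 0.5 * kappa)  # linear region for large errors

# With kappa = 0.1, a large residual of 10 is penalized almost linearly
# (0.1 * (10 - 0.05) = 0.995) instead of quadratically (0.5 * 10^2 = 50).
print(huber_loss(10.0, kappa=0.1))
print(huber_loss(0.05, kappa=0.1))  # quadratic region: 0.5 * 0.05^2 = 0.00125
```

With a small $\kappa$, the loss on a grossly corrupted sample is orders of magnitude smaller than under squared error, so such samples dominate the model fit far less.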
Pdf: /pdf/98a74eb45aa73aa7ed6c9982c12b52e67be93a28.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning from Highly Sparse Spatio-temporal Data
Accept (poster)
Summary: This paper proposes to address the challenge of learning from incomplete spatio-temporal data, which is prevalent in various real-world applications. Accordingly, this paper proposes a method named OPCR to handle data sparsity more effectively. Specifically, the method first directly utilizes spatial and temporal relationships to propagate limited observations to the global context through a sparse attention mechanism. After the initial propagation, confidence-based refinement is applied to refine the imputations, correlating the missing data with valid data to enhance the quality of the imputation. Experiments show that OPCR outperforms several baselines in various downstream tasks involving highly sparse spatio-temporal data. Strengths: S1. Tackling spatio-temporal data with high sparsity is an important topic. It is critical to design effective methods for such cases, which have many downstream applications. S2. The paper offers a theoretical analysis of the advantages of the proposed method against existing models. The design of the proposed method is based on the derived theory. S3. Experiments show that the proposed model achieves the best performance among several baselines in different settings and various spatio-temporal tasks. Weaknesses: W1. The presentation needs to be significantly improved. Many parts of the contents are vague and difficult to follow. - The introduction section presents most of the contents that should be separately discussed in the related work section, while the related work section is missing. - The notations for the theory provided in Section 3 do not offer detailed explanations. I cannot find the definitions for many of them. For example, what is “poly()” in Definition 3.1? What is \mathcal{T}? What are Z_m, Z_o? - In Section 4.1.1, the GNN update equation uses S_l, but then h^{s}_{v,t} is used to denote the equation. Are they the same thing? In Section 4.1.2, what is \hat{X}, which does not appear anywhere previously?
I don’t really follow the paper, as so many of the notations are not properly defined and explained. W2. It is not clear to me why the proposed belongs to “one-step propagation”. Stacking GNN layers implicitly means several iterations of propagation, which is how section 4.1.1 presents. Besides, the temporal self-attention has nothing to do with “propagation”. W3. It is not clear to me how sparse spatial attention works. The GNN equation and Figure 1 indicates that the original graph structure is utilized. However, in equation 4, it is mentioned only “available nodes” are selected to perform self-attention. In this case, only the representations of nodes that have observed data are updated? How exactly this module functions is not clear. W4. Figure 1 is not informative at all. I cannot really understand which parts of the nodes and timestamps are utilized for sparse attention. Again, due to bad notations and presentation, I don’t really follow the paper. W5. Some of the related studies are highly related, and thus should be included. For example, [1, 2]. [1] Handling Missing Data with Graph Representation Learning. NeurIPS 2020 [2] Networked Time Series Imputation via Position-aware Graph Enhanced Variational Autoencoders. KDD 2023 Technical Quality: 3 Clarity: 1 Questions for Authors: Please give more explanations on my comments in W1-W4. Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: The authors did not discuss the limitations or negative impact of their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful reviews and insightful suggestions. We greatly appreciate your feedback. Please see the responses below to your comments (see the global response to your questions about more baselines and confusing notations). > **[Weakness 1.1]** The introduction section presents most of the contents that should be separately discussed in the related work section, while the related work section is missing. Thanks for your suggestions. In the revised version, we have reorganized the paper to make it clearer. Specifically, we have streamlined the introduction to focus on the background and significance of spatio-temporal data imputation, briefly analyze the limitations of existing methods, and then give an overview of our work. In addition, we have discussed related work in a separate section, including graph imputation, time-series imputation, and spatio-temporal data imputation, as well as comparing our method with them to highlight our advancements and contributions. > **[Weakness 1.3.2]** In Section 4.1.2, what is \hat{X}, which does not appear anywhere previously? I don’t really follow the paper, as so many of the notations are not properly defined and explained. We appreciate the constructive feedback. For the sparse temporal attention module in Sec. 4.1.2, we consider timestamp encoding and positional encoding to introduce sequence information. Specifically, the input $\bar X$ can be formulated as: $$ \bar{X} = X + PE(X) + MLP(U), $$ where $U$ is the available real-world time information, such as the hour of the day, and $PE$ is the vanilla positional encoding, defined as follows. $$ PE_{(pos, 2i)} = \sin\left(pos/10000^{2i/d_{model}}\right), $$ $$ PE_{(pos, 2i+1)} = \cos\left(pos/10000^{2i/d_{model}}\right). $$ > **[Weakness 1.3.1]** In Section 4.1.1, the GNN update equation uses S_l, but then h^{s}_{v,t} is used to denote the equation. Are they the same thing? > **[Weakness 2]** It is not clear to me why the proposed belongs to “one-step propagation”.
Stacking GNN layers implicitly means several iterations of propagation, which is how section 4.1.1 presents. Besides, the temporal self-attention has nothing to do with “propagation”. > **[Weakness 3]** It is not clear to me how sparse spatial attention works. The GNN equation and Figure 1 indicates that the original graph structure is utilized. However, in equation 4, it is mentioned only “available nodes” are selected to perform self-attention. In this case, only the representations of nodes that have observed data are updated? How exactly this module functions is not clear. Thanks for your comments. The sparse spatial attention module may not have been expressed clearly enough. Let us first clarify this module and then answer your questions one by one. - **Using a multi-layer GNN:** In Lines 150 to 155, an $L_s$-layer GCN is utilized to learn static node spatial features from the topology structure, and the learned static spatial features are the output $S_{L_s}$. Formally, this part does not belong to the sparse spatial attention, which is dedicated to capturing dynamic spatial information. To avoid confusion, we will add a new section before Sec. 4.1.1, titled Learning of Static Spatial and Temporal Features, and move this part and our response to your [Weakness 1.3.2] to this section. - **The core of the sparse spatial attention module:** The sparse spatial attention module captures the comprehensive correlations of nodes based on the static node spatial features learned by the $L_s$-layer GCN, i.e., the input node embedding $S$ of this module is $S_{L_s}$. Then, $S$, $S$, and $X_t$ serve as query, key, and value, respectively, to learn the spatial-dependency-based representation ${h}^{s}_{v,t}$ of each ST point $(v,t)$ that lacks features. - **The target of the attention mechanism:** We need to obtain the representation of each missing point $(v, t)$, while each observed point keeps its original feature unchanged. Then, according to Eq.
5, the attention mechanism acts between the missing points and the observed points, and finally, the weighted observed point representations are used to update the missing point representation. Based on this, we address each of your questions as follows. ***Q1: Are $S_l$ and $h^s_{v,t}$ the same thing?*** $S_l$ and $h^s_{v,t}$ are not equivalent. $S_l$ denotes the static node spatial features learned by the $L_s$-layer GCN. ${h}^{s}_{v,t}$ denotes the dynamic representation learned by the spatial attention module for the ST point $(v,t)$. ***Q2: Why does the proposed method belong to “one-step propagation”?*** In dynamic spatio-temporal data learning, we focus on the propagation at each time step, i.e., the steps of dynamic propagation. The node spatial features learned by the multi-layer GCN are static and shared across all time steps, which are not counted in the dynamic propagation. The dynamic process is reflected in the two proposed attention modules; each unobserved node receives information from all observations once, which is achieved by one matrix multiplication, so we refer to this strategy as “one-step propagation”. ***Q3: Only the representations of nodes that have observed data are updated?*** Only the representations of the missing data are updated, by aggregating the representations of all observed data based on the attention weights between the missing data and the observed data. As for the observed data, their representations keep the original features and are not updated. > **[Weakness 4]** Figure 1 is not informative at all. I cannot really understand which parts of the nodes and timestamps are utilized for sparse attention. Again, due to bad notations and presentation, I don’t really follow the paper. Thanks for your valuable feedback. We have updated Figure 1 to include additional annotations, as shown in Fig. 1 in the rebuttal-submitted PDF. Specifically, for the sparse spatial attention module, we first learn the sparse attention matrix from structural embeddings $S$ using all observed nodes.
At each time step $t$, we propagate observed nodes' information to missing data with attention-based weights. --- Rebuttal Comment 1.1: Comment: Thanks for your response. After reading it, I would like to keep my score. --- Reply to Comment 1.1.1: Title: Welcome for more discussions Comment: Thanks for your valuable time in reviewing and your constructive comments, according to which we have tried our best to answer the questions and carefully revise the paper. We humbly hope our response has addressed your concerns. Considering your current rating, if you believe that our responses have satisfactorily addressed your concerns, we kindly request that you consider revising your final rating of our manuscript. If you have any additional concerns or comments that we may have missed in our responses, we would be most grateful for any further feedback from you to help us further enhance our work.
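The sparse spatial attention and one-step propagation described in the responses above (missing points attend only to observed points; observed points keep their features; the update is a single matrix product) can be sketched as follows. This is a minimal editorial illustration, not the authors' code; all array names, shapes, and the scaled-dot-product form are our assumptions:

```python
import numpy as np

# Static node embeddings S (from the GCN) give attention scores; values
# are the observed node features x_t at time step t.
rng = np.random.default_rng(0)
N, d = 5, 4                      # nodes, embedding dimension
S = rng.normal(size=(N, d))      # static node embeddings
x_t = rng.normal(size=(N, 1))    # node features at time step t
observed = np.array([True, False, True, True, False])

# Queries are missing nodes, keys are observed nodes.
scores = S[~observed] @ S[observed].T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)   # softmax over observed nodes

# One-step propagation: each missing node aggregates all observations once,
# via one matrix multiplication; observed nodes are left unchanged.
x_imputed = x_t.copy()
x_imputed[~observed] = weights @ x_t[observed]
```

The single `weights @ x_t[observed]` product fills in every missing node in parallel, matching the "each unobserved node receives information from all observations once" description.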
Summary: The paper addresses the issue of incomplete spatio-temporal data. It theoretically analyzes how existing iterative message-passing methods are susceptible to the impacts of data sparsity and graph sparsity. It proposes the One-step Propagation and Confidence-based Refinement (OPCR). In OPCR, the Sparse Spatial Attention module captures spatial information, while the Sparse Temporal Attention module focuses on dynamic temporal information. And, the Confidence Module in Confidence-based Iterative Refinement further refines these two types of information through spatio-temporal dependencies. Strengths: 1. The paper is well-organized, identifying limitations with existing methods through theoretical analysis and thereby motivating the proposal of an improved method for modeling spatio-temporal information. 2. The theoretical analysis of existing iterative message-passing methods is interesting and may inspire future research in the sparse data learning. 3. A good one-step propagation strategy that can avoid error accumulation and provide a more efficient learning process. Weaknesses: 1. The paper uses PAC-learnability to analyze generalization risk. Have the author(s) considered using other mathematical tools, such as Rademacher complexity? 2. Could the author(s) further discuss the potential strength of the proposed method in practical scenarios? 3. There are some typo and grammatical errors. For instance, "seperate" on Line 13, "problem" on Line 29. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Have the author(s) considered using other mathematical tools to analyze generalization risk, such as Rademacher complexity? 2. Could the author(s) further discuss the potential strength of the proposed method in practical scenarios? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your positive comments and constructive feedback. Please see the below responses to your comments. > **[Weakness 1 & Question 1]** The paper uses PAC-learnability to analyze generalization risk. Have the author(s) considered using other mathematical tools, such as Rademacher complexity? > Thanks for this valuable feedback. Other tools, such as Rademacher complexity, are usually closely related to the model structure, and are therefore model-specific mathematical tools. However, our work aims to investigate the impact of missing data on the learning ability of (general) models and to seek breakthroughs in theoretically inspired methods based on these insights, which can be facilitated by the PAC-Learnability theory rather than model-specific theory. The motivation for this theoretical analysis is that the influencing factors caused by missing data, as revealed through the general models, are universal and not confined to any specific model. This broad applicability can inspire improvements to various models (rather than specific models). It is undeniable that studying the generalization error (or convergence rate) of specific models is also of significant importance and will be the direction of our future work. > **[Weakness 2 & Question 2]** Could the author(s) further discuss the potential strength of the proposed method in practical scenarios? Thank you for your question. Our proposed method has three potential strengths: - **Lowering the data threshold:** Our method addresses both device failure (point missing) and device unavailability (spatial missing) in spatio-temporal data imputation. Learning from spatial missing data allows for the generalization of local observations to global data, which can reduce research costs in various fields. - **Correlation mining:** Our approach provides inherent spatial and temporal context for message-passing rather than just propagating information along the spatiotemporal structure. 
This allows for easier encapsulation of static spatio-temporal information. - **Parallel recovery:** Our one-step propagation aggregates all observations to the target data, enabling parallel processing of missing data in practical scenarios. This avoids the low computational efficiency of iterative imputation methods, which makes it particularly suitable for large-scale datasets. > **[Weakness 3]** There are some typos and grammatical errors. For instance, "seperate" on Line 13, "problem" on Line 29. > Thanks for your careful review. We have corrected these typos and grammatical errors in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses; they addressed most of my concerns.
Summary: This paper proposes a sparse attention-based one-step imputation and confidence-based refinement approach named One-step Propagation and Confidence-based Refinement (OPCR). The authors evaluate the proposed model across two downstream tasks involving highly sparse spatio-temporal data. The contributions of this paper are as follows: 1. The authors provide a theoretical analysis of a general spatio-temporal iterative imputation model from the perspective of PAC-learnability. 2. Motivated by the theoretical results, this paper introduces a sparse attention-based one-step propagation strategy. This strategy directly propagates information from limited observations to all missing data by leveraging inherent spatial and temporal relationships, resulting in two separate spatial and temporal imputation results. The authors then perform confidence-based spatio-temporal refinement to eliminate the bias introduced by the separate imputations by assigning confidence-based propagation weights to the imputation results. 3. Finally, experiments are conducted comparing several existing imputation methods with the proposed method on real-world datasets, demonstrating the effectiveness of the proposed method. Strengths: 1. The theoretical analysis of the general spatio-temporal iterative imputation model from the PAC-learnability perspective provided in this paper is relatively innovative, explaining the error accumulation caused by multiple iterations. The authors also present a PAC-learnability analysis for sparse attention-based imputation models and provide detailed proofs for both theoretical analyses in the appendix. 2. This paper proposes a novel one-step propagation and confidence-based refinement framework. This framework addresses the problem of error accumulation and can be applied to spatial and temporal missing scenarios. The high-level idea may provide novel insight for further imputation algorithm design. 3. 
This paper explores spatially sparse data in the experiments, which helps to make full use of available data and lowers the barriers for implementing spatio-temporal models in real scenarios. Weaknesses: 1. In Section 4.1.2, the construction details of the input to the temporal sparse attention module are not described. Additionally, there are some typos in this paper. For example, the symbol \(H^0\) at line 196 is incorrectly written, the symbols $q^t_t$ and $k^t_k$ in Equation 7 are irregular and inconsistent with the corresponding symbols in Equation 6, and some symbols in the two formulas under line 195 lack descriptions. 2. Some diffusion model-based methods, such as CSDI [1] and PriSTI [2], can propagate information from observed data to missing data directly without multiple iterations. It would be better to clarify the advantages of the proposed model compared to these methods. References: [1] Tashiro Y, Song J, Song Y, et al. Csdi: Conditional score-based diffusion models for probabilistic time series imputation[J]. Advances in Neural Information Processing Systems, 2021, 34: 24804-24816. [2] Liu M, Huang H, Feng H, et al. Pristi: A conditional diffusion framework for spatiotemporal imputation[C]//2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023: 1927-1939. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Figures 4 and 5, why is the MAE result of the SAITS ∞ in the PV-US and CER-E datasets? 2. What's the advantage of the proposed model compared with diffusion model-based methods? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your positive comments and constructive feedback. Please see our responses below. > **[Weakness 1.1]** In Section 4.1.2, the construction details of the input to the temporal sparse attention module are not described. Thanks for your question. For the temporal sparse attention module, we consider timestamp encoding and positional encoding to introduce sequence information. Specifically, for any node $v \in \mathcal V$, the input $\bar X_v$ can be formulated as: $$ \bar{X_v} = X_v + PE(X_v) + MLP(U), $$ where $X_v$ is the associated time-series of node $v$; $U$ is the available real-world time information, such as the hour of the day; $PE$ is the vanilla positional encoding as follows. $$ PE_{(pos, 2i)} = \sin(pos/10000^{2i/d_{model}}), $$ $$ PE_{(pos, 2i+1)} = \cos(pos/10000^{2i/d_{model}}). $$ > **[Weakness 1.2]** Additionally, there are some typos in this paper. For example, the symbol $H^0$ at line 196 is incorrectly written, the symbols $q^t_t$ and $k^t_k$ in Equation 7 are irregular and inconsistent with the corresponding symbols in Equation 6, and some symbols in the two formulas under line 195 lack descriptions. Thanks for your careful review. We have corrected these typos and thoroughly checked the symbols in the revised version. Specifically, in Section 4.2, to avoid confusion, we replace "$H$" with "$O$" to denote the learned representations in the second stage. Then, the layer-wise update can be formulated as $$ O_{v,t}^{l+1} = \text{MLP} \left( O_{v,t}^{l} || \sum_{(v',t')\in {N_{v,t}}} \beta_{v't'} \cdot O_{v',t'}^l \right), $$ where $O^0 = h_{v,t}^s + {h}_{v,t}^t$. For the sparse attention-based confidence, we have revised inconsistent symbols and rewritten Eq. 7 as follows. 
$$ \beta_{vt} = \frac{\sum_{k\in \tilde{V}}\exp (\langle q_v^s, k_k^s\rangle)}{\sum_{k\in V}\exp (\langle q_v^s, k_k^s\rangle)} + \frac{\sum_{k\in \tilde{T_v}}\exp (\langle q_{v,t}^t, k_{v,k}^t\rangle)}{\sum_{k\in T}\exp (\langle q_{v,t}^t, k_{v,k}^t\rangle)} $$ > **[Weakness 2]** Some diffusion model-based methods, such as CSDI [1] and PriSTI [2], can propagate information from observed data to missing data directly without multiple iterations. It would be better to clarify the advantages of the proposed model compared to these methods. > **[Question 2]** What's the advantage of the proposed model compared with diffusion model-based methods? > Thanks for raising this important question. Similar to other temporal methods, CSDI treats spatio-temporal data as multivariate time-series and ignores the spatial structure. PriSTI introduces diffusion models to spatio-temporal imputation. It uses two separate attention modules to incrementally aggregate temporal and spatial dependencies. However, this design decouples the spatio-temporal context and thus ignores the intrinsic interactions between the spatial and temporal dimensions. In addition, PriSTI applies linear interpolation to the time series of each node to initially construct coarse conditional information. However, with spatially missing data, this strategy cannot provide effective information. Notably, we have conducted comparative experiments with CSDI [1] and PriSTI [2]; please refer to the global rebuttal. > **[Question 1]** In Figures 4 and 5, why is the MAE result of the SAITS ∞ in the PV-US and CER-E datasets? Thanks for your question. Temporal imputation methods, such as SAITS and BRITS, treat spatio-temporal series as multivariate time series. These models struggle to effectively learn from large-scale spatio-temporal data, which amounts to high-dimensional time series. Under the early stopping settings, both SAITS and BRITS often terminate the training process early, resulting in extremely high MAE values. 
To compare the other methods more clearly, we truncated the results of BRITS and SAITS in Fig. 2 and Fig. 3. We have replaced "∞" with the accurate MAE results in the revised version. Please refer to Fig. 2 and Fig. 3 in the rebuttal-submitted PDF. **References** [1] Tashiro Y, Song J, Song Y, et al. CSDI: Conditional score-based diffusion models for probabilistic time series imputation[J]. Advances in Neural Information Processing Systems, 2021, 34: 24804-24816. [2] Liu M, Huang H, Feng H, et al. PriSTI: A conditional diffusion framework for spatiotemporal imputation[C]//2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023: 1927-1939.
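For readers who want to sanity-check the input construction described in this rebuttal, here is a minimal NumPy sketch of the vanilla sinusoidal positional encoding and of $\bar X_v = X_v + PE(X_v) + MLP(U)$. This is an editorial illustration, not the authors' code: the zero `mlp_of_U` array is only a placeholder standing in for the MLP over real-world time information $U$.

```python
import numpy as np

def sinusoidal_pe(seq_len, d_model):
    """Vanilla sinusoidal positional encoding, matching the PE formulas above."""
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]       # (1, d_model / 2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # PE_(pos, 2i)
    pe[:, 1::2] = np.cos(angles)               # PE_(pos, 2i+1)
    return pe

# Input construction for one node v: X_bar = X_v + PE(X_v) + MLP(U).
seq_len, d_model = 24, 16
X_v = np.random.randn(seq_len, d_model)        # time series of node v (random stand-in)
mlp_of_U = np.zeros((seq_len, d_model))        # placeholder for MLP over time info U
X_bar = X_v + sinusoidal_pe(seq_len, d_model) + mlp_of_U
```

At position 0 the even dimensions are $\sin(0)=0$ and the odd dimensions are $\cos(0)=1$, which is a quick way to check the implementation.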
Summary: This paper leverages Probably Approximately Correct (PAC) theory to study the message-passing mechanism of spatial-temporal imputation. Inspired by the results of PAC, this paper introduces a One-step Propagation and Confidence-based Refinement (OPCR) model for spatial-temporal imputation. OPCR comprises a spatial and a temporal sparse attention module, and a confidence-based refinement module. Experimental results on several benchmark datasets show that OPCR can outperform several baselines. Strengths: 1. Using Probably Approximately Correct (PAC) theory to analyze spatial-temporal imputation is an interesting direction. 2. Spatial imputation is an interesting sub-topic of spatial-temporal imputation. 3. Experimental results on several benchmark datasets show that the proposed OPCR can outperform the baselines. Weaknesses: 1. In general, the proposed method is a little bit incremental. A simpler version of sparse spatial-temporal attention has been proposed by SPIN. 2. The presentation needs further improvements, especially the theoretical part. Many details and claims need clarification; please see the questions. 3. The comparison baselines GRIN (2021) and SPIN (2022) are a little bit outdated. More recent baselines should be considered, such as [1,2,3]. [1] PriSTI: A Conditional Diffusion Framework for Spatiotemporal Imputation, ICDE'2023 [2] Networked Time Series Imputation via Position-aware Graph Enhanced Variational Autoencoders, KDD'2023 [3] Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation, ICML'2023 Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What does "poly" mean in line 98? 2. Line 101-102 & 104-105, what does $\phi$ mean? 3. Line 106-107, why should the model need to have the ability to recover all ST points? 4. In Assumption 3.2, what do $B_d$ and $B_x$ mean? 
Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful reviews and insightful comments. We greatly appreciate your feedback. Please see our responses below (see the global response for your questions about additional baselines and confusing notations). > **[Weakness 1]** In general, the proposed method is a little bit incremental. A simpler version of sparse spatial-temporal attention has been proposed by SPIN. Thanks for raising this fundamental issue. We would like to clarify the differences from SPIN and emphasize the contributions of the proposed method. **Differences.** The proposed method differs from SPIN in the following aspects: - SPIN propagates information from neighboring spatial-temporal points to the target ST point, while our sparse attention mechanism makes full use of global observations (i.e., a fully connected graph) to recover every missing data point, which may capture more information. - SPIN propagates in an iterative manner, while we directly propagate observations to missing data without any intermediary, which avoids the information loss and high computational cost caused by iterative propagation. - SPIN applies the sparse attention mechanism only in shallow layers. In deep layers, it assumes that most missing data has been filled and then employs dense attention mechanisms, which may lead to error accumulation, especially in large-scale datasets with high sparsity. Instead, our method only uses observations to recover all missing data in the first stage, ensuring consistent and accurate handling of sparse data. **Contributions.** Our contributions may not have been clearly conveyed, so we restate and emphasize them here. Specifically, our contributions lie in the theoretical analysis of imputation methods, as well as in the theory-inspired methodological breakthrough. 
- **Our first contribution is in theoretically analyzing which factors impact the performance of the imputation task.** Most existing imputation methods usually rely on iterative message-passing in the temporal and spatial dimensions. However, their feasibility rests on engineering practice, and there is a lack of theoretical analysis. To this end, this paper aims to bridge the gap between theoretical analysis and empirical practice, and provides a theoretical analysis of general spatio-temporal iterative imputation models from the PAC-learnability perspective. The theoretical results reveal the impact of data sparsity and structural sparsity, as well as of the number of iterations, on model performance. Notably, the PAC-learnability-based analysis is not coupled to a specific model, and such general insights can inspire improvements to various models. - **Our second contribution is the imputation method OPCR that we have developed**, which has been shown to outperform the baselines on point-missing and spatial-missing tasks (Sec. 5.3 & 5.4). This is achieved by designing one-step imputation to avoid iterative error accumulation, and by modeling spatio-temporal dependencies to refine the imputation results with confidence. Although the proposed method seems simple (i.e., reducing the number of layers and considering propagation over the fully connected graph), it is carefully designed based on comprehensive consideration of the theoretical results, spatio-temporal dependency modeling, and computational complexity. We believe that these contributions, namely the theoretical analysis of existing work and the theory-inspired algorithmic advances, are significant. > **[Weakness 3]** The comparison baselines GRIN (2021) and SPIN (2022) are a little bit outdated. More recent baselines should be considered, such as [1,2,3]. Thanks for the valuable suggestions. 
Per your suggestions, we have added more baselines for comparison, including the tabular imputation method GRAPE [4], the time-series imputation method CSDI [3], the spatio-temporal imputation methods PoGeVon [2], and PriSTI [1]. Please see the global rebuttal. **References** [1] PriSTI: A Conditional Diffusion Framework for Spatiotemporal Imputation, ICDE'2023. [2] Networked Time Series Imputation via Position-aware Graph Enhanced Variational Autoencoders, KDD'2023. [3] Csdi: Conditional score-based diffusion models for probabilistic time series imputation. NeurIPS'2021. [4] Handling Missing Data with Graph Representation Learning. NeurIPS 2020. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. My major concerns about (1) comparison with SPIN and (2) comparison with recent baselines are mostly addressed in the rebuttal. I'll update my scores later during the discussion period.
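As an editorial illustration of the one-step propagation idea discussed in this rebuttal (every missing point attends directly to all observations, with the softmax restricted to the observed set, so no imputed value is re-used as evidence), here is a hedged NumPy sketch. The feature matrix, masking scheme, and variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def one_step_impute(feats, values, observed):
    """Recover every missing entry in a single step: similarity scores are
    computed against ALL entries, then the softmax is restricted to the
    observed set, so imputed values never feed back as evidence."""
    scores = feats @ feats.T / np.sqrt(feats.shape[1])      # (n, n) similarities
    scores = np.where(observed[None, :], scores, -np.inf)   # mask unobserved keys
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)                 # softmax over observations only
    out = attn @ np.where(observed, values, 0.0)            # weighted sum of observations
    return np.where(observed, values, out)                  # keep observations unchanged

rng = np.random.default_rng(0)
n, d = 6, 4
feats = rng.standard_normal((n, d))     # node/time-point representations (assumed)
values = rng.standard_normal(n)         # signal; entries with observed=False are unknown
observed = np.array([True, True, False, True, False, False])
imputed = one_step_impute(feats, values, observed)
```

Because the attention is a single matrix product over the fully connected observation set, all missing entries are recovered in parallel, in contrast to iterative imputation.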
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for taking the time to review our work. We appreciate that you find the problem is interesting and important (**Reviewer #Ctvj, #aMi7**), theory-inspired method is proposed (**Reviewer #g4JR, #aMi7**), the method is novel and efficient (**Reviewer #fc8s, #g4JR**), theoretical analysis is innovative (**Reviewer #Ctvj, #fc8s, #g4JR, #aMi7**), the paper is well organized (**Reviewer #g4JR**), with outstanding experimental results (**Reviewer #Ctvj, #fc8s, #aMi7**). Due to the limited space, the global response will answer the common questions, and the individual response will answer special questions. **Q1: Concerns about more baselines.** > **Reviewer #Ctvj** > [Weakness 3] The comparison baselines GRIN (2021) and SPIN (2022) are a little bit outdated. More recent baselines should be considered, such as [1,2,3]. > **Reviewer #fc8s** > [Weakness 2] Some diffusion model-based methods, such as CSDI [4] and PriSTI [1] > **Reviewer #aMi7** > [Weakness 5] Some of the related studies are highly related, and thus should be included. For example, [2, 5]. Following your suggestions, we have added more baselines for comparison, including the tabular imputation method GRAPE [4], the time-series imputation method CSDI [3], the spatio-temporal imputation methods PoGeVon [2], and PriSTI [1]. We present the imputation performances (in terms of MAE) of all methods in the METR-LA dataset with a missing ratio of 95% in the table below. It can be seen that the proposed OPCR is still significantly competitive on the two imputation tasks. **Table 1. 
Imputation performance (in terms of MAE) of more baselines on METR-LA dataset.** | Types | Methods | Spatial Missing | Point Missing | | --- | --- | --- | --- | | Tabular Data Imputation | GRAPE [4] | 6.78 | 6.73 | | Time-series Imputation | CSDI [3] | 4.30 | 3.86 | | Spatio-temporal Data Imputation | PoGeVon [2] | 10.14 | 9.47 | | Spatio-temporal Data Imputation | PriSTI [1] | 5.01 | 3.90 | | Spatio-temporal Data Imputation | **OPCR** | **4.22** | **3.15** | **Q2: Concerns about confused notations.** > **Reviewer #Ctvj** > [Weakness 2] The presentation needs further improvements, especially the theoretical part. Many details and claims need clarification, please see questions. > *[Question 1] What does "poly" mean in line 98?* > *[Question 2] Line 101-102 & 104-105, what does 𝜙 mean?* > *[Question 3] Line 106-107, why should the model need to have the ability to recover all ST points?* > *[Question 4] In Assumption 3.2, what do 𝐵𝑑 and 𝐵𝑥 mean?* > **Reviewer #aMi7** > [Weakness 1.2] The notations for the theory provided in Section 3 do not offer detailed explanations. We apologize for the confusing claims and notations. The detailed explanations of these claims and notations are as follows: - **"poly":** The term “poly” in Definition 3.1 represents a polynomial function. - **“$\phi$”:** The “$\phi$” denotes the activation function, which is assumed to be $C_{\phi}$-lipschitz continuous and bounded by $[0,1]$. - **"Recover all ST points":** We apologize for the writing typos. The model needs to have the ability to recover all missing data, which is the goal of imputation tasks. We have clarified this point in the revised version. - **"$B_d$, $B_x$":** For matrix $X$, we use $\lVert X\rVert_2$ to denote its spectral norm. For vector $x$, we use $\lVert x \rVert_2$ to denote its Euclidean norm. For any ST point $(v, t)$ in spatio-temporal dataset, we assume its collected feature vector $x_{v,t}$ satisfies $\lVert {x}_{v,t} \rVert_2 \leq B_x$. 
For the weight matrix $W_d$ in the decoder (lines 101-102), we assume $\lVert W_d \rVert_2 \leq B_d$. - **"$\mathcal{T}$":** Given a spatio-temporal series, we denote the set of nodes by $\mathcal{V}$ and the set of time steps by $\mathcal{T}$. - **"$\mathcal{Z}_m$", "$\mathcal{Z}_o$":** We denote the set of all ST points as $\mathcal{Z}$. $\mathcal{Z}_o$ represents the set of all observed ST points and $\mathcal{Z}_m=\mathcal{Z} \backslash \mathcal{Z}_o$ represents the set of all missing ST points. Thanks for your careful review. We have addressed each of your questions and made corresponding modifications to the paper to improve clarity. [1] PriSTI: A Conditional Diffusion Framework for Spatiotemporal Imputation, ICDE'2023. [2] Networked Time Series Imputation via Position-aware Graph Enhanced Variational Autoencoders, KDD'2023. [3] CSDI: Conditional score-based diffusion models for probabilistic time series imputation. NeurIPS'2021. [4] Handling Missing Data with Graph Representation Learning. NeurIPS'2020. Pdf: /pdf/3f90494fb1a825f6bdb95b8c23ade744255ee375.pdf
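The two norm bounds $B_d$ and $B_x$ clarified above (spectral norm of the decoder weights, Euclidean norm of a feature vector) can be checked numerically. A small NumPy illustration follows; the matrix and vector here are arbitrary examples, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W_d = rng.standard_normal((8, 8))   # a decoder weight matrix (arbitrary example)
x_vt = rng.standard_normal(8)       # feature vector of one ST point (arbitrary example)

# Spectral norm ||W_d||_2 = largest singular value; B_d bounds this quantity.
B_d = np.linalg.norm(W_d, ord=2)
# Euclidean norm ||x_{v,t}||_2; B_x bounds this quantity.
B_x = np.linalg.norm(x_vt)
```

For matrices, `np.linalg.norm(..., ord=2)` returns the largest singular value, which is exactly the spectral norm used in the assumption.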
NeurIPS_2024_submissions_huggingface
2024
A Bounding Box is Worth One Token: Interleaving Layout and Text in a Large Language Model for Document Understanding
Reject
Summary: This paper introduces the LayTextLLM method for document understanding, which encodes text positional information in the embedding space of an LLM and trains for effective understanding of document data as interleaved OCR-detected text and bounding box information. The results show improved performance compared to prior works on the KIE tasks, as well as on VQA in many cases. Strengths: The treatment of layout information as a modality interleaved with text is logical, and the use of a projection into the LLM’s embedding space to represent bounding boxes is clever and appears to be novel. The tasks approached are important and overall the proposed method does appear to improve document understanding (though this will be more convincing if the caveats listed below are addressed). I also appreciate the focus on open-source models and data for the method and its evaluation, making the results reproducible. Weaknesses: There are some issues regarding the comparisons to existing models, making it unclear how much of the observed improvement is really due to the novel method proposed. LayTextLLM is implemented with Llama-2-7b, but it seems that many models compared against (e.g. the strong-performing LayoutLLM) may use other LLM backbones, making it unclear whether the superior performance of LayTextLLM in many settings is due to the proposed novel method or the LLM backbone. The results will be more convincing with a comparison of different methods with the same LLM backbone (or at least an analysis of the number of parameters in each model). It is not clear what OCR engine is used, raising the concern that different OCR engines could explain some of the gaps in performance between models being compared. There are also issues with how the training is presented that make it difficult to interpret results. Some places (L131, L179, etc.) mention pre-training and SSFT, implying that pre-training means the LNTP training task. 
However, Sec 4.1 mentions “pre-training” and “SFT”, implying that pre-training refers to SSFT+LNTP and that it is followed by SFT (Supervised Fine Tuning) for particular tasks (VQA and KIE). The results also mention zero-shot and supervised results (e.g. L297), but it is unclear from the text and results tables which results are obtained zero-shot or from SFT, making it hard to understand if the comparisons are fair. The statements about large improvements over SOTA MLMMs (L13-14, L83-84) seem slightly misleading since LayTextLLM uses OCR detections and thus is more comparable to other OCR-based methods. LNTP (Sec. 3.2.1) is presented as a novelty but seems to just be the regular language modeling objective. If I understand correctly, this could be toned down to simply say that the added SLP and P-LoRA parameters are updated with a language modeling loss. Technical Quality: 2 Clarity: 3 Questions for Authors: I don’t fully follow the claim of L54-55 about autoregressive models vs. DocLLM-style models. Why would autoregressive modeling a priori be expected to outperform spatial cross attention for document understanding? While the justification of SSFT (Sec 3.2.2) makes sense, it seems that the issue stems from the use of positional encodings which encode left-to-right order of tokens. Have you considered using positional encodings that directly encode (x, y)-positions of text to avoid this artifact or to give the model an inductive bias towards the layout’s 2D positioning? Why was only Llama-2-7b used? Would the proposed method work for other LLM backbones? What is the motivation for using P-LoRA (Sec. 3.1.2)? Is it applied to every layer? L195 states that LNTP loss is calculated for text tokens only. Does this mean that bounding box tokens are still used as inputs but just not as targets? Why is this done, and is it tested? There are a number of minor grammatical errors throughout the paper that need revision, including missing articles (e.g. 
L163 “to (the) visual modality”, among others) and some awkward wording (e.g. L142 “specific”, L160 “communicated”, L211 “cons and pros” => “advantages and disadvantages”, L262 “it’s” => “it is”, L332 “(and we provide further) discussion”, among others). The acronym SFT used throughout should be defined somewhere. Tables 1-3: The term “polys” and the exact meaning of the asterisk * are unclear. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Limitations are clearly discussed in Section 5 (which should have the title “Limitations” in plural). Additionally, does the limitation of lacking visual cues apply to text formatting such as bolding or italics? This would connect well to the examples in Figure 6 where bold text is prominent. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback; we appreciate the recognition of our paper’s contributions and novelty. We are grateful for the opportunity to address the concerns raised. **W1-Model backbone:** We implemented LayTextLLM using Llama2-7b, consistent with previous OCR-based methods like DocLLM, which also use Llama2-7b. We also replicated the results of the coor-as-tokens scheme using Llama2-7b for consistency. Note that the LayoutLLM model utilizes Llama2-7b and Vicuna 1.5 7B, the latter of which is fine-tuned from Llama2-7b. Therefore, for the majority of our comparisons, the models are based on the same or similar LLM backbones, allowing for a fair comparison between approaches. Other MLLMs use backbones like Qwen-VL, Internlm, and Vicuna, which are models with at least 7 billion parameters, excluding the visual encoder. Thus, we can say the comparison is fair, at least in terms of model parameters. We will explicitly mention this in the updated version. **W2-OCR engine**: We use word-level OCR from the respective datasets to ensure a fair comparison, except for the ChartQA dataset, where no OCR is provided (and we have mentioned this in lines 277-278). We will explicitly mention this in the updated version. **W3-Confusion of terminology**: We apologize for the confusion. Here are the explanations: - Pretrain and SFT Clarification: In Section 4.1, the terms "pretrain" and "SFT" refer to LNTP and SSFT. We will revise this section to avoid confusion. - Zero-shot and Supervised Results: The term "zero-shot" refers to a model trained using SSFT only with Document Dense Description (DDD) and Layout-aware SFT data, as used in LayoutLLM. "Supervised" indicates that the model is trained using SSFT with DDD, Layout-aware SFT data, and the training sets of downstream datasets such as DocVQA and FUNSD. This terminology aligns with LayoutLLM, and we will clarify this in the updated version. 
- Asterisk Notation: An asterisk (*) is used to indicate whether the corresponding training set of a downstream dataset is included in the training of a specific model. This notation facilitates a fair interpretation of experimental results for the reader. **W4-Statement about improvement**: We'll tone down the phrasing to accurately reflect this comparison and highlight our improvements in relation to OCR-based methods. **W5-LNTP**: We acknowledge that LNTP resembles the regular language modeling objective. We'll tone down the presentation to clarify that the added SLP and P-LoRA parameters are updated using the standard language modeling loss. **Q1-claim of autoregressive vs. docllm**: We had a brief discussion in lines 297-299. Here we elaborate in detail, and will add this to the updated version: - Disentangled Attention: DocLLM uses disentangled attention to process spatial and textual modalities separately (using different QK weights) before integrating them. This handles spatial information from document layouts independently, unlike traditional autoregressive models that process inputs sequentially (with the same set of weights). In contrast, LayTextLLM interleaves bounding box tokens with text, unifying both modalities in a single sequence through an autoregressive approach. - Block Infilling Objective: Unlike standard autoregressive models that predict the next token based only on preceding text, DocLLM uses block infilling to predict missing text blocks based on both preceding and succeeding context. This deviates from leveraging the inherent autoregressive nature of traditional LLMs, which rely solely on preceding tokens. - Impact on Performance: As demonstrated in Table 2, when compared using the same training dataset, LayTextLLM significantly outperforms DocLLM. **Q2-encode x and y**: We considered using positional encodings that directly encode (x, y) positions to address the artifact issue. 
However, to fully leverage the LLM parameters and maintain simplicity, we avoided encoding (x, y) positions, as it could complicate the model. Instead, we focused on balancing LLM reuse with necessary adjustments, which led us to propose SSFT. **Q3-model backbone and generalization of the method**: We implemented LayTextLLM using Llama2-7b as our LLM backbone, in line with prior OCR-based methods like DocLLM and LayoutLLM. Our method is model-agnostic. In our in-house KIE test, we evaluated the performance of the Baichuan2 7B and Qwen2 7B models. The results showed that incorporating the SLP layer improved performance for both models compared to not using it. | Model | w/o SLP | With SLP | |--------------|---------|----------| | Baichuan2 7B | 0.7464 | **0.7738** | | Qwen2 7B | 0.754 | **0.7858** | **Q4-using PLORA**: The motivation for using P-LoRA is the concern that there would otherwise be too few learnable parameters. P-LoRA is applied in each layer, but the main contribution still comes from SLP. **Q5-LNTP loss**: Yes, the bounding box tokens are used as inputs but not targets. Our objective is to understand the bounding boxes, not to generate them. Therefore, it is unnecessary to compute a loss for the bounding box tokens. Also, we tested on an in-house KIE dataset and found that including bounding boxes as targets (using strings like "[1,20,10,30]") during LNTP degrades downstream performance. **Q6-typo**: We will fix those typos in the updated version. **Q7-polys and asterisk**: The term 'polys' will be replaced with 'coordinates.' An asterisk (*) indicates if the training set of a downstream dataset is included in the training of a specific model. This notation ensures a fair interpretation of the experimental results for the reader. **Limitations**: Yes, bold and italic text should be included as visual cues; we will note this in the updated version. 
We would appreciate it if you could raise your rating if all your concerns have been addressed, and we look forward to your response. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed and careful response. I believe this addresses my main concerns and I have updated my rating accordingly. I encourage the authors to incorporate all of these clarifications into the final version, and particularly the points regarding fair comparisons between methods (LLM base models, OCR engines, ...). --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you very much for your positive feedback and for updating your rating. We greatly appreciate your thoughtful and constructive comments. We are committed to incorporating all of the clarifications you suggested, particularly regarding the fair comparisons between methods, such as LLM base models and OCR engines, in the final version of our paper.
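To make the "one bounding box is worth one token" idea discussed in this thread concrete, here is a hedged NumPy sketch (an editorial reading, not the authors' code): `W_slp` plays the role of the SLP projecting a normalized box into the LLM embedding space, and the embedding table and token ids are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 32

# "SLP": a single linear projection mapping one normalized box
# (x1, y1, x2, y2) into the embedding space -> exactly one token slot.
W_slp = rng.standard_normal((4, d_model)) * 0.02
b_slp = np.zeros(d_model)

def slp(box):
    return box @ W_slp + b_slp                 # (d_model,)

embed_table = rng.standard_normal((1000, d_model)) * 0.02  # stand-in for the LLM embedding table

box = np.array([0.1, 0.2, 0.4, 0.3])           # normalized coordinates of one OCR span
text_ids = [17, 42, 7]                         # hypothetical token ids of its text

# Interleave: one bbox token, then the text tokens it localizes.
seq = np.vstack([slp(box)[None, :], embed_table[text_ids]])
# seq has 1 + len(text_ids) rows: one row per box, not one per coordinate digit.
```

The key design point is that the box occupies a single embedding row, so layout information rides along in the same autoregressive sequence without inflating its length.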
Summary: This work presents an innovative method for integrating layout information into LLMs to enhance document understanding tasks. Instead of treating bounding box coordinates as input text tokens, the bounding box information is embedded into a single token and interleaved with text tokens. This approach addresses the challenge of long sequences while leveraging the autoregressive nature of LLMs. Experimental results demonstrate the effectiveness of the proposed method, achieving state-of-the-art performance and resulting in shorter input sequence lengths. Strengths: 1. Interleaving layout information and text is novel. 2. The proposed Shuffled-OCR Supervised Fine-tuning is interesting and may benefit other OCR-based approaches. 3. The approach achieves state-of-the-art performance on most text-rich VQA and KIE tasks, validating the effectiveness of interleaving layout and text and significantly reducing input length. 4. The paper is well-written, providing sufficient experimental details, ablations and discussions to comprehend each component of the model. Weaknesses: 1. In the layout-aware pretraining task, it is unclear whether it is beneficial to predict both the bounding box and the text, rather than just the text. 2. LayTextLLM achieves satisfactory performance in various tasks, but it would be better to incorporate the visual modality for more application scenarios. Technical Quality: 3 Clarity: 3 Questions for Authors: Is the input length shorter than the original input when not using a bounding box, only in the Llama tokenizer, or is this also observed in tokenizers of other LLMs? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Please refer to weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and appreciate the recognition of our paper’s contributions, writing and novelty. We are grateful for the opportunity to address the concerns raised. **W1-Compute loss of bounding box**: - First of all, our objective is to understand the bounding boxes, not to generate them. Therefore, it is unnecessary to compute a loss for the bounding box tokens. Also, as shown in Figure 4, the prediction of t2 is made using the hidden state of b1, which means the supervised signal is backpropagated to the SLP. - We tested including bounding boxes as targets (using strings like "[1,20,10,30]") during LNTP on an in-house KIE dataset and found that performance dropped. - However, when we tested including bounding boxes as targets during SSFT, we found that the precision increased while the recall decreased, resulting in an almost unchanged micro F-score. Therefore, we can conclude that including bounding boxes as targets is beneficial only when added in the downstream tasks instead of during pretraining, and only when the application is sensitive to precision. **W2-Including visual modality**: We acknowledge that incorporating visual information can enhance performance, as discussed in the Limitations section. Exploring this further is a direction for our future research. **Q1-length reduction**: The length reduction is universal and can be generalized to other LLMs. We have conducted additional tests on sequence lengths using the Baichuan2 tokenizer on an in-house KIE dataset, confirming the token reduction is universal and agnostic to LLMs used when compared to coor-as-tokens. When compared to DocLLM, we can ensure that LayTextLLM maintains an equal or shorter sequence length, regardless of the tokenizer used. 
| Baichuan2 tokenizer | LayTextLLM | DocLLM | Coor-as-tokens |
|---------------------|------------|--------|----------------|
| Length              | 313.27     | 313.27 | 1242.63        |

--- Rebuttal Comment 1.1: Comment: I believe my concerns here are reasonably satisfied. I am impressed by the further discussion on including the bounding box as a prediction target and the length reduction advantages brought by LayTextLLM, which I think is quite a nice addition to the paper. Consequently, I keep the positive score and support the acceptance. I encourage the authors to incorporate these clarifications into the final version. --- Rebuttal 2: Title: Thanks Comment: Thank you very much for your valuable feedback and for keeping the positive rating. We are committed to incorporating all of the clarifications you suggested, particularly regarding the discussion of bounding box prediction, in the final version of our paper.
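As an aside on the length comparison above, the magnitude of the coor-as-tokens overhead can be sketched numerically. The regex below is a rough stand-in for Llama-2's digit-level tokenization of coordinate strings, not the real tokenizer, and the box values are illustrative; the interleaved scheme spends exactly one embedding slot per box:

```python
import re

def approx_coord_tokens(s: str) -> int:
    """Rough stand-in for Llama-2's tokenization of a coordinate string:
    each digit, bracket, and comma becomes its own token."""
    return len(re.findall(r"[0-9\[\],]", s))

boxes = [[70, 73, 90, 77], [12, 5, 48, 11], [3, 40, 99, 46]]

# coor-as-tokens: every box is serialized into the text stream
coor_as_tokens = sum(approx_coord_tokens(str(b).replace(" ", "")) for b in boxes)

# LayTextLLM-style interleaving: one embedding slot per box
interleaved = len(boxes)

print(coor_as_tokens, interleaved)  # 37 tokens vs. 3 slots for three boxes
```

Under this approximation a single box like "[70,73,90,77]" costs 13 tokens as a string but one slot as an embedding, which is where the roughly 4x sequence-length gap in the table comes from.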
Summary: The paper introduces a novel approach, named LayTextLLM, for document understanding tasks, which efficiently integrates spatial layouts and textual data within an LLM. It employs a Spatial Layout Projector and introduces two innovative training tasks: Layout-aware Next Token Prediction and Shuffled-OCR Supervised Fine-tuning. Extensive experiments demonstrate significant improvements over previous state-of-the-art models in KIE and VQA. This paper demonstrates the importance of layout information in document understanding tasks. Strengths: 1. The paper introduces a novel approach by integrating SLP and P-LoRA to effectively encode and process layout information. This method significantly improves the interaction between spatial layouts and textual data within the LLM, providing a new direction for future research. 2. The paper proposes the LNTP task and SSFT task to enable the LLM to leverage layout information, thereby enhancing its document understanding capabilities and improving performance on document-related tasks. Weaknesses: 1. Because it lacks the crucial visual information necessary for document understanding, LayTextLLM relies heavily on OCR-derived text and spatial layouts. Other works, such as LayoutLLM and LayoutLMv3, introduce visual information to enhance document understanding performance. 2. The exploration of the shuffling ratio was conducted only on Key Information Extraction (KIE) tasks. It should also be validated on Visual Question Answering (VQA) datasets to determine if the 20% shuffling ratio is optimal across different types of tasks. 3. The effectiveness of the LNTP and SSFT methods should be substantiated with more ablation studies. It is recommended to fine-tune Llama2-7B directly using the existing data for more comparisons. 4. Although LayTextLLM shows higher performance on DocVQA compared to LayoutLLM, this comparison is not entirely fair as LayoutLLM was evaluated in a zero-shot setting.
Moreover, the zero-shot performance of LayoutLMv3 on DocVQA surpasses that of LayTextLLM. Technical Quality: 3 Clarity: 3 Questions for Authors: Compared with the LayoutLM series, what are the advantages of the bounding-box encoding method proposed in this paper? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The author has already mentioned in the limitation section of the paper that the proposed model has difficulty handling scenarios where inference relies on visual cues. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
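For readers unfamiliar with the shuffling ratio discussed in Weakness 2, here is a minimal sketch of what Shuffled-OCR Supervised Fine-tuning plausibly does at the data level. This is an illustrative reconstruction, not the paper's exact procedure; `ssft_augment`, the sample words, and `shuffle_ratio=0.2` (the 20% setting under discussion) are all our own stand-ins:

```python
import random

def ssft_augment(samples, shuffle_ratio=0.2, seed=0):
    """With probability `shuffle_ratio`, randomize a sample's OCR reading
    order so the model must rely on layout rather than text order."""
    rng = random.Random(seed)
    out = []
    for words in samples:
        words = list(words)
        if rng.random() < shuffle_ratio:
            rng.shuffle(words)
        out.append(words)
    return out

samples = [["Invoice", "No.", "1234", "Total", "$42.00"] for _ in range(1000)]
augmented = ssft_augment(samples, shuffle_ratio=0.2)

# Word content is preserved; only the reading order of ~20% of samples changes.
changed = sum(a != s for a, s in zip(augmented, samples)) / len(samples)
print(round(changed, 2))
```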
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and appreciate the recognition of our paper's novelty and improved performance. We are grateful for the opportunity to address the concerns raised. **W1-Lack visual modality**: We acknowledge that incorporating visual information can enhance performance, as discussed in the Limitations section. Exploring this further is a direction for our future research. **W2-Exploration of shuffle ratio**: We conducted an additional experiment using DocVQA; the results again demonstrated the superiority of LayTextLLM and confirmed that 20% is an appropriate choice. We will add this result in the updated version.

| Ratio | FUNSD (Llama2) | FUNSD (LayTextLLM) | DocVQA (Llama2) | DocVQA (LayTextLLM) |
|-------|:--------------:|:------------------:|:---------------:|:-------------------:|
| 100   | 20.3           | 44.7               | 34.8            | 53.4                |
| 50    | 49.1           | 62.1               | 63.1            | 72.8                |
| 20    | 50.2           | **65.4**           | 64.7            | **73.4**            |
| 0     | **52.3**       | 65.1               | **65.5**        | 73.0                |

**W3-More ablation studies**: We have conducted a new ablation study, which will be added in the next version. The experimental results demonstrate that interleaving bounding boxes and text provides the largest boost, while LNTP+SSFT brings a large improvement on VQA tasks.
| SLP | P-LoRA | LNTP+SSFT | DocVQA | InfoVQA | VisualMRC | Avg (VQA) | FUNSD | CORD | SROIE | POIE | Avg (KIE) |
|:---:|:------:|:---------:|:------:|:-------:|:---------:|:---------:|:-----:|:----:|:-----:|:----:|:---------:|
|     |        |           | 71.5   | 31.9    | 31.1      | 44.8      | 50.5  | 90.2 | 91.6  | 54.1 | 71.6      |
| ✓   |        |           | 74.7   | 35.7    | 32.5      | 47.6      | 55.1  | 94.9 | 94.6  | 68.3 | 78.2      |
| ✓   | ✓      |           | 76.5   | 38.0    | 30.6      | 48.4      | 54.3  | 95.9 | **95.3** | **70.6** | 79.0 |
| ✓   | ✓      | ✓         | **78.8** | **42.7** | **34.4** | **52.0** | **63.0** | **95.9** | 95.2 | 62.1 | **79.1** |

Note that the slight difference in values from the previous version is due to our use of an in-house framework, while the new version is based on the Huggingface Transformers. **W4-fair comparison** - We use an asterisk (*) to indicate whether the training set of a downstream dataset is included in the training of a specific model, which ensures a fair interpretation of the experimental results for readers. While we acknowledge that LayoutLLM performs better in the zero-shot DocVQA scenario, our primary comparison focuses on pure OCR layout + OCR text models, such as DocLLM and ICL-D3IE, as LayoutLLM incorporates the visual modality. - Could you please specify the citation that provides the zero-shot performance of LayoutLMv3? We will include this information in the updated version. **Q1-advantage of projecting bounding box** - Precise Layout Representation: The LayoutLM series uses position embeddings for discrete layout representation, while LayTextLLM maps the four coordinates into a continuous hidden space. We believe this approach offers a more precise and enriched understanding of layout. - Enhanced Contextual Understanding: By interleaving spatial layout with textual content, the model enhances its understanding of context and structural relationships within documents.
This is especially beneficial for layout-dependent documents such as invoices, forms, and multi-column scientific articles, and is particularly advantageous for decoder-only models like LLMs. We kindly request your acknowledgement of our reply and welcome further discussion of your questions and concerns. We would greatly appreciate it if you would consider improving the **rating**. We look forward to your response. --- Rebuttal Comment 1.1: Title: Sincere Invitation to Participate in the Discussion Comment: Dear Reviewer Nepi, We sincerely appreciate the time and effort you've dedicated to reviewing our work. As the discussion period is drawing to a close, we kindly request your acknowledgment of our reply. We value your insights and would be grateful if you could join the discussion to clarify any remaining questions or concerns. Your input is highly valued, and we would greatly appreciate it if you could consider improving the evaluation after reviewing our responses. Thank you very much for your consideration. Sincerely, The Authors --- Reply to Comment 1.1.1: Title: Gentle Follow-Up on Review Response Acknowledgment Comment: Dear Reviewer Nepi, We understand that you have many commitments, and we deeply appreciate the time you've already devoted to reviewing our work. As the discussion phase is coming to an end, we kindly request your acknowledgment of our reply. We would be very grateful if you could acknowledge our response and share any further thoughts or clarifications you might have. Your feedback is incredibly valuable to us, and we sincerely hope that our responses have addressed your concerns. Thank you again for your time and consideration. Sincerely, The Authors
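A minimal sketch of the coordinate projection described in Q1 above, with illustrative names and dimensions (a toy hidden size of 8 and random weights stand in for the real model; this is our reconstruction, not the authors' implementation). Each bounding box becomes exactly one embedding, which is then interleaved with the text-token embeddings:

```python
import random

random.seed(0)
HIDDEN = 8  # illustrative; a real LLM hidden size would be e.g. 4096

# Spatial Layout Projector: one linear map from 4 normalized coordinates
# to the LLM's hidden space.
W = [[random.uniform(-0.1, 0.1) for _ in range(4)] for _ in range(HIDDEN)]

def slp(box):
    """Project a normalized box [x1, y1, x2, y2] to a single embedding."""
    return [sum(w * x for w, x in zip(row, box)) for row in W]

def embed_text(token):
    """Deterministic stand-in for the LLM's word-embedding lookup."""
    rng = random.Random(sum(map(ord, token)))
    return [rng.uniform(-0.1, 0.1) for _ in range(HIDDEN)]

words = [("Total:", [0.70, 0.73, 0.90, 0.77]),
         ("$42.00", [0.70, 0.80, 0.90, 0.84])]

# Interleave: one layout embedding immediately before each text token.
sequence = []
for token, box in words:
    sequence.append(slp(box))
    sequence.append(embed_text(token))

print(len(sequence))  # 4: two layout slots + two text slots
```

This makes the contrast with coor-as-tokens concrete: the layout information for each word costs one slot in the sequence rather than a dozen serialized digit tokens.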
Summary: This paper presents LayTextLLM, a novel approach to document understanding that effectively integrates spatial layout information and text into a large language model. Existing methods that integrate spatial layout with text often produce excessively long text sequences. LayTextLLM addresses these problems by projecting each bounding box into a single embedding and interleaving it with text. The method is evaluated on Key Information Extraction (KIE) and Visual Question Answering (VQA) tasks. Strengths: - Effective sequence reduction: The proposed method reduces the length of text sequences, addressing a common problem in document understanding. - Performance improvement: LayTextLLM demonstrates improvements in KIE and VQA tasks, showing performance gains over all state-of-the-art models. - Evaluation: The paper provides detailed benchmark evaluations on 2 tasks and 7 datasets. Weaknesses: - Incomplete related work: The paper omits several relevant OCR-based models, such as UDOC, LayoutMask, BROS, LAMBERT, DocFormer and LiLT. - Insufficient explanation: The repeated claim that DocLLM cannot fully exploit autoregressive features is not adequately explained. - Limited comparisons: There is no comparison with alternative methods that embed coordinates, such as coor-as-token approaches (LMDX, Shikra, ICL-D3IE). - Marginal token reduction: The reduction in the number of tokens appears to be limited, and the paper does not clarify whether words or lines are encoded, which could have a significant impact on token reduction. Technical Quality: 2 Clarity: 3 Questions for Authors: - Why use an embedding to encode coordinates that are already 4D vectors? What is the gain, considering there is no additional information (e.g., font, style, zone type)? - How can the SLP be trained if there is no loss computed on bounding box tokens? - How do you explain the good performance of LayTextLLM_zero?
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: - Limited comparisons: The paper primarily compares LayTextLLM to DocLLM, which may not provide a comprehensive assessment of its performance. - Impact of token reduction: The reduction in the number of tokens, while beneficial, appears to be limited and may not provide significant practical benefits in all scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We thank the reviewer for the thoughtful feedback and are grateful for the opportunity to address the concerns raised. **W1-Incomplete related work**: Our survey focuses on decoder-only architectures within LLMs to highlight their unique capabilities. We acknowledge the need for more comprehensive coverage and will include citations on encoder-only architectures in the updated version. **W2-Insufficient explanation**: We had a brief discussion in lines 297–299. Here we elaborate on this in detail, and will add it to the updated version: - Disentangled Attention Mechanism: DocLLM introduces a disentangled attention mechanism that processes spatial and textual modalities separately. This mechanism handles spatial information independently before merging it (using different attention QK weights) with textual information, which differs from traditional autoregressive models that process inputs sequentially without such separation (using the same suite of weights). In contrast, LayTextLLM interleaves bounding-box tokens with text, unifying bounding-box and text information in a single sequence and fully fusing the two modalities autoregressively. - Block Infilling Objective: Unlike standard autoregressive models that predict the next token based on the preceding sequence, DocLLM uses a block infilling approach where it predicts missing text blocks based on both preceding and succeeding context. This deviates from leveraging the inherent autoregressive nature of traditional LLMs, which rely solely on preceding tokens. - Impact on Model's Predictive Performance: The experimental findings highlight the superiority of our method, LayTextLLM, as demonstrated in Table 2. When compared using the same training dataset, LayTextLLM significantly outperforms DocLLM. **W3-Limited comparison** - ICL-D3IE: We have included a comparison with ICL-D3IE.
The data in Figure 1 and Table 2 is sourced from the ICL-D3IE paper (Coord-as-tokens-ICL-175B (Davinci-003); Table 2, Davinci-003-175Bcoor). We also replicated ICL-D3IE using Llama2-7B (Figure 1, Coord-as-tokens-7B (Llama2); Table 2, Llama2-7B-chatcoor). A detailed discussion of the comparison with ICL-D3IE can be found in lines 301-307. In the updated version, we will change the term "coor-as-tokens" to "ICL-D3IE." - LMDX: Our comparison primarily focuses on approaches based on open-source LLMs instead of proprietary ones, as noted by Reviewer teB5. This ensures our results are reproducible. Therefore, LMDX was not included. Additionally, LMDX only provides results for CORD, whereas our experiments cover a broader range of text-rich VQA and KIE datasets. However, we will include a comparison with LMDX in the updated version. - Shikra: Shikra is not a document-AI LLM and lacks proper OCR ability, so it is outside the scope of this comparison. As noted by Reviewer teB5, the primary comparison should be with other OCR-based methods, such as DocLLM. **W4-Marginal token reduction** - We utilize word-level OCR from the corresponding datasets to ensure a fair comparison, which will be explicitly mentioned in a later version of the document. - Our claim of significant token reduction is primarily focused on the comparison with the coor-as-tokens scheme, as detailed in lines 74-77. In these instances, the reduction in tokens is substantial rather than marginal and applies to both word-level and line-level OCR. For example, when using the coord-as-tokens scheme with the Llama2 tokenizer, the coordinate string "[70,73,90,77]" occupies 13 tokens, while LayTextLLM represents the same information with just 1 token.

| Baichuan2 tokenizer | LayTextLLM | DocLLM | Coor-as-tokens |
|---------------------|------------|--------|----------------|
| Length              | 313.27     | 313.27 | 1242.63        |

- Furthermore, compared to DocLLM, our approach yields either shorter or equivalent sequence lengths.
We have conducted additional tests on sequence lengths using the Baichuan2 tokenizer on an in-house KIE dataset, confirming that the token reduction is universal and agnostic to the LLM used. **Q1-encoding vector:** Using an embedding to encode coordinates is not about introducing additional information. Instead, it transforms the coordinates into a hidden state that is more understandable for an LLM. This process involves aligning dimensions in a way that is more suitable for the model's architecture. For example, in LLaVA, the 1024-dimensional output from CLIP is mapped to 4096 dimensions to better align the visual and text modalities. **Q2-Train SLP:** There seems to be a misunderstanding regarding the training process. Although the loss is not computed for the bounding box token, the SLP is still trained. For instance, in Figure 4, the prediction of t2 is made using the hidden state of b1, which means the supervised signal is backpropagated to the SLP. Furthermore, our objective is to understand the bounding boxes, not to generate them. Thus, it is unnecessary to compute a loss for the bounding box tokens. **Q3-performance of LayTextLLM_zero:** The strong performance is primarily due to the interleaving of bounding box tokens with text, along with other design elements such as LNTP and SSFT. Additionally, the use of synthetic data from LayoutLLM, including DDD and layout-aware SFT, also contributes to training LayTextLLM_zero. We kindly request your acknowledgement of our reply and welcome further discussion of your questions and concerns. We would greatly appreciate it if you would consider improving the **rating**. We look forward to your response. --- Rebuttal Comment 1.1: Title: Sincere Invitation to Participate in the Discussion Comment: Dear Reviewer uyfs, We sincerely appreciate the time and effort you've dedicated to reviewing our work. As the discussion period is drawing to a close, we kindly request your acknowledgment of our reply.
We value your insights and would be grateful if you could join the discussion to clarify any remaining questions or concerns. Your input is highly valued, and we would greatly appreciate it if you could consider improving the evaluation after reviewing our responses. Thank you very much for your consideration. Sincerely, The Authors --- Rebuttal Comment 1.2: Comment: Thank you for your answers, which have clarified a number of points. I'm going to raise my score. --- Reply to Comment 1.2.1: Title: Thanks Comment: Thank you very much for the positive feedback and for updating your rating. We greatly appreciate your thoughtful and constructive comments. We are committed to incorporating all of the clarifications you suggested in the final version of our paper.
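The backpropagation argument in Q2 of this thread — the SLP is trained even though no loss lands on the bounding-box position — can be checked with a scalar finite-difference sketch. Everything here is a toy stand-in we constructed, mirroring the Figure 4 setup where the layout token b1's hidden state predicts the text token t2:

```python
# Toy stand-in: the layout token's hidden state predicts the NEXT (text)
# token; the loss is computed only on that text position.
w_slp = 0.5      # "SLP" weight projecting the box coordinate
v_head = 2.0     # output head reading the layout token's hidden state
box, target = 0.7, 1.0

def loss(w):
    h_b1 = w * box                    # hidden state of layout token b1
    pred_t2 = v_head * h_b1           # prediction of text token t2 from b1
    return (pred_t2 - target) ** 2    # loss on the text position only

# Finite-difference gradient with respect to the SLP weight.
eps = 1e-6
grad = (loss(w_slp + eps) - loss(w_slp - eps)) / (2 * eps)
print(grad != 0.0)  # True: the text-token loss reaches the SLP weight
```

The gradient is nonzero even though the loss never touches the bounding-box position directly, which is exactly the chain-rule path the rebuttal describes.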
NeurIPS_2024_submissions_huggingface
2024
Monomial Matrix Group Equivariant Neural Functional Networks
Accept (poster)
Summary: The paper explores the important field of learning over weight spaces, where neural networks process other neural networks. Previous research has highlighted the importance of designing equivariant architectures that account for the symmetries of the input neural network, with a primary focus on permutation symmetries. However, previous literature did not account for all symmetries of the input NNs, particularly the weight scaling symmetries of ReLU networks and the weight sign flipping symmetries of sin or Tanh networks. This paper naturally extends previous approaches to account for these activation-based symmetries. This paper first formalizes the group of symmetries that includes both neuron permutations and scaling or sign-flipping transformations using monomial matrices. Next, it proposes a novel architecture for weight space networks that are equivariant to groups of monomial matrices. The new architecture is more efficient in terms of the number of trainable parameters compared to baseline weight space networks (which are only permutation equivariant). Strengths: 1. The paper is mostly well-written and well-structured. 2. The paper addresses the important and timely problem of learning in deep weight spaces, presenting a novel architecture that extends previous permutation equivariant networks to also account for scale/sign-flipping symmetries. 3. The proposed architecture is parameter efficient compared to baseline weight space networks. Weaknesses: My main concern is the limited empirical evaluation and missing natural baselines. Also, the presented empirical results, mostly show marginal to no improvement over the limited baseline methods evaluated. The insufficient empirical study of this paper significantly damages the paper’s contribution. Given the current state of the learning over weight spaces literature, I would expect a more diverse, challenging, and comprehensive empirical evaluation. 1. 
The main text provides very few details on the construction of G-equivariant layers. I suggest the authors provide at least one concrete example of a mapping between subspaces of U, for example, mapping from some bias $b_i$ to a bias $b’_j$. 2. Since the method is built over NFN, it is limited in the sense that each Monomial-NFN can process only a specific input architecture. Building on or extending the work to GNN-based weight space networks [1,2] would allow the processing of diverse input architectures. 3. Missing weight-space baselines, like GNN-based models [1,2], DWSNets [3] (which, while mathematically equiv. to NFNs, obtains better empirical performance, see e.g., [6]) and NFT [4]. 4. Another important missing natural baseline is to use a permutation equivariant baseline like DWS/NFN or the GNN-based models together with scaling/sign-flipping data augmentations as in [5]. 5. Some evaluation and comparison of runtime and memory consumption w.r.t. baselines would be beneficial. 6. Also, adding some ablation regarding design choices would be beneficial. References [1] Graph neural networks for learning equivariant representations of neural networks, ICLR 2024. [2] Graph metanetworks for processing diverse neural architectures, ICLR 2024. [3] Equivariant Architectures for Learning in Deep Weight Spaces, ICML 2023. [4] Neural Functional Transformers, NeurIPS 2023. [5] Improved Generalization of Weight Space Networks via Augmentations, ICML 2024. [6] Neural Processing of Tri-Plane Hybrid Neural Fields, ICLR 2024. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Is it possible to use the proposed method together with normalization layers? 2. How easy would it be to extend the method to other activation functions? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
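The activation-induced symmetry at the heart of the paper can be verified numerically. A toy sketch of our own (illustrative sizes, pure-Python lists): for a ReLU network, scaling a hidden neuron's incoming weights and bias by λ > 0 and its outgoing weights by 1/λ preserves the network function, since ReLU(λz) = λ·ReLU(z):

```python
import random

random.seed(1)
relu = lambda z: max(z, 0.0)

def forward(W1, b1, W2, x):
    """Two-layer ReLU MLP: y = W2 · ReLU(W1 x + b1)."""
    h = [relu(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h))

n_in, n_hid = 3, 4
W1 = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [random.gauss(0, 1) for _ in range(n_hid)]
W2 = [random.gauss(0, 1) for _ in range(n_hid)]
x = [random.gauss(0, 1) for _ in range(n_in)]

# Monomial (scaling) transformation: one positive scale per hidden neuron.
lam = [0.1, 2.0, 5.0, 0.5]
W1s = [[lam[i] * w for w in W1[i]] for i in range(n_hid)]
b1s = [lam[i] * b1[i] for i in range(n_hid)]
W2s = [W2[i] / lam[i] for i in range(n_hid)]

y, ys = forward(W1, b1, W2, x), forward(W1s, b1s, W2s, x)
print(abs(y - ys) < 1e-9)  # True: the network function is unchanged
```

Composing such positive diagonal scalings with hidden-neuron permutations gives exactly the monomial-matrix group the paper builds its equivariant layers around.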
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. **W1: Concrete example.** **Answer:** See an illustrative example in **A4** of the General Response. **W2: Each Monomial-NFN can process a specific input architecture. Building over or extending the work to GNN-based weight space networks will allow [D1,D2] the processing of diverse input architectures.** **Answer:** See **Q1-Q2** in the General Response. In addition, while [D1,D2] can handle diverse input architectures, it is unclear whether non-permutation symmetries, such as scaling and sign-flipping, can be incorporated into these models. In contrast, we take the first step toward incorporating non-permutation symmetries into NFNs. In particular, our proposed model is equivariant to permutations and scaling (for ReLU networks) and sign-flipping (for sin and tanh networks). This leads to a significant reduction in the number of parameters, which is particularly useful for large NNs in modern deep learning, while achieving comparable or better results than those in the literature. **W3: Missing weight-space baselines, like GNN-based models [D1,D2], DWSNets [D3] (which, while mathematically equiv. to NFNs, obtains better empirical performance, see e.g., [D6]) and NFT [D4]** **W4: Another important missing natural baseline is to use a permutation equivariant baseline like DWS/NFN or the GNN-based models together with scaling/sign-flipping data augmentations as in [D5].** **Answer to W3-W4:** Thank you for pointing out these related works. Here we provide the experimental results for GNN [D1] in two scenarios: 1.
Training the model on augmented train data and testing on the augmented test data

*Table 1: Predict CNN generalization on ReLU subset (augmented train data)*

| | Original | 1 | 2 | 3 | 4 |
|-|:-:|:-:|:-:|:-:|:-:|
| GNN [D1] | 0.897 | 0.892 | 0.885 | 0.858 | 0.851 |
| Monomial-NFN (ours) | **0.922** | **0.920** | **0.919** | **0.920** | **0.920** |

*Table 2: Predict CNN generalization on Tanh subset (augmented train data)*

| | Original | Augmented |
|-|:-:|:-:|
| GNN [D1] | 0.893 | 0.902 |
| Monomial-NFN (ours) | **0.939** | **0.943** |

The results for GNN exhibit a similar trend to the other baselines that do not incorporate the scaling symmetry into their architectures. In contrast, our model has stable performance. A notable observation is that the GNN model uses 5.5M parameters (4 times more than our model), occupies 6000MB of memory, and takes 4 hours to train (refer to **Q3** in the General Response). 2. Training the model on original train data and testing on the augmented test data

*Table 3: Predict CNN generalization on ReLU subset (original train data)*

| Augment level | 1 | 2 | 3 | 4 |
|-|:-:|:-:|:-:|:-:|
| GNN [D1] | 0.794 | 0.679 | 0.586 | 0.562 |
| Monomial-NFN (ours) | **0.920** | **0.919** | **0.920** | **0.920** |

*Table 4: Predict CNN generalization on Tanh subset (original train data)*

| | Augmented |
|-|:-:|
| GNN [D1] | 0.883 |
| Monomial-NFN (ours) | **0.940** |

In this more challenging scenario, GNN's performance drops significantly, which highlights the lack of scaling symmetry in the model. Our model maintains consistent performance, matching the case in which we train with the augmented data. **W5: Comparison of runtime and memory consumption w.r.t. baselines.** **Answer:** See **Q3** in the General Response. **W6: Ablation regarding design choices.** **Answer:** Here we provide an ablation study on the choice of architecture for the task Predict CNN Generalization on the ReLU subset.
We denote: - Monomial Equivariant Functional Layer (Ours): MNF - Activation: ReLU - Scaling Invariant and Permutation Equivariant Layer (Ours): Norm - Hidden Neuron Permutation Invariant Layer (in [D7]): HNP - Permutation Invariant Layer: Avg - Multilayer Perceptron: MLP

*Table 5: Ablation study on design choices for the task Predict CNN generalization on ReLU subset*

| | Original | 1 | 2 | 3 | 4 |
|-|:-:|:-:|:-:|:-:|:-:|
| (MNF → ReLU)x1 → Norm → (HNP → ReLU)x1 → Avg → MLP | 0.917 | 0.916 | 0.917 | 0.917 | 0.917 |
| (MNF → ReLU)x2 → Norm → (HNP → ReLU)x1 → Avg → MLP | 0.918 | 0.917 | 0.917 | 0.917 | 0.918 |
| (MNF → ReLU)x3 → Norm → (HNP → ReLU)x1 → Avg → MLP | 0.920 | 0.919 | 0.918 | 0.920 | 0.920 |
| (MNF → ReLU)x1 → Norm → Avg → MLP | 0.915 | 0.914 | 0.917 | 0.916 | 0.914 |
| (MNF → ReLU)x2 → Norm → Avg → MLP | 0.918 | 0.919 | 0.918 | 0.917 | 0.918 |
| (MNF → ReLU)x3 → Norm → Avg → MLP | **0.922** | **0.920** | **0.919** | **0.920** | **0.920** |

Among these designs, the architecture incorporating three layers of Monomial-NFN with ReLU activation achieves the best performance. **Q1: Is it possible to use the proposed method together with normalization layers?** **Answer:** See **Q2** in the General Response. **Q2: How easy would it be to extend the method to other activation functions?** **Answer:** See **Q1** in the General Response. **References** [D1] Graph neural networks for learning equivariant representations of neural networks, ICLR 2024. [D2] Graph metanetworks for processing diverse neural architectures, ICLR 2024. [D3] Equivariant Architectures for Learning in Deep Weight Spaces, ICML 2023. [D4] Neural Functional Transformers, NeurIPS 2023. [D5] Improved Generalization of Weight Space Networks via Augmentations, ICML 2024. [D6] Neural Processing of Tri-Plane Hybrid Neural Fields, ICLR 2024. [D7] Permutation Equivariant Neural Functionals, NeurIPS 2023 --- Rebuttal 2: Title: Any Questions from Reviewer xQCm on Our Rebuttal?
Comment: We would like to thank the reviewer again for your thoughtful reviews and valuable feedback. We would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal. We would be happy to do any follow-up discussion or address any additional comments. --- Rebuttal 3: Title: Response to Rebuttal Comment: I would like to thank the authors for their rebuttal and for providing additional results and discussion, which addresses some of my concerns. I've read all the reviewers' concerns and the authors' responses. While not all of my main concerns were addressed, I do appreciate and acknowledge the novelty and contribution of the paper, and I will raise my score to align with the accepted score of all reviewers. I encourage the authors to include additional baselines and more strong evaluation comparisons in the revised version of the paper. --- Rebuttal Comment 3.1: Title: Thanks for your endorsement! Comment: Thanks for your reply, and we appreciate your endorsement. Following your suggestion, we will include additional baselines and stronger evaluation comparisons in our revision.
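For completeness, the sign-flip symmetry probed by the Tanh-subset experiments in this thread can be checked the same way as the scaling case: since tanh is odd (tanh(−z) = −tanh(z)), flipping the sign of a hidden neuron's incoming weights and bias together with its outgoing weights leaves the function unchanged. A toy sketch of our own with illustrative sizes:

```python
import math
import random

random.seed(2)

def forward(W1, b1, W2, x):
    """Two-layer tanh MLP: y = W2 · tanh(W1 x + b1)."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h))

n_in, n_hid = 3, 4
W1 = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [random.gauss(0, 1) for _ in range(n_hid)]
W2 = [random.gauss(0, 1) for _ in range(n_hid)]
x = [random.gauss(0, 1) for _ in range(n_in)]

# Monomial (sign-flip) transformation: s_i in {+1, -1} per hidden neuron.
s = [1.0, -1.0, -1.0, 1.0]
W1s = [[s[i] * w for w in W1[i]] for i in range(n_hid)]
b1s = [s[i] * b1[i] for i in range(n_hid)]
W2s = [s[i] * W2[i] for i in range(n_hid)]   # 1/s_i == s_i for signs

y, ys = forward(W1, b1, W2, x), forward(W1s, b1s, W2s, x)
print(abs(y - ys) < 1e-12)  # True: sign flips preserve the tanh network
```

A model without this symmetry built in sees the flipped weights as a different input, which is consistent with the performance drop the rebuttal reports for the GNN baseline on augmented Tanh test data.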
Summary: The present manuscript concerns the design of Neural Networks capable of processing the weights and biases of other Neural Networks, particularly Fully Connected and Convolutional NNs (known in the literature as *weight space networks, neural functional networks or metanetworks*). Previous works considered only hidden-neuron-permutation symmetries of weights/biases that preserve the function of the NN. In contrast, the authors highlight the existence of symmetries induced by certain activation functions (ReLU – positive scaling symmetries and sine/tanh – sign symmetries – Eq. (20)), following the works of Godfrey et al., NeurIPS’22 and Wood and Shawe-Taylor, Discr. Appl. Math.’96, which in certain cases are noted to be maximal (i.e. the only symmetries that preserve the NN function), as proved in the past. Combining these symmetries with permutations leads to the so-called “monomial-matrix groups”. Based on this background, the authors design monomial-matrix equivariant NNs, following the classical NN design strategy: linear layers interleaved with non-linearities. To characterise the former, they identify the weight-sharing pattern that equivariance imposes by solving a system of weight constraints for both of the symmetries above (thus fully characterising linear layers), while for the latter they use the same non-linearities as the activation functions at hand. Finally, they propose a method for monomial-matrix *invariant* layer design, which is combined with equivariant layers to yield an end-to-end invariant NN. The method is experimentally tested in various tasks: CNN generalisation prediction and Implicit Neural Representation (INR) classification and editing, showing competitive performance against permutation-symmetry-only baselines and a significant reduction in the number of learnable parameters.
Strengths: **Significance/Impact.** The topic of NN processing has been steadily gaining traction in the last year and has the potential to provide significant advantages in various applications such as meta-learning and processing signals of arbitrary nature (encoded into INRs) under a unifying framework. Therefore, improving computational efficiency and incorporating new inductive biases, as done in this work, is an important step towards advancing and popularising the field. **Presentation**. The paper is in general well-presented, with appropriately chosen notations and clear descriptions of the background concepts involved and the innovations proposed. **Novelty**. - *Studied problem*. The symmetries discussed in this work, although mentioned in the literature, have not been approached so far in the context of weight space networks. - *Methodology*: This work introduces novel layers that are equivariant or invariant to a group that has been underexplored (monomial matrices). - *Theory*. Additionally, following traditional weight-sharing proof strategies, the layers are characterized as the only linear layers that have the equivariance property, while Remark 4.5 borrows results from relevant papers (that might not be widely known to the community) to highlight the cases where the studied symmetries are the maximally function-preserving ones (however, some results are missing – see weaknesses). **Quality/Execution**. The paper is well-motivated, provides a comprehensive background discussion, follows a rigorous and well-established methodology to design monomial-matrix group equivariant/invariant layers and provides adequate experimental comparisons, including in regimes where previous works fail. **Computational efficiency**. The proposed method leads to a significant reduction in the number of parameters, a property which is particularly useful for large NNs (a typical use case in modern deep learning).
Weaknesses: **Limited expressivity (possibly reflected in the experimental results?)** - Although characterising the linear layers by solving the weight sharing constraints is a fairly general and rigorous technique for equivariant layer design, I have the impression that the resulting weight sharing, in this case, is severe. I.e. due to the large size of the group considered, the resulting layers seem weak in terms of expressivity. For example, in Eq. (22) all hidden layer weights and bias updates are calculated by linearly processing each element individually, i.e. a weight corresponding to an edge between two neurons will be updated based only on its previous value, ignoring other edges across the same or other layers. - Additionally, the activation functions that can be used are quite limited to preserve equivariance (the authors mention that they use the same activation as in the NN that is being processed). - Although these design choices are necessary for the current construction, this probably will not be the case for a different one, i.e. by directly designing non-linear layers. In other words, I am concerned that working with the standard NN pattern (linear layers interleaved with non-linearities) might be too limiting for this family of symmetries. - I am also wondering if limitations in expressivity are induced by the choice of the invariant layers. Could the authors elaborate on this? - I do not consider the above as grounds for rejection, since I believe that even incorporating these symmetries into NFNs and the linear layer characterisation are sufficient contributions. However, they seem like important limitations, which I suspect are reflected in the experimental results, and therefore should be highlighted by the authors. **Related work and existing results**. - Given that the field is relatively new, I think that the authors should devote more space to a more detailed literature review, e.g. 
describing the weight symmetries that have been discovered more thoroughly and discussing/comparing the weight space networks that have been proposed so far. Currently, most methods are cited, but not adequately explained. I understand that space might not allow this, but at least adding an extended related work section in the appendix would help. - Additionally, the following very related works are missing: (1) In the topic of weight space networks: - Universal Neural Functionals, Zhou et al., arxiv’24 - Graph metanetworks for processing diverse neural architectures, Lim et al., ICLR’23 - (2) In the topic of weight space symmetries (conditions for maximality): - Reverse-engineering deep ReLU networks, Rolnick et al., ICML’20 - Hidden Symmetries of ReLU Networks, Grigsby et al., ICML’23 - As far as I understand, the works of Chen et al., Neur. Comp.’93, Fefferman et al., NIPS’93 characterise maximal weight space symmetries only for the tanh activation and not for sine, as the authors mention L208 – in case this is true, the statement in Remark 4.5. should be rectified. **Extensibility**. Judging by the derivation of the weight-sharing patterns, it appears that it is not straightforward to extend this approach to symmetries induced by other activation functions (e.g. some are discussed in Godfrey et al.). Could the authors discuss this (and if true include it in their discussion about limitations?). Technical Quality: 4 Clarity: 4 Questions for Authors: - As far as I know, Proposition 3.4. and Proposition 4.4 follow from the characterisation of intertwiner groups done by Godfrey et al., NeurIPS’22 (e.g. see Theorem E.14 and Proposition 3.4), but the authors provide alternative proofs. In case these proofs are of independent technical interest, I would recommend that the authors briefly mention a proof sketch in the main paper and the differences with Godfrey et al. (this would also help in properly accrediting this prior work). 
- L210: The authors mention “It is natural to ask whether the group $G$ is still maximal in the other case”. What do they mean by the “other case”? - Perhaps an illustrative example of the resulting weight-sharing would help. Or maybe give some intuition on the resulting layers? - Maybe adding the exact number of parameters along with the performance in the tables would help in grasping the actual reduction. **Minor**. - L126-L128 are unclear. Perhaps describe in more detail the notation $\text{Aut}(\Delta_n)$ and the terms conjugation, semi-direct product etc. (depending on the importance of these statements). - L242: hyperparams --> Perhaps you mean learnable params? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Some of the limitations of this work are mentioned in the text (last paragraph of Section 5 and conclusion section). However, I think that some have not been adequately discussed (see weaknesses). I would recommend adding a separate section for this purpose. I do not foresee any negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. ----- **W1: Limited Expressivity.** **Answer:** We agree with the reviewer's discussion on limitations. However, we would also like to share our thoughts on these limitations. - Although **a large symmetry group might lead to a small number of independent parameters**, we believe that the resulting equivariant layers are still sufficient in terms of expressivity (see Theorem 5.1). Nevertheless, it is necessary to construct equivariant nonlinear layers that can encode more relations between the weight network's parameters in order to achieve better results. We leave this interesting topic for future study. - **About the activation functions that can be used**: See **Q1** in General Response. - **Beyond the linear layers**, the problem of characterizing nonlinear layers which are equivariant to both permutations and scaling/sign-flipping, or in particular, the problem of incorporating scaling/sign-flipping symmetries into nonlinear layers (such as self-attentions) is a nontrivial problem. We leave this open problem for future study. - **About the choice of the invariant layers**: Unfortunately, the invariant layer constructed in our paper is not expressive enough to express all invariant NFNs. For the ReLU case, one can verify that every invariant layer can be expressed via positively homogeneous functions of degree zero via Eq. (55). However, not all positively homogeneous functions of degree zero can be written in the form of the candidate choice in Eq. (56). Nevertheless, our candidate choice in Eq. (56) already covers a large part of positively homogeneous functions of degree zero. As a result, the invariant layers yield favorable experimental results. The same arguments apply for the sine and tanh networks. We have added this discussion to the limitations. 
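As a toy illustration of degree-zero positive homogeneity, the property the invariant layers above rely on, here is a minimal numpy sketch (`phi` is a hypothetical feature map chosen for this sketch, not the paper's invariant layer) showing that a row-normalised weight matrix is unchanged by per-neuron positive scaling:

```python
import numpy as np

rng = np.random.default_rng(2)

def phi(W):
    """Row-normalisation: positively homogeneous of degree zero per neuron."""
    return W / np.linalg.norm(W, axis=1, keepdims=True)

W = rng.standard_normal((4, 3))
c = rng.uniform(0.5, 2.0, 4)  # positive per-neuron scaling factors
assert np.allclose(phi(np.diag(c) @ W), phi(W))
```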
**W2: Related work and existing results.** **Answer:** Following the reviewer's suggestion, we have added an extended related work section in the appendix to: (1) include missing related works in the topic of weight space networks and weight space symmetries, and (2) provide more adequate explanations of cited methods. We have also edited Remark 4.5 regarding the works of Chen et al., Neur. Comp.’93, Fefferman et al., NIPS’93 on characterising maximal weight space symmetries only for the tanh activation and not for sine. **W3: Extensibility of this approach to other activation functions.** **Answer:** See **Q1** in General Response. **Q1: About the proofs of Propositions 3.4. and 4.4.** **Answer:** While the proofs in Godfrey et al. (NeurIPS '22) are technically involved and apply to very general cases, we provide direct and simple proofs that apply to our considered cases for the convenience of the reader and the completeness of the paper. Our proofs can be seen as simplified versions of those in Godfrey et al. (NeurIPS '22), justified for the considered cases. **Q2: What do they mean by the "other case"?** **Answer:** By the "other case", we mean the case when the network architectures are MLPs with ReLU activation such that the condition $n_L \geq \ldots \geq n_2 \geq n_1 > n_0 =1$ is not satisfied. In addition, regarding the works of [Chen et al., Neur. Comp.’93], [Fefferman et al., NIPS’93] on characterising maximal weight space symmetries only for the tanh activation and not for sine, the "other case" now contains MLPs with sin activation, too. We have added this discussion to the revised version. **Q3: An illustrative example.** **Answer:** See **Q4** in General Response. **Q4: Adding the exact number of parameters along with the performance.** **Answer:** The exact number of parameters for all models in all tasks has been provided in the Appendix. 
In addition, we have added runtime and memory usage of our model and the previous ones (see **Q3** in General Response). --- Rebuttal 2: Title: Any Questions from Reviewer yJwm on Our Rebuttal? Comment: We would like to thank the reviewer again for your thoughtful reviews and valuable feedback. We would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal. We would be happy to do any follow-up discussion or address any additional comments. --- Rebuttal Comment 2.1: Title: Post rebuttal Comment: I thank the authors for their response. As I mentioned in my initial review, I do not have strong objections to this paper, apart from the fact that expressivity could be limited (we currently do not know, but there are some hints). My overall assessment of the paper remains the same and I recommend acceptance. **Suggestion to authors:** However, I think the discussion on expressivity could have been more elaborate. For example, the argument that the "layers are still sufficient in terms of expressivity (see Theorem 5.1)." is not convincing since Theorem 5.1. only characterises linear equivariant layers and does not provide any evidence about non-linear functions. Another remaining question is whether expressivity is affected because of the need to use only certain activation functions in the neural functional/metanetwork - this has not been thoroughly discussed (the authors pointed to their general response, but I could not locate this part). I strongly encourage the authors to be upfront about the above in their limitations/open questions section to make their work more complete. --- Reply to Comment 2.1.1: Comment: Thank you very much for your reply. 
We agree with your suggestion, and we will include these interesting discussions, such as the expressivity of the equivariant/invariant layers and the effects of activations, in the limitations and open questions section for future work.
Summary: This paper studies the extension of permutation equivariant neural functionals to accommodate the monomial group, which is a generalization of the permutation group. This extension leads to a new class of NFN called monomial NFN that can also handle the scaling symmetry of positively homogeneous activations (ReLU) and the sign-flipping symmetry of activations such as tanh. The paper then characterizes the subset of all monomial matrices that are *preserved* by either ReLU or sin/tanh ($\sigma(Ax) = A\sigma(x)$). The paper then proceeds to construct the group $G_\mathcal{U}$ which is the product of all monomial groups that act on a weight space object $U$, and two subgroups of $G_\mathcal{U}$ under which the models with ReLU or sin / tanh are invariant. Finally, the paper constructs an affine weight space layer that is $G$-equivariant and also $G$-invariant weight space layers, which are more parameter efficient than prior works. Generalization prediction experiments show that the proposed layer performs better than prior works under standard conditions and significantly outperforms when the models are perturbed by scale, corroborating the statement made by the paper. On INR classification and editing tasks, monomial-NFNs are competitive or better than prior works. Strengths: Overall, this is a well-written and technically solid paper with a novel contribution to the neural functional literature. The theoretical results seem sound. While I did not thoroughly check every proof detail, Figure 1 demonstrates that the constructed layer is indeed equivariant to the scale of the weights. The other experimental results are also quite compelling. The parameter saving could also potentially be a big upside of the monomial-NFN which can bring better generalization. Weaknesses: I don't see major weaknesses in the paper. 
Perhaps one shortcoming is that the majority of performance improvement comes when the weights are scaled, which is not a very natural perturbation in reality. On the other hand, many tasks that cannot be "solved" by previous NFNs remain difficult for monomial NFNs (e.g., CIFAR10 classification), which might indicate some fundamental limitation of this line of work. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. How do the computation cost and memory compare to NFNs? In my opinion, the issue with NFNs' big parameter count is that it's very memory-intensive. While monomial-NFN saves parameters, the benefit is perhaps less significant if it doesn't come with a computational advantage. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Authors have adequately discussed the limitation though it would be good to include some computation cost analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
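For reference, the activation-preservation property $\sigma(Ax) = A\sigma(x)$ summarised in the review above can be verified directly for monomial matrices $A$; a minimal numpy sketch (illustrative only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
P = np.eye(5)[rng.permutation(5)]  # permutation matrix

# ReLU commutes with monomial matrices whose nonzero entries are positive:
A_relu = np.diag(rng.uniform(0.1, 3.0, 5)) @ P
assert np.allclose(np.maximum(A_relu @ x, 0), A_relu @ np.maximum(x, 0))

# tanh is odd, so it commutes with monomial matrices with +/-1 entries:
A_tanh = np.diag(rng.choice([-1.0, 1.0], 5)) @ P
assert np.allclose(np.tanh(A_tanh @ x), A_tanh @ np.tanh(x))
```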
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. **Weakness: I don't see major weaknesses in the paper. Perhaps one shortcoming is that the majority of performance improvement comes when the weights are scaled, which is not a very natural perturbation in reality. On the other hand, many tasks that cannot be "solved" by previous NFNs remain difficult for monomial NFNs (e.g., CIFAR10 classification), which might indicate some fundamental limitation of this line of work.** **Answer:** We agree with the reviewer on these limitations of this line of work. In addition, the problem of constructing different types of architecture, such as GNN-based or self-attention-based models, that are equivariant to a monomial matrix group is interesting and nontrivial. We leave this open problem for future study. **Q1: How do the computation cost and memory compare to NFNs? In my opinion, the issue with NFNs' big parameter count is that it's very memory-intensive. While monomial-NFN saves parameters, the benefit is perhaps less significant if it doesn't come with a computational advantage.** **Answer:** See **Q3** in General Response. --- Rebuttal 2: Title: Any Questions from Reviewer MDVG on Our Rebuttal? Comment: We would like to thank the reviewer again for your thoughtful reviews and valuable feedback. We would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal. We would be happy to do any follow-up discussion or address any additional comments. --- Rebuttal Comment 2.1: Comment: Thank you for the response. My opinion of the paper is still positive and I would like to keep my current score. --- Reply to Comment 2.1.1: Title: Thanks for your endorsement! Comment: Thanks for your reply, and we appreciate your endorsement.
Summary: - Paper improves neural functional networks by taking into account weight scaling properties of ReLU networks and weight sign-flipping symmetries of sin or Tanh networks. - Monomial matrices are used to represent these symmetries, both permutation and scale/sign-flipping transformations. - Proposed model has fewer trainable (independent) parameters compared to the original NFN family of models. Strengths: - Proposes a principled way to incorporate activation functions in neural network representations for MLPs and CNNs. Weaknesses: - No discussion/comparison with previous works on the subject [1,2,3,5]. - No discussion of extensions to architectures with branches/transformers. - Not the first to consider activation functions in representing neural network weights as claimed. In [1] and [2], activation functions are encoded as nodes in a graph. - The performance gain over the baselines is minimal. Perhaps this suggests that the role of activation function encoding for this task is minimal as shown in the original neural functional works and those of [1, 2, 3, 4]. - There is a focus on very specific activation functions (ReLU, sin, tanh); admittedly, these are common in many architectures, but the authors do not provide any discussion of how the proposed method can be applied to other activation functions. Technical Quality: 3 Clarity: 3 Questions for Authors: - Minimal equivariance is the goal for this task as equivariance is generally easy to achieve. How does the proposed model satisfy minimal equivariance while ignoring permutations that are not functionally equivariant? - Can the model handle a mixed modelzoo with different activation functions? Note that [1] can handle this case and [3] already demonstrates that this is possible. Per my understanding, the proposed model will need different instantiations of the model for each modelzoo with a different activation function. 
- How are the input/output layers of the considered networks handled in the proposed framework? Note that these layers do not follow the permutation symmetries of MLPs (only the hidden layers do). ## References [1] Lim, Derek, et al. "Graph metanetworks for processing diverse neural architectures." arXiv preprint arXiv:2312.04501 (2023). [2] Kofinas, Miltiadis, et al. "Graph neural networks for learning equivariant representations of neural networks." arXiv preprint arXiv:2403.12143 (2024). [3] Andreis, Bruno, Soro Bedionita, and Sung Ju Hwang. "Set-based neural network encoding." arXiv preprint arXiv:2305.16625 (2023). [4] Unterthiner, Thomas, et al. "Predicting neural network accuracy from weights." arXiv preprint arXiv:2002.11448 (2020). [5] Zhou, Allan, Chelsea Finn, and James Harrison. "Universal neural functionals." arXiv preprint arXiv:2402.05232 (2024). Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: While limitations are not discussed, I do not see any immediate negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. **W1: Discussion/comparison with previous works in [C1,C2,C3,C5].** **Answer:** Based on the additional references you provided, we have added the following discussion to the revised version of the paper: - *Previous methods*: Permutations and scaling (for ReLU networks), as well as sign-flipping (for sine or tanh networks) symmetries, are fundamental symmetries of weight networks. Permutation-equivariant NFNs are successfully built in [C1,C2,C3,C5,33,41,64,65]. In particular, the authors in [C1,C2] carefully construct computational graphs representing the input neural networks' parameters and process the graphs using graph neural networks. In [C3], neural network parameters are efficiently encoded by carefully choosing appropriate set-to-set and set-to-vector functions. The authors in [C5] view network parameters as a special case of a collection of tensors and then construct maximally expressive equivariant linear layers for processing any collection of tensors given a description of their permutation symmetries. These methods are applicable to several types of networks, including those with branches or transformers. However, the models in [C1,C2,C3,C5], as well as others mentioned in our paper, were not necessarily equivariant to scaling nor sign-flipping transformations, which are important symmetries of the input neural networks. - *Our method* makes the first step toward incorporating both permutation and non-permutation symmetries into NFNs. In particular, the model proposed in our paper is equivariant to permutations and scaling (for ReLU networks) or sign-flipping (for sine and tanh networks). This leads to a significant reduction in the number of parameters, a property that is particularly useful for large NNs in modern deep learning, while achieving comparable or better results than those in the literature. 
**W2: Extensions to architectures with branches/ transformers?** **Answer:** See **Q2** in General Response. **W3: Not the first to consider activation functions in representing neural network weights.** **W4: The role of activation function encoding for this task is minimal.** **Answer to W3-W4:** We are certainly not the first to consider activation functions in representing neural network weights as several previous works in the literature, including [C1,C2,C3,C4], have already done. However, we assert that our models are the first family of NFNs to incorporate non-permutations such as scaling and sign-flipping symmetries of weight spaces, which are crucial symmetries of neural network weights. This leads to a significant reduction in the number of parameters, much lower computational cost and memory consumption, while achieving comparable or better results than those in the literature. **W5: Applied to other activation functions.** **Answer:** See **Q1** in General Response. **Q1: How does the proposed model satisfy minimal equivariance while ignoring permutations that are not functionally equivariant?** **Answer:** By Proposition 4.4, we already excluded false permutations (i.e., permutations that are not functionally equivariant) from the symmetry group $G$ of the weight networks. In addition, by Theorem 5.1, we proved that the minimal equivariance is actually achieved for this group $G$. **Q2: Can the model handle a mixed modelzoo with different activation functions?** **Answer:** See **Q2** in General Response. In addition, while [C1,C2,C3] can handle these cases, we do not know whether it is possible to incorporate non-permutations, such as scaling and sign-flipping symmetries which are our main focus, into these models. **Q3: How are the input/output layers of the considered networks handled in the proposed framework?** **Answer:** The way the input/output layers are handled is described in Proposition 4.4. 
According to this proposition, the input and output layers are fixed, as any nontrivial permutation or scaling transformation of the input/output layers would result in a different function. **References** [C1] Lim, Derek, et al. Graph metanetworks for processing diverse neural architectures. arXiv 2023. [C2] Kofinas, Miltiadis, et al. Graph neural networks for learning equivariant representations of neural networks. arXiv 2024. [C3] Andreis, Bruno, Soro Bedionita, and Sung Ju Hwang. Set-based neural network encoding. arXiv 2023. [C4] Unterthiner, Thomas, et al. Predicting neural network accuracy from weights. arXiv 2020. [C5] Zhou, Allan et al. Universal neural functionals. arXiv 2024. --- Rebuttal 2: Title: Any Questions from Reviewer RAv9 on Our Rebuttal? Comment: We would like to thank the reviewer again for your thoughtful reviews and valuable feedback. We would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal. We would be happy to do any follow-up discussion or address any additional comments. --- Rebuttal Comment 2.1: Title: Discussion Deadline is in around Two Days Comment: Dear Reviewer RAv9, We sincerely appreciate the time you have taken to provide feedback on our work, which has helped us greatly improve its clarity, among other attributes. This is a gentle reminder that the Author-Reviewer Discussion period ends in just around two days from this comment, i.e., (11:59 pm AoE on August 13th). We are happy to answer any further questions you may have before then, but we will be unable to respond after that time. If you agree that our responses to your reviews have addressed the concerns you listed, we kindly ask that you consider whether raising your score would more accurately reflect your updated evaluation of our paper. Thank you again for your time and thoughtful comments! Sincerely, Authors
Rebuttal 1: Rebuttal: **General Response:** Dear AC and Reviewers, Thanks for your thoughtful reviews and valuable comments, which have helped us improve the paper significantly. We are encouraged by the endorsements that: 1) Our Monomial-NFN is a novel contribution to the neural functional literature with sound theoretical results (Reviewer MDVG) and is an important step toward advancing and popularising the field (Reviewer yJwm); 2) Monomial-NFN is equivariant or invariant to a group that has been underexplored (monomial matrices) (Reviewer yJwm); 3) experimental results are compelling (Reviewer MDVG); 4) the parameter saving is a big upside of the Monomial-NFN which can bring better generalization (Reviewer MDVG, RAv9) and be useful for large NNs in modern deep learning (Reviewer yJwm, xQCm). We address some of the common comments from Reviewers below. **Q1: Extensibility to other activation functions (Reviewers RAv9, yJwm and xQCm).** **Answer:** Our approach can be used to handle the symmetries induced by other activation functions, such as LeakyReLU, by using the same strategy based on weight-sharing patterns. Indeed, it is proven in [24] that the linear groups preserved by some other types of activation functions are certain monomial matrices, which is entirely analogous to our cases. The reason why we focus on ReLU, sine, and tanh networks is that they are commonly used in practice, and it is well-known from the literature that the maximal symmetric group of their weight spaces contains both permutations and non-permutations, such as scaling (for ReLU) and sign-flipping (for sine and tanh) symmetries. Both of these symmetries are fundamental symmetries of the weight spaces. **Q2: Extensibility to other architectures with normalizations/branches/transformers and mixed activations (Reviewers RAv9 and xQCm).** **Answer:** Our method is applicable to these architectures, provided that the symmetric group of the weight network is known. 
The idea is to use the weight-sharing mechanism to redefine the constraints of the learnable parameters. The main concern with the architectures that include normalizations/branches/transformers and mixed activation functions is that we do not know whether their weights are equivariant to any non-permutation symmetries, such as scaling or sign-flipping. Since the aim of this paper is to incorporate scaling and sign-flipping symmetries in addition to permutations into NFNs, these architectures are outside the scope of this paper. **Q3: Compare computation cost and memory to other NFNs (Reviewers MDVG, yJwm and xQCm).** **Answer:** We have added the runtime and memory consumption of our model and the previous ones in the two tables below to compare the computational and memory costs in the task of predicting CNN generalization. It is observable that our model runs faster and consumes significantly less memory than NP/HNP in [64] and GNN-based method in [B1]. This highlights the benefits of parameter savings in Monomial-NFN. *Table 1: Runtime of models* ||NP [64]| HNP [64]|GNN [B1]|Monomial-NFN (ours) | |-|-|-|-|-| | Tanh subset | 35m34s | 29m37s |4h25m17s | **18m23s**| | ReLU subset | 36m40s | 30m06s |4h27m29s | **23m47s**| *Table 2: Memory consumption* | | NP [64] | HNP [64] |GNN [B1] | Monomial-NFN (ours) | |-|-|-|-|-| | Tanh subset | 838MB | 856MB |6390MB|**582MB** | | ReLU subset | 838MB | 856MB |6390MB|**560MB** | **Q4: Add an example (Reviewers yJwm and xQCm).** **Answer:** Let us consider a two-hidden-layers MLP with activation $\sigma=\operatorname{ReLU}$. Assume that $n_0=n_1=n_2=n_3=2$, i.e. all layers have two neurons. This MLP defines a function $f: \mathbb{R}^2 \to \mathbb{R}^2$ given by $$f(x) = W^{(3)} \sigma \left( W^{(2)} \sigma \left( W^{(1)} x + b^{(1)} \right) + b^{(2)} \right) + b^{(3)},$$ where $W^{(i)} =\left(W^{(i)}\_{jk}\right)$ is a $2 \times 2$ matrix and $b^{(i)}=[b^{(i)}_1,b^{(i)}_2]^{\top}$ for each $i=1,2,3$. 
In this case, the weight space $\mathcal{U}$ consists of the tuples $U=(W^{(1)},W^{(2)},W^{(3)},b^{(1)},b^{(2)},b^{(3)})$ and it has dimension 18. According to Eq. (27), an equivariant layer $E$ over $\mathcal{U}$ has the form $E(U) = (W'^{(1)},W'^{(2)},W'^{(3)},b'^{(1)},b'^{(2)},b'^{(3)})$, where $$W'^{(1)}\_{jk} = \mathfrak{p}^{1jk}\_{1j1} W^{(1)}\_{j1} + \mathfrak{p}^{1jk}\_{1j2} W^{(1)}\_{j2} + \mathfrak{q}^{1jk}\_{1j} b^{(1)}\_{j}, \qquad \text{and} \qquad b'^{(1)}\_j = \mathfrak{r}^{1j}\_{1j1} W^{(1)}\_{j1} + \mathfrak{r}^{1j}\_{1j2} W^{(1)}\_{j2} + \mathfrak{s}^{1j}\_{1j}b^{(1)}\_{j},$$ $$W'^{(2)}\_{jk} = \mathfrak{p}^{2jk}\_{2jk} W^{(2)}\_{jk}, \qquad \text{and} \qquad b'^{(2)}\_j = \mathfrak{s}^{2j}\_{2j}b^{(2)}\_{j},$$ $$W'^{(3)}\_{jk} = \mathfrak{p}^{3jk}\_{31k} W^{(3)}\_{1k} + \mathfrak{p}^{3jk}\_{32k} W^{(3)}\_{2k}, \qquad \text{and} \qquad b'^{(3)}\_j =\mathfrak{s}^{3j}\_{31}b^{(3)}\_{1} + \mathfrak{s}^{3j}\_{32}b^{(3)}\_{2} + \mathfrak{t}^{3j}.$$ These equations can be written in a friendly matrix form which we included in the attached pdf. (We move the matrix form to the attached pdf due to space constraint.). **Reference** [B1] Graph neural networks for learning equivariant representations of neural networks, ICLR 2024. Pdf: /pdf/a1e1e6b95e092e086def37bbf4ece33c44d909c9.pdf
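As a reader's aid, the equivariant layer $E$ of the worked example above can be implemented and checked numerically; in the following sketch the coefficients $\mathfrak{p}, \mathfrak{q}, \mathfrak{r}, \mathfrak{s}, \mathfrak{t}$ are drawn at random, and the equivariance check covers positive hidden-neuron scalings only (an illustrative sketch, not the authors' released code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2  # every layer of the toy MLP has two neurons

# Randomly initialised stand-ins for the learnable coefficients p, q, r, s, t:
P1 = rng.standard_normal((n, n, 2)); Q1 = rng.standard_normal((n, n))
R1 = rng.standard_normal((n, 2));    S1 = rng.standard_normal(n)
P2 = rng.standard_normal((n, n));    S2 = rng.standard_normal(n)
P3 = rng.standard_normal((n, n, 2)); S3 = rng.standard_normal((n, 2))
T3 = rng.standard_normal(n)

def E(U):
    """Equivariant layer implementing the weight-sharing equations above."""
    W1, W2, W3, b1, b2, b3 = U
    W1p = np.empty((n, n)); W3p = np.empty((n, n))
    b1p = np.empty(n);      b3p = np.empty(n)
    for j in range(n):
        b1p[j] = R1[j, 0] * W1[j, 0] + R1[j, 1] * W1[j, 1] + S1[j] * b1[j]
        b3p[j] = S3[j, 0] * b3[0] + S3[j, 1] * b3[1] + T3[j]
        for k in range(n):
            W1p[j, k] = (P1[j, k, 0] * W1[j, 0] + P1[j, k, 1] * W1[j, 1]
                         + Q1[j, k] * b1[j])
            W3p[j, k] = P3[j, k, 0] * W3[0, k] + P3[j, k, 1] * W3[1, k]
    return (W1p, P2 * W2, W3p, b1p, S2 * b2, b3p)

def act(U, d1, d2):
    """Group action of positive hidden-neuron scalings (no permutation)."""
    W1, W2, W3, b1, b2, b3 = U
    return (d1[:, None] * W1, d2[:, None] * W2 / d1[None, :],
            W3 / d2[None, :], d1 * b1, d2 * b2, b3)

U = (rng.standard_normal((n, n)), rng.standard_normal((n, n)),
     rng.standard_normal((n, n)), rng.standard_normal(n),
     rng.standard_normal(n), rng.standard_normal(n))
d1, d2 = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n)

# Equivariance: applying the scaling before or after E gives the same result.
assert all(np.allclose(l, r)
           for l, r in zip(E(act(U, d1, d2)), act(E(U), d1, d2)))
```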
NeurIPS_2024_submissions_huggingface
2024
Interaction-Force Transport Gradient Flows
Accept (poster)
Summary: The paper proposes a gradient flow in combined Wasserstein-MMD geometry w.r.t. certain functionals. The authors primarily consider MMD squared functional, but also have some theoretical results regarding KL divergence functional. The work is more about theory: the authors are concerned about some mathematical properties of their proposed flows and convergence analysis, and have only toy 2D illustrative experiments. Strengths: * Overall, the considered topic is quite interesting. The theory of Gradient Flows in different geometries is an emergent field which is at the intersection of Machine (Deep) learning and mathematics. This theory is full of remarkable, non-trivial, theoretical results. Transmitting all of this mathematical beauty into practical algorithms is praiseworthy. * The paper has some interesting theoretical results and statements. Weaknesses: * (A) At first, I found the manuscript to be a bit difficult to read, especially section 2. A lot of specific mathematical terms were used, e.g., “Onsager operator”; tangent/cotangent spaces and metric tensor of (probability) measures space. A lot of relationships between these specific objects were mentioned, e.g., $\mathbb{K} = \mathbb{G}^{-1}$; formulation of gradient flow through the Onsager operator (eq. 2). I think that in order to make the text more accessible for those who are not a specialist in geometry of (probability) measure spaces, it should be either simplified, or all necessary theoretical introductions should be done, e.g., in the appendix. * (B) I am not fully satisfied with the structure of the text. In particular: * Why Remark 3.4 and Corollary 3.5. (some properties of pure IFT gradflow which was introduced much earlier) are located right after technical Theorem 3.3 (Lojasiewicz inequalities)? I think it is better to place these statements right after Remark 3.1. 
* For me, it is a bit strange that the paper develops theory of spherical IFT gradflows (Section 3.1.), while the only practically considered case (where the driving functional for the gradflow is MMD) does not require this theory (Theorem 3.6) because spherical MMD (IFT) coincides with conventional MMD (IFT) flow. Maybe more emphasis (including practical evaluations) should be put on KL-driven gradflows, where the sphericity matters. * (C) (lines 98-99). Machine learning applications of Wasserstein gradient flows: some missed links: [1-6] * (D) To be honest, I am a bit skeptical about the pure MMD gradient flow (in the MMD geometry) - which is denoted as (MMD-MMD-GF) in Theorem 3.6, and, correspondingly, my skepticism extends to the MMD gradflow in the joint Wasserstein-MMD geometry - (MMD-IFT-GF) in Theorem 3.6. At first, (MMD-MMD-GF) was considered in the literature, e.g., [7] (not cited!) - see Section 3.1, Case 1 of their paper. And it was noted that such a pure MMD-MMD flow is undesirable in practice, exactly because it “teleports” mass between initial and target distribution (note that the solution to MMD-MMD is just interpolation between distributions, as noted by Theorem 3.6). As I understand, the idea of the paper under consideration is that by considering the joint Wasserstein-MMD geometry (MMD-IFT flow) one can alleviate this problem. However, in the paper, I didn’t find sufficient evidence that it is the case. In particular, the proposed practical optimization procedure includes solving the proximal MMD minimizing-movement step, eq. 16. For $F = \text{MMD}$ it boils down to eq. (18), which is an MMD barycenter problem. It is known that the MMD barycenter problem has a solution, see [8, Proposition 2] - it is just a mixture of the input distributions. Therefore, if we solve the MMD minimizing-movement step exactly, the resulting $\mu^{\ell + 1}$ will mix $\mu^{\ell + \frac{1}{2}}$ and target $\pi$, i.e., the teleporting of mass will occur. 
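The barycenter claim in point (D) can be checked numerically: by linearity of kernel mean embeddings, the mixture of two weighted particle measures attains a strictly lower barycenter objective than either endpoint. A minimal numpy sketch (arbitrary toy particle sets and an RBF kernel chosen for illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(X, Y, s=1.0):
    """RBF kernel matrix between point clouds X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def mmd2(X, a, Y, b):
    """Squared MMD between weighted empirical measures (X, a) and (Y, b)."""
    return a @ rbf(X, X) @ a - 2 * a @ rbf(X, Y) @ b + b @ rbf(Y, Y) @ b

n, tau = 30, 0.4
X = rng.standard_normal((n, 2))        # particles of the current measure mu0
Y = rng.standard_normal((n, 2)) + 3.0  # particles of the target pi
a = b = np.full(n, 1.0 / n)

def J(Z, w):
    """Barycenter objective (1 - tau) * MMD^2(., mu0) + tau * MMD^2(., pi)."""
    return (1 - tau) * mmd2(Z, w, X, a) + tau * mmd2(Z, w, Y, b)

# The mixture (1 - tau) mu0 + tau pi minimises J, i.e. the exact barycenter
# step mixes in (teleports) mass from pi rather than transporting particles.
Zmix = np.vstack([X, Y])
wmix = np.concatenate([(1 - tau) * a, tau * b])
assert J(Zmix, wmix) < min(J(X, a), J(Y, b))
```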
* (E) The practical validation of the method is rather weak. Only a couple of 2D experiments with Gaussians/mixtures of Gaussians. Moreover, I didn’t find that the proposed method performs better than the alternatives. Maybe, according to some metrics, it is indeed the case, but the visual performance of the method is somewhat disappointing. As I understand from Figures 3, 4 and the gifs provided in the supplementary, the method leaves a considerable number of points far from the support of the target distribution. The alternatives, even the vanilla MMD flow, are better in terms of this characteristic. [1] Gao et al., Deep Generative Learning via Variational Gradient Flow, ICML’2019 [2] Gao et al., Deep Generative Learning via Euler Particle Transport, MSML’2021 [3] Mokrov et al., Large-Scale Wasserstein Gradient Flows, NeurIPS’2021 [4] Alvarez-Melis et al., Optimizing Functionals on the Space of Probabilities with Input Convex Neural Networks, TMLR’2022 [5] Bunne et al., Proximal Optimal Transport Modeling of Population Dynamics, AISTATS’2022 [6] Fan et al., Variational Wasserstein gradient flow, ICML’2022 [7] Mroueh et al., Sobolev Descent, AISTATS’2019 [8] Cohen et al., Estimating Barycenters of Measures in High Dimensions Technical Quality: 3 Clarity: 2 Questions for Authors: * (a) What are the properties of the inverse operator $\mathcal{K}^{-1}$? In particular, why is it linear in its argument? * (b) The MMD minimizing movement step (eq. 16 and eq. 18) is solved inexactly in practice, e.g., only the weights of the particles are optimized, while the exact solution is a mixture of the source and target distributions. Moreover, even eq. 18 is substituted with a single step of projected GD. What is the reason? How do such approximations affect the theoretical and practical properties of the proposed method? * (c) In the appendix, proofs.
Why does the equality hold: $\langle \frac{\delta F}{\delta \mu}[\mu], \mathcal{K}^{-1}\frac{\delta F}{\delta \mu}[\mu] \rangle_{L_\mu^2} = \Vert \frac{\delta F}{\delta \mu}[\mu] \Vert^2_{\mathcal{H}}$? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The limitations were addressed correctly Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for writing a long and critical review. We must point out that the review is filled with misunderstandings, which we do our best to clarify below. We would appreciate it if the reviewer could please consider our clarification. > This theory is full of remarkable, non-trivial, theoretical results. Transmitting all of this mathematical beauty into practical algorithms is praiseworthy. Thank you. But such a positive assessment does **not** match your score. > (E) The practical validation is rather weak... ... visual performance of the method is somewhat disappointing... the method leaves a considerable number of points far from the support of target distribution... vanilla MMD flow, are better We are deeply confused by this assessment. The only explanation we can think of is that the reviewer completely misunderstood the point of our experiment and the intuition of unbalanced transport. We have clearly stated **twice** in the manuscript (Fig. 3 & 4 captions) that **color intensity indicates points’ weights** and mention that **weights vanish**. The "points far from the support" the reviewer was referring to are likely (since we are confused by the review) the points of zero mass; hence the reviewer's comment is likely due to a major misunderstanding. Nonetheless, we will improve the presentation further to avoid misunderstandings. We will explicitly say: **The hollow/white circles indicate the particles that have already vanished**. Please note: as the paper indicates, the performance of the IFT is overwhelmingly good. Hence the review comment does not make sense. > only a couple of 2D experiments We have now included results in higher dimensions in the attached PDF. > strange that the paper develops theory of spherical IFT gradflows... does not require this theory (Theorem 3.6) because spherical MMD (IFT) coincides with conventional MMD (IFT) flow The reasoning seems flawed here.
Our results, such as Thm 3.6, precisely establish this special-case equivalence of SIFT and allow an easy implementation. Prior to our theory, there was no literature on the PDE of a mass-preserving flow in the MMD/IFT geometry, so no implementation was possible. The reviewer claimed our theory is "not needed" after she/he read our theory. Without this result, one cannot implement the mass-preserving flow. Furthermore, rigorously speaking, it is also incorrect to say the two gradient flows are equivalent. Hence, the logic of the claim that our result is "not needed" is not sound. > [7] (not cited!) We have now cited a subsequent/expanded work by the same group [Y. Mroueh and M. Rigotti] in the revised preprint. This paper covers and overlaps with the older paper you mentioned; we believe the coverage is now sufficient. Furthermore, the paper you mentioned did not contain the PDE theory and principled gradient structure we uncovered. Hence, although related, it should not be used as an argument to undermine our contribution. > my skepticism extends to MMD gradflow in the joined Wasserstein-MMD geometry Unfortunately, we failed to find sound reasoning for this "extension" of skepticism. We never claimed MMD-MMD-GF (which is not our focus) is superior. IFT is what we advocate, and the Wasserstein contribution made the difference. The review simply cited an old paper that is related to, but did not theoretically study, **MMD-MMD-GF**, let alone IFT, and claimed this "extends" their "skepticism" to IFT. We do not see any sound reasoning here. > Therefore, if we fairly solve MMD minimizing movement step, the resulting $\mu^{\ell+1}$ will mix $\mu^{\ell+\frac{1}{2}}$ and target $\pi$, i.e., the teleporting of mass will occur. Sorry, we did not understand what the question or concern is here. The statement follows straightforwardly from our PDE/ODE characterization in Thm 3.6. > (a) What are the properties of the inverse operator $\mathcal{K}^{-1}$.
In particular, why is it linear? Do we understand correctly that the reviewer is asking why the inverse of a linear operator is linear? Please see also the first paragraph in Sec 2.2 for the properties of the integral operator and references that provide more basics of kernel methods and the integral operator. Or did we misunderstand your question? > (c) In the appendix, proofs. Why does the equality hold: First, there is a typo: the inner products should be in the unweighted $L^2$ (thank you for helping us fix this), i.e., $$ \langle \frac{\delta F}{\delta \mu}\left[\mu\right] ,\mathcal K^{-1} \frac{\delta F}{\delta \mu}\left[\mu\right] \rangle_{L^2} \\ = \| \frac{\delta F}{\delta \mu}\left[\mu\right]\|^2_{\cal H} $$ This corrected relation follows from the textbook definition of the RKHS (and its norm). See a standard text such as Cucker, F., & Zhou, D. X. (2007). _Learning theory: an approximation theory viewpoint_ (Vol. 24). > substituted with a single step of projected GD. What is the reason? How do such approximations affect theoretical and practical properties of the proposed method? In optimization, it is often desirable and significantly faster to perform inexact iterations to be more computationally efficient. That is the reason for our implementation. Since our paper does not focus on the theory of the exact-inexact relation, the inexactness does not affect the existing analysis of the paper. In practice, we have observed that such changes do not affect the performance significantly but improve the speed. Since eq. (18) is a convex program, inexact iterations are preferred. > difficult to read... A lot of specific mathematical terms were used First, we will consider your suggestions. Our paper is a mathematically rigorous treatment; hence some mathematical terms are necessary. Plus, we have already opted for a minimal set of terminologies common among ML researchers on the topic. It also seems the other reviewers did not experience the same difficulty.
Nonetheless, we will try to make the paper more accessible to non-experts (than it already is). --- Rebuttal Comment 1.1: Title: Thanks to the authors Comment: I thank the authors for the answers they provided and appreciate the fairly expressed attitude towards my review. Some comments: 1. Indeed, I missed that your method supports weights on par with the particles themselves, and this is the reason for my wrong evaluation of your 2D Gaussian experiments. My bad, my carelessness. It seems that I was a bit biased by the typical particle flows I know (MMD flow, KSD flow, SVGD flow), which do not introduce this additional complexity with weights. 2. I appreciate the additional 100D Gaussian $\rightarrow$ mixture of 3 Gaussians experiment. It strengthens the work. 3. **The reviewer claimed our theory is "not needed"** - I never said that anywhere in my review. The only "not needed" in my review was about the ethics review. Regarding the theory, I just wanted to encourage discussion on the practical aspects of the flows different from MMD (IFT). 4. Which particular work do you mean by **[Y. Mroueh and M. Rigotti]**? Also, I am a little confused as to why the authors refuse to cite the related work [7], which I pointed out in my review. 5. For sure, MMD-MMD-GF is not your focus. My point was that MMD-IFT-GF (the only practically evaluated flow you propose) inherits some undesirable properties of the pure MMD-MMD flow. I mentioned the teleporting of mass problem noticed in the old paper I cited. And I just claimed that this teleporting of mass problem also appears in your case (MMD-IFT-GF) - theoretically - when solving the MMD minimizing movement step, eq. 16. This is because the MMD minimizing movement step (as you noticed) boils down to the MMD barycenter problem, which has a known solution (see [8, proposition 2]). And this known solution is just a mixture of the source and target samples.
Maybe this teleporting of mass phenomenon is indeed clear from your PDE/ODE characterization in Thm 3.6, but anyway this phenomenon is worth mentioning explicitly in the paper. 6. **In optimization, it is often desirable and significantly faster to perform inexact iterations to be more computationally efficient.** In general, I agree with this statement. However, in your case, solving the MMD barycenter problem exactly seems to be faster, because, as [8, proposition 2] notices, the solution is just a mixture of particles. In conclusion, I thank the authors one more time and raise my score. --- Reply to Comment 1.1.1: Title: Thank you for acknowledging the major misunderstanding. We have now addressed the new points. Comment: We thank the reviewer for reading our rebuttal. However, the 6 points raised by the reviewer appear to digress from the main point of the rebuttal and do not justify the rejection assessment. We now address the reviewer's comments point by point. Due to the lateness of the comments, we try our best to be thorough. ### Points 1 and 2 Thank you for acknowledging the major misunderstanding and acknowledging our new experiments. We believe those issues are now resolved. ### Point 3 > Under item (B) of "Weaknesses", it was stated that "the only practically considered case (where the driving functional for the gradflow is MMD) **does not require this theory** (Theorem 3.6)". First, we apologize for wrongly writing "required" as "needed", though we believe the meaning is the same. This is what our "not needed" comment refers to. Does the reviewer claim that "not require" does not imply "not need", or does the reviewer still stand by this assessment? In any case, we have already addressed this (non-)issue in the rebuttal, and we believe this point is now resolved. ### Point 4 [Y. Mroueh and M. Rigotti]: Unbalanced Sobolev Descent. Advances in Neural Information Processing Systems. 2020;33:17034-43.
> I am a little confused as to why the authors refuse to cite a related work [7] We have clearly stated in the rebuttal that the newer paper above contains the old framework along with newer methodologies (e.g., the Kernel-Sobolev-Fisher discrepancy, which is more general) and results. Proper scholarly practice is to avoid block citations of many similar papers when the relevant line of work has already been covered. We also need more time to look into the content of the 7 papers [1-7] the reviewer suggested we cite. [Y. Mroueh and M. Rigotti] appears to be more recent and comprehensive to the best of our knowledge. We hope our reason is clear. In any case, we also did not refuse to cite [7]; we simply stated that we have covered kernel Sobolev descent, and it should not be used as an argument to undermine our contribution. Furthermore, none of those papers contain the contributions of our paper, so we again wish to emphasize that the comments digress from the main point of the rebuttal and the rejection assessment is not justified here. ### Point 5 > My point was that MMD-IFT-GF ... inherits some undesirable properties of the pure MMD-MMD flow. We still do not see any mathematical justification for this "inheritance". The comments kept mentioning the MMD steps, but the IFT also has a Wasserstein step with diffusion. So the comment is not sound. More mathematical analysis and evidence are needed to support such a claim. > anyway this phenomenon is worth mentioning explicitly in the paper We agree. We have already done this by giving the precise mathematical formulation of the flow solution: see the second formula in Thm 3.6. This precise statement is already explicit. Furthermore, the solution of (MMD-IFT-GF) has not been studied before and is not simply a mixture. We are open to adding more plain-English sentences to the presentation if it helps non-experts understand the results better. But again, the reviewer digresses, and this is a minor presentation (non-)issue.
> known solution is just a mixture of source and target samples Mathematically, this is not rigorous. We do not assume the target distribution to be discrete; the mixture is infinite-dimensional and not simple to implement. The goal of IFT, or of Arbel et al.'s work, is to find a gradient-based algorithm to generate the path $\mu_t$. Again, we do not see this as a mathematical reason to undermine IFT. ### Point 6 We agree that there can be more discussion and future work on how to implement the MMD step. We have already discussed this in great detail. We will improve the presentation. > solving the MMD barycenter problem exactly seems to be faster We have tried both in practice. What the reviewer described is not the case. "Faster" for what? We will expand on this in the corresponding section. > because, as [8, proposition 2] notices, the solution is just But our goal is not to solve the MMD barycenter sub-problem -- it is just a subroutine in the JKO splitting scheme. The goal is to generate samples to construct the path $\mu_t$. The logic in the reviewer's comment is flawed: why not just take samples from $\pi$ directly? Why do we bother using methods such as that of [Arbel et al.]? Furthermore, [8, proposition 2] simply says the solution to the sub-step is a reweighting, which is precisely what we implemented. One must optimize the reweighting coefficients $\beta_p$ in [8]. The reviewer's comments seem to have trivialized this. Again, zooming out to the big picture, this implementation detail, which we did not hide, does not seem to be sound grounds for rejecting our contribution. ### Conclusion In summary, we appreciate the reviewer's time and effort. However, those comments do not justify the rejection assessment. --- Rebuttal 2: Title: End of discussion period approaching Comment: Dear reviewer, Thank you for your feedback on our manuscript. We have carefully considered your comments and suggestions and have made the revisions.
We have also included new experiments and done our best to answer your questions. The rebuttal took a tremendous amount of effort and we want to make sure it has been read. As the discussion period will be closed soon, we kindly ask for your feedback on the rebuttal. Have we addressed your concerns? Is there anything else we can improve? Thank you again for your time and effort. Authors
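For concreteness, the single projected-gradient reweighting step debated in this thread (decreasing the discrete MMD$^2$ over the particle weights, in lieu of solving the barycenter sub-problem eq. (18) exactly) can be sketched as follows. This is a minimal illustration under our own simplifying assumptions: discrete source and target samples, a Gaussian kernel, and a fixed step size. The names `mmd_weight_step` and `project_simplex` are hypothetical and not taken from the authors' released code.

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def project_simplex(w):
    # Euclidean projection onto the probability simplex (sort-based algorithm).
    u = np.sort(w)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(w) + 1) > 0)[0][-1]
    return np.maximum(w - css[rho] / (rho + 1.0), 0.0)

def mmd_weight_step(X, w, Y, eta=0.1, sigma=1.0):
    """One projected-gradient step on the particle weights w, decreasing the
    discrete MMD^2 between sum_i w_i delta_{x_i} and the uniform empirical
    measure on Y. The gradient of w^T Kxx w - 2 w^T Kxy v in w is
    2 (Kxx w - Kxy v)."""
    Kxx = gauss_kernel(X, X, sigma)
    Kxy = gauss_kernel(X, Y, sigma)
    v = np.full(len(Y), 1.0 / len(Y))
    grad = 2.0 * (Kxx @ w - Kxy @ v)
    return project_simplex(w - eta * grad)
```

Since the objective is convex in the weights (the kernel matrix is positive semi-definite), a small enough step cannot increase it, which is consistent with the single-inexact-step implementation described above; weights of particles far from the target support shrink, matching the vanishing-weight pictures in Figs. 3-4.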
Summary: This manuscript proposes a new gradient flow over probability and non-negative measures, termed the interaction-force transport (IFT). The flow is based on the inf-convolution of the Wasserstein Riemannian metric tensor and the spherical maximum mean discrepancy (MMD) Riemannian metric tensor. The authors provide a number of theoretical results related to their proposal: global exponential convergence guarantees for the MMD and Kullback-Leibler divergence energies. While convergence results are available for the KL divergence energy, not much is known theoretically in the context of the MMD energy. The authors then introduce a particle gradient descent algorithm for IFT, composed of two gradient-flow steps (one to update particle locations via the Wasserstein step and one to update particle weights via the MMD step), and show two proof-of-concept examples that validate the developed theory. Strengths: 1. The paper is well-written and well-organized. Despite heavy notation throughout, the authors do a nice job introducing the notation and staying consistent throughout the manuscript. The literature review on related methods is also quite thorough. 2. In my opinion, this is a significant theoretical contribution to the machine/statistical learning literature. The topic is timely and will appeal to a broad NeurIPS audience. 3. While the paper is theoretical in nature, I appreciate that the authors provide an implementation of the proposed gradient flows. The presented algorithm is easy to understand, ensuring reproducibility. 4. The contribution of this manuscript is original in many aspects: the new definition of gradient flows, the proofs of global convergence for the MMD and KL energies, and the comparison to previously defined methods for the MMD flow that required a heuristic noise injection step. Weaknesses: I did not identify many weaknesses in this work.
While I understand that the contributions are theoretical in nature, I wish the authors had presented more examples/comparisons to the work of Arbel et al. [2019] in an appendix. The presented examples are sufficient as a proof of concept. Technical Quality: 4 Clarity: 4 Questions for Authors: Could the authors comment on the roughness of the standard deviation bands in Figure 2 for the proposed method? Also, while this limitation is already mentioned briefly in the discussion, I feel that a bit more could be said about the scalability of the presented algorithm. This does not necessarily have to be included in the main body of the paper, but rather in the appendix where the algorithm is presented. Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Limitations were adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to carefully read and review the paper. We appreciate that you have noticed the many subtle features we put into the manuscript. Thank you for the kind summary in the "Strengths" section. Below, we respond to a few concerns and suggestions. > the roughness of the standard deviation bands in Figure 2 for the proposed method? Thank you for the careful read and for noticing the details. First, we believe the comment refers to the std band in Fig. 2, which occurs around and below the 10^{-3} magnitude. We suspect this is because the y-axis is reported on a log scale. It is then expected that the bands get rougher the lower they get. We observe that the same happens to the "MMD flow+noise" when its performance eventually improves (over more iterations); so the effect is not specific to IFT. > this limitation is already mentioned briefly in the discussion, I feel that a bit more could be said about scalability of the presented algorithm. This does not necessarily have to be included in the main body of the paper, but rather in the appendix where the algorithm is presented. Indeed, we agree that scalability is important and deserves a more detailed discussion, at least in the technical details in the appendix. First, we note that our current implementation only looks at the full-batch case. Here, IFT performs very well, as in, e.g., Fig. 2. For larger data sets in higher dimensions, the vanilla MMD might become limiting. Additionally, to speed up the implementation and hence improve scalability, we have reported that we perform a single step of projected gradient descent instead of computing a full solution to the optimization problem (18). This is effective as (18) is a convex program. We note that we have not yet explored stochastic algorithms; we have focused on full-batch so far. Hence stochastic algorithms and parallel implementation could be future directions to improve scalability.
Since we are also interested in applications to generative models, there are possibilities such as deep-net features inside the kernels, as used in MMD-GAN works. > more examples/comparisons to the work of Arbel et al. [2019] in an appendix. Thank you for the suggestion -- we agree. We have now added new experiments, whose results we report in the PDF: - experiments in higher dimensions, vs. Arbel et al. 2019; - new experiments that use the Wasserstein-Fisher-Rao flow of the MMD energy. To the best of our knowledge, this is the first implementation of the MMD energy in this flow; - many miscellaneous improvements in terms of scalability and implementation, as suggested by the review. We will provide the details in the appendix of the revised manuscript as you suggested. --- Rebuttal 2: Comment: After considering all of the reviews and the authors' rebuttal, I am inclined to keep my rating as is. I appreciate the additional experiments provided in the PDF file as part of the rebuttal. --- Rebuttal Comment 2.1: Title: Thank you for reading the rebuttal Comment: Dear reviewer, Thank you for taking the time to read and respond to our rebuttal! Your feedback has helped improve our manuscript. Authors
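On the stochastic-algorithms direction raised in this thread: a standard building block such a variant could use (not part of the paper; this is the classical estimator of Gretton et al., 2012) is the unbiased U-statistic estimate of MMD$^2$ computed on minibatches, so the kernel cost is quadratic in the batch size rather than in the full dataset size. A hedged numpy sketch:

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    # Unbiased U-statistic estimate of MMD^2 (Gretton et al., 2012):
    # diagonal (i = j) terms are excluded from the within-sample averages.
    m, n = len(X), len(Y)
    Kxx = gauss_kernel(X, X, sigma)
    Kyy = gauss_kernel(Y, Y, sigma)
    Kxy = gauss_kernel(X, Y, sigma)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()
```

Averaging this estimator over minibatches gives an unbiased stochastic estimate of the full MMD$^2$, which is the usual entry point for the stochastic and parallel variants mentioned as future work above.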
Summary: This paper proposes a novel gradient flow geometry – interaction-force transport (IFT). It is theoretically shown that IFT gradient flow has global exponential convergence guarantees both for MMD and KL energies. The authors propose an algorithm based on the JKO-splitting scheme and test it on examples with 2D Gaussians and Gaussian mixture. Strengths: The paper provides the proof of the exponential convergence guarantees for IFT gradient flow with MMD energy. Weaknesses: I am not convinced that the established theoretical results and provided experimental justifications are sufficiently significant. The main theoretical contribution of the current paper is the proof of exponential convergence guarantees for their IFT gradient flow both with MMD and KL divergence energy. (Actually, the proofs of these results do not seem to be very impressive, e.g., the proof of Proposition 3.8 immediately follows from two well-known facts. Anyway, it is not my major concern.) The authors state that the established convergence guarantees (especially, for the MMD energy) are the main motivation for considering the IFT gradient flow. However, for the KL case, it is not a surprising property since even ordinary Wasserstein flow with KL divergence energy has the same exponential convergence guarantees. For the MMD case, the results are quite novel, although there exist several other works which prove some convergence properties of flows with MMD energy (Arbel et al., 2019). Thus, I am wondering, are the provided proofs of exponential convergence rates actually important for the practical use of the designed algorithm? The empirical evaluation of the algorithms seems to be very limited. The algorithm is tested only in low-dimensional experiments using 2D Gaussians and Gaussian Mixtures which immediately raises questions regarding the scalability of the approach. 
Besides, the authors compare their approach only with Wasserstein flows with MMD energy (with or without noise injection) (Arbel et al., 2019; Korba et al., 2021). However, it is important to see how the algorithm behaves in comparison to flows with KL divergence energy as well. Overall, my main concerns are related to the questionable significance of the paper's results. The proof of exponential convergence rates for their IFT gradient flow alone does not seem to be a significant contribution. Meanwhile, the experimental evaluation needs to be considerably enhanced. *Minor*: - line 218: 'expontial' - typo Technical Quality: 2 Clarity: 3 Questions for Authors: Does your approach have some practical use cases? I suggest the authors improve the experimental part of their paper by - including experiments in dimensions larger than $d=2$ - performing comparisons with flows using KL divergence as the energy, e.g., with (Yan et al., 2023; Lu et al., 2019), which are cited in the paper. I am open to adjusting my score if the authors address these suggestions. **References.** M. Arbel, A. Korba, A. Salim, and A. Gretton. Maximum Mean Discrepancy Gradient Flow. arXiv:1906.04370, Dec. 2019. A. Korba, P.-C. Aubin-Frankowski, S. Majewski, and P. Ablin. Kernel Stein Discrepancy Descent. In Proceedings of the 38th International Conference on Machine Learning, pages 5719–5730. PMLR, July 2021. Y. Yan, K. Wang, and P. Rigollet. Learning Gaussian Mixtures Using the Wasserstein-Fisher-Rao Gradient Flow. arXiv:2301.01766, Jan. 2023. Y. Lu, J. Lu, and J. Nolen. Accelerating Langevin Sampling with Birth-death. arXiv, May 2019. Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have addressed the limitations of their approach in the discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive suggestions and the critical review. We have incorporated most of your suggestions, e.g., new experiments. We did find the one comparison you suggested, with KL inference, difficult, and we explain the reason below. > including experiments in dimensions larger than $d=2$ Done. The results (attached PDF) are consistent with the other experiments. > performing comparisons... (Yan et al., 2023, Lu et al., 2019) We agree that more experiments can strengthen the paper. We have run new experiments and reported them in the attached PDF, such as the WFR flows used by Yan et al. and Lu et al. First, we must note that KL minimization is NOT the topic of this paper: our focus is a type of (unbalanced-transport) gradient flows and optimization with applications to MMD-minimization tasks. IFT is tailored to address the gap in the literature on the "MMD flow". We also do not claim that MMD inference is superior to KL inference, while some other papers might do so. Furthermore, we look forward to applying our theory to generative models using MMD, e.g., [Galashov et al]. Nonetheless, we appreciate the reviewer raising this. We can think of two interpretations of the question: 1. KL energy with the proposed IFT flow: Recall that the MMD step is a discretization of the differential equation $$ \dot \mu =- \beta\cdot \operatorname{\mathcal{K}}^{-1} \left(\log \frac{\mathrm{d}\mu}{\mathrm{d}\pi} - ... \right) $$ One could view the update step as a step in the "kernel-mean-embedding space": let $e(\mu):=\operatorname{\mathcal{K}} \mu$ be the kernel mean embedding; then $$ e(\mu)^{\ell+1} \gets e(\mu)^{\ell} - \eta \beta \cdot \left(\log \frac{\mathrm{d}\mu^{\ell}}{\mathrm{d}\pi} -...\right) $$ However, there is no guarantee that the velocity $\log \frac{\mathrm{d}\mu^{\ell}}{\mathrm{d}\pi}-...$ lies in the RKHS.
Hence, this step is theoretically interesting, but it is unclear how the infinite-dimensional update can be implemented in a principled way. 2. KL energy with the WFR flow [Yan & Lu et al.]: Their methods are adaptations of the original Wasserstein-Fisher-Rao flow. Their cases belong to "KL-inference", where one only has access to the score (or potentials) of the target, e.g., $\nabla \log \pi$, instead of "MMD-inference" (our paper, Arbel et al., etc.), where we have access to the target $\pi$ via its samples but not the score $\nabla \log \pi$. Therefore, how to perform a "fair comparison" is unclear to us at the moment (e.g., how many samples from $\pi$? Noisy evaluation of the score?). In summary, we agree that the reviewer's proposal is interesting and can see room for future work. At this moment, we can only leave comparing MMD vs. KL as future work, as they apply to different tasks and settings (score-based vs. sample-based). We have now implemented the **WFR flow** of MMD. It works only slightly worse than IFT but does not come with guarantees. See our overall rebuttal summary and the PDF. > Proposition 3.8 immediately follows from two well-known facts... for the KL case, it is not a surprising property ... We have already clearly stated that it is standard (L229-230). Neither did we claim it is surprising. It is also immediately evident that KL is not the focus of this paper, as can be seen from the shortness of Sec. 3.3. Hence, we believe our current presentation is not likely to cause confusion. We emphasize that the discussion of KL is for completeness and to show that IFT enjoys the *best of both worlds*. Bonus: Prop. 3.8 is not as trivial as it seems -- we just fixed an error in the current revision: the LSI only holds along the mass-preserving spherical flows (SIFT) over $\cal P$, but not along the flow over $\cal M ^+$. This technicality does not change the ML applications, especially with MMD. But anyway, thanks for helping us notice this.
> For the MMD case, the results are quite novel... other works... (Arbel et al., 2019). Thus, I am wondering, are the provided proofs of exponential convergence rates actually important...? Thank you for the positive assessment. We will add a small discussion in the revision as per your comment. A sketch: (Arbel et al., 2019) does not contain a (global) convergence analysis. For example, their Prop. 2 states that the energy is non-increasing. However, this is not equivalent to convergence and is easily satisfied by many flows. The mathematical limitation of their "MMD flow" is that the MMD is in general not guaranteed to be convex along Wasserstein geodesics. [Arbel et al.] also contains a heuristic noise injection procedure, albeit without a good analytical characterization. In contrast, we believe establishing the first global (exponential) convergence in our paper is, needless to say, important. Not to mention, our theorems/proofs are clean and do not contain unverifiable assumptions. Therefore, in our case, we have both practical performance and theoretical guarantees. > Does your approach have some practical use cases? This paper is mainly methodological and analytical, but we also value practical application. MMD flow applications in the literature mainly include, e.g., - image processing, as done by [Hertrich et al.] and that research group at TU Berlin - generative models (MMD GAN) - the MMD two-sample test We also see some new directions for deep generative models based on gradient flows of the MMD, e.g., in [Galashov et al.] > Strengths: The paper provides the proof of... We emphasize that the proofs are a part, but not the whole, of our contributions, which also include the discovery of the IFT **gradient structure**, the implementable algorithms, and the PDE theory of the dissipation mechanism of the MMD and its inf-convolution with Wasserstein (e.g., Prop 3.2, Thm 3.6).
The work also advances the state-of-the-art understanding of the "MMD gradient flow", which already has a sizable literature. ## Reference Galashov A, de Bortoli V, Gretton A. Deep MMD Gradient Flow without adversarial training. arXiv; 2024 --- Rebuttal 2: Title: End of discussion period approaching Comment: Dear reviewer, Thank you for your feedback on our manuscript. We have carefully considered your comments and suggestions and have made the revisions. Per your suggestions, we have also included new experiments and done our best to answer your questions. As the discussion period will be closed soon, we kindly ask for your feedback on the rebuttal. Have we addressed your concerns? Is there anything else we can improve? Thank you again for your time and effort. Authors --- Rebuttal Comment 2.1: Comment: I thank the authors for their answers to my questions and concerns. First, I appreciate that you conducted a moderate-dimensional (d=100) experiment with a mixture of 3 Gaussians as the target and included the comparison with some of the requested approaches. I expected that you would also provide some figures visualizing the obtained results (although I understand that it might be quite tricky for d>2). Second, as you explain, the practical implementation of KL energy with the proposed IFT flow (requested by me and another reviewer) is out of the scope of this paper. From my point of view, you should state this directly in the paper; otherwise, the existence of a whole section of the paper devoted to this case seems confusing. Third, I am still not sure that the provided theoretical results, with only moderate-dimensional experiments (up to d=100) with Gaussians, are solid enough to be published. I see that the limited experimental evaluation ('proof-of-concept' type of experiments) was noted by other reviewers too. Respecting the time spent by the authors on running the experiments, I *adjust my score* accordingly.
Meanwhile, I am looking forward to further discussion with other reviewers and Area Chairs. --- Reply to Comment 2.1.1: Title: Thank you for responding to our rebuttal Comment: Dear reviewer, Thank you for considering our rebuttal. We appreciate your feedback and that you have adjusted your score accordingly. We agree and will indeed provide a detailed explanation of the sample-based (MMD energy) vs. score-based (KL energy) settings, as outlined in our rebuttal text, especially around Sec 3.3. We respect the reviewer's third point. Indeed, our experiments may be "proof of concept". Our hope is to propose the IFT "gradient structure" in this paper (e.g. K_IFT in eq (7) ) and study its properties (e.g. Theorem 3.6). In view of the already sizable literature on MMD flows started by Arbel et al. (2019), we believe that the proposed gradient structure is a significant contribution and will generate useful mathematical insights for ML researchers working on related topics. From the technical perspective of gradient flows, the discovery of a new gradient structure is already quite non-trivial. That is our intention in this paper. However, we perfectly respect that the reviewer may have a different perspective based on their expertise. We will do our best in the next revision to make our insight useful for a wider audience. Thanks again, Authors
Summary: This paper proposes a novel gradient flow geometry (IFT), based on the infimal convolution of the Wasserstein tensor with the MMD tensor. For this geometry, the authors show global exponential convergence guarantees for both MMD and KL energies. They then develop an algorithm for the IFT gradient flow and test it on an MMD inference task, showing empirically that it avoids the mode collapse exhibited by the flow of Arbel et al. Strengths: 1. The exposition clarity is excellent. The authors do a great job of positioning their work relative to existing gradient flow works. 2. In introducing a novel gradient flow geometry and showing favorable convergence characteristics, the work has good potential to inspire follow-on works. Weaknesses: 1. The experiments run are relatively simple and low-dimensional, and it is not clear how practical the method would be for more realistic application scenarios. 2. There is no example comparison of behavior for KL, which would have been nice to see. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Looking at the numerical example, have you tried any heuristic approaches, e.g. some sort of branching, for eliminating and repopulating particles when weights get very low on certain particles? It seems such behavior might improve performance empirically. Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Limitations are well-acknowledged by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the fair assessment and constructive suggestions. We are glad that the exposition of the paper is easy to follow -- thank you for this feedback. We have incorporated some of your suggestions and left more challenging ones for future work. > experiments run are relatively simple and low-dimensional, and it is not clear how practical Your assessment is correct -- indeed our experiments are only a proof-of-concept. We have now included results in higher dimensions in the attached PDF. We also plan to investigate more realistic applications in the future. > numerical example, have you tried any heuristic approaches, We appreciate your constructive suggestions. First, we stated at the end of Sec.5 (L317-319) that there are simple improvements that can be made to the algorithm, and we deliberately left them out to keep the paper focused; almost playing handicapped. The IFT still outperforms the pure Wasserstein flow. Second, thank you for the valuable suggestions. We believe they can definitely help with practical performance. As we reported in the paper, we actually let IFT run with a handicap (i.e. every IFT iteration counts as two iterations). We did so to show that IFT already has an advantage without further heuristic tricks. We have focused on implementing the WFR flow during the rebuttal to provide another new algorithm for comparison. We will further investigate your suggestions in the near future and comment in our Sec.5 if any of the tricks help. > comparison of behavior for KL We agree that more experiments can strengthen the paper. A direct comparison between the MMD and KL energies is difficult, as we explain below. However, we did run the newly implemented WFR flow and report the results in the attached PDF. First, we must note that KL minimization is NOT the topic of this paper: our focus is a class of (unbalanced-transport) gradient flows and optimization with applications to MMD-minimization tasks. 
IFT is tailored to address the gap in the literature on "MMD flow". We also do **not** claim that MMD inference is superior to KL inference, while some other papers might do so. Furthermore, we look forward to applying our theory to generative models using MMD, e.g., [Galashov et al]. Nonetheless, we appreciate the reviewer raising this. We can think of two interpretations of the question: 1. KL energy with the proposed IFT flow: Recall that the MMD step is a discretization of the differential equation $$ \dot \mu =- \beta\cdot \operatorname{\mathcal{K}}^{-1} \left(\log \frac{\mathrm{d}\mu}{\mathrm{d}\pi} - \frac{\int \operatorname{\mathcal{K}}^{-1}\log \frac{\mathrm{d}\mu}{\mathrm{d}\pi}}{\int \operatorname{\mathcal{K}}^{-1}1}\right) $$ One could view the update step as a step in the "kernel-mean-embedding space": let $e(\mu):=\operatorname{\mathcal{K}} \mu$ be the kernel mean embedding of $\mu$; then the update rule is $$ e(\mu)^{\ell+1}\gets e(\mu)^{\ell} - \eta \beta \cdot \left(\log \frac{\mathrm{d}\mu^{\ell}}{\mathrm{d}\pi} -...\right) $$ However, there is no guarantee that the velocity $\log \frac{\mathrm{d}\mu^{\ell}}{\mathrm{d}\pi}-...$ lies in the RKHS. Hence, this step is theoretically interesting, but it is unclear how the infinite-dimensional update can be implemented in a principled way. 2. KL energy with the WFR flow: this case belongs to "KL-inference", where one only has access to the score (or potentials) of the target, e.g. $\nabla \log \pi$, instead of "MMD-inference" (our paper, Arbel et al., etc.), where we have access to the target $\pi$ via its samples but not the score $\nabla \log \pi$. Therefore, how to perform a "fair comparison" is unclear to us at the moment (e.g. how many samples from $\pi$? Noisy evaluation of the score?). In summary, we agree that the reviewer's proposal is interesting and we see room for future work. 
At this moment, we can only leave comparing MMD vs. KL as future work, as they apply to different tasks and settings (score-based vs. sample-based). We have now implemented the **WFR flow** of the MMD. It performs only slightly worse than IFT but does not come with guarantees. See our overall rebuttal summary and the PDF. ### Reference Galashov A, de Bortoli V, Gretton A. Deep MMD Gradient Flow without adversarial training. arXiv; 2024 --- Rebuttal 2: Title: End of discussion period approaching Comment: Dear reviewer, Thank you for your feedback on our manuscript. We have carefully considered your comments and suggestions and have made the revisions. We have also included new experiments and done our best to answer your questions. As the discussion period will be closed soon, we kindly ask for your feedback on the rebuttal. Have we addressed your concerns? Is there anything else we can improve? Thank you again for your time and effort! Authors --- Rebuttal Comment 2.1: Title: Thank you & keeping score as is Comment: Dear authors, Thank you for the clear and extensive response to my review. I think I will maintain my score as is, remaining slightly positive on the paper, given the proof-of-concept experiments and perhaps limited immediate impact. Reviewer --- Reply to Comment 2.1.1: Title: Thank you for reading our rebuttal Comment: Dear reviewer, Thank you for your feedback and for taking the time to read the rebuttal. Best regards, Authors
Rebuttal 1: Rebuttal: Dear reviewers, dear AC, We would like to thank all reviewers for their constructive feedback. We are glad that the majority of the reviewers found our paper easy to read and its contributions non-trivial. The major concern expressed by some reviewers is that more experiments would strengthen the paper -- which we have now added. Based on the reviewers' suggestions, we have included new experiments. The figures are reported in the attached PDF. For example, 1. experiments in higher dimensions $d>2$; 2. a new Wasserstein-Fisher-Rao gradient flow of the MMD energy; see the appendix below for details. This is the first implementation of the MMD energy in this flow to the best of our knowledge. Note that this is the same algorithm used in [Yan et al. & Lu et al.], as requested by one reviewer. As we are not in the score-based sampling regime, we have to use the MMD energy instead of the KL energy (mentioned in some reviews); 3. many other improvements in terms of scalability and practicality, as suggested by the reviewers. Therefore, we believe the reviewers' concerns have been addressed. We hope the reviewers will take our new results and improvements into consideration. If there are any further concerns, we are happy to discuss them during the discussion phase. Thank you for your time and consideration, Authors ## Appendix: Details on the newly implemented WFR flows for comparison (i.e., KL step for reweighting) In the second paragraph in Sec.3.4 (L248-256), we discussed the comparison with the Wasserstein-Fisher-Rao (WFR) flow of the MMD. Note that there are no sound convergence guarantees for this scheme yet. To put things in perspective, let us make it clear: Yan et al. and Lu et al. use variants of WFR flows, but with KL energy. We cannot directly compare with them, as we are in the sample-based regime (only have access to samples of $\pi$) and they are in the score-based regime (only have access to the score $\nabla \log \pi$). 
Nonetheless, per the reviewer's suggestion to compare with them, we have implemented the **WFR flow of the MMD energy**. Note that this is the first implementation of the MMD energy in this flow, to the best of our knowledge. We believe this has added value to the paper and to some extent addresses the reviewer's demand for more empirical comparisons. This amounts to the JKO splitting steps $$ \mu^{\ell+\frac12} \gets\arg\min_{\mu\in\cal P} F(\mu ) + \frac1{2\tau}W_2^2(\mu, \mu^\ell) \textrm{(Wasserstein step)} $$ $$ \mu^{\ell+1} \gets\arg\min_{\mu\in\cal P} F(\mu ) + \frac1{\eta}{\mathrm{KL}}(\mu, \mu^{\ell+\frac12}) \textrm{(KL step)} $$ As in the Wasserstein step, in practice we use the explicit Euler scheme, which boils down to **entropic mirror descent**. As is well known, especially in the optimization literature, the entropic mirror descent step can be implemented as a multiplicative update of the weights (or density), i.e., suppose $x_i^{\ell+1}$ is the new particle location after the Wasserstein step; then we update the weight vector $\alpha$ via $$ \alpha_i ^{\ell+1} \gets \alpha_i ^{\ell} \cdot \exp \left( -\eta \cdot \frac{\delta F}{\delta \mu}[\mu^\ell] (x^{\ell+1}_i) \right) $$ where the Riemannian velocity $\frac{\delta F}{\delta \mu}[\mu^\ell]$ is the same as used in the Wasserstein step above. For the MMD, it is given by (already provided in the manuscript) $$\frac{\delta F}{\delta \mu}[\mu^\ell]=\int k(x, \cdot ) (\mu^\ell -\pi )(\mathrm d x)$$ We have added more details to the appendix of the revised manuscript. Numerical results are reported in the PDF below. Pdf: /pdf/717b62f8271a7c8e19ef9f7f4cbcd6f44180be59.pdf
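To make the split update concrete, below is a minimal NumPy sketch of one WFR-style iteration for the squared-MMD energy: a Wasserstein step that moves weighted particles along the negative gradient of the first variation, followed by the multiplicative (entropic mirror descent) KL step on the weights. The Gaussian kernel, step sizes, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gauss_k(X, Y, s=1.0):
    # Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 s^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def first_variation(X, alpha, Y, s=1.0):
    # (delta F / delta mu)(x_i) = int k(x_i, .) d(mu - pi), with
    # mu = sum_j alpha_j delta_{x_j} and pi the empirical measure of Y
    return gauss_k(X, X, s) @ alpha - gauss_k(X, Y, s).mean(axis=1)

def mmd2(X, alpha, Y, s=1.0):
    # squared MMD between the weighted particles and the target samples
    return (alpha @ gauss_k(X, X, s) @ alpha
            - 2 * alpha @ gauss_k(X, Y, s).mean(axis=1)
            + gauss_k(Y, Y, s).mean())

def wfr_step(X, alpha, Y, tau=0.2, eta=0.2, s=1.0):
    # Wasserstein step: move particles along -grad_x (delta F / delta mu)
    Kxx, Kxy = gauss_k(X, X, s), gauss_k(X, Y, s)
    grad_v = -(((X[:, None, :] - X[None, :, :])
                * (alpha[None, :, None] * Kxx[:, :, None])).sum(axis=1)
               - ((X[:, None, :] - Y[None, :, :])
                  * Kxy[:, :, None]).mean(axis=1)) / s ** 2
    X = X - tau * grad_v
    # KL step: multiplicative reweighting (entropic mirror descent),
    # then renormalize back onto the simplex
    alpha = alpha * np.exp(-eta * first_variation(X, alpha, Y, s))
    return X, alpha / alpha.sum()
```

Dropping the reweighting line recovers a plain (Arbel et al.-style) particle MMD flow, so the snippet also shows exactly where the Fisher-Rao part enters.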
NeurIPS_2024_submissions_huggingface
2024
Regret Minimization in Stackelberg Games with Side Information
Accept (poster)
Summary: The paper examines online learning within Stackelberg games, incorporating additional contextual information. Specifically, at each round $t$, both the follower and the leader observe a shared context $z_t$, which impacts their respective utilities. The leader, who is also the learner in this online learning framework, then selects a mixed strategy $x_t$ from its set of possible actions. Following this, the follower observes the leader's actions and chooses a best-response action based on both the leader's action and the given context. The learner's objective is to choose a sequence of probability distributions that maximizes overall utility. The authors evaluate the performance of no-regret algorithms using the concept of *"policy regret,"* aiming to achieve sublinear regret relative to the optimal *"fixed contextual policy"* that assigns a probability distribution to each context. The authors present several positive and negative results: 1. When the context and the type of players are adversarially selected, there is no $o(T)$-regret algorithm. 2. When the types of players arrive according to a fixed probability distribution but the contexts are adversarially selected, the authors provide an $O(T^{1/2})$-regret algorithm. 3. When the contexts arrive according to a fixed probability distribution but the types of followers are adversarially selected, the authors provide an $O(T^{1/2})$-regret algorithm. 4. Finally, the authors extend their results to the bandit case by providing $O(T^{2/3})$-regret algorithms. Strengths: I find the problem of online learning in Stackelberg games, where both the follower and the leader observe contextual side information, to be well-motivated and of significant interest to the game theory and learning community. The authors have thoroughly examined several aspects of the problem. 
Despite the negative results in the case of adversarially selected contexts and types, they provide positive results in the stochastic case. Furthermore, the results appear solid and present considerable technical interest. Weaknesses: The regret bounds for the bandit case are not tight. However, I believe the paper provides interesting first results for an interesting problem. Technical Quality: 3 Clarity: 3 Questions for Authors: What are the main challenges of extending your results to infinite action games with convex structure? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [*“The regret bounds for the bandit case are not tight. However I believe the paper provides interesting first results for an interesting problem.”*] We would like to point out that it is actually unclear whether or not an algorithm exists for the bandit settings which achieves better than O(T^{2/3}) regret. While we hypothesize that getting O(T^{1/2}) regret should be possible, the algorithmic ideas used to get O(T^{1/2}) regret in the online Stackelberg setting without side information do not extend to our setting in a straightforward way. [*“What are the main challenges of extending your results in infinite action games with convex structure?”*] Since the leader’s strategy is a probability distribution over actions, one could view the leader as picking from infinitely-many actions. If you are asking about how one would extend our results to settings in which the leader’s strategy is not a probability simplex but is instead some general convex set, the problem becomes more tricky. To generalize to this setting, one would need to be able to reason about how different follower types would best-respond to different leader strategies. One could then hope to leverage the structure of this best-response in order to derive a computationally-tractable algorithm (i.e. in a way analogous to how we use Lemma 4.4). --- Rebuttal Comment 1.1: Title: Reviewer's Response Comment: I have read the authors' response and I plan to keep my score.
Summary: The paper studies an online Stackelberg game, where the leader plays with a different follower type in a different context in each time step. The paper takes an online learning approach to solving this problem. It first shows that, given that the context space is infinite, it is not possible to achieve sublinear regret, via a reduction to an online linear thresholding problem. Therefore, the authors consider relaxed cases, where either the context or the follower type is chosen stochastically in each round. In both cases, the authors present algorithms that guarantee sublinear regret, both for the full feedback and bandit feedback settings. Strengths: (+) The model is well motivated. (+) The paper is clear and well presented. All analyses look sound and rigorous. (+) The results presented are very complete, covering all cases of the model and both positive and corresponding negative results. Weaknesses: (-) The main impossibility result appears to rely on the fact that the context space is infinite and the model is non-linear w.r.t. the context. (-) The analysis of the relaxed cases (stochastic follower type, or stochastic context) looks fairly standard and the results are expected. So overall the paper is more like an application of existing techniques to a problem motivated by a new context. Technical Quality: 3 Clarity: 3 Questions for Authors: Would the impossibility result change if the context space is finite, or if the players' utility functions are linear w.r.t. the context? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors didn't seem to have addressed this explicitly, but the work is theoretical anyway, so this is minor. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [*“The main impossibility result appears to rely on the fact that the context space is infinite and the model is non-linear w.r.t. the context.” and “Would the impossibility result change if the context space is finite, or if the players' utility functions are linear w.r.t. the context?”*] We would like to emphasize that most work on learning in contextual settings focuses on the setting where the context space is infinite or large. With that being said, our impossibility result should carry over to the setting where the number of contexts is finite, but exponential-in-T. (This is because the impossibility result for no-regret learning in online linear thresholding carries over to the setting where the adversary can only select from an exponentially-large grid of uniformly-spaced points in [0, 1].) We would also like to clarify that our impossibility result does not rely on the leader's utility being non-linear with respect to the context. While we allow non-linear relationships in all of our positive results, the leader and follower utility functions actually have no explicit dependence on the context in our lower bound (see line 518). The intuition for the lower bound is that while the leader's utility may not explicitly depend on the context, the adversary can “hide” information in the context about the follower's type. This ability to “hide” information makes it hard for the leader to compete with the best-in-hindsight policy. [*“The analysis of the relaxed cases (stochastic follower type, or stochastic context) looks fairly standard and the results are expected. So overall the paper is more like an application of existing techniques to a problem motivated by a new context.”*] We would like to clarify that our positive results do not follow from existing techniques, for several reasons. We will update the paper to better explain the technical innovation based on the reviewer's feedback. 
The setting of Section 4.1 (and its bandit analogue) has not been studied in the literature on non-contextual Stackelberg games. As such, there is no existing technique to apply in this setting. Our algorithm in Section 4.2 plays Hedge over a finite set of policies, while previous work on non-contextual Stackelberg games plays Hedge over a finite set of mixed strategies. While the two ideas may seem similar at this level of abstraction, showing that it suffices to consider a finite set of policies (each of which maps to a finite set of context-dependent actions) is non-trivial and requires bounding a discretization error which does not appear in the setting without side information. We will update the final version to better explain the technical innovation. [*[On limitations]: “The authors didn't seem to have addressed this explicitly, but the work is theoretical anyway, so this is minor.”*] We discuss (what we view as) the most important limitations of our work (allowing intermediate forms of adversary, better regret rates in the bandit setting) in the Conclusion as directions for future work. However, we would be happy to add further discussion on other limitations in our next revision.
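For readers unfamiliar with the subroutine named above, here is a generic textbook sketch of Hedge (exponential weights) over a finite expert set; in the paper's construction the "experts" would be the finitely many discretized contextual policies, which is where the extra discretization-error term arises. The array shapes and learning rate are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def hedge_regret(losses, eta):
    """Run Hedge over N experts and return its regret.

    losses: (T, N) array with entries in [0, 1], one column per expert.
    Returns the algorithm's cumulative expected loss minus the loss of
    the best fixed expert in hindsight.
    """
    T, N = losses.shape
    log_w = np.zeros(N)           # log-weights; start uniform
    alg_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()              # play the normalized weight vector
        alg_loss += p @ losses[t]
        log_w -= eta * losses[t]  # exponential weight update
    return alg_loss - losses.sum(axis=0).min()
```

With the classical tuning $\eta = \sqrt{8 \ln N / T}$, this satisfies the standard guarantee $\text{Regret} \le \ln N/\eta + \eta T/8 = \sqrt{(T/2)\ln N}$ for losses in $[0,1]$.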
Summary: The paper presents a study of Stackelberg games with contextual side information, which impacts the players' strategies in a game-theoretic setting. The authors introduce a framework for analyzing online Stackelberg games, where a leader faces a sequence of followers, and both or either sequences—contexts and follower types—can be adversarially chosen. The paper contributes by showing the limitations of traditional non-contextual strategies and offering new algorithms that can handle stochastic elements in either the context or the follower sequences. Strengths: - The paper addresses a novel aspect of Stackelberg games by incorporating side information, which is realistically present in many practical applications but often ignored in theoretical models. - The paper is technically sound with rigorous proofs and a clear exposition of both theoretical and practical implications of the findings. - The paper is well-written and organized. Concepts are introduced systematically, and the flow from problem statement to results is logical and easy to follow. Weaknesses: - The paper could improve by providing numerical experiments or case studies that demonstrate the efficacy of the proposed algorithms. - The practical implications are clear for certain fields, but the paper could further elaborate on how these findings might influence other areas of research or industry applications. Technical Quality: 3 Clarity: 3 Questions for Authors: - The paper would definitely benefit from the addition of numerical experiments or case studies. - Could the authors provide more clarity on how the adversarial model for context selection was validated? Are there empirical data or specific scenarios where this model reflects real-world conditions? - How would the authors compare the results with existing methods for handling contextual information in game theory, such as contextual bandits or online learning with expert advice? 
Can the authors comment on how their approach compares to existing methods in terms of computational efficiency and practical deployability in real systems? - How does the proposed algorithm perform if the model of side information is mis-specified? For instance, if the actual distribution of contexts or follower types deviates significantly from the stochastic model assumed, what is the impact on the regret bounds? - Several theoretical assumptions are crucial and well presented in the paper. How sensitive are the main results to these assumptions? If some of these assumptions might not hold, how would this affect the applicability of the results? - The paper could benefit from a deeper discussion on the limitations regarding the scalability of the algorithms when the number of contexts or follower types is large. What are the computational implications? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have not explicitly addressed limitations or potential negative societal impacts of their work. A clearer identification of potential limitations, such as dependency on the accurate modeling of side information and follower behavior, would strengthen the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [*“The paper would definitely benefit from the addition of numerical experiments or case studies.”*] We hope our numerical simulations address your concerns regarding the lack of experimental results. [*“Could the authors provide more clarity on how the adversarial model for context selection was validated? Are there empirical data or specific scenarios where this model reflects real-world conditions?”*] The common motivation for studying adversarial models in online learning is not that we believe that the data-generating process is adversarial, but that we believe it is not i.i.d. If an algorithm can handle adversarially-generated data, then it can also handle data which is not i.i.d. in some way, while being agnostic as to how the data is generated. In Stackelberg games with side information, there are many examples of settings in which the contexts would not be i.i.d. In the wildlife protection and patrol boat examples, weather conditions for each day are not sampled i.i.d. from some distribution; instead, there is some dependence from day-to-day. [*“How would the author compare the results with existing methods for handling contextual information in game theory, such as...”*] We would like to emphasize that existing methods for handling contextual information (e.g. contextual bandit algorithms, online learning with expert advice) are usually not studied in strategic settings where there is more than a single agent. In general, such single-agent methods would perform poorly in game-theoretic settings, since they do not take the other strategic agents into consideration when making decisions. While our problem may be viewed as a special case of online contextual adversarial learning, applying algorithms for this problem out-of-the-box to our setting generally results in worse regret rates and would require more stringent assumptions on the data-generating process than what we require. (See line 485 for more details.) 
Others have also studied various forms of side information in different game settings. We include a discussion on these works in Appendix A. [*“Can the authors comment on how their approach compares to existing methods in terms of computational efficiency...”*] The computational complexity of our algorithms essentially matches those for no-regret learning in Stackelberg game settings without contexts (up to small differences in polynomial factors). While both contextual and non-contextual algorithms for no-regret learning in Stackelberg games have exponential per-round complexity, this is unavoidable due to an existing hardness result for the non-contextual setting (see Li et al. [23]). With that being said, we believe that studying special cases of both the contextual and non-contextual settings where efficient learning is possible is an interesting and important direction for future research. [*“How does the proposed algorithm perform if the model of side information is mis-specified? For instance...”*] Thanks for the interesting question. While our impossibility result of Section 3 rules out the ability to learn when the distributions are arbitrarily misspecified, no-regret learning may still be possible under other, intermediary, forms of adversary (such as the one you propose). We highlight this as an interesting direction for future work in Section 6 (Conclusion). [*“Several theoretical assumptions are crucial and well presented in the paper. How sensitive are the main results to these assumptions?”*] The assumptions made through Section 4 (e.g. known utility functions, follower being one of K types, finite leader/follower actions, full feedback) are standard in the literature on no-regret learning in (Stackelberg) games and are reasonable under many settings. If you are referring to the assumptions which are unique to Section 5, the results are mixed. 
We believe that our results in Section 5 could be extended to handle adaptive adversaries by using a more clever exploration strategy. However, our algorithmic ideas in Section 5 do not readily extend to the setting in which the follower’s utility also depends on the context. We view relaxing this assumption as an interesting direction for future research. [*“The paper could benefit from a deeper discussion on the limitations regarding the scalability of the algorithms when the number of contexts or follower types is large. What are the computational implications?”*] We are happy to add a deeper discussion on computational runtime in our next revision. In short, our runtimes have no dependence on the number of contexts, and we inherit the exponential-in-K runtime from the non-contextual Stackelberg game setting (where K is the number of follower types). The implication of this is that while our results scale well to settings with a large (or infinitely-many) number of contexts, the number of different follower types should be small/constant. [*“The authors have not explicitly addressed limitations or potential negative societal impacts of their work. A clearer identification of..”*] We discuss (what we view as) the most important limitations of our work (allowing intermediate forms of adversary, better regret rates in the bandit setting) in the Conclusion as directions for future work. However, we would be happy to add additional discussions on the topics you propose. We chose not to discuss the societal implications of our work because our results are largely theoretical. With that being said, we anticipate that any societal implications of our work will be positive, since algorithms for learning in Stackelberg games are usually deployed in socially-beneficial domains. For example, in airport security, patrol strategies for drug-sniffing dogs could be modified to take factors such as the time of year, airport congestion, etc. into consideration. 
In wildlife protection domains, park rangers’ patrol schedules can be informed by things like observed tire tracks, or the current weather conditions. --- Rebuttal 2: Title: Thank you for your response Comment: I really appreciate your detailed response to my questions and additional experiments. After reading the authors' responses and other reviewers' comments, I will keep my score.
Summary: This paper studies regret minimization in Stackelberg games with side information, which considers the additional information available to each player. The paper finds that achieving no-regret learning is impossible in fully adversarial settings. However, it demonstrates that no-regret learning is achievable in scenarios where either the sequence of contexts or followers is chosen stochastically. Strengths: The idea of using side information to learn to play Stackelberg games is interesting. Compared to previous work on learning in Stackelberg games, this paper takes into consideration the additional information available to both the leader and followers at each round, which is more complicated and realistic. The paper provides an impossibility result and also identifies a setting where no-regret learning is possible. An algorithm is also provided to achieve no-regret. Weaknesses: This paper lacks experimental results to verify the theoretical analysis. Technical Quality: 4 Clarity: 3 Questions for Authors: I do not have any questions for the authors. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The limitations have been addressed properly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [*”This paper lacks experimental results to verify the theoretical analysis.”*] We hope our numerical simulations address your concerns regarding the lack of experimental results. --- Rebuttal Comment 1.1: Comment: Thank you for your response and the numerical simulation. I will keep my score.
Rebuttal 1: Rebuttal: Thanks for taking the time and effort to review our submission. To summarize, we initiate the study of Stackelberg games with side information, which, despite the presence of side information in many Stackelberg game settings, has not received attention from the community so far. We provide algorithms for regret minimization in Stackelberg games with side information, whose analysis requires new ideas and techniques compared to algorithms for learning in Stackelberg games without side information. At the request of reviewers C9HM and YfNJ, we have included numerical experiments (summarized in the attached pdf) which show that our algorithms compare favorably to algorithms for learning in Stackelberg games which do not take side information into consideration. If you would like to see our code, we are happy to provide an anonymized link to the AC (as per the NeurIPS rebuttal instructions). Please find our responses to your other questions below. Pdf: /pdf/6d6587cc0192dbf367054b0edfbcaa9c2235f32d.pdf
NeurIPS_2024_submissions_huggingface
2024
Semidefinite Relaxations of the Gromov-Wasserstein Distance
Accept (poster)
Summary: The paper explores a semi-definite programming (SDP) based relaxation of the popular Gromov-Wasserstein (GW) problem. The GW problem is an instance of a non-convex quadratic program (QP). Standard SDP relaxation of QPs has been explored in the literature. The present work leverages this SDP relaxation result. However, this standard SDP relaxation is not sufficient, as the resulting minimization problem is unbounded from below. Hence, the paper tightens the relaxation via additional constraints which are motivated from the GW problem. Empirical evaluations are performed to showcase the effectiveness of the obtained solution, both in terms of quality and runtime efficiency. Strengths: - The proposed SDP relaxation based approach for the GW problem is an interesting idea and has not been explored in the context of GW (to the best of my knowledge). This reformulation has interesting consequences, such as an approximation ratio (Eqn 4) which can be computed from the solution obtained via the proposed approach. This ratio is lower bounded by 1 and equals 1 only if the obtained solution is globally optimal. Hence, this approach provides an optimality certificate for a non-convex problem. - While no empirical results were shown in this regard, the proposed approach allows using a general GW cost tensor L. Existing approaches such as (Peyré et al., 2016) can only employ decomposable costs (such as those obtained via L2 or KL loss). - The paper empirically evaluates the effectiveness of the proposed approach in terms of computational efficiency and quality (lower objective is better). While the proposed approach obtains a better objective compared to the current state-of-the-art GW-CG, its runtime (for problems of size n = 6, 12, 20) is around 500-150000 times higher than GW-CG (Table 1). 
While the heuristic solver (GW-PGD) proposed in Section 5 is faster than the proposed GW-SDP, it is still at least 250 times slower than GW-CG, and its objective becomes comparable to GW-CG as n increases. - The paper is well written, explaining the underlying concepts and the related works nicely. Weaknesses: - The paper provides a detailed discussion on the literature related to the quadratic assignment problem (QAP) and SDP relaxations of QPs. The key technical contribution of the proposed work w.r.t. Zhao et al. (1998) is removing the constraints related to \pi being a permutation matrix, which also allows handling the m \neq n setting (lines 125-137). - Very high runtime compared to GW-CG. This limits the practical utility of the proposed GW-SDP or GW-PGD. Technical Quality: 3 Clarity: 3 Questions for Authors: please see the strength and weakness sections. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
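The decomposable-cost point in the review can be sketched numerically. For the squared (L2) loss, the quartic GW energy collapses into a few matrix products, which is the structure decomposable-cost solvers exploit. The following toy check is illustrative only, with made-up sizes and random matrices, and is not from the paper:

```python
import numpy as np

# Toy check (illustrative, not from the paper): for the squared loss, the GW energy
#   sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 * pi[i,j] * pi[k,l]
# decomposes into matrix products instead of a quartic sum over a full cost tensor.
rng = np.random.default_rng(0)
n, m = 4, 5
C1 = rng.random((n, n)); C1 = (C1 + C1.T) / 2   # symmetric surrogate "distance" matrices
C2 = rng.random((m, m)); C2 = (C2 + C2.T) / 2
pi = rng.random((n, m)); pi /= pi.sum()          # an arbitrary coupling-shaped matrix

# Brute-force quartic sum over the full cost tensor L[i,j,k,l].
L = (C1[:, None, :, None] - C2[None, :, None, :]) ** 2
brute = np.einsum("ijkl,ij,kl->", L, pi, pi)

# Decomposed form: p^T (C1*C1) p + q^T (C2*C2) q - 2 <C1 pi C2^T, pi>,
# where p and q are the marginals of pi.
p, q = pi.sum(axis=1), pi.sum(axis=0)
decomposed = p @ (C1 * C1) @ p + q @ (C2 * C2) @ q - 2 * np.sum((C1 @ pi @ C2.T) * pi)

print(abs(brute - decomposed))  # agrees up to floating-point rounding
```

With a general, non-decomposable cost tensor L, only the brute-force form is available, which is why the proposed approach's ability to handle arbitrary L is noted as an advantage.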
Rebuttal 1: Rebuttal: *Detailed discussion on related work* Could we check if this comment was intended as a weakness? It does not sound like one. We assume this comment was misplaced. Could the Reviewer clarify? *High run-time* We agree with the Reviewer’s concerns about run-time. We address some of these concerns in the global response. We have been working on developing faster algorithms for solving the GW-SDP. Unfortunately, this is a difficult task. There are many promising directions to develop more scalable algorithms for solving the GW-SDP problem. Some of these were discussed in the manuscript. We discuss first-order methods in the response to another Reviewer. These directions are promising, but the scope of the work goes comfortably beyond the scope of a single paper. As a note, for the closely related semidefinite relaxation of the QAP, state-of-the-art methods are only able to solve problem instances where the dimension is around 40 [OWX:18]. The largest dimension in our experiments is 32, which is not far off the state of the art. *References* [OWX:18] D. E. Oliveira, H. Wolkowicz, & Y. Xu, (2018). ADMM for the SDP relaxation of the QAP. Mathematical Programming Computation, 10(4), 631-658. --- Rebuttal Comment 1.1: Title: Response to authors' rebuttal Comment: I thank the authors for their response. I also went through other reviews and their discussions. Please find my observations below: 1. The detailed discussion on QAP related work is not a weakness and is much appreciated. However, from a solution-approach point of view, the key technical contribution of this work seems to be only the removal of the constraints related to \pi being a permutation matrix, which also allows handling the m \neq n setting (lines 125-137). And this was done because the GW problem setting is also interesting for m \neq n. Hence, novelty seems limited (and this was the weakness part; perhaps my earlier statement was not very clear). 
Regarding novelty, please also see the next point. 2. The approximation ratio (Eqn 4) is also a novel contribution and interesting. However, in their global comments and one of the rebuttals to another review, the authors mention statements such as "... our work is the only work we are aware of that addresses global optimality ... " and "The first contribution is that we compute globally optimal solutions to the GW problem. We can prove that our solutions are globally optimal." I think this is a gross exaggeration of their contribution. My understanding of the contribution of this paper in this regard is as follows. Please excuse the verbosity. - The paper *does not* propose an algorithm that is guaranteed to converge to a globally optimal solution of the (discrete) GW problem. - The paper *can only confirm* whether the solution obtained by *its approach* has converged to a globally optimal solution. - The paper *has empirically shown* that in several (very) small-scale problem settings, it achieves global optimality. Whether similar observations would hold for larger settings is unclear. - However, given a problem instance, we do not know beforehand whether the proposed approach will converge sub-optimally or otherwise. - Given just a solution to the GW problem, the paper's approximation ratio cannot be used to check whether the given solution is optimal or not. - In the (limited) additional experiments during the rebuttal phase, the paper has also shown empirically that in the experiments that were performed, their approach reaches an optimal solution when m is a multiple of n. However, no theoretical justification is provided. 3. In the additional experiments done with more sample points for FW, the setting seems quite synthetic since the source and target points are sampled from a distribution (with no noise). In practice, the source and target datasets could be corrupted by noise/outliers. Say the source and target datasets had 5000 datapoints with some outliers as well. 
GW-SDP may be run by sampling 10 points from the source and target datasets. Does GW-SDP seem more susceptible to outliers, since only 10 points were taken and hence an outlier has a greater chance to influence the results? In certain applications where the GW transport map is required for domain adaptation [1], such sampling would again not be useful. [1] Gromov-Wasserstein Alignment of Word Embedding Spaces. EMNLP 2018. 4. While the authors have acknowledged and discussed it in the draft as well as in their response, the concerns about run-time and scalability of the approach, and its practical relevance, remain. --- Rebuttal 2: Title: Response to comment by Reviewer Xfoj Comment: When we wrote that "... our work is the only work we are aware of that addresses global optimality ... " and "The first contribution is that we compute globally optimal solutions to the GW problem. We can prove that our solutions are globally optimal.", it was to be understood in the context that global optimality was attained if the computed ratio was one. When writing the rebuttal, we believed that all Reviewers understood the conditions under which global optimality held. Hence, in writing a rebuttal, when summarizing our contributions, we did not think it was necessary to repeat the conditions under which global optimality held. Unfortunately, this summary has given the impression that we claimed global optimality in all instances. We do not claim such a thing. We hope it was clear from the original paper that we do not claim global optimality in all instances. We apologize for the confusion. This was not our intention. We hope the paper was very clear about its claims and limitations. > Given just a solution to the GW problem, the paper's approximation ratio cannot be used to check whether the given solution is optimal or not. To check if a solution is optimal, one typically needs to compute the dual problem (of some suitable form). 
Computing the dual is (sort of) unavoidable if you want a proof of optimality. The GW-SDP does this (in a way) because it is a convex program. Also, given the solution to the GW-SDP formulation, one can certify the optimality, or the gap to optimality, of any proposed solution to the GW problem. Compute the approximation ratio. If it is equal to one, the proposed solution is optimal. If it is close to one, we know it is quite good. > In the (limited) additional experiments during the rebuttal phase, the paper has also shown empirically that in the experiment that were performed, their approach reach optimal solution when m is a multiple of n. However, no theoretical justification is provided. We do not provide theoretical justification. In fact, we should not expect the SDP to provide exact solutions to all instances because the GW problem is not known to be tractable to solve. There must be some inputs for which the SDP is not exact. The surprising aspect here is that the SDP is exact for many inputs, and that is the point our experiments make. (As to providing theoretical justification, such as proving such a phenomenon holds for random instances, is a difficult research problem. We don't have an answer for this at the moment.) The Reviewer posed some further questions about outliers. We need a bit more time to respond to this and will provide a reply later.
Summary: The authors propose a new algorithm for measuring the Gromov-Wasserstein (GW) distance, a metric for assessing the similarity of point clouds in different spaces. Their algorithm formulates the computation of the GW distance as a quadratic programming problem, which is then solved by semidefinite relaxation. The authors conducted experiments using several synthetic datasets and confirmed that the proposed algorithm yields solutions with smaller objective function values compared to other algorithms. Strengths: * The proposed method is grounded in solid theoretical foundations. * The proposed method has been experimentally verified to produce solutions with smaller objective function values. * The paper is well-written, and its contributions are clearly described. Weaknesses: * The novelty is limited. It is well-known that the computation of GW can be reduced to a quadratic programming (QP) problem, and solving a non-convex QP through semi-definite relaxation is a very common approach in the field of optimization. Additionally, as the authors themselves point out, such methods have been proposed for the quadratic assignment problem (QAP), which is closely related to GW. Thus, the proposed method is merely a simple variant of these approaches, and its technical contribution is minimal. * The computational complexity of the proposed method. The proposed method requires solving an SDP with a matrix of size $mn \times mn$ as variables. Although SDPs can indeed yield globally optimal solutions with relatively low computational effort, it is well-known that the computational complexity increases sharply with the size of the problem. It is thus challenging to solve an SDP with a matrix of size $mn \times mn$ as variables in practical applications. In fact, Table 1 shows that for $n=20$, the computation time exceeds 200 seconds, indicating difficulties in applying the method to real-world problems. * Insufficiency of the experiments. 
The experiments conducted in the paper are all small-scale and limited to ten artificial datasets, which is insufficient to demonstrate the effectiveness of the proposed method. Technical Quality: 2 Clarity: 3 Questions for Authors: * The experiments emphasize the sparsity of the output of the proposed method, but what is the benefit of this sparsity? Additionally, why does sparsity emerge in the results? * The positioning of the Heuristic Solver proposed in Section 5 within this paper is unclear. Is this method one of the proposals of this paper? If so, it is necessary to explain its position within existing research and its motivation in more detail. If not, its contribution becomes unclear, and it is recommended to move it to the appendix. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: The authors are aware of the significant computational complexity and discuss potential directions to address this issue. While this acknowledgment is commendable, the substantial computational complexity remains a critical drawback of this method. Unless this issue is resolved, the overall contribution of the paper must be considered limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Limited novelty* We emphasize that the paper has multiple contributions. The first contribution is that we compute globally optimal solutions to the GW problem. We can prove that our solutions are globally optimal. There is no existing work (as far as we are aware), published or unpublished, that achieves this. This contribution alone, we believe, is significant. The second contribution is to point out that many existing methods do not compute globally optimal solutions. This point, we believe, is a very concerning development, and researchers working on GW need to be aware of it. The Reviewer is concerned with the limited novelty because the relaxation is a variant of an existing relaxation for the QAP. The point we make here is that this particular variant we propose is the simplest variant that answers the previous two questions. We hope the Reviewer appreciates the conceptual aspects of our paper, namely that global optimality and certificates of optimality are fundamental aspects of optimization. *Sparsity* When $m=n$, the feasible set of the GW problem is the set of doubly stochastic matrices. The extreme points of the set of doubly stochastic matrices are the collection of permutation matrices – this is known as the Birkhoff-von Neumann theorem. When $m \neq n$, the feasible set is known as the transportation polytope. The extreme points correspond to bipartite graphs (the two vertex sets are $\alpha$ and $\beta$) that have no cycles. These matrices are also sparse. The optimal solution of a convex program whose feasible region is the set of doubly stochastic matrices, or more generally the transportation polytope, tends to be sparse. This is because optimal solutions tend to lie on low-dimensional faces, which are sparse (this is a known phenomenon in convex geometry). In short, the optimal solution to the GW problem should generally be sparse. The solutions to GW-SDP are sparse, and this is a good sign. 
The solutions obtained by Conjugate Gradient tend to be sparse, which is also a good sign. On the other hand, if the solution is non-zero everywhere, we can be certain the solution is sub-optimal. This is the case for e-GW solvers, and the reason is that e-GW adds a bias to the objective; that is, they solve a slightly different problem. *Heuristic solver* We will place the heuristic solver in the appendix, as recommended. *Unless this issue is resolved, the overall contribution of the paper must be considered limited.* We hope the Reviewer will take into consideration that developing scalable algorithms for solving large-scale SDPs is an active research area and an open challenge. In fact, for the closely related semidefinite relaxation of the QAP, state-of-the-art methods are only able to solve problem instances where the dimension is around 40 [OWX:18]. The largest dimension in our experiments is 32, which is not far off the state of the art. There are many promising directions to develop more scalable algorithms for solving the GW-SDP problem. Some of these were discussed in the manuscript. We discuss first-order methods in the response to another Reviewer. These directions are promising, but the scope of the work goes comfortably beyond the scope of a single paper. We urge the Reviewer to moderate expectations as to what is technically feasible within the scope of a single paper. Again, we reiterate an earlier point, that there is no other work we are aware of that addresses the global optimality of the GW problem. Addressing global optimality is one of the key contributions of this paper. *References* [OWX:18] D. E. Oliveira, H. Wolkowicz, & Y. Xu, (2018). ADMM for the SDP relaxation of the QAP. Mathematical Programming Computation, 10(4), 631-658. 
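The sparsity argument above (optimal solutions lie on low-dimensional, sparse faces of the transportation polytope) can be illustrated with a small linear program. This sketch uses assumed sizes and a generic random cost; it is illustrative and not the paper's code:

```python
import numpy as np
from scipy.optimize import linprog

# Illustration (assumed setup, not from the paper): minimizing a generic linear
# cost over the transportation polytope returns a vertex, which is sparse.
rng = np.random.default_rng(0)
n, m = 6, 9
p = np.full(n, 1.0 / n)          # source marginal alpha
q = np.full(m, 1.0 / m)          # target marginal beta
c = rng.random(n * m)            # generic linear cost over vec(pi)

# Equality constraints: row sums of pi equal p, column sums equal q.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0
for j in range(m):
    A_eq[n + j, j::m] = 1.0
b_eq = np.concatenate([p, q])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
pi = res.x.reshape(n, m)
nnz = int((pi > 1e-9).sum())
print(nnz, n + m - 1)  # the returned vertex is sparse: nnz <= n + m - 1
```

A vertex of the n x m transportation polytope has at most n + m - 1 nonzero entries (it corresponds to a cycle-free bipartite graph), which matches the observation that optimal couplings tend to be sparse.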
--- Rebuttal Comment 1.1: Title: Comments on the Authors' Rebuttal Comment: I have read the authors' rebuttal and would like to address the following points: * Originality: The statement in the rebuttal, "The first contribution is that we compute globally optimal solutions to the GW problem," is inaccurate. To be precise, a solution is guaranteed to be globally optimal only when the right-hand side of equation (4) equals 1. Although Figure 4(b) shows that this condition is met in many cases, it is likely due to the small and simple problem setting. Therefore, the algorithm does not always yield a globally optimal solution, making the authors' claim in the rebuttal an overstatement. Moreover, as mentioned in my initial review, similar algorithms exist in the context of the Quadratic Assignment Problem (QAP). * Computation Time: My concerns regarding computation time remain unresolved. While I recognize the value of this research even without significant reductions in computation time, the impact of the work is diminished as a result. * Additional Experiments: I appreciate the additional experiments conducted by the authors. * Sparsity: My concerns regarding sparsity have not been fully addressed. While it is correct that an optimal solution should be sparse, the argument that a sparse solution is close to the optimal one is not valid. Therefore, I find little merit in discussing the sparsity of the solutions. Overall, my concerns have not been sufficiently resolved, and thus, I will maintain my previous score. --- Rebuttal 2: Comment: We thank the reviewer for their quick response to our rebuttals. > Originality: The statement in the rebuttal, "The first contribution is that we compute globally optimal solutions to the GW problem," is inaccurate. To be precise, a solution is guaranteed to be globally optimal only when the right-hand side of equation (4) equals 1. 
In our paper, we do make it clear that we only claim global optimality when the RHS of equation (4) equals one. > Sparsity: My concerns regarding sparsity have not been fully addressed. While it is correct that an optimal solution should be sparse, the argument that a sparse solution is close to the optimal one is not valid. Therefore, I find little merit in discussing the sparsity of the solutions. The Reviewer asked why solutions tend to be sparse. The initial comment did not appear to raise a concern, and we answered the question as asked. Our response simply states that solutions that are not sparse are certainly not optimal. Our proof of global optimality comes from the RHS of equation (4), not sparsity. --- Rebuttal Comment 2.1: Title: Addendum about global optimality claims Comment: In reading the responses to other reviewers, it appears that other reviewers share similar concerns about overstating the claims. For what it's worth, in writing the rebuttal, we intended to give a quick summary of our contributions before proceeding to the additional experiments we ran. In writing "The first contribution is that we compute globally optimal solutions to the GW problem," we intended this sentence to be understood as holding under the condition that the approximation ratio equals one. From reading the reviews, we believed the reviewers understood the extent of our claims. As such, it did not occur to us to repeat the conditions (namely that these claims hold if the approximation ratio equals one, which seems to hold fairly often empirically). Unfortunately, it gave the inadvertent impression that we overstate our contributions. That is not our intention. We apologize for the inaccurate statement about our claims. We hope it was clear from our paper what the extent of our claims of global optimality were. 
--- Rebuttal Comment 2.2: Title: Comments on the Authors' Rebuttal Comment: > For what it's worth, in writing the rebuttal, we intended to give a quick summary of our contributions before proceeding to the additional experiments we ran. I understand your intention. The original paper presents an accurate description, so I don't see issues with it. However, the rebuttal lacks persuasive power, and my concerns regarding the novelty remain unresolved. > The initial comments do not appear to be a concern and we answered the question as it is. The issue related to sparsity is not a major concern. However, I believe that the discussion on sparsity does not hold much significance and only serves to unnecessarily confuse the reader. Therefore, I suggest either removing the description or providing a more detailed explanation of its intent. --- Reply to Comment 2.2.1: Comment: >The issue related to sparsity is not a major concern. However, I believe that the discussion on sparsity does not hold much significance and only serves to unnecessarily confuse the reader. Therefore, I suggest either removing the description or providing a more detailed explanation of its intent. This is a useful suggestion and we will provide a more detailed explanation in the revision. Thanks for the suggestion. >I understand your intention. The original paper presents an accurate description, so I don't see issues with it. However, the rebuttal lacks persuasive power, and my concerns regarding the novelty remain unresolved. Thanks for understanding our intention, and we apologize for the confusion. As for novelty, let's perhaps try to state our case a little differently. Besides our work, we are not aware of other work that addresses the issue of global optimality. When we say "addresses global optimality", we mean that the method computes the actual GW distance (up to numerical error) with a proof that it does so correctly. 
Methods that depend on performing descent within a neighborhood do not achieve that. It may compute the global minimum, or it might get stuck in a local optimum, but it won't know which case it belongs in, with certainty. The typical way to certify that one has the global minimum is to compute a lower bound to the GW problem -- usually, this is done via a suitable dual program. The GW problem is believed to be computationally difficult (intractable). So we expect the dual problem to be equally difficult. Here, what we are offering is slightly different: We suggest a tractable (meaning polynomial time) method for computing a relaxation of the dual that is able to certify global optimality on many instances. Because we think finding the global optimal solution should be difficult (intractable), coming up with a tractable method that certifies global optimality in many instances is the next best thing one can hope for. This is precisely what we are offering. It is correct that the specific SDP formulation is a minor variation of a closely related SDP relaxation for the QAP. We were upfront about this in the manuscript as well. There are a few things to note: We considered a few other relaxations but they did not work because for many instances, the approximation ratio was vacuous. The SDP formulation we arrived at was a specific formulation that gave meaningful ratios most of the time. But the point here is not that our relaxation is particularly novel (and we were careful to attribute our ideas to the source); the point here is that the relaxation helps to certify global optimality in many instances. This we think is an important contribution. There are no other methods we know of in the literature that solves this seemingly basic problem. As such, the burden of the paper now shifts to quantifying how often the relaxation is exact. If exactness happens quite often, then the relaxation is useful. 
If exactness only occurs for very specific examples, then there is very little meaning to the relaxation. Our numerical experiments are intended to investigate how often the SDP is exact, and over fairly generic inputs. The largest instance we solve has $30$ points. In other words, the SDP is of size $900 \times 900$. This is considered moderate-sized for SDP. Most researchers working with SDPs would consider such a size very promising (meaning it suggests the relaxation is a powerful one). Now, $30$ points is considered small in data science and optimal transport applications. But the more relevant question here is the strength of the SDP, perhaps more so than the data analytical aspects. Now, the Reviewer suggests that the exactness is due to the "small and simple" settings. We don't agree with this characterization and our experiments are intended to suggest otherwise. If the Reviewer has further reasoning to explain why the Reviewer thinks so, do let us know and we will be happy to engage. In any case, thank you very much for your attention.
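The local-descent behaviour of conditional-gradient methods discussed in this thread can be sketched in a few lines. The sizes, the squared loss, and the exact line search below are assumptions of this illustration; it is not the paper's or POT's implementation:

```python
import numpy as np
from scipy.optimize import linprog

# A minimal conditional-gradient (Frank-Wolfe) loop for the GW quadratic program,
# in the spirit of the GW-CG baseline. Toy sizes and the squared loss are
# assumptions of this sketch, not code from the paper.
rng = np.random.default_rng(1)
n, m = 5, 5
C1 = rng.random((n, n)); C1 = (C1 + C1.T) / 2
C2 = rng.random((m, m)); C2 = (C2 + C2.T) / 2
p, q = np.full(n, 1 / n), np.full(m, 1 / m)               # uniform marginals
L = (C1[:, None, :, None] - C2[None, :, None, :]) ** 2    # cost tensor L[i,j,k,l]

def energy(x):
    # Quadratic form sum_{ijkl} L[i,j,k,l] x[i,j] x[k,l].
    return np.einsum("ijkl,ij,kl->", L, x, x)

# Marginal (transportation polytope) constraints for the linear minimization oracle.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0
for j in range(m):
    A_eq[n + j, j::m] = 1.0
b_eq = np.concatenate([p, q])

pi = np.outer(p, q)                                       # feasible product-coupling start
values = [energy(pi)]
for _ in range(20):
    grad = 2 * np.einsum("ijkl,kl->ij", L, pi)            # gradient of the quadratic form
    res = linprog(grad.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    d = res.x.reshape(n, m) - pi                          # direction toward a polytope vertex
    a = energy(d)                                         # quadratic coefficient d^T L d
    b = float(np.sum(grad * d))                           # linear coefficient <grad, d>
    # Exact line search of energy(pi + gamma*d) over gamma in [0, 1].
    gamma = float(np.clip(-b / (2 * a), 0.0, 1.0)) if a > 0 else (1.0 if a + b < 0 else 0.0)
    pi = pi + gamma * d
    values.append(energy(pi))

# The loop reaches a first-order stationary point; without a lower bound (e.g. from
# an SDP relaxation) there is no certificate that values[-1] is the global optimum.
print(values[0], values[-1])
```

Each iterate stays feasible and the energy is non-increasing under exact line search, but the loop by itself only certifies stationarity, which is precisely the gap a tractable lower bound is meant to close.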
Summary: The authors propose a semidefinite programming (SDP) relaxation of the Gromov-Wasserstein (GW) distance. While the GW problem is non-convex, the proposed SDP relaxation is convex and hence can be solved in polynomial time with any off-the-shelf convex solver. The authors also provide an accompanying proof of global optimality for the relaxed problem, which can be checked efficiently. The numerical experiments use an off-the-shelf solver to solve the proposed SDP relaxation and compare it with two solvers from the PythonOT package, i.e., the Conditional Gradient (CG-GW) solver and the Sinkhorn projections solver, on Matching Gaussian Distributions and Graph Community Matching. Lastly, the authors present a simple heuristic algorithm optimized for their proposed SDP relaxation. Strengths: * The authors propose a novel SDP relaxation of the GW problem, which has not previously been explored in the literature. The theory is compelling. * The proposed SDP approach has several compelling advantages over existing GW solvers: the solver given by POT (which implements Frank-Wolfe) can only find local optima, and entropic GW cannot be used for general cost tensors. Meanwhile, the proposed approach is broadly applicable and, to my knowledge, the first tractable GW solver for which the optimality of solutions can be efficiently verified. * The paper is generally straightforward to follow and the proofs do not seem to have any obvious mistakes. * The authors motivate the proposed SDP relaxation well. Weaknesses: * The empirical evaluation is limited in scope and done entirely with synthetic data. In particular, the authors make the claim that the SDP relaxation frequently computes globally optimal solutions, but I feel the experiments are not extensive enough to fully support such a claim. A more comprehensive experimental evaluation would be required to better understand the practical applicability and limitations of the proposed method. 
* The first experiment (Gaussian matching) is not that compelling to me. In Section 4.1, the experiment only considers up to 30 samples in each distribution and Figure 2a seems to suggest the possibility that Frank-Wolfe could perform similarly to the SDP relaxation for larger number of supports while being substantially faster. This is in line with what is reported in Table 2 with the heuristic algorithm, where the gap in performance between SDP and FW diminishes for larger number of supports. * It seems as though the estimation gap for the first point in Figure 1b is less than 1, though this should not be possible. * Minor suggestions: 1) The readability of the paper could be enhanced by stating where proofs can be found for each claim, where the claim is presented. For instance, Proposition 3.1 is stated without mentioning where the proof can be found, and similarly for the theorems in appendix B. 2) I assume the numbers in parenthesis in Table 1 are standard errors based on Table 2. It would be helpful to state this in the caption. Technical Quality: 3 Clarity: 3 Questions for Authors: * The authors state at the end of Section 2 that optimal solutions can be found frequently when $m=n$ as shown in Section 4. What about when $m\neq n$? Do the authors find that optimality cannot be verified as frequently in this setting? * Can the authors provide more details on the choice of parameters in Section 5? How sensitive are the results to these parameter choices? * Based on the observations I point out above, I am curious if the authors have considered how the behavior of the SDP-based solvers evolves w.r.t. the number of samples and how it compares to existing solvers? What about if the dimensions of the Gaussians is changed? * The authors extend their SDP relaxation to the task of computing barycenters in appendix D. Have the authors considered comparing (at least qualitatively) their proposed barycenter algorithm to that given in [1] and [2]? 
* Would it be possible to explore similar SDP relaxations for variants of the GW problem, such as partial GW [2], outlier-robust GW [3], linear GW [4], etc.? If so, it may be worthwhile to mention in Section 7. [1] Gromov-wasserstein averaging of kernel and distance matrices, Peyré et al., 2016. [2] Partial Gromov-Wasserstein Metric, Bai et al., 2024. [3] Outlier-Robust Gromov-Wasserstein for Graph Data, Kong et al., 2023. [4] On a linear Gromov-Wasserstein distance, Beier et al., 2022. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The main limitation of the proposed work, i.e., high dimensionality of the matrix $mn\times mn$ in SDP, is provided by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Real data* We use a publicly available database of triangular meshes (Sumner et al. 2004). We obtain 18 points and compute distance matrices using Dijkstra's algorithm. Each object's probability measure is chosen to be uniform. We apply (GW-SDP) to the corresponding metric-measure spaces to determine the correspondence between the selected vertices across different objects. Two representative examples are given in Figure 1 in the attached pdf file. Following are the results when we match distance matrices of different objects. In general, we expect shapes of the same animals to have a smaller GW distance than shapes of different animals, which is indeed the case for the three GW formulations. We still notice that GW-SDP consistently returns the smallest value when performing the same matching task. | | GW-SDP | GW-CG | eGW-PPA | |-------------------|----------|----------|----------| | Elephant-Elephant | 0.007416 | 0.043879 | 0.025688 | | Elephant-Cat | 0.015695 | 0.050594 | 0.042214 | | Cat-Cat | 0.006549 | 0.016634 | 0.006757 | | Cat-Horse | 0.011040 | 0.033736 | 0.011041 | | Horse-Horse | 0.006287 | 0.033768 | 0.007395 | *Frank Wolfe with more samples* This is our interpretation of the Reviewer's comment. Fix both sets of distributions. Increase the number of samples for Frank-Wolfe but fix the number of samples for GW-SDP. What happens? We conducted this experiment, and we noticed that the objective value for Frank-Wolfe (GW-CG in the table) decreases as we increase the number of samples and approaches the objective value obtained by the SDP relaxation, which is obtained using few samples (GW-SDP entries are reported only for its fixed sample size). | n | GW-SDP | GW-SDP runtime (s) | GW-CG | GW-CG runtime (s) | |-------|--------|--------------------|----------|-------------------| | 10 | 0.4577 | 6.3753 | 1.135940 | 0.000389 | | 100 | | | 0.629425 | 0.007571 | | 1000 | | | 0.540984 | 2.520011 | | 10000 | | | 0.496796 | 138.358954 | We noticed that the objective value from Frank-Wolfe is greater than the objective value from GW-SDP. 
We do not know if this is because the solution from Frank-Wolfe is sub-optimal, or if GW-SDP returns a smaller objective than the GW distance between the continuous distributions because of discretization error (from finite samples). This is an interesting question to investigate. In any case, the experiment suggests it may be possible to mitigate the (lack of) global optimality issues we raise about Frank-Wolfe simply by increasing the number of sample points. This is indeed a cheap fix. Nevertheless, without the relaxation we propose, it is not possible to tell if the improved objective value obtained by increasing the number of samples is indeed globally optimal. Also, this experiment suggests that the SDP relaxation is able to obtain a far superior objective value with fewer data points compared to Frank-Wolfe. One can further investigate trade-offs between these methods (as future work). We caution against generalizing this approach of increasing samples to attain better minima too broadly. For Frank-Wolfe to converge to the globally optimal solution, we need the optimization landscape to be favorable. This might not happen for more complicated data distributions or perhaps graphs. Again, without a relaxation (or a suitable dual problem), one cannot be certain if Frank-Wolfe obtains the global solution. *Approximation gap* We checked this. The value is numerically equal to one. The plot appears to be less than one due to an issue with the graph plotting software. *Readability* Thanks for the suggestion. We will make these changes in the revision. *$m \neq n$* We performed an additional experiment where the number of samples in one distribution is fixed ($n=8$) and we vary the number of samples $m$ in the other distribution; see Figure 2 of the attached PDF. We notice that the relaxation is exact whenever $m$ is a multiple of $n$. On the other hand, the relaxation fails to be exact if $m$ is not a multiple of $n$.
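For concreteness, the quantity all of these solvers report is the discrete GW objective with squared loss, $\sum_{i,j,k,l} (D_X[i,k]-D_Y[j,l])^2\,\pi_{ij}\pi_{kl}$, which can be evaluated for any candidate coupling. A minimal numpy sketch (our own illustration, not the authors' code), using the standard expansion of the square into marginal and cross terms:

```python
import numpy as np

def gw_objective(DX, DY, pi):
    """Discrete Gromov-Wasserstein objective with squared loss:
    sum_{i,j,k,l} (DX[i,k] - DY[j,l])**2 * pi[i,j] * pi[k,l]."""
    p = pi.sum(axis=1)                       # first marginal of the coupling
    q = pi.sum(axis=0)                       # second marginal
    term_a = (DX**2 @ p) @ p                 # sum_{i,k} DX[i,k]^2 p_i p_k
    term_b = (DY**2 @ q) @ q                 # sum_{j,l} DY[j,l]^2 q_j q_l
    cross = np.sum((DX @ pi @ DY.T) * pi)    # sum DX[i,k] DY[j,l] pi[i,j] pi[k,l]
    return term_a + term_b - 2.0 * cross

# Matching a metric space to itself via the identity coupling costs 0.
n = 4
X = np.random.default_rng(0).random((n, 2))
DX = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(abs(gw_objective(DX, DX, np.eye(n) / n)) < 1e-10)  # → True
```

The expansion avoids the quartic sum, so comparing objective values across couplings returned by different solvers costs only a few matrix products.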
*Heuristic solver* Note that the heuristic solver will not be in the main body of the revised manuscript, based on a suggestion by another Reviewer. The parameters are chosen heuristically. The parameters $\gamma_2$ and $\gamma_3$ control the step size in each iteration, while $\gamma_1$ is used for backtracking to prevent $\gamma_2$ and $\gamma_3$ from becoming too large. Through experimentation, we found that the algorithm performs better for relatively small choices. *Behavior of SDP relaxation when dimensions of the Gaussians change* We conducted experiments using Gaussian distributions with varying dimensions and found that the performance of the GW-SDP remained consistent across all tested dimensions. As such, we report results for a single representative dimension. *Barycenter algorithm* Could the Reviewer clarify which papers [1] and [2] refer to? Thanks. *Extensions* It is generally possible to extend the semidefinite relaxation to variants of the GW problem. We briefly describe the semidefinite relaxation for the outlier-robust GW problem by Kong et al. [KLTS:23]. Here, $(X,d_{X})$ and $(Y,d_{Y})$ are two metric spaces with accompanying measures $\mu$ and $\nu$. The distance between $\mu$ and $\nu$ is $\min~~\langle L, P\rangle+\tau_1 d_{KL} (\pi 1,\alpha) + \tau_2 d_{KL} (\pi^T 1,\beta)$ $\mathrm{s.t.}\left( \begin{array}{cc} P & \mathrm{vec}(\pi) \\\ \mathrm{vec}(\pi)^T & 1 \end{array} \right) \succeq 0 $ $\qquad\sum_{i}P_{(i,j),(k,l)} = f_{j}^{k,l}$, $\qquad\sum_{j} P_{(i,j),(k,l)} = g_i^{k,l}$ $\qquad P\geq 0$ $\qquad d_{KL}(\mu,\alpha)\leq\rho_1,\ d_{KL}(\nu,\beta)\leq\rho_2$ Note: The resulting formulation is convex but not an SDP because of the KL divergence. We discuss further extensions in the revised manuscript. *References* [KLTS:23] L Kong, J Li, J Tang, & A M-C So. (2023). Outlier-robust Gromov-Wasserstein for Graph Data. In Proceedings of the 37th International Conference on Neural Information Processing Systems. 
[SP:04] RW Sumner and J Popović. (2004). Deformation transfer for triangle meshes. ACM Transactions on Graphics. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for their extensive response, which clarified some of the raised points. Overall, I think the contributions of the paper outweigh its deficiencies. The new experiments conducted during the rebuttal period would significantly strengthen the paper. Given these points, I increased my score to Weak Accept. --- Reply to Comment 1.1.1: Comment: Thank you for going through the rebuttal and your vote of confidence.
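As an aside on the semidefinite constraint that recurs in this rebuttal: the block condition $\left(\begin{smallmatrix} P & \mathrm{vec}(\pi) \\ \mathrm{vec}(\pi)^T & 1 \end{smallmatrix}\right)\succeq 0$ relaxes the rank-one lifting $P=\mathrm{vec}(\pi)\,\mathrm{vec}(\pi)^T$, since by the Schur complement it is equivalent to $P \succeq \mathrm{vec}(\pi)\,\mathrm{vec}(\pi)^T$. The exact (rank-one) case is always feasible, which is easy to check numerically. A small numpy sketch (our own illustration):

```python
import numpy as np

def lifted_block(pi):
    """Build [[P, v], [v^T, 1]] for the exact rank-one lifting
    P = v v^T with v = vec(pi)."""
    v = pi.reshape(-1)                          # vec(pi), length m*n
    P = np.outer(v, v)                          # unrelaxed lifted variable
    top = np.hstack([P, v[:, None]])
    bot = np.hstack([v[None, :], np.array([[1.0]])])
    return np.vstack([top, bot])

# The exact lifting gives the matrix [v; 1][v; 1]^T, which is PSD.
pi = np.full((3, 3), 1.0 / 9.0)                 # uniform 3x3 coupling
M = lifted_block(pi)
print(np.linalg.eigvalsh(M).min() >= -1e-9)     # → True
```

Any feasible point of the relaxation must contain this rank-one matrix in its feasible set; the relaxation is exact precisely when the optimal $P$ collapses back to rank one.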
Summary: The authors provide SDP relaxations of the Gromov-Wasserstein distance, which turn out to provide globally optimal solutions in many cases. Strengths: -- The SDP relaxations and proofs of their exactness are most elegant. -- The authors suggest that their method does not make as strong assumptions on the loss as in the case of previous work (e.g., the Proximal Point algorithm). Weaknesses: The commonly used PythonOT library (https://pythonot.github.io/auto_examples/gromov/plot_gromov.html) implements the Conditional Gradient algorithm (ot.gromov.gromov_wasserstein), the Proximal Point algorithm with Kullback-Leibler as proximal operator (ot.gromov.entropic_gromov_wasserstein), and the Projected Gradient algorithm with entropic regularization (ot.gromov.entropic_gromov_wasserstein), but the authors do not compare their run-time against any of these methods. Plausibly, this is because the run-time of the SDP solver is much higher? Technical Quality: 4 Clarity: 3 Questions for Authors: How does the runtime compare with that of commonly used algorithms? If the SDP relaxations are correct, it should be possible to derive first-order methods for these, which could perhaps be simplified substantially compared to general-purpose SDP solvers. Have you explored this direction? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The run-time is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Run time comparisons* We wish to clarify that the current manuscript compares run times of the Conditional Gradient (CG) method, the entropic Gromov-Wasserstein method and our method. The comparisons are found in Page 5 of the submitted manuscript. For a fixed number of samples, the SDP runtime is far more costly than the other two algorithms. We performed additional experiments to compare the runtime with algorithms suggested by the Reviewer. These results will be included in the revised manuscript. | m | GW-SDP | GW-CG | eGW-PGD | eGW-PPA | |----|------------|----------|----------|----------| | 8 | 0.557457 | 0.000349 | 0.162370 | 2.625325 | | 12 | 18.970195 | 0.000460 | 0.042703 | 2.116204 | | 16 | 17.000997 | 0.000426 | 0.048222 | 3.316097 | | 20 | 108.477784 | 0.000556 | 0.056619 | 2.986389 | | 24 | 56.620260 | 0.000523 | 0.067252 | 4.503040 | We also conducted additional experiments where we compared the performance of GW-SDP with GW-CG and eGW-PPA. (These are described in the global comment and in the attached PDF.) Interestingly, in the experiment with real data, the objective values obtained from eGW-PPA are smaller than the values obtained from GW-CG, though still larger than those obtained by GW-SDP. (A smaller objective value is better in this instance.) This suggests the GW-SDP is the best algorithm for finding optimal solutions (in terms of the smallest objective value), followed by the eGW-PPA and then GW-CG. *First-order methods* First-order methods should improve the run-time in general. We investigated this approach but encountered numerous difficulties. It is difficult to apply most first-order methods for solving SDPs to our relaxation. The main reason is that our formulation contains many constraints whereas most numerical techniques typically rely on having few affine constraints. 
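The entropic baselines in the table above (eGW-PGD, eGW-PPA) are variants of the scheme of Peyré et al. (2016): linearize the GW objective at the current coupling, then take an entropic (Sinkhorn) projection onto the marginal constraints. A simplified numpy sketch with squared loss and uniform marginals, as an illustration of the scheme rather than the POT implementation:

```python
import numpy as np

def sinkhorn(C, p, q, eps, iters=200):
    """Entropic OT: coupling with marginals (p, q) for cost matrix C."""
    K = np.exp(-C / eps)
    u = np.ones_like(p)
    for _ in range(iters):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]

def entropic_gw(DX, DY, eps=0.1, outer=30):
    """Entropic GW, squared loss, uniform marginals (Peyre et al. 2016 style)."""
    n, m = DX.shape[0], DY.shape[0]
    p, q = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    T = np.outer(p, q)                       # start from the independent coupling
    const = (DX**2 @ p)[:, None] + (DY**2 @ q)[None, :]
    for _ in range(outer):
        G = const - 2.0 * DX @ T @ DY.T      # linearization of the GW objective at T
        T = sinkhorn(G, p, q, eps)           # entropic projection onto the marginals
    return T

# Toy usage: matching a small space with itself.
X = np.random.default_rng(0).random((5, 2))
DX = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
T = entropic_gw(DX, DX)
print(np.allclose(T.sum(axis=1), 0.2))  # rows match the uniform marginal → True
```

Each outer iteration costs a few matrix products plus a Sinkhorn solve, which is why these first-order baselines are fast; the trade-off, as discussed above, is that they come with no certificate of global optimality.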
There is a line of work that uses proximal methods in combination with numerical schemes that allow families of constraints to be decoupled, such as the ADMM method. Unfortunately, these ideas also do not translate easily. Our formulation has many different types of constraints: PSD, double stochasticity, non-negativity, and marginal sums. It is not clear how one designs proximal operators that project onto the intersection of these constraints, or designs ADMM schemes that combine several different proximal operators. There are some prior works that develop numerical schemes for solving semidefinite relaxations of the closely related Quadratic Assignment Problem (QAP) [KKBL:15,DML:17,OWX:18]. These works do point out that these semidefinite relaxations, while powerful, are not easy to solve. This suggests that it is equally difficult to develop numerical schemes for our semidefinite relaxations. In fact, the scale of the problems we work on is not far from the state of the art for semidefinite relaxations of the QAP. In a work by Oliveira et al. -- a work purely about numerical optimization -- the authors propose an ADMM scheme to solve QAP instances where the dimension is 40, while the largest GW instance we solve has dimension m=n=32. This means that our problem size is not far from the state of the art. We hope that the Reviewer accepts and values the conceptual and theoretical contributions already present in this paper, and recognizes that the scope of developing scalable numerical schemes, while important, goes comfortably beyond the scope of a single paper. *References* [KKBL:15] I. Kezurer, S. Z. Kovalsky, R. Basri, & Y. Lipman. (2015). Tight Relaxation of Quadratic Matching. In Computer Graphics Forum, volume 34, pages 115–128. Wiley Online Library. [DML:17] N. Dym, H. Maron, & Y. Lipman. (2017). DS++: A Flexible, Scalable and Provably Tight Relaxation for Matching Problems. ACM Transactions on Graphics, 36(184):1–14. [OWX:18] D. E. Oliveira, H. Wolkowicz, & Y. 
Xu, (2018). ADMM for the SDP relaxation of the QAP. Mathematical Programming Computation, 10(4), 631-658. --- Rebuttal Comment 1.1: Title: Thank you! Comment: Many thanks for the clarifications. I believe that Mathias Staudigl, Shimrit Shtern, and Pavel Dvurechensky are working on related first-order methods suitable for problems with many constraints and possibly non-linear (e.g. entropy) objectives, but I cannot provide the pre-print at the moment. I should like to say I also find the response to reviewer fpsx (with the claims of global optimality unquantified) underwhelming. --- Reply to Comment 1.1.1: Comment: >I believe that Mathias Staudigl, Shimrit Shtern, and Pavel Dvurechensky are working on related first-order methods suitable for problems with many constraints and possibly non-linear (e.g. entropy) objectives, but I cannot provide the pre-print at the moment. Thank you for the reference. We will look out for it. >I should like to say I also find the response to reviewer fpsx (with the claims of global optimality unquantified) underwhelming. When writing the rebuttal, we intended to write a quick summary before moving on to the other points in the rebuttal. The claims of global optimality were to be understood under the conditions that the approximation ratio is one. This was a point we made in the manuscript. When writing the rebuttal, it did not occur to us that it was necessary to repeat these conditions. Unfortunately, the rebuttal may have come across as overstating an incorrect claim. That is not our intention, and we apologize for the confusion. To be clear, the claims of global optimality occurs only when the approximation ratio is one. (In the follow-up replies to Reviewer fpsx you will find the follow-up exchange on this topic.)
Rebuttal 1: Rebuttal: We thank the Reviewers for taking time to provide valuable feedback. We have incorporated many of these suggestions with additional experiments which we believe improve the paper substantially. We outline some of these new experiments and contributions below, and describe them in further detail to the corresponding Reviewer who made these suggestions. We are grateful that the Reviewers appreciate the conceptual contributions of our work, namely, that it computes globally optimal solutions of the GW problem with a certificate of optimality. All Reviewers raised concerns about the run-time -- this is understandable. We too share the same concerns and are actively working on this aspect. We describe some of the technical difficulties below. Nevertheless, our work is the only work we are aware of that addresses global optimality, and we hope the Reviewers consider such a contribution valuable in spite of the run-time concerns. We provide below general responses to common concerns and questions. For the more detailed answer to each reviewer's question, we provide the answer as a comment following each of the individual reviews. *Experiment on real data and more baseline solvers* We use a publicly available database of triangular meshes (Sumner et al. 2004). We obtain 18 points and compute distance matrices using Dijkstra's algorithm. Each object's probability measure is chosen to be uniform. We apply (GW-SDP) to the corresponding metric-measure spaces to determine the correspondence between the selected vertices across different objects. Two representative examples are given in Figure 1 in the attached PDF file. The following are the results when matching distance matrices of different objects. In general, we expect shapes of the same animal to have a smaller GW distance than shapes of different animals, which is indeed the case for the three GW formulations. 
We still notice that GW-SDP consistently returns the smallest value when performing the same matching task. | | GW-SDP | GW-CG | eGW-PPA | |-------------------|----------|----------|----------| | Elephant-Elephant | 0.007416 | 0.043879 | 0.025688 | | Elephant-Cat | 0.015695 | 0.050594 | 0.042214 | | Cat-Cat | 0.006549 | 0.016634 | 0.006757 | | Cat-Horse | 0.011040 | 0.033736 | 0.011041 | | Horse-Horse | 0.006287 | 0.033768 | 0.007395 | *Experiment where $m \neq n$* We performed an additional experiment where the number of samples in one distribution is fixed ($n=8$) and we vary the number of samples $m$ in the other distribution; see Figure 2 of the attached PDF. We notice that the relaxation is exact whenever $m$ is a multiple of $n$. On the other hand, the relaxation fails to be exact if $m$ is not a multiple of $n$. For the runtime, please check the results in Table 1 of the pdf. *Applying Frank-Wolfe with more sample points* Reviewer 3Vyi raised an interesting suggestion to increase the number of samples for GW-CG (non-convex GW solver using conditional gradient descent or Frank-Wolfe algorithm) vs our GW-SDP solver for a fixed number of samples. We noticed that the objective value for GW-CG decreases as we increase the number of samples. For 100000 sample points, the GW-CG algorithm is more expensive and has a poorer objective value than our method with 10 sample points. This suggests that our method can give good approximations of the GW distance with fewer sample points than existing methods. | n | GW-SDP | GW-SDP Runtime (s) | GW-CG | GW-CG runtime (s) | |-------|--------|--------------------|----------|-------------------| | 10 | 0.4577 | 6.3753 | 1.135940 | 0.000389 | | 100 | | | 0.629425 | 0.007571 | | 1000 | | | 0.540984 | 2.520011 | | 10000 | | | 0.496796 | 138.358954 | *Prohibitive run time of the GW-SDP solver and potential improvements* We acknowledge the concerns about the run time of the GW-SDP. We are actively trying to develop faster algorithms. 
Our current manuscript suggests some potential directions. Reviewer nsV6 suggests first-order methods and we discuss this in detail in our response to Reviewer nsV6. In short, many existing techniques do not apply. In fact, in the work by Oliveira et al. on semidefinite relaxations of the closely related QAP, the authors solve problem instances of size 40. In our experiments, the largest instance is 32, and hence not far off the state of the art. Nevertheless, we hope the Reviewer appreciates the important conceptual contributions of our work. It is the only work we are aware of that addresses global optimality in the GW problem. *References* [OWX:18] D. E. Oliveira, H. Wolkowicz, & Y. Xu, (2018). ADMM for the SDP relaxation of the QAP. Mathematical Programming Computation, 10(4), 631-658. [SP:04] RW Sumner and J Popović. (2004). Deformation transfer for triangle meshes. ACM Transactions on Graphics. Pdf: /pdf/d085cfe0ae5ee0d4ff6d2e1b16d64d8ed5252c50.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Activating Self-Attention for Multi-Scene Absolute Pose Regression
Accept (poster)
Summary: In this paper, the authors focus on improving the performance of multi-scene absolute pose regression models based on transformers. From statistical analysis, the authors assume that the distortion between Q and K features in self-attention and the learnable position encoders are the causes. Therefore, a Q-K alignment loss and a fixed positional encoding method are adopted, and experiments demonstrate the efficacy on the indoor 7Scenes and Cambridge Landmarks datasets. Strengths: The strengths of this paper are as follows. 1. Originality. Correcting the distortion of Q-K features in self-attention is not a new direction, as also mentioned in the related works section, but this paper may be the first to apply it to the multi-scene absolute pose regression task and prove its efficacy. Besides, the authors also demonstrate that fixed positional encoding works better than learnable positional encoding, because learnable positional encoding breaks the order of the input sequences. These contributions are useful for improving the accuracy of pose regression, as shown in the experiments. 2. Quality. The proposed algorithm is easy to follow, and the paper is well written. Weaknesses: The proposed method is easy to follow as I mentioned before, so I don't have many concerns. Several minor concerns are as follows. 1. Contribution. I can understand this paper starts from analyzing the application of absolute pose regression, but the key technique concerning the distortion of Q-K features in the self-attention mechanism comes from previous work [19]. As this is the major contribution of this paper, I am not very sure if it is enough. Although an additional Q-K alignment strategy is proposed as an additional contribution, its improvement over prior strategies such as [19] is not significant (0.02m and 0.17 deg). These improvements may come from a suitable learning rate or better-tuned hyper-parameters balancing different losses. 2. Results. 
Overall, the improvements against the baseline method MSTransformer [17] on indoor (0.02m, 0.64deg) and outdoor (0.09m, 0.44deg) datasets are not significant, which further degrades the contribution of this paper. Technical Quality: 2 Clarity: 3 Questions for Authors: Please see the weakness. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. Distinctive contribution** A1. We can understand why this might be a concern. [19] and our research both address the underlying problem of attention collapse. However, [19] and ours are **clearly different in terms of the analysis on the causes and the solution for the attention collapse.** Firstly, [19] points out that the large gradient norm and increasing spectral norm in the early layers of transformers induce the collapse with training instability and degraded performance. Hence, they propose a method (σReparam) for regularizing the gradient and spectral norm with pre-layer normalization. On the other hand, we identify the distortion of the query-key space and the undertrained positional encoding as the main issues. That is, when the queries and keys are mapped into completely different spaces while only a few keys exist in the query space, the attention collapses and the components in self-attention modules are deactivated. Not just measuring attention entropy like [19], we also **define the new index, purity, to statistically demonstrate the predominance of this issue across the APR dataset.** Finally, we introduce an auxiliary loss which aligns the query and key regions as well as fixed positional encoding. Therefore, the major contribution stands apart from [19], and consideration of the novel contributions it offers would be appreciated. **Q2. Performance improvement** A2. We understand that the improvements may not appear significant at first glance. However, please take into account two hidden aspects regarding the conventional evaluation of median errors in position and orientation in APR. Firstly, due to the small numerical values, the errors tend to appear minor. Despite this, **our method reduces the position/orientation error rates by 7%/17% in outdoor settings and by 6%/9% in indoor settings.** Secondly, median error, being a *median*, does not fully reflect the overall performance of the model. 
To address this, we compared the performance with the baseline using the recall evaluation metric, which is widely adopted in 2D-3D correspondence-based camera pose estimation. As shown in Table A1 on the global rebuttal page, **our method achieves 3-5%p higher recall across various thresholds and datasets compared to the baseline.** In particular, there are substantial improvements on outlier cases, which are not evident in the median error metrics. By forcing the model to find helpful cues for the task, it becomes especially effective in leveraging previously challenging edge cases. In conclusion, we would emphasize our method's superiority and the remarkable performance improvements it provides over the baseline. **Q3. Performance difference between [19] and ours** A3. First of all, please allow us to clarify that there is a trade-off between position and orientation [6, 12]. Thus, it is hard to see that [19] improves performance on the 7Scenes dataset, since the median orientation error is decreased but the position error is not. In addition, we report the comparative analysis between [19] and our QKA loss on the Cambridge Landmarks dataset in Table A5 of the global rebuttal page. As the scale of the scene increases, it can be observed that our method shows a clear difference in median error compared to [19]. As mentioned earlier, median error may not be as visually striking as other evaluation metrics; however, please take into account that such a difference in median error is not marginal. --- Rebuttal Comment 1.1: Title: Response Uploaded Comment: Thank you for taking the time and effort to review our rebuttal. We have uploaded our responses according to the reviewer's request. With only one hour left for discussion, we kindly ask you to verify our responses. Your verification will help us improve the overall quality and clarity of our work. If we have satisfactorily addressed your concerns, we would appreciate a positive reassessment.
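The recall metric invoked in this rebuttal (the fraction of frames localized within joint position and orientation thresholds, as reported in 2D-3D correspondence benchmarks) can be sketched in a few lines of numpy; the threshold pairs below are illustrative choices of our own, not the thresholds from the authors' Table A1:

```python
import numpy as np

def pose_recall(pos_err_m, ang_err_deg, thresholds):
    """Recall: fraction of frames whose position error (meters) AND
    orientation error (degrees) both fall below each threshold pair.
    Illustrative helper; the threshold values are our assumption."""
    pos = np.asarray(pos_err_m)
    ang = np.asarray(ang_err_deg)
    return {t: float(np.mean((pos <= t[0]) & (ang <= t[1]))) for t in thresholds}

# Example: 4 frames with increasing errors, benchmark-style thresholds.
errs_pos = [0.03, 0.20, 0.60, 4.0]
errs_ang = [1.0, 4.0, 4.0, 20.0]
print(pose_recall(errs_pos, errs_ang, [(0.25, 2.0), (0.5, 5.0), (5.0, 10.0)]))
# → {(0.25, 2.0): 0.25, (0.5, 5.0): 0.5, (5.0, 10.0): 0.75}
```

Unlike the median, this metric is sensitive to the tail of the error distribution, which is exactly the "outlier cases" effect the rebuttal emphasizes.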
Summary: The paper investigates multi-scene absolute pose regression from a new perspective: the query-key embedding space. Focusing on the distortion of queries and keys, the paper proposes solutions to activate self-attention, which include an auxiliary loss to align queries and keys and fixed sinusoidal positional encoding. Experimental results demonstrate that the proposed method outperforms existing MS-APR methods on outdoor Cambridge Landmarks and indoor 7 Scenes. Strengths: 1. The paper proposes a new analysis that focuses on the distortion of the query-key embedding space. 2. The paper presents solutions to reduce the distortion, which cover an auxiliary loss that aligns queries and keys and fixed sinusoidal positional encoding. 3. Experimental results demonstrate that the proposed method can reduce the localization error. Weaknesses: 1. The motivation of Multi-Scene Absolute Pose Regression is somewhat insufficient. The paper declares that Multi-Scene Absolute Pose Regression can satisfy the needs of speed and memory efficiency across multiple scenes. However, these are not validated in the experiments, especially compared with single-scene methods. 2. The proposed method seems to only support transformer-based APR, which limits its further applications. 3. Some references are missing. Although the paper focuses on Multi-Scene Absolute Pose Regression, the references about single-scene camera relocalization methods still should be discussed, including single-scene APR methods and 2D-3D correspondence-based methods. 4. The experimental results are limited, reflected by the following aspects. (1) The paper only lists the localization results in comparison with transformer-based Multi-Scene APR methods, but comparisons with other single-scene state-of-the-art methods are missing, especially the speed and memory efficiency which are declared as advantages of Multi-Scene methods. 
(2) From Tables 1 and 2, the improvements of the proposed method seem not obvious, which cannot show the method's superiority. More discussions are preferred. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Currently, the 2D-3D correspondence-based methods (also called coordinate regression-based methods) still achieve state-of-the-art localization performance in both static and dynamic scenes, such as DSAC* and KFNet. It is curious whether the proposed method can be applied to 2D-3D correspondence-based methods? 2. Does the proposed method affect the network training convergence time? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. Speed & memory efficiency** A1. We would like to bring to your attention that the advantages of MS-APR mentioned have already been claimed and proven by previous research, and are not new assertions from our side. Firstly, it is known that APR methods are much faster and more memory-efficient than 2D-3D correspondence-based methods because they do not require 3D point clouds or RANSAC algorithms [6, 13, 16, 17]. APR methods show almost the same speed, but scene-specific models suffer from memory inefficiency in large-scale applications. Hence, MS-APR methods propose new architectures that ensure memory efficiency across multiple scenes. Besides, our methods do not require additional modules, thereby **maintaining the same memory efficiency as the baseline.** This is why we focus on performance differences in the experiment section. To address the issue, we display the memory requirements of recent single-scene and multi-scene APR methods as the number of scenes increases in Table A3 on the global rebuttal page. **Q2. Comparison with single-scene methods** A2. Firstly, allow us to clarify by presenting the localization results in comparison with all MS-APR methods, including MSPN [16], which is a CNN and MLP-based model. Secondly, please consider that **to maintain a fair comparison**, we did not include 2D-3D correspondence-based methods or single-scene APR methods. For 2D-3D correspondence-based methods, 3D point clouds are required, and thus many APR works [7-10, 13-14, 16-17] have excluded them from the comparison. In this context, several recent single-scene APR methods, which adopt NeRF-W that also requires 3D point clouds, are also excluded from the fair comparison. Although other single-scene APR methods still pose fairness issues in terms of the number of model parameters, we show the comparison results in Table A4 on the global rebuttal page. **Q3. Performance improvement** A3. 
We understand that the improvements may not appear significant at first glance. However, please take into account two hidden aspects regarding the conventional evaluation of median errors in position and orientation in APR. Firstly, due to the small numerical values, the errors tend to appear minor. Despite this, **our method reduces the position/orientation error rates by 7%/17% in outdoor settings and by 6%/9% in indoor settings.** Secondly, median error, being a *median*, does not fully reflect the overall performance of the model. To address this, we compared the performance with the baseline using the recall evaluation metric, which is widely adopted in 2D-3D correspondence-based camera pose estimation. As shown in Table A1 on the global rebuttal page, **our method achieves 3-5%p higher recall across various thresholds and datasets compared to the baseline.** In particular, there are substantial improvements on outlier cases, which are not evident in the median error metrics. By forcing the model to find helpful cues for the task, it becomes especially effective in leveraging previously challenging edge cases. In conclusion, we would emphasize our method's superiority and the remarkable performance improvements it provides over the baseline. **Q4. Applicability** A4. We would like to highlight the applicability of our method in two key aspects. Firstly, experiments in APR have only been conducted on small datasets, but it will not be long before APR methods need to be evaluated on large-scale datasets such as Aachen Day-Night or RobotCar, on which 3D-based methods are already tested. Accordingly, it is worth considering whether scene-specific models will remain competitive in terms of memory requirements, as shown in Table A3. [16] proved that there is no need for a database in the multi-scene setting by utilizing a transformer-based model, demonstrating the potential of APR. 
Therefore, we believe that **transformer-based APR models will become more prevalent, and our method is applicable to any transformer with deactivated self-attention.** Furthermore, we would suggest that our analysis and method are **not limited to APR applications.** While our research began with addressing attention collapse in APR, our method can also be adopted as a solution for other vision tasks suffering from similar issues. To validate this, we applied our method to the temporal action detection task, which uses transformer-based models and exhibits attention collapse. As shown in Table A2 and Figure A1 on the global rebuttal page, the baseline DETR struggles with learning self-attention mechanism, and our method significantly improves performance by resolving this issue. Therefore, we ask for reconsideration of the strong applicability and scalability of our work in both APR and transformer research. **Q5. Application to 2D-3D correspondence-based** A5. That is an interesting question. Our method is designed to maximize the activation of image features that assist in camera pose estimation. In this context, it could also be applied to 2D-3D correspondence-based methods that aim to reinforce image features through the self-attention mechanism. For instance, it could be applied to the ViT Encoder used in [B]. However, please note that they differ from APR methods in terms of task loss and learning complexity; the attention outcomes might differ from those observed in APR. [B] Revaud, Jerome, et al. Sacreg: Scene-agnostic coordinate regression for visual localization. CVPR, 2024. **Q6. Training convergence time** A6. Thank you for highlighting the new benefit of our research. While the baseline model is trained on Cambridge Landmarks for 600 epochs, our model achieved better performance with only 500 epochs of training, thus indicating a clear benefit in terms of training convergence. 
--- Rebuttal Comment 1.1: Title: Response Uploaded Comment: Thank you for taking the time and effort to review our rebuttal. We have uploaded our responses according to the reviewer's request. With only one hour left for discussion, we kindly ask you to verify our responses. Your verification will help us improve the overall quality and clarity of our work. If we have satisfactorily addressed your concerns, we would appreciate a positive reassessment.
Summary: The paper analyzes the collapse of the self-attention map in the Multi Scene Pose Transformer model and proposes two simple but effective methods to solve this problem: an auxiliary loss and fixed 2D sinusoidal encoding. The improved method delivers SOTA performance on the Multi Scene Pose Regression task. Strengths: 1. The proposed auxiliary loss seems to be a simple and effective solution for query-key distortion in MS Transformer. 2. The figures and tables in this paper are exceptionally clear and well-organized, making the paper easy to understand and interpret. Weaknesses: 1. Since query-key distortion is studied in the transformer literature and the fixed 2D sinusoidal positional encoding is an off-the-shelf module, the methods in this paper seem to lack novelty. It would therefore be nice to discuss more on why the APR task leads to query-key distortion rather than saying "The model tends to avoid the self-attention mechanism due to the learning difficulty." in lines 127-128. The paper reads as if it's discussing improvements to the transformer and is not related to APR. 2. The ablation study in Table 6 indicates that most of the performance improvements come from the use of fixed sinusoidal positional encoding; a more complete ablation study may be helpful to show the ability of the proposed loss function. 3. Section 5: Activating Self-Attention for MS-APR is overly detailed; most of the information is a repetition of MS-Transformer. Technical Quality: 2 Clarity: 3 Questions for Authors: The results in Table 5 indicate that the loss function is a very hard constraint that limits the purity of the query region to [0.4, 0.6); such a constraint is not suitable for all situations. Do the authors have comments? Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The manuscript is limited in scholarship, missing references to more APR methods like [A]. 
[A] Sc-wls: Towards interpretable feed-forward camera re-localization, ECCV 2022 Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. Novelty** A1. We would like to highlight the novelty of our unique analysis and simple but effective solution. By focusing on a different aspect from previous studies [18-21, 31-34], we identified the problem in self-attention modules of APR models and solved it, which previous works could not. Specifically, they usually explored attention map theoretically to activate self-attention. However, as shown in our experimental results, these conventional solutions did not work for APR. We think that we provide meaningful discussion by discovering the query-key distortion and **statistically and empirically validating the problem using the dataset specific to this task.** Positional encoding is no exception. Previous studies primarily investigated various learnable positional embeddings to assist self-attention mechanism. However, our work found that these variations were not effective for this task and actually increased the difficulty of reasoning with self-attention. Although 2D sinusoidal positional encoding is off-the-shelf, it is sufficient to provide the essential position information for self-attention in APR. We believe that establishing a practical and effective method that improves performance without compromising speed and memory efficiency is one of the most important discussions in MS-APR. From this perspective, our methods contribute to MS-APR by activating deactivated modules perfectly **without causing any slowdown or additional memory usage.** **Q2. Learning difficulty** A2. Rather than query-key distortion occurring because of estimating camera pose, we hypothesize that the problem stems from the **APR task being difficult due to limited data and the need to extrapolate 3D values from 2D inputs.** In other words, self-attention, which has few biases and constraints, may fall back on shortcuts under difficult conditions for reasoning. 
Hence, we suspect that similar issues may arise in other challenging fields with transformer-based models, which struggle with limited data and the need for sophisticated reasoning. However, it is difficult to quantify the learning difficulty. Therefore, we have explored other demanding vision tasks to verify the effectiveness of our method. Please refer to Table A2 and Figure A1 on the global rebuttal page for our experimental results on the DETR-based model in temporal action detection. A similar issue of attention collapse in self-attention can be found, and our proposed auxiliary loss effectively solves the problem, leading to significant performance improvements. **Q3. Ablation study** A3. Please refer to Table A6 on the global rebuttal page for the complete ablation study. **Q4. Hard constraint** A4. Thank you for the great question. As we discussed in the limitation, if only specific parts of an image contain information useful for camera pose estimation, such as when a dynamic moving object occupies a large portion of the image, our method might introduce side effects. However, these cases are not common. Camera pose estimation generally relies on the overall layout of the scene, like the long edges between the ceiling and walls, rather than specific parts of the image. Therefore, it is postulated that **applying a hard constraint to activate most features could be more beneficial overall for the task.** Namely, we presume that our method is effective for APR as it provides stronger regularization compared to other methods aimed at resolving attention collapse. **Q5. Missing references** A5. Thank you for bringing this to our attention. We will make sure to include references to [A] and other relevant works in our revised version. --- Rebuttal Comment 1.1: Title: BA Comment: I have updated the recommendation to BA. This insight has value in APR, echoing insights from other methods like [A] that attending to useful regions helps APR. 
However, a proper literature discussion is needed to help the APR community converge to useful scientific conclusions. [A] Sc-wls: Towards interpretable feed-forward camera re-localization, ECCV 2022 --- Reply to Comment 1.1.1: Comment: We appreciate your positive feedback on the value of our insights in APR and how it echoes other methods like [A]. We agree that a proper literature discussion is crucial for helping the APR community converge on useful scientific conclusions. To address this, we will include a comprehensive discussion, along with references to [A] and other relevant works, in our revised version.
Summary: This paper is about improving self-attention in the transformer architecture for multi-scene APR. They show that the self-attention module in the SOTA transformer model for APR is actually not helping much and offer a potential explanation. The paper claims that the keys and queries end up in different spaces, such that the inner product between keys and queries is very close to zero in most cases, leading the attention to collapse to zero. The paper proposes an additional loss term that encourages the mixing of queries and keys, leading to much more overlap. The addition of this term results in a noticeable improvement in APR metrics across indoor and outdoor datasets. Strengths: + The motivation for the paper is clear and concise and overall well laid out. Table 1 lists metrics with and without self-attention, showing the limited utility. It is shown empirically that the keys and queries are isolated from one another, as well as theoretically explained why this is an issue. The proposed approach is well explained, straightforward, and has the desired effect of quantitatively improving attention and qualitatively improving APR metrics. This will clearly be used with multi-scene transformer-based APR methods going forward because it's simple without requiring any extra data and is effective. + The problem analysis in Section 4 is thorough and useful. The side effects of the commonly used techniques are clearly explained and the argument for why this is a problem is compelling. + There are sufficient implementation details. I could easily reimplement this paper from the information provided. The hyperparameters are shared with [17] so it is unlikely that the performance gain is due to hyperparameter tweaking. + Thorough ablation of different methods for solving the SA problem as described, as well as different positional encoding methods. The methods proposed in the paper are validated as being the best on these datasets. 
+ This is one of the few papers I've seen using attention maps in pose regression where I feel like the visualizations and discussion around attention are actually meaningful. Weaknesses: - Table 5 is difficult to parse. I feel like it could easily be represented with histograms as in Figure 2a. Similarly, I feel the histograms are fairly coarse. The point comes across okay, but I'm not sure why such a coarse histogram would be used. Technical Quality: 4 Clarity: 4 Questions for Authors: In Table 3, MST and +Ours are incorrectly bolded for the position error for the Office scene. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. Histogram** A1. We thought that using coarse bins was suitable for visualizing and showing the general trend differences in purity between the baseline and our method. However, we agree with the reviewer's suggestion to provide a more specific analysis. Accordingly, we present the baseline's purity with a fine-level histogram and include the visualization for our method in Figure A2 on the global rebuttal page. **Q2. Incorrect bold** A2. Thank you for bringing this to our attention. We will make sure to correct it in our revised version.
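The query-key distortion debated in this thread (inner products collapsing toward zero because queries and keys occupy disjoint subspaces) can be illustrated with a small numerical sketch. Note that `qk_collapse_fraction` and `overlap_aux_loss` below are our hypothetical stand-ins, not the paper's purity metric or actual auxiliary loss:

```python
import numpy as np

def qk_collapse_fraction(q, k, tol=1e-2):
    # Fraction of near-zero scaled query-key logits: an illustrative
    # proxy for attention collapse, not the paper's purity metric.
    logits = q @ k.T / np.sqrt(q.shape[-1])
    return float((np.abs(logits) < tol).mean())

def overlap_aux_loss(q, k):
    # Hypothetical auxiliary loss: pull the query and key centroids
    # together so their inner products cannot all collapse to zero.
    return float(((q.mean(axis=0) - k.mean(axis=0)) ** 2).sum())

rng = np.random.default_rng(0)
q = rng.normal(size=(64, 32)); q[:, 16:] = 0.0  # queries live in one subspace
k = rng.normal(size=(64, 32)); k[:, :16] = 0.0  # keys in the orthogonal one
print(qk_collapse_fraction(q, k))      # 1.0: every logit is exactly zero
print(qk_collapse_fraction(q, q) < 0.5)  # healthy self-overlap: True
```

When the two point clouds share no subspace, every attention logit is zero before the softmax, which is exactly the collapse the reviewers and authors discuss.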
Rebuttal 1: Rebuttal: Thank you for reviewing. Responses to the questions can be found under each individual review. The global rebuttal page includes the relevant figures and tables for your reference. Pdf: /pdf/2da9102b9ced4cf6568f9a6f793e4b15bef0a6c9.pdf
NeurIPS_2024_submissions_huggingface
2024
GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping
Accept (poster)
Summary: The authors propose to achieve novel view synthesis from a single image. Contrary to prior work, which warps using a monocular depth estimate and then inpaints, a more flexible architecture is introduced. To do this, a diffusion model is conditioned on warped coordinates, based on the desired pose change and a monocular depth estimate. In addition, the input view (and base coordinate map) is featurized and the diffusion process is allowed to cross attend to these features, which should intuitively allow for attention between corresponding points. The method is then trained end-to-end with a denoising diffusion objective. Experiments show that this approach achieves better FID, PSNR, and LPIPS scores than baselines on RealEstate10k and ScanNet. Ablations justify the choice of warping conditioning. Strengths: The manuscript is well written and well motivated. The related work section is comprehensive, the methods section is straightforward and easy to understand. And experimental results seem quite strong. Weaknesses: I think the paper could benefit significantly from apples-to-apples ablations. For example, I am quite interested in a comparison between the proposed method, and the method without any depth information (I think similar to GeoGPT). Or perhaps the proposed method, and the method without the warping. The authors also claim that the cross-attention “allows the model to inherently decide which regions should rely more on its generative capability and which areas should depend primarily on the information from the input view warping.” However, they provide only a single example of this in figure 4. I think more examples would make for more convincing evidence. I would be interested in a larger scale, more robust study of this hypothesis. For example, this could possibly be automated by computing how closely the cross-attention actually matches warping. The SD warp baseline could possibly be implemented better. 
I believe the inpainting mask covers non-warped black pixels, resulting in many of the unrealistic “black borders” seen in Figure 10 and Figure 5. Perhaps a fairer and more robust baseline could be achieved by just expanding the inpainting mask a bit. Text2room by Höllein et al. seems relevant. Perhaps the authors would consider citing it. Technical Quality: 3 Clarity: 4 Questions for Authors: From the weaknesses section: could apples-to-apples ablations be conducted on components of the method? From the weaknesses section: could more evidence for “cross-attention attends to corresponding points” be presented? One major benefit of the proposed setting, as opposed to warp then inpaint, is that the method should be able to handle view-dependent effects. This may be interesting to investigate. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations and societal impacts are included, and adequately addressed, but are not in the main body of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank your thoughtful review and suggestions! If any of our responses do not adequately address your concerns, please let us know and we will get back to you as soon as possible.

---

### Q: Could more evidence for “cross-attention attends to corresponding points” be presented?

Thank you for the interesting question! To verify this, we have used 1,000 pairs of images to determine (1) how well cross-attention attends to corresponding points, and (2) which is more dominant between self-attention and cross-attention for invisible regions and regions where depth-based correspondence exists. First, the table below shows the distance between the flow map obtained from depth information and the flow map extracted from the cross-attention. Specifically, we extracted the flow map from the cross-attention layer by an argmax operation to see where the model pays the most attention. It demonstrates that as training progresses, the model learns depth-based matching and warping through the cross-attention mechanism. On the other hand, the model where the proposed embedding is replaced with the Plücker camera embedding shows relatively worse performance in terms of matching distance.

| Models | Average distance |
| --- | --- |
| Ours - 2,000 steps | 1.36 |
| Ours - 6,000 steps | 0.97 |
| Ours - 10,000 steps | 0.90 |
| Ours - converged | 0.85 |
| Camera embed. - converged | 0.98 |

Secondly, in the table below, we report which part of the concatenated attention map (the cross-attention part or the self-attention part) is more activated during generation for visible and invisible regions. As exemplified in Figure 4 of the main paper, it shows the cross-view attention part focuses on regions that can be reliably warped from the input view, while the original self-attention part is more attentive to invisible regions requiring generative priors.

| Region | Cross-attn. | Self-attn. |
| --- | --- | --- |
| Visible region | 0.756 | 0.244 |
| Invisible region | 0.417 | 0.583 |

Regarding the cross-attention and self-attention for invisible regions, we empirically found that when generating invisible regions, the model also refers to surrounding visible areas through cross-attention; for instance, to generate the invisible left side of a desk, it needs to refer to the visible part of the desk for a plausible novel view. We will add this analysis.

---

### Q: Apples-to-apples ablations.

Thank you for the suggestion. We report the additional ablation results for the two cases suggested by the reviewer below.

1. **The method without any depth information**: We have trained our model without depth information and the warping process, in which we guided the model with the target camera viewpoint using a dense camera embedding, i.e., Plücker embedding.
2. **The method without the warping**: We have trained the model with depth information of the input view and the camera embedding, but without the warping process.

In Figure B of the global response PDF, we report a performance comparison between these two cases and our full pipeline. Specifically, we measure LPIPS of each baseline with respect to the ratio of invisible regions in the target camera viewpoint, i.e., the difficulty of generating target views. It shows that performance improves in the following order, from best to worst: our full model involving the warping process, the model with both camera and depth information, and the model with camera information only. We appreciate this suggestion and will add it to the camera ready.

---

### Q: Expanding the inpainting mask a bit for the SD-inpainting baseline.

Thank you for pointing this out. In the paper, we followed the masking technique of LucidDreamer [7], applying an 8x8 filter to the mask obtained from the warping process, by expanding the mask up to 8x8 size for mask pixels smaller than 8x8. 
Actually, at the time of paper submission, we experimented with several mask filter sizes for fair evaluation, and the 8x8 filter achieved the best quantitative results. The table below shows the quantitative results when the mask filter size is increased by 50%.

| Filter size | FID ↓ | PSNR ↑ |
| --- | --- | --- |
| 8x8 (paper) | 44.13 | 12.98 |
| 12x12 | 44.19 | 12.88 |

There is a trade-off in mask filter size for the warping-then-inpainting approach: if the filter size is large, it may further ignore pixels from the source view, while conversely, as the reviewer mentioned, artifacts may persist. For details on this, please refer to Figure 11 and L459-L472 in the Appendix. We thank the reviewer for this point. We will include this discussion and improve the qualitative results of the baseline.

---

### Q: Should the method be able to handle view-dependent effects? This may be interesting to investigate.

This is an exciting experiment that we have not investigated! Intuitively, our implicit warping, trained on multi-view datasets where view-dependent effects exist, should better capture these effects compared to explicit geometric warping. These effects could be measured on datasets containing glossy objects, such as the Shiny Blender dataset [A]. We will further investigate this and report it in the camera ready as we cannot report it here due to time limitations during the rebuttal phase.

---

### Q: Citing Text2Room.

Thank you for the feedback. We will add the Text2Room citation in the camera-ready.

---

[A] Ref-NeRF: Structured view-dependent appearance for neural radiance fields. CVPR. 2022.

---

Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions and providing additional results. The rebuttal addresses my concerns, and I would like to keep my review as a weak accept. I have also read the other reviews and rebuttals, and want to add that I do not think 3Fuk's concerns are significant enough to merit a "borderline reject." 
Even if they were, the authors' rebuttal to the reviews seems very reasonable to me, and I believe 3Fuk's rating should be higher.
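The matching analysis in the rebuttal above, extracting a flow map from the cross-attention layer by an argmax over source tokens and comparing it to the depth-derived flow, can be sketched as follows. This is an illustrative re-implementation under our own names (`attn_to_flow`, `mean_flow_distance`); the authors' exact computation may differ:

```python
import numpy as np

def attn_to_flow(attn, h, w):
    # For each target token, take the argmax source token of its
    # cross-attention row and convert that index back to (x, y) pixels.
    idx = attn.argmax(axis=-1)                   # (h*w,) source index per token
    xy = np.stack([idx % w, idx // w], axis=-1)  # source (x, y) per token
    return xy.reshape(h, w, 2).astype(float)

def mean_flow_distance(flow_a, flow_b):
    # Average Euclidean distance between two flow maps, in pixels.
    return float(np.linalg.norm(flow_a - flow_b, axis=-1).mean())

# Toy check: identity attention reproduces the pixel grid exactly.
h = w = 4
ys, xs = np.mgrid[0:h, 0:w]
depth_flow = np.stack([xs, ys], axis=-1).astype(float)
print(mean_flow_distance(attn_to_flow(np.eye(h * w), h, w), depth_flow))  # 0.0
```

In the rebuttal, this per-pixel distance averaged over 1,000 image pairs is the "Average distance" column, so lower values mean the cross-attention has learned depth-based matching.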
Summary: This paper proposes a semantic-preserving generative warping framework to generate high-quality novel views from a single image, which mainly consists of two components: - Conditioning the novel view synthesis on the warped 2D coordinate embedding from the estimated depth map. - Augmenting cross-view attention with self-attention. The proposed method eliminates the artifacts caused by erroneous depths in the warping-and-inpainting pipeline and integrates semantic features from source views, preserving semantic details in generation. Strengths: - Compared to existing warping-and-inpainting methods that condition on explicit depth warping, the proposed warped 2D coordinate embedding forms the correspondence between the reference view and target view implicitly, which helps the network be more robust to the noise in the estimated depth map without losing semantic details. - The paper is well-written, being clear and easy to follow. The technical limitations are also discussed in detail in the Sup. Weaknesses: - The proposed embedding based on depth warping could not benefit the synthesis process when there are large camera movements or occlusions between the input view and the target view. - Although the proposed depth-warping embedding somewhat reduces the influence of the noise in the estimated depth map, compared to explicit pixel warping, the implicit conditioning on the depth embedding also lowers the local preservation ability of the network when synthesizing novel views. Is the model capable of synthesizing consistent novel views which could be used to reconstruct a 3D scene? - The comparison baselines are limited. Plücker embedding [1] is also a dense embedding capable of providing local correspondence, which supports large camera movement and occlusions. The difference between such a dense embedding and the proposed depth warping embedding should be further discussed. 
[1] SPAD : Spatially Aware Multiview Diffusers(CVPR2024)[https://arxiv.org/abs/2402.05235] Technical Quality: 3 Clarity: 3 Questions for Authors: As listed in the Weakness. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No specific limitation and negative societal impact need to be addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank your thoughtful review and suggestions! We give a detailed response to your comments below. If any of our responses do not adequately address your concerns, please let us know and we will get back to you as soon as possible. --- ### Q: The proposed embedding could not benefit the synthesis process when there are occlusions between the input view and the target view. Thank you for pointing this out. As the reviewer noticed, the proposed embedding was designed to enhance implicit warping for co-visible regions between input and target views, rather than specifically addressing occlusion. As shown in Figure 4 of the main paper, where the self-attention part of the diffusion model is more attentive to occluded regions, these occluded areas are generated through the generative prior of the diffusion model while referencing co-visible parts. In the rebuttal, to further validate our model’s performance regarding occlusions, we calculate the ratio of invisible regions in the warping and analyze how performance changes as this ratio increases. Figure A in the global response PDF shows that our method, along with the proposed embedding, demonstrates **better LPIPS values compared to other methods even when the ratio of invisible regions is high**, showing a similar trend to Figure 8 of the main paper. In the case of extremely distant viewpoints where depth-based correspondence does not exist, our method, like any other depth warping-based NVS method [7,31,36], struggles to generate a novel view in a single generation step. We would like to note that in such cases, multi-step progressive generation by re-conditioning on previously generated novel views can be achieved, as similar existing methods [21, 39] have shown (L489). As exemplified in Figure 6 of the main paper, our model also demonstrates robust performance in consistent view generation. --- ### Q: Comparison with Plücker embedding. We appreciate your feedback. 
**The comparison between Plücker embedding and our proposed embedding is presented in Table 2 of the main paper**, where Camera embedding [37] refers to the Plücker embedding (L282). We will clarify this point in the camera-ready. For further comparison, we additionally report the performance comparison of Plücker embedding and our embedding with respect to the ratio of invisible regions. Figure B in the global response PDF shows that the proposed embedding demonstrates better performance consistently as long as there is at least a small overlap between the input view and the target view. As for why the warped coordinate embedding performs better than the Plücker embedding in our setting, we speculate that the model with the proposed embedding benefits from the inductive bias that MDE depth and its warping process provide. In other words, the Plücker embedding is relatively ambiguous, while the warped coordinate embedding provides a direct warping hint. In our opinion, when fine-tuning on multi-view training datasets, which are less diverse than other types of datasets, our embedding with this inductive bias can be more efficient. Thank you for this feedback and we will continue to investigate this! --- ### Q: Is the model capable of synthesizing consistent novel views which could be used to reconstruct a 3D scene? Thank you for the interesting question. To verify this, we reconstructed 3DGS [A] with novel views generated from our model and rendered a video with a camera trajectory that interpolates the given camera viewpoints. We report the video frames in Figure D of the global response PDF, as we are unable to upload the video in the rebuttal. It demonstrates that the 3D scene converges well without being hindered by artifacts. --- [A] 3D Gaussian Splatting for Real-Time Radiance Field Rendering. SIGGRAPH 2023.
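The warped coordinate embedding at the center of this discussion can be sketched geometrically: unproject each source pixel with its estimated depth, transform it by the relative camera pose, and reproject it into the target view. The code below is our simplified pinhole-model reading of that idea, not the authors' implementation:

```python
import numpy as np

def warped_coord_grid(depth, K, T_src_to_tgt):
    # depth: (H, W) source-view depth; K: 3x3 intrinsics;
    # T_src_to_tgt: 4x4 relative camera pose. Returns the (H, W, 2)
    # map of target-view pixel coordinates for each source pixel.
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1)          # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                              # back-projected rays
    pts = np.concatenate([rays * depth[..., None],
                          np.ones((H, W, 1))], axis=-1)          # 3D points (homog.)
    pts_tgt = pts @ T_src_to_tgt.T                               # into the target frame
    proj = pts_tgt[..., :3] @ K.T
    return proj[..., :2] / np.clip(proj[..., 2:3], 1e-6, None)   # warped (x, y)

# Sanity check: with an identity pose, every pixel warps onto itself.
K = np.array([[100.0, 0, 16], [0, 100.0, 16], [0, 0, 1]])
grid = warped_coord_grid(np.ones((32, 32)), K, np.eye(4))
```

Conditioning on this coordinate map, rather than on the explicitly warped pixels, is what lets the diffusion model treat noisy-depth warping as a soft hint instead of a hard constraint.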
Summary: The paper presents a novel framework called GenWarp, which aims to generate new views from a single input image while preserving the semantic content of the original view. This is achieved by leveraging a generative process that incorporates both self-attention and cross-view attention mechanisms conditioned on warping signals. The proposed approach demonstrates superior performance compared to existing methods, especially in scenarios with challenging viewpoint changes, and exhibits good generalization to out-of-domain images. Strengths: 1. The paper introduces an approach for single-shot novel view synthesis by combining self-attention and cross-view attention mechanisms to preserve semantic details. 2. The proposed method outperforms existing techniques in generating high-quality novel views, particularly for challenging viewpoint changes. 3. GenWarp shows generalization capabilities, performing well on out-of-domain images, which indicates its applicability to various scenarios. Weaknesses: 1. The paper could further highlight its novelty by providing a more comprehensive comparison with a wider range of state-of-the-art methods, including both classical and recent approaches. Additionally, the authors should discuss any potential limitations of their approach in terms of scalability or adaptability to different types of scenes. 2. The methodology section could benefit from additional diagrams and flowcharts that illustrate the workflow and attention mechanisms in more detail. Including intermediate results and step-by-step visualizations would help readers better understand the progression from the input image to the generated novel view. 3. While the paper provides detailed instructions for reproducing the experiments, the code and data are not made publicly available at the time of submission, which can hinder reproducibility efforts. 4. 
The performance of GenWarp is highly dependent on the quality of the datasets used for fine-tuning, which could limit its effectiveness if high-quality multi-view datasets are not available. 5. The method struggles with generating novel views when the camera viewpoints are extremely distant, indicating a limitation in handling very large viewpoint changes. 6. The reference list could be updated to include more recent advancements in the field, particularly those published in the last year. Additionally, a more detailed comparative analysis of the strengths and weaknesses of related methods would be beneficial. 7. What is the advantage of a diffusion-model-based pipeline over 3D Gaussian [A,B] or NeRF [C,D] based pipelines? Can the authors discuss existing methods in more detail? [A] Yu, Zehao, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. "Mip-splatting: Alias-free 3d gaussian splatting." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19447-19456. 2024. [B] Yan, Zhiwen, Weng Fei Low, Yu Chen, and Gim Hee Lee. "Multi-scale 3d gaussian splatting for anti-aliased rendering." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20923-20931. 2024. [C] Deng, Congyue, Chiyu Jiang, Charles R. Qi, Xinchen Yan, Yin Zhou, Leonidas Guibas, and Dragomir Anguelov. "Nerdi: Single-view nerf synthesis with language-guided diffusion as general image priors." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20637-20647. 2023. [D] Yang, Yifan, Shuhai Zhang, Zixiong Huang, Yubing Zhang, and Mingkui Tan. "Cross-ray neural radiance fields for novel-view synthesis from unconstrained image collections." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 15901-15911. 2023. Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the weakness section. 
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank your thoughtful review and suggestions! If any of our responses do not adequately address your concerns, please let us know and we will get back to you. --- ### Q: Handling extremely distant viewpoint changes. Thank you for pointing this out. Our key contribution is effectively using estimated noisy depth signals in NVS diffusion models. While addressing depth-based correspondence for *extremely* distant viewpoints is a common issue for depth-based methods and beyond our scope, our approach outperforms others by implicitly using depth warping signals, even with significant viewpoint changes, as shown in Figure 8 of the main paper. To further demonstrate this, we analyze how the performance changes as the ratio of pixels invisible from the input view increases due to viewpoint changes. **As shown in Figure A of the global response PDF, our method demonstrates the best performance compared to the other methods even as the invisible ratio increases.** Finally, we'd like to note that even for extreme viewpoints where depth-based correspondence doesn't exist, progressive generation by re-conditioning on previously generated novel views can be achieved, as similar existing methods [21, 39] have shown. As exemplified in Figure 6 of the main paper, our model also demonstrates robust performance in consistent view generation. We will faithfully reflect this discussion in the camera-ready version. --- ### Q: What is the advantage of a diffusion-based pipeline over a 3DGS/NeRF-based pipeline? Conventional 3DGS/NeRF pipelines [A,B,D] aim to perform 3D reconstruction using numerous input views, synthesizing novel views through interpolation between input views. However, these methods struggle to synthesize novel views in few-shot scenarios. In such cases, synthesizing a novel view is closer to a generation problem than a reconstruction problem. Generalizable NeRF/3DGS pipelines [E,F] learn scene priors through training. 
In few-shot scenarios, these methods show improved reconstruction performance. However, these works do not explicitly consider generative modeling. Although showing superior reconstruction performance, they show limited performance in synthesizing large unseen areas in novel views, e.g., extrapolation. GenWarp and other recent methods [7,15,22,33,C] using diffusion models formulate single-shot novel view synthesis as a conditional generation problem, rather than a reconstruction-based approach. Consequently, they show superior performance with extremely limited input views, e.g., single-shot scenarios. --- ### Q: More comprehensive comparison. As suggested by the reviewer, we provide comparisons with three additional recent/classical methods (Nerdi [D], PixelNeRF [E], vanilla NeRF) in Figure C of the global response PDF. For Nerdi [D], due to the lack of available code, we include curated qualitative results on the DTU dataset from their paper. Our result shows **a non-blurry, clear novel view compared to other methods**. As their method is optimization-based and takes several hours per scene, we will thoroughly include quantitative comparisons, which cannot be done during the rebuttal phase, in the camera ready. Additionally, for comparison with recent methods, we would like to note that we have included a comparison with a warping-then-inpainting strategy with inpainting models [30], which is commonly adopted in recent state-of-the-art pipelines [7,26,36] for single-shot 3D generation. > Potential limitations of others in terms of scalability or adaptability? For other NVS generative models [15,31], when these methods are evaluated on different datasets, i.e. in out-of-domain scenarios, they show decreased performance, as shown in Table 1 of the main paper. We speculate that it is because a single dataset usually consists of similar scenes, so the models struggle with different types of scenes unseen during training. 
Other approaches [7,26,36] that use warping-then-inpainting with pretrained T2I diffusion models [30] maintain good scalability as they directly utilize the large-scale T2I models without fine-tuning. However, they show unstable results, especially when the target camera viewpoint is far (L106, L49), due to warping errors caused by noisy depths, as exemplified in Figure 2 of the main paper. We address these limitations by combining the best of both worlds: GenWarp inherits the generalization capabilities of T2I models while refining the noisy depth-warping artifacts. --- ### Q: Dependent on the quality of datasets. Indeed, our model’s performance is dependent on dataset quality due to its learning-based nature. However, our model inherits the generalization capabilities of T2I models, which are trained on large, high-quality image corpora. Furthermore, our generative warping approach introduces an inductive bias coming from MDE depth and its warping process, enabling efficient fine-tuning with the training datasets. As a result, it shows superior performance on the same training dataset, as demonstrated by the out-of-domain performance in Table 1 of the main paper. --- ### Q: Additional diagram. Thank you for the feedback. We provide a detailed diagram and intermediate results in Figure E of the global response PDF, which will be included in the camera ready. --- ### Q: While the paper provides detailed instructions for reproduction, code and data are not made publicly available. Thank you for recognizing our effort for reproducibility. We will make sure to release all the code and data in the camera ready. --- ### Q: Reference list update. We thank you for the feedback and will thoroughly update our reference list with the papers [A,B,C,D], as well as recent/concurrent papers. We will also supplement our Related Work section accordingly. --- [E] pixelnerf: Neural radiance fields from one or few images. CVPR. 
2021 [F] pixelsplat: 3d gaussian splats from image pairs for scalable generalizable 3d reconstruction. CVPR. 2024.
Summary: This paper proposes a novel single-shot novel view synthesis framework based on a pretrained T2I diffusion model. Instead of directly warping pixels between the input view and the novel view, an implicit approach is proposed to conduct the geometric warping operation. Cross-view attention is used to eliminate the artifacts caused by erroneous depths and to integrate semantic features from source views, preserving semantic details in generation. Extensive experiments prove the effectiveness of the proposed method. Strengths: 1. The implicit geometric warping approach is effective in addressing the ill-warped problem and the missing-original-semantics problem encountered in explicit warping methods. 2. The cross-view attention can provide more useful information for novel view generation. 3. The qualitative and quantitative experiments on RealEstate10K, ScanNet, and in-the-wild images show the proposed method outperforms SOTA methods in both in-domain and out-of-domain scenarios. Weaknesses: 1. Although the cross-view attention strategy is effective for fusing features, it may not be novel enough; there are many similar operations in the video generation area. 2. The details of fine-tuning the T2I model are not clear. Technical Quality: 4 Clarity: 4 Questions for Authors: N/A Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: This method is restricted to depth-based correspondence of two views, which limits its application in scenes where depth correspondence between two views is missing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your thoughtful review and suggestions! We give a detailed response to your comments below. If any of our responses do not adequately address your concerns, please let us know and we will get back to you as soon as possible.

---

### Q. Novelty regarding cross-view attention.

Thank you for the feedback. We agree that cross-view attention itself is not newly proposed in our paper — it has been used in the 3D/video generation area. However, we would like to emphasize that **the novelty of our paper lies not so much in the use of cross-view attention itself, but rather in drawing inspiration from the intuitive connection between cross-view attention and warping operations.** This led us to **guide the attention modules within the diffusion model to emulate geometric warping**. As Table 2 in the main paper demonstrates, naively using cross-view attention yields limited performance; we therefore proposed to use warped coordinate grids as positional embeddings to support our generative warping. Our other strategy is aggregating the cross-view attention with the self-attention in the diffusion model at once, instead of inserting new cross-view attention layers. With this strategy, we found that the self-attention effectively finds where to refine, compensating for warping errors coming from noisy MDE depth, as exemplified in Fig. 4 of the main paper. By doing so, we overcome the performance constraints inherent in existing two-step approaches that rely on the warping-then-generation paradigm. We believe these findings possess robust merits that contribute to this field.

---

### Q: The details of fine-tuning the T2I model.

Thank you for pointing this out. As described in L445-449, we initialize our two networks, the semantic preserver and the diffusion U-Net, with Stable Diffusion v1.5 [30], and fine-tune the networks on 2×H100 80GB GPUs with a batch size of 48, at resolutions of 512 × 384 and 512 × 512.
We used a learning rate of 1.0e-5, the same value used when training Stable Diffusion, and kept all other hyperparameters at the values used for Stable Diffusion. Additionally, the coordinate embeddings are passed through 3 convolutional layers and added to the input of the diffusion U-Net and the semantic preserver network. All parameters of our model are trained end-to-end through the diffusion loss shown in Equation 6 of the main paper. We will clarify these points in the camera-ready version. If we have not addressed any unclear aspects, please let us know, and we will reply accordingly.

---

### Limitation section.

As the reviewer mentioned, we discuss the viewpoint limitation due to depth-based correspondence in the Limitation section; as this is not the goal of our paper, our method does not explicitly consider this common issue in depth-based methods. However, we would like to note that (1) as shown in Figure 8 of the main paper, our method demonstrates superior performance compared to other methods even when viewpoint changes are large, as long as there is at least a small overlap between the views, and (2) even for extremely distant viewpoints, multi-step progressive generation can be achieved by re-conditioning on previously generated novel views (L487-L489), as exemplified in Figure 6 of the main paper.
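As a toy illustration of the warped coordinate grids used as positional embeddings in the discussion above, here is a minimal numpy sketch. The geometry is deliberately simplified to a pure horizontal camera translation with a disparity-style shift; the function name, dimensions, and parameters (`fx`, `baseline`) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def warped_coord_grid(depth, fx, baseline, H=4, W=6):
    """Toy sketch: warp a normalized 2D coordinate grid from a source view
    toward a target view under a pure horizontal camera translation.
    The warped grid plays the role of the positional embedding fed to the
    network (simplified geometry; illustrative only)."""
    # source-view pixel grid, normalized to [-1, 1]
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    # disparity from depth: closer pixels shift more in the target view
    disparity = fx * baseline / depth            # (H, W)
    xw = xs - disparity * (2.0 / W)              # shift in normalized units
    return np.stack([xw, ys], axis=0)            # (2, H, W) coordinate embedding

depth = np.full((4, 6), 10.0)
depth[:, :3] = 2.0                               # left half is closer to the camera
grid = warped_coord_grid(depth, fx=1.0, baseline=1.0)
print(grid.shape)                                # → (2, 4, 6)
```

Intuitively, the network receiving this grid as a positional signal can attend from a target-view location back to the source-view location it warps from, which is the connection between cross-view attention and warping that the rebuttal describes.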
Rebuttal 1: Rebuttal: # General Response We would like to first thank the reviewers for the helpful suggestions and constructive reviews. We are greatly encouraged by their positive assessment regarding soundness (1 excellent, 3 good), contribution (4 good), and presentation (2 excellent, 1 good) of our work. They acknowledge that our generative geometric warping is **effective in addressing ill-warped problems** (2Nrv), **preserving semantic details** (2Nrv,3Fuk), exhibiting **generalization capabilities** (3Fuk), our warped 2D coords embedding helps the network to be **more robust to the noisy predicted depth maps** (vdto), and our manuscript is **well written** (Cjbv, vdto) and **well motivated** (Cjbv). We also thank the reviewers for recognizing that our **experimental results are strong** (Cjbv), outperforming existing methods in **both in-domain and out-of-domain scenarios** (2Nrv) under **challenging viewpoint changes** (3Fuk). --- In the rebuttal, we have conducted the following additional experiments to address the reviewers' questions and suggestions: - We provide further analysis on performance changes as the ratio of invisible regions varies. (Figure A in the attached PDF file) - We conduct additional ablation studies on camera (Plücker) embedding and depth embedding. (Figure B in the attached PDF file) - To verify whether cross-view attention mimics depth-based warping, we analyze the distance between GT flow and flow extracted from the cross-view attention. (First table in the response to Reviewer Cjbv) - We analyze the impact of cross-view attention and self-attention on visible and invisible regions. (Second table in the response to Reviewer Cjbv) - For extensive comparison, we provide a comparison with existing NeRF-based pipelines. (Figure C in the attached PDF file) - We provide results of 3D scene (3DGS) reconstruction with novel views generated by our method. (Figure D in the attached PDF file) - We provide intermediate generation results. 
(Figure E in the attached PDF file) Pdf: /pdf/514493e26ec675c480cfedf6b93c861445e6971d.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Episodic Future Thinking Mechanism for Multi-agent Reinforcement Learning
Accept (poster)
Summary: This paper introduces an Episodic Future Thinking (EFT) mechanism for reinforcement learning (RL) agents to enhance decision-making in multi-agent scenarios. The EFT mechanism allows an agent to predict the future actions of other agents by inferring their characters from observation-action trajectories. This capability is evaluated in multi-agent autonomous driving scenarios and multiple particle environments, demonstrating that EFT leads to higher rewards compared to existing multi-agent RL solutions. Strengths: The integration of episodic future thinking in RL is a significant contribution, providing a new perspective on how agents can predict and simulate future scenarios to improve decision-making. Besides, the paper is well-structured, clearly explains the methodology, experiments, and results, and provides comprehensive evaluations in diverse experiments, including an ablation study, showcasing the robustness of the proposed method. Weaknesses: 1. The paper does not sufficiently address the computational overhead of implementing the EFT mechanism, especially with varying data sizes. 2. I suggest that the authors also evaluate SOTA baselines in the experiment investigating the effects of trajectory noise, to compare the sensitivity of the proposed method. 3. The approach assumes that character traits can be inferred accurately, which might not hold in highly dynamic environments with rapidly changing behaviors. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How does the performance of the proposed Episodic Future Thinking mechanism scale with an increasing number of agents? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The approach assumes that character traits can be inferred accurately, which might not hold in highly dynamic environments with rapidly changing behaviors.
The limitation of having only one EFT-enabled agent in experiments raises questions about the method's effectiveness in scenarios where multiple agents are equipped with EFT capabilities. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review and insightful feedback about this work. Below, we describe how we have revised the paper to address the reviewer's concerns and questions.

- **Computational complexity**

We agree that considering computational complexity is crucial for practical solution development. To address the reviewer's concern, we have investigated it using a big $\mathcal O$ analysis of the proposed solution with our setup. Below are the notations used in this analysis:

1. $d$: the dimension of the input
2. $|E|$: the number of agents
3. $|E_{obs}|$: the number of observable agents

Before looking at the specific analysis, **the complexity of EFTM is $\mathcal O(|E_{obs}|\times d^2)$ for execution**, while a vanilla policy requires $\mathcal O(d^2)$. This implies that **the maximum time complexity of EFTM is bounded, regardless of the environment's size**, since the maximum number of observable agents is fixed. We show how to calculate the complexity of the basic policy operations in the table below, which demonstrates that the complexity of these operations is $\mathcal{O}(d^2)$. For EFT prediction, our solution requires $|E_{obs}|$ times the computational cost for others' action prediction. Therefore, the complexity of the proposed solution is $\mathcal O(|E_{obs}|\times d^2)$ for execution.
|Computation|Equation|Matrix Size|Complexity|
|-|-|-|-|
|The $1^{\mathrm{st}}$ policy layer|$\mathrm{out}_1 = \sigma_1(W_1\cdot x_t+b_1)$|$W_1 \in \mathbb{R}^{2d\times d}, x_t \in \mathbb{R}^{d}$|$2d^2$|
|The $2^{\mathrm{nd}}$ policy layer|$\mathrm{out}_2 = \sigma_2(W_2\cdot \mathrm{out}_1+b_2)$|$W_2 \in \mathbb{R}^{4d\times 2d}, \mathrm{out}_1 \in \mathbb{R}^{2d}$|$8d^2$|
|The output layer|$a_t = \tanh(W_3\cdot\mathrm{out}_2 + b_3)$|$W_3 \in \mathbb{R}^{2 \times 4d}, \mathrm{out}_2 \in \mathbb{R}^{4d}$|$8d$|
|**Total of policy**|-|-|$\mathcal{O}(d^2)$|

Our solution focuses on predicting the actions of agents within the observation area, not the entire state. This means that regardless of the environment's size, the number of observable agents is limited. These computations can be performed in parallel, making our analysis more relevant and informative than simply comparing execution times of single operations.

Next, our solution has advantages in terms of training time. To elaborate, the wall-clock times of EFTM and MARL baselines (averaging MAPPO, MADDPG, and QMIX) are as follows: **for autonomous driving tasks, approximately $3.5$ and $17$ hours; for MPE tasks, approximately $1.7$ and $2.5$ hours**. This significant gap stems from our objective of training a single multi-character policy that can work in any multi-agent interaction; EFTM trains a single agent rather than multiple agents.

---

- **Comparison of the trajectory noise sensitivity with baseline**

Thank you for your comments, which helped us improve the experimental section. To address the reviewer's concern, we have run additional experiments. Given time constraints, we only performed additional investigation on MADDPG, which showed the second-best performance in our selected demonstration tasks. Below are the additional results ($4$ seeds).
|std of trajectory noise|0.0|0.01|0.05|0.1|0.2|0.3|
|-|-|-|-|-|-|-|
|MADDPG (test noise)|2763 $\pm$ 126|2530 $\pm$ 439|1891 $\pm$ 892|1522 $\pm$ 1039|837 $\pm$ 711|335 $\pm$ 693|
|MADDPG (training noise)|2763 $\pm$ 126|2891 $\pm$ 360|2610 $\pm$ 402|2133 $\pm$ 519|1258 $\pm$ 1011|1341 $\pm$ 955|
|EFTM|2899 $\pm$ 217|2833 $\pm$ 316|2841 $\pm$ 283|2795 $\pm$ 613|2437 $\pm$ 812|1535 $\pm$ 1023|

The empirical result confirms that **the proposed solution is more robust than MADDPG**. We conjecture that the proposed solution might alleviate noise effects through character classification, since noise information is not used for direct action computation. Details of the two types of baselines are as follows.

1. Test noise without training noise: this case adds the noise to the MARL agents' observations directly. Direct noise in policy calculations quickly leads to the collapse of the entire multi-agent system.
2. Training noise without test noise: this case adds the trajectory noise in the training phase, *i.e.*, when building the policy and team value function. A small amount of noise during the learning process serves to increase the robustness of the overall system, but as the noise increases, the instability of the learning process increases.

Please note that the considered standard deviations are not trivial, given that our observation range is $[-1, 1]$. Specifically, we provide the signal-to-noise ratio with a quality level for each standard deviation. We label the quality levels following [15].

|std of trajectory noise|0.01|0.05|0.1|0.2|0.3|
|-|-|-|-|-|-|
|signal-to-noise ratio|34.7dB|21.3dB|14.7dB|9.2dB|4.7dB|
|quality level|Excellent|Good|Fair|Poor|Poor|

---

Rebuttal Comment 1.1: Title: Thank you for the responses Comment: Thank you to the authors for their responses. Most of my questions have been addressed. After considering your responses and the feedback from other reviewers, I will maintain my evaluation.
--- Reply to Comment 1.1.1: Comment: Thank you once again for your active engagement and for taking time and effort into this discussion! --- Rebuttal 2: Comment: - **Limitation in highly dynamic environments with rapidly changing behaviors** We appreciate the reviewer's insightful comment regarding the challenges of modeling and inference with policy changes over time. As the EFT agent should continuously adapt to evolving strategies and behaviors, the complexity of modeling and inferring these changes increases significantly. This issue is further compounded as the number of agents grows, potentially exacerbating the intractability of the problem. The dynamic nature of policy introduces additional layers of complexity, making it increasingly difficult to predict and manage the interactions among agents effectively. This is an ultimate goal for the research community, and we consider it as future work. Although we did not address rapidly changing behavior in this study, our work demonstrates promising results, such as successful interactions with changes in surrounding characters across different episodes. In accordance with NeurIPS policy, we would like to clarify that we have already discussed this limitation in our manuscript. --- Once again, we deeply appreciate the insightful comments and suggestions. We hope our clarification and additional empirical studies could address the concerns raised by the reviewer. Should there be any leftover questions, please let us know and we will make every effort to address them during the subsequent discussion period.
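The per-agent execution cost claimed in the big-$\mathcal O$ analysis earlier in this rebuttal can be sanity-checked with a short numpy sketch of the three-layer policy. The layer dimensions follow the complexity table; `tanh` stands in for the unspecified activations $\sigma_1, \sigma_2$, and all variable names are illustrative assumptions:

```python
import numpy as np

def policy_forward(x, W1, b1, W2, b2, W3, b3):
    """Three-layer policy from the complexity table:
    out1 = sigma1(W1 x + b1), out2 = sigma2(W2 out1 + b2), a = tanh(W3 out2 + b3)."""
    out1 = np.tanh(W1 @ x + b1)      # W1 is (2d, d): 2d^2 multiplies
    out2 = np.tanh(W2 @ out1 + b2)   # W2 is (4d, 2d): 8d^2 multiplies
    return np.tanh(W3 @ out2 + b3)   # W3 is (2, 4d): 8d multiplies

d = 8
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2 * d, d)), np.zeros(2 * d)
W2, b2 = rng.normal(size=(4 * d, 2 * d)), np.zeros(4 * d)
W3, b3 = rng.normal(size=(2, 4 * d)), np.zeros(2)
a = policy_forward(rng.normal(size=d), W1, b1, W2, b2, W3, b3)

# total multiplies: 2d^2 + 8d^2 + 8d, i.e. O(d^2) per agent;
# predicting actions for |E_obs| observable agents costs O(|E_obs| * d^2)
flops = 2 * d * d + 8 * d * d + 8 * d
print(a.shape, flops)   # → (2,) 704
```

Since the per-agent cost is a fixed polynomial in $d$ and $|E_{obs}|$ is bounded by the observation area, the total execution cost stays $\mathcal O(|E_{obs}|\times d^2)$ regardless of environment size, matching the analysis above.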
Summary: Introduce an episodic future thinking(EFT) mechanism, which, along with the mechanism of counterfactual, is a cognitive activity of human beings. The proposed algorithm predicts future observation transitions and uses them to determine the next steps of action. Although the maximum likelihood method is also used to infer a character c, I do not believe that this paper has made a significant contribution. Strengths: Introduce an episodic future thinking(EFT) mechanism, which, along with the mechanism of counterfactual, is a cognitive activity of human beings. The proposed algorithm predicts future observation transitions and uses them to determine the next steps of action. Although the maximum likelihood method is also used to infer a character c, I do not believe that this paper has made a significant contribution. Weaknesses: The proposed algorithm predicts future observation transitions and uses them to determine the next steps of action. Although the maximum likelihood method is also used to infer a character c, I do not believe that this paper has made a significant contribution. Technical Quality: 2 Clarity: 3 Questions for Authors: No Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's time and effort. Here are our answers to the reviewer's comments.

- **Our contribution and motivation**

We would like to clarify that our contribution is not trivial. To emphasize it, we summarize the novelty as follows. **We introduce a novel social decision-making approach by coupling character inference with upcoming-future prediction**. Each functionality has been studied separately in different research streams, but they have never been jointly considered for social decision-making scenarios. **Moreover, their effectiveness in a scenario where heterogeneous agents coexist has never been experimentally demonstrated**. Combining such functionality in a social decision-making framework is not straightforward or incremental, which is why we put considerable effort into clearly explaining the proposed solution. Additionally, all other reviewers acknowledged that our method is novel, with detailed points:

- Reviewer Cur7: A policy that can handle diverse agent characters is a significant contribution. This addresses a real challenge in multi-agent systems where agents may have different goals or behavioral traits.
- Reviewer ukHS: The integration of episodic future thinking in RL is a significant contribution, providing a new perspective on how agents can predict and simulate future scenarios to improve decision-making.
- Reviewer RgnJ: The multi-character policy handles both continuous and discrete action spaces, expanding the applicability of RL methods to more complex scenarios.
- Reviewer pTth: The cognitive motivation makes a lot of sense, and broadly, modeling diverse other-agent motives seems like a promising direction that has not received much attention. This work addresses a significant topic in the advancement of the MARL domain.

As Reviewer Cur7 mentioned, handling diverse characters is a substantial challenge in multi-agent systems where agents may have different goals or behavioral traits.
While we sincerely want to provide more detailed responses, there is limited information about the discussion points, so we could not elaborate further. Please feel free to ask any additional questions the reviewer may have, and we will be happy to answer them. --- Rebuttal Comment 1.1: Comment: Thank you for raising the scores. We confirmed that the reviewer changed scores from 4 to 5. If the reviewer could provide an opinion on what additional work is needed for us to move beyond the borderline score, we would greatly appreciate it!
Summary: This paper introduces an Episodic Future Thinking (EFT) mechanism for reinforcement learning agents in multi-agent systems with heterogeneous characters. The authors propose a multi-character policy and a character inference module to enable agents to predict other agents' actions and simulate future scenarios. The EFT mechanism allows agents to make adaptive decisions by considering the predicted future state. The approach is evaluated in autonomous driving scenarios and multiple particle environments, demonstrating improved performance compared to existing multi-agent and model-based reinforcement learning algorithms. Strengths: - A policy that can handle diverse agent characters is a significant contribution. This addresses a real challenge in multi-agent systems where agents may have different goals or behavioral traits. - The authors test their approach across various levels of character diversity and compare it with multiple reasonable baselines. This thorough evaluation strengthens the validity of their claims. - The method's effectiveness is demonstrated in both autonomous driving and multiple particle environments, suggesting potential applicability across different domains. - The paper is generally well-structured and clearly written. Weaknesses: - The experiments only consider one EFT agent among non-EFT agents. - The results in Figure 4 are not statistically significant. There is also no standard deviation for the baseline. - No standard deviations provided in Table 2 and 3. - The difference between training and execution wasn't clear until it was mentioned in the conclusion. - The paper lacks a detailed analysis of the computational costs associated with the EFT mechanism, particularly as the number of agents or environmental complexity increases. - While the paper mentions POMDP, it doesn't deeply explore how partial observability affects the performance of the EFT mechanism. 
- The improvement over baseline methods, while present, is not consistently substantial across all scenarios, particularly in the multiple particle environments. Technical Quality: 3 Clarity: 3 Questions for Authors: - “In contrast, our solution trains the policy with only local observations and actions, which can be a more practical solution.” But you still need to train the character identification model and multi-character policy, which requires access to the other observations too? - “In addition, the standard deviation of model-based RL algorithms is much larger than the proposed solution, which shows the difficulty of learning a dynamic model without understanding others in multi-agent systems.“. What standard deviations are the authors referring to? - How does the EFT mechanism perform when all agents in the system are equipped with this capability? Does this lead to emergent behaviors or potential instabilities? - What is the scalability of the proposed method? How does its performance and computational cost change as the number of agents increases? - How robust is the character inference module to noisy or adversarial behaviors from other agents? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The current study only considers scenarios with a single EFT agent, which doesn't fully capture the dynamics of multiple predictive agents interacting. In scenarios where multiple agents use EFT, there's a potential for feedback loops or cascading effects that could lead to suboptimal or unstable system behavior. This isn't explored in the current work. - The paper doesn't address the potential increased computational requirements of the EFT mechanism compared to simpler approaches, which could be a limitation in resource-constrained environments. A table or figure comparing wall-clock time would be insightful. 
- While the method is tested in two different environments, its performance in more complex, dynamic, or partially observable environments remains unexplored. This paper presents an interesting approach to multi-agent reinforcement learning by incorporating episodic future thinking. While the idea is novel and shows some promise, the lack of statistically significant increase in performance and lack of comparison with regards to wall-clock time in the current study, raise concerns about its broader applicability and impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful for the reviewer's detailed feedback and constructive suggestions for improving our work. In response, we outline the revisions made to address the reviewer's concerns and questions. We have marked the weakness, question, and limitation numbers associated with each discussion section.

- **Computational complexity (W5, Q4, L2)**

We agree that considering computational complexity is crucial for practical solution development. To address the reviewer's concern, we have investigated it using a big $\mathcal O$ analysis of the proposed solution with our setup. Below are the notations used in this analysis:

1. $d$: the dimension of the input
2. $|E|$: the number of agents
3. $|E_{obs}|$: the number of observable agents

Before looking at the specific analysis, **the complexity of EFTM is $\mathcal O(|E_{obs}|\times d^2)$ for execution**, while a vanilla policy requires $\mathcal O(d^2)$. This implies that **the maximum time complexity of EFTM is bounded, regardless of the environment's size**, since the maximum number of observable agents is fixed. We show how to calculate the complexity of the basic policy operations in the table below, which demonstrates that the complexity of these operations is $\mathcal{O}(d^2)$. For EFT prediction, our solution requires $|E_{obs}|$ times the computational cost for others' action prediction. Therefore, the complexity of the proposed solution is $\mathcal O(|E_{obs}|\times d^2)$ for execution.
|Computation|Equation|Matrix Size|Complexity|
|-|-|-|-|
|The $1^{\mathrm{st}}$ policy layer|$\mathrm{out}_1 = \sigma_1(W_1\cdot x_t+b_1)$|$W_1 \in \mathbb{R}^{2d\times d}, x_t \in \mathbb{R}^{d}$|$2d^2$|
|The $2^{\mathrm{nd}}$ policy layer|$\mathrm{out}_2 = \sigma_2(W_2\cdot \mathrm{out}_1+b_2)$|$W_2 \in \mathbb{R}^{4d\times 2d}, \mathrm{out}_1 \in \mathbb{R}^{2d}$|$8d^2$|
|The output layer|$a_t = \tanh(W_3\cdot\mathrm{out}_2 + b_3)$|$W_3 \in \mathbb{R}^{2 \times 4d}, \mathrm{out}_2 \in \mathbb{R}^{4d}$|$8d$|
|**Total of policy**|-|-|$\mathcal{O}(d^2)$|

Our solution focuses on predicting the actions of agents within the observation area, not the entire state. This means that regardless of the environment's size, the number of observable agents is limited. These computations can be performed in parallel, making our analysis more relevant and informative than simply comparing execution times of single operations.

Next, our solution has advantages in terms of training time. To elaborate, the wall-clock times of EFTM and MARL baselines (averaging MAPPO, MADDPG, and QMIX) are as follows: **for autonomous driving tasks, approximately $3.5$ and $17$ hours; for MPE tasks, approximately $1.7$ and $2.5$ hours**. This significant gap stems from our objective of training a single multi-character policy that can work in any multi-agent interaction; EFTM trains a single agent rather than multiple agents.

---

- **Accessibility of others' trajectories (W6, Q1, Q5)**

Thank you for the thorough review and insightful comments. In this response, we would like to address the accessibility assumption in MARL and the alternatives in terms of the POMDP setup.

(1) *Accessibility assumption*

We would like to explain the details of multi-character policy training and our considerations in the character inference process. **We build a multi-character policy without access to others' observations**. For training the multi-character policy, we follow this process:

1. Sample the character for an EFT agent. This step allows the agents to experience various characters. Each character component can be sampled from its pre-defined range.
2. Run the episode and train the policy based on the RL process.
3. Iterate steps 1-2 at every episode.

Regarding the character inference process, **while it requires others' observations, this does not imply direct access to them**. The agent can collect them by observing others' trajectories. To address such practical concerns, we have studied the performance robustness with respect to the trajectory noise level in the discussion below.

(2) *Robustness against trajectory noise in the POMDP setup*

We would like to clarify that **one core aspect of a partially observable MDP is the noise level [14]**. We provided a study of how robust our solution is as the noise level of the observed trajectory increases in our original manuscript (Section 5.2). In addition, we report the performance of EFTM matched with each standard deviation of additive Gaussian noise, as follows.

|Std of additive Gaussian noise|0.01|0.05|0.1|0.2|0.3|
|-|-|-|-|-|-|
|Inference Accuracy|99.6 $\pm$ 0.01|98.3 $\pm$ 0.07|91.8 $\pm$ 0.23|81.1 $\pm$ 0.52|69.5 $\pm$ 0.66|
|Cumulative reward of EFTM|2833 $\pm$ 316|2841 $\pm$ 283|2795 $\pm$ 613|2237 $\pm$ 812|1435 $\pm$ 1023|

We believe that this result provides valuable insight into the expected performance of the proposed solution, particularly in scenarios where observation prediction technology is deployed. Although we only consider the noise level without an adversarial-agent scenario, we politely assert that such a scenario is beyond our scope. Please note that the considered standard deviations are not trivial, given that our observation range is $[-1, 1]$. Specifically, we provide the signal-to-noise ratio with a quality level for each standard deviation. We label the quality of each level based on [15].
|std of trajectory noise|0.01|0.05|0.1|0.2|0.3|
|-|-|-|-|-|-|
|signal-to-noise ratio|34.7dB|21.3dB|14.7dB|9.2dB|4.7dB|
|quality level|Excellent|Good|Fair|Poor|Poor|

---

Rebuttal 2: Comment:

- **Experimental results as the number of EFT agents increases (W1, L1)**

Although we have discussed this potential weakness in our limitation section, we additionally explored how our EFTM behaves as the number of EFT agents increases. The additional empirical results are as follows.

|Ratio of EFT agents|Baseline (single EFT)|10%|20%|30%|40%|50%|60%|
|-|-|-|-|-|-|-|-|
|Performance|2899 $\pm$ 217|2910 $\pm$ 193|2818 $\pm$ 316|2376 $\pm$ 991|2041 $\pm$ 752|1650 $\pm$ 548|1728 $\pm$ 683|

The empirical result indicates that EFTM performance remains robust when interaction between EFT agents is infrequent, e.g., around $20\%$, and that performance gradually declines thereafter. As per our expectations, potential instability arises when a larger proportion of agents in the system are equipped with EFT simultaneously. This is similar to ongoing debates in theory of mind (ToM) research, where the complexity and depth of understanding others' mental states—from zero- to higher-order ToM—are crucial points of discussion. Determining the optimal level of complexity for specific scenarios is an interesting direction for future research and could offer valuable insights into EFTM.

---

- **Standard deviation for main results and additional experiments on SMAC (W3, W7, Q2, L3)**

We apologize for the inconvenience. We wanted to report it in the main body, but due to the page limit, we included it in the appendix of the original manuscript. Our appendix includes tables with the standard deviations, as follows.
|Character diversity|n=1|n=2|n=3|n=4|n=5|
|-|-|-|-|-|-|
|Proposed|**2899** $\pm$ 217|**3047** $\pm$ 162|**2976** $\pm$ 196|**2948** $\pm$ 91|**3051** $\pm$ 109|
|FCE-EFT|**2899** $\pm$ 217|2784 $\pm$ 161|2646 $\pm$ 196|2566 $\pm$ 103|2629 $\pm$ 125|
|MADDPG|2763 $\pm$ 126|**3006** $\pm$ 103|2800 $\pm$ 106|**2933** $\pm$ 98|2856 $\pm$ 121|
|MAPPO|2753 $\pm$ 206|2862 $\pm$ 201|2597 $\pm$ 144|2529 $\pm$ 131|2763 $\pm$ 190|
|QMIX|2199 $\pm$ 56|2310 $\pm$ 39|2288 $\pm$ 118|2118 $\pm$ 82|1861 $\pm$ 132|
|Dreamer|**2911** $\pm$ 312|2813 $\pm$ 283|2733 $\pm$ 351|2631 $\pm$ 521|2701 $\pm$ 433|
|MBPO|2089 $\pm$ 804|1964 $\pm$ 753|1523 $\pm$ 948|1893 $\pm$ 792|1633 $\pm$ 821|

|Algorithm|MAPPO|MADDPG|QMIX|Proposed|
|-|-|-|-|-|
|Spread|-149.29 $\pm$ 0.94|-157.10 $\pm$ 2.30|-154.70 $\pm$ 4.90|**-149.12** $\pm$ 1.38|
|Adversary|9.61 $\pm$ 0.07|7.80 $\pm$ 1.43|8.11 $\pm$ 0.37|**10.01** $\pm$ 0.33|
|Tag|13.78 $\pm$ 4.40|6.65 $\pm$ 3.90|**15.00** $\pm$ 2.73|14.57 $\pm$ 2.95|

These tables show that **the standard deviation of EFTM is similar to that of other methods**. The model-based solutions have the highest variance due to the uncertainty about other agents. Overall, EFTM achieves the best performance with mid-level variance compared to all other baselines. Additionally, we have run additional experiments on SMAC [1], which is widely used for evaluating MARL algorithms, to address the reviewer's concern. We report the performance ($4$ seeds) against MARL baselines, as follows.

|SMAC Task|EFTM|MAPPO|MADDPG|QMIX|
|-|-|-|-|-|
|2s3z|98.8 $\pm$ 2.3|**100** $\pm$ 1.5|90.3 $\pm$ 5.3|95.3 $\pm$ 2.5|
|3s5z vs 3s6z|**84.3** $\pm$ 9.1|63.3 $\pm$ 19.2|18.9 $\pm$ 4.8|82.8 $\pm$ 5.3|
|6h vs 8z|**93.8** $\pm$ 6.7|85.9 $\pm$ 30.9|68.0 $\pm$ 34.7|9.4 $\pm$ 2.0|
|Total|**276.9**|249.2|177.2|187.5|

This result also demonstrates that EFTM still surpasses or matches the performance of previous solutions.
It means that EFTM is capable of generalizing to solve widely-used MARL tasks, achieving the best total scores. Notably, we used a simple setup for the SMAC and MPE environments; that is, we followed a vanilla setup with a single character diversity, $n=1$. **While fully leveraging the advantages of EFTM in these environments can be challenging, EFTM is nonetheless capable of delivering competitive performance in such settings.** --- Once again, we deeply appreciate the insightful comments and suggestions. We hope our clarification and additional empirical studies could address the concerns raised by the reviewer. Should there be any leftover questions, please let us know and we will make every effort to address them during the subsequent discussion period. --- Rebuttal Comment 2.1: Comment: I thank the authors for running additional experiments and addressing my weaknesses. The additional results improved my outlook on the paper! I have one immediate follow-up question for clarification. In your experiments, for example SMAC, does the character inference process get the observations directly, or do the EFT agents collect them, as proposed in your new ablation? Furthermore, I appreciate the new table results with standard deviations. I believe all means should be boldened where the standard deviations overlap for the final version of the paper. --- Rebuttal 3: Comment: Thank you for your active response! As discussed earlier, we set the character diversity to n=1 on SMAC and MPE. It means that the EFT agent does not need to infer the character because all agents share the same one; in addition, the EFT agent only predicts teammates' future actions, not including opponents. The additional experiments aim to study whether EFTM-based action selection works in other environments. Next, we promise to follow the reviewer's suggestion about the performance highlighting style. --- Rebuttal Comment 3.1: Comment: Thanks for the quick response.
Given the rebuttal and the additional results, I will update my score to a 6, expecting moderate-to-high impact. I find the empirical evaluation solid and interesting to the community. The combination of components is unique to the best of my knowledge. I believe it is interesting to the field that this combination of components is valuable, and the analyses highlight further limitations and lay the groundwork for future work. I do not think the performance of the algorithm justifies a 7, expecting high impact. For example, in SMAC, the proposed method performs as well as MAPPO or MADDPG, even at high character diversity (n=5), when accounting for the standard deviations, which itself is a fair baseline but also not necessarily state-of-the-art. Similar conclusions hold for MPE. However, given the improved wall-clock time and different training regime, this is still a significant contribution. Realistically, for high impact, the performance improvements would probably need to be better to motivate a large subgroup of the field to improve on this method. --- Reply to Comment 3.1.1: Comment: We sincerely appreciate the insights you've shared for this work and are truly grateful for raising the score. Your detailed explanation regarding the score update is extremely helpful. As for the SMAC performance, we could not fully explore the hyperparameters due to the limited rebuttal time. Moving forward, we will make more effort to achieve a higher impact! Thank you once again for your active engagement in this discussion. We truly appreciate the time and effort you've dedicated!
Summary: This paper presents Episodic Future Thinking (EFT), an approach for RL in multi-agent environments. EFT involves learning a multi-character policy (where character is a parameter that modifies the reward), and then using this to infer characters of other agents and planning accordingly, using these characters and learned policy to predict others’ trajectories more accurately. The paper demonstrates superior performance on a driving environment and multi-agent particle environments. Strengths: The paper is clearly written throughout. It presents, to my knowledge, an original approach for multi-agent RL with characters. Results are well-described and make sense. The studies of 5.2 and 5.3 are welcome additions that help make sense of how the method works. The cognitive motivation makes a lot of sense, and broadly, modeling diverse other agent motives seems like a promising direction that has not received much attention. *Edit*: raised score to 6 following rebuttal. Weaknesses: My main concern is with the significance of the performance comparisons. For the driving task, my understanding is that the other agents have a range of characters. The proposed method has the opportunity to learn a multi-character policy. First, I have a concern as to how one might put the baselines on an equal footing in terms of experience — see Questions for that. Second, even if the baselines were put on an equal footing in terms of experience, how surprising is the result for the driving experiment? The driver environment has been designed so that the proposed method has precisely the right inductive bias — inferring a latent character vector. 
The MPE testbed is less clearly set up so that the proposed model has the right inductive bias for it — though perhaps it helps to be able to have separate models of the different agent groups — and again (see Questions), it's really unclear to me how you would put baselines on the same footing in terms of giving them experience modeling both groups. It would be very helpful to include confidence intervals for these experiments. Performance on the MPE testbed is very close, numerically, to baselines. Are those differences actually statistically significant? I think those environments tend to have pretty high variance. The model-based baselines, especially Dreamer, shouldn't be expected to work well in multi-agent environments like these without significant modifications, I think. Dreamer is not going to handle the stochasticity of multi-agent environments well, given how the world model is set up by default. Did you modify it? And why use Dreamer v1 instead of the most recent version? Technical Quality: 3 Clarity: 3 Questions for Authors: Given that the proposed method gets to train a multi-character policy, which presumably involves training on a bunch of experience with multiple characters, how are the baselines put on an equal footing in terms of experience in the environment, with these different character objectives? How is c varied during multi-character policy training? Is it randomly set each episode? Minor, and I may have missed this, but what model is used to do forward prediction? It might be helpful to briefly mention that in the main text, if it's not there. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Seems adequate, if the above are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer's thorough review and valuable suggestions about this work. Below, we outline how we have revised the paper to address the reviewer's concerns and questions. - **Fairness for experience in character diversity** We agree with the reviewer that maintaining a fair experimental setup is crucial for performance comparison. As the reviewer correctly pointed out, we set up MARL agents with a single character during their training. This is because we thought character change could impede MARL training. To address the reviewer's comment and confirm our belief, **we provide additional evaluations of baselines with a diverse character experience** - an equal footing in terms of character experience. We report the results (4 seeds) below. |Algorithm|n=1|n=2|n=3|n=4|n=5| |-|-|-|-|-|-| |EFTM|2899 $\pm$ 217|3047 $\pm$ 162|2976 $\pm$ 196|2948 $\pm$ 91|3051 $\pm$ 109| |**MADDPG (new)**|2368 $\pm$ 85|2419 $\pm$ 262|2353 $\pm$ 56|2310 $\pm$ 142|2338 $\pm$ 210| |MADDPG (original)|2763 $\pm$ 126|3006 $\pm$ 103|2800 $\pm$ 106|2933 $\pm$ 98|2856 $\pm$ 121| |**MAPPO (new)**|2192 $\pm$ 113|2366 $\pm$ 91|2233 $\pm$ 250|2180 $\pm$ 313|2241 $\pm$ 386| |MAPPO (original)|2753 $\pm$ 206|2862 $\pm$ 201|2597 $\pm$ 144|2529 $\pm$ 131|2763 $\pm$ 190| This result confirms that EFTM outperforms other solutions in a demonstration task. Consistent with our expectations, **other baselines show a performance degradation compared to the existing setup**. Nevertheless, their performance is maintained rather than decreasing as the character diversity level $n$ increases. We conjecture this performance drop is caused by the lack of an optimization method designed for multiple characters. Although we provide experience variability to the agents, they cannot use it efficiently. Here are the additional procedures for ensuring a fair experience. 1. Sample the level of character diversity in society.
This step allows the agents to experience various levels of society. We establish the set of character diversity levels as $[1, 2, 3, 4, 5]$, considered in the evaluation phase. 2. Sample the character for each agent. This step allows the agents to experience various characters. Each character component can be sampled from its pre-defined range. 3. Run an episode. 4. Repeat steps 1-3. --- - **Simulation detail for MPE task** We would like to explain the detailed setup for additional tasks. For the MPE task, we consider a simple setup, akin to prior works, with a single character, that is, diversity level $n=1$. For competitive tasks, we deployed the EFT agent on a good agent group and the pre-trained networks on an adversarial one. The EFT agent only predicts teammates' future actions, not those of the adversarial group. Please note that the main body and appendix of the original manuscript include experimental setup information as described below. **Main body**: We set the character for each group as a single character, that is, diversity level $n=1$. **Appendix**: 1. Spread: In this task, there are three agents. Their objective is to reach three landmarks without colliding with each other. The reward function is the sum of the negative distances from landmarks to agents and a collision penalty term. 2. Adversary: This task includes two cooperating agents and a third adversary agent; there are true goal and false goal spots. The adversary can observe relative distances without communication about the goal spots. The cooperative agents aim to reach the goal spot while avoiding an adversary. The reward function is a sum of the negative distance to the goal spot and the distance from the adversary to the true goal. We use an adversary agent controlled by a pre-trained policy [7]. 3. Tag: This task is dubbed a predator-prey task. The environment includes two types of agents and obstacles: a single good agent, three adversary agents, and two obstacle blocks.
The adversaries are slower than the good agent and receive a reward when tagging the good agent. We employ a pre-trained prey agent from [7]. --- - **Version of Dreamer** Thank you for providing key discussion points about the proper baseline selection. We initially considered Dreamer v1 [8], v2 [9], and v3 [10] as candidates for baselines, and we selected Dreamer v1 for the following reasons. Among these, **Dreamer v3 has not been peer-reviewed yet, so we decided not to use it**. Next, we delved into the Dreamer v1 and v2 papers, and found that they focused on different demonstration tasks. **Dreamer v2 is more focused on discrete control tasks**, for example, the Atari games [11]. Conversely, Dreamer v1 concentrated on continuous control tasks, such as the DeepMind control suite [12], DeepMind lab [13], and some continuous Atari games [11]. Subsequently, when considering model-based baselines, we only train a single agent with a world model in the training phase. We then deploy trained agents in driving environments, where there are pre-trained drivers with diversified characters. Note that deployed agents in the training and test phases are the same as EFTM. In summary, we did not substantially modify the existing Dreamer model. --- - **How is the character $c$ determined?** As the reviewer understands, **we randomly sample the character $\mathbf {c}$ from a character distribution in every episode**. More precisely, we use a uniform sampling method to set the character of an agent during the training process. Character $\mathbf{c}$ is a vector of character components $[c_1, \cdots, c_K]$. Each character component $c_k$ is randomly sampled from a pre-defined range, *e.g.*, $[0, 2.5]$. The sampled character is rounded to a fixed number of significant figures; in our experiments, we use one significant figure. --- Rebuttal 2: Comment: - **Standard deviation for main results and additional experiments on SMAC** We apologize for the inconvenience.
We wanted to report it in the main body, but due to the page limit, we included it in the appendix of the original manuscript. Our appendix includes Tables with the standard deviation as follows. |Character diversity|n=1|n=2|n=3|n=4|n=5| |-|-|-|-|-|-| |Proposed|**2899** $\pm$ 217|**3047** $\pm$ 162|**2976** $\pm$ 196|**2948** $\pm$ 91|**3051** $\pm$ 109| |FCE-EFT|**2899** $\pm$ 217|2784 $\pm$ 161|2646 $\pm$ 196|2566 $\pm$ 103|2629 $\pm$ 125| |MADDPG|2763 $\pm$ 126|**3006** $\pm$ 103|2800 $\pm$ 106|**2933** $\pm$ 98|2856 $\pm$ 121| |MAPPO|2753 $\pm$ 206|2862 $\pm$ 201|2597 $\pm$ 144|2529 $\pm$ 131|2763 $\pm$ 190| |QMIX|2199 $\pm$ 56|2310 $\pm$ 39|2288 $\pm$ 118|2118 $\pm$ 82|1861 $\pm$ 132| |Dreamer|**2911** $\pm$ 312|2813 $\pm$ 283|2733 $\pm$ 351|2631 $\pm$ 521|2701 $\pm$ 433| |MBPO|2089 $\pm$ 804|1964 $\pm$ 753|1523 $\pm$ 948|1893 $\pm$ 792|1633 $\pm$ 821| |Algorithm|MAPPO|MADDPG|QMIX|Proposed| |-|-|-|-|-| |Spread|-149.29 $\pm$ 0.94|-157.10 $\pm$ 2.30|-154.70 $\pm$ 4.90|**-149.12** $\pm$ 1.38| |Adversary|9.61 $\pm$ 0.07|7.80 $\pm$ 1.43|8.11 $\pm$ 0.37|**10.01** $\pm$ 0.33| |Tag|13.78 $\pm$ 4.40|6.65 $\pm$ 3.90|**15.00** $\pm$ 2.73|14.57 $\pm$ 2.95| These tables show that **the standard deviation of EFTM is similar to that of other methods**. The model-based solution has the highest variance due to the uncertainty of other agents. Overall, EFTM achieves the best performance with a mid-level variance compared to all other baselines. Additionally, we have run additional experiments on SMAC [1], which is widely used for evaluating the MARL algorithm, to address the reviewer's concern. We report the performance ($4$ seeds) with MARL baselines, as follows. 
|SMAC Task|EFTM|MAPPO|MADDPG|QMIX| |-|-|-|-|-| | 2s3z|98.8 $\pm$ 2.3| **100** $\pm$ 1.5|90.3 $\pm$ 5.3|95.3 $\pm$ 2.5| | 3s5z vs 3s6z |**84.3** $\pm$ 9.1|63.3 $\pm$ 19.2|18.9 $\pm$ 4.8|82.8 $\pm$ 5.3| | 6h vs 8z|**93.8** $\pm$ 6.7|85.9 $\pm$ 30.9|68.0 $\pm$ 34.7|9.4 $\pm$ 2.0| | Total|**276.9**|249.2|177.2|187.5| This result also demonstrates that EFTM still surpasses or matches previous solutions. It means that EFTM is capable of generalizing to solve widely-used MARL tasks, achieving the best total scores. Notably, we used a simple setup for the SMAC and MPE environments; that is, we follow a vanilla setup with a single character diversity, $n=1$. **While fully leveraging the advantages of EFTM in these environments can be challenging, EFTM is nonetheless capable of delivering competitive performance in such settings.** --- Once again, we deeply appreciate the insightful comments and suggestions. We hope our clarification and additional empirical studies could address the concerns raised by the reviewer. Should there be any leftover questions, please let us know and we will make every effort to address them during the subsequent discussion period. --- Rebuttal Comment 2.1: Title: Great! Comment: Thanks for your follow-up work on this! The clarifications and new experiments greatly alleviate my concerns. In line with reviewer Cur7's thinking, I am upgrading my score to a 6. --- Reply to Comment 2.1.1: Comment: We sincerely appreciate the insights you've shared for this work and are grateful for your consideration in raising the score.
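The per-episode character sampling described in this thread (uniformly sampling each component of $\mathbf{c}$ from its pre-defined range and rounding to one significant figure) can be sketched as follows; this is a minimal illustration, and the range values and function names are assumptions, not taken from the paper:

```python
import random
from math import floor, log10

def round_sig(x, sig=1):
    """Round x to `sig` significant figures (0 stays 0)."""
    if x == 0:
        return 0.0
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

def sample_character(ranges, sig=1):
    """Sample a character vector c = [c_1, ..., c_K], one component per range,
    using uniform sampling as described in the rebuttal."""
    return [round_sig(random.uniform(lo, hi), sig) for lo, hi in ranges]

# Illustrative: two character components, each with an assumed range.
character = sample_character([(0.0, 2.5), (0.0, 1.0)])
```

A training loop would then resample `character` at the start of every episode, so diverse characters are experienced evenly within the predefined ranges.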
Rebuttal 1: Rebuttal: We express our gratitude to all five reviewers for their insightful feedback. We are pleased to present the updates we have made in response to valuable suggestions, as detailed below. - We compared the performance with **two additional baselines** from the research on opponent modeling and theory of mind (Reviewer RgnJ). This result confirmed the effectiveness and adaptability of our approach, outperforming the additional baselines. - We evaluated EFTM and baselines on **SMAC (StarCraft multi-agent challenge)**, which demonstrates the generalizability of EFTM for the MARL domain (Reviewer RgnJ). This result demonstrated that the proposed solution still has comparable performance in a widely used MARL setup. - We added **time complexity analyses** in the inference phase in terms of big $\mathcal O$ analysis and wall-clock time measure (Reviewer Cur7 and ukHS). Next, we apologize for not including standard deviations for performance evaluations in the main body of the paper. Due to the page limit, we included **the results with standard deviations in Appendix J2 and K2 of the original manuscript** (Reviewer RgnJ, pTth, and cur7). Below are the references we use in this response. [1] M. Samvelyan et al., The StarCraft multi-agent challenge. AAMAS 2019. [2] C. Yu et al., The surprising effectiveness of PPO in cooperative multi-agent games. NeurIPS 2022. [3] X. Puig et al., VirtualHome: Simulating household activities via programs. CVPR 2018. [4] Y. Wang et al., ToM2C: Target-oriented multi-agent communication and cooperation with theory of mind. ICLR 2022. [5] G. Papoudakis et al., Agent modelling under partial observability for deep reinforcement learning. NeurIPS 2021. [6] X. Zeng et al., Effective and stable role-based multi-agent collaboration by structural information principles. AAAI 2023. [7] G. Papoudakis et al., Benchmarking multi-agent deep reinforcement learning algorithms in cooperative tasks. NeurIPS 2020. [8] D. Hafner et al., Dream to control: Learning behaviors by latent imagination. ICLR 2020. [9] D. Hafner et al., Mastering Atari with discrete world models. ICLR 2021. [10] D. Hafner et al., Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104 (2023). [11] V. Mnih et al., Human-level control through deep reinforcement learning. Nature 2015. [12] Y. Tassa et al., DeepMind control suite. arXiv preprint arXiv:1801.00690 (2018). [13] C. Beattie et al., DeepMind lab. arXiv preprint arXiv:1612.03801 (2016). [14] S. Singh et al., Learning without state-estimation in partially observable Markovian decision processes. Machine Learning Proceedings 1994. [15] J. Geier, How to: Define minimum SNR values for signal coverage. 2012.
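The signal-to-noise figures quoted earlier in this record follow the standard decibel definition $\mathrm{SNR_{dB}} = 20\log_{10}(\sigma_{\text{signal}}/\sigma_{\text{noise}})$, with quality levels in the spirit of the minimum-SNR guidelines of [15]. A minimal sketch; the bucketing thresholds below are illustrative assumptions (chosen so they reproduce the Excellent/Good/Fair/Poor labels in the table), not values stated in the rebuttal:

```python
import math

def snr_db(signal_std, noise_std):
    """Ratio of standard deviations expressed in decibels."""
    return 20.0 * math.log10(signal_std / noise_std)

def quality_level(db, thresholds=((25.0, "Excellent"), (15.0, "Good"), (10.0, "Fair"))):
    """Map an SNR in dB to a coarse quality label; thresholds are assumed, cf. [15]."""
    for cutoff, label in thresholds:
        if db >= cutoff:
            return label
    return "Poor"
```

For example, with these assumed thresholds, 34.7 dB maps to "Excellent" and 9.2 dB to "Poor", matching the trajectory-noise table.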
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper introduces an Episodic Future Thinking (EFT) mechanism for reinforcement learning (RL) agents, inspired by cognitive processes observed in animals, to enhance social decision-making in multi-agent systems with diverse agent characteristics. The EFT mechanism uses a multi-character policy to infer the behavioral preferences of other agents, predicts their actions, and simulates potential future scenarios to select optimal actions. The authors evaluate the EFT mechanism in a multi-agent autonomous driving scenario and demonstrate that it leads to higher rewards and is robust across societies with varying levels of character diversity. Strengths: + The paper introduces an episodic future thinking (EFT) mechanism for RL agents, borrowing from cognitive processes observed in animals, representing an interesting application of biological insights to enhance AI decision-making processes. + The multi-character policy handles both continuous and discrete action spaces, expanding the applicability of RL methods to more complex scenarios. + The paper demonstrates the effectiveness of the EFT mechanism in a multi-agent autonomous driving scenario. The authors examine the robustness of the EFT mechanism across different levels of character diversity, showing its resilience in various social compositions. Weaknesses: - The paper primarily focuses on an autonomous driving scenario. Demonstrating the EFT mechanism's effectiveness across a broader range of multi-agent scenarios, e.g., SMAC and VirtualHome, could strengthen the argument for its generalizability. While the paper mentions the mechanism's effectiveness across different levels of character diversity, a detailed scalability analysis in terms of the number of agents and interactions with human or heterogeneous agents could provide further confidence in the approach. - The results in Tables 2 and 3 only report the average performance.
It is necessary to report the standard deviation to make the results more convincing, as the environments are highly dynamic with varying uncertainty. - Lacking baselines. There are some works that incorporate ToM or opponent modeling with MARL [1,2]. It is necessary to compare against those methods, which, e.g., estimate the current observation or hidden state of others instead of the next observation, to demonstrate the advantages of the proposed methods. Ref: [1] Agent modeling under partial observability for deep reinforcement learning. NeurIPS 2021 [2] ToM2C: Target-oriented Multi-agent Communication and Cooperation with Theory of Mind, ICLR 2022 Technical Quality: 2 Clarity: 3 Questions for Authors: Q1: Do you know the concept of **role** introduced in previous MARL works? Is there any difference between the introduced character and role? Q2: Can you validate the generalization of the agents by training them at a specific level and transferring them to other levels with unseen characters? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors discussed the limitations in the discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's detailed feedback and valuable suggestions for enhancing our work. In response, we describe how we have revised the paper to address the reviewer's concerns and questions. - **Additional experiment results to prove generalizability** To demonstrate the efficiency of the proposed solution, we would like to **report additional experimental outcomes on the SMAC (StarCraft multi-agent challenge)** [1], which is widely used in the multi-agent RL domain and recommended by the reviewer. The additional results ($4$ seeds) are below. |SMAC Task|EFTM|MAPPO|MADDPG|QMIX| |-|-|-|-|-| | 2s3z|98.8 $\pm$ 2.3| **100** $\pm$ 1.5|90.3 $\pm$ 5.3|95.3 $\pm$ 2.5| | 3s5z vs 3s6z |**84.3** $\pm$ 9.1|63.3 $\pm$ 19.2|18.9 $\pm$ 4.8|82.8 $\pm$ 5.3| | 6h vs 8z|**93.8** $\pm$ 6.7|85.9 $\pm$ 30.9|68.0 $\pm$ 34.7|9.4 $\pm$ 2.0| | Total|**276.9**|249.2|177.2|187.5| The table above shows that EFTM still surpasses or matches previous solutions. **It means that EFTM is capable of generalizing to solve widely-used MARL tasks, achieving the best total scores.** More precisely, we used a simple setup for the SMAC environment, akin to the MPE task; that is, we follow a vanilla setup with a single character diversity, $n=1$. The reported scores of MAPPO and QMIX are based on benchmark performance [2]. We also checked VirtualHome [3] as the reviewer recommended, but it seemed inappropriate for us due to the requirement of a language model, so we decided not to use this task. Although this response did not cover VirtualHome [3], we believe that our results in three tasks, e.g., autonomous driving, MPE, and SMAC, can alleviate the concern of the reviewer. Thank you for your suggestion, which helps us to prove the generalization of our method. --- - **Experiments of additional baselines - theory of mind and agent modeling** We thank the reviewer for a valuable suggestion about comparison baselines.
As per the reviewer’s suggestion, **we have run experiments on additional baselines, including ToM2C [4] and opponent modeling [5], as follows.** |character diversity|n=1|n=2|n=3|n=4|n=5| |-|-|-|-|-|-| |EFTM|2899 $\pm$ 217|**3047** $\pm$ 162|**2976** $\pm$ 196|**2948** $\pm$ 91|**3051** $\pm$ 109| |ToM2C [4]|**3016** $\pm$ 109|2812 $\pm$ 273|2683 $\pm$ 309|2691 $\pm$ 458|2511 $\pm$ 397| |Opponent Modeling [5]|1913 $\pm$ 330|1792 $\pm$ 410|1771 $\pm$ 367|1683 $\pm$ 381|1733 $\pm$ 429| **This result demonstrates the effectiveness and adaptability of our approach, achieving higher rewards when character diversity exists.** ToM2C achieves the best score in the $n=1$ scenario, but its performance decreases as the diversity level increases; opponent modeling fails at all diversity levels. On the other hand, the proposed solution is robust to changes in the surrounding agents and maintains high performance across diversity levels. We conjecture the reasons the two baselines fail in this setup as follows. **ToM2C requires retraining or adjusting the ToM module as surrounding agents change**. The ToM module is tailored to other agents for the prediction of information (*e.g.*, goals, observations, and actions). Next, **opponent modeling also necessitates a new opponent modeling process for each test environment**. In addition, prior works on opponent modeling rarely involve more than four players. In contrast, our selected tasks consider $20$ surrounding agents, and their policies can be subject to change. --- - **Validate the generalizability of the EFT agent** We appreciate the reviewer's valuable feedback about the generalizability of the multi-character policy over unseen characters. As the reviewer pointed out, validating the generalizability of the proposed solution is important for realistic tasks.
To verify this, we have run additional experiments under the following two cases: 1) Case 1: Train the multi-character policy on the character ranges $[0.0, 0.6]$ and $[0.8, 1.0]$, then test the accuracy of character inference on unseen characters $\{0.65, 0.7, 0.75\}$. 2) Case 2: Train the multi-character policy on the character range $[0.2, 0.8]$, then test the accuracy of character inference on unseen characters $\{0.0, 0.1, 0.9, 1.0\}$. Below are additional results in terms of character inference ($20$ inference trials). |True character|0.65 (case1)|0.7 (case1)|0.75 (case1)|0.0 (case2)|0.1 (case2)|0.9 (case2)|1.0 (case2)| |-|-|-|-|-|-|-|-| |Inferred character|0.61 $\pm$ 0.09|0.67 $\pm$ 0.15|0.76 $\pm$ 0.08|0.12 $\pm$ 0.21|0.13 $\pm$ 0.04|0.85 $\pm$ 0.13|0.82 $\pm$ 0.28| For case 1, the inferred characters are reasonably close to their true values. This indicates that the policy could have partial generalizability through interpolation, even for values within the gap not explicitly covered by the training ranges. For case 2, as the actual character gets farther away from the experienced values, generalization performance degrades, with an increasing standard deviation and gap between the inferred and true values. To avoid encountering unseen values as much as possible, we should set a realistic character range and deeply consider the sampling method during the learning process. In our case, we used uniform random sampling so that diverse characters could be experienced evenly within the predefined character range. Additionally, we believe that some few-shot learning and adaptation methods can alleviate these problems. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: My main concern about the generalization has been addressed in the response. I tend to maintain my rating, as I think further clarification on the details of the experiments is required.
- Can you explain the implementation details of the ToM2C and opponent modeling baselines in your experiment? - Do you have any ideas on building a more general computation model that combines the role and character jointly in the agent? - The VirtualHome environment does not need a language model at all. There are also some other simulators close to VirtualHome, such as 3DWorld. If you cannot extend your model to such 3D environments, can you explain the reasons or how to extend the current version for these 3D environments? --- Reply to Comment 1.1.1: Comment: Thank you for your active response! To ease any remaining concerns, we share our thoughts on the additional questions below. --- **Experimental details** Thank you for this comment. Our implementation follows the official Git repositories from ToM2C [4] and opponent modeling [5] (in accordance with NeurIPS 2024 policy, we cannot upload hyperlinks in OpenReview). Given that we consider the POMDP setup, it is important to set how many other agents an agent has access to. For ToM2C, we consider full access in accordance with the paper: they reported that full access has better performance than partial access. On the other hand, for opponent modeling, we consider six surrounding vehicles, not all agents. That is because the reference paper aims to model other agents using local information. Finally, the training and validation setups are the same as for the other baselines. --- **Role and Character** Thank you for this constructive comment regarding the future direction of our community. A promising approach for combining the concepts of role and character would be to use a hierarchical structure. Each agent within a cooperative team first defines its role or subtask. The agent could then decide on the most effective strategy to achieve its subgoal, taking into account the characters and behaviors of other agents.
We genuinely believe that this approach could be valuable in various studies, e.g., multi-agent planning tasks, as it emphasizes setting broad objectives first and then making detailed decisions based on interactions within the multi-agent system. --- **3D Environments** We apologize for our incorrect statement regarding language models in VirtualHome. The authors of VirtualHome [3] reported that they consider video with text, so we had a misunderstanding about the need for a language model. Thank you for the correction; it may not strictly require a language model. We believe our concept could still be relevant to the testbeds you suggested. Since these environments are based on images or video, they would require more advanced forward prediction and representation networks to manage the complexities of 3D data. Specifically, VirtualHome operates in a 2D or 3D observation space, requiring at least 64 x 64 x 3 features as input. In contrast, SMAC and MPE tasks use a 1D observation space with about 100-200 and 10-20 features, respectively. By implementing an appropriate module for handling 3D data, our model could be extended to function in these more demanding 3D environments. We deeply acknowledge the value and importance of the reviewer’s request, so we would like to explore additional results in various domains. Regrettably, our group has limited GPU resources, unlike tech companies, making it challenging to get results for more computationally intensive tasks. At the same time, while applying our work to 3D environments is relevant, we believe that it is not the most critical aspect of our work. Our main focus is to develop a social decision-making process in a heterogeneous society where multiple characteristics coexist. We claim that the value of our method has been fully demonstrated in testbeds such as autonomous driving tasks, MPE, and SMAC.
Sorry again that we could not include VirtualHome results, and we would greatly appreciate your understanding of our computational resource limitation. If you have any other questions or comments that could raise your score, we would be happy to continue the discussion, given the time! [3] X. Puig et al., VirtualHome: Simulating household activities via programs. CVPR 2018. [4] Y. Wang et al., ToM2C: Target-oriented multi-agent communication and cooperation with theory of mind. ICLR 2022. [5] G. Papoudakis et al., Agent modelling under partial observability for deep reinforcement learning. NeurIPS 2021. --- Rebuttal 2: Comment: - **Difference between role and character** We thank the reviewer for bringing up this insightful discussion. A 'role' in a multi-agent system represents a responsibility or function for achieving the objective of a cooperative team [6]. Roles can be interpreted as subtasks for each agent. A 'character' refers to the specific behavioral strategies an agent employs to perform its assigned role. To illustrate, consider a cooperative multi-agent task with two different roles necessary to achieve the team's goal and two agents, each assigned a specific role. Each agent aims to solve its subtask, which can be approached using different strategies. The character endows a behavioral preference to the agent. These two concepts are considered and debated significantly in the MARL domain. While 'role' has been the focus of several prior works, the 'character' concept remains relatively overlooked. **We sincerely emphasize that it is essential to consider a task with multiple agents with diverse characteristics in the MARL community.** We believe that our work can serve as a beginning, and the broader impact on the community will be meaningful. Taking this into account, we will include extensive related works about 'role' and 'character' in the appendix of the final version. --- - **Standard deviation for main results** We apologize for the inconvenience.
We wanted to report it in the main body, but due to the page limit, we included it in the appendix of the original manuscript. Our appendix includes Tables with the standard deviation as follows. |Character diversity|n=1|n=2|n=3|n=4|n=5| |-|-|-|-|-|-| |Proposed|**2899** $\pm$ 217|**3047** $\pm$ 162|**2976** $\pm$ 196|**2948** $\pm$ 91|**3051** $\pm$ 109| |FCE-EFT|**2899** $\pm$ 217|2784 $\pm$ 161|2646 $\pm$ 196|2566 $\pm$ 103|2629 $\pm$ 125| |MADDPG|2763 $\pm$ 126|**3006** $\pm$ 103|2800 $\pm$ 106|**2933** $\pm$ 98|2856 $\pm$ 121| |MAPPO|2753 $\pm$ 206|2862 $\pm$ 201|2597 $\pm$ 144|2529 $\pm$ 131|2763 $\pm$ 190| |QMIX|2199 $\pm$ 56|2310 $\pm$ 39|2288 $\pm$ 118|2118 $\pm$ 82|1861 $\pm$ 132| |Dreamer|**2911** $\pm$ 312|2813 $\pm$ 283|2733 $\pm$ 351|2631 $\pm$ 521|2701 $\pm$ 433| |MBPO|2089 $\pm$ 804|1964 $\pm$ 753|1523 $\pm$ 948|1893 $\pm$ 792|1633 $\pm$ 821| |Algorithm|MAPPO|MADDPG|QMIX|Proposed| |-|-|-|-|-| |Spread|-149.29 $\pm$ 0.94|-157.10 $\pm$ 2.30|-154.70 $\pm$ 4.90|**-149.12** $\pm$ 1.38| |Adversary|9.61 $\pm$ 0.07|7.80 $\pm$ 1.43|8.11 $\pm$ 0.37|**10.01** $\pm$ 0.33| |Tag|13.78 $\pm$ 4.40|6.65 $\pm$ 3.90|**15.00** $\pm$ 2.73|14.57 $\pm$ 2.95| These tables show that **the standard deviation of EFTM is similar to that of other methods.** The model-based solution has the highest variance due to the uncertainty of other agents. Overall, EFTM achieves the best performance with a mid-level variance compared to all other baselines. --- Once again, we deeply appreciate the insightful comments and suggestions. We hope our clarification and additional empirical studies could address the concerns raised by the reviewer. Should there be any leftover questions, please let us know and we will make every effort to address them during the subsequent discussion period.
No Free Lunch in LLM Watermarking: Trade-offs in Watermarking Design Choices
Accept (poster)
Summary: This paper demonstrates that typical design choices in large language model (LLM) watermarking schemes result in significant trade-offs between robustness, utility, and usability. To navigate these challenges, this paper rigorously examines a series of straightforward yet effective attacks on prevalent LLM watermarking approaches and proposes practical guidelines and defenses to strengthen their security and practicality. Specifically, the robustness of existing watermarking, a desirable property to mitigate removal attacks, can make the systems susceptible to piggyback spoofing attacks, which make watermarked text toxic or inaccurate through small modifications. By proposing a novel attack, it is further shown that using multiple watermarking keys can make the system susceptible to watermark removal attacks. Finally, it is identified that public watermark detection APIs can be exploited by attackers to launch both watermark-removal and spoofing attacks. The paper proposes a defense strategy leveraging techniques from differential privacy to effectively counteract spoofing attempts. Strengths: The paper provides novel attack schemes for existing watermarking techniques and provides empirical evidence to support their claims, which helps the community better understand the trade-offs in the design of watermarking systems. In general, releasing a detection API will always make the watermarking more vulnerable, and the proposed defense demonstrates an interesting connection to DP. Weaknesses: One of the biggest weaknesses of this paper is that the proposed attacks mainly explore the drawbacks of existing literature in [11,14,33], and some of the tradeoffs described in the paper are tied to these algorithms or specific formulations, which are not fundamental to the watermarking problem itself. 1. The robustness issue discussed in Section 4 is mainly due to the specific definition of robustness (Definition 3). 
As robustness is defined using editing distance, the piggyback spoofing attacks leverage the fact that we can significantly change the meaning of a sentence by editing very few tokens. One way to address this issue is to define robustness using the semantics of the generated text instead of editing distance. If the meaning of the text remains similar, the watermarking should still be detectable; otherwise, the watermarking should disappear if the edits dramatically change the meaning of the text. Conceptually, I believe that there should not exist a tradeoff between robustness and spoofing attacks. Empirically, the authors could conduct additional experiments on the following semantics-based watermarking schemes. I am curious to see if the proposed attacks still work for semantic-based watermarking. Liu, Aiwei, Leyi Pan, Xuming Hu, Shiao Meng, and Lijie Wen. "A semantic invariant robust watermark for large language models." ICLR 2024 Liu, Yepeng, and Yuheng Bu. "Adaptive Text Watermark for Large Language Models." ICML 2024 2. As shown in "Adaptive Text Watermark for Large Language Models," by adding watermarking using semantics, the ASR of a spoofing attack is quite low without using multiple secret keys. In addition, it is likely that the same prompt will generate text with similar semantics, leading to a similar biased watermarking pattern. Therefore, it would be hard to perform watermark removal using the proposed attacks. The authors are encouraged to provide more discussion regarding the applicability of their empirical findings to different types of watermarks. Technical Quality: 3 Clarity: 3 Questions for Authors: I can imagine that issues similar to those discussed in the paper will also occur for watermarking images generated by diffusion models. For example, the following two papers consider image watermarking removal using adversarial samples from the publicly available detection model. 
Saberi, Mehrdad, Vinu Sankar Sadasivan, Keivan Rezaei, Aounon Kumar, Atoosa Chegini, Wenxiao Wang, and Soheil Feizi. "Robustness of ai-image detectors: Fundamental limits and practical attacks." arXiv preprint arXiv:2310.00076 (2023). Jiang, Zhengyuan, Jinghuai Zhang, and Neil Zhenqiang Gong. "Evading watermark based detection of AI-generated content." In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pp. 1168-1181. 2023. This paper would greatly benefit from a discussion of the differences and connections between these two problems, supported by a more thorough literature review. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are addressed in Appendix A of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. In the following, we respond to each question. --- >**Q1: One of the biggest weaknesses of this paper is that the proposed attacks mainly explore the drawbacks of existing literature in [11,14,33], and some of the tradeoffs described in the paper are tied to these algorithms or specific formulations, which are not fundamental to the watermarking problem itself.** **A1**: Please see global response **C2**. --- >**Q2: The robustness issue discussed in Section 4 is mainly due to the specific definition of robustness Definition 3. [...] I am curious to see if the proposed attacks still work for semantic-based watermarking.** **A2**: Thanks for this suggestion; please see global response **C2.1**. --- >**Q3: As shown in "Adaptive Text Watermark for Large Language Models," [...] The authors are encouraged to provide more discussion regarding the applicability of their empirical findings to different types of watermarks.** **A3**: Thanks for the comments. Our findings in the tradeoffs of using multiple watermark keys do not apply to the semantics-based watermarks, as they rely on a separate semantics embedding model instead of determining the watermark logit bias using a watermark key. The general spoofing attack considered in [2] works for the PRF-based robust watermarks, and we may need substantial modifications to the general spoofing attacks to gain a better attack performance in semantics-based watermarks. For instance, instead of estimating the watermarked tokens’ distributions by providing uniformly distributed prompts, the attacker will need to carefully construct the prompts to guarantee that the outputs will be semantically close, such that they can gain some information on which portion of the tokens is more likely to appear in a specific semantic context by making a large number of queries. 
To defend against such potential attacks, one option could be for the model provider to introduce randomness into the semantics embedding model. For example, multiple random semantics embedding models could be used during inference, similar to the setting of using multiple watermark keys, to make it more resistant to watermark stealing. In this case, our findings will still apply, but this would need a more thorough and rigorous investigation. As we mentioned in the global response **C2**, attacking semantics-based watermarks is not the focus of our paper, but we will clarify this and provide corresponding discussions in the revision. [2] Liu et al. Adaptive Text Watermark for Large Language Models. ICML 24 --- >**Q4: I can imagine that similar issues discussed in the paper will also occur for watermarking images generated by the diffusion model. [...] This paper would greatly benefit from a thorough discussion of the differences and connections between these two problems by having a more thorough literature review.** **A4**: We agree that the attacks utilizing the detection API can be generalized to the image watermarks, as the attackers can adopt a similar oracle attack pipeline. The attackers will need to integrate domain-specific constraints to guarantee that the generated sentences or images are meaningful and high-quality. We will discuss potential opportunities and challenges in extending our attacks to image watermarks in the limitations/future work section of our revision (e.g., using the 1 extra camera-ready page if necessary). Thanks for this suggestion.
Summary: The paper details three different attacks on LLM watermarking, targeting watermark removal and spoofing: A1: spoofing by taking advantage of the robustness of the watermark A2: removal by taking advantage of multiple watermarking keys A3: removal by taking advantage of a public detector Attacks are followed by some guidelines/defenses. Strengths: Attack A2 is very original (AFAIK). The proposed defense against A3 based on DP is interesting. Weaknesses: W1. Relevance I have been a watermarking practitioner for years. I have never seen the following proposals: - Robust watermarking as proof of content authenticity (A1-Section 4) A robust watermarking detector distinguishes 2 hypotheses: H_0: content is not watermarked (by this technique and that secret key) H_1: content has been watermarked **and** potentially modified. So, I do not perceive A1 (Section 4) as an attack but more as a misunderstanding or a misuse of robust watermarking by the authors. I strongly disagree with Guideline #1 (line 206), which suggests lowering the robustness of robust watermarking. This hardly makes sense. The recommendation should be to combine two schemes: robust watermarking and fragile (digital-signature-based) watermarking. Robust watermarking for authenticity is wrong. Fragile watermarking for AI-gen detection is wrong as well. - Watermarking detectors should be public (A3-Section 6) This leads to oracle attacks well documented in the watermarking literature of the 2000s (nowadays called black-box attacks). The authors study even the easiest case where the attacker observes a soft decision (Z statistics): he can observe if any single modification lowers the detection score. A harder and more relevant case would be a Yes/No decision. But I am not even recommending that setup for security reasons. In short, the novelty (the attack against a specific LLM watermarking scheme) is narrow. "*It is still an open question whether watermark detection APIs should be made publicly available*". 
It is not an open question, and the conclusion of this section is absolutely not a surprise. W2. State-of-the-art The experimental setup considers 3 schemes with default parameters. Here default parameters mean parameters as appearing in the very first version of these papers. Yet, since then, we know that these choices were not adequate. - Use of Z-statistics. They have been shown to be suboptimal and inaccurate, leading to theoretical FPRs that are way off the empirical FPRs. I recommend using p-values (empirically validated). See "Three Bricks to Consolidate Watermarks for LLMs" from P. Fernandez. - The choice 'h=1' in KGW is not recommended (Needless to say that Unigram --where 'h=0'-- is even worse). Again, it leads to inaccurate FPR. Moreover, it is not secure (watermarking stealing attack). The question is whether your attack still holds with a larger h (with a proper hash function, not those implemented based on min of hashes). W3. Key management As far as A2 is concerned, I think the key issue is key management. The literature offers 2 flavors: randomly picking a key in a small key space (Kuditipudi) or using a hash of the previous tokens (Aaronson, Kirchenbauer). Section 5 only investigates the first option. Technical Quality: 3 Clarity: 2 Questions for Authors: Q1. Attack A3. Section 6. The assumption is that the watermarked LLM provides the top-5 tokens. Although some LLM APIs like OpenAI provide this information, I have some doubts that a **watermarked** LLM would do this. Especially, what are the top-5 tokens for Exp? Are they computed before or after the Gumbel trick? Q2. I am a bit surprised that $\sigma = 4$ doesn't impact the detection performance. Without DP: I suppose that w/o watermark $Z\sim\mathcal{N}(0;1)$ and with watermark $Z\sim\mathcal{N}(\mu;1)$, with $\mu \approx 6$ (according to Fig. 3.a). With a threshold set to 4, this makes P_FP ~ 3e-5, P_TP ~ 0.98, for an accuracy of 0.99. 
With DP: The standard deviations are now equal to $\sqrt{1+\sigma^2}$. This makes P_FP ~ 0.17, P_TP ~ 0.69, for an accuracy of 0.76.... Quite far away from what you get. More importantly, these two cases should be compared for a fixed P_FP so that the threshold with DP should be higher. Reporting only the accuracy is masking the fact that P_FP is way higher. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 1 Limitations: The title and introduction discuss LLM watermarking in general, but the paper is based on three particular schemes. I agree that these three are the most well-known. It is questionable whether other more exotic schemes (like semantic-based) are also vulnerable. One may easily think of easy counter-attacks: forbidding querying the LLM with the same prompt repeatedly (or the same prompt plus one extra token), forbidding querying the detector with too similar text, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
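The reviewer's back-of-the-envelope error rates can be reproduced with the standard normal CDF. A short sketch (the mean shift μ≈6, threshold 4, and σ=4 are the values assumed in this comment; the authors' rebuttal argues the effective noise scale is σ·Δ rather than σ):

```python
import math

def phi(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

mu, tau = 6.0, 4.0  # watermark mean shift and detection threshold (reviewer's assumptions)

# Without DP noise: Z ~ N(0,1) under H0 and N(mu,1) under H1.
p_fp = 1 - phi(tau)            # ~3e-5
p_tp = 1 - phi(tau - mu)       # ~0.98

# With additive noise of scale sigma, the standard deviation becomes sqrt(1 + sigma^2).
sigma = 4.0
s = math.sqrt(1 + sigma ** 2)
p_fp_dp = 1 - phi(tau / s)         # ~0.17
p_tp_dp = 1 - phi((tau - mu) / s)  # ~0.69

print(p_fp, p_tp, p_fp_dp, p_tp_dp)
```

As the comment notes, holding P_FP fixed would force a higher threshold under DP, which is the fair basis for comparison.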
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive comments. Please refer to **C1** in the global response for our clarifications on the positioning and contributions of our work. Our work studies the feasibility and ramifications of potential attacks, with the goal of better informing the public and the LLM watermarking community about key design tradeoffs. Below, we respond to each question. --- >**Q1: Robust watermarks are not suitable for proof of content authenticity.** **A1**: We agree robust watermarks are more suited for AI content detection, while fragile (signature-based) watermarks are used for authentication. Our contribution in this section is a rigorous study of the tradeoff between robustness and spoofing resistance, therefore showing that no single scheme is sufficient to protect against both types of attacks. Our experiments show the extent to which robust watermarks are vulnerable. We’ve recommended using fragile watermarks for spoofing defense in Sec 4.2 (L 215), but it is also known that fragile watermarks such as signatures are not robust to editing. We emphasize that increasing a watermark’s robustness to editing diminishes its suitability for content authenticity. This tradeoff is valuable information for the LLM watermark community. Indeed, we note that we are not the first to explore spoofing and robust watermarks, as recent works [1,2] have proposed more complex spoofing attacks on a specific set of robust watermarks. In contrast to these works, we explore a more simple and general piggyback spoofing attack that allows us to explore the inherent trade-off between spoofing and robustness. The potential for spoofing has also been cited as a major barrier to industrial LLM watermarking deployment [3], making it important to study this tradeoff rigorously. We agree that the current guideline can be revised to better convey our conclusions. 
We will follow your suggestion to revise our guideline to: `Guideline #1: Robust watermarks are vulnerable to spoofing attacks and are not suitable as proof of content authenticity alone. To mitigate spoofing while preserving robustness, it may be necessary to combine additional measures such as signature-based fragile watermarks.` --- >**Q2: The findings of attacks exploiting public detection API are not surprising.** **A2**: In Sec 6 (L 288), we stated that “Although this (public detection API) makes it easier to detect watermarked text, it is commonly acknowledged that it will make the system vulnerable to attacks. Here, we study this statement more precisely by examining the specific risk trade-offs that exist, as well as introducing a novel defense that may make the public detection API more feasible in practice.” We mentioned the use of public detection APIs is an ‘open question’ given recent activity in the community—for example, recent keynotes mention such settings [4], and commercial AI content detection services that return confidence scores to users [5]. However, we can remove this sentence to avoid misunderstandings. Despite the risks, public detection APIs have a lot of benefits, such as improving transparency in AI usage and supporting regulatory compliance. We believe providing a detection API isn't a simple yes or no question, and that there are different knobs one can tune when providing such an API. Our work both provides and explores such knobs, so that the risks/benefits of public detection APIs may be considered for practical deployment. Oracle attacks have existed for decades, but rigorously exploring them in the context of LLM watermarks is necessary. We show that making detection scores differentially private can effectively mitigate the spoofing attack without compromising detection accuracy (see **A7**). Our findings can help to enable public detection APIs deployment, inspiring future LLM watermark designs. 
--- >**Q3: I recommend using p-values as a detection metric.** **A3**: Thanks for this suggestion; please see global response **C6**. --- >**Q4: Whether your attack still holds with a larger h?** **A4**: See global response **C3**. --- >**Q5: Key management.** **A5**: See global response **C4**. --- >**Q6: The assumption of watermarked LLM providing the top-5 tokens.** **A6**: See global response **C5**. --- >**Q7: DP noise scale.** **A7**: For KGW and Unigram, we add noise to the z-scores. Sensitivity varies with sentence length (e.g., $\Delta=\frac{h+1}{\sqrt{\gamma(1-\gamma)l}}$ for replacement editing, where $l$ is the sentence length, $h,\gamma$ are watermark parameters). The actual noise scale is proportional to $\sigma\Delta$. For a 200-token sentence, $h=1,\gamma = 0.5,\sigma = 4$, the noise scale is 0.8. We’ve tested FPR with DP defense on OpenGen dataset, and FPRs are close to 0 (below 1e-3) using our recommended noise scale. We’ll explain more clearly in the text to avoid confusion. --- >**Q8: Are other more exotic schemes (like semantic-based) also vulnerable?** **A8**: See global response **C2**. --- >**Q9: One may easily think of easy counter-attacks: forbidding querying the LLM with the same prompt repeatedly (or the same prompt plus one extra token), forbidding querying the detector with too similar text, etc.** **A9**: In Guidelines #2 and #3, we recommended “defense-in-depth” techniques like anomaly detection, query rate limiting, and user verification. However, with just rate limiting, our attacks remain possible, as service providers can’t always ensure trusted users. Thus, it’s important to consider the tradeoffs when deploying LLM watermarking systems. --- [1] Jovanović et al. Watermark Stealing in Large Language Models. ICML 24 [2] Sadasivan et al. Can AI-generated text be reliably detected? arXiv 23 [3] Somesh Jha. Keynote at SaTML 2024-Watermarking (The State of the Union). 2024 [4] Scott Aaronson. Watermarking of large language models. 
2023 [5] AI Purity; GPTZero; Winston AI --- Rebuttal Comment 1.1: Comment: **About Attack A1** I prefer this new guideline. A LOT. **About Attack A2** - There is a small contradiction in the text. The attack is motivated by Eq. (4) known as distortion-freeness or unbiasedness. Yet, the experiment considers KGW or Unigram, which are not distortion-free or unbiased. This is understandable because Watermarking Stealing only holds for green-list-based methods (AFAIK). It might be good to warn the reader. - The results also deeply rely on the way detection proceeds with multiple keys. There are plenty of variants. It amounts at computing a p-value per key, then aggregating these p-values into one statistic and computing the associated *global* p-value. Here the aggregation is the min operator over p-values (i.e., max operator over the score). I believe more robust alternatives are Fisher, Edgington, or Harmonic Mean aggregations. Anyway, I just mean that the results deeply rely on the setup and some precautions in the text are welcome. https://en.wikipedia.org/wiki/Harmonic_mean_p-value - Another guideline could be: Stay away from Watermark Stealing and Never use multiple keys. + Either prefer *NON* green-list-based methods like EXP. This is backed by Fig. 12 & 15. + Either use a green-list-based method with a proper cryptographic hash function (not makeshifts like HashMin or HashSum which are flawed) and a large h. BTW, about EXP, I don't understand line 729 > the use of a large number of watermark keys is inherent in their design, which defaults to 256. **About Attack A3** > Oracle attacks have existed for decades... Why don't you cite them? > “defense-in-depth” techniques such as anomaly detection Would you mind providing references, please. > We show that making detection scores differentially private... About DP, why does the sensitivity depend on $h$? --- Reply to Comment 1.1.1: Comment: Thanks for the reviewer’s timely reply. 
We respond to follow-up comments below: >**Attack A1.** We will update this guideline in our revision. Thanks again for the reviewer’s suggestion. --- >**There is a small contradiction in the text.** In Sec 5 (L 230), we mentioned that Exp is rigorously unbiased (the $\epsilon$ in Eq. 4 is negligible), and KGW and Unigram slightly shift the watermarked distributions (the $\epsilon$ in Eq. 4 could be large and does not vanish as the number of keys increases). We will emphasize this point and also clarify that watermark stealing does not work on the rigorously unbiased watermarks in the revision. --- >**The results also deeply rely on the way detection proceeds with multiple keys.** Our watermark-removal attack exploiting the use of multiple keys is not dependent on the aggregation method, as we do not rely on the server’s watermark detection in this attack. However, the tradeoff analysis and the sweet spot for the number of keys may slightly change given the different detection performance of the various aggregations. We will add a paragraph to discuss this interesting problem in the revision; thanks for bringing it up. --- >**Another guideline could be: Stay away from Watermark Stealing and Never use multiple keys.** We want to clarify that the original Exp watermarking scheme inherently uses multiple keys in their setup: it maintains a predefined set of watermark keys, and at each model inference it randomly samples a key (a starting key index) from the pool, as also mentioned in the reviewer’s previous review (randomly picking a key in a small key space (Kuditipudi)). The number of keys defaults to 256 in their paper’s evaluation and codebase. Since the use of multiple keys is inherent in Exp, it can defend against watermark stealing attacks at the cost of being vulnerable to our watermark-removal attacks. The results in Figs. 
12 & 15 show that we can effectively remove the watermark in Exp when n=7, given that the p-value of this attack is significantly large. However, for n=3, our watermark-removal attack does not work. To defend against watermark-removal, Exp needs to consider using fewer keys or limit query rates for users. We note that using a smaller number of keys like 3 would destroy the distortion-free guarantee and make Exp vulnerable to watermark stealing. We will follow your suggestion to recommend the use of larger h with proper hash functions in the guideline, and point out its tradeoff between robustness in the revision, as we have discussed in the global response **C4**. --- >**Citations for oracle attacks and “defense-in-depth” techniques.** We are happy to provide citations in our revision to support these points. Specifically, for oracle attacks, there are related works in both cryptography [1,2] and watermark analysis [3,4]. For defense-in-depth techniques we will cite [5,6]. [1] Bleichenbacher, Daniel. Chosen ciphertext attacks against protocols based on the RSA encryption standard PKCS# 1. CRYPTO 1998. [2] Cramer, Ronald, and Victor Shoup. Design and analysis of practical public-key encryption schemes secure against adaptive chosen ciphertext attack. SIAM Journal on Computing 2003. [3] Linnartz, Jean-Paul MG, and Marten Van Dijk. Analysis of the sensitivity attack against electronic watermarks in images. Information Hiding. Springer, 1998. [4] Kalker, Ton, J-P. Linnartz, and Marten van Dijk. Watermark estimation through detector analysis. Proceedings 1998 International Conference on Image Processing. 1998. [5] Bau, Jason, et al. State of the art: Automated black-box web application vulnerability testing. IEEE S&P 2010. [6] Sommer, Robin, and Vern Paxson. Outside the closed world: On using machine learning for network intrusion detection. IEEE S&P 2010. 
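As an aside, the p-value aggregation alternatives mentioned in the comment above (Fisher's method, the harmonic-mean p-value) are straightforward to sketch. This is illustrative only; the per-key p-values are made up and the snippet is not the detection procedure of any of the evaluated schemes:

```python
import math

def fisher_combined_p(pvals):
    """Fisher's method: X = -2 * sum(ln p_i) follows a chi-square distribution
    with 2k degrees of freedom under H0; for even dof the survival function
    has the closed form exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

def harmonic_mean_p(pvals):
    """Unadjusted harmonic-mean p-value (Wilson, PNAS 2019)."""
    return len(pvals) / sum(1.0 / p for p in pvals)

# Toy per-key p-values: one key fires strongly, the others do not.
per_key = [1e-6, 0.4, 0.7]
print(min(per_key))                # min-over-keys rule: dominated by the strongest key
print(fisher_combined_p(per_key))  # penalized by the two non-firing keys
print(harmonic_mean_p(per_key))
```

The toy case shows why the aggregation choice matters: with one strongly firing key, min-over-keys and the harmonic mean stay tiny while Fisher's statistic is diluted by the non-firing keys.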
--- >**About DP, why does the sensitivity depend on h?** In KGW, considering replacement editing, each edit will change the hash that is used to split the green and red token lists for the length of context width tokens, which is h. This will affect at most $h+1$ tokens (including the token being edited) in terms of whether they are detected in the green or red list. Thus, the z-score sensitivity is bounded by $\frac{h+1}{\sqrt{\gamma(1-\gamma)l}}$. --- Please let us know if you have further comments, questions, or suggestions. We thank the reviewer again for their constructive feedback. If you believe that some of your key concerns have been addressed, we would greatly appreciate it if you are willing to revisit your score.
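The sensitivity bound described in this answer can be sketched directly. The Gaussian mechanism and the helper names below are illustrative assumptions, not the paper's exact implementation:

```python
import math
import random

def z_sensitivity(h, gamma, l):
    """One replacement edit changes the green/red membership of at most h+1
    tokens, bounding the z-score change by (h+1) / sqrt(gamma*(1-gamma)*l)."""
    return (h + 1) / math.sqrt(gamma * (1 - gamma) * l)

def dp_z_score(z, sigma, h, gamma, l, rng=random):
    """Release the detection z-score with Gaussian noise scaled by sigma * sensitivity."""
    return z + rng.gauss(0.0, sigma * z_sensitivity(h, gamma, l))

# Parameter values quoted in the rebuttal: h=1, gamma=0.5, sigma=4, a 200-token sentence.
delta = z_sensitivity(1, 0.5, 200)
print(delta)  # ~0.283 for these parameters
print(dp_z_score(6.0, 4.0, 1, 0.5, 200, random.Random(0)))
```

Because the sensitivity shrinks like 1/sqrt(l), longer texts tolerate the same σ with less absolute noise on the released score.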
Summary: In this work, the authors reveal new attack vectors including watermark-removal attacks and spoofing attacks that exploit common features and design choices of LLM watermarks. Besides, the authors propose a defense utilizing the ideas of differential privacy, which increases the difficulty of spoofing attacks. Strengths: 1. The research question in this paper is interesting and is a hot topic in the field of LLM watermark. 2. The paper is well-written. 3. The experimental data and results presented in this paper are extensive. Weaknesses: 1. In Sec 3.1, what are the differences between "piggyback" and "general" spoofing attacks? Specifically, what does "piggyback" refer to? 2. In Sec 3.1, regarding attacks on detection APIs, the reviewer is confused by the statement "the attacker can auto-regressively synthesize (toxic) sentences." What does this mean? 3. Regarding attacks discussed in Sec. 5, the attackers can discover watermarking rules by observing a large amount of watermarked text, thus enabling attacks. The original KGW paper in Sec. 5 mentions using a large context width h to defend against such attacks. However, this paper lacks explanation and discussion regarding the parameter h. 4. In the detection API attack, the description of the adversary's capabilities is unclear. In Sec 6.1, the authors assume the adversary can access the target watermarked LLM's API and query watermark detection results. This implies the adversary can generate watermarked text and obtain detection results for any given text. But why can the adversary generate a list of possible replacements for x_i? Does this mean the adversary can access the perturbed probability distribution and logits of tokens? If so, this seems to exceed the stated capabilities of the adversary. 5. There are some minor issues, such as the undefined notation "V^{\ast}" in the definition in Sec 3. Technical Quality: 3 Clarity: 2 Questions for Authors: Please answer all points in the Weaknesses section. 
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. In the following, we respond to each question. --- >**Q1: In Sec 3.1, what are the differences between "piggyback" and "general" spoofing attacks? Specifically, what does "piggyback" refer to?** **A1**: Piggybacking (Sec 4, L 142) is a classic attack in computer networks, where the attacker tags along with another person who is authorized to gain entry into a restricted area. The general spoofing attacks for LLM watermarks [1, 2] usually require the attacker to first estimate the watermark pattern by making a large number of queries (observing millions of watermarked tokens) to the watermarked LLM; they can then create malicious content that carries the target watermark, even though it was not generated by the watermarked LLM, in order to ruin the LLM's reputation. Our piggyback spoofing attack does not require estimating the watermark pattern; instead, we launch the attack based on the content generated by the watermarked LLM, which is similar to piggybacking in computer networks, where the attacker relies on well-established authorization. The attacker’s goal in piggyback spoofing is the same as in the general spoofing attack: both aim to create malicious/inaccurate content with a target watermark embedded. However, a benefit of our attack is that it makes much weaker assumptions about the attacker’s ability. The attacker can simply exploit the robustness property of the watermarks, maliciously editing the watermarked content without altering the watermark detection result, to produce malicious but watermarked content that ruins the LLM’s reputation. [1] Jovanović et al. Watermark Stealing in Large Language Models. ICML 24 [2] Sadasivan et al. Can AI-generated text be reliably detected? arXiv 23 --- >**Q2: In Sec 3.1, regarding attacks on detection APIs, the reviewer is confused by the statement "the attacker can auto-regressively synthesize (toxic) sentences." 
What does this mean?** **A2**: In the attacks exploiting the detection APIs, the attacker will generate sentences auto-regressively, similar to how LLMs generate sentences. That is, the attacker will select each token based on the prior tokens and the detection results. Please also refer to Alg.1 and Alg.2 in the Appendix J of our paper. We will clearly explain this in the revision. --- >**Q3: Regarding attacks discussed in Sec 5, the attackers can discover watermarking rules by observing a large amount of watermarked text, thus enabling attacks. The original KGW paper in Sec 5 mentions using a large context width h to defend against such attacks. However, this paper lacks explanation and discussion regarding the parameter h.** **A3**: Please see global response **C4**. --- >**Q4: In the detection API attack, the description of the adversary's capabilities is unclear. In Sec 6.1, the authors assume the adversary can access the target watermarked LLM's API and query watermark detection results. This implies the adversary can generate watermarked text and obtain detection results for any given text. But why can the adversary generate a list of possible replacements for x_i? Does this mean the adversary can access the perturbed probability distribution and logits of tokens? If so, this seems to exceed the stated capabilities of the adversary.** **A4**: Please see global response **C5**. --- >**Q5: There are some minor issues, such as the undefined notation $V^{\ast}$ in the definition in Sec 3.** **A5**: The $V^{\ast}$ refers to a sequence of tokens, where each token belongs to the vocabulary set $V$. We will clearly explain this in the revision. --- Rebuttal Comment 1.1: Comment: The reviewer thanks the authors for the response. The score is kept the same.
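The auto-regressive loop described in A2 can be sketched schematically. Everything here is a hypothetical stand-in (the `detect` oracle and the `candidates` proposal function); it mirrors the greedy token-selection idea of the attack, not the paper's Alg. 1/2 verbatim:

```python
def synthesize(prefix, steps, detect, candidates, spoof=True):
    """Auto-regressively build a sentence, querying a detection oracle at each
    step and keeping the candidate token that pushes the detection score up
    (spoofing) or down (watermark removal)."""
    text = list(prefix)
    sign = 1 if spoof else -1
    for _ in range(steps):
        best = max(candidates(text), key=lambda tok: sign * detect(text + [tok]))
        text.append(best)
    return text

# Toy oracle: the "score" counts tokens from a made-up green list.
green = {"a", "b"}
detect = lambda toks: sum(t in green for t in toks)
candidates = lambda toks: ["a", "b", "c", "d"]

print(synthesize(["x"], 3, detect, candidates, spoof=True))   # favors green tokens
print(synthesize(["x"], 3, detect, candidates, spoof=False))  # avoids them
```

In practice the attacker would also need a language model to keep the candidate tokens fluent, which is what constrains the search in the actual attacks.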
Summary: This paper explores the vulnerabilities and trade-offs in watermarking schemes for large language models (LLMs). It highlights how common design choices in these schemes, aimed at ensuring robustness, multiple key usage, and public detection, make them susceptible to simple yet effective attacks. The authors demonstrate that robust watermarks, intended to prevent removal, can be easily exploited through piggyback spoofing attacks that insert toxic or inaccurate content while maintaining the watermark. Additionally, using multiple watermark keys to defend against watermark stealing inadvertently increases vulnerability to watermark removal attacks. Public detection APIs, while useful for verifying watermarked content, are shown to be exploitable for both removal and spoofing attacks. Through empirical evaluations on state-of-the-art watermarks (KGW, Unigram, Exp) and models (LLAMA-2-7B, OPT-1.3B), the study rigorously demonstrates these vulnerabilities and the resulting trade-offs between robustness, utility, and usability. The paper proposes potential defenses, including the use of differential privacy techniques in detection APIs, and offers guidelines for designing more secure watermarking systems. Ultimately, the study underscores the importance of carefully considering watermarking design choices to balance security and utility, calling for further research to develop robust defenses and optimize these trade-offs. Strengths: + The paper rigorously explores various common watermarking design choices and demonstrates their susceptibility to simple yet effective attacks. It highlights the fundamental trade-offs between robustness, utility, and usability, which are crucial for understanding the limitations of current watermarking methods. + The paper provides an insightful discussion of the inherent trade-offs in watermarking design, such as the balance between watermark robustness and vulnerability to spoofing attacks. 
This helps in understanding the complexities involved in creating effective watermarking. + The authors propose potential defenses and guidelines to enhance the security of LLM watermarking systems. These recommendations are valuable for practitioners looking to deploy more secure watermarking solutions in practice. Weaknesses: + The citation in Line 248 and Figure 2 is not correct. The authors are supposed to cite [1]. + Lack of discussions in the Publicly-Detectable Watermarking [2], which compromises robustness to defend against spoofing attacks. [1] Nikola Jovanovic, Robin Staab, and Martin Vechev. Watermark stealing in large language models. arXiv preprint arXiv:2402.19361, 2024. [2] Fairoze, Jaiden, et al. "Publicly detectable watermarking for language models." arXiv preprint arXiv:2310.18491 (2023). Technical Quality: 4 Clarity: 3 Questions for Authors: + How generalizable are the findings? Would the vulnerabilities and trade-offs identified apply to all types of LLMs, or are they specific to certain architectures or applications? + What are the limitations of the attack methods presented in this study? Are there scenarios where these attacks might not be effective? + Are there any practical considerations or potential drawbacks to implementing DP defense mechanisms in real-world systems? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: See the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. In the following, we respond to each question. --- >**Q1: The citation in Line 248 and Figure 2 is not correct. The authors are supposed to cite [1].** **A1**: Thanks for pointing this out. We will fix this typo in the revision to avoid confusions. --- >**Q2: Lack of discussions in the Publicly-Detectable Watermarking [2], which compromises robustness to defend against spoofing attacks.** **A2**: There exist some recent works [1,2] that study mitigating the spoofing attack vulnerabilities in robust watermarks. The high-level idea is to embed a cryptographic signature into the subsequent tokens, and the signatures are computed using the first $m$ high-entropy tokens and the secret key. They further incorporate error correction code to make the design robust. As also mentioned by the reviewer, such designs are not as robust as the watermarks we study as they prioritize the resistance against spoofing instead of strong robustness. For instance, by simply modifying the first $m$ tokens, the signature check no longer passes. The designs of these works are consistent with our findings in the piggyback spoofing attacks: to defend against spoofing attacks, the design needs to incorporate less robust (or even non-robust) signature-based watermarks. We will include these related works in the revision to provide a more comprehensive study. [1] Zhou et al. Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature. arXiv 24 [2] Fairoze, et al. Publicly detectable watermarking for language models. arXiv 23 --- >**Q3: How generalizable are the findings? Would the vulnerabilities and trade-offs identified apply to all types of LLMs, or are they specific to certain architectures or applications?** **A3**: Please see global response **C2**. --- >**Q4: What are the limitations of the attack methods presented in this study? 
Are there scenarios where these attacks might not be effective?** **A4**: As we have discussed in **C2**, the semantics-based watermarks rely on a high-quality semantics embedding model instead of using secret keys to embed the watermark. As such designs fundamentally differ from the watermarks we study, our findings on using multiple watermark keys are not applicable there. We will include a limitations and future work section in the revision to discuss this issue and present a more comprehensive study. --- >**Q5: Are there any practical considerations or potential drawbacks to implementing DP defense mechanisms in real-world systems?** **A5**: DP adds noise to the watermark detection results. The service provider needs to determine the optimal noise scale, as larger noise will make the detection inaccurate, while too little noise will be ineffective against attackers. According to our empirical findings, we can find a sweet spot that achieves both high detection accuracy (low FPR) and a low attack success rate. Overall, we believe that our DP defense can potentially make the detection API publicly available while protecting the secret watermark pattern information without sacrificing detection accuracy. --- Rebuttal Comment 1.1: Comment: Thank you for the author's response. My concerns have been thoroughly addressed, and I recommend incorporating the discussions into the revision. This paper has the potential to significantly impact the field of LLM watermarks. I will maintain my score and advocate for its acceptance.
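A minimal sketch of the DP-style defense discussed in A5, assuming the API releases only a noisy thresholded decision rather than the raw detection score; the threshold, epsilon, and sensitivity values below are illustrative choices, not the paper's configuration:

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) noise via inverse-CDF from a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_detect(z_score, threshold=4.0, epsilon=1.0, sensitivity=1.0):
    """DP-style detection API: release only a noisy boolean decision, so
    repeated queries leak far less about the secret watermark pattern.
    Smaller epsilon means more noise (stronger privacy, noisier detection)."""
    return z_score + laplace_noise(sensitivity / epsilon) >= threshold
```

The trade-off noted in A5 shows up directly in `epsilon`: the provider tunes it so that attackers probing the API gain little signal per query, while legitimate detection on clearly watermarked text (large z-score) remains accurate.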
Rebuttal 1: Rebuttal: We appreciate all reviewers’ constructive comments. Below we clarify our contributions, respond to common questions, and present new experimental results. >**C1: Clarification on the contributions and positioning of our work.** Our work explores attacks that exploit design choices of common LLM watermarks. While these design choices may enhance robustness, resistance against watermark stealing attacks, and public detection ease, we show that they also allow malicious actors to launch attacks that can easily remove the watermark or damage the model's reputation. Although some of our high-level take-aways may confirm common beliefs (e.g., the risk of spoofing robust watermarks as noted by Reviewer Hb7x), we disagree with the implication that the feasibility and ultimate ramifications of such attacks are thus unworthy of scientific study—particularly given that these design choices have been adopted/explored both in recent research and in practical deployment. Further, our work questions common folklore (such as the inability to use public detection APIs), showing that attacks on these systems may be mitigated with our novel DP-inspired defense. Overall, our goal is to rigorously study the risks and benefits of LLM watermark design tradeoffs, and to distill these results into a set of take-aways that can better inform the public and LLM watermarking community. We consider these take-aways particularly important as the research community grows and the use of LLM watermarking systems increases, potentially out of the hands of a select set of domain experts. --- >**C2: Generalizability of our attacks. (Hb7x, UAwD, YBsi)** We focus on three SOTA PRF-based robust watermarks, which are a natural set to explore given their popularity and formal cryptographic guarantees. There are other promising watermarks like the semantics-based watermarks as the reviewers mentioned. 
While attacking semantics-based watermarks is outside the scope of our study, we agree with Reviewer UAwD that this is an interesting direction to explore, and have provided an initial exploration below. We will discuss generalizing our attacks to other watermarks including semantics-based watermarks as a potential avenue of future work in our revision. >**C2.1: Piggyback spoofing on semantics-based watermarks. (UAwD)** Semantics-based watermarks use embedding models to capture sentence semantics and bias LLM predictions. Robustness ensures semantically close sentences yield similar watermark patterns. We agree spoofing attacks are harder if we assume perfect semantics embedding models. However, inaccuracy in the embedding can make spoofing possible. As a proof of concept, we attacked the SIR watermark [1], and present a concrete example in Tab.1 of the submitted PDF to show piggyback spoofing is possible. We deem this an interesting future direction to rigorously explore and will add discussions in the revision. [1]Liu et al. A semantic invariant robust watermark for large language models. ICLR 24 --- >**C3: Consistent attack performance for larger h. (Hb7x)** Our results hold for any h and hash function in KGW watermark. Increasing h makes brute-force watermark stealing harder, but our attacks don’t depend on h or hash functions. With the latest KGW codebase, we use h=4 and sumhash in new experiments, observing consistent results with h=1 for all attacks, as shown in Figs.1-3 in the submitted PDF. --- >**C4: Discussions on the tradeoff of context width h. (Hb7x, iwzS)** We primarily explored the fundamental tradeoffs in using multiple watermark keys, which prior works have underexplored. Tradeoffs in context widths (h) are discussed in prior works [1-3]. Using larger h enhances the resistance against watermark stealing but reduces robustness. Our new experiments validate this. 
Fig.1 shows that fewer edits are allowed for watermarked content with a larger h, indicating lower robustness. KGW recommends using h<5 in their codebase for robustness, and no prior works we are aware of suggest using h>4. Recent work [1] shows successful watermark stealing even with h=4. Using multiple keys, as shown in Sec 5 of our paper, mitigates stealing attacks, but introduces new attack vectors of watermark removal. We will add a discussion on larger h in the revision. [1]Jovanović et al. Watermark Stealing in Large Language Models. ICML 24 [2]Kirchenbauer et al. A watermark for large language models. ICML 23 [3]Zhao et al. Provable Robust Watermarking for AI-Generated Text. ICLR 24 --- >**C5: Clarifications on obtaining top-5 tokens from the watermarked LLM. (Hb7x, iwzS)** In our watermark-removal attack with detection APIs, we assume the attacker can generate a short list of replacements for the current token. We used the setting of returning top-5 tokens by the watermarked LLM API because it’s beneficial to users and is currently used in commercial non-watermarked LLM services including OpenAI [1]. For instance, it can help understand model confidence, enable debugging, make custom sampling strategies, etc. One of the goals of our paper is to point out how existing LLM deployment practices can lead to attacks if watermarking is integrated. The fact that this API is vulnerable to our attacks illustrates our point. We will clearly state the attacker's assumptions in the revision. [1]OpenAI API. https://platform.openai.com/docs/api-reference/completions/create --- >**C6: Using p-value instead of z-score as the detection metric. (Hb7x)** We will follow the reviewer’s suggestion to change the detection metric from z-score to p-value for KGW and Unigram. P-values are used for Exp in our paper, and the observations are consistent with KGW and Unigram. We expect no impact on results from this change since p-value is monotonic to z-score. 
Figs.1-3 in the submitted PDF also show consistent attack performance using p-values for KGW, and we will add results for Unigram in the revision. Pdf: /pdf/87c775ff7c36343178fa132453886fdcdd447813.pdf
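The monotone relation between z-scores and p-values mentioned in C6 can be sketched as follows; this is a generic KGW-style statistic under a normal approximation, with the green-list fraction `gamma` as an assumed parameter, not the paper's exact detector:

```python
import math

def z_score(green_count, total, gamma=0.5):
    """KGW-style statistic: standardized deviation of the green-token count
    from its expectation under the null hypothesis (unwatermarked text)."""
    return (green_count - gamma * total) / math.sqrt(total * gamma * (1 - gamma))

def p_value(z):
    """One-sided p-value under a normal approximation. It is strictly
    decreasing in z, so thresholding on p-values is equivalent to
    thresholding on z-scores -- switching metrics does not change results."""
    return 0.5 * math.erfc(z / math.sqrt(2))
```

For example, 90 green tokens out of 100 (z = 8) yields a far smaller p-value than 60 out of 100 (z = 2), consistent with the monotonicity claim.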
NeurIPS_2024_submissions_huggingface
2024
Graph Coarsening with Message-Passing Guarantees
Accept (poster)
Summary: This paper studies the theoretical guarantees of graph coarsening for GNNs. The authors propose a new and directed message-passing operation specific to coarsened graphs, which makes many theoretical results possible. Strengths: S1: The results are very useful for the field of graph coarsening. Theorem 2 shows the gap of loss between the coarsened graph and the original graph, which is applicable for many tasks. S2: The new message-passing looks natural and novel to me. S3: The literature review is comprehensive. Weaknesses: W1: The overall structure is somewhat loose. W2: It would be nice to see more intuitive explanation of $S^{MP}_c$, e.g., when $S$ is symmetrically normalized adjacency matrix from GCN. W3: The experiments are conducted only on small datasets, which I understand since the authors put more effort on the theoretical analysis. Technical Quality: 3 Clarity: 2 Questions for Authors: Q1: Can the authors give some intuitive explanation on $\overline{C}_{\Pi}$, e.g., what kind of coarsening method will make this term smaller? Q2: It would be nice to see more experiments on larger datasets, e.g., ogbn-arxiv. Typo: Line 212, $\hat{A}=A+I$. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback. We will fix the typos and clarify the description of the proposed propagation matrix. **Q 1)** *Can the authors give some intuitive explanation on $\overline{C}_{\Pi}$, e.g., what kind of coarsening method will make this term smaller?* From a theoretical point of view, in general we only have a loose upper bound: $$C_{\Pi} \leq \lVert S \rVert_{L} \sqrt{\frac{\lambda_{max}}{\lambda_{min}}},$$ since $\Pi$ is an orthogonal projector, where the $\lambda$ are the eigenvalues of the Laplacian $L$. For a propagation matrix $S = \alpha I_n + \beta L$, normalized as is the case in our experiments, $\lVert S \rVert_{L} \leq 1$, and thus $C_{\Pi} \leq \sqrt{\frac{\lambda_{max}}{\lambda_{min}}}$. The same goes for $\overline{C}_{\Pi} = \lVert \Pi S \Pi \rVert_{L} \leq \lVert \Pi \rVert_{L} C_{\Pi} \leq \frac{\lambda_{max}}{\lambda_{min}}$. Experimentally, however, we observe that the actual constants are generally about a factor of 10 below these upper bounds. Moreover, we obtain lower values for uniform coarsenings; a theoretical interpretation of uniform coarsenings is a promising avenue to exploit in a coarsening procedure. **Q 2)** *It would be nice to see more experiments on larger datasets, e.g., ogbn-arxiv.* We have conducted experiments on a larger dataset, namely Reddit (see global comment A). As noted there, the Reddit dataset has 1.5 times more nodes than ogbn-arxiv. For the final version of this paper, experiments on ogbn-arxiv, conducted if time allows, would be a valuable addition. --- Rebuttal Comment 1.1: Comment: Thanks for the response. All my concerns are addressed.
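A small numerical illustration of the quantities discussed in Q 1, using the Euclidean operator norm as a stand-in for the $L$-norm; the 4-node path graph, the uniform coarsening $Q$, and the normalization $S = I - L/\lambda_{max}$ are toy choices, not the paper's experimental setup:

```python
import numpy as np

# Toy setting: 4-node path graph, uniformly coarsened into 2 supernodes.
Q = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5]])
Pi = np.linalg.pinv(Q) @ Q               # Pi = Q^+ Q, projector onto the coarsening subspace

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian
lmax = np.linalg.eigvalsh(L).max()
S = np.eye(4) - L / lmax                 # normalized propagation matrix

op_norm = lambda M: np.linalg.svd(M, compute_uv=False).max()

assert np.allclose(Pi @ Pi, Pi)          # Pi is indeed a projector
assert op_norm(S) <= 1 + 1e-12           # normalization gives ||S|| <= 1
# Euclidean analogue of the chain C_bar_Pi = ||Pi S Pi|| <= ||Pi|| * C_Pi:
assert op_norm(Pi @ S @ Pi) <= op_norm(Pi) * op_norm(S) + 1e-12
```

The inequalities checked here mirror the structure of the bounds above; the true constants use the $L$-inner-product norm, which this Euclidean sketch only approximates.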
Summary: This paper proposes a new message-passing matrix for a graph coarsening algorithm. The goal is to have some message-passing guarantees for the new message-passing matrix, which is not the case with the previous message-passing matrices based on this coarsening. They provide theoretical proofs for linear variants of GNNs (SGC). They examine their theoretical guarantees on synthetic datasets and do experiments on two real-world graphs, comparing different selections of the message-passing matrices. In most of the experiments, across varying coarsening ratios, their approach works better than alternative approaches. Strengths: Graph coarsening can indeed be very beneficial if it can be done efficiently. Pooling approaches in some domains such as computer vision have been very helpful, but they are not as prominent in the learning-on-graphs community. Having well-studied approaches for this end can be of great importance because of the memory limitations of GNNs. Also, their work has deep roots in theory and provides theoretical guarantees for their approach. Weaknesses: While the theoretical analysis is interesting, it is limited to linear networks. Extending this analysis to more complex GNNs might not be an easy task; however, this does not mean that they could not try their approach for other types of GNNs and see if it works in practice or not. Maybe they could also try more common message-passing architectures such as Graph Attention Networks (GAT) or Graph Isomorphism Networks (GIN). Also, the datasets used are fairly old and outdated in the current state of learning-on-graphs work. I would suggest trying some recent datasets, maybe varying from homophilous datasets to heterophilous ones. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The theory is based on signal processing on graphs, which works on a single value for each node. Most datasets have a vector of initial features for each node; how can this be addressed in the theory? 2. 
How much does the coarsening algorithm help with the memory? GNNs usually scale by the number of nodes + the number of edges, the coarsening ratio r talks about how much you can reduce the number of nodes, but edges seem to be a more important factor in the memory and time complexity. Is there any theoretical or experimental analysis on the edges or memory in general? Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The theoretical analysis is limited to the linear networks and the experiments are limited to two small datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback. **Datasets** See global comment (A): we have performed additional experiments on the Reddit Dataset which is significantly bigger than Cora and Citeseer. Concerning heterophilous datasets, we note that spectral-based coarsening itself is probably very inefficient, as it basically aims at preserving the low-frequencies. Dedicated, new coarsening methods is an interesting path for future work. We will add a remark in the paper. **Other models** Because of attention coefficients, GAT do not rely on a propagation scheme that can be expressed as the multiplication of the node representation $H$ by a fixed propagation matrix $S$. Thus, we can't adapt our method to compute a new propagation matrix for the coarsened graph with this model. However, message-passing on coarsened graphs with attention coefficients and/or edge features is an important path for future work. We will add a remark. **Q 1)** *The theory is based on signal processing on graphs that work on a single value for each node. Most datasets have a vector of initial features for each node, how this can be addressed in the theory?* See global comment (B). **Q 2)** *How much does the coarsening algorithm help with the memory? GNNs usually scale by the number of nodes + the number of edges, the coarsening ratio r talks about how much you can reduce the number of nodes, but edges seem to be a more important factor in the memory and time complexity. Is there any theoretical or experimental analysis on the edges or memory in general?* The memory used and number of edges in the coarsened graph depend on the coarsening algorithm itself: in general, two super nodes are connected if at least two nodes they represent in the original graph are connected. 
For the Loukas algorithm that we used, Loukas wrote that "The sparsification step was not included in the numerical experiments since it often resulted in increased errors", which prevented us from performing an additional sparsification step. A new coarsening algorithm that better controls the sparsity of the resulting graphs while maintaining a good RSA constant is an interesting path for future work, but out of scope of the present paper. The numbers of edges after coarsening for Reddit, Cora, and Citeseer can be found in Table 1 (see the attached PDF with tables). --- Rebuttal 2: Comment: I thank the authors for their rebuttal and new experiments. In general, I think the practical applicability of this work is limited at present. The computational cost seems high, and this class of graph coarsening idea seems to be mostly applicable to homogeneous datasets (usually less challenging datasets); as nodes inside a supernode can only be assigned to the same class, the performance would be poor on more heterogeneous datasets. However, I also think that we need more theory to understand the coarsening and pooling algorithms. I am not an expert in this area and cannot evaluate the significance of the theoretical results provided in this work or their applicability to other works or for future theoretical analysis. Relying on Reviewer 7C68's review, I would like to increase my score to 5; however, I want to decrease my confidence level in my assessment to 1, since I am making a decision in an area where my knowledge is very limited. --- Rebuttal 3: Comment: Thank you for your answer. Indeed, you are correct in pointing out that the *coarsening* process in itself is still an active area of research and that classical spectral-based coarsening must be improved for certain datasets. Our work, however, studies message-passing on coarsened graphs, which is downstream from the coarsening process. 
But we hope that it might serve as a pointer for improving the coarsening itself in future work.
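The contrast between the proposed propagation matrix $Q S Q^+$ and the pooling-inspired $Q S Q^T$ that runs through these rebuttals can be illustrated on a toy uniform coarsening. The choices below ($S = A + I$, equal-size clusters) are illustrative assumptions, not the paper's setup; for this uniform $Q$ one has $QQ^T = \tfrac{1}{2}I$, hence $Q^+ = 2Q^T$, so the two candidate matrices differ exactly by the normalization:

```python
import numpy as np

# Toy graph: 6-node path; coarsen into 3 supernodes of 2 consecutive nodes each.
n, N = 6, 3
A = np.diag(np.ones(n - 1), 1)
A = A + A.T                              # path-graph adjacency
S = A + np.eye(n)                        # a simple propagation matrix (A + I)

# "Uniform" coarsening matrix Q: each supernode averages its two nodes.
# Its rows are orthogonal but not orthonormal, so Q^+ != Q^T.
Q = np.zeros((N, n))
for c in range(N):
    Q[c, 2 * c:2 * c + 2] = 0.5

S_mp = Q @ S @ np.linalg.pinv(Q)         # proposed message-passing matrix Q S Q^+
S_diff = Q @ S @ Q.T                     # pooling-style alternative    Q S Q^T

# Here Q Q^T = 0.5 I, hence Q^+ = 2 Q^T: the matrices differ by a factor of 2.
assert np.allclose(np.linalg.pinv(Q), 2 * Q.T)
assert np.allclose(S_mp, 2 * S_diff)
```

When $Q$ has orthonormal rows the two coincide, matching the rebuttal's remark; the factor-of-2 gap here shows concretely why the pseudo-inverse supplies the normalization that $Q^T$ alone misses for non-orthogonal uniform coarsenings.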
Summary: This work presents a novel computation method for the message-passing matrix on coarsened graphs. This method does not require recalculating degree matrices and other information on the coarsened graph and has comprehensive theoretical guarantees. Overall, it addresses a significant problem in graph coarsening field. Strengths: 1. The theoretical analysis is sufficient and reasonable. 2. The proposed method is very simple. 3. The model performs very well on Cora and Citeseer. Weaknesses: The experimental section is insufficient. Testing only on Cora and Citeseer is not enough to demonstrate the effectiveness of the method. More GNN models should also be tried. I believe this work is a valuable contribution to the field of graph coarsening, and if the authors further increase the experiments, I will raise my score. Technical Quality: 3 Clarity: 2 Questions for Authors: How effective is this work on datasets such as Arxiv and Products? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors adequately addressed the limitation and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback. **Q 1)** *How effective is this work on datasets such as Arxiv and Products?* See global comment (A): by improving our spectral coarsening algorithm, we were able to conduct experiments on a larger graph, Reddit. This graph has 1.5 times more nodes than ogbn-arxiv. For the final version of this paper, experiments on ogbn-arxiv, conducted if time allows, would be a valuable addition. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I believe that the volume of experiments in this paper still has not reached my expectations. However, considering that the paper addresses a very important issue in graph coarsening, I have raised the score to 7 and encourage the authors to continue adding more experiments. --- Reply to Comment 1.1.1: Comment: Thank you for your answer. We are confident at this point that it will be possible for us to extend the experimental section with other large datasets beyond Reddit.
Summary: The authors describe an alternative way to obtain the connectivity matrix of a coarsened graphs and provide some bounds on operations performed on such a matrix. Strengths: Theoretical work on how to optimally compute the connectivity matrix of a coarsened graph is an interesting and potentially useful research direction. Weaknesses: - The main contribution is very small, as it simply consists in replacing the coarsened matrix QSQ^T, commonly used in graph pooling, with QSQ^+. This seems more of a detail in practice and it seems too much to have a whole paper on it. I seriously doubt it would make a significant difference in practice and the limited experimental evaluation (more on this later) does not help to address my concern. - I don't see the usefulness of Theorems 1 and 2, which is the second contribution of the paper. They provide bounds which I don't find useful, as they do not compare against other existing bounds and they are not computed for other coarsening schemes, such as the more common QSQ^T. For example, it would be useful to see that the proposed coarsened matrix yields narrower bounds than the latter. - I believe that there are too many simplifications and assumptions for the theoretical results to be relevant in practice. For example, the RSA constant is defined only for 1-D node features, which is something not commonly found in many graph data processed by graph neural networks. Similarly, it seems that Theorem 2 relies on the assumption that each column of the node features X belongs to R, which seems too strong and unrealistic as assumption. Finally, the whole paper assumes GNNs without nonlinearities. I believe that a GNN without nonlinearity is not a GNN and a paper completely centered around the analysis of such models should rather be published in a (graph) signal processing or linear algebra venue, not a machine learning one. - The experimental evaluation is too limited and not convincing. 
First of all, it only considers relatively small graphs, as the coarsening algorithm used does not scale well. This defies the whole premise of using coarsened graphs to handle large graphs that cannot be processed due to high computational complexity. - Only one coarsening algorithm is considered to obtain Q, while there is a large plethora of existing graph pooling algorithms that can be used to compute Q. To convincingly demonstrate the effectiveness of the proposed method, the author(s) should show that it works with different coarsening schemes. Remarkably, the coarsening algorithm used in the experimental evaluation does not even account for node features. This, again, sets the work apart from the GNN and machine learning community. - Besides the synthetic data, the only two datasets considered are Cora and Citeseer. These datasets are rather similar (both of them are citation networks) and they have very homophilic node features/labels, which might bias the experimental evaluation. In addition, the experiment considers only the largest connected component of these networks, which further simplifies the task on datasets that are already simplistic. The need for such an unusual experimental setting casts further doubt on the effectiveness of the proposed method. - The authors largely overlook relevant related work on graph pooling. See for example the CONNECT operation from the paper entitled "Understanding Pooling in Graph Neural Networks". - (minor) I think Ker(L) is not defined. Technical Quality: 2 Clarity: 3 Questions for Authors: I do not have questions besides the concerns above. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See what I wrote in weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback. We have addressed your comments individually below. Before answering the questions, we would like to clarify that graph pooling and graph coarsening are linked methods with different purposes. Graph pooling is generally incorporated into the GNN to improve classification results by mimicking the "pooling" in CNNs. It is often supervised and differentiable, such as DiffPool, but in turn generally has no guarantees, being the result of a non-convex optimization problem. On the other hand, graph coarsening is a preprocessing step used to save memory. It is often unsupervised, with guarantees such as the RSA constant introduced by Loukas. **Q 1)** *The main contribution is very small, as it simply consists in replacing the coarsened matrix $QSQ^T$, commonly used in graph pooling, with $QSQ^+$. [...]* We agree that this new propagation matrix is indeed "deceptively" simple, even if it is quite original in the sense that it is not a Laplacian or any other form of graph representation matrix, as it may not even be symmetric. In effect, we show that our proposal is the "right" normalization to directly translate spectral guarantees (which are classical in graph coarsening and the objective of many coarsening algorithms) into MP guarantees. Other matrices simply do not have this mathematical property. Note that when $Q$ is orthogonal, the two coincide, but classical "uniform" coarsenings are not orthogonal. Also note that in the experiments we compared our matrix with $S_c^{diff} = QSQ^T$, inspired by the pooling literature, and in our experiments $S_c^{diff}$ gives far less competitive accuracy results. **Q 2)** *I don't see the usefulness of Theorems 1 and 2, [...] They provide bounds which I don't find useful, as they do not compare against other existing bounds and they are not computed for other coarsening schemes, such as the more common $QSQ^T$. 
[...]* The crucial part of our theoretical bounds is the dependency on the RSA constant $\epsilon_{Q, {L}, \mathcal{R}}$, which tends to be small, as it is explicitly minimized by coarsening algorithms. Unfortunately, other propagation matrices (including $QSQ^T$) simply do *not* yield any mathematical guarantee in this spectral-based framework, hence the impossibility of "comparing" with existing bounds. We will clarify this in the final version of the paper. **Q 3)** *I believe that there are too many simplifications[...]. For example, the RSA constant is defined only for 1-D node features[...]. Similarly, it seems that Theorem 2 relies on the assumption that each column of the node features X belongs to R [...] Finally, the whole paper assumes GNNs without nonlinearities* See global comment (B) for multidimensional node features. Theorem 2 relies on the assumption that each column of the node features satisfies $X_{:,i} \in \mathcal{R}$, that is, is close to low-frequency signals. This assumption seems reasonable for homophilic datasets (Cora, Citeseer) and a large preserved subspace. We agree that for now the assumption on non-linearities is strong. However, SGC is indeed used in many theoretical works to analyse the inner workings of GNNs (see e.g. [1,2] and references therein), and we still believe that it opens a path for interesting future work on the interaction between low frequencies and non-linearities. **Q 4)** *The experimental evaluation [...] only considers relatively small graphs, as the coarsening algorithm used does not scale well. This defies the whole premise of using coarsened graphs to handle large graphs [...].* See global comment (A). **Q 5)** *Only one coarsening algorithm is considered to obtain Q, while there is a large plethora of existing graph pooling algorithms that can be used to compute Q. [...] 
Remarkably, the coarsening algorithm used in the experimental evaluation does not even account for node features. [...]* The starting point of our work is indeed coarsening algorithms that come with spectral-based theoretical bounds (see the header comment on graph pooling vs. graph coarsening). To our knowledge, Loukas' coarsening algorithm is one of the most classical algorithms with such theoretical spectral guarantees. We do agree that future work on coarsening algorithms incorporating both spectral guarantees and node features is a promising avenue, but this paper does not focus on the coarsening algorithm itself, rather on how to translate classical spectral guarantees to GNNs. **Q 6)** *Besides the synthetic data, the only two datasets considered are Cora and Citeseer. These datasets are rather similar (both of them are citation networks) and they have very homophilic node features/labels, which might bias the experimental evaluation. In addition, the experiment considers only the largest connected component of these networks [...]* We considered the main connected component of Cora and Citeseer because Loukas' coarsening algorithm was designed for connected graphs. It is true that spectral coarsening algorithms might be less efficient on heterophilic datasets, since the features are less likely to live in the low frequencies of the graph. The design of new coarsening algorithms for these graphs is an important path for future work. We also conducted additional experiments on the Reddit dataset with good results; see global comment (A). **Q 7)** *See for example the CONNECT operation from the paper entitled "Understanding Pooling in Graph Neural Networks"* Thank you for the suggestion; we will add this reference to the final version and discuss it. We will also clarify the relation between pooling and coarsening (see top). [1] Zhu et al. *Graph Neural Networks with Heterophily*. AAAI. [2] Keriven. 
*Not too little, not too much: a theoretical analysis of graph (over)smoothing*. NeurIPS. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers. I do not completely agree with the distinction proposed by the authors between graph pooling and graph coarsening. Graph pooling can also be a pre-processing step that reduces the size of the graph and, thus, the memory consumption. See for example Graclus [1], originally introduced by [2] as a pooling scheme, and other non-trainable pooling operators described in [3]. In addition, there are some recent works that show guarantees of pooling operators in terms of their capability of keeping two non-homomorphic graphs distinguishable after pooling [4]. I suggest that the authors clarify this connection... or find a stronger argument for why pooling and coarsening should be different things. I still believe that the practical contribution seems a rather small detail that would arguably make a small difference using $QSQ^+$ rather than $QSQ^T$ in most practical settings. At least, that's the experience I had myself when I replaced $S^T$ with a pseudo-inverse on some problems I am currently working with. Nevertheless, I see that the value of this work is to be a starting point for a theoretical study on pooling/coarsening in GNNs that will hopefully be developed further in the future. Even if most of my concerns still remain after the rebuttal, I do appreciate the effort of the authors in answering every reviewer in detail and adding additional experiments. Therefore, I'll raise my scores conditional on the fact that the authors will modify the paper as asked. [1] I. S. Dhillon, Y. Guan, and B. Kulis. Weighted graph cuts without eigenvectors: a multilevel approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1944–1957, 2007. [2] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. 
In Advances in Neural Information Processing Systems, pages 3844–3852, 2016. [3] Grattarola, D., Zambon, D., Bianchi, F. M., & Alippi, C. (2022). Understanding pooling in graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 35(2), 2708-2718. [4] Bianchi, F. M., & Lachi, V. (2024). The expressive power of pooling in graph neural networks. Advances in Neural Information Processing Systems, 36. --- Rebuttal 2: Comment: Thank you for your careful review and for raising your score. A few comments on the points you mention. **On pooling**: we agree that the vocabulary in the community might overlap a bit at this point. We will try to clarify our meaning as much as we can in the final version (that is, our focus on spectral-based unsupervised coarsening) and add the references you mention. Thank you for providing them. **On the propagation matrix**: it is true that the ``more orthogonal'' $Q$ is, the smaller the difference between $Q^+$ and $Q^T$. More generally, the difference between the two is more pronounced when supernodes have very different sizes, which may happen for highly irregular graphs (e.g., for uniform coarsenings, when all the supernodes are exactly the same size, there is only a multiplicative constant between $Q^+$ and $Q^T$). We will explain this better in the final version, as well as outline the datasets where this happens more frequently. --- Rebuttal Comment 2.1: Comment: The point you made in your answer about the propagation matrix, i.e., that $Q^{+}$ and $Q^\top$ become more similar as $Q$ induces a balanced partition, makes sense, but it is something I missed when reviewing the paper. Indeed, I strongly encourage the authors to stress this point. There is a class of graph pooling methods that encourage the sizes of the supernodes to be balanced (see for example the dense pooling methods from this recent survey paper [1]). In this case, it would make less sense to use $Q^{+}$. 
Again, this seems an important point worth commenting on. [1] Wang, Pengyun, et al. "A Comprehensive Graph Pooling Benchmark: Effectiveness, Robustness and Generalizability." arXiv preprint arXiv:2406.09031 (2024). --- Reply to Comment 2.1.1: Comment: Thank you for the pointers and reference, that is indeed an important point that we will emphasize in the final version.
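The distinction debated above (between the proposed propagation matrix $QSQ^+$ and the pooling-style $QSQ^T$, and the role of balanced supernodes) can be illustrated with a small numerical sketch. This is not code from the paper: it assumes a uniform "averaging" coarsening matrix $Q$ of shape $n \times N$ (one non-zero per column, rows normalized to average node values), and a toy path-graph adjacency as the propagation matrix $S$:

```python
import numpy as np

def averaging_coarsening(assignment, n_coarse):
    """Uniform coarsening matrix Q (n_coarse x N): Q[r, i] = 1/|supernode r|
    if node i belongs to supernode r. Such Q is not orthogonal."""
    N = len(assignment)
    Q = np.zeros((n_coarse, N))
    for node, sn in enumerate(assignment):
        Q[sn, node] = 1.0
    return Q / Q.sum(axis=1, keepdims=True)

# Toy propagation matrix on a 6-node path graph (plain adjacency).
S = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)

# Balanced partition {0,1,2} | {3,4,5}: Q^+ is just a multiple of Q^T,
# so Q S Q^+ and Q S Q^T agree up to a constant (here, the supernode size 3).
Qb = averaging_coarsening([0, 0, 0, 1, 1, 1], 2)
S_mp = Qb @ S @ np.linalg.pinv(Qb)   # proposed propagation matrix Q S Q^+
S_diff = Qb @ S @ Qb.T               # pooling-inspired alternative Q S Q^T
assert np.allclose(S_mp, 3 * S_diff)

# Unbalanced partition {0} | {1,...,5}: no single scalar relates the two
# matrices, and Q S Q^+ need not even be symmetric.
Qu = averaging_coarsening([0, 1, 1, 1, 1, 1], 2)
S_mp_u = Qu @ S @ np.linalg.pinv(Qu)
S_diff_u = Qu @ S @ Qu.T
print(np.round(S_mp_u, 3))
```

The asymmetry of `S_mp_u` matches the remark in Q1 that the proposed matrix "is not a Laplacian or any other form of graph representation matrix, as it may not even be symmetric."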
Rebuttal 1: Rebuttal: We thank all the reviewers for their reviews and questions. In this global comment, we address two questions that were mentioned in multiple reviews. Namely, we introduce new experiments on a larger dataset (Reddit) and comment on the multidimensionality of node features. **A)** ***Larger dataset*** As mentioned in the paper, Loukas' coarsening algorithm is one of the only ones that provide RSA guarantees, but in turn it is quite costly to run. Improving the computational cost of spectral-based coarsening algorithms is a major path for future work, but out of the scope of this paper. Hence we evaluated our coarsened propagation matrix on smallish datasets in the first version of our paper. During the reviewing process, we upgraded our code to deal with larger graphs and conducted experiments on the Reddit dataset (available in PyTorch Geometric), which is 1.5 times bigger than the ogbn-arxiv dataset and 100 times bigger than Cora or Citeseer. We used two coarsening ratios, $r = 90\%$ and $r = 99\%$; the resulting numbers of nodes and edges can be found in Table 1 (see the additional PDF with tables). For the final version of the paper, experiments will be conducted on ogbn-arxiv if time allows. The coarsening is performed with the variation-edges variant of Loukas' coarsening, preserving the first 400 eigenvectors. As a reference point for such heavy coarsening ratios, we add the "max acc possible", which is inherent to the coarsening: it corresponds to the optimal prediction over the supernodes of the coarsened graph (all the nodes coarsened into a supernode share the same prediction, optimally the majority label of the cluster). For the node classification task, the learning rate and weight decay are 0.1 and 0.0, respectively. The node prediction results on the Reddit dataset are reported in Table 2. 
Our propagation matrix ${S^\textup{MP}_c}$ achieves very good results with the SGC model, very close to the maximum accuracy possible on the given coarsening. It is also competitive with the GCNconv model and achieves better results at the biggest coarsening ratio. The message-passing error for different coarsened propagation matrices is reported in Table 3. Our propagation matrix for coarsened graphs achieves a lower message-passing error, close to the RSA constant computed on the coarsened graph. This is consistent with the fact that, with our propagation matrix, the message-passing error is bounded by Theorem 1, so we expect lower values. These additional experiments thus further demonstrate the effectiveness of our method on large graphs, for which coarsening as a preprocessing step is crucial to save memory. **B)** ***Multidimensional node features*** The spectral guarantees and the corresponding RSA constant $\epsilon_{Q , {L}, \mathcal{R}}$ depend on a *whole vector space* $\mathcal{R} \subset \mathbb{R}^{N}$. They therefore apply to multidimensional features by treating each coordinate independently, as seen in Theorem 2. We will clarify this in the final version. Pdf: /pdf/85af411d236e481dd128e892ceaecd051d551586.pdf
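The "max acc possible" baseline described in global comment (A) — every node in a supernode receives that supernode's majority label — can be sketched in a few lines. This is a hypothetical reimplementation (function name and inputs are our own, not the authors' code):

```python
from collections import Counter

def max_possible_accuracy(assignment, labels):
    """Upper bound on node-classification accuracy after coarsening:
    every node inside a supernode is predicted to have the supernode's
    majority label, so the bound is inherent to the partition itself."""
    clusters = {}
    for node, sn in enumerate(assignment):
        clusters.setdefault(sn, []).append(labels[node])
    # For each supernode, count how many nodes carry the majority label.
    correct = sum(Counter(members).most_common(1)[0][1]
                  for members in clusters.values())
    return correct / len(labels)

# Two supernodes: {0,1,2} with labels [0,0,1] and {3,4,5} with labels [1,1,1].
# The best any per-supernode prediction can do is 5 of 6 nodes correct.
print(max_possible_accuracy([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]))
```

The heavier the coarsening, the more heterogeneous the clusters and the lower this ceiling, which is why it is a useful reference for the 90% and 99% ratios reported for Reddit.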
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes a novel message-passing guarantee for graph coarsening and a new message-passing operation with the message-passing guarantee. Experiments demonstrate that the prediction performance of the proposed method outperforms some baselines. Strengths: 1. The proposed message-passing guarantee is novel. 2. The authors provide the theoretical analysis Weaknesses: 1. How to select the hyperparameters in experiments (e.g. the number of the SGC layers)? The selected coarsening ratio is significantly larger than existing works. 2. Do linear GNNs [3] satisfy Assumption 4? 3. I am not sure whether the analysis under the linearity assumption is enough. Assume the processed features of SGC are $H=A^KX \in \mathbb{R}^{n \times r}$, where the feature dimension $r$ is significantly smaller than the number of nodes $n$. By noticing that the rank of $H$ is at most $r$, we can compress the node features into size $(r,r)$ without errors. So, what is the motivation for graph coarsening under the linearity assumption? 4. The authors may want to compare the spectral guarantee and the proposed message-passing guarantee in detail. Moreover, I suggest summarizing these theoretical properties of existing methods and the proposed method. 5. The formulation of message passing is different from [5]. The message passing framework considers graphs with edge features while Equation (1) does not consider them. Therefore, the concept of message passing guarantees may mislead readers. In my opinion, the proposed concept in this paper is close to convolution matching [6]. 6. How to effectively compute $Q^+$ in practice? The complexity analysis is missing. [1] Featured Graph Coarsening with Similarity Guarantees. [2] Graph Distillation with Eigenbasis Matching. [3] How Powerful are Spectral Graph Neural Networks? [4] Graph Reduction with Spectral and Cut Guarantees. [5] Neural Message Passing for Quantum Chemistry. 
[6] Graph Coarsening via Convolution Matching for Scalable Graph Neural Network Training Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback. We address each comment below. **Q 1)** *How to select the hyperparameters in experiments (e.g. the number of the SGC layers)? The selected coarsening ratio is significantly larger than existing works* We chose classical values for the training of the SGC models, as the paper does not focus on that point but on the theoretical guarantees. We will add a wider range of parameters, including coarsening ratios, in the appendices of the final version. **Q 2)** *Do linear GNNs [3] satisfy Assumption 4?* A linear GNN is formulated in [3] as $Z = g(\hat{L})XW$, where $Z \in \mathbb{R}^{N \times d}$ is the prediction matrix, $g$ is a learnable real-valued polynomial, and $W$ is a learnable weight matrix. The polynomial in $\hat{L}$ is a message passing with a number of layers equal to the polynomial degree; as the model is linear with $\sigma = \mathrm{id}$, linear GNNs satisfy Assumption 4. We will add the reference. **Q 3)** *I am not sure whether the analysis under the linearity assumption is enough. Assume the processed features of SGC are $H = A^KX \in \mathbb{R}^{n\times r}$, where the feature dimension $r$ is significantly smaller than the number of nodes $n$. By noticing that the rank of $H$ is at most $r$, we can compress the node features into size $(r,r)$ without errors. So, what is the motivation for graph coarsening under the linearity assumption?* Thank you for this very interesting remark. Compressing the propagated features down to the rank of the node features would indeed result in a very different but efficient compression method, even if it somewhat blurs the link with vanilla semi-supervised learning, as the loss could not be computed directly. We elected to keep a direct link with message passing and GNNs in this work, but consider your suggestion a promising avenue for future work. **Q 4)** *The authors may want to compare the spectral guarantee and the proposed message-passing guarantee in detail. 
Moreover, I suggest summarizing these theoretical properties of existing methods and the proposed method.* With our new propagation matrix $S^\textup{MP}\_{c}$ on the coarsened graph (contrary to other choices), spectral guarantees *lead* to message-passing guarantees through the RSA constant $\epsilon\_{Q,{L}, \mathcal{R}}$; hence they are one and the same. To our knowledge, no other method has guarantees of this type, hence our difficulty to ``compare'' the theoretical bounds. **Q 5)** *The formulation of message passing is different from [5]. The message passing framework considers the graphs with edge features while Equation (1) does not consider them. Therefore, the concept of message passing guarantees may mislead readers. In my opinion, the proposed concept in this paper is close to convolution matching [6].* Thank you for this remark. We agree that incorporating edge features in coarsened graphs is an important path for future work (e.g. to handle GAT); however, GNNs without edge features remain classical in many fundamental models such as GCNconv or GIN, where the authors effectively define a "propagation matrix" to describe the message-passing process. **Q 6)** *How to effectively compute $Q^+$ in practice* We use Loukas' coarsening algorithm, where the matrix $Q$ is "well defined", i.e. each original node is mapped to a unique "supernode" in the coarsened graph. This means that $Q$ has a unique non-zero entry per column. For such a matrix $Q \in \mathbb{R}^{n \times N}$, Loukas proposed an "easy inversion" property (Proposition 6): $Q^{+} = Q^TD^{-2}$ with $D(r,r) = \lVert Q(r,:) \rVert\_2$. Since $Q$ has $N$ non-zero entries and computing the pseudo-inverse only requires the row norms of $Q$, the complexity is linear in the number of nodes of the original graph, $\mathcal{O}(N)$. [3] How Powerful are Spectral Graph Neural Networks? [5] Neural Message Passing for Quantum Chemistry. 
[6] Graph Coarsening via Convolution Matching for Scalable Graph Neural Network Training --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. The rebuttal has addressed Concerns 2 and 6. Unfortunately, my Concerns 1, 3, 4, and 5 remain unaddressed. The suggestions and questions are as follows. 1. **Why is the selected coarsening ratio significantly larger than existing works?** How to select the coarsening ratio in experiments? 2. **What is the motivation** for graph coarsening under the linearity assumption? 3. The answer to Concern 4 is confusing. If spectral guarantees lead to message-passing guarantees, then what is the contribution of this paper? A method with spectral guarantees is enough in practice, as the method also ensures message-passing guarantees. 4. The concept of message-passing guarantees is still confusing. Given your answer, convolution guarantees may be more accurate than message-passing guarantees, as the convolution operation usually does not consider the edge features. --- Rebuttal 2: Comment: Thank you for your answer. We attempt to answer your remaining questions below. 1. *Why is the selected coarsening ratio significantly larger than existing works? How to select the coarsening ratio in experiments?* We select three coarsening ratios that are used in Loukas' work [1] to illustrate our theoretical results. Following your suggestion, the final version of the paper will include more coarsening ratios (please note that in the work of Dickens et al. [4] mentioned earlier in your questions, the coarsening ratio is defined as $1-r$ compared to Loukas' and ours). In practice, selecting the ratio really depends on the use case: whether the aim of the user is to save storage memory, train a GNN, and so on. For instance, compared to smaller datasets, we had to select a very high coarsening ratio for Reddit in order to train a GNN for it on a laptop. Of course, the higher the ratio, the ``worse'' the results compared to the original graph. 2. 
*What is the motivation for graph coarsening under the linearity assumption?* We agree that for now the assumption on non-linearities is strong. However, SGC is indeed used in many theoretical works to analyse the inner workings of GNNs (see e.g. [2,3] and references therein), and we still believe that it opens a path for interesting future work on the interaction between low frequencies and non-linearities, in order to treat more general GNNs on coarsened graphs. 3. *The answer for Concern 4 is confusing. If spectral guarantees lead to message-passing guarantees, then what is the contribution of this paper? A method with spectral guarantees is enough in practice, as the method also ensures message-passing guarantees.* Spectral guarantees and message-passing guarantees are of a different nature and concern different objects. Spectral guarantees are inherent to a graph coarsening and refer to the fact that the low frequencies of the graph are preserved by coarsening (i.e., a low $\epsilon$ constant). Most algorithms, such as Loukas', which we employ in the experiments, aim at producing such spectral guarantees. Message-passing guarantees concern the choice of a *propagation matrix*. Our work consists in showing that, even assuming that the coarsening exhibits spectral guarantees, message-passing guarantees are *not* automatic, and are generally not satisfied for naive choices of propagation matrices. We then propose a new propagation matrix that yields such message-passing guarantees when the coarsening has spectral guarantees (that is, we bound the message-passing error of this propagation matrix by $\epsilon$, hence the fact that spectral guarantees ``lead to'' message-passing guarantees *for this new propagation matrix only*). We will clarify this in the final version. 4. *The concept of message-passing guarantees is still confusing. 
Given your answer, convolution guarantees may be more accurate than message-passing guarantees, as the convolution operation usually does not consider the edge features.* Thank you for your suggestion. We will not change the title at this point, but will make this point of vocabulary clear in the final version. [1] Graph Reduction with Spectral and Cut Guarantees, Andreas Loukas, JMLR 2019 [2] Zhu et al. Graph Neural Networks with Heterophily. AAAI. [3] Keriven. Not too little, not too much: a theoretical analysis of graph (over)smoothing. NeurIPS. [4] Dickens et al. Graph Coarsening via Convolution Matching for Scalable Graph Neural Network Training --- Rebuttal Comment 2.1: Comment: Thanks for the detailed response and most of my concerns have been addressed. Therefore, I raise my score to support the acceptance of this paper.
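The "easy inversion" property discussed in Q6 ($Q^{+} = Q^T D^{-2}$ for a coarsening matrix with a single non-zero entry per column) can be checked numerically against a generic pseudo-inverse. This sketch is ours, not the authors' code; it assumes Loukas' convention of an $n \times N$ matrix $Q$:

```python
import numpy as np

# A coarsening matrix with one non-zero entry per column: each of the
# N = 5 original nodes is mapped to exactly one of the n = 2 supernodes.
Q = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1/3, 1/3, 1/3]])

# "Easy inversion": Q^+ = Q^T D^{-2} with D(r, r) = ||Q(r, :)||_2.
# Only the squared row norms are needed, so the cost is O(N) rather
# than the cost of a full SVD-based pseudo-inverse.
D2 = np.sum(Q ** 2, axis=1)   # squared row norms, i.e. the diagonal of D^2
Q_plus = Q.T / D2             # broadcasting divides column r by D2[r]

assert np.allclose(Q_plus, np.linalg.pinv(Q))
```

The check works because such a $Q$ has mutually orthogonal rows (disjoint supports), so $QQ^T = D^2$ is diagonal and $Q^+ = Q^T (QQ^T)^{-1} = Q^T D^{-2}$.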
Model Based Inference of Synaptic Plasticity Rules
Accept (poster)
Summary: This paper proposes a novel method for inferring plasticity rules from neural and behavioral data. In contrast to previous approaches, the plasticity rule is directly optimized to maximize the similarity of the output of a model trained with the plasticity rule to a target (neural activity or behavior). This approach is validated with synthetic experiments and used to infer plasticity rules from fruit fly behavioral data. Strengths: * Determining the functional roles of synaptic plasticity is a fundamental problem in computational & theoretical neuroscience. * The method is not particularly novel, but it is simple, nicely explained, flexible, and well-validated. * The paper is very well-written, and all methods and findings are clearly described. Weaknesses: * My sole concern with this method is its broader applicability to more complex problems; inferring the learning dynamics involved in a simple forced choice task where the biological network is relatively well-known is very different from partially observed neural activity with poorly-understood connectivity during free behavior. I'm particularly concerned about the case where the influence of a large number of unobserved neurons needs to be considered. Technical Quality: 4 Clarity: 4 Questions for Authors: * Double-check the subscripts in Eq (5), I think $y_j$ should be $y_i$. * For the bottom portion of Fig 2, are the weights being compared the weights after training when using the ground truth and inferred rules? * Is there some explanation for why the last rule in Table 1 exhibits such different behavior? * Is there held out validation data in the experiments in Section 5, or is a single dataset used throughout? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer MyoH Dear Reviewer, Thank you for your feedback and your recognition of the interesting aspects of our paper. We have addressed your highlighted Weaknesses and Questions one by one below: ## Weaknesses 1. We acknowledge that when choosing problems to test our approach we used behavioral settings and plasticity rules that were relatively simple. Our choice of problems was inspired by the kinds of behavioral data that our colleagues had available to test our approach on, and it is something that can be expanded upon in the future to include more complex problems. Despite this, we believe that the experiments described in this paper suggest that our approach can in fact be applied to more complex situations. Our reasoning for this is two-fold: - First, while we made the theoretical assumption in this paper that the plasticity rule is a function of only presynaptic activity, postsynaptic activity, reward, and the current weight value, our fitting procedures are designed such that additional terms can be added without difficulty. This would allow us to estimate a large variety of plasticity rules. - Second, we simulated and were able to retrieve a large number of basically arbitrary plasticity rules (in the Appendix) from neural trajectories. This estimation also showed resilience to noise and sparsity of the input data. ## Questions 1. You are correct that the subscripts in Eq (5) had a typographical error. We have updated the text to reflect this. Thank you for pointing this out. 2. Regarding the R-squared calculation in Figures 2E and 2F, we calculated the R-squared score by comparing the weight trajectories of the ground truth rule with the weight trajectories of the inferred rule after the learning rule optimization was completed. In particular, our R-squared comparison was not limited to the final weights, and it instead compared the full trajectories. 3. 
The Reviewer is correct that our method sometimes infers plasticity rules that poorly approximate the weight trajectories, activity trajectories, and/or behavioral trajectories (Tables 1 and 2). This isn’t surprising because plasticity rules are nonlinear dynamical systems, and their predictions may depend very sensitively on parameters and initial conditions. We would contend that no inference method will ever be able to perfectly recover an arbitrary plasticity rule, and the intention of Tables 1 and 2 is to merely illustrate the range of things that are possible. Nevertheless, we would hypothesize that the rules implemented by biology are less unwieldy than some of the possibilities revealed by the tables. Indeed, our method works very reliably on the canonical rules that were designed to do useful computations and/or capture biological phenomena. We hope that using our framework to fit real data will help us to identify novel, well-behaved rules without necessitating laborious hand design. 4. In Section 5, we fit behavioral data from 18 flies, utilizing the full dataset for our reported results. We acknowledge, however, that incorporating cross-validation would strengthen our analysis. If our paper is accepted for publication, then we commit to implementing this approach and including it in the final version of our manuscript, thereby enhancing the robustness of our model evaluation. It's worth noting that in our simulation experiments, we employ a separate held-out test set to assess model performance. --- Rebuttal Comment 1.1: Comment: I appreciate the thoughtful response. I maintain that this is an interesting paper that proposes a clean and fairly general method to an important problem and recommend it for acceptance on that basis alone. I agree with the other reviewers that the potential for broader impact is less clear. 
There seem to be good reasons to doubt that this method or its underlying principles will scale to more complex paradigms, but this is ultimately an empirical question which will only be answered by substantially more involved experiments in follow-up work. I stand by my original score.
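The rebuttal above notes that the plasticity rule is modeled as a function of presynaptic activity, reward, and the current weight, with additional terms easy to add; the later discussion of Fig 4C labels coefficients by exponent triples such as "001" for the weight-decay term. A minimal sketch of such a truncated Taylor-series parameterization (our own illustration, with hypothetical names and coefficients, not the authors' implementation):

```python
import numpy as np

def plasticity_update(theta, x, r, w, degree=2):
    """Delta-w as a truncated Taylor series: the sum over exponent triples
    (a, b, c) of theta[a, b, c] * x**a * r**b * w**c, where x is presynaptic
    activity, r is reward, and w is the current weight. A coefficient
    indexed "001" (a=0, b=0, c=1) is a pure weight-decay/forgetting term."""
    dw = 0.0
    for a in range(degree + 1):
        for b in range(degree + 1):
            for c in range(degree + 1):
                dw += theta[a, b, c] * x**a * r**b * w**c
    return dw

# Hypothetical rule: only a forgetting term and a reward-modulated
# Hebbian-like term are non-zero.
theta = np.zeros((3, 3, 3))
theta[0, 0, 1] = -0.1   # "001": dw contains -0.1 * w
theta[1, 1, 0] = 0.5    # "110": dw contains 0.5 * x * r

# 0.5*1*1 - 0.1*2, i.e. approximately 0.3 up to float rounding
print(plasticity_update(theta, x=1.0, r=1.0, w=2.0))
```

Because the update is linear in `theta`, fitting reduces to estimating these coefficients from observed trajectories, which is the setting the rebuttal argues is robust to noise and sparsity.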
Summary: This paper studies how to learn local learning rules (like the plasticity rules thought to be used by real neurons) in a data-driven way from neural activity or behavioral time series. This is applied to simulated as well as fly behavioral data. The true learning rules can be learned from synthetic activity time series when complete observations are available (Fig 2), whereas from behavior alone some more error is incurred (Fig 3). When using fly behavioral data, the authors get better fits than previous methods (Fig 4), and a claim is made that the "forgetting term" in the resulting fit is important. Strengths: The paper studies an interesting inference problem which is relevant to both AI and neuroscience. It is well-written with a discussion of some limitations. It studies both synthetic and real data. I found the idea of trying to fit this kind of model to behavior interesting. I wouldn't have thought that it would be possible, to be honest. Weaknesses: I think the main weakness is what I identified below in limitations: the degeneracy or non-identifiability of such models. I can imagine that even in settings where the equations are the correct model, it is not identifiable without some regularization. The model fits to behavioral data are given as R^2 values on the behavior itself. It does not seem that any cross-validation was used for model fitting, so there is a possibility that your model is overfitting. This should be discussed. I'll admit I'm not super familiar with existing work in this area, so I am not sure how novel these results are relative to what's been tried before. Technical Quality: 3 Clarity: 4 Questions for Authors: * What justification do you have for claiming that your model requires less energy than an LLM of the same size? (Line 45-46) * Can you be precise about which term in Fig 4C is the decay term? I think you mean 000. It isn't evident whether that is significantly different from 0. The error bars are quite wide. 
* Have the authors considered the vast literature on fitting dynamics themselves from time series? I am familiar with the SINDy method, which uses polynomial or other families of basis functions and includes a LASSO-type penalty to fit dynamics. It is likely that some regularization or penalty could improve the learned models in your setting, too. (I now see this buried in the appendix, but it should be discussed in the main text.) * Please discuss why you did or didn't use cross-validation Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: I think it's a stretch to claim that the underlying connectivity (presence of nonzeros in the weight matrix) is known because of connectomics (lines 80-82). That may be true for some small model organisms like Drosophila, and only just, but it is nowhere near true for others such as the mouse. The authors should be careful about making claims of identifiability with models such as theirs. For instance, it's known that incomplete knowledge of the connectome or partial observations of dynamics can lead to spurious correlations. An example that comes to mind is https://journals.aps.org/pre/abstract/10.1103/PhysRevE.109.044404 . Your experiments actually show this non-identifiability already: in Figure 2G we see that the Oja rule is not recovered when the observations are incomplete. There is a brief discussion of this "degeneracy" in the conclusions and limitations section, but I think it is a bigger issue than the authors are claiming. In particular, the conjecture that "in the infinite data limit the fitted plasticity rules would, in fact, be unique" could easily be wrong. It is certainly not a useful argument to the experimentalist who will always have limited data, especially if it's behavioral or brain recording data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer 5Cze Dear Reviewer, Thank you for your careful reading of our work and insightful comments. We are glad that you found the paper to be “interesting,” and “relevant.” We address each of the comments mentioned in your Weaknesses, Questions, and Limitations sections below. ## Weaknesses 1. We have tried to distinguish between a model being non-identifiable, by which we mean that its parameters are impossible to uniquely infer, and sloppy, by which we mean that its parameters are poorly constrained by available data. 2. Regarding sloppiness, we agree that it could be hard to estimate the parameters of the plasticity rule in practice given finite data sets. Multiple terms in the plasticity rule may lead to similar weight changes. Nevertheless, our paper shows that it is empirically possible to infer plasticity rules given reasonable amounts of simulated data. It also identifies a biologically interpretable but previously unknown term in the plasticity rule of the fly mushroom body. This term would not appear consistently in the statistical fits of individual flies if it was too sloppy to be determined. Simply, our method is already useful, regardless of whether it can detect the sloppiest components of plasticity rules. 3. Regarding identifiability, if one could measure every term that enters the plasticity rule, then the identifiability of the plasticity rule would follow from the uniqueness of Taylor series expansions. Since weights are assumed to be unmeasured, then it’s in principle possible that some plasticity rules may be non-identifiable. This seems unlikely to us, because each component of the plasticity rule contributes to weight changes that measurably affect the postsynaptic activity. Nevertheless, we acknowledge that we don’t have a formal mathematical proof that the plasticity rule is identifiable with unmeasured synaptic weights. 4. 
In our opinion, identifiability is less relevant than sloppiness, and sloppiness is less relevant than our empirical demonstrations. Our method already works well enough to be applied fruitfully to simulated and real-world problems. 5. We acknowledge the reviewer's concern regarding the potential for overfitting due to the absence of cross-validation. We want to clarify that in our simulated experiments we consistently utilized a separate held-out test set to evaluate model performance - we have clarified this in the main text. However, we recognize the value of incorporating cross-validation for fitting the fly experimental data. If the paper is accepted for publication, we will implement a cross-validation procedure and include it in the final version of our manuscript. We agree with the reviewer that this would substantially enhance the robustness of our model evaluation. ## Questions 1. We appreciate the reviewer's observation and have removed the statement about energy implications for training LLMs, as upon further consideration the point seemed tangential to the goals of our paper. 2. We have added text to the caption of Fig 4C to clarify that parameter 001 corresponds to the weight decay term, and have listed what each of the terms mean. The 000 term is a bias. 3. We thank the reviewer for pointing us to the literature that focuses on fitting dynamical models from time-series data, especially the idea that regularization penalties can substantially improve model performance. As the reviewer pointed out, we have begun exploring the L1 regularization penalty in our model, and we are familiar with the related SINDy approach. If accepted, we will incorporate a more thorough investigation of the impact of regularization on model performance, and we will move the relevant text discussing this from the Appendix to the main manuscript. 4. We have already addressed this concern in the “Weaknesses” section of our response. ## Limitations 1. 
The reviewer’s comment regarding connectomics is addressed in the general response to all reviewers. 2. We have already discussed identifiability and sloppiness in our response to the Reviewer’s “Weaknesses” comments. Here the Reviewer additionally brings up “model mismatch,” by which we mean that best-fit model parameters are not directly interpretable if the model parameters are not in accordance with the underlying biology. We agree that this is very often an issue with model fitting, and we certainly don’t claim to have a solution to this general problem. We don’t currently see anything that we could add to the study that would alleviate this concern, and our approach has instead been to illustrate related failure modes. Indeed, the Reviewer points to our figures to illustrate the point, which we think indicates that we’ve given it fair treatment. We are happy to make additional revisions if the Reviewer could clarify what they find problematic about our treatment. The quoted text that the Reviewer provides is about “identifiability,” not “model mismatch,” and we think it is important to keep the related issues of identifiability, sloppiness, and model mismatch distinct as these issues are fundamentally different. --- Rebuttal Comment 1.1: Comment: Thanks for your responses. I think that if you address these comments in sufficient detail in the final version, that is sufficient. I am going to adjust my overall score to a 7 and soundness score to a 3.
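The Taylor-series parameterization of the plasticity rule discussed in this rebuttal (a low-order polynomial in presynaptic activity, postsynaptic activity, and the current weight; the full model also includes reward as a fourth argument) can be sketched as follows. The coefficient layout, learning rate, and the Oja-rule example values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def taylor_plasticity(pre, post, w, theta, order=2):
    """Weight update dw = sum_{a,b,c} theta[a,b,c] * pre^a * post^b * w^c."""
    dw = 0.0
    for a in range(order + 1):
        for b in range(order + 1):
            for c in range(order + 1):
                dw += theta[a, b, c] * pre**a * post**b * w**c
    return dw

# Oja's rule, dw = eta * (pre*post - post^2 * w), corresponds to exactly
# two nonzero Taylor coefficients (hypothetical indexing convention):
eta = 0.1
theta = np.zeros((3, 3, 3))
theta[1, 1, 0] = eta    # pre * post term
theta[0, 2, 1] = -eta   # -post^2 * w term

dw = taylor_plasticity(pre=1.0, post=0.5, w=0.2, theta=theta)
# equals eta * (1.0*0.5 - 0.25*0.2) = 0.1 * 0.45 = 0.045
```

Fitting such a rule amounts to estimating the coefficient tensor `theta` from trajectories, which is why a recovered rule can be read off directly from which coefficients are nonzero.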
Summary: This paper presents a reward learning-based method for recovering biological synaptic plasticity rules in a model network. It is meant to be applied to both neural and behavioral data. The method consists of taking real learning trajectories collected in response to stimuli (in some observable modality) and showing those stimuli to the model artificial neural network, then computing the loss between the generated and modeled trajectories and taking a gradient step in the direction of the ground truth data. The weight update is determined by some plasticity function, represented by either a Taylor expansion or an MLP, and also learned over the course of training according to the trajectory loss. After training the model network, various plasticity rules are tested by analyzing the learned parameter values of the plasticity function. After establishing the method, the paper presents specific experimental settings and results. It begins with a simpler case of simulated neural activation data with plasticity dynamics based on Oja's rule, MSE loss between the two trajectories, and a Taylor series plasticity function. The paper shows that Oja's rule can be recovered - it is seen clearly in the learned parameters of the Taylor series plasticity function. Ablations are also conducted over increasing sparsity and noise, showing degradation. The paper then goes on to a simulated behavioral data setting with a small MLP representing the plasticity function. Here, it establishes percent deviance explained between the ground truth and modeled trajectories as a performance metric. Finally, it demonstrates some proof of concept in actual drosophila behavioral data, including a forgetting mechanism (negative dependency on the weights, not just the reward signal and presynaptic activations). The paper closes with a review of related work. 
Strengths: ### Originality As far as I am aware, this particular reward-based training setup with an organism model and a separately trained plasticity model is novel. ### Significance If viable, this would be a very useful work for modeling. This is an initial exploration, but the significance of the results is nontrivial - it opens up a research direction. ### Quality - Training setup is creative! It would be nice to know how well this scales, as trajectory-based sequence learning is difficult. - Experimental settings are thoughtful and elucidating - The toy example does an especially good job of helping the reader build intuition - Showing that synaptic trajectory error improves over time even though training is done with neuronal trajectories - this is clever and convincing - Real drosophila experiment is exciting - Analysis of plasticity functions, how they show up in parametrization, and what that means, is very interesting ### Clarity - The Results section has a claims-driven structure that is helpful for understanding takeaways - Methods are very clear Weaknesses: ### Clarity - Paper is very jargon-heavy without defining niche terms and having clear conclusions in paragraphs. It would help to have simpler language and/or more explanation - Percent deviance explained - we don't get grounding or baselines to understand how to judge these results. Are there baseline methods, or any kind of comparison point/context? ### Quality - It's not clear how far this method, or even the principles established in the paper, can take us. - The space of tested plasticity rules seems hand-designed and limited. Rules are rejected only when they can be clearly defined and tested. - Percent deviance explained results seem to leave plenty of room for improvement - Test setups are quite simple. 
This is mitigated highly by the presence of real drosophila behavior data - that experiment is really useful - but still, simple and limited, particularly in the underlying dynamics - The paper argues that it is "reasonable" to assume that we can have the modeling network architecture exactly match the true architecture, because of available connectome data. Connectomes are static and extremely detailed, and there are lots of elements (e.g. immense recurrence) that we do not have architectures for. This is again very much improved by the real drosophila data experiment, but it is nevertheless concerning. - Underlying plasticity rules in synthetic settings are very simple, even in the case being learned by an MLP. The drosophila behavioral experiment helps because we see the discovery of the forgetting rule, but we don't have any sense of a full space to look for, or even a partial but extensive space to look for (things that both should and shouldn't show up). Even the "forgetting" rule discovery is believable but requires a lot of interpretation of results. - Ablations aren't really discussed. Noise and sparsity make sense as ablation factors, but what do the results mean beyond just "noise and sparsity cause problems"? Especially because their effects are strangely similar. Technical Quality: 3 Clarity: 3 Questions for Authors: - How should I ground my judgement of the percent deviance explained results? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Limitations - yes Societal impact - they include the NeurIPS checklist in their supplementary and say the societal impacts have been addressed, but they haven't anywhere, nor is there an assertion that they don't need to. They probably don't, though. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer 2Jtn Thank you for your constructive feedback on our manuscript. We are glad that you found our work to be “creative,” and “clever and convincing.” Below, we address each of the concerns you expressed in your review, and hope that you will find our revised manuscript to be improved. ## Clarity 1. We have added conclusions to paragraphs where they were missing and simplified language that was jargon-filled. These changes are highlighted in red. To help readers understand any remaining specialized language, we added a Glossary before the Appendix. If the Reviewer alerts us to additional terms that they found to be jargon, we would be happy to either eliminate them or add them to our Glossary depending on their utility. 2. (Also the response for Question 1) One should interpret the percent deviance explained similarly to the R-squared metric. A value of 0% corresponds to chance performance and a value of 100% corresponds to theoretically optimal performance. It’s useful to our paper because it doesn’t assume continuous outputs or Gaussian noise. It thus applies to binary behavior more naturally than the R-squared metric does. A definition of this metric is provided in Appendix section A3 and we have added a reference to this in the text. We apologize that this reference previously pointed to an incorrect section of the paper. ## Quality 1. We agree that the space of plasticity rules is hard to specify and sample in its entirety. We’d like to clarify two related issues. - First, we acknowledge that when choosing plasticity rules to simulate ground-truth data, we emphasized canonical models that have been “hand-designed” by the computational neuroscience community due to their properties and biological relevance. Their simplicity reflects the current state of understanding in the field. However, we also simulated a large number of other rules in the Appendix (Table 2) excluding the unstable rules. 
We chose this naive sampling of plasticity rules because the field does not yet have the knowledge to know what class of rules is most relevant. Given this, a better approach was not clear to us, as the space of simulated rules cannot be sampled exhaustively. - Second, we’d like to note that our framework and fitting procedures were designed with the goal of leveraging large-scale biological data to help move beyond hand-designed rules. Our paper’s main assumption is that the plasticity rule is a function of only presynaptic activity, postsynaptic activity, reward, and the current weight value. Our fitting procedures are designed to estimate this unknown function from data. There are multiple ways to parameterize functions, and we consider two parameterizations with complementary strengths. First, we use a low-order polynomial, which can be taken as estimating the Taylor series for the function and is easy to interpret as it relates simply to the canonical rules. Second, we use a multilayer perceptron, which sacrifices interpretability for expressivity. In either case, we’re providing a general parameterization that can reveal unexpected results when fit to biological data. 2. The Reviewer is correct that our method sometimes infers plasticity rules that poorly approximate the weight, activity, and/or behavioral trajectories (Table 2). This isn’t surprising because plasticity rules are nonlinear dynamical systems, and their predictions may show sensitivity to parameters and initial conditions. We would contend that no inference method will be able to perfectly recover an arbitrary plasticity rule. The intention of Table 2 is to merely illustrate the range of possibilities. Nevertheless, we hypothesize that the rules implemented by biology are less unwieldy than some possibilities in Table 2. Indeed, our method works very reliably on the canonical rules that were designed to capture biological phenomena. 
We hope that using our framework to fit real data will help us to identify novel, well-behaved rules without necessitating laborious hand design. 3. As explained above, we acknowledge that we’ve emphasized tests against simple, plausible learning rules. However, the suite of learning rules in Table 2 include a wide range of ground-truth plasticity rules with nonlinear (polynomial) dependencies among the terms. We agree that applying our method to real data is a productive path forward, and found it very encouraging that the method already uncovered something novel and interpretable in the Drosophila data. We anticipate more discoveries with richer datasets and models. 4. The reviewer’s comment regarding connectomics is addressed in the general response to reviewers. 5. We believe this comment on the simplicity of the synthetic rules has been adequately addressed by our earlier responses. 6. We have added text to Section 3.1 to clarify the interpretation of our choice of ablation experiments. This was guided by the kinds of incomplete information we would likely deal with when applying our model to biological data. Often neural recordings are noisy and even the most state-of-the-art recording tools suffer from not being able to record entire populations of interest. Our ablation experiments quantify how resilient our approach is to these sources of error that are often present in real data. ## Limitations 1. We have changed our response to the societal impact question of the NeurIPS checklist and provide a justification for why this is not discussed in the main text. While we have discussed our paper’s impact within the field of neuroscience, the larger societal impact will likely only be revealed as the method is used more widely to understand specific neural systems and their plasticity rules. As an immediate societal impact is exceedingly unlikely, we did not dedicate a section to discuss these issues. 2. 
A description about the scalability of our approach has been provided in the general rebuttal to all reviewers. --- Rebuttal Comment 1.1: Title: Increasing score to 7 Comment: Thanks for the detailed comments. The main thing imo is that you've successfully argued that the limitations of the paper are more reasonable and field-standard than I was aware of. Increasing my score to a 7.
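The percent-deviance-explained metric referenced in this exchange (0% for a chance-level model, 100% for a theoretically optimal one, applied naturally to binary behavior) can be sketched roughly as follows. The intercept-only null model and the clipping constant are assumptions here; the paper's exact definition is in its Appendix A3.

```python
import numpy as np

def bernoulli_deviance(y, p, eps=1e-9):
    """Deviance = -2 * Bernoulli log-likelihood of observations y under probabilities p."""
    p = np.clip(p, eps, 1 - eps)
    return -2.0 * np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def percent_deviance_explained(y, p_model):
    """0% for an intercept-only (chance) model, 100% for a perfect fit."""
    p_null = np.full_like(p_model, y.mean())  # assumed null model: overall choice rate
    return 100.0 * (1.0 - bernoulli_deviance(y, p_model) / bernoulli_deviance(y, p_null))

# Toy binary behavior: a model that assigns high probability to observed choices
y = np.array([1.0, 0, 1, 1, 0, 1])
pde = percent_deviance_explained(y, np.array([0.9, 0.1, 0.8, 0.85, 0.2, 0.9]))
```

Unlike R-squared, this construction makes no Gaussian-noise assumption, which is why it grounds comparisons for binary behavioral trajectories.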
null
null
Rebuttal 1: Rebuttal: # General response to reviewers We appreciate the thoughtful and constructive feedback provided by all reviewers. We have revised our manuscript based on this. This rebuttal addresses two key points relevant to all reviewers. ## **1. Scalability of the method** In response to Reviewer 2Jtn's interest in knowing "how well this scales, as trajectory-based sequence learning is difficult", we share the following hot-off-the-press *preliminary* results. Our primary results (Figures 3, 4, Table 1, and supplementary materials) used a trajectory length of 240 and a [2-10-1] neural network architecture. To address scalability, we conducted additional analyses varying both trajectory length and hidden layer size. These analyses used simulated data with a ground truth rule of x·(r − E(r)) on behavioral data, employing a Taylor series plasticity function parameterization. Results were averaged over three seeds for robustness. It's noteworthy that for a [2-1000-1] architecture, our method performs backpropagation through time over 2000 synapses across 240 time points, with each synapse updated at every time point following the parameterized Taylor expansion. This demonstrates that computational complexity increases with both network size and trajectory length. ### **A) Scalability with trajectory length** | Trajectory length | 30 | 60 | 120 | 240 | 480 | 960 | 1920 | |-------------------|------|------|------|------|------|------|------| | R2 Activity | 0.92 | 0.91 | 0.91 | 0.94 | 0.95 | 0.87 | 0.92 | | R2 Weights | 0.79 | 0.74 | 0.70 | 0.78 | 0.72 | 0.53 | 0.64 | | Percent Deviance | 39.52| 40.14| 48.06| 61.91| 76.90| 73.78| 79.66| The model's goodness-of-fit generally improved with longer simulations, likely due to more data points for inferring the plasticity rule. However, R-squared values for activity and weights peaked before declining, suggesting potential overfitting on very long trajectories. 
We plan to add cross-validation analysis in the final manuscript if accepted. The current trajectory length (240) appears near optimal for R-squared values, mitigating overfitting concerns for the main results. ### **B) Scalability with network architecture** Our primary findings use a [2-10-1] architecture (20 synapses updated at every time point). We've demonstrated that the framework scales to 1000 hidden units (2000 synapses). | Hidden Layer Size | 10 | 50 | 100 | 500 | 1000 | |-------------------|-------|-------|-------|-------|-------| | R2 Activity | 0.94 | 0.94 | 0.95 | 0.95 | 0.95 | | R2 Weights | 0.78 | 0.75 | 0.79 | 0.79 | 0.79 | | Percent Deviance | 61.91 | 62.29 | 62.25 | 62.27 | 62.26 | Model performance remains consistent when scaling to larger synapse counts, assuming the same plasticity rule is applied. We are currently conducting additional experiments on scalability with respect to the parameters in the plasticity function, focusing on: 1. Plasticity MLP complexity (size) 2. Number of terms in the Taylor expansion (up to 3^4) These findings will be included in the camera-ready version of the manuscript, if accepted. ## **2. Connectomics and model architecture** We acknowledge the reviewers' correct observation that connectomes miss important architectural information and are most complete only in small model organisms. We will revise the text to avoid overstating the relevance of this data for larger brains. Despite these limitations, several research groups have successfully used connectomics to build biologically realistic network models by parameterizing and fitting the most important unknown quantities [1-3]. For example, while connectomes provide synapse counts but not weights, accurate neural network models have been built by fitting cell-type-specific scale factors that convert synapse counts to weights. Dynamical neural network models could be similarly designed to include synaptic plasticity: 1. 
The connectome could be used to infer an adjacency matrix at the level of cell types or single cells. 2. Plasticity rules could be parameterized, as in this paper, and fit to neural/behavioral dynamics. 3. The probability distribution of synapse counts could be used to constrain the probability distribution of synaptic weights, providing constraints on the plasticity rule. We have two main reasons for assuming it's reasonable to match the architecture to connectomics data: 1. In Section 3, our intention was to demonstrate that our approach can solve the inference problem in a setting where the predictive and generative network architectures matched. This scenario is indeed aspired to by fly neuroscientists in the era of connectomics. As the reviewers point out, we move away from this assumption when modeling the behavioral data in Sections 4 and 5. 2. We argue that the available connectomic information is sufficiently rich to allow us to construct neural networks similar to the structure of brain regions of interest. In Section 5, we show that such a mushroom-body-inspired architecture allows us to infer rules that agree with predominant ideas in the field and contribute unique and interpretable additions to the knowledge base. We have added language to clarify that while we believe using connectomic information to design model architecture is reasonable, mismatches in generative and model architectures can lead to errors in interpretation. These should be taken into consideration, and we have added a citation to the reference suggested by reviewer 5Cze. **References** [1] Lappalainen et al. *Connectome-constrained deep mechanistic networks predict neural responses across the fly visual system at single-neuron resolution*, (bioRxiv, 2023) [2] Mi et al. *Connectome-constrained Latent Variable Model of Whole-Brain Neural Activity*, (ICLR, 2021) [3] Beiran et al. *Prediction of neural activity in connectome-constrained recurrent networks*, (bioRxiv, 2024)
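The ground-truth rule used in the scalability experiments above, Δw = x·(r − E(r)), can be illustrated with a minimal simulation of one weight trajectory. The learning rate, toy reward signal, and running-average estimate of E(r) below are assumptions for illustration only, not the rebuttal's actual experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, T, n_syn = 0.05, 240, 20   # learning rate, trajectory length, synapse count (assumed)
w = np.zeros(n_syn)
r_bar = 0.0                      # running estimate of expected reward E(r)

for t in range(T):
    x = rng.random(n_syn)        # presynaptic activity at time t
    r = float(x.mean() > 0.5)    # toy binary reward signal
    w += eta * x * (r - r_bar)   # reward-prediction-error-modulated update
    r_bar += 0.1 * (r - r_bar)   # exponential moving average of reward
```

During fitting, this update would be replaced by the parameterized plasticity function and the whole trajectory differentiated through, which is the backpropagation-through-time cost the rebuttal quantifies.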
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Learning Successor Features the Simple Way
Accept (poster)
Summary: This paper presents a method for learning Successor Features (SFs) from pixel-level observations in reinforcement learning (RL) by combining a Temporal-Difference (TD) loss with a reward prediction loss. This approach simplifies the learning process, improves performance, and speeds up learning compared to existing methods. Strengths: The proposed method is simple and easy to implement, which is a significant advantage in practical applications. The effectiveness of the approach is demonstrated in both simple 2D and 3D environments. Weaknesses: 1. The writing quality of the paper could be improved. The Introduction section reads more like an extensive review of related work rather than setting the context for the proposed method. 2. The first three subplots in Figure 1 are difficult to understand without detailed background information. It is recommended to move these figures to the experimental section. 3. The statement in lines 61 to 62, "without any of the drawbacks," seems too absolute and should be toned down. 4. The experiments only demonstrate effectiveness in a few simple 2D and 3D environments. To further validate the proposed method, it is recommended to test in more complex environments, such as Atari games, similar to APS. 5. The experiments only tested the method with DQN, leaving its effectiveness with other RL methods unknown. Technical Quality: 3 Clarity: 2 Questions for Authors: Have the authors tested the proposed method with other RL algorithms and in more complex environments? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: In addition to the limitations mentioned by the authors in the paper, please refer to my comments in the Weaknesses section for further details. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate the opportunity to clarify and enhance our manuscript based on your observations. Please let us know if there is further clarification we can provide. # 1. Balancing Context and Review of Related Work Thank you for your feedback on the structure of our introduction. We will revise it to better balance the context setting with the review of related work, enhancing both readability and clarity. In response, we will incorporate the following into the introduction section of our manuscript: Successor Features (SFs) are crucial in continual RL for decoupling environmental dynamics from rewards. Yet, current SF implementations often face representation collapse when learning from pixels, due to reliance on predefined assumptions and extensive pre-training. Our approach addresses these limitations by integrating an innovative neural network architecture that enhances computational efficiency and scalability. We have validated our method with experiments in 2D and 3D mazes and the Mujoco environment (added during the rebuttal phase), detailed in **Figure 1 of the General Response (GR)**. Our findings show enhanced learning efficiency and adaptability, proving our model's broad applicability in RL scenarios. # 2. Placement and Purpose of Representation Collapse Analysis Plots in Figure 1. Thank you for your feedback on the placement of the representation collapse analysis plots in Figure 1. We positioned these plots early in the manuscript to establish the central motivation of our research. While representation collapse is a well-known issue in Machine Learning, its empirical analysis within the context of SFs is a novel aspect of our work, warranting prominent placement to set the stage for the discussions that follow. Introducing these plots at the beginning ensures that readers immediately understand the significance of the challenge we are addressing. 
This approach supports a cohesive narrative by linking the theoretical motivations directly with our proposed solutions and experimental validations. Moving these plots to a later section, such as the experimental results, could disconnect them from their theoretical context and reduce their impact on framing the research problem. # 3. The Exclusion of Certain Drawbacks in Our Method We appreciate the chance to clarify the use of “without any of the drawbacks” in our manuscript. To address your concern, we will amend the phrase to “without some of the drawbacks.” # 4. Further evaluation in complex environments Thank you for the suggestion to use Atari benchmarks. While APS's Atari setup involves pre-training and fine-tuning within environments that do not vary in features and reward functions, it is less suitable for assessing continual learning capabilities. Instead, we opted for a comprehensive evaluation in **Mujoco, utilizing pixel-based observations** [6], which further *demonstrates our model's capabilities with continuous actions*. We started in the half-cheetah domain, rewarding agents for running forward in Task 1. For Task 2, we introduced scenarios with running backwards, running faster, and switching to the walker domain. These are detailed in **Figure 1 in the GR**. Across all scenarios, our model not only maintained high performance but consistently outperformed all baselines in both Task 1 and Task 2, highlighting its superior adaptability and effectiveness in complex environments. This contrasted sharply with other SF-related baseline models, which struggled to adapt under these conditions. # 5. Exclusivity to DQN Thank you for your comment on the scope of our experiments. There are two reasons why we chose the baselines that we did. **First, we did not only compare to DQN, of course, but to several other techniques for learning SFs.** We did this because the focus of our work is learning SFs. 
Therefore, we selected other techniques for learning SFs as our primary baselines, such as with reconstruction or orthogonality constraints. Thus, we are comparing our approach to several other techniques, namely, those that are most relevant for the question of learning SFs. **Second, we chose DQN as a non-SF baseline because of its direct relation to the mathematical definition of SFs and Q-values, a common practice in SFs literature** [1,2,4,5]. This choice helps clarify the specific contributions of our approach in the context of well-understood benchmarks like DQN and DDPG [7]. Moreover, **our primary goal was to develop a straightforward method for learning SFs, not to conduct a comprehensive benchmark across various RL algorithms.** More complex algorithms do not always lead to better performance, especially in settings with pixel-based observations, as shown in comparisons within the Mujoco environment where *simpler algorithms like DDPG often outperform more complex ones like SAC* [8] (see Figure 9a in [6]). While we acknowledge the value of broadening our evaluations to include a wider array of RL algorithms, our focus was on demonstrating the efficacy of our SF learning approach. Exploring performance with additional algorithms remains an important future research direction to enhance the generalizability of our findings. [1] Machado et al., 2020. Count-based exploration with the successor representation. [2] Ma, et al., 2020. Universal successor features for transfer reinforcement learning. [3] Touati et al., 2023. Does zero-shot reinforcement learning exist? [4] Janz et al., 2019. Successor uncertainties: exploration and uncertainty in temporal difference learning. [5] Barreto et al., 2017. Successor features for transfer in reinforcement learning. [6] Yarats et al., 2021. Mastering visual continuous control: Improved data-augmented reinforcement learning. [7] Lillicrap et al., 2015. Continuous control with deep reinforcement learning. 
[8] Haarnoja et al. 2018. Soft actor-critic algorithms and applications. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their answers to my questions and the additional experiments. This has helped me better understand the work. I am currently inclined to accept this paper and maintain my current scores. --- Rebuttal 2: Title: Response to Reviewer nkgZ Comment: We are very pleased to hear that our response helped to answer the reviewer’s questions, and that the reviewer is inclined to accept our paper. Given this, we wonder if the reviewer would be willing to raise their score to reflect the fact that we addressed their questions and put the score more clearly in the “accept” range. --- Rebuttal Comment 2.1: Title: Final Day Reminder: Clarifying Concerns and Updating Scores Comment: Dear Reviewer nkgZ, We hope this message finds you well. As today is the final day for the review discussion, We would like to kindly check in to see if our latest response has addressed your concerns. If the clarifications provided have resolved your questions, we would greatly appreciate it if you could update your score accordingly. Thank you once again for your time and thoughtful feedback throughout this process. Your input has been invaluable, and we look forward to hearing from you soon.
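The SF decomposition this exchange builds on, Q(s,a) = ψ(s,a)·w with ψ trained by a TD loss toward φ(s') + γ·ψ(s',a') and w anchored by a reward-prediction loss, can be sketched numerically as follows. The feature dimension, toy values, and exact loss forms are illustrative assumptions rather than the paper's architecture.

```python
import numpy as np

d = 4            # assumed feature dimension
gamma = 0.9
phi_s, phi_next = np.ones(d), np.ones(d)    # basis features for s and s'
psi, psi_next = np.zeros(d), np.zeros(d)    # successor features for (s,a) and (s',a')
w = np.full(d, 0.25)                        # task vector

# TD target for the successor features (the SF analog of the Bellman target)
sf_target = phi_next + gamma * psi_next
td_loss = float(np.mean((sf_target - psi) ** 2))

# Reward-prediction loss anchoring w (and phi) to the observed reward r
r = 1.0
reward_loss = float((phi_s @ w - r) ** 2)

q_value = float(psi @ w)   # Q recovered from the decomposition
```

Because rewards enter only through w, swapping the task vector while keeping ψ fixed is what gives SF methods their transfer behavior in the continual-learning comparisons discussed here.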
Summary: The paper proposes a simpler method to learn Successor Features that avoids representational collapse. For this, the authors decompose the loss function to learn the successor features and task encoding separately. This allows for keeping the basis features fixed while learning the successor features, thus avoiding representational collapse. The experiments involve a continual learning scenario where robustness to task changes is evaluated. The authors show that the method can better adapt to changing tasks. Strengths: - The problem is well motivated and the approach offers a simple solution to representational collapse when learning deep successor features. - The authors provided a number of insightful ablations. Especially that reconstruction based SF methods have trouble learning a good representation for fully observed settings. - The writing is clear and the method is presented in an understandable manner. Weaknesses: - The tested environments seem to be perhaps too simple for comparison both from the representational and task difficulty perspective, since DQN also has very good relearning capabilities in these environments. Why is it that DQN is better than the successor feature counterparts for the continual learning setting? This seems counterintuitive to me since successor features should be more robust than pure DQN. - The presentation of the figures has issues. Some Figures are pixelated, i.e., not vector graphics. (e.g., Figure 4 or other environment Figures). Also I think Figure 1 could be split into 2 figures for better readability. Technical Quality: 3 Clarity: 2 Questions for Authors: - Should the method not be tested on environments where DQN itself cannot adapt to the new tasks at all? I wonder if the simple approach still holds when the transition dynamics become more complicated or the observations are more noisy. 
- For the Minigrid environment: Do you learn the successor features from pixels or do you use the built-in symbolic state representations? - Since DQN is also robust for the showed environments, I wonder how reward sparsity affects the performance of the different algorithms? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have adequately addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate the opportunity to clarify and enhance our manuscript based on your observations. Please let us know if there is further clarification we can provide. # 1. Is DQN better than the SFs in Continual RL setting? Thank you for your observations regarding the experimental results in Figures 2 and 3. While average episode returns offer quick performance insights, they do not fully capture the long-term benefits of our model. Thus, we also analyzed cumulative total returns across all tasks, as shown in **Figure 2 of the general response (GR)**. These results confirm that our model quickly learns and maintains effective policies, especially in complex 3D environments where tasks recur (**Figure 2c-d in GR**). Our model significantly outperformed the baseline in cumulative returns, demonstrating its robustness and superior transfer capabilities compared to DQN, which showed little to no transfer effects and needed to re-learn tasks. We will include these results in our manuscript to more comprehensively demonstrate our model's effectiveness in continual learning settings. # 2. Figure 4 Quality Thank you for your feedback on the graphics in our figures. While all our figures are created with vector-based graphics for high resolution and scalability, Figure 4 is an exception. It uses pixel-based graphics to **accurately reflect the native format of the RL environments** and the inputs our models process. # 3. Figure 1 We appreciate the suggestion to split Figure 1 to enhance readability. Acknowledging the density of the current figure, we will implement several modifications: 1. **Simplification**: We'll remove the loss functions from Figures 1d and 1e, with detailed descriptions retained in Appendix E and the main text, respectively. This will help focus attention on the structural content. 2. 
**Reorganization**: Figure 1d will be moved to the Appendix as it primarily presents common approaches rather than our novel contributions, ensuring the main text remains focused on our work. 3. **Relabeling and Relocation**: Figure 1e will be renamed as Figure 2 and relocated closer to Sections 4 and 5 where it is first mentioned, aligning it more closely with its textual references and enhancing narrative coherence. 4. **Visual Guidance Enhancements**: We will replace terms like “Q-SF-TD loss” with “$L_\psi$: Q-SF-TD loss” and introduce color-coded information to improve figure-text integration such as, “Pixel-level observations, $S_t$​, are processed by a convolutional encoder to produce a latent representation $h(S_t)$, which is used to construct the basis features (*indicated by a yellow box in Figure 2*) and the SFs (*indicated by a green box in Figure 2*).” We hope these changes will streamline the presentation and ensure the figures more effectively complement the text. # 4. Complex and noise in environments Thank you for your comment on our model's effectiveness in complex, noisy environments. Firstly, our model's resilience to noise was proven in the “3D Slippery Four Rooms environment” (Section 6.1.3), where agents faced altered actions in Task 2. The results (Figure 3) demonstrate our model's superior robustness to induced stochasticity compared to baselines. Secondly, **we expanded our evaluation during the rebuttal phase to include the Mujoco environments**, using pixel-based observations and accounting for *continuous action spaces*. Following the setup in [1], we tested in scenarios like running backwards, running faster, and a major switch from the half-cheetah to the walker domain in Task 2. The outcomes (**Figure 1 in GR**) show our model consistently outperforming baselines across all scenarios, thereby showcasing its adaptability and effectiveness in more complex settings. 
These results affirm our model's advanced capability to robustly handle diverse and challenging environments, making it highly suitable for practical applications with complex dynamics and significant noise. # 5. Pixel or Symbolic State Observations for SFs? Thank you for your question regarding the input modalities for Successor Features. In our work, **we exclusively use pixel observations across all experiments**. This choice is intentional, addressing a significant challenge in the field—the direct learning of Successor Features from high-dimensional sensory inputs such as pixels, which, as noted in [2], have historically posed difficulties for conventional methods and remain underexplored in the Successor Features literature [3, 4]. # 6. Sparse Rewards Thank you for your question regarding sparse rewards. Like other DQN-based methods, our approach may face challenges in environments with sparse rewards, a recognized issue with bootstrapped learning methods. While our method is tailored for continual reinforcement learning, it is not specifically designed to address sparse rewards. We acknowledge the need for mechanisms to better manage sparse rewards. Recent findings suggest that reconstruction-based objectives do not always capture task-relevant features effectively in such settings [5]. Integrating techniques that generate intrinsic rewards could help by providing more frequent learning signals. However, exploring these techniques further is beyond the current scope of our work. Our primary focus remains on demonstrating the viability of our approach in typical continual learning environments, laying the groundwork for future research to more comprehensively tackle the challenges of sparse rewards. [1] Yarats et al., 2021. Mastering visual continuous control: Improved data-augmented reinforcement learning. [2] Machado et al., 2020. Count-based exploration with the successor representation. [3] Ma et al., 2020.
Universal successor features for transfer reinforcement learning. [4] Touati et al., 2023. Does zero-shot reinforcement learning exist? [5] Balestriero, 2024. Learning by Reconstruction Produces Uninformative Features For Perception. --- Rebuttal Comment 1.1: Comment: Thank you for your additional ablations and experiments! * I still feel it is somewhat strange that in Figure 2, DQN is still outperforming other SR methods. At some point during task change it even outperforms your proposed method. I feel the environments don't demonstrate precisely the effectiveness of your method, when DQN is outperforming other SR methods. * The results regarding continual RL are encouraging. I will keep my score, but increase my confidence to 4. --- Rebuttal 2: Title: Response to Reviewer Tczf Comment: Thank you for taking the time to review our rebuttal. We sincerely appreciate your thoughtful comments and are glad to have the opportunity to provide further clarifications. Please don’t hesitate to reach out if you have any additional questions or concerns. We appreciate your observation, but **we are unclear which specific plot in Figure 2 you are referring to**, as in all Continual RL plots (Figures 2e to 2g in our paper), our approach (orange) consistently outperforms DQN (blue). Additionally, if you refer to the plots generated using the *total cumulative return* in the same setup as Figures 2e to 2g, as shown in **Figures 2a to 2c in the general response**, it is clearly evident that our approach performed much better in the later tasks. **To emphasize why we presented (moving) average returns per episode instead of cumulative total return plots in our manuscript, it was to demonstrate that we allow learning for the first task to converge before introducing the second and subsequent tasks.** Furthermore, we acknowledge that the smaller size of these figures might make the trends less apparent.
Therefore, we encourage you to refer to the larger illustrations in Appendix G (Figures 12 to 16), where the replay buffer is not reset to simulate conditions with less interference between task switches. Even under these conditions, our approach (orange) consistently demonstrates superior learning performance compared to DQN (blue). While the performance improvements in the simpler 2D minigrid environments (Center-Wall and Inverted-LWalls) are less pronounced, they remain significant. In contrast, the more complex 3D Four Rooms environment shows a clearer advantage of our method, as seen in Figures 12 and 13. This trend highlights the robustness of our approach, particularly as task complexity increases, further validating the effectiveness of our method across diverse environments. Moreover, the newly added results during the rebuttal phase, which utilize the more complex Mujoco environment, also show that our method (orange) outperforms DDPG (blue), a variant of DQN designed for continuous actions. All these results clearly demonstrate that our method, Simple SF (orange), learns more effectively than DQN and DDPG (blue). This superior performance is due to our method's ability to better generalize and transfer knowledge between tasks, as evidenced by the larger improvements in cumulative total returns when the agent re-encounters the tasks (Exposure 2 in Figure 2 in the General Response). --- Rebuttal Comment 2.1: Title: Final Day Reminder: Clarifying Concerns and Updating Scores Comment: Dear Reviewer Tczf, We hope this message finds you well. As today is the final day for the review discussion, we would like to kindly check in to see if our latest response has addressed your concerns. If the clarifications provided have resolved your questions, we would greatly appreciate it if you could update your score accordingly. Thank you once again for your time and thoughtful feedback throughout this process.
Your input has been invaluable, and we look forward to hearing from you soon.
Summary: This work presents a model architecture to learn successor features in reinforcement learning. It consists of optimizing Eqs. (5) and (6), i.e., a loss for learning the features and a loss for learning the task-specific weights. It claims to avoid representation collapse. Experiments are conducted in common 2D and 3D tasks to show that the proposed method can achieve better performance and higher sample efficiency. Strengths: - A simple method that is easy to understand, and the presentation is easy to follow - Reasonable performance in the experiments Weaknesses: - Even though the paper claims that using the reward to train the task weights $w$ is new, this approach has been discussed before (Ma et al., 2020). Specifically, it has been shown that using the reward $r$ to train $w$ is inferior to using the $Q$ values (Appendix D, Ma et al., 2020). Of course, the two algorithms are not identical, but it remains unclear why using the reward to learn $w$ is the right choice in the current paper. It is necessary to discuss this prior work. - Several other design choices need further explanation. L130-139 presents multiple design choices for the model architecture without proper discussion. For example, why are the L2 normalization and the layer normalization required? Why do we need to stop gradient at those specific places? It would be better to have ablation studies to show the importance of these choices. - Experiment results require further analysis. There is barely any analysis or reasoning for the proposed method. Each subsection in Sec.6 ends with "our method is better" without properly addressing **why** it can achieve better performance. This question is not answered in Sec.7 either. Moreover, the improvement over existing methods is only marginal (see Figs.2&3). More importantly, the quantitative results in Table 1 only show marginal improvement over Orthogonality with a significant performance overlap.
Minor comment: Eqs.(5)&(6) are for scalars so a norm is unnecessary. Reference: - Ma, C., Ashley, D.R., Wen, J. and Bengio, Y., 2020. Universal successor features for transfer reinforcement learning. *arXiv preprint arXiv:2001.04025*. Technical Quality: 2 Clarity: 2 Questions for Authors: See the weakness above. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: The authors adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
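The decomposition the review refers to, a Q-value built as the inner product of successor features and a task vector, with the task vector fit by reward prediction while the basis features are held fixed by a stop-gradient, can be sketched numerically. This is an illustrative NumPy reconstruction by the editor, not the authors' code; all names (`phi`, `w`, `psi`), dimensions, and values are assumptions:

```python
import numpy as np

def reward_loss_grad_w(phi, w, r):
    """Gradient of the reward-prediction loss (r - phi @ w)**2 w.r.t. w only.

    phi is treated as a constant, mimicking the stop-gradient the rebuttal
    describes for Eq. (6): the basis features get no update from this loss.
    """
    return -2.0 * (r - phi @ w) * phi

rng = np.random.default_rng(0)
phi = rng.normal(size=4)        # basis features of one transition (invented)
phi /= np.linalg.norm(phi)      # L2-normalized, as the rebuttal describes
w = np.zeros(4)                 # task-encoding vector, learned from rewards
r = 1.0                         # observed reward (invented)

for _ in range(200):            # plain SGD on w alone; phi is never touched
    w -= 0.1 * reward_loss_grad_w(phi, w, r)

# Stand-in successor features for one action; the value decomposition is
# Q(s, a) = psi(s, a) @ w, with psi learned separately via the Q-TD loss.
psi = 2.5 * phi
q_value = psi @ w

print(float(phi @ w))           # close to r = 1.0: w now predicts the reward
```

Because `phi` has unit norm, each step contracts the prediction error by a constant factor, so `w` converges while the features stay fixed, which is the collapse-avoidance mechanism the rebuttal argues for.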
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate the opportunity to clarify and enhance our manuscript based on your observations. Please let us know if there is further clarification we can provide. # 1. Important differences between the Universal SFs and our approach While both our study and [1] utilize reward prediction loss, our approach integrates this with additional losses to directly learn SFs from pixel observations—a significant departure from [1]. **Our model:** 1. **Learns basis features directly from pixels**, unlike [1] which uses pre-determined basis features. 2. **Does not assume prior knowledge of task specifics**, contrasting with [1] where this is required, making our approach more applicable in continually changing environments. 3. **Integrates SFs directly into the Q-value function**, simplifying and streamlining the learning without the need for redundant losses as seen in [1]. 4. **Ensures SFs play a crucial role** in performance, contrary to [1] where SFs play a minimal role due to the low weighting of SF loss. # 2. Reward Integration & Stop-gradient operator The key distinction in our approach lies in the application of the stop-gradient operator during the learning of the task-encoding vector w with reward prediction loss (Eq. 6 in our manuscript). Unlike in Universal SFs [1] where the basis features and the task-encoding vector w are learned concurrently, our approach prevents updates to the basis features during this learning phase using a stop-gradient operator. This difference is crucial, as we further demonstrated in an ablation study, **'Basis-Rewards' (Figure 4 in GR)**, since concurrent learning has been shown to degrade learning efficiency (Figure 10 in Appendix D of [1]). # 3.
Insufficient analysis We respectfully disagree with the assertion that our work contains “barely any analysis.” Our work goes beyond theoretical discussions of representation collapse by providing empirical evidence (Figures 1a-c) and a **clustering analysis** (Figure 1c) that validate our method's effectiveness. This is complemented by a **mathematical proof sketch in Appendix C**, which explicates the gradient projections in our model, enhancing its applicability in continual learning scenarios. Our comprehensive analysis also includes **computational overhead comparisons** (Figure 6) and ablation studies (**Figure 4 in GR**) that reinforce the efficiency and effectiveness of our approach. Additionally, we will include a proof sketch detailing conditions under which representation collapse can occur (see rebuttal to reviewer 7tgV). # 4. Improvements are marginal Thank you for your observations regarding the experimental results, specifically highlighted in Figures 2 and 3. Your feedback aligns with concerns previously noted by Reviewer Tczf regarding the apparent modest gains when measured using average episode returns. While average episode returns offer quick performance insights, they don't capture the full benefits of our approach. Hence, we've also evaluated cumulative total returns across tasks, which better reflect the agent’s ability to quickly learn and maintain effective policies over time. Our analysis, included in **Figure 2 in GR**, consistently demonstrates significant improvements from our model compared to the baselines across various environments. Specifically, in the complex 3D environment, our model demonstrated significant improvement in cumulative returns, especially when the agent re-encountered previous tasks, *highlighting its enhanced transfer capabilities and effectiveness* in continual learning scenarios. # 5.
L2 normalization and layer-norm L2-normalization is applied to both the basis features and the task-encoding vector w, as commonly done [2,5]. We also normalize w before it enters the Features-Task network. These normalizations ensure consistent scale across inputs, enhancing optimization and training stability and preventing any single feature from disproportionately influencing the learning process due to scale differences. Additionally, we use layer-normalization within the Features-Task network to address un-normalized outputs from the encoder, a practice well-established in deep reinforcement learning to improve model robustness by conditioning the gradients [3,4,6,7]. # 6. Marginal correlation improvement over orthogonality Thank you for your observation regarding the correlation improvements of SFs learned using our model. It's true that models enforcing orthogonality on features might show high correlations with discrete one-hot SRs due to their structured nature. However, our empirical findings, presented in Figures 2 and 3 of the manuscript, highlight that despite possible high correlation, SFs learned with orthogonality constraints often suffer from significant learning deficiencies. This issue becomes even more evident in the challenging Mujoco environments, as detailed in **Figure 1 of GR**. Furthermore, maintaining orthogonality constraints demands considerably more computational resources (Figure 6). Thus, while improvements in correlation might seem modest, our model offers a more balanced approach in optimizing both performance and computational efficiency. # 7. Norm is unnecessary Thank you for the comment and we will make the revision in the final version. [1] Ma et al., 2020. Universal successor features for transfer reinforcement learning. [2] Machado et al., 2020. Count-based exploration with the successor representation. [3] Yarats et al., 2021. Improving sample efficiency in model-free reinforcement learning from images.
[4] Yarats et al., 2021. Mastering visual continuous control: Improved data-augmented reinforcement learning. [5] Liu, et al., 2021. Aps: Active pretraining with successor features. [6] Ball et al., 2023. Efficient online reinforcement learning with offline data. [7] Lyle et al., 2024. Normalization and effective learning rates in reinforcement learning. --- Rebuttal Comment 1.1: Comment: I thank the authors for the additional results and clarifications. Yet, there are some concerns: # 1. Differences 1\. [1] also uses a NN model to learn the basis features $\phi$ of a state (see Fig.1 of [1]). Whether the feature extractor is pixel-based or not depends on the task. 3\. [1] also integrated the SFs directly into the Q-value function (Eq.(3) of [1]). 4\. There is no connection between "crucial role" and large weighting. There is no guarantee that larger weighting indicates better performance either, as the scales of different losses can be vastly different across tasks. In fact, I found this argument contradicts point 3 above and also defeats the main point of the current paper. The current paper argues that the canonical SF loss is problematic (Fig.1), but now the rebuttal said that one needs to have a larger weighting for the canonical SF loss so that the SFs can play a "crucial role," whatever that means. # 4. Evaluation It is unclear why the average episode returns and the cumulative total returns can show different trends. Isn't the former equal to the latter divided by the number of test/evaluation runs? --- Rebuttal 2: Title: Response to Reviewer 2gcv on differences with Ma et al. [1] Comment: Thank you for taking the time to review our rebuttal. We sincerely appreciate your thoughtful comments and are glad to have the opportunity to provide further clarifications. Below, we respond to your excellent points. Please don’t hesitate to reach out if you have any additional questions or concerns. # 1. 
Differences ## 1.1 Basis features First, we now see that we made a mistake in our rebuttal: indeed, in [1] the basis features are learned. The reviewer is also correct that, in principle, there is no reason that the approach in [1] could not be applied to pixels. However, to the best of our knowledge, in the paper itself, the authors did not perform experiments and studies involving pixel-based observations. Instead, experiments in [1] were conducted using state inputs, which is likely why their architecture consisting of fully connected networks worked well (Appendix F in [1]). In addition, we ran experiments with the loss from [1] to make a more direct comparison. Please see below for the description of those experiments and the results. ## 1.3. Direct SF Integration in Q-Learning should eliminate the need for redundant Canonical SF Loss Thank you for your observation. Indeed, [1] does integrate SFs directly into the Q-value function, similar to our approach, but there is a key difference. **[1] relies on an additional SF loss (Eq. (4) in [1]), known as the Canonical SF-TD loss in our paper, which our method does not require.** Our main contribution lies in the simple (but we believe elegant) architectural design that allows the SFs to be learned directly through the Q-learning loss, eliminating the need for a separate SF loss. To highlight the impact of this difference, **we conducted experiments comparing our approach to an agent that combines the Q-learning loss, SF loss, and reward prediction loss, similar to the setup in [1].** Notably, we included the reward prediction loss because, unlike [1], our method does not assume prior knowledge of task specifics, such as goals, which aligns with the expectation in continual learning scenarios.
We named this approach “SF + Q-TD + Reward.” Our results, presented in Figure 6a, demonstrate that the additional SF loss can impair learning efficiency, requiring more time steps to converge to a good policy in the complex 3D Four Rooms environment. Furthermore, this approach is significantly less computationally efficient, as shown by the slower computational speed and longer training duration in Figure 6b. For a detailed comparison of learning performance, **please refer to Figures 17 to 21 in Appendix H.** We believe these findings underscore the advantages of our method, particularly in terms of efficiency and practicality for continual learning. Moreover, they help to illustrate the key differences between the formulation of our approach from [1]. We agree with the reviewer that our method clearly builds on [1] (which was a seminal paper), but we do feel that what we build on this work represents a novel contribution that can help the field to learn SFs more efficiently, as evidenced by our data. ## 1.4. Avoiding arbitrary weighting adjustments of Separate SF loss Thank you for your feedback. We appreciate the opportunity to clarify our position. On reflection, any claim as to whether or not the SF loss plays a “crucial role” should not hinge on something as basic as the weighting term in the loss. Indeed, the lower weighting used in [1] ($\lambda > 0$) *may be a result of differing scales among the losses or potential conflicts between the SF loss and the Q-learning loss.* But, from a practical point of view, it is fair to say that [1] requires a careful selection of the weighting coefficient $\lambda$. In contrast, our proposal, through careful algorithmic and architectural design, eliminates the need for a separate SF loss and any concerns about its weighting. In doing so, we directly mitigate the potential problems associated with down-weighting the SF loss, ensuring that SFs meaningfully contribute to the agent's performance. 
We hope this addresses your concerns. After reading your reply, we feel that our initial rebuttal did not accurately capture the essence of why our contribution is unique and novel relative to [1]. But, as we described above, **the key advantages of our method are that: 1) We do not need to provide the goal to the agent (it is learned); (2) We provide direct evidence that we can learn off of pixel inputs; (3) We show that we do not need to include the SF loss; and (4) By eliminating the need for the SF loss, we reduce the number of hyperparameters required.** --- Rebuttal 3: Title: Response to Reviewer 2gcv on Evaluation Comment: # 4. Evaluation Thank you for your comments, and we apologize for any confusion. To clarify, the average episode return is calculated as a moving average over recent episodes—typically the last 100 episodes experienced by the agent in our case. This metric provides a more immediate snapshot of the agent’s recent performance. In contrast, the cumulative total return is the sum of all returns accumulated from the moment the agent is first exposed to the current task until the end of the evaluation for that task. This metric reflects the overall performance across the entire evaluation period. These two metrics can show different trends because the moving average episode return emphasizes recent performance, which may fluctuate, while the cumulative total return captures the long-term accumulation of rewards. **To emphasize why we presented moving average returns per episode instead of cumulative total return plots in our manuscript, it was to demonstrate that we allow learning for the first task to converge before introducing the second and subsequent tasks.** We hope this explanation resolves the confusion, and we will ensure that these differences are clearly explained in the manuscript. 
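The two metrics contrasted in this response are easy to pin down concretely. The sketch below is the editor's illustration, not the authors' code; the window of 100 episodes matches the rebuttal's description, while the episode-return trace is invented:

```python
def moving_avg_return(returns, window=100):
    """(Moving) average episode return: the mean over the most recent
    `window` episodes, emphasizing the agent's current performance."""
    recent = returns[-window:]
    return sum(recent) / len(recent)

def cumulative_total_return(returns):
    """Cumulative total return: the sum of all returns since the agent's
    first exposure to the current task."""
    return sum(returns)

# Invented trace: 100 good episodes, a 5-episode dip at a task switch,
# then 20 episodes of recovery.
episode_returns = [1.0] * 100 + [0.2] * 5 + [1.0] * 20

print(moving_avg_return(episode_returns))        # ~0.96: recent dip visible
print(cumulative_total_return(episode_returns))  # ~121.0: long-run accumulation
```

The moving average still reflects the recent dip, while the cumulative total keeps accruing through it, which is why the two metrics can show different trends for the same agent.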
--- Rebuttal Comment 3.1: Title: Final Day Reminder: Clarifying Concerns and Updating Scores Comment: Dear Reviewer 2gcv, We hope this message finds you well. As today is the final day for the review discussion, We would like to kindly check in to see if our latest response has addressed your concerns. If the clarifications provided have resolved your questions, we would greatly appreciate it if you could update your score accordingly. Thank you once again for your time and thoughtful feedback throughout this process. Your input has been invaluable, and we look forward to hearing from you soon. --- Rebuttal 4: Title: Response to Reviewer 2gcv on differences with Ma et al. [1] Comment: **This is a re-submit as it seems that our earlier previous response did not notify the reviewers via email.** Thank you for taking the time to review our rebuttal. We sincerely appreciate your thoughtful comments and are glad to have the opportunity to provide further clarifications. Below, we respond to your excellent points. Please don’t hesitate to reach out if you have any additional questions or concerns. # 1. Differences ## 1.1 Basis features First, we see now that we made a mistake in our rebuttal, indeed, in [1] the basis features are learned. As well, the reviewer is correct that, in principle, there is no reason that the approach in [1] could not be applied to pixels. However, to the best of our knowledge, in the paper itself, the authors did not perform experiments and studies involving pixel-based observations. Instead, experiments in [1] were conducted using state inputs, which is likely why their architecture consisting of fully connected networks worked well (Appendix F in [1]). In addition, we ran experiments with the loss from [1] to make a more direct comparison. Please see below for the description of those experiments and the results. ## 1.3. 
Direct SF Integration in Q-Learning should eliminate the need for redundant Canonical SF Loss Thank you for your observation. Indeed, [1] does integrate SFs directly into the Q-value function, similar to our approach, but there is a key difference. **[1] relies on an additional SF loss (Eq. (4) in [1]), known as the Canonical SF-TD loss in our paper, which our method does not require.** Our main contribution lies in the simple (but we believe elegant) architectural design that allows the SFs to be learned directly through the Q-learning loss, eliminating the need for a separate SF loss. To highlight the impact of this difference, **we conducted experiments comparing our approach to an agent that combines the Q-learning loss, SF loss, and reward prediction loss, similar to the setup in [1].** Notably, we included the reward prediction loss because, unlike [1], our method does not assume prior knowledge of task specifics, such as goals, which aligns with the expectation in continual learning scenarios. We named this approach “SF + Q-TD + Reward.” Our results, presented in Figure 6a, demonstrate that the additional SF loss can impair learning efficiency, requiring more time steps to converge to a good policy in the complex 3D Four Rooms environment. Furthermore, this approach is significantly less computationally efficient, as shown by the slower computational speed and longer training duration in Figure 6b. For a detailed comparison of learning performance, **please refer to Figures 17 to 21 in Appendix H.** We believe these findings underscore the advantages of our method, particularly in terms of efficiency and practicality for continual learning. Moreover, they help to illustrate the key differences between the formulation of our approach from [1]. 
We agree with the reviewer that our method clearly builds on [1] (which was a seminal paper), but we do feel that what we have built on top of this work represents a novel contribution that can help the field learn SFs more efficiently, as evidenced by our data. ## 1.4. Avoiding arbitrary weighting adjustments of a separate SF loss Thank you for your feedback. We appreciate the opportunity to clarify our position. On reflection, any claim as to whether or not the SF loss plays a “crucial role” should not hinge on something as basic as the weighting term in the loss. Indeed, the lower weighting used in [1] ($\lambda > 0$) *may be a result of differing scales among the losses or potential conflicts between the SF loss and the Q-learning loss.* But, from a practical point of view, it is fair to say that [1] requires a careful selection of the weighting coefficient $\lambda$. In contrast, our proposal, through careful algorithmic and architectural design, eliminates the need for a separate SF loss and any concerns about its weighting. In doing so, we directly mitigate the potential problems associated with down-weighting the SF loss, ensuring that SFs meaningfully contribute to the agent's performance. We hope this addresses your concerns. After reading your reply, we feel that our initial rebuttal did not accurately capture the essence of why our contribution is unique and novel relative to [1]. But, as we described above, **the key advantages of our method are that: (1) we do not need to provide the goal to the agent (it is learned); (2) we provide direct evidence that we can learn from pixel inputs; (3) we show that we do not need to include the SF loss; and (4) by eliminating the need for the SF loss, we reduce the number of hyperparameters required.** --- Rebuttal 5: Title: Response to Reviewer 2gcv on Evaluation Comment: **This is a re-submit, as it seems that our earlier response did not notify the reviewers via email.** # 4. 
Evaluation Thank you for your comments, and we apologize for any confusion. To clarify, the average episode return is calculated as a moving average over recent episodes (typically the last 100 episodes experienced by the agent, in our case). This metric provides a more immediate snapshot of the agent’s recent performance. In contrast, the cumulative total return is the sum of all returns accumulated from the moment the agent is first exposed to the current task until the end of the evaluation for that task. This metric reflects the overall performance across the entire evaluation period. These two metrics can show different trends because the moving average episode return emphasizes recent performance, which may fluctuate, while the cumulative total return captures the long-term accumulation of rewards. **We presented moving average returns per episode instead of cumulative total return plots in our manuscript to demonstrate that we allow learning for the first task to converge before introducing the second and subsequent tasks.** We hope this explanation resolves the confusion, and we will ensure that these differences are clearly explained in the manuscript.
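As a concrete illustration of the two metrics (the return values below are toy numbers, not from our experiments):

```python
def moving_average_return(episode_returns, window=100):
    """Mean return over the most recent `window` episodes (a snapshot of recent performance)."""
    recent = episode_returns[-window:]
    return sum(recent) / len(recent)

def cumulative_total_return(episode_returns):
    """Sum of all returns since the agent was first exposed to the current task."""
    return sum(episode_returns)

# Recent performance has plateaued, while the cumulative total keeps growing.
returns = [0.0, 2.0, 4.0, 4.0, 4.0]
print(moving_average_return(returns, window=3))  # 4.0
print(cumulative_total_return(returns))          # 14.0
```

The moving average reflects only the plateau, whereas the cumulative total also accumulates the early low-return episodes, which is why the two curves can show different trends.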
Summary: This work introduces a new algorithm for training successor features in deep reinforcement learning. This is achieved by optimizing two separate metrics. The first requires the model to predict the cumulative reward following a full trajectory and optimizes the successor features and the basis feature. The second optimized metric requires the model to predict the reward at the next step using the basis feature and task vector, but in this case only optimizes the task vector. Experiments are conducted in grid worlds and a 3D Four Rooms domain, which demonstrates that the proposed algorithm learns more consistently than other SF baselines and also supports faster task switching. Finally, supporting experiments show that the proposed algorithm is faster algorithmically and in terms of wall-clock time than SF alternatives, and that it also results in more separable SFs which correlate better with successor representations. Strengths: # Originality The decoupling of training into two separate equations is a new and intuitive idea. The authors note the inspiration from Liu et al. (2021); however, this algorithm is used as a baseline and clearly out-performed empirically. Thus, it is clear that the changes made are material and have an impact on model performance. # Clarity The paper is well written and sections are structured appropriately. Notation is intuitive, consistent and aids understanding. Figure captions are detailed, which also aids clarity. # Quality I particularly appreciate some of the additional experiments conducted in support of the algorithm, such as the correlation between the learned SFs and SRs. The core experiments appear sufficiently challenging to separate the proposed algorithm from the baselines, and the baselines which are used are appropriate to challenge the proposed algorithm. The results are interpreted fairly, as all algorithms do struggle on at least one domain where the simple SFs do not and consistently perform well. 
# Significance I do think this work could lead to future work and provide a helpful step in improving SFs and making them more practical. The significance is aided by the originality and simplicity of the approach, as it is likely to spur new ideas quickly as a result. Weaknesses: # Clarity The figures in this work are laid out poorly and this significantly hinders the readability of the paper. At the least, it would help if a reader did not have to look past unseen figures on their way to the one being referenced, such as when looking for Figure 6, which comes after Figure 5. Also, having the architectures in Figure 1 far from where they are needed and completely out of context is jarring and unhelpful. These architecture diagrams are also very difficult to follow, and there is no clear mapping from what is depicted for some of the pieces to any explanation in the text or caption. Most of what is depicted in Figure 1d is not in Section 3, and similarly Section 5 is not detailed enough for me to map onto Figure 1e. I think more detail could be added to the figure itself and to the caption here. Lastly, including the loss functions in the figure, especially ones which have not been explained in the text, like the orthogonality loss, is confusing. If these losses are not necessary in the main text then I don't think they are necessary in the figure, and so I would remove them. With respect to Proposition 1, it would be better if this was at least in the main text (some kind of proof sketch would be better), but in the interest of space I see why it was omitted. Once again, I point to this as a part which could do with more explanation of why it matters and intuition on why it is true. If I am correct, Proposition 1 is the reason why Equations 5 and 6 cannot be optimized with the representation collapse strategy? Secondly, the Preliminaries section could use more elaboration. 
Equation 2 in particular is presented without any discussion and $\gamma$ is not introduced at all. A reader with experience in RL and SFs will be fine, but less experienced readers will likely be alienated. It would be ideal if this section could set up the ideas to come, and this is done to a degree with it being noted that representation collapse can still optimize Equation 4. More of this insight would be helpful. Similarly, I would appreciate more discussion on why representation collapse is no longer able to optimize Equations 5 and 6. This is merely stated without reason on lines 113 to 115. # Quality I am not certain I agree with the assessment from Section 7.2. While it is a worthwhile experiment, the result of simple SFs being more correlated just appears to be due to the fact that it has a more linear latent embedding. This can be seen in Figure 5. SF+Reconstruction has very separable and clear clusters, but they are just not organised in a straight line. So correlations (a linear metric) will not work. It seems to me that it would be more appropriate to try to decode SRs from the SFs using a simple but nonlinear model and report the final accuracy. Technical Quality: 3 Clarity: 3 Questions for Authors: I have asked some questions in my review above and would appreciate those being answered. I do not have any other questions at this time. If my question on Proposition 1 is answered and makes sense to me, and the concerns under Quality are addressed, I would be likely to advocate for acceptance, with agreement that the clarity would also be improved and figures restructured. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are stated in their own section and a good deal of consideration is given towards the broader impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate the opportunity to clarify and enhance our manuscript based on your observations. Please let us know if there is further clarification we can provide. # 1. Layout of Figure 5 and Figure 6 Thank you for your feedback on the order of Figures 5 and 6 in our manuscript. We recognize that aligning figure placement with their respective sections will enhance the manuscript's readability and coherence. Currently, Figure 6 is introduced in Section 7.1's “Efficiency Analysis,” and Figure 5 appears later in Section 7.2's “Comparison to Successor Representations.” To improve narrative flow, we propose to swap these sections. This rearrangement will position 'Comparison to Successor Representations' before 'Efficiency Analysis,' ensuring that the figures align more logically with their related discussions. # 2. Modifications of Figure 1 Thank you for your valuable feedback regarding Figure 1. Due to space constraints in this rebuttal response, we invite you to refer to our detailed response provided to Reviewer Tczf under the section 'Splitting Figure 1' and in the general response above. # 3. Motivations behind Proposition 1 Thank you for the suggestion. Proposition 1 aims to mathematically demonstrate that the gradients from optimizing the Q-SF-TD loss (Eq. 5) effectively project the gradients from the canonical SF-TD loss (Eq. 4) along the task-encoding vector w. This projection is crucial in Continual RL as it aligns the SFs with different tasks, enabling the agent to adapt more rapidly to varying tasks. However, your comment on Proposition 1 being the reason why representation collapse is being mitigated is incorrect. For clarity on why representation collapse can occur, we have included an additional proof sketch below. # 4. 
Proof Sketch for Representation Collapse in Basis Features Consider the basis features function $\phi(\cdot) \in \mathbb{R}^n$ and the SFs $\psi(\cdot) \in \mathbb{R}^n$, omitting the inputs for clarity. The canonical SF-TD loss (Eq. 4) is defined as: \begin{align} L_{\phi, \psi} = \frac{1}{2} \left\| \phi(\cdot) + \gamma \psi(\cdot) - \psi(\cdot) \right\|^2 \end{align} Assume both $\phi(\cdot)$ and $\psi(\cdot)$ are constant across all states $\mathcal{S}$, such that $\phi(\cdot) = c_1$ and $\psi(\cdot) = c_2$. If $c_1 = (1-\gamma)c_2$, then: $L_{\phi, \psi} = \frac{1}{2} \left\| (1 - \gamma)c_2 + \gamma c_2 - c_2 \right\|^2 = 0$ This scenario illustrates that if both the basis features and SFs become constants, particularly with $c_1 = (1-\gamma)c_2$, the system will satisfy the zero-loss condition, resulting in representation collapse. In this state, $\phi(\cdot)$ loses its ability to distinguish between different states effectively, causing the model to lose critical discriminative information and thus impairing its generalization capabilities. # 5. Introduction to RL in Preliminaries Thank you for the comment. We will add the following text to Section 3 to aid readers who may not be familiar with RL. The RL setting is formalized as a Markov Decision Process defined by a tuple $(\mathcal{S}, \mathcal{A}, p, r, \gamma)$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions, $r: \mathcal{S} \rightarrow \mathbb{R}$ is the reward function, $p: \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$ is the transition probability function, and $\gamma \in [0,1)$ is the discount factor, which is used to balance the importance of immediate and future rewards. At each time step $t$, the agent observes state $S_t \in \mathcal{S}$ and takes an action $A_t \in \mathcal{A}$ sampled from a policy $\pi: \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$, resulting in a transition to the next state $S_{t+1}$ with probability $p(S_{t+1} \mid S_t, A_t)$ and the reward $R_{t+1}$. 
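The zero-loss collapse condition in this sketch can also be checked numerically (a toy example with arbitrary constants and $\gamma = 0.5$; not code from the paper):

```python
import numpy as np

gamma = 0.5
c2 = np.array([0.5, -1.0, 2.0])   # collapsed (constant) SFs: psi(.) = c2 for every state
c1 = (1.0 - gamma) * c2           # collapsed basis features: phi(.) = c1 = (1-gamma)*c2

# Canonical SF-TD loss with constant phi and psi: 0.5 * || phi + gamma*psi - psi ||^2
loss = 0.5 * np.sum((c1 + gamma * c2 - c2) ** 2)
print(loss)  # 0.0 -- zero loss even though phi carries no state information
```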
# 6. Linear Latent Embeddings and Correlation Analysis Thank you for your comment. We welcome the chance to clarify our use of UMAP for embeddings (Figure 5) and Spearman's rank correlation for analysis (Table 1). First, we use UMAP [1], a non-linear dimension reduction technique, for its effectiveness in visualizing complex relationships within Successor Features (SFs) in 2D space. It's crucial to note that UMAP does not imply linearity; the spatial arrangement of clusters should not be interpreted as linear relationships among features. Second, our correlation analysis employs Spearman's rank correlation coefficient [2], outlined in Appendix K. This method assesses monotonic, non-linear relationships, suitable for our data's characteristics. Contrary to any suggestions of linearity, Spearman's correlation is non-parametric and does not assume linear relationships. We will clarify these points in the manuscript to eliminate any ambiguity about our methods and to underscore the appropriateness and robustness of our analysis. # 7. Decoding SRs from SFs Thank you for suggesting we use a simple nonlinear model to decode SRs from SFs. We implemented a single-layer perceptron with ReLU activation, training it for 4000 iterations using a 0.001 learning rate and Adam optimizer to ensure convergence within the center-wall environment. In **Figure 3 of the General Response**, we present the mean squared error (MSE) results for both fully-observable (allocentric) and partially-observable (egocentric) settings. Our model achieved notably low errors, outperforming baselines in both contexts, highlighting its robustness and the effectiveness of our successor features in varying observational settings. This consistency was not observed in baseline models, such as SF + Random (green) and SF + Reconstruction (red), which showed variable performance. These results confirm the strength and reliability of our decoded successor representations across diverse settings. 
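A minimal sketch of such a decoding probe follows (synthetic stand-in data for the SFs and SRs, and plain full-batch gradient descent in place of Adam, so this is illustrative rather than our exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_sf, d_sr, d_h = 256, 16, 8, 32
sfs = rng.normal(size=(n, d_sf))   # stand-in for learned SFs (probe inputs)
srs = rng.normal(size=(n, d_sr))   # stand-in for target successor representations

W1 = rng.normal(scale=0.1, size=(d_sf, d_h)); b1 = np.zeros(d_h)
W2 = rng.normal(scale=0.1, size=(d_h, d_sr)); b2 = np.zeros(d_sr)

def predict(x):
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2   # one ReLU hidden layer

mse_before = np.mean((predict(sfs) - srs) ** 2)
lr = 0.001
for _ in range(4000):                  # gradient descent on the MSE decoding loss
    h = np.maximum(sfs @ W1 + b1, 0.0)
    err = (h @ W2 + b2) - srs
    gW2 = h.T @ err / n; gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)
    gW1 = sfs.T @ dh / n; gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse_after = np.mean((predict(sfs) - srs) ** 2)
print(mse_after < mse_before)  # True: training reduces the decoding error
```

The reported MSE in Figure 3 of the General Response is the analogue of `mse_after`, computed with the actual SFs and SRs from each model.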
We will incorporate this analysis into the manuscript. [1] McInnes et al., 2018. UMAP: Uniform manifold approximation and projection. [2] Zar, 2005. Spearman rank correlation. --- Rebuttal Comment 1.1: Title: Response to Rebuttal by Reviewer 7tgV Comment: I thank the authors for their response. I make note of the agreed changes to the figures and their layout in the general comment, this rebuttal and in the discussion with Reviewer Tczf. These seem like appropriate changes and address my concerns with the figures. For the points on the motivation of Proposition 1 and the proof sketch for representation collapse: I understood how, if $\phi$ and $\psi$ become constant, the loss is minimized. My point is that the insight of the proposition is missing. For example, the sentence in this rebuttal, "This projection is crucial in Continual RL as it aligns the SFs with different tasks, enabling the agent to adapt more rapidly to varying tasks.", is key, but even then the insight would come from showing how the projection aligns with the task encoding vector and showing, or at least arguing, why this helps with downstream tasks. I appreciate that the proof and some insight of this sort is in the appendix, but in reality this is the primary contribution and deeper insight here, and it comes in as a passing comment. Perhaps I am missing the point of the work, but the fact that the proposed method behaves in this way seems crucial. In addition, since the representation collapse strategy of the standard SR is noted as being the primary problem, showing how projecting along the task vector fixes this is key. Relying in the main text only on empirics denies the reader this deeper insight, or at least expects them to go and read the proof and obtain the insight themselves from it. With respect to the RL preliminaries: I appreciate the authors agreeing to this, as it must feel like a nuisance. However, I think it is in the best interest of the broader readership of NeurIPS. 
For the correlation analysis, I thank the authors for correcting me, and agree that mention of this would be helpful in the main text. As I mentioned, this was one of my main concerns and I will be raising my score as a result. I also appreciate the new experiments added to the pdf draft and find this to be compelling evidence. I thank the authors for including the decoding experiment as well and find it convincing. Ultimately, my lingering concern remains with Proposition 1. Essentially, lines 113 to 115 explain to us the outcome of the proposed setup, lines 116 to 118 then literally state Proposition 1 (its proximity to lines 113 to 115 leading me to assume it was more related than it is), and then there is a throw-forward to the empirical results. So from lines 113 to 119, where the main purpose of the proposed method is being summarized, a reader is told what to think, but never shown it. This makes the entire section fall flat, and since this is the technical punchline it makes the paper fall slightly flat. I believe the experimental results (adjusting for my own misunderstandings and the new results) show the intended meaning, and I am confident in the correctness of the claims of the work. But a deeper insight into how the proposed method really results in learned SRs and avoids representation collapse still seems missing. Proposition 1, or something of this nature, would likely address this. I would raise my score further if this was addressed in the coming days. I once again thank the authors for their thoughtful response and new experiments. As my quality concerns were due to an error in my understanding, and this has now been corrected, I will raise my score to a 5 and also increase my confidence. I am also raising the soundness and clarity scores in light of the correction and improved figures. I look forward to further discussion on Proposition 1 if the authors are able. 
--- Rebuttal 2: Title: Response to Reviewer 7tgV Comment: Thank you for taking the time to review our rebuttal. We sincerely appreciate your thoughtful feedback and the subsequent adjustment in your evaluation. In addition, we would like to take this opportunity to further clarify certain points, specifically regarding how our proposed method results in learned Successor Features (SFs) and avoids representation collapse, as well as Proposition 1. # Improving Clarification on Overcoming Representation Collapse We would like to thank the reviewer for engaging constructively with us; their input has been extremely helpful for improving our paper. Below, we propose two additional modifications to the manuscript to address the points raised by the reviewer in their response to our rebuttal. First, we will incorporate the proof sketch regarding representation collapse into the main text near line 100, where we initially mention the scenario in which the basis features $\phi$ may become a constant vector when the loss is minimized. Second, at the beginning of Section 4, “Proposed Method,” we will emphasize that the key insight from the proof sketch is that preventing representation collapse requires avoiding the scenario where the basis features $\phi$ become a constant vector for all states, which would minimize the loss without contributing to meaningful learning. Our approach addresses these constraints by not optimizing the basis features $\phi$ within any loss functions used. Instead, we treat the basis features $\phi$ as the normalized output from the encoder, which is learned using the Q-SF-TD loss (Eq. 5). When the basis features $\phi$ are needed to learn the task encoding vector $w$ through the reward prediction loss (Eq. 6), we apply a stop-gradient operator to treat the basis features $\phi$ as a constant. As we will demonstrate in Section 7, “Analysis of Efficiency and Efficacy,” this inclusion of a stop-gradient operator is crucial. 
Without it, learning both the basis features $\phi$ and the task encoding vector $w$ concurrently can lead to learning instability (as we explained to Reviewer 2gcv). # Improving Clarification for Proposition 1 Regarding Proposition 1, on re-reading our own text, we must admit that we agree completely with the reviewer. Given the way the text jumps from the discussion of representational collapse in lines 113-115 and then brings up Proposition 1, it is natural for a reader to assume that Proposition 1 will deal with representational collapse, and yet it doesn’t. We can see now that this would have caused confusion for readers, potentially suggesting a misleading connection between Proposition 1 and representation collapse. To improve clarity, we propose the following amendments: 1. Add a concluding sentence after line 115 where we will state, “Next, we will clarify how our approach relates to learning SFs, as they are defined mathematically.” 2. Create a new subsection titled “4.1 Bridging Simple SFs and Universal Successor Features,” where we will expand on the insights related to Proposition 1 (expanding the text currently in lines 116-119). In expanding on these insights, we will highlight the fact that Proposition 1 explains why our approach ultimately produces true SFs: it proves that minimizing our losses (Eqs. 5 & 6) also minimizes the canonical SF loss used in Universal Successor Features (Eq. 4). In order to tie this to the previous section, we will also note that our approach minimizes these losses in a manner such that setting the basis features $\phi$ to a constant is not a solution. Specifically, we will note in the text that if one sets $\psi = c_2$ and $\phi = c_1 = (1 - \gamma) c_2$, then Eqs. 5 & 6 are not minimized, due to the fact that $\hat{y}$ and $R_{t+1}$ are not constants. We believe these revisions will significantly enhance the clarity and robustness of our manuscript. 
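In autograd terms, the stop-gradient arrangement described in this rebuttal amounts to detaching the basis features before the reward-prediction loss. A toy sketch (random normalized features and a scalar reward target; not our actual training code) shows that $w$ can be fitted while $\phi$ stays fixed:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
phi = rng.normal(size=d)
phi = phi / np.linalg.norm(phi)   # basis features: normalized encoder output
w = np.zeros(d)                   # task-encoding vector, learned via reward prediction
r_target = 1.5                    # toy reward target
lr = 0.1

for _ in range(200):
    r_hat = phi @ w                       # predicted reward
    grad_w = (r_hat - r_target) * phi     # d/dw of 0.5*(r_hat - r)^2
    w -= lr * grad_w
    # Stop-gradient: phi is treated as a constant here and receives no update
    # from the reward-prediction loss; it is trained only via the Q-SF-TD loss.

print(abs(phi @ w - r_target) < 1e-6)  # True: w fits the reward while phi is fixed
```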
Again, we thank you for your insightful feedback, and please feel free to reach out if any further clarification is needed. We hope you will consider raising your score once more based on these responses. --- Rebuttal Comment 2.1: Title: Final Day Reminder: Clarifying Concerns and Updating Scores Comment: Dear Reviewer 7tgV, We hope this message finds you well. As today is the final day for the review discussion, we would like to kindly check in to see if our latest response has addressed your concerns. If the clarifications provided have resolved your questions, we would greatly appreciate it if you could update your score accordingly. Thank you once again for your time and thoughtful feedback throughout this process. Your input has been invaluable, and we look forward to hearing from you soon. --- Rebuttal 3: Title: Response to Reviewer 7tgV Comment: **This is a re-submit of Rebuttal 2 above, as it seems that our earlier response did not notify the reviewers via email.**
Rebuttal 1: Rebuttal: We would like to thank the reviewers once again for their valuable feedback, which has guided clarifications and improvements that we will include in the final revision of our manuscript. **We have attached a set of figures in this Author Rebuttal, which we denote as the General Response (GR)**, to address the main concerns from the reviewers. The concerns fall broadly under the following themes: # 1. Complexity of the environments During the rebuttal phase, we further evaluated our model in more complex settings using the Mujoco environments with pixel-based observations. We consider this benchmark to show the potential of our model in continuous action spaces. Following the established protocol in [1], we started with the half-cheetah domain in Task 1, where agents were rewarded for running forward. We then introduced three different scenarios in Task 2: agents were rewarded for running backwards (Figure 1a in GR), running faster (Figure 1b in GR), and, in the most drastic change, switching from the half-cheetah to the walker domain (same number of actions) with a forward running task (Figure 1c in GR). **To ensure comparability across these diverse scenarios, we normalized the returns, considering that each task has a different maximum attainable return per episode.** In all tested scenarios, our model consistently outperformed all baselines in Task 1 and, particularly, Task 2, highlighting its superior adaptability and effectiveness in complex environments. This performance sharply contrasts with other SF-related baseline models, which struggled to adapt under similar conditions. # 2. Marginal improvements We initially used average episode returns to provide quick insights into short-term performance, but recognize that this metric may not fully capture the long-term benefits of our model. To address this, we also evaluated cumulative total returns across all tasks, which are illustrated in Figure 2 in GR. 
These results demonstrate that our model not only learns effective policies more rapidly but also sustains these improvements, particularly in complex 3D environments where tasks are re-encountered (Figure 2c-d in GR). Overall, our model showed significant improvement in cumulative returns over the baseline models, highlighting its robustness and ability to transfer learning effectively across tasks. This contrasts with DQN, which exhibited little to no transfer effects and required re-learning from scratch, as evidenced by its performance in these scenarios. # 3. Simple nonlinear decoder Reviewer 7tgV recommended a simple non-linear decoder to assess which model’s SFs most effectively decode into Successor Representations (SRs). We conducted this evaluation using both allocentric (fully-observable) and egocentric (partially-observable) pixel observations within the center-wall environment. The results, depicted in Figure 3 in the GR, demonstrate consistently high accuracy across both settings. This contrasts sharply with SFs developed using reconstruction constraints or random basis features, which, while effective in egocentric settings, perform poorly in allocentric settings where feature sparsity is greater. This analysis highlights the robustness and versatility of our model's SFs in varied observational contexts. # 4. Stop Gradient Operator The comments from Reviewer 2gcv prompted us to conduct an additional ablation study to elucidate the effectiveness of the reward prediction loss (Eq. 6) in our approach, compared to prior work [2] that faced challenges with similar methods. A key differentiator in our model is the application of a stop gradient operator on the basis features during the learning process with reward prediction loss. We designed this study to specifically assess whether the stop gradient operator is essential for successful learning using reward prediction loss. 
The findings, presented in Figure 4a in GR, conclusively show that omitting the stop gradient operator leads to significantly reduced learning efficiency and policy effectiveness. Additionally, visual analysis of the SFs in Figure 4b in GR further demonstrates that basis features and task-encoding vectors learned concurrently without a stop gradient operator result in SFs with poor discriminative capabilities, undermining effective policy learning. These results underscore the critical role of the stop gradient operator in maintaining the integrity and effectiveness of our learning process, confirming its necessity for achieving the robust performance we report. # 5. Modifications to Figure 1 Lastly, there were additional concerns regarding the configuration and density of Figure 1. As previously detailed in individual rebuttal responses to Reviewers 7tgV and Tczf, and for broader awareness, we will implement the following modifications: - **Simplification**: We will remove the loss functions from Figures 1d and 1e, with detailed descriptions retained in Appendix E and the main text, respectively. This will help focus attention on the structural content. - **Reorganization**: Figure 1d will be moved to the Appendix as it primarily presents common approaches rather than our novel contributions, ensuring the main text remains focused on our work. - **Relabeling and Relocation**: Figure 1e will be renamed as Figure 2 and relocated closer to Sections 4 and 5 where it is first mentioned, aligning it more closely with its textual references and enhancing narrative coherence. - **Visual Guidance Enhancements**: We will replace terms like “Q-SF-TD loss” with “$L_\psi$: Q-SF-TD loss” and introduce color-coded symbols to improve figure-text integration. For example, pixel-level observations, $S_t$, will be described in the text with direct references to their visual representation in the newly labeled Figure 2. [1] Yarats et al., 2021. 
Mastering visual continuous control: Improved data-augmented reinforcement learning. [2] Ma, et al., 2020. Universal successor features for transfer reinforcement learning Pdf: /pdf/75224e58c6aaf339e3cfe0caf4e73940d828f2c9.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
F-OAL: Forward-only Online Analytic Learning with Fast Training and Low Memory Footprint in Class Incremental Learning
Accept (poster)
Summary: The author proposed a method called F-OAL for online class incremental learning, which does not rely on back-propagation and is forward-only, significantly reducing memory usage and computational time. In summary, the contributions are as follows: 1) The paper presents F-OAL, which is an exemplar-free method. 2) F-OAL can be updated in a mini-batch manner; 3) The method is evaluated on several benchmarks; Strengths: The author proposed a method called F-OAL for online class incremental learning, which does not rely on back-propagation and is forward-only, significantly reducing memory usage and computational time. Weaknesses: 1) Some of the descriptions are unclear, such as Formula 4. The author may want to give a more vivid explanation; 2) The innovation is limited, see Limitations No. 5; 3) The author may want to discuss more methods in the evaluation part; Technical Quality: 2 Clarity: 2 Questions for Authors: See Limitations. The author may want to address the questions proposed in Limitations. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: 1. Formula No. 4 seems to be wrong. Please check the formula carefully and give the proof process to ensure that the equation is valid. 2. Why does Formula No. 4 show the optimal solution? For each mini-batch of data, the parameter W is the solution that makes Formula No. 5 always equal to 0, in theory. But considering all batches of data, parameter W is too idealized and may even overfit the data of the current batch. 3. It is difficult for the parameter W calculated using a mini-batch of data to have an effect on other batches of data, especially when the distribution of data for different tasks is significantly different. Even if all the parameters W of the mini-batches are combined, I don't think it can exceed the back-propagation-based method in terms of effect, because this combination is linear. 4. Is the optimization process recursive on all batches of data? 
Why is the recursive method more efficient? Less computation? GPU parallel computing? Please explain the reason. 5. The innovation of this paper is insufficient. It seems that the main contribution is to calculate the parameter W by using the forward process and the least squares method. In fact, this method faces many disadvantages, such as overfitting. 6. Many models that appeared in the comparative experiments did not have annotated references, such as DVC in Table 1. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer XB6P Thank you for the thorough review. We provide a more detailed response below. We hope this helps address your concerns. ## W1&L1: The formula No.4 seems to be wrong. Thank you for pointing out the typo. We revise $ϕ(X)Y$ to $ϕ(X)^⊤Y$ in equation 4. ## L2: Why does the formula No.4 show the optimal solution? Is W too idealized and may even overfit the data of the current batch? We apologize that our way of delivering the analytical solution can be misleading. In fact, Eq. 4 is the optimal LS solution to the loss function in Eq. 3 in the non-CIL sense (like a preliminary **in the case with all the data**). That is, $W$ here **does NOT indicate the weight for any batch**, but the one computed using **all data**. On the other hand, Eq. 5 indicates **the same loss function** as that of Eq. 3, but in the sense of breaking the dataset into $k$ segments (hence the "$1:k$" symbols). This leads to the least-squares solution in Eq. 7 (the same as Eq. 4 but in the sense of breaking the dataset into $k$ segments). Here the **optimal solution** $W$ does not guarantee Eq. 5 being 0. The "optimal" here indicates that the obtained $W$ allows the objective function in Eq. 5 to be the smallest (not necessarily 0 though). The use of Eq. 5 (with many "$1:k$") prepares us for the subsequent recursive derivations (i.e., from "$1:k-1$" to "$1:k$"). Based on this, we are able to derive the F-OAL mainly indicated by Eq. 10 and Eq. 12 in **recursive form**. Note that the $W$ recursively computed using Eq. 10 or Eq. 12 obtains **the identical result to that of Eq. 7**. That is, the update formula in Eq. 10 or Eq. 12 considers both the current-batch data (e.g., $k$) and the past-batch data (e.g., $1:k-1$). In a sense, the F-OAL does not suffer from catastrophic forgetting (CF) at all! ## L3: Can recursive least squares exceed back-propagation? Our F-OAL beats BP because it does not suffer from CF (see analysis in L2). 
According to [1], using BP to update the model is the reason why CF happens, which leads to incomplete feature learning and recency bias [2]. Existing baselines are still based on BP and manage to alleviate CF. We redefine the OCIL into a recursive learning problem to avoid BP. According to [3], the recursive update paradigm could obtain results that are identical to their joint-learning counterparts. To give further proof, we provide a quick experiment on MNIST to show that BP suffers from CF, while recursive methods do not. A two-layer MLP is trained on the first 5 classes of MNIST (i.e., digits 0 to 4) via BP and we test it on these 5 classes. Then the MLP is trained on the rest of the 5 classes (i.e., digits 5 to 9) via BP incrementally for each class per phase and we test it on the old 5 classes (i.e., digits 0 to 4). We apply the identical experiment to our F-OAL using the same MLP. The results are reported below. Please check the source code of the quick experiment in **General Response**. | | Acc on 0:4 after trained on 0:4 | Acc on 0:4 after trained on 0:9 | |---------|--------------------------|--------------------------| | BP | 98.4 | 43.2 | | F-OAL | 98.2 | 96.4 | [1] Online continual learning through mutual information maximization, ICML 2022 [2] Learning a unified classifier incrementally via rebalancing, CVPR 2019 [3] Blockwise recursive Moore–Penrose inverse for network learning. IEEE TSMC-S, 2021 ## L4: Is the optimization process recursive on all batches of data? Why is the recursive method more efficient? Yes, the optimization process is on all batches of data. For **space efficiency**, our method is forward-only and exemplar-free, eliminating the need for gradients and additional memory buffers, thereby significantly reducing the GPU footprint. For **computational efficiency**, F-OAL does not have a backward pass, resulting in faster training. According to [1], in BP, the backward pass accounts for 70% of the entire time. 
[1] Decoupled Parallel Backpropagation with Convergence Guarantee. ICML 2018 ## L5: The innovation of this paper is insufficient. This method faces many disadvantages, such as overfitting. Our main contributions are 1) pinpointing BP as the main cause of CF, 2) introducing F-OAL, a recursive method instead of BP that well handles the OCIL problem, and 3) the fusion module and smoothed projection that enhance the performance of F-OAL. Regarding the overfitting disadvantage, we must respectfully disagree. The F-OAL is mainly linear regression, which usually invites "under-fitting" instead of "over-fitting" in nature. In addition, to avoid possible over-fitting (due to small data), we have introduced an $L_{2}$ regularization in Eq. 5. However, the F-OAL needs a fixed backbone. This is an existing disadvantage that will be discussed. ## L6: Many models that appeared in the comparative experiments did not have annotated references, such as DVC in Table 1. Thank you for pointing out the issue. We will add citations for each method in Table 1 and also review the rest of the paper to ensure there are no other similar problems. Table 1 will be revised in the following form with the correct citation: | Metric | Method | CIFAR-100 | CORe50 | FGVCAircraft | DTD | Tiny-ImageNet | Country211 | |--------|------------------------|-----------|--------|--------------|------|---------------|------------| | $A_{avg}$(↑) | DVC (CVPR 2022) [1] | 92.4 | 97.1 | 33.7 | 67.3 | 91.5 | 16.1 | ... [1] Not just selection, but exploration: Online class-incremental continual learning via dual view consistency. In CVPR 2022. Based on these additional results and clarifications, we hope you could consider increasing your score in support of this work. If not, could you kindly let us know what additionally needs to be done in your assessment to make this work ready for publication?
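As a self-contained sanity check of the claim that the recursively computed $W$ matches the joint least-squares solution, the following NumPy sketch (our own illustration with arbitrary dimensions, not the paper's code) streams mini-batches through Woodbury-style recursive updates and compares the result against the batch ridge-regression solution:

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 20, 5                       # feature dim, number of classes
X = rng.normal(size=(100, D))      # activations of all data
Y = rng.normal(size=(100, C))      # (one-hot-like) targets of all data

# Batch ridge solution: W* = (X^T X + I)^{-1} X^T Y
W_batch = np.linalg.solve(X.T @ X + np.eye(D), X.T @ Y)

# Recursive updates, one mini-batch at a time (Woodbury-style R update).
R = np.eye(D)                      # starts as the inverse of the regularizer
W = np.zeros((D, C))
for Xb, Yb in zip(np.split(X, 10), np.split(Y, 10)):
    S = Xb.shape[0]
    R = R - R @ Xb.T @ np.linalg.solve(np.eye(S) + Xb @ R @ Xb.T, Xb @ R)
    W = W + R @ Xb.T @ (Yb - Xb @ W)

assert np.allclose(W, W_batch)     # streamed result equals the batch solution
```

Up to numerical precision, the streamed result equals the all-data solution, which is the sense in which the recursive update avoids forgetting.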
Summary: This paper presents an analytic class incremental learning method that does not need backpropagation. The main idea is to use a pre-trained model to extract features, followed by random projection to a higher-dimensional space, and then use recursive least squares to update the linear regression weights. By doing so, the closed-form solution solves for all seen data and thus guarantees no forgetting. The experiments on several class incremental image classification tasks show superior results over many continual learning baselines. Strengths: - The empirical results of the method are very strong compared to other continual learning baselines. Weaknesses: - Starting from section 3.2, it's better to give more precise definitions for all notations, such as their dimensionality. - There are some confusing notations that can be improved. E.g., at line 121 you use $k$ to denote task, but at line 123 you use $k$ to denote batch, then in equation 9 you change to $n$ to denote batch. - There are some writing issues like typos in the main text, e.g. line 17, line 117, equation 4 (should be $\phi(X)^\top Y$?) Technical Quality: 3 Clarity: 2 Questions for Authors: - Does the time recorded in Table 2 for F-OAL include the ViT feature extraction time? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations of the method are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer UyXG Thank you for your valuable time in reviewing. We provide detailed information for your concerns below. ## W1: Give more precise definitions for all notations, such as their dimensionality. Thank you for your suggestion. We have included the following notation table. | Name | Description | Dimension | |---------------|------------------------------------------------------------------|------------------------------------------------| | $ϕ(X)$ | Activation of all images | $V×D$ ($V$ is the number of all images, $D$ is the encoder output dimension) | | $Y$ | One-hot label of all images | $V×M$ ($M$ is the number of all classes) | | $\hat{W}$ | Joint-learning result of classifier's weight matrix | $M×D$ | | $X_{k,n}^{(a)}$ | Activation matrix of the n-th batch of the k-th task | $S×D$ ($S$ is the batch size) | | $Y_{k,n}^{train}$ | One-hot label matrix of the n-th batch of the k-th task | $S×C_s$ ($C_s$ is the number of classes seen so far) | | $X_{k,1:n}^{(a)}$ | Activation matrix from the start to the n-th batch of the k-th task | $V_s×D$ ($V_s$ is the number of images seen so far) | | $Y_{k,1:n}^{train}$ | One-hot label matrix from the start to the n-th batch of the k-th task | $V_s×C_s$ | | $\hat{W}^{(k,n)}$ | Classifier of the n-th batch of the k-th task | $C_s×D$ | | $R_{k,n}$ | Regularized feature autocorrelation matrix up to the n-th batch of the k-th task | $D×D$ | ## W2: There are some confusing notations that can be improved. Thank you for your comments. We will revise the confusing notations as instructed. For instance, we will use $k$ to represent the task index and $n$ to represent the batch index respectively to avoid confusion. ## W3: There are some writing issues like typos. Thank you for pointing out these writing issues! We have corrected "class" to "classes" and "plan" to "plans" in line 17, changed "a" to "an" in line 117, and revised $ϕ(X)Y$ to $ϕ(X)^{⊤}Y$ in equation 4. 
Additionally, we will carefully review the paper to correct any other typos. ## Q1: Does the time recorded in Table 2 for F-OAL include the ViT feature extraction time? Yes, we have documented the entire training process, from the image entering the model to the completion of the model updates. ## L1: The limitations of the method are not discussed. Thank you for the suggestion. The major limitation is that, our method relies on well pre-trained backbones such as ViT and ResNet pre-trained on ImageNet. However, there are many open-source pre-trained backbones available within the deep learning community, which are relatively easy to obtain. Additionally, leveraging pre-trained models for fine-tuning downstream tasks has become a mainstream approach. Hence, although the need for backbones is an existing limitation of F-OAL, it is feasible and in line with mainstream. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' rebuttal. I still have minor concerns on the paper's presentation, as well as the limitation that it relies on a strong pretrained encoder. I will consider adjusting my score during the closed discussion. Thanks! --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Thank you for taking the time to read our response! This means a lot to us! Please let us know if there is anything more needed from us!
Summary: The authors address the problem of online class incremental learning (OCIL), where new tasks arrive periodically in a data stream and the trainer seeks to learn these new tasks without catastrophic forgetting of past performance. The paper presents two modes of OCIL: replay-based methods and exemplar-free methods. Replay-based methods offer strong performance but require storing some amount of replay data from the stream to include in incremental training. Exemplar-free methods lift this limitation, but have not thus far achieved comparable performance to replay-based methods. The authors' method (forward-only online analytic learning, or F-OAL) uses projections of blocks from a frozen, pre-trained ViT encoder as input to a trainable linear classifier, and then performs recursive least-squares updates on the linear classifier to avoid the catastrophic forgetting that backpropagation would cause in the same setting. The authors extensively compare their method by accuracy, training time, and memory usage to other recently-SOTA methods for OCIL, both replay-based and exemplar-free. The comparisons show that F-OAL significantly improves over existing exemplar-free methods and achieves comparable performance with replay-based methods, while being significantly more efficient in terms of both training time and memory usage. Lastly, the authors ablate their solution and find that keeping the ViT encoder frozen and using their analytically-learned classifier (instead of learning it through backpropagation) are both key ingredients in F-OAL's accuracy. Strengths: The paper is generally quite strong. The F-OAL method is natural and intuitive; overall, it seems like a significant improvement to the Pareto frontier of accurate and efficient OCIL. The comparisons to past SOTA in both replay-based and exemplar-free methods are extensive and compelling. The authors' exposition is clear and logical, and the experiments are largely informative and useful for the reader. 
The F-OAL method seems pragmatic, and a natural baseline to which all future work in this area should be compared. Weaknesses: This paper (and past papers in this vein) suggest that exemplar-free methods are "good for data privacy", but there is very little justification for this claim. I understand the basic premise as this: if replay-based methods force you to store some subset of the data stream, which is worse for users' data privacy than exemplar-free methods that don't require such storage. While it's worth noting as a design consideration/feature, I do not agree with this framing as relevant for "data privacy". There is no legitimate security model of privacy in the literature that would recognize this as "higher privacy". Once the model provider has seen/processed the data by running it through the model, any legitimate security model would view this as data that's been made public. Methods that enhance security (e.g. SMPC, HE, TEEs) and methods that provably reduce statistical privacy leakage (i.e. differential privacy) are orthogonal; these methods can be used interchangeably with both replay-based and exemplar-free methods! In a realistic setting, I can see how exemplar-free methods might help assuage compliance concerns or company-specific rules, but I know of no data privacy regulation or cryptographic threat model where this would be a relevant factor. I would suggest that the authors attempt to correct this misconception in the literature by simply stating the feature in terms of its utility: data need not be stored for replay in production. The reader can implicitly understand that this can have several benefits depending on the circumstances of the deployment. Otherwise, the only criticism I'd have for the paper is its ablation study. While it's a useful sanity check to see their result of ablating AC and FCC, the results with Frozen are obvious and unnecessary. 
The ViT-B model was developed for ImageNet-sized datasets, of course it will overfit CIFAR-100! I think there are more useful ablations that could be performed (more on that below). Technical Quality: 3 Clarity: 3 Questions for Authors: It seems clear to me that the authors method of fusing the ViT blocks $B_i(.)$ with random linear projection is valuable, but also not entirely necessary. The goal of this approach appears two-fold; (1) capture information from different levels of abstraction in the representation that their analytic classifier uses, and (2) be able to control the dimensionality of that representation, which will surely need to be tune-able at training time for different datasets (e.g. to avoid overfitting). The ablation in Appendix B reassures the reader that (2) is necessary, but none of the ablations suggest that (1) is necessary. A simple ablation that could've helped would be to compare their block-averaging + smoothed projection approach with a simpler method that applies the smoothed projection to the last ViT block. I'd be curious to know why the authors chose this particular approach. The paper itself states that this feature fusion was implemented to "further enhance the representativeness of the features", but is there any work or experiment they can point to that suggests this? In any case, this seems to be the only weak point of the paper, and a clarification or improvement would be nice. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The only concern I would have is that the paper (and its cited works) abuse the notion of "data privacy", which I have previously argued against in this review. Works that claim to "improve data privacy" without treatment of the staggering amount of literature that has gone into defining and proving what is and is not "private" in an information-theoretic, statistical, or engineering-focused sense are likely to muddy the waters for those fields. 
The paper's setting and solution are already worth publishing; in my opinion, the unsubstantiated data privacy claim is hurting more than it's helping. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer nHHU Thank you for your positive reviews and helpful suggestions. We provide detailed responses to your concerns below. ## W1: Abuse the notion of "data privacy". Thank you for raising this important concern. We agree that the use of “data privacy” is less appropriate in this paper. We will remove the use of "data privacy" and stick to “exemplar-free” (i.e., data need not be stored for replay in production). ## Q1: Ablation study results with frozen are obvious and unnecessary. Thank you for the suggestion. We will remove this unnecessary experiment and add more important ablation experiments based on your question. ## Q2: Simple ablation that could've helped would be to compare their block-averaging + smoothed projection approach. Thank you for pointing out the missing items in our ablation study. We have included a new set of experiments as follows, according to your suggestions. The results are shown below. | Block-averaging | Smoothed Projection | CIFAR-100 | CORe50 | FGVCAircraft | DTD | Tiny-ImageNet | Country211 | |--------|-----|------|--------|--------------|------|---------------|------------| | √ | √ | 91.1 | 96.3 | 62.2 | 82.8 | 91.2 | 24.4 | | × | √ | 90.6 | 95.3 | 60.9 | 80.5 | 91.4 | 21.3 | | √ | × | 90.7 | 95.4 | 58.7 | 79.3 | 91.2 | 22.8 | | × | × | 90.6 | 95.4 | 56.0 | 71.2 | 91.4 | 21.1 | We conducted an ablation study to demonstrate the contributions of the block-averaging fusion module and the smoothed projection module. The average accuracies are reported. As the table shows, without the two modules, the results of F-OAL are already competitive. The two modules further improve F-OAL's performance, especially on fine-grained datasets (e.g., DTD, FGVCAircraft and Country211). ## Q3: Is there any work or experiment they can point to that suggests feature fusion works? The idea of feature fusion is inspired by DenseNet [1], which suggests that hidden features are still helpful for generating a more representative final output. 
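To make the fused representation concrete, here is a toy sketch (our own illustration with hypothetical shapes and random stand-ins for the frozen encoder blocks; the exact smoothed-projection design follows the paper): per-block embeddings are averaged, then passed through a fixed random projection to the classifier's input dimension.

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in: features from L frozen encoder blocks (e.g. ViT),
# each giving a D_enc-dimensional embedding per image.
L, batch, D_enc, D_proj = 12, 4, 768, 1000

block_feats = [torch.randn(batch, D_enc) for _ in range(L)]

# Block-averaging fusion: average the per-block embeddings.
fused = torch.stack(block_feats).mean(dim=0)        # (batch, D_enc)

# Smoothed (random, frozen) projection to the classifier's input dimension.
proj = torch.randn(D_enc, D_proj) / D_enc ** 0.5    # fixed, never trained
activation = fused @ proj                           # (batch, D_proj)

assert activation.shape == (batch, D_proj)
```

Since both the fusion and the projection are frozen, only the analytic classifier on top of `activation` is ever updated.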
This approach can be considered equivalent to an ensemble of multiple backbones, which can provide feature diversity for F-OAL with a frozen encoder. [1] Densely Connected Convolutional Networks, CVPR 2017 --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for acknowledging and updating based on my remarks, these improvements are satisfactory in my view. However, I misunderstood the scoring procedure here. My score was contingent on those remarks being addressed, so it's unlikely to increase. I'm interested in the ongoing conversation with Reviewer UyXG as well, I hope that will be resolved before the closed discussion. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Thank you for taking the time to read our response! We are glad that the response addressed your concerns.
Summary: The paper introduces Forward-only Online Analytic Learning (F-OAL), an exemplar-free approach designed for Online Class Incremental Learning. The method addresses Catastrophic Forgetting by utilizing a pre-trained frozen encoder and a recursive-least-squares-updated linear classifier, which significantly reduces memory usage and computational time. The authors conducted extensive experiments to demonstrate the effectiveness of F-OAL on multiple benchmark datasets, showing its superior performance over existing exemplar-free methods and several replay-based methods. Strengths: The F-OAL framework introduces a forward-only learning mechanism that avoids back-propagation, effectively reducing computational overhead and memory footprint. By not relying on exemplar storage, F-OAL maintains data privacy, a crucial requirement in many real-world applications where data sensitivity is a concern. Weaknesses: While the paper is strong in many aspects, it lacks a detailed discussion on potential limitations of the proposed method, such as its dependence on the quality of the pre-trained encoder and the challenges that might arise in different data scenarios. Although the paper compares several baseline methods, including more recent and varied techniques could provide a more comprehensive evaluation of F-OAL’s relative performance. Some recent exemplar-free works could be easily generalized to the OCIL setting and should be considered, including: [1] Divide and not forget: Ensemble of selectively trained experts in continual learning, ICLR 2024 [2] R-dfcil: Relation-guided representation learning for data-free class incremental learning, ECCV 2022 [3] Self-sustaining representation expansion for non-exemplar class-incremental learning, CVPR 2022 [4] DiffClass: Diffusion-Based Class Incremental Learning, ECCV 2024 Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the weaknesses. 
Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please refer to the weaknesses. I think providing a more comprehensive comparison with recent state-of-the-art works, a complexity analysis and an overhead comparison would help justify the contribution of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Replies to Reviewer Pgch Thank you for your constructive and detailed feedback. We provide detailed responses to your concerns below. ## W1: Dependence on the quality of the pre-trained encoder. Thank you for the suggestion. Indeed, our method relies on well pre-trained backbones such as ViT and ResNet pre-trained on ImageNet. We acknowledge this as a limitation and will discuss it in the manuscript so readers can fully understand our technique. On the other hand, there are many open-source pre-trained backbones available within the deep learning community, which are relatively easy to obtain. Additionally, leveraging pre-trained models for fine-tuning downstream tasks has become a mainstream approach. Hence, although the need for pre-trained backbones is an existing limitation of F-OAL, it is feasible and in line with the mainstream. ## W2: Challenges that might arise in different data scenarios. Thank you for the suggestion. Indeed, we focus on coarse-grained data scenarios, such as CIFAR-100, Tiny ImageNet, and Core50, as well as fine-grained data scenarios, including DTD, FGVC Aircraft, and Country211. However, there are other data scenarios, such as **long-tail distributions**, which we have not addressed in this work. We shall include this discussion in the manuscript. ## W3: Including some recent exemplar-free works. Thank you for pointing out these baselines. However, they seem to be specially designed for non-pre-trained ResNet, and changing their backbones will compromise their performance. Therefore, we have included several alternatives, i.e., EASE [1], LAE [2] and SLCA [3], which are designed with pre-trained ViT and are the SOTA exemplar-free CIL approaches. **Please refer to the uploaded PDF in the general response.** In addition, the mentioned baselines [4-7] are good references, and we shall include them in our literature review to make our review complete. 
[1] Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning, CVPR 2024 [2] A Unified Continual Learning Framework with General Parameter-Efficient Tuning, ICCV 2023 [3] SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model, ICCV 2023 [4] Divide and not forget: Ensemble of selectively trained experts in continual learning, ICLR 2024 [5] R-dfcil: Relation-guided representation learning for data-free class incremental learning, ECCV 2022 [6] Self-sustaining representation expansion for non-exemplar class-incremental learning, CVPR 2022 [7] DiffClass: Diffusion-Based Class Incremental Learning, ECCV 2024 ## L2: A complexity analysis. Thank you for the suggestion, we have included the complexity analysis as follows. In terms of space complexity, our trainable parameters are only _**R**_ and _**W**_. The _**R**_ matrix is of size D × D , where D is the output dimension of the encoder. In our paper, the encoder output dimension is 1000. Therefore, according to Equation 9, the size of the _**R**_ matrix is 1000 × 1000 . The _**W**_ matrix has dimensions of C × D , where C is the number of classes in the target dataset. For example, with CIFAR100, its size is 100 × 1000 . The total number of trainable parameters is relatively small and does not require gradients. This results in our method using less than 2GB of memory, as shown in Figure 2. In terms of computational complexity, we denote the batch size as S, encoder’s output size as D, and class number as C. Therefore, the dimensions of $X$, $Y$, $R$ and $W$ are S×D, S×C, D×D and C×D, respectively. 
Thus, the calculation is shown below: The computational complexity for updating _**R**_ is dominated by the matrix multiplications, thus: $\max\{\mathcal{O}(SDC), \mathcal{O}(SC), \mathcal{O}(SDC), \mathcal{O}(D^2C), \mathcal{O}(DC)\} \approx \max\{\mathcal{O}(SDC), \mathcal{O}(D^2C)\}$ The computational complexity for updating _**W**_ is dominated by the matrix multiplications and the matrix inversion: $\max\{\mathcal{O}(SD^2), \mathcal{O}(S^2), \mathcal{O}(S^3), \mathcal{O}(DS^2), \mathcal{O}(D^2S), \mathcal{O}(D^2)\} \approx \max\{\mathcal{O}(S^3), \mathcal{O}(DS^2), \mathcal{O}(D^2S)\}$ In the OCIL setting, the batch size is relatively small. Therefore, the overall computational complexity is primarily determined by $D$. ## L3: Overhead comparison. In terms of space overhead, compared to the conventional backbone + classifier structure, F-OAL introduces an additional linear projection to control the output dimension $D$ of the encoder, and a matrix $R$, where only $R$ is trainable. According to Equation 9, the dimension of $R$ remains a fixed size of $D × D$. Other methods require more extra space. For instance, LwF employs knowledge distillation, necessitating the storage of additional models, while replay-based methods require extra storage to retain historical samples. In contrast, the overhead introduced by F-OAL, consisting of an additional matrix and a linear layer, is smaller. In terms of time overhead, our method primarily consists of a forward pass and matrix multiplication, which is determined by the output dimension of the encoder. By changing the output dimension of the encoder, we can balance accuracy and time. According to [8], the backward pass accounts for 70% of the time in backpropagation (forward pass + backward pass). Therefore, our method's time overhead is also relatively small. [8] Decoupled Parallel Backpropagation with Convergence Guarantee. 
ICML 2018 Based on these additional results and clarifications, we hope you could consider increasing your score in support of this work. If not, could you kindly let us know what additionally needs to be done in your assessment to make this work ready for publication? --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed response. I have also checked the feedback from other reviewers and will adjust my score accordingly. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Thank you for taking the time to read our response. Could you kindly adjust your score before the discussion closes, if our response addresses your concerns? Thanks!
Rebuttal 1: Rebuttal: # General Response We thank all the reviewers for their time, insightful suggestions and valuable comments. In summary, Reviewer nHHU appreciates that our work is **natural**, **intuitive**, and overall **quite strong**. The writing is **clear**, **logical**, **informative** and **useful** for the reader. Reviewer UyXG points out that our empirical results are **very strong**. Both Reviewer Pgch and Reviewer XB6P appreciate that our work effectively reduces computational overhead and memory footprint. We provide point-by-point responses to all reviewers’ comments and concerns. On the other hand, reviewers also point out that the recursive mechanism is relatively rare in OCIL and that the F-OAL is hard to fully understand purely through the derivations. To address this in the response, we have attached the source code for a **quick experiment** comparing BP and F-OAL on the MNIST dataset as follows. This code should run freely on any platform such as Colab. The **PDF also contains the table for Reviewer Pgch**. 
```
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
import torch.nn.init as init
import torch.nn.functional as F


class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 1000)
        self.fc2 = nn.Linear(1000, 10, bias=False)

    def forward(self, x):
        x = self.get_activation(x)
        return self.fc2(x)

    def get_activation(self, x):
        x = x.view(-1, 28 * 28)
        x = torch.relu(self.fc1(x))
        return x


transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = datasets.MNIST('./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST('./data', train=False, download=True, transform=transform)

train_dataset_5 = [(data, target) for data, target in train_dataset if target < 5]
test_dataset_5 = [(data, target) for data, target in test_dataset if target < 5]
train_loader_5 = torch.utils.data.DataLoader(train_dataset_5, batch_size=64, shuffle=True)
test_loader_5 = torch.utils.data.DataLoader(test_dataset_5, batch_size=1000, shuffle=False)

model = MLP()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)


def train(model, train_loader, criterion, optimizer, epochs=1):
    model.train().cuda()
    for epoch in range(epochs):
        for data, target in train_loader:
            optimizer.zero_grad()
            output = model(data.cuda())
            loss = criterion(output, target.cuda())
            loss.backward()
            optimizer.step()


def test(model, test_loader):
    model.eval().cuda()
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            output = model(data.cuda())
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.cuda().view_as(pred)).sum().item()
    print(f'\nTest set: Accuracy: {correct}/{len(test_loader.dataset)} '
          f'({100. * correct / len(test_loader.dataset):.1f}%)\n')
    return 100. * correct / len(test_loader.dataset)


print('We train the MLP on first 5 classes of MNIST via 1 epoch of BP (OCIL setting)')
train(model, train_loader_5, criterion, optimizer)
print('\nThe test accuracy on the first 5 classes is:')
old_acc = test(model, test_loader_5)

train_dataset_5to9 = [(data, target - 5) for data, target in train_dataset if target >= 5]
test_dataset_5to9 = [(data, target - 5) for data, target in test_dataset if target >= 5]
train_loader_5to9 = torch.utils.data.DataLoader(train_dataset_5to9, batch_size=64, shuffle=True)
test_loader_5to9 = torch.utils.data.DataLoader(test_dataset_5to9, batch_size=1000, shuffle=False)

print('Then we train the MLP on the rest of 5 classes')
train(model, train_loader_5to9, criterion, optimizer)
print('We test the model on the old 5 classes to see if it forgets the old knowledge')
new_acc = test(model, test_loader_5)
print(f'The accuracy drops from {old_acc:.1f}% to {new_acc:.1f}%, '
      f'suggesting that BP suffers a lot from CF')

print('\nNow we verify that our approach tackles CF')
print('For this easy dataset, we even do not need a powerful pre-trained backbone, and use the same MLP')
new_mlp = MLP()
R = (torch.eye(1000).float()).cuda().double()               # recursive state, fixed D x D
W = (init.zeros_(new_mlp.fc2.weight.t())).double().cuda()   # analytic classifier weights


def trainFOAL(new_model, train_loader):
    global R, W
    new_model.train().cuda()
    with torch.no_grad():
        for data, target in train_loader:
            data, target = data.cuda(), target.cuda()
            activation = new_model.get_activation(data).double().cuda()
            label_onehot = F.one_hot(target, 10).double().cuda()
            # Recursive least-squares (Woodbury) updates: no backward pass needed
            R = R - R @ activation.t() @ torch.pinverse(
                torch.eye(data.size(0)).cuda() + activation @ R @ activation.t()
            ) @ activation @ R
            W = W + R @ activation.t() @ (label_onehot - activation @ W)
    new_model.fc2.weight = torch.nn.parameter.Parameter(torch.t(W.float()))


print('Similarly, we still train our model on first 5 classes, and the test accuracy is:')
trainFOAL(new_mlp, train_loader_5)
old_FOAL_acc = test(new_mlp, test_loader_5)
print('Then we train our model on rest 5 classes, and the test accuracy on old classes is:')
trainFOAL(new_mlp, train_loader_5to9)
new_FOAL_acc = test(new_mlp, test_loader_5)
print(f'The small gap between {old_FOAL_acc:.1f}% and {new_FOAL_acc:.1f}% '
      f'suggests that our model does not forget the old knowledge')
```
Pdf: /pdf/6cff7c984d1d645ea9c9bd5b52624cbe4c86fb99.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models
Accept (poster)
Summary: The authors present a novel method for CL called CLAP (Continual LeArning with Probabilistic finetuning) applied to the CLIP model. The technique employs probabilistic modeling to refine task-specific modules aligned with visual-guided text features, improving model adaptation to new tasks while mitigating the forgetting of old ones. It also benefits from the knowledge of pre-trained CLIP for this purpose. The method is also compatible with prompt-based finetuning methods. The approach demonstrates superiority over traditional deterministic finetuning methods through enhanced performance and better uncertainty estimation in diverse experimental setups. Strengths: 1. **Originality and Significance:** The paper introduces a unique approach to CL by integrating probabilistic finetuning with the CLIP model. The proposed pipeline is innovative and significant, as it addresses the critical issue of catastrophic forgetting in CLIP when trained on streams of tasks using well-justified solutions. 2. **Quality and Clarity:** The paper is well-written, with clear explanations of the methodology and its advantages. 3. **Technical Soundness:** The experimental setup is comprehensive, covering several datasets and comparative baselines. The results convincingly demonstrate the effectiveness of CLAP4CLIP in improving in-domain performance and generalization to new tasks. In particular, achieving positive backward transfer on VTAB (Table 12) was a very interesting finding. Weaknesses: 1. **Presentation Issues:** Figure 1 could be larger to improve readability and clarity. The paper would benefit from better visualization to help convey the complex mechanisms of the proposed method more effectively. 2. **Reference Order and Citations:** The order of the references seems incorrect, which could potentially confuse readers. Additionally, Reference [36] should be discussed in the related works section as well to better contextualize the contributions of the paper. 
Technical Quality: 4 Clarity: 4 Questions for Authors: Please address the mentioned weaknesses. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The limitations of this work are discussed and presented in Appendix E. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer ey1B, Thank you for your comments. - We assure you that we will enlarge Figure 1 in the next version of the paper to improve its readability and clarity. - We will correct the reference order accordingly and discuss the relevant reference in the related works. We apologize for any confusion this may have created. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for addressing my concerns. I will be keeping my original score.
Summary: This paper introduces CLAP4CLIP, a method designed to enhance continual learning (CL) using CLIP, a pre-trained vision-language model. The method leverages probabilistic fine-tuning with task-specific adapters to mitigate the issue of catastrophic forgetting commonly faced in CL. By incorporating visual-guided attention (VGA) modules, the model aims to align text features with frozen visual features during incremental training. The proposed method is evaluated on several datasets, including CIFAR100, ImageNet100, ImageNet-R, CUB200, and VTAB, and compared against multiple baselines and state-of-the-art fine-tuning methods. Strengths: 1. The paper presents a novel approach by integrating probabilistic fine-tuning and visual-guided attention into CLIP for continual learning. The use of task-specific adapters and the Bayesian variational inference framework adds a unique angle to the existing methods. 2. The methodology is well-structured, and the experiments are thorough, comparing the proposed method against a wide range of baselines and state-of-the-art approaches. 3. Addressing catastrophic forgetting in CL is a significant challenge, and the proposed method offers a promising solution. The integration with CLIP, known for its zero-shot learning capabilities, highlights the potential impact on real-world applications. Weaknesses: 1. Given CLIP’s strong generalization abilities and its effectiveness in zero-shot learning, the reliance on replay strategies might seem redundant and potentially underutilizes CLIP’s full capabilities. 2. The paper does not explicitly detail whether the VGA module is updated during incremental tasks. This is a critical aspect, as the update strategy could significantly impact the model's performance and stability. 3. The method involves multiple components, such as probabilistic fine-tuning, VGA modules, and task-specific adapters, which may introduce significant computational overhead. 
Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Could you provide more insights into why the replay strategy is necessary and how it complements the use of CLIP in your method? 2. How is the VGA module handled during incremental tasks? Are its parameters updated, and if so, what strategy is used to ensure consistency and stability across tasks? 3. What are the computational requirements of your method compared to the baselines, especially concerning the additional components introduced? How do you balance the trade-off between performance improvements and computational efficiency? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: * Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer KrxH, Thank you for your comments and suggestions. In what follows, we have tried our best to address your concerns. - **Our method stands out without replay, replay further boosts its performance:** Thank you for your comment. We would first like to mention that our additional rebuttal experiments without memory replay (Table 2 in the rebuttal pdf) clearly show that our method outperforms the compared SOTA even without replay. The use of memory replay in our proposed continual learning method is thus not redundant, but rather complementary to CLIP's existing strengths. CLIP's power primarily comes from its strong alignment between visual and textual modalities. As we finetune the model to incremental tasks, memory replay helps maintain this crucial alignment for previously learned tasks. Without it, the model might drift, compromising its image-text retrieval performance and in turn, its zero-shot capabilities on previously seen concepts while learning new ones. More importantly, like other CL setups, our use of replay is confined to the training process. Once the model is fine-tuned, there's no additional computational overhead during inference brought about by memory replay. In addition, from the perspective of prior regularization, memory replay has broadly been used for adapting/finetuning a range of foundation models, e.g. the adaptation of vision and language models [1], latent diffusion models [2], and large language models [3]. These practical scenarios often demand that the deployed foundation models maintain their zero-shot transfer ability as well as their inference-time efficiency. Replay thus helps boost the former without affecting the latter. Lastly, in Section 5 of the main paper, we discuss that our proposed method is **agnostic** of the strategy used to select exemplars for memory replay. 
To support this, in Table 18, Appendix D.2, we show that our method works well even with an entropy-based exemplar selection strategy – a setup where existing deterministic methods lag due to their poor predictive confidences. - **Working of VGA module:** Thank you for your comment and apologies for the working of the VGA module being unclear from the main paper – we mention it explicitly in Appendix A.2 “Training for memory consolidation”. The VGA module is indeed shared across different tasks and its parameters are updated normally during the training phase when our training data comprises the current task data as well as the replay memory data. This is followed by the training phase for memory consolidation where we finetune our model on the class-balanced dataset of new data and rehearsal data. Following other well-established parameter-isolation CL algorithms [4-5], here we freeze the task-shared VGA parameters to avoid interference with the knowledge acquired during the normal training phase (given that during the consolidation phase, our training data is vastly reduced and comprises only the class-balanced data maintained in the small replay buffer). - **Computational requirements of our method:** We provide a detailed comparison of the parameter and time analyses for different methods in Fig. 4 and Appendix Table 16, respectively. A major computational overhead for our proposed probabilistic method is the number of Monte Carlo samples. In App C.1, we thoroughly report the accuracy-runtime tradeoff for our proposed method with varying numbers of MC samples. Namely, the accuracy remains poorer in the range [1,10], grows in the range [10, 20], and in general, saturates thereafter. On the other hand, the runtime grows roughly by 103% as we increase the number of MC samples from 1 to 50. From the perspective of probabilistic finetuning, we also analyze the performance vs efficiency trade-off for the choice of prior type in Appendix C.4. 
As shown in Table 15, compared to the static standard normal prior, the data-driven and language-aware priors offer us minor performance gains in terms of last and avg. accuracy and backward transfer scores. However, these performance gains are neutralized by the higher cost of runtime per inference iteration. As a result, we stick to using the standard normal prior throughout our work. References: [1] Smith, James Seale *et al.* “Adaptive Memory Replay for Continual Learning.” CVPR 2024 Workshops. [2] Kumari, Nupur *et al.* “Multi-Concept Customization of Text-to-Image Diffusion.” CVPR 2022. [3] Wang, Yifan *et al.* “InsCL: A Data-efficient Continual Learning Paradigm for Fine-tuning Large Language Models with Instructions.” ACL 2024. [4] Castro, Francisco Manuel *et al.* “End-to-End Incremental Learning.” ECCV 2018. [5] Douillard, Arthur *et al.* “DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion.” CVPR 2022. --- Rebuttal Comment 1.1: Title: Requesting feedback on our rebuttal Comment: Dear Reviewer KrxH, We thank you again for taking the time to review this work. We put our best effort into preparing the rebuttal to your questions, including running experiments without memory replay. We would very much appreciate it if you could provide us with your feedback on our rebuttal. We would be glad to answer any further questions and clarify any concerns. Also, if you are satisfied with our answers, please consider revising your score. With best regards
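To make the MC-sample/runtime trade-off discussed in the rebuttal above concrete, here is a hedged sketch of Monte Carlo inference over Gaussian distributions of (visual-guided) text features. All names, shapes, and the diagonal-Gaussian choice are our illustrative assumptions, not the authors' actual implementation; the point is that runtime grows linearly with `n_samples`:

```python
import numpy as np

def mc_cosine_logits(visual_feat, text_mu, text_logvar, n_samples=20, seed=0):
    """Average cosine-similarity logits over Monte Carlo samples drawn from
    per-class diagonal Gaussians over text features.
    visual_feat: (D,); text_mu, text_logvar: (K, D) for K classes."""
    rng = np.random.default_rng(seed)
    std = np.exp(0.5 * text_logvar)
    v = visual_feat / np.linalg.norm(visual_feat)
    logits = np.zeros(text_mu.shape[0])
    for _ in range(n_samples):  # cost scales linearly with the number of MC samples
        t = text_mu + std * rng.standard_normal(text_mu.shape)  # reparameterized sample
        t = t / np.linalg.norm(t, axis=1, keepdims=True)
        logits += t @ v
    return logits / n_samples

# Illustrative dimensions: 8-dim features, 5 classes
D, K = 8, 5
rng = np.random.default_rng(1)
v = rng.standard_normal(D)
mu = rng.standard_normal((K, D))
logvar = np.full((K, D), -2.0)
logits = mc_cosine_logits(v, mu, logvar)
assert logits.shape == (K,)
```

With `n_samples=1` the cost matches a deterministic forward pass; increasing it towards 20-50 trades runtime for the accuracy gains reported in Appendix C.1.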
Summary: The paper emphasizes on the existing limitations of deterministic approaches in fine-tuning and highlights the need for probabilistic fine-tuning approach. Following this, it proposes a probabilistic parameter efficient fine-tuning method for continually learning vision language models like CLIP. Strengths: 1. The proposed approach seems novel. Weaknesses: 1. Justification of probabilistic modelling of the text feature and not the image feature space is not clear. 2. The approach is highly inefficient in terms of inference time. 3. Recent approaches like ConvPrompt[a], CODA-Prompt[b], HiDe-Prompt[c], and SLCA[d] not compared. [a]Roy, Anurag, et al. "Convolutional Prompting meets Language Models for Continual Learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024 [b]Smith, James Seale, et al. "Coda-prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [c]Wang, Liyuan, et al. "Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality." Advances in Neural Information Processing Systems 36 (2024). [d]Zhang, Gengwei, et al. "Slca: Slow learner with classifier alignment for continual learning on a pre-trained model." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Could you address the points I raised in the weakness section? 2. The paper demonstrates a ~2% performance improvement with the addition of the memory consolidation component. However, I'm interested in seeing how the proposed method performs when the replay memory size is set to zero. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations of their work in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer bibK, Thank you for your comments and suggestions. In what follows, we have tried our best to address your concerns. - **Why do we do probabilistic modeling of text feature space?** We opt for probabilistic modeling of task-specific text feature space rather than image feature space mainly in light of the practical constraints imposed by the class-incremental learning (CIL) setting. In CIL, at test time, we are not given the task labels for images. As such, if we were to use task-specific adapters to model task-specific visual feature distributions (rather than task-specific text feature distributions as we do now), then we must know which images should be routed to what adapter – something not plausible at test time. A naive workaround would be to route the text-guided visual features to all available adapters and then infer the correct prediction based on the adapter outputs. Such an exhaustive routing mechanism would greatly increase our computational burden at test time. Modeling the distribution of visual-guided text features helps us overcome this because now our visual features serve as a shared context to which all task-specific text features (which we can distinguish simply by their labels) can attend. By sampling from the distributions over such visual-guided task-specific text features, we can compute their cosine similarities with the visual features to obtain our predictive logits. - **Comparison with zero replay memory size:** In Table 2 of the rebuttal pdf, we have provided a comparison of our proposed method with the SOTA models for continual learning without replay with ViT (i.e., CODA-Prompt) and with CLIP (i.e., AttriCLIP). Here, leveraging the instance-conditioned and semantically diverse prompts of AttriCLIP provides an edge. Our variant leveraging AttriCLIP further improves its performance surpassing the SOTA. 
In the bottom two rows of the table, we ablate the role of our proposed language-aware distribution regularization and weight init. components and find that the former is crucial in avoiding forgetting in this setting. - **Inference time overhead:** The inference time of our method is highly dependent on the number of MC samples as well as the prompt type being used. For instance, in Table 16 Appendix C.5, we show that using multi-modal prompts (MaPLe) with Ours leads to an average inference time of 0.064s which is comparable to and even lower than other existing methods like AttriCLIP (0.257s). Also, in our rebuttal to Reviewer **giBN**, we state that the inference time of several variants of our method remains lower than the compared SOTA (PROOF) -- 0.163s (Ours) vs 0.177s (PROOF). We nonetheless agree that MC sampling for probabilistic modeling endows our method with higher inference time overhead compared to other deterministic methods (something we have thoroughly ablated in Fig. 8, Appendix C.1). However, such a caveat is known to be general for probabilistic models, and we believe that rejecting our method based on this (while downweighing the wide range of performance advantages) would be unfair. Put together, our proposed method outperforms several existing SOTA with better accuracy, backward and forward (zero-shot) transfer ability, and calibration, and also performs better on settings without memory replay and with restricted computational budget, as shown in Table 2 and Table 4 in the rebuttal pdf, respectively. - **Comparison with every single recent work:** We would like to state that continual learning with pre-trained foundation models is a rapidly evolving field, as is the evolution of new pre-trained foundation models itself. Given this fast-moving environment, it is challenging to provide an exhaustive comparison with every single recent work. 
While we strive for comprehensive analysis, we hope the reviewer understands that an all-encompassing comparison may not be practically achievable within the scope of this work. That said, we have thoroughly covered the **most relevant** SOTA models for vision-language models (PROOF and AttriCLIP) as well as for vision-only models (DualPrompt and CODA-Prompt) across a range of datasets and settings. --- Rebuttal Comment 1.1: Title: Requesting feedback on our rebuttal Comment: Dear Reviewer bibK, We thank you again for taking the time to review this work. We put our best effort into preparing the rebuttal to your questions, including reporting the experiments on zero replay memory size, comparing with CODA-Prompt, and justifying design choices/inference time overhead. We would very much appreciate it if you could engage with us through your feedback on our rebuttal. We would be glad to answer any further questions and clarify any concerns. Also, if you are satisfied with our answers, please consider revising your score. With best regards
Summary: This paper proposes Continual Learning with Probabilistic Finetuning (CLAP) for class-incremental learning using CLIP. The key modules of the proposed idea are as follows. First, the authors introduce a CLIP-based probabilistic finetuning model using Bayesian Variational Inference to achieve better generalization during the continual learning process. Second, they propose a visual-guided attention (VGA) model to facilitate cross-modal feature alignment in the continual learning process. Lastly, to alleviate forgetting of pre-trained language-aware CLIP knowledge, they suggest past-task distribution generalization. Additionally, to enable parameter-efficient learning, a probabilistic adapter is used, along with a method for its initialization to enhance stability. Experimental results on various datasets demonstrate that the proposed algorithm achieves superior class-incremental learning performance compared to existing algorithms. Strengths: The strengths of this paper are as follows: 1. The paper is well-written and easy to understand. 2. The proposed modules for successful class-incremental learning using CLIP are meticulously designed. Through various analyses and ablation studies, the roles of each module are thoroughly demonstrated. Although the algorithm appears somewhat complex compared to existing algorithms, parameter and time analyses show that the actual cost difference is negligible. 3. In class-incremental learning experiments using diverse datasets, the proposed algorithm consistently outperforms existing algorithms across various evaluations. Weaknesses: 1. I have no major concerns regarding the contribution of the proposed algorithm for class-incremental learning using CLIP. However, I have some questions based on the experimental results. 1-1) Unlike algorithms like L2P and DualPrompt, which utilize exemplar memory, the paper does not use exemplar memory. 
Are the results in L2P and DualPrompt sections of the paper based on using exemplar memory in a fair comparison? To ensure a fair comparison, results using exemplar memory should be shown. 1-2) The paper only considers prompt-based algorithms like L2P and DualPrompt using Vision Transformer (ViT). However, newer prompt-based algorithms (e.g., CODA-Prompt) that achieve better performance and representation-based algorithms (e.g., Ranpac and EASE) that generally improve performance have been proposed. The authors should consider these algorithms as additional baselines. For more details, please refer to this survey paper [1]. 1-3) Among existing baselines, PROOF is currently the state-of-the-art algorithm. While Table 1 shows PROOF's results, Tables 2 and 16 do not. To validate the superiority of the proposed algorithm, results for PROOF should be shown in these tables and other key experiments. 2. Recent research has highlighted discussions on hyperparameter tuning for class-incremental learning algorithms [2] and pointed out issues regarding computational costs[3]. From this perspective, I believe that further discussion and additional analysis on computational costs regarding the proposed algorithm would make the paper more convincing. 3. Lastly, I have personal concerns regarding the necessity of class-incremental learning research using CLIP. As shown in [1], algorithms using only ViT have already achieved excellent performance in class-incremental learning research. Despite using pretrained ViT, one of their major advantages is achieving superior performance without using exemplar memory. Considering this perspective, the proposed research not only uses a pretrained CLIP model but also requires text information and additional exemplar memory usage. Simply comparing numerical experimental results suggests that the proposed algorithm does not achieve overwhelmingly superior performance compared to algorithms using only pretrained ViT. 
In light of this, I would like to understand: 1) What justifies the necessity of class-incremental learning research using CLIP with exemplar memory? Additionally, 2) I am curious about the potential for the proposed algorithm's ideas to be applied to continual learning in other domains using CLIP or continual pre-training of CLIP itself. [1] "Continual learning with pre-trained models: A survey." arXiv preprint arXiv:2401.16386 (2024). [2] "Hyperparameters in Continual Learning: a Reality Check." arXiv preprint arXiv:2403.09066 (2024). [3] "Computationally budgeted continual learning: What does matter?." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: See the Weakness section. I don't have significant concerns about the algorithm proposed in this paper, but I couldn't give it a higher score due to some experimental uncertainties and questions about the necessity of the setting. If the authors address these concerns in the rebuttal, I would gladly raise my score. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There is no potential negative societal impact of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer giBN, Thank you for your comments and suggestions. We have tried our best to address your concerns below. 1-1) **Fair comparison with L2P and DualPrompt:** The scores we report for L2P and DualPrompt are fair and use a CLIP-based backbone as well as memory replay with the same number of exemplars as our method. As also mentioned in our global rebuttal, both L2P and DualPrompt originally use the ViT-B/16 backbone pre-trained on the ImageNet-21K dataset [1] which leads to higher performance scores on the compared datasets. However, replacing this pretrained ViT backbone with that of OpenAI CLIP’s pretrained ViT backbone leads to a drop in the performance across all datasets and clearly necessitates the need for memory replay to catch up with our proposed method. The aforesaid drop in performance has also been observed by other studies adapting L2P and DualPrompt for continual learning with CLIP [2]. 1-2) **CODA-Prompt as baseline:** Based on your comment, we have included CODA-Prompt (reimplemented using the pre-trained OpenAI CLIP’s ViT backbone) into our baselines. Like L2P and DualPrompt, the original CODA-Prompt paper uses ImageNet-21K pre-trained ViT. Replacing this with CLIP’s ViT backbone necessitates the need for memory replay (see Table 1 vs Table 2 in the rebuttal pdf) to catch up with the performance of our method. Hence, our comparison with CODA-P, L2P, and DualPrompt remains fair and consistent. 1-3) Thank you for your concern. We have included the comparison with PROOF on the Cross-Dataset Continual Learning setup in Table 3 of the rebuttal pdf. Also, adding to Table 16, the avg. inference time for PROOF remains 0.177s which is slower than the base (Ours) and the MaPLe variant of ours but faster than our other variants utilizing task-conditioned (CoOp) and instance-conditioned (AttriCLIP) prompts. 
2) **Computationally-budgeted CL setup:** Regarding the computationally-budgeted CL setup, we would like to state that not all CL training setups need to be computationally budgeted. In fact, upon finetuning of large pre-trained models like CLIP, it is often the case that their performance post-deployment, both in terms of accuracy/zero-shot transfer ability and inference time, counts the most. Our thorough evaluations make a clear statement about the upper hand of our method in terms of performance based on a number of metrics – accuracy, backward and forward (zero-shot) transfer ability, and calibration. Our inference time is also mainly dependent on the type of prompts used with our method. In Table 4 of the rebuttal pdf, we have nevertheless provided an ablation on the “normal” budgeted CL setup of [3] where, on each incremental training task, we allocate the number of training iterations equivalent to 1 epoch on the first (base) task of each dataset. Here, our variant utilizing instance-conditioned prompts of AttriCLIP outperforms other compared methods. A further ablation shows that our proposed weight distribution regularization technique indeed remains a crucial component in tackling forgetting on the budgeted setup (see the two bottom-most rows in Table 4). 3) **“The proposed algorithm does not achieve overwhelmingly superior performance .. “:** In light of the consistent superior performance scores of our method without using additional exemplar memory replay and on the computationally budgeted CL setup, we hope that the reviewer reconsiders their remark. 4) **Justification on exemplar memory:** We would like to state that the use of memory replay in our proposed continual learning method is not redundant, but rather complementary to CLIP's existing strengths. 
To justify this, in Table 2 of the attached pdf, we show that our method is indeed compatible with setups without memory replay, and performs either on par or better than other compared methods, including ViT-only methods. Moreover, we would like to highlight two such perspectives to clarify that the usage of a small additional exemplar memory is not to be seen as an overhead in continual learning. **First,** for pre-trained foundation models, a critical performance measure for deploying their (incrementally) finetuned variants is often their inference time overhead and their downstream/zero-shot transfer abilities. The use of exemplar memory does not affect the inference time overhead of these methods. In fact, comparing Tables 1 and 2 from the rebuttal pdf, it is clear that using replay memory boosts their downstream task performance. Lastly, we clearly state the superior zero-shot (forward) and backward transfer abilities of our proposed method in Sec. 4.1 in the main paper. **Second,** if we were to look at exemplar memory from a prior regularization perspective, then it becomes immediately clear how predominant the usage of these is across different domains, e.g. the adaptation of Vision-Language Models [4], latent diffusion models [5], and LLMs [6]. References: [1] Dosovitskiy, Alexey *et al.* “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.” ICLR 2021. [2] Zhou, Da-Wei *et al.* “Learning without Forgetting for Vision-Language Models.” [3] Prabhu, Ameya *et al.* “From Categories to Classifier: Name-Only Continual Learning by Exploring the Web.” ArXiv abs/2311.11293 (2023). [4] Smith, James Seale *et al.* “Adaptive Memory Replay for Continual Learning.” CVPR 2024 Workshops. [5] Kumari, Nupur *et al.* “Multi-Concept Customization of Text-to-Image Diffusion.” CVPR 2022. [6] Wang, Yifan *et al.* “InsCL: A Data-efficient Continual Learning Paradigm for Fine-tuning Large Language Models with Instructions.” ACL 2024. 
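As an aside on the exemplar-memory discussion above: the kind of small, class-balanced replay buffer at issue can be sketched generically as follows. This is a hypothetical illustration using per-class reservoir sampling; as the rebuttal notes, the method itself is agnostic to the exemplar-selection strategy (random, herding, entropy-based):

```python
import random
from collections import defaultdict

class ClassBalancedBuffer:
    """Generic class-balanced exemplar memory for replay (illustrative sketch)."""
    def __init__(self, capacity_per_class, seed=0):
        self.capacity = capacity_per_class
        self.store = defaultdict(list)   # class label -> kept exemplars
        self.seen = defaultdict(int)     # class label -> examples observed so far
        self.rng = random.Random(seed)

    def add(self, example, label):
        self.seen[label] += 1
        slot = self.store[label]
        if len(slot) < self.capacity:
            slot.append(example)
        else:
            # Reservoir sampling: every seen example is kept with equal probability
            j = self.rng.randrange(self.seen[label])
            if j < self.capacity:
                slot[j] = example

    def sample(self, n):
        pool = [(x, y) for y, xs in self.store.items() for x in xs]
        return self.rng.sample(pool, min(n, len(pool)))

# Stream 10 examples over 2 classes with a 2-per-class budget
buf = ClassBalancedBuffer(capacity_per_class=2)
for i in range(10):
    buf.add(f"img_{i}", label=i % 2)
assert all(len(v) == 2 for v in buf.store.values())
assert len(buf.sample(3)) == 3
```

Note that the buffer is used only during training; at inference time it contributes no overhead, which is the point made in the rebuttal above.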
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for conducting additional experiments and providing an author response to address the concerns raised in my review. I have thoroughly read both the author response and the updated PDF, and as a result, I am increasing my score to 5. I hope that the final version of the paper will fully incorporate the contents of the response.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their comments and constructive suggestions on our manuscript. Here, we provide a pdf containing results for the experiments asked in the reviews. We also highlight the three major points raised in the reviews and how our rebuttal has addressed these: - **Comparison with CODA-Prompt:** We have included CODA-P into our baselines and have compared it with our method across all datasets with and without replay in Table 1 and 2 of the rebuttal pdf, respectively. Note that similar to L2P and DualPrompt, CODA-P paper uses ImageNet-21K pre-trained ViT-B/16 model as the baseline. Given the similarity of ImageNet-21K images with the compared datasets, this leads to quite high performances. For a **fair** comparison, we thus reimplement CODA-P using pretrained OpenAI CLIP’s ViT backbone (similar to how we compare L2P and DualPrompt in our paper). The latter reimplementation then demands memory replay to catch up with the performance of our proposed method, thus making our comparisons fair. Lastly, as shown in Table 1, our method surpasses CODA-Prompt across all datasets. - **Our method works without replay, replay further boosts its performance:** In Table 2 of the rebuttal pdf, we have provided a comparison of our proposed method with the SOTA models for replay-free continual learning using ViT-based (i.e., CODA-Prompt) and CLIP-based (i.e., AttriCLIP) backbones. Here, leveraging the instance-conditioned and semantically diverse prompts of AttriCLIP provides an edge. Our variant leveraging AttriCLIP further improves its performance surpassing the SOTA. In the bottom two rows of the table, we ablate the role of our proposed language-aware distribution regularization and weight init. components and find that the former is crucial in avoiding forgetting in this setting. 
- **Our method is agnostic of the exemplar selection strategy used for replay:** In Section 5 in the main paper, we discuss that our proposed method is agnostic of the strategy used to select exemplars for memory replay. To support this, in Table 18, Appendix D.2, we show that our method works well even with entropy-based exemplar selection strategy – a setup where deterministic methods generally lag due to their poor predictive confidences [1]. References: [1] Chaudhry, Arslan *et al.* “Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence.” ECCV 2018. Pdf: /pdf/521e7a3729345becff0c370489a57da32a86d3c0.pdf
NeurIPS_2024_submissions_huggingface
2024
KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge
Accept (poster)
Summary: This paper introduces KG-FIT, a general framework that enhances the expressiveness of existing Knowledge Graph Embedding (KGE) models by integrating LLMs. KG-FIT contains four key steps: First, it utilizes an LLM to generate descriptions for a set of given entities, forming an enriched entity representation by concatenating the entity's embedding with its description. Second, it constructs a semantically coherent seed hierarchical structure. Third, it leverages the real-world entity knowledge captured by the LLM to refine this hierarchical structure. Finally, it fine-tunes the knowledge graph embeddings by integrating the hierarchical structure with textual embeddings. Extensive experiments validate the effectiveness of KG-FIT. Strengths: 1. The motivation of the proposed KG-FIT is clear and the paper is well-structured. 2. Extensive experimental results demonstrate that KG-FIT can improve most KGE baseline models. 3. Code is provided for reproducibility. Weaknesses: 1. The performance of KG-FIT heavily relies on the LLM used for generating entity descriptions and guiding the refinement of the seed hierarchy. If the LLM lacks comprehensive real-world entity knowledge about the given entities or domains, the resulting embeddings may be suboptimal. Furthermore, this dependency could make KG-FIT less effective in situations where the LLM has limited coverage. 2. If the LLM has limited coverage of a specific domain, the LLM-guided hierarchy refinement process may yield incorrect results, potentially distorting rather than enhancing the structure of the well-formed seed hierarchy. 3. The paper does not justify the selection of agglomerative hierarchical clustering and the use of the silhouette score. Additional ablation studies would enhance this work. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. The explicit description of $P$, $L$, and $R$ in line 134 is unclear. 2. 
It remains unclear whether the efficiency evaluation of training time contains the duration required for seed hierarchy construction and LLM-guided hierarchy refinement stages. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have discussed the limitations of their work in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and their recognition of our work's strengths. We address their concerns as follows: --- > ### **[W1/W2]** (Dependency on LLM knowledge and potential issues with limited domain coverage) We appreciate the reviewer's concern about KG-FIT's reliance on LLM knowledge. While we view this as a strength that allows our method to leverage vast and continually improving knowledge, we acknowledge the potential limitations in highly specialized domains. To address this, we propose several mitigating strategies: 1. Incorporating external knowledge: We can enhance LLM performance in specialized domains by using Retrieval-Augmented Generation (RAG) to incorporate domain-specific external knowledge bases during the description generation process. 2. Leveraging KG context: For domains where external knowledge is limited, we can use the context within the Knowledge Graph itself to generate more informative entity descriptions. This approach ensures that even without extensive domain knowledge, the LLM can still provide useful descriptions based on the relationships and attributes present in the KG. 3. Fallback to seed hierarchy: In cases where the LLM truly lacks domain-specific knowledge, our results show that the seed hierarchy alone still significantly improves KG embeddings. As demonstrated in Table 2 of our paper, KG-FIT with just the seed hierarchy (before LLM refinement) consistently outperforms base models across all datasets. 4. Domain-specific LLMs: When available, using domain-specific LLMs can provide more accurate and relevant knowledge for specialized fields. These strategies ensure that KG-FIT remains effective and adaptable across a wide range of domains, from general knowledge to highly specialized fields. Our experiments on diverse datasets (FB15K-237, YAGO3-10, PrimeKG) demonstrate KG-FIT's robustness and broad applicability, even when dealing with domain-specific knowledge. 
In future work, we plan to explore methods for automatically selecting the most appropriate strategy based on the domain and available resources, further enhancing KG-FIT's versatility and performance. > ### **[W3]** (Justification for agglomerative clustering and silhouette score) We appreciate the reviewer's suggestion for additional justification. We chose agglomerative clustering for its natural ability to create a hierarchical structure without pre-specifying the number of clusters, which is crucial for our approach. The silhouette score was selected as it balances both cluster cohesion and separation, providing a robust measure of clustering quality. To address the reviewer's concern, we will conduct additional ablation studies comparing different clustering methods: 1. Top-down agglomerative clustering: This variant will start with all entities in one cluster and progressively split them, potentially offering a different perspective on the hierarchy. 2. K-means: We will implement a recursive K-means process, where we first cluster all entities, then recursively apply K-means to each resulting cluster until a stopping criterion is met (e.g., cluster size or maximum depth). This will create a top-down hierarchical structure. 3. DBSCAN: We will use a similar recursive approach as with K-means, but DBSCAN's ability to detect noise points will allow us to create a hierarchy that potentially captures outliers at higher levels. For evaluation metrics, alongside the silhouette score, we will also compare the Calinski-Harabasz index and Davies-Bouldin index. We will include these results in the appendix of our revised paper, providing a comprehensive comparison of different clustering methods and their impact on KG-FIT's performance. --- > ### **[Q1]** (Clarification on P, L, R in line 134) We apologize for the lack of clarity. P, L, and R refer to Parent, Left child, and Right child, respectively. We will add this explanation to the paper for better understanding. 
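Returning to the clustering ablation planned under **[W3]**: the proposed comparison of clustering methods under the three evaluation indices could be sketched as follows. This is a hedged illustration on toy data standing in for the enriched entity embeddings, not the authors' pipeline; method and metric choices come from the rebuttal, everything else is assumed.

```python
# Sketch (not the authors' code): compare agglomerative clustering and
# K-means on toy "entity embeddings" using the three indices named in
# the rebuttal: silhouette, Calinski-Harabasz, and Davies-Bouldin.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (
    silhouette_score,
    calinski_harabasz_score,
    davies_bouldin_score,
)

# Stand-in for enriched entity embeddings with 4 latent groups (assumed).
X, _ = make_blobs(n_samples=200, centers=4, n_features=16, random_state=0)

for name, model in [
    ("agglomerative", AgglomerativeClustering(n_clusters=4)),
    ("kmeans", KMeans(n_clusters=4, n_init=10, random_state=0)),
]:
    labels = model.fit_predict(X)
    print(
        f"{name}: silhouette={silhouette_score(X, labels):.3f} "
        f"CH={calinski_harabasz_score(X, labels):.1f} "
        f"DB={davies_bouldin_score(X, labels):.3f}"
    )
```

Silhouette balances cohesion and separation in [-1, 1] (higher is better), Calinski-Harabasz rewards between- vs. within-cluster dispersion (higher is better), and Davies-Bouldin penalizes overlapping clusters (lower is better), which is why the three together give a robust picture.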
> ### **[Q2]** (Clarification on efficiency evaluation) The reported training time in Table 4 indeed focuses on the fine-tuning stage for a fair comparison with baselines. The hierarchy construction (Steps 1-3) is a one-time preprocessing step. For transparency, we will add the following breakdown to Appendix H: - Entity description & text embedding generation (with 15 threads): ~10 minutes for FB15K-237, ~1 hour for YAGO3-10, ~8 minutes for PrimeKG - Seed hierarchy construction: ~2 minutes for FB15K-237, ~8 minutes for YAGO3-10, ~1.5 minutes for PrimeKG - LLM-guided refinement (with 15 threads): ~12 minutes for FB15K-237, ~1 hour for YAGO3-10, ~10 minutes for PrimeKG These preprocessing times are relatively small compared to the overall training process, especially considering they're one-time operations that can be reused for multiple experiments or model variations. Moreover, the LLM-guided refinement step shows good scalability with parallel processing, which can further reduce preprocessing time for larger datasets. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response. I have read all the reviews and responses, and am satisfied with the response to my comments. I will maintain my positive review score. --- Reply to Comment 1.1.1: Title: Thank You for Your Recognition Comment: Thank you for your thoughtful review and for recognizing the strengths of our work! We appreciate your positive evaluation and are glad our responses addressed your concerns. Your feedback has been invaluable in refining our approach. Please let us know whenever you have any further questions during this reviewer-author discussion period. We are happy to discuss and provide any additional information. Thank you again for your support, and we look forward to continuing our research in this exciting area.
Summary: This paper addresses the limitations of existing KGE models that focus either on graph structure or fine-tuning pre-trained language models. It introduces KG-FIT, which leverages LLM-guided refinement to incorporate hierarchical and textual knowledge, effectively capturing both global and local semantics. Experiments on benchmark datasets demonstrate KG-FIT's superiority, achieving significant performance improvements over state-of-the-art methods. Strengths: 1. The proposed method can automatically construct a semantically coherent entity hierarchy using agglomerative clustering and LLM-guided refinement, which is an interesting topic. 2. The authors provide detailed illustrations of their extensive empirical study on benchmark datasets and demonstrate significant improvements in link prediction accuracy. Weaknesses: 1. The paper is not organized clearly, which makes it difficult to follow. For example, there is a lack of a sensitivity study for the hyperparameters in the loss function. 2. The compared methods are old and lack recent ones from the last two years, such as [1][2][3][4]. The performance is not comparable with the previous work. [1] Compounding Geometric Operations for Knowledge Graph Completion [2] Geometry Interaction Knowledge Graph Embeddings [3] KRACL: Contrastive Learning with Graph Context Modeling for Sparse Knowledge Graph Completion [4] Dual Quaternion Knowledge Graph Embeddings 3. The paper lacks an analysis of time complexity as well as space complexity, which is necessary to study the efficiency of the model. 4. There are some typos, and it is recommended that the writing be improved. (1) On page 5, line 163 “determined by lowest common ancestor” should be “determined by the lowest common ancestor” (2) On page 6, line 195 “is sigmoid function” should be “is the sigmoid function” Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the weaknesses. 
Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We appreciate your recognition of our work's strengths and we address your concerns as follows. --- > ### **[W1]** *"there is a lack of the sensitivity study for the hyperparameters in the loss function"* We appreciate the reviewer's concern about the lack of a sensitivity study. To address this, we conducted a sensitivity analysis, presented in **Figure A in our rebuttal PDF**. We have also presented our hyperparameter study in **Appendix I in our paper**. Figure A demonstrates KG-FIT's robustness to hyperparameter variations: 1. Left plot (λ1, λ2, λ3): Performance metrics remain stable across different combinations, indicating that KG-FIT is not overly sensitive to these hierarchical clustering constraint parameters. 2. Right plot (ζ1, ζ2, ζ3): While there's some variation, performance remains consistently high over a range of ratios for these loss function component weights. These results show that KG-FIT maintains good performance across various hyperparameter settings, addressing the reviewer's concern and suggesting that the model can be readily applied to new datasets without requiring extensive tuning. > ### **[W2]** *"The comparable methods are old and lack the new ones in the last 2 years"* Thank you for suggesting these recent baselines. We have conducted additional experiments to compare KG-FIT with CompoundE [1], GIE [2], and DualE [4]. The results are presented in **Table C in our rebuttal PDF**. These results demonstrate that KG-FIT not only compares favorably with but substantially improves upon recent state-of-the-art KGE methods. This underscores KG-FIT's effectiveness in leveraging LLM knowledge to enhance various KGE architectures. Regarding KRACL [3], as it employs a GNN-based approach, integrating it with KG-FIT presents unique challenges. We have added the exploration of KG-FIT's integration with GNN-based methods to our future work list. 
We appreciate the reviewer's suggestion to include these recent baselines, as it has allowed us to further demonstrate KG-FIT's capabilities and versatility. > ### **[W3]** *"The paper lacks the analysis of time complexity as well as space complexity"* We appreciate the reviewer's attention to this important aspect of our model. Here's a clarification: **For time complexity:** We have analyzed KG-FIT's time complexity both theoretically (Lines 206-210) and empirically (Table 4, Lines 296-303). Our results demonstrate that KG-FIT is 12 times faster in training than the best PLM-based method, while maintaining inference speed comparable to backbone KGE methods. **For space complexity:** While we didn't explicitly state it in the paper, the space complexity of KG-FIT in terms of trainable parameters is the same as the backbone KGE models: $O(|E| * n + |R| * m)$ Where $|E|$ is the number of entities, $|R|$ is the number of relations, $n$ is the entity embedding dimension, and $m$ is the relation embedding dimension. This is because the main trainable components of KG-FIT are: 1. Entity embeddings: $O(|E| * n)$ 2. Relation embeddings: $O(|R| * m)$ The additional components introduced by KG-FIT (entity text embeddings and cluster embeddings) are not trainable parameters, but fixed inputs used during the forward pass. They do consume memory during runtime but do not increase the model's parameter count. This space complexity is significantly lower than PLM-based methods, which often require gigabytes of memory for model parameters alone. For example, on the FB15K-237 dataset, KG-FIT's trainable parameters would only occupy approximately 60MB of memory (assuming 32-bit floating-point numbers and $n = m = 1024$). > ### **[W4]** (Typos in the paper) We appreciate the reviewer's attention to detail. We will correct these typos: 1. Page 5, line 163: "determined by the lowest common ancestor" 2. 
Page 6, line 195: "is the sigmoid function" We will thoroughly proofread the entire manuscript to improve clarity and precision. Thank you for helping us enhance the quality of our paper. --- **References** [1] (ACL 2023) Compounding Geometric Operations for Knowledge Graph Completion. [2] (AAAI 2022) Geometry Interaction Knowledge Graph Embeddings. [3] (WWW 2023) KRACL: Contrastive Learning with Graph Context Modeling for Sparse Knowledge Graph Completion. [4] (AAAI 2021) Dual Quaternion Knowledge Graph Embeddings. --- Rebuttal 2: Comment: Dear Reviewer dKZ2, Thank you for your insightful comments and suggestions. In our author response, we have provided additional experimental results, analyses, and clarifications to address your concerns. As the **discussion period nears its end (in 24 hours)**, we would be grateful if you could take a moment to review our response and let us know if there are any remaining concerns or if our clarifications have adequately addressed your points. We are grateful for the time and expertise you have shared in reviewing our work. Sincerely, The Authors
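The ~60MB memory figure quoted under **[W3]** above can be checked with a quick back-of-the-envelope script. This is a sketch: the FB15K-237 counts of 14,541 entities and 237 relations are the standard benchmark statistics (an assumption, not taken from the rebuttal), while $n = m = 1024$ and 32-bit floats come from the rebuttal text.

```python
# Back-of-the-envelope check of KG-FIT's trainable-parameter memory,
# O(|E| * n + |R| * m), on FB15K-237 with n = m = 1024 and fp32 weights.
num_entities = 14_541   # standard FB15K-237 entity count (assumed)
num_relations = 237     # standard FB15K-237 relation count (assumed)
n = m = 1024            # embedding dimensions from the rebuttal
bytes_per_float = 4     # 32-bit floating-point numbers

params = num_entities * n + num_relations * m
megabytes = params * bytes_per_float / 1e6
print(f"{params:,} parameters ≈ {megabytes:.1f} MB")  # ≈ 60.5 MB
```

The entity table dominates: relations contribute under 1MB, consistent with the rebuttal's point that the space complexity matches the backbone KGE model.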
Summary: Knowledge graphs (KGs) are essential for representing structured knowledge in various domains. They consist of entities and relations, forming a graph structure for efficient reasoning and knowledge discovery. Current knowledge graph embedding (KGE) methods create low-dimensional representations of these entities and relations but often overlook extensive open-world knowledge, limiting their performance. Pre-trained language models (PLMs) and large language models (LLMs) offer a broader understanding but are computationally expensive to fine-tune with KGs. To address these issues, the authors propose KG-FIT, a framework that incorporates rich knowledge from LLMs into KG embeddings without fine-tuning the LLMs. KG-FIT generates entity descriptions from LLMs, constructs a hierarchical structure of entities, and fine-tunes KG embeddings by integrating this hierarchy with textual embeddings. This approach enhances KG representations, combining global knowledge from LLMs with local KG knowledge, significantly improving link prediction accuracy on benchmark datasets. Strengths: 1. Extensive experiments. The authors compare experimental performance with 8 baselines on three datasets and apply the method to 8 KG embedding backbones. 2. Extensive description of experimental details. For example, the hardware environment for running the experiments, data processing, prompts for interacting with large models, and code to reproduce their results. 3. Clear figures and presentation. Weaknesses: 1. The motivation needs to be reconsidered. The authors mention that using KGs to fine-tune LLMs is computationally expensive. Many current research efforts do not fine-tune LLMs with KGs. Instead, they use retrieval-based methods to explicitly provide the knowledge. 2. LLM-based baselines should be considered. The authors extensively use LLMs in their method. They should also incorporate LLM-based baselines, for example, LLM retrieval-based methods. 
Only comparing with small LM-based methods is not enough. 3. The contribution is kind of limited. I do not know why the authors still use KG embedding methods for efficient reasoning and knowledge discovery since LLMs have very strong reasoning ability for knowledge discovery. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could the authors reconsider the motivation behind their approach? 2. Why haven’t the authors considered incorporating published LLM-based baselines? 3. Why do the authors continue to use KG embedding methods for efficient reasoning and knowledge discovery when LLMs possess very strong reasoning abilities for knowledge discovery? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Please refer to my weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We appreciate your recognition of our work's strengths and we address your concerns as follows. --- > ### **[W1/Q1]** (Reconsidering motivation) If the retrieval process here refers to the "retriever" mentioned in KICGPT [1], it's important to note that: - The retriever in [1] is actually a KG embedding (KGE) model (RotatE in their case) used to generate top-ranked candidate object entities $o$ given a query $(s, r, ?)$. The LLM is then used to re-rank these candidates. - In this case, the final results rely heavily on the candidate generation handled by the KGE methods. This means that re-ranking methods like [1, 2, 3] can be used as an add-on to KG-FIT, where KG-FIT provides more accurate candidates, improving the final results. We demonstrate this by implementing KICGPT with KG-FIT and show the results in **Table B of our rebuttal PDF**. However, if the retrieval process does not involve KGE methods: 1. Link prediction is a crucial task for evaluating knowledge discovery ability. To the best of our knowledge, there are currently no methods that use LLM with retrieval for this task. 2. Knowledge discovery on an existing KG typically requires the model to be fine-tuned on the KG. This is because (1) we need a comprehensive view of all possible object entities $o$ given a query $(s, r, ?)$, and (2) the model must learn underlying patterns from the existing knowledge. 3. Traditional retrieval methods without KGE cannot be directly applied for link prediction because they do not provide a systematic way to rank all possible object entities for a given query. They typically retrieve a small subset of relevant entities based on text similarity, which may miss many valid candidates. In contrast, KGE methods like KG-FIT learn to embed the entire KG structure, enabling them to score and rank all possible object entities for a given query. 
We will clarify these points in the revised paper to highlight the unique strengths of KG-FIT and its potential to complement retrieval-based methods. > ### **[W2/Q2]** (Incorporating LLM-based baselines) Thank you for this suggestion. We have added two recent LLM-based baselines: KICGPT [1] and KG-LLM [4]. The results are shown in **Table B of our rebuttal PDF** and will be added to Table 2 in the revised paper. Note that we only implemented KG-LLM on FB15K-237 due to its high computational cost. > ### **[W3/Q3]** (Why KG embedding methods) We appreciate this question as it allows us to clarify the unique advantages of our approach. Let's compare different methods: 1. Fine-tuning-based methods: - Pros: - Can adapt LLMs to specific KG domains - Leverage the vast knowledge and reasoning capabilities of LLMs - Cons: - Computationally expensive, especially for large LLMs - May overfit to small KGs due to the large number of parameters - Difficult to update as the KG evolves, requiring retraining 2. Re-ranking (Retrieval)-based methods: - Pros: - Leverage LLM knowledge without expensive fine-tuning - Computationally efficient for inference - Cons: - Rely on pre-existing KG embeddings for candidate generation - May miss global patterns and relationships in the KG - Limited by the quality of the retrieval process and the retrieved context 3. KG-FIT and KG embedding methods: - Pros: - Capture the global structure and patterns of the entire KG - Computationally efficient - Easily updatable as new knowledge is added to the KG - Provide interpretable entity and relation representations - Cons: - Require a well-designed model architecture and training process to effectively incorporate LLM knowledge To illustrate the importance of KGE methods for knowledge discovery, let's consider the following example. 
> **[Example]** Suppose we have a movie knowledge graph where entities represent actors, movies, directors, and genres, and relations represent facts like "acted_in", "directed_by", and "belongs_to_genre". Given a query $(s, r, ?)$ where $s$ is a specific actor, $r$ is "likely_to_collaborate_with", and the goal is to predict another actor $o$ that $s$ is likely to work with in the future, a KGE model like KG-FIT can learn from the global KG structure to infer patterns such as "actors who have worked with the same directors or in similar genres are more likely to collaborate". This allows KG-FIT to make informed predictions about potential actor collaborations, even if those specific actors have never worked together before. In contrast, a non-fine-tuned LLM-based method might have difficulty making such inferences if the actors' previous collaborations are not explicitly mentioned in the training set/retrieved context, as it lacks the global understanding of the interconnected nature of the film industry that KGE methods can capture. It is also important to note that **KG embeddings enhanced by KG-FIT are complementary to LLM/PLM fine-tuning and re-ranking methods**. For example, PKGC [2] and TagReal [3] apply both fine-tuning and re-ranking, but they heavily rely on the "retrieved" results from their backbone KG embeddings. KG-FIT can improve the quality of these backbone embeddings, leading to better overall performance. Moreover, KG-FIT strikes a balance between leveraging the strengths of LLMs and maintaining the desirable properties of KG embeddings. By incorporating LLM knowledge into the embedding space, KG-FIT can capture more semantic information and complex relationships while still being computationally efficient and interpretable. --- **References** [1] (EMNLP 2023) KICGPT: Large language model with knowledge in context for knowledge graph completion. [2] (ACL 2022) Do pre-trained models benefit knowledge graph completion? 
[3] (ACL 2023) Text-augmented open knowledge graph completion via pre-trained language models. [4] (arXiv) Exploring large language models for knowledge graph completion. --- Rebuttal Comment 1.1: Comment: Thanks for the response. After carefully considering your feedback as well as the comments from other reviewers, I have decided to maintain my rating since this paper does not yet meet the rigorous standards expected for publication in NeurIPS. I'd like to see an improved version after a major revision.
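The candidate-generation role of KGE models described under **[W1/Q1]** above — scoring and ranking *all* possible object entities for a query $(s, r, ?)$, which re-rankers like KICGPT then consume — can be sketched with a minimal TransE-style scorer. This is purely illustrative: the embeddings are random, not trained KG-FIT embeddings, and the scoring function is TransE's, not necessarily the paper's backbone.

```python
import numpy as np

# Minimal TransE-style link-prediction ranking:
#   score(s, r, o) = -||e_s + e_r - e_o||
# Illustrative only; random embeddings stand in for trained ones.
rng = np.random.default_rng(0)
num_entities, num_relations, dim = 1000, 20, 64
E = rng.normal(size=(num_entities, dim))   # entity embeddings
R = rng.normal(size=(num_relations, dim))  # relation embeddings

def rank_objects(s, r, k=10):
    """Return the k highest-scoring object entity ids for the query (s, r, ?)."""
    scores = -np.linalg.norm(E[s] + R[r] - E, axis=1)  # one score per entity
    return np.argsort(-scores)[:k]                     # best first

candidates = rank_objects(s=3, r=7)
print(candidates)  # top-10 candidate object ids for re-ranking
```

Because every entity receives a score, no valid candidate can be missed at this stage — the property the rebuttal contrasts with text-similarity retrieval, which only surfaces a small subset.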
Summary: The paper introduces a framework called KG-FIT for enhancing knowledge graph embeddings by integrating knowledge from large language models (LLMs). KG-FIT enriches entity descriptions using LLMs and then constructs a semantically coherent hierarchical structure of entities. It finally fine-tunes KG embeddings using this hierarchy and textual information. Experiments on benchmark datasets (FB15K-237, YAGO3-10, PrimeKG) demonstrate its effectiveness in link prediction. Strengths: - Using LLMs to enhance KG embeddings can capture comprehensive features. The proposed method demonstrates improvements in link prediction when compared to selected baseline methods. - The presentation of the paper is good. Weaknesses: - There are several related studies on using LLMs to enhance text information in KGs [1,2], which, however, were not discussed in the paper. Additionally, constructing a hierarchy seems redundant given the existing graph structure of the KG. More discussion and explanation are needed. - The rationale behind the technical design is unclear. For instance, the proposed method concatenates structural and textual embeddings to construct the hierarchy, and then linearly combines these embeddings for fine-tuning. What are the underlying reasons for these choices? - The method may be computationally expensive due to the use of LLMs and hierarchical refinement. In my view, it is not profitable to use LLMs to achieve ~0.02 Hits@1 improvements in link prediction. - Several recent strong baselines for KG link prediction, such as NBFNet [3] and AdaProp [4], which both achieve over 0.32 Hits@1 on FB15K-237, are absent from the experiments. It remains uncertain whether the proposed method can still improve the two baselines. [1] Derong Xu, Ziheng Zhang, Zhenxi Lin, Xian Wu, Zhihong Zhu, Tong Xu, Xiangyu Zhao, Yefeng Zheng, Enhong Chen: Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models. 
LREC/COLING 2024: 11956-11968 [2] Dawei Li, Zhen Tan, Tianlong Chen, Huan Liu: Contextualization Distillation from Large Language Model for Knowledge Graph Completion. EACL (Findings) 2024: 458-477 [3] Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal A. C. Xhonneux, Jian Tang: Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction. NeurIPS 2021: 29476-29490 [4] Yongqi Zhang, Zhanke Zhou, Quanming Yao, Xiaowen Chu, Bo Han: AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning. KDD 2023: 3446-3457 Technical Quality: 2 Clarity: 3 Questions for Authors: - The proposed method leverages entity names to prompt large language models (LLMs) to generate corresponding descriptions. How does it address the issue of multiple entities having the same name? - Please see other questions in "Weaknesses". Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We appreciate your recognition of our work's strengths and we address your concerns as follows. --- > ### **[W1.1]** *"There are several related studies on using LLMs to enhance text information in KGs [1,2] ..."* We acknowledge this oversight. In our latest draft, we have cited [1] and [2] in line 83: - "Some methods [1,2] are proposed to improve the performance of the aforementioned methods by enhancing the text information of entities/relations in KGs." Interestingly, these methods are complementary to Step 1 of KG-FIT. We conducted additional experiments replacing our entity descriptions with those generated by MPIKGC [1] (E&S strategy) and Contextualization Distillation (CD) [2] (ED strategy). For CD, we averaged the embeddings of all descriptions generated for each entity. Results on FB15K-237 with RotatE and HAKE are shown in **Table A in our rebuttal PDF**, which demonstrates that KG-FIT can be further enhanced by incorporating these methods. > ### **[W1.2]** *"Additionally, constructing a hierarchy seems redundant given the existing graph structure of the KG."* Constructing a hierarchy offers key benefits: 1. **Different levels of abstraction:** KGs represent direct relationships but not hierarchical relationships or varying abstraction levels. KG-FIT's hierarchy captures broader semantic relationships not directly encoded in the KG. - *Example:* In a biomedical KG, "Aspirin" and "Ibuprofen" might be directly connected as "pain relievers". However, an LLM-constructed hierarchy could group them under "NSAIDs", then "Analgesics", and finally "Pharmaceuticals", providing a richer semantic context. 2. **Incorporating external knowledge:** The LLM-constructed hierarchy in KG-FIT incorporates external knowledge absent from the original KG, enriching entity representations. 
- *Example:* "Apple" and "Microsoft" grouped under "Tech Giants", "Consumer Electronics", and "Fortune 500 Companies", incorporating broader market knowledge not explicit in the original KG. 3. **Handling sparse connections and improved generalization:** KGs often have sparse entity connections. Hierarchical structure bridges gaps between semantically related but unconnected entities and enhances generalization to unseen entities or relationships. - *Example:* In a medical KG, "Heart Disease" and "Diabetes" might not be directly connected, but both could be grouped under "Chronic Diseases". An LLM-constructed hierarchy could further classify them under "Cardiovascular Diseases" and "Metabolic Disorders", respectively. This organization bridges gaps and generalizes treatment or risk factors shared among similar diseases, providing a richer context for inference. These hierarchies provide valuable semantic context beyond the explicit KG structure, enhancing overall embedding representational power. We have also analyzed the effect of hierarchy in **Table 3 and Figure 3 in the paper**. > ### **[W2]** *"The proposed method concatenates structural and textual embeddings to construct the hierarchy, ..."* This is a misunderstanding. Our method involves two distinct steps: 1. **Hierarchy construction**: We use **only textual information** - *no structural embeddings*. The enriched entity embedding $v_i$ is a **concatenation of entity name embedding $v^e_i$ and description embedding $v^d_i$** (Equation 1). 2. **Fine-tuning**: Entity embedding $e_i$ is **initialized as a linear combination** of a random embedding $e'_i$ and sliced text embedding $v'_i$ (Equation 5). This allows the integration of *LLM-derived semantics while adapting to KG structure*. Structural information from the KG is used *only in the link prediction objective (Equation 8)*. 
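To make the two-step separation above concrete, here is a minimal numpy sketch of the described initialization; the dimensions, the mixing weight `rho`, and the random stand-ins for LLM embeddings are illustrative assumptions, not the actual KG-FIT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8          # illustrative embedding dimension
rho = 0.5        # hypothetical mixing weight (stands in for the coefficient in Eq. 5)

# Step 1 (hierarchy construction, Eq. 1): the enriched embedding is the
# concatenation of the entity-name embedding and the description embedding.
v_name = rng.normal(size=dim)   # stand-in for an LLM name embedding v^e_i
v_desc = rng.normal(size=dim)   # stand-in for an LLM description embedding v^d_i
v = np.concatenate([v_name, v_desc])           # v_i = [v^e_i ; v^d_i]

# Step 2 (fine-tuning init, Eq. 5): the entity embedding starts as a linear
# combination of a random embedding and a sliced text embedding; KG structure
# only enters later, via the link prediction objective (Eq. 8).
v_sliced = v[:dim]                             # v'_i, sliced to the model dim
e_random = rng.normal(size=dim)                # e'_i
e_init = rho * v_sliced + (1.0 - rho) * e_random

assert v.shape == (2 * dim,) and e_init.shape == (dim,)
```

Note that only Step 2 mixes in randomness; the hierarchy in Step 1 is built purely from textual information.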
> ### **[W3]** *"Expensive use of LLMs and hierarchical refinement."* We appreciate this concern, but we respectfully disagree for two main reasons: 1. **Significant improvements**: In many cases the improvements are substantial. For example, KG-FIT-HAKE w/ LHR on PrimeKG achieves a ~0.07 improvement in Hits@1 over KG-FIT-HAKE w/ seed hierarchy. 2. **One-time, reasonable cost**: The use of LLMs is limited to the hierarchy construction, which is a one-time preprocessing step. As detailed in Appendix H.2, even for large-scale KGs like YAGO3-10, the cost remains reasonable when considering the deployment of KG-FIT in real-world applications. > ### **[W4]** *"Several recent strong baselines for KG link prediction, such as NBFNet [3] and AdaProp [4] ..."* We appreciate the suggestion to compare with these strong GNN-based baselines. However, KG-FIT is designed for embedding-based methods, offering better scalability, interpretability, and lower computational requirements. Adapting KG-FIT to NBFNet or AdaProp would require substantial changes and might lose these benefits. KG-FIT constructs a hierarchical structure of entity clusters using **static embeddings,** while NBFNet and AdaProp use **dynamic entity representations** through message passing. Integrating these fundamentally different architectures would essentially amount to developing a new hybrid model, beyond our current scope. Nonetheless, we believe KG-FIT's underlying philosophy could inspire improvements in GNN-based methods in future work. --- > ### **[Q1]** *"Issue of multiple entities having the same name"* Thank you for this question. First, as shown in Fig. 7 in Appendix E.1, the prompt asks the LLM to generate descriptions with a "hint" (an entity description from the original KG dataset). This helps differentiate entities with the same name during this step. In our paper, only YAGO3-10 does not have such descriptions, but it does not have this issue. 
Second, we can also mitigate this by using strategies like MPIKGC/CD (mentioned in W1.1). Feeding the LLM triples of entities from the training set helps provide context and differentiate entities within the KG. --- Rebuttal 2: Comment: Dear Reviewer hiNn, We greatly appreciate your thoughtful review and the time you have taken to provide detailed feedback on our work. In our author response, we have addressed your valuable comments regarding related work, motivation, and several technical aspects of KG-FIT. As **the discussion period nears its end (in 24 hours)**, we would be grateful if you could take a moment to review our response and let us know if there are any remaining concerns or if our clarifications have adequately addressed your points. Thank you once again for your efforts in evaluating our submission. Sincerely, The Authors
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely thank you for your thoughtful and constructive feedback on our submission "Knowledge Graph Fine-Tuning Upon Open-World Knowledge from Large Language Models". We deeply appreciate the time and effort you've invested in reviewing our work. > **[Strengths of Our Work]** We are grateful that multiple reviewers recognized several strengths of our work: 1. The effectiveness of KG-FIT in leveraging knowledge from LLMs to significantly enhance KG embeddings for the link prediction task (All Reviewers). 2. The extensive experimental evaluation, including comparisons with numerous baselines across multiple datasets and KG embedding backbones (Reviewers *G3dr*, *dKZ2*, and *9oop*). 3. The clear presentation, including well-structured paper organization and clear figures (Reviewers *hiNn*, *G3dr*, and *9oop*). 4. The provision of code for reproducibility and detailed experimental setup descriptions (Reviewers *G3dr* and *9oop*). > **[Our Responses to Weaknesses]** We acknowledge the concerns raised and have addressed them in our **individual rebuttals**. We have also attached a **Rebuttal PDF** which includes several tables and a figure showing new experimental results to address reviewers' concerns. Here's an overview of its content and key findings: 1. Table A: Performance of KG-FIT augmented by LLM-based KG textual information enhancement methods (MPIKGC and CD) on FB15K-237. * Key finding: KG-FIT can be further enhanced by incorporating improved textual information, showing consistent improvements across all metrics. 2. Table B: Comparison with additional LLM-based baselines (KG-LLM and KICGPT) on FB15K-237 and WN18RR. * Key finding: KG-FIT outperforms KG-LLM and significantly enhances the performance of KICGPT when used as its backbone retriever, demonstrating its effectiveness in combining with LLM-based methods. 3. Figure A: Sensitivity analysis demonstrating KG-FIT-HAKE's robustness to hyperparameter variations. 
* Key finding: KG-FIT maintains stable performance across various hyperparameter settings, indicating its robustness and potential for easy adaptation to new datasets. 4. Table C: Comparison with additional recent KG embedding baselines (CompoundE, GIE, and DualE) across FB15K-237, YAGO3-10, and PrimeKG. * Key finding: KG-FIT consistently enhances the performance of these state-of-the-art KG embedding methods across all datasets, achieving new state-of-the-art results. We have provided detailed, point-by-point responses to each reviewer's specific concerns in our individual rebuttals. We encourage you to refer to these for in-depth discussions on particular aspects of our work. We are committed to incorporating your valuable feedback to further improve our paper. Thank you again for your time and expertise in reviewing our submission. Pdf: /pdf/84bb3b7cc6d9737170ecf9423d384117489ae03d.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Neural Persistence Dynamics
Accept (poster)
Summary: This work presents a novel approach to infer the parameters of governing equations describing the collective behavior of systems like point clouds. It leverages persistent homology to capture the topological features of the system's state. These features are then modeled using a Latent ODE system, capturing the temporal evolution of the system. By analyzing the latent dynamics, the method can identify and regress the parameters of the underlying governing equations (e.g., PDE) that govern the system's behavior. Strengths: - This work proposes a novel method based on persistent homology (PH) to infer the parameters of governing equations describing the collective behavior of systems like point clouds. - A novel application in combining persistent homology with inverse problems. - The paper discusses various aspects of combining PH with modeling temporal correlations, giving an in-depth insight into the field. Weaknesses: - This work resonates closely with inverse problems for dynamical systems, where several works have been conducted to infer the initial parameters or parameters of interest based on observed data. Some of them are: 1. Learning to Solve PDE-constrained Inverse Problems with Graph Networks (https://arxiv.org/pdf/2206.00711) 2. Fully probabilistic deep models for forward and inverse problems in parametric PDEs 3. Invertible Fourier Neural Operators for Tackling Both Forward and Inverse Problems (https://arxiv.org/pdf/2402.11722) A comparison with these methods will strengthen the aspect of this work, as these methods do not explicitly model the topological features. - I think the paper should include some additional discussion about the computational limitations of the method. While these are briefly discussed at a high level, a more quantitative analysis (e.g., comparing the effective run times and computational complexity) seems essential to give the reader a better idea of the overhead incurred when using this method. 
- Despite the capability of their model to incorporate persistent homology, enabling the use of topological features, they only utilize classical persistent homology via Rips filtrations. The potential advantages of this feature are not adequately explored in the model. Technical Quality: 2 Clarity: 2 Questions for Authors: - Does the model output a single point estimate for the parameters of interest or a corresponding distribution (mean and std)? - Can the authors describe the regression objective w.r.t. the underlying parameters of interest, as it is unclear from the paper, maybe including the correct equations they use? - Since PH does not account for the temporal changes in the point cloud, how is the work different from this https://arxiv.org/abs/2406.03164? It seems like it can be subsumed as one can define a graph out of point clouds and operate on them. - Does the fitting of the ODE system deal with stiffness as you are fitting the topological features via an ODE, which can be non-smooth in between, leading to inefficient modeling? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See questions and weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: For better readability, we restate your comments/questions in *italic*, our response(s) are marked by ▶ *This work resonates closely with inverse problems for dynamical systems, where several works have been conducted to infer the initial parameters ...* ▶ Currently, we discuss related work on the inverse problem, but only in the context of models of collective behavior. We thank the reviewer for suggesting a broader perspective and comparing with prior learning-based works on inverse problems for dynamical systems in general, including the three listed ones. This will certainly be a good addition to the manuscript. Regarding refs 1-3, we note that the underlying differential equations (i.e., classic PDEs) differ from the multi-particle systems we consider. In our case, forces on each particle depend on its relative position towards all other (or its neighboring) particles, resulting in strong couplings between individuals. For specific systems (Volume exclusion), it is possible to derive parametric PDEs that describe the asymptotics of infinitely many particles (Oelschläger, 1990), and ideas from references 1-3 might be applicable. However, in general, the number $N$ of particles influences the dynamics. E.g., for the D'Orsogna model, particle distances can converge to 0 as $N\to \infty$. *I think the paper should include some additional discussion about the computational limitations of the method ...* ▶ We agree; please see our *General Response* section. *Despite the capability of their model to incorporate persistent homology, enabling the use of topological features, they only utilize classical persistent homology ...* ▶ Vietoris-Rips persistent homology is the de-facto workhorse for point cloud data.
While one could use other geometric complexes (e.g., Alpha complexes as mentioned by reviewer **dbj5**) or different vectorization strategies (ATOL, persistence images, etc.), we decided to work with the most prevalent tools, but acknowledge that other choices are possible. Also, using Vietoris-Rips PH facilitates straightforward comparisons to previous topology-related works on the same problem, such as Crocker Stacks or the PSK approach. Importantly, our goal was not to assess the impact of different design choices but to introduce a latent dynamical model based on topological per-time-point descriptors for the inverse problem at hand. We are convinced that future work will explore different variants of our approach that may be more suitable in certain situations (e.g., using LSTMs in case of discrete dynamical systems, see **83FG**). *Does the model output a single point estimate for the parameters of interest or a corresponding distribution (mean and std)?* ▶ At the moment, all models output *point estimates* of the sought-for parameters. However, one could easily sample $q_\theta$, integrate forward, and obtain a distribution of estimates. We will add tables to the appendix that report an estimated mean & std. dev. from this sampling. *Can the authors describe the regression objective w.r.t. the underlying parameters of interest, ...?* ▶ The objective is to predict the simulation parameters that led to a particular realization of a point cloud sequence. Take the PH-only variant `v1` of Fig. 1, for instance: input is a sequence of vectorized persistence diagrams $v_{\tau_0}, \ldots, v_{\tau_n}$. The encoder $Enc_\theta$ yields parameters of the approx.
posterior $q_\theta$ from which we sample and integrate the latent ODE forward in time to get latent states along the trajectory; this latent state sequence is then fed through $Enc_\alpha$ which summarizes the sequence into a vector and linearly maps the latter to simulation parameter estimates $\hat\beta = (\hat\beta_1, \ldots, \hat\beta_P)$. During training, we minimize the mean-squared-error (MSE) between the predictions and the ground-truth simulation parameters, implicitly making a Gaussian noise assumption. One could write the regression model as $$\hat\beta = \phi(v_{\tau_0}, \ldots, v_{\tau_n}) + \epsilon, \quad \epsilon \sim \mathcal{N}(0,\sigma I)$$ where $\phi$ subsumes all steps above. *Since PH does not account for the temporal changes in the point cloud, how is the work different from this https://arxiv.org/abs/2406.03164? ...* ▶ As correctly pointed out, we do not track topological features on the level of *individual points* in a persistence diagram but on the level of *vectorized persistence diagrams*. The crucial difference to the arXiv paper is that the integration of neural ODEs happens on the *message passing level* of a GNN for a *fixed* graph. In our context, this means that the graph object needs to be such that each vertex corresponds to the same particle at all times. However, such correspondences are unknown or hard to obtain in practice. In fact, in our work, we deliberately avoid this "tracking" step. *Does the fitting of the ODE system deal with stiffness as you are fitting the topological features via an ODE, which can be non-smooth in between, leading to inefficient modeling?* ▶ This is an interesting point! However, we *do not* model the dynamics directly in the space of persistence diagrams but rather in a latent space learned from diagram vectorizations.
This modeling could be inefficient to some extent, yes, but the latent ODE approach seeks to learn the most suitable latent space for the task at hand, which is to minimize prediction error for the model parameters (via the MSE) and to minimize the reconstruction error for the vectorized persistence diagrams (as part of the ELBO). This strategy is common in the literature, as, e.g., seen in the PhysioNet 2012 experiments of (Rubanova et al., 2019), where variables include binary indicators (e.g., of whether mechanical ventilation is used at available time points). --- Rebuttal Comment 1.1: Comment: We would like to kindly ask, at the end of this rebuttal phase, whether our response addressed your questions/concerns or whether we can provide any further clarifications.
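The forward pass described in this rebuttal (encode a sequence of diagram vectorizations, sample from the approximate posterior, integrate a latent ODE, summarize, and linearly regress the parameter estimates) might be sketched as below; all weights, dimensions, the tanh vector field, the Euler integrator, and the mean pooling are illustrative stand-ins for the trained $Enc_\theta$, latent ODE, and $Enc_\alpha$, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
V, L, P, T = 8, 4, 2, 10   # vectorization dim, latent dim, #params, #time points

# random stand-in weights (the actual model would learn these via ELBO + MSE)
W_enc = rng.normal(scale=0.1, size=(V, 2 * L))   # stand-in encoder: v -> (mu, logvar)
W_f   = rng.normal(scale=0.1, size=(L, L))       # stand-in latent vector field
W_out = rng.normal(scale=0.1, size=(L, P))       # linear head to parameter estimates

def predict(v_seq, dt=0.1):
    # encode an observation into an approximate posterior (first vector, for brevity)
    stats = v_seq[0] @ W_enc
    mu, logvar = stats[:L], stats[L:]
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=L)   # reparametrized sample
    # integrate the latent ODE dz/dt = tanh(z W_f) forward with explicit Euler
    traj = [z]
    for _ in range(len(v_seq) - 1):
        z = z + dt * np.tanh(z @ W_f)
        traj.append(z)
    # summarize the latent trajectory (mean pooling as a toy summary) and map linearly
    summary = np.mean(traj, axis=0)
    return summary @ W_out                               # point estimate beta_hat

v_seq = rng.normal(size=(T, V))       # a toy sequence of vectorized diagrams
beta_hat = predict(v_seq)
mse = np.mean((beta_hat - np.array([0.5, 1.0])) ** 2)    # training loss vs. ground truth
assert beta_hat.shape == (P,)
```

Sampling the posterior several times and integrating forward, as suggested in the point-estimate answer above, would turn this into a distribution of estimates.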
Summary: The paper addresses the challenge of predicting the specific parameters of models yielding point cloud dynamical systems known only partially from a set of observations in different time steps. This is achieved by leveraging information about the evolution of persistent homology vectorizations of the observed point clouds at different time steps. Particularly, the paper uses a specific, previously published, vectorization that is Lipschitz with respect to the standard point set Wasserstein distance. The evolution of the topological vectorization for the different point cloud dynamical systems is assumed to be governed by latent dynamic processes, which are approximated via a neural ODE and used to infer, given a set of observations of an evolving point cloud, the model parameters that produce these observations. The proposed baseline method is tested and benchmarked across three different scenarios, demonstrating the utility of its components and showing a significant superiority over the leading state-of-the-art methods. Strengths: **Originality**: To the best of my knowledge, the idea of modeling the dynamics of vectorized persistence diagrams via a continuous latent variable model is novel and powerful. **Significance**: As stated and demonstrated in the paper, neural persistence dynamics, and the evolution of point clouds over time seen as a group rather than individual points, provide significant insights into the dynamical systems governing the behavior of the individual points. This approach has great potential in studying real-world problems, especially in "natural systems" where the interaction of individuals as a group is key to understanding their behavior (e.g., flocks of birds, insects, fish, cells in an organism). 
Specifically related to neural networks, the method proposed in this paper has the potential to improve the analysis of the dynamics of data going through the different layers of a neural network, as done in a more simplistic way in [1] and [2], where contradictions were found that this project might help resolve. Additionally, the method's significant superiority compared to other approaches for predicting the parameters of dynamical systems from evolving point clouds suggests that this is indeed a promising direction for further research in the study of evolving topology and in the aforementioned areas. **Clarity and Quality**: The paper is generally well-written, with a few exceptions that I will address in the weaknesses section. The comprehensive literature review significantly enhances the quality of the manuscript. Figures 1 and 2 are particularly clarifying when reading the text. [1] Naitzat, Gregory, Andrey Zhitnikov, and Lek-Heng Lim. "Topology of deep neural networks." Journal of Machine Learning Research 21.184 (2020): 1-40. [2] Wheeler, Matthew, Jose Bouza, and Peter Bubenik. "Activation landscapes as a topological summary of neural network performance." 2021 IEEE International Conference on Big Data (Big Data). IEEE, 2021. Weaknesses: - I think that the sentence in line 32 "... due to missing correspondences between individuals..." is hard to read. Which missing correspondences? - Table 1 lacks context and is duplicated in Table 2. The experiments of this table are not explained and the metrics are not justified. I think a more qualitative explanation may be better than the quantitative explanation given in the introduction. Alternatively, providing more information about the experiments would help, but it could lead to duplicated information, which might be undesirable. - The proposed approach does not track the evolution of individual topological features of the persistence diagram.
This could be important in scenarios requiring fine-grained topological information. - The authors claim that their method is more efficient than other methods, but computation times are not reported in the main text. - In the persistent homology paragraph (starting at line 154), the authors state that there is a decomposition in persistence diagrams (births and deaths) when using abelian groups in the chain complexes. As far as I know, this is not a trivial property and does not always happen [Theorem 2, reference 3] for abelian groups. The usual theorem applies to vector spaces. - Only one vectorization method is used, without significant justification. I would have appreciated a more detailed ablation on this point. - Maybe I'm wrong, but when I access the arXiv version of reference [23] in your paper, Table 4 contains execution times, and not SOTA results. [3] Jiajie Luo, Gregory Henselman-Petrusek. "Interval Decomposition of Persistence Modules over a Principal Ideal Domain." (2023). Technical Quality: 2 Clarity: 3 Questions for Authors: - Why do you use $R^2$? I thought that it was a misleading metric [4]. This is my main concern, and the core reason why the score is a borderline accept. - Can your approach be used for discrete dynamical systems (where time is discrete and not continuous)? - Since the method is very general and can work with any kind of vectorizations, not just topological ones, is there a way to regularize or add explicit bias during training to inform the training process/neural ODE that the representations are topological? [4] Li J (2017) Assessing the accuracy of predictive models for numerical data: Not r nor r2, why not? Then what? PLOS ONE 12(8): e0183250. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Some limitations have been addressed, although I would appreciate a better discussion, for example, adding that the evolution of individual topological features cannot be studied.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: For better readability, we restate your comments/questions in *italic*, our response(s) are marked by ▶ *I think that the sentence in line 32 "... due to missing correspondences between individuals..." is hard to read. Which missing correspondences?* ▶ By "correspondences" we mean that many previous works rely on being able to unambiguously identify particles across point clouds over time. We will clarify this unfortunate formulation. *The proposed approach does not track the evolution of individual topological features of the persistence diagram. This could be important in scenarios requiring fine-grained topological information.* ▶ Indeed, our method does not retain information about how individual homology classes evolve. We model the evolution of features *on the level of persistence diagram vectorizations*, not on the level of individual points in the persistence diagram(s). Persistence vineyards would allow us to do this, but they are (i) hard to vectorize for ML tasks, (ii) computationally expensive to compute, and (iii) often infeasible to obtain without particle correspondences over time. Following your suggestion, we will extend our discussion to include these points. *The authors claim that their method is more efficient than other methods, but computation times are not reported in the main text.* ▶ Please see our runtime analysis in the *General Response* section. *In the persistent homology paragraph (starting at line 154), the authors state that there is a decomposition in persistence diagrams (births and deaths) when using abelian groups ...* ▶ When sketching persistent homology in Sec. 3, our writing unintentionally and wrongly suggests that birth/death times (interval decompositions) are well-defined for homology with coefficients in arbitrary Abelian groups, e.g., in the presence of torsion. We will clarify this part; thank you for pointing it out.
We use coefficients in the field $\mathbb{Z}/2\mathbb{Z}$ (as is usual). *Only one vectorization method is used, without significant justification. I would have appreciated a more detailed ablation on this point.* ▶ Our contribution lies not in the particular realization of our core idea, but in the surprising insight that one can obtain remarkable predictive performance from capturing the dynamics of vectorized topological summaries per time point, without tracking particles or computationally expensive summaries of time-varying persistence. One could, however, use other vectorization techniques; we experimented with ATOL vectorizations (which are conceptually very close to ours) and persistence images, but only observed minor changes in predictive performance. Nevertheless, ATOL vectorizations are not stable, and persistence images can be hard to parametrize (and discretize) without obtaining (very) high-dimensional descriptors. *Maybe I'm wrong, but when I access the arXiv version of reference [23] in your paper, Table 4 contains execution times, and not SOTA results.* ▶ Thank you for pointing this out! Table 4 is meant to refer to the table in the submitted manuscript, not in the referenced work. We will rephrase our formulation. *Why do you use $R^2$? I thought that it was a misleading metric [4]. This is my main concern, and the core reason why the score is a borderline accept.* ▶ We agree, $R^2$ has known flaws, much like other metrics to assess regression performance, yet it is easy to interpret. As a summary impression across *all* predicted parameters, MSE and RMSE cannot be applied due to differing ranges of simulation parameters. We used $R^2$ and SMAPE to have two different evaluation scores. Our *joint* variant (v3) outperforms the state-of-the-art in both of these scores across all datasets, and even our PH-only variant (v1) does so on 3 out of 4 datasets.
To further reduce evaluation bias, we will add the explained variance (EV) as suggested in your ref [4] to all tables, and also list the RMSE per parameter in our appendix. Below is an example for the `dorsogna-1k` data (parameters $C,l$ are as in (Guisti et al., 2023)):

| | $R^2$ $\uparrow$ | SMAPE $\downarrow$ | RMSE ($C$) $\downarrow$ | RMSE ($l$) $\downarrow$ | EV $\uparrow$ |
| ---|---|---|---|---|---|
| Ours (joint, v3) | 0.930 $\pm$ 0.003 | 0.068 $\pm$ 0.003 | 0.080 $\pm$ 0.005 | 0.190 $\pm$ 0.004 | 0.931 $\pm$ 0.004 |
| Ours (PointNet++, v2) | 0.814 $\pm$ 0.032 | 0.132 $\pm$ 0.018 | 0.178 $\pm$ 0.017 | 0.242 $\pm$ 0.020 | 0.816 $\pm$ 0.003 |
| Ours (PH-only, v1) | 0.846 $\pm$ 0.011 | 0.097 $\pm$ 0.005 | 0.116 $\pm$ 0.004 | 0.275 $\pm$ 0.006 | 0.851 $\pm$ 0.008 |
| PSK | 0.816 $\pm$ 0.015 | 0.096 $\pm$ 0.006 | 0.112 $\pm$ 0.010 | 0.305 $\pm$ 0.013 | 0.819 $\pm$ 0.016 |
| Crocker Stacks | 0.743 $\pm$ 0.083 | 0.150 $\pm$ 0.005 | 0.156 $\pm$ 0.007 | 0.331 $\pm$ 0.011 | 0.746 $\pm$ 0.023 |

*Can your approach be used for discrete dynamical systems (where time is discrete and not continuous)?* ▶ We are confident that our method can be used with discrete systems. However, some components of our architecture may need to be adjusted; e.g., switching from neural ODEs for modeling latent dynamics to, say, LSTMs or GRUs. Thank you for this comment, we will add a remark! *Since the method is very general and can work with any kind of vectorizations, not just topological ones, is there a way to regularize or add explicit bias during training ...?* ▶ If we understand your question correctly, you are asking whether some form of topological prior could be used to inform the neural ODE training process.
This is an interesting question, and while we have not experimented with topological regularization so far, it might be possible to enforce specific topological/geometrical properties through an appropriate loss (on persistence diagrams) and differentiating through the PH computation (as done in prior works). --- Rebuttal Comment 1.1: Comment: Really good answer, thank you very much. You addressed all my comments. Although I think it could have been really interesting to add individual tracking of persistent homology features (e.g., using vineyards, as you propose), I think the paper should be accepted for the conference. Also, I would love to see a continuation of this paper addressing discrete dynamical systems. I know several problems where this could be used successfully. In general, I'm really excited about this particular direction of applied TDA, so I'm increasing my score to 6. Congratulations and thank you very much again!
Summary: This paper considers the problem of learning some parameters $\theta$ in---roughly speaking---a dynamical system of the form $\dot X = \phi(X, \theta)$, where $X \in \mathbb{R}^{n \times d}$ (with typically $d=3$), from an observed time-discrete trajectory $X(\tau_0),\dots,X(\tau_N)$. The idea conveyed by this paper is that instead of tracking the dynamics of the whole point cloud ($n$ can be quite large) to infer the parameter $\theta$, it may be somewhat sufficient to look at a "featurization" of it---here, of a topological nature, yielding a sequence of vectors $v_{\tau_0},\dots,v_{\tau_n}$ that somewhat summarize the dynamics while being (hopefully) sufficient to infer the parameters $\theta$. The parameters of the system are then learned by existing techniques, essentially by using low-dimensional latent dynamics $(z_t)_t$ modeled by a neural ODE and then connected to the observed $(v_t)_t$ by an encoder-decoder; training is performed using ELBO maximization. The experiments are conducted on synthetic data (from rather sophisticated dynamical systems), containing an extensive ablation study, and showcasing the usefulness of the proposed method on the considered models. Strengths: The proposed use-case of Topological Data Analysis in the context of dynamical systems feels quite convincing and refreshing. The methodology, at a global picture level, is fairly clear. The performance showcased by the experimental results, supported by an extensive ablation study, seems to robustly support the usefulness of the method. Weaknesses: My main concern is about the clarity of the paper in the following sense: at the global picture level, the paper feels very clear and convincing. However, I think that it is not clear at the low-level scale, in that I feel completely unable to implement the method based on what is described in the (main) paper.
For one, the PD vectorizations are never described precisely; the final loss function (ELBO + regression) is only informally discussed in lines 226-234; I actually do not exactly understand how the parameters of the systems are encoded in the models, between which variables the $R^2$ coefficient (line 268) is computed (I infer that it is the parameters of the models w.r.t. the ones inferred at training time), etc. I understand that code and so on will be delivered and that the method will be usable by others if needed, but I believe that the impact of the paper would be strengthened if one could understand what is done _at the technical level_ (i.e., be able to roughly implement it) after reading it. Adding a minimalist proof-of-concept experiment, with a trivial dynamical system like $\dot x = \theta x$ or something like that, may possibly be helpful (or not). Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Related to my comments above, could you make explicit the way the parameters of the systems are encoded and learned? Say that I consider the Volume exclusion model with parameters $(\alpha, R, \lambda_b, \lambda_d)$. How are these parameters actually estimated once $\texttt{Enc}_\alpha, \texttt{Enc}_\theta, \texttt{Dec}_\gamma$ are trained with the ELBO+regression rule? How are the $\hat{\beta}_t$ (showcased in Figure 3) used afterward? 2. (more genuine question out of curiosity) In some sense, the initial point cloud $X$ implicitly contains all the possible topological information one could extract using persistent homology. It also seems that PointNet-like architectures can extract this topological information---see for instance the RipsNet model proposed in [1]. So in some sense, this suggests that turning the observed $X(\tau)$ into diagrams is "the way to go" but PointNet fails to learn that. Do you agree with this statement? If so, do you have any intuition on why? 3.
Still related to the above, the paper mentions that it uses PD with homology dimension 0, 1 and 2. Did you run the experiments using $H_0$ alone? If the performance significantly decreases, that would be a nice example where non-trivial topology is useful (as frustrating as it can be, I often observed that $H_0$ alone, i.e. looking mostly at the distribution of pairwise distances, was sufficient to reach good performance). Note: please do not feel obliged to run new experiments during the rebuttal period! This is just a genuine question. 4. In the conclusion, you mention that computing $H_2$-PD with Rips filtration is quickly computationally expensive (I completely agree), and propose to use alpha-filtration instead. Is there any difficulty in doing so right now? (Alpha filtrations are readily implemented in several TDA libraries according to https://cat-list.github.io/ ). Ref: [1] RipsNet: a general architecture for fast and robust estimation of the persistent homology of point clouds, de Surrel et al. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
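The minimalist proof-of-concept the reviewer suggests (recovering $\theta$ in the trivial system $\dot x = \theta x$ from a discretized trajectory) could be sketched as follows; this log-linear least-squares recovery is purely illustrative and is not the paper's pipeline:

```python
import numpy as np

# Trajectory of the trivial system dx/dt = theta * x, observed at discrete times.
theta_true, x0 = -0.7, 1.5
taus = np.linspace(0.0, 2.0, 21)
x_obs = x0 * np.exp(theta_true * taus)  # exact solution x(t) = x0 * exp(theta * t)

# Recover theta by least squares on log x(t) = log x0 + theta * t.
theta_hat, log_x0_hat = np.polyfit(taus, np.log(x_obs), 1)
```

With noiseless observations the slope of the log-trajectory recovers $\theta$ exactly; the point of the paper's machinery is that no such closed-form featurization exists for interacting particle systems.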
Rebuttal 1: Rebuttal: For better readability, we restate your comments/questions in *italic*, our responses are marked by &#9654; **ad Clarity:** We tried to balance the presentation of our conceptual idea vs. the technical detail of its particular realization. We can provide more detail on the ELBO objective and the added regression loss in the appendix. The same holds for the PD vectorization technique we use. As both parts appeared separately in prior work, we sacrificed technical detail for those parts in favor of a more comprehensive conceptual perspective. We will add these details to the appendix to facilitate reproducibility, as you suggested, including pointers to the respective parts in the source code. Further, for **training**, the regression loss is the MSE between the parameter estimates and the ground-truth simulation parameters. One may also use $R^2$ (or any other reasonable objective) during training, but we found the MSE to already yield good results. During **inference/evaluation** (on held-out testing data), we report the $R^2$ and SMAPE computed from predictions vs. ground-truth values, just as you correctly inferred. *Related to my comments above, could you make explicit the way the parameters of the systems are encoded and learned? Say that I consider the Volume exclusion model with parameters. How are these parameters actually estimated once $Enc_\alpha, Enc_\theta, Dec_\gamma$ are trained with the ELBO + regression rule? How are the $\hat{\beta}_t$ (showcased in Figure 3) used afterward?* &#9654; Inferring the simulation model parameters (e.g., for the Volume Exclusion model) is done as follows: For a given sequence of point clouds (at time points $\tau_0, \dots, \tau_n$), one first pre-computes Vietoris-Rips persistent homology (e.g., for $H_0, H_1, H_2$) per available time point and vectorizes the diagrams. This yields a sequence of vectors, i.e., one vector per available time point. 
This sequence is then fed to the encoder $Enc_\theta$ which outputs a parametrization for the approximate posterior $q_\theta(z_{t_0} \mid \{ v_{\tau_i} \})$. Upon sampling from $q_\theta$, we get an initial latent state $z_{t_0}$ and integrate the latent ODE forward in time to $t_n$. $Enc_\alpha$ then summarizes this sequence of latent states and linearly maps it to a vector of *simulation parameter estimates* $\beta_1, \ldots, \beta_P$. (For Volume Exclusion, we have four simulation parameters, so $P=4$). *(more genuine question out of curiosity) In some sense, the seminal point cloud contains implicitly all the possible topological information one could extract using persistent homology. It also seems that PointNet-like architectures ...* &#9654; An important point in this context is that RipsNet is *trained* (in a supervised manner) to predict precomputed *vectorizations* of persistence diagrams, not the persistence diagrams themselves. This means the desired vectorized diagram is known in advance and can be used in a loss function. Clearly, this strategy is designed to guide the network towards capturing topological information. Instead, in our experiments, the PointNet++ *directly* predicts the parameterization of the simulation; hence, it would have to learn topological information implicitly (as there is no external guidance towards this goal). The only guidance given is a loss wrt. the predicted simulation parameters. It might, however, be possible to replace our full pre-computation step (for Vietoris-Rips PH) with a pre-trained RipsNet (i.e., pre-trained on point clouds from such simulation experiments). We did not explore this direction here, but we greatly appreciate the comment. *Still related to the above, the paper mentions that it uses PD with homology dimension 0, 1 and 2. Did you run the experiments using $H_0$ alone? 
If the performance significantly decreases, that would be a nice example where non-trivial topology is useful (as frustrating as it can be, I often observed that $H_0$ alone, i.e. looking mostly at the distribution ...* &#9654; Yes, initially, we experimented with $H_0$-only and found that prediction performance drops in that case. The situation is less clear when also including $H_2$ features, where the results are more mixed and often not noticeably different. From a more quantitative perspective, below you can find a table for the `dorsogna-1k` experiment comparing $H_0$ vs. $(H_0, H_1)$ vs. $(H_0, H_1, H_2)$ for our approach (i.e., the `v1` variant from Fig. 2): | | $R^2$ $\uparrow$ | SMAPE $\downarrow$ | |---|---|---| | Ours (PH-only, v1, $H_0$) | 81.9 $\pm$ 0.015 | 0.101 $\pm$ 0.003 | | Ours (PH-only, v1, $H_0, H_1$) | 84.4 $\pm$ 0.021 | 0.098 $\pm$ 0.007 | | Ours (PH-only, v1, $H_0, H_1, H_2$) | 84.6 $\pm$ 0.011 | 0.097 $\pm$ 0.005 | *In the conclusion, you mention that computing $H_2$-PD with Rips filtration is quickly computationally expensive (I completely agree), and propose to use alpha-filtration instead. Is there any difficulty to do so right now? (Alpha filtrations are readily implemented in several Tda-libraries according to https://cat-list.github.io/ ).* &#9654; There is no inherent limitation to switching out Vietoris-Rips complexes for Alpha complexes. However, we did find in a preliminary experiment that different implementations of Alpha complexes (some of which are in the link you provided) surprisingly yield different diagrams, and without any further in-depth investigation refrained from using the latter in our experiments. Nevertheless, switching from VR to Alpha complexes would, as you suggested, significantly improve the performance of the preprocessing step in which we compute PH for different dimensions. 
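As a concrete illustration of the $H_0$ part of the pre-processing discussed in this thread: the finite bars of 0-dimensional Vietoris-Rips persistence coincide with the edge lengths of a minimum spanning tree of the point cloud, so an $H_0$ diagram can be sketched without a dedicated TDA library. This is only an intuition-building stand-in; the authors use Ripser++ for all homology dimensions:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def h0_persistence(points):
    """Finite H0 death times of the Vietoris-Rips filtration of a point cloud.

    All connected components are born at scale 0; a component dies when the
    MST edge merging it appears, so the finite bars are exactly the MST edge
    lengths (one bar per cloud remains infinite). Assumes distinct points,
    since csgraph treats zero entries of a dense matrix as absent edges.
    """
    dist = squareform(pdist(points))
    mst = minimum_spanning_tree(dist)
    return np.sort(mst.data)

# Three collinear points at x = 0, 1, 3: the MST edges have lengths 1 and 2.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
deaths = h0_persistence(pts)
```

Vectorizing such diagrams per time point (here via the learnable structure elements of Hofer et al., 2019) then yields the vector sequence that is fed to the encoder.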
--- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for taking the time to answer my review, give experimental details, and clarify the content of the paper. I like that $H_1 + H_0 > H_0$, that's somewhat of a "good news" for TDA :-) I still need some time to read other reviews / comments and discuss with other reviewers to make up my mind, but I feel more positive about this work. --- Reply to Comment 1.1.1: Comment: We wanted to kindly ask, at the end of this rebuttal phase, whether the other reviewers' comments/answers to our response have helped in your deliberation or whether we can provide some further "short-term" answers to any remaining open issues.
Summary: This work considers the problem of learning the latent, continuous-time dynamics underlying time-evolving point clouds. To solve it, it leverages previous work on the persistent homology of point clouds and their vectorization, as well as the PointNet++ network, to obtain static representations of the point clouds at a set of observation times. It then makes use of the LatentODE framework of Rubanova et al. (2019) to infer a continuous-time representation encoding the observed representations. The authors employ the learned continuous-time representations to tackle the inverse problem of regressing the ground-truth parameters of a set of governing equations from which the point cloud observations were simulated. They show their model outperforms two recent baselines. They also empirically demonstrate that (i) the deep representations obtained via PointNet++ are complementary to those obtained via persistent homology; that (ii) actually modelling the continuous-time dynamics helps with the regression task; and that (iii) the complexity of the regression task increases with that of the initial condition in the simulation. *References:* - Latent ODEs for Irregularly-Sampled Time Series. Rubanova et al. (2019) Strengths: The paper is very well written, and provides enough information for a reader not versed in the use of persistent homology for point cloud summarization. The authors justify their methodology, viz. the encoding of vectorized persistence diagrams into a continuous-time latent process, by using stability arguments of previous work. Their argumentation reads well and is convincing. Another strength is that, besides empirically demonstrating that their method outperforms two recent baselines, the authors additionally perform reasonable ablation studies that further justify their methodology. Weaknesses: The main contribution of this work could be read as a direct application of the Latent ODE framework of Rubanova et al. 
(2019) to the problem of time-evolving point clouds. It can be seen as incremental too, for it leverages established methods for point cloud representations. One could therefore argue that, despite its merits, the paper would better fit a conference or journal with a more applied character. It is also not clear how the method would perform on real-world regression tasks with empirical point cloud data. Can one, for example, use the proposed methodology to study the recorded data presented in the seminal work of Bialek et al. (2012)? It’d be nice if the authors could comment on the applicability of their method to empirical data. *References:* - Statistical mechanics for natural flocks of birds. Bialek et al. (2012) Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Neural ODEs are well known to be very difficult to train (see e.g. Dupont et al. (2019), Finlay et al. (2020), Choromanski et al. (2020) or Pal et al. (2021), just to cite a few). Can you comment on how difficult (or easy) it was to train your Latent ODE network on the synthetic data you studied? What about the training time? 2. How did you parametrize the decoder network? I don’t find it in the manuscript. 3. Did you consider adding some noise to your observations in point cloud space? As presented, your network should easily be able to handle noisy observations. Similarly, how does the persistent homology representation deal with noise? These questions are of course relevant if one wants to apply your methods to empirical data. 4. How many points from the latent path are used as input to the regressor model? Does the model work with a single point, as e.g. the last point along the latent trajectory? It’d have been nice to understand what information of the latent path is important for the regression task, especially given that completely dispensing with the latent dynamics still gives compelling results (i.e. Table 3). *References:* - Augmented neural odes. Dupont et al. 
(2019) - How to train your neural ode: the world of jacobian and kinetic regularization. Finlay et al. (2020) - Ode to an ode. Choromanski et al. (2020) - Opening the blackbox: Accelerating neural differential equations by regularizing internal solver heuristics. Pal et al. (2021) Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, they did address the limitations of their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: For better readability, we restate your comments/questions in *italic*, our response(s) are marked by &#9654; **Ad contribution:** The reviewer is correct in that we use the latent ODE framework of (Rubanova et al., 2019). However, the latter is only one particular variant (of many; other options are, e.g., latent SDEs or Continuous Recurrent Units (CRUs)) for modeling latent dynamics. Our main contribution is the idea of modeling the point cloud dynamics through the lens of vectorized topological summaries, i.e., capturing topological changes on a *persistence diagram level*, as opposed to capturing changes on the level of individual points in such topological summaries. Even this arguably "simple" approach yields remarkable predictive performance for the parameters of governing equations of collective behavior. Furthermore, our approach avoids the need to track particles over time (and infer velocities) or to rely on computationally expensive summaries of time-dependent persistence. **Ad empirical data:** Thank you for that comment. Yes, our approach would work on the data from (Bialek et al., 2012). In fact, one motivation for our work was to alleviate the challenge of having to compute/estimate per-particle velocities (this is required in the work of Bialek et al. as they compute correlations from normalized velocities). In many real-world settings, one might not even be able to unambiguously track particles (let alone when the cardinality of the point clouds may vary as well). Instead, our approach only hinges on the position of the particles (and can even naturally deal with point clouds of varying size, as demonstrated with the `volex-10k` experiment). **Ad training time**: Please see our runtime analysis in the *General Response* section. *Neural ODEs are well known to be very difficult to train (see e.g. Dupont et al. (2019), Finlay et al. (2020), Choromanski et al. (2020) or Pal et al. (2021), just to cite a few). 
Can you comment on how difficult (or easy) it was to train your Latent ODE network on the synthetic data you studied? What about the training time?* &#9654; First, we did not experience any difficulty training the full model. However, we only experimented with the arguably simplest ODE model (an autonomous ODE). We also did preliminary experiments with a non-autonomous variant, but did not observe any noticeable improvements. Nevertheless, our experiments support the conclusion that even the "simplest" choice of latent ODE already suffices to largely outperform existing techniques. *How did you parametrize the decoder network? I don’t find it in the manuscript.* &#9654; We apologize for not being clear enough on that point. The decoder (aka reconstruction network, denoted as $Dec_\gamma$ in Fig. 2 of the manuscript) is a simple 2-layer MLP with ReLU activation that maps latent states (from the latent ODE) to reconstructed persistence diagram vectorizations. We will include this information in the revised manuscript's appendix (and point to the locations in our source code). *Did you consider adding some noise to your observations in point cloud space? As presented, your network should easily be able to handle noisy observations. Similarly, how does the persistent homology representation deal with noise? These questions are of course relevant if one wants to apply your methods to empirical data.* &#9654; Currently, only the governing equations of the Vicsek model (see Fig. 3) include noise through the Brownian motion. In general, we point out that due to the stability of the persistence diagrams wrt. perturbations of the input (see the *Stability/Continuity aspects* section of the manuscript), our method is suitable in case of observation noise (unless the noise is excessively large). 
E.g., on `dorsogna-1k` where points are within $[-1,1]^3$, adding Gaussian noise $\mathcal{N}(0,\sigma)$ to all point coordinates and all time points, ($R^2$, SMAPE) drops from (0.846 $\pm$ 0.011, 0.097 $\pm$ 0.005) to (0.822 $\pm$ 0.016, 0.098 $\pm$ 0.002) at $\sigma=0.01$, and to (0.730 $\pm$ 0.013, 0.150 $\pm$ 0.001) at $\sigma=0.1$. *How many points from the latent path are used as input to the regressor model? Does the model work with a single point, as e.g. the last point along the latent trajectory? It’d have been nice to understand what information of the latent path is important for the regression task, especially given that completely dispensing with the latent dynamics still gives compelling results (i.e. Table 3).* &#9654; Again, we apologize for not being precise enough. In fact, we experimented with multiple variants: you are correct that one could use the “last” point along the latent trajectory. Another variant would be summarizing the latent trajectory via a signature approach (although this summary can become quite high-dimensional). The most conceptually elegant approach, from our perspective, was to re-use the encoder architecture (i.e., a duplicate of the mTAN encoder module with its own set of parameters, denoted as $Enc_{\alpha}$ in Fig. 2). However, instead of receiving an unequally-spaced sequence of vectorized persistence diagrams, $Enc_{\alpha}$ receives ALL points along the latent trajectory. In our case, “all” means 100 latent states at equally spaced time points in $[0,1]$, which we obtain by integrating the latent ODE forward in time. Also, our baseline (w/o dynamics) could be considered quite strong since it uses an mTAN encoder (i.e., an attention-based approach) that directly maps sequences of vectorized persistence diagrams to parameter predictions. While mTANs can attend to relevant information in a sequence, this baseline still consistently falls short of our approach, which explicitly models the dynamics. 
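The forward integration mentioned above (producing 100 latent states at equally spaced time points in $[0,1]$) can be sketched with a fixed-step RK4 integrator; the linear vector field below is an assumed stand-in for the learned neural ODE, and the actual solver used by the authors is not specified here:

```python
import numpy as np

def rk4_trajectory(f, z0, t0=0.0, t1=1.0, n_steps=100):
    """Integrate dz/dt = f(z) with classic RK4, returning n_steps + 1 states."""
    dt = (t1 - t0) / n_steps
    zs = [np.asarray(z0, dtype=float)]
    for _ in range(n_steps):
        z = zs[-1]
        k1 = f(z)
        k2 = f(z + 0.5 * dt * k1)
        k3 = f(z + 0.5 * dt * k2)
        k4 = f(z + dt * k3)
        zs.append(z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.stack(zs)

# Stand-in linear latent dynamics dz/dt = -z (closed form: z(t) = z0 * exp(-t)).
z0 = np.array([1.0, -2.0])
traj = rk4_trajectory(lambda z: -z, z0)
```

The resulting sequence of latent states is what a summarizing encoder (the $Enc_\alpha$ duplicate of the mTAN module in the rebuttal) would consume.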
--- Rebuttal 2: Comment: I thank the authors for their detailed responses. After reading the reviews from other reviewers and your replies to them, I have decided to increase my score. I believe that, if the updated manuscript addresses all the points raised during this discussion session, it will make a valuable contribution.
Rebuttal 1: Rebuttal: # General Response We would like to thank **all** reviewers for their overall positive feedback, their time, and their valuable comments and suggestions! While we address all issues point by point per reviewer, we first comment on our approach's *computational aspects* and present a detailed runtime analysis, as this issue has come up across multiple reviews. --- For reference, we refer to the following works in our rebuttal: **(Rubanova et al., 2019)** Y. Rubanova, R.T.Q. Chen, and D. Duvenaud. Latent ODE for irregularly-sampled time series. In: NeurIPS 2019. **(Hofer et al., 2019)** C. Hofer, R. Kwitt, and M. Niethammer. Learning representations of persistence barcodes. In: JMLR 20.126 (2019), pp. 1–45. **(Bialek et al., 2012)** W. Bialek, A. Cavagna, I. Giardina, and A. Walczak. Statistical mechanics for natural flocks of birds. In: PNAS 109.13 (2012), pp. 4786–4791. **(Carriere et al., 2021)** M. Carriere, F. Chazal, M. Glisse, Y. Ike, H. Kannan and Y. Umeda. Optimizing persistent homology based functions. In: ICML 2021. **(Oelschläger, 1990)** K. Oelschläger. Large systems of interacting particles and the porous medium equation. In: Journal of Differential Equations, Volume 88, Issue 2. --- ## Runtime analysis (pre-processing) We re-ran experiments on the `dorsogna-1k` dataset (1k sequences of length 100) and provide a breakdown of runtime below (measured on the same system as described in Appendix C of our manuscript): First, we point out that computing Vietoris-Rips persistent homology (PH) as well as persistence diagram (PD) vectorization is done as a *pre-processing step* for all available point cloud sequences. *These steps are trivially parallelizable across multiple CPUs/GPUs.* **Vietoris-Rips PH computation.** The following table contains wall clock time measurements (using Ripser++ on one GPU) per point cloud. We list runtime for computing $H_0$ and $H_1$ features, as well as for computing $H_0, H_1$ and $H_2$. 
Point clouds are of size 200, as in the manuscript: | | Time per point cloud | Overall | |--- |--- |--- | | PH ($H_0$, $H_1$) | 0.018 s | $\approx$ 30 min | | PH ($H_0$, $H_1$ and $H_2$) | 0.330 s | $\approx$ 6 hrs | **PD vectorization.** Vectorization can essentially be broken down into two steps: (i) parameter fitting for the structure elements of (Hofer et al., 2019) and (ii) mapping PDs to vector representations using those structure elements. The following table lists the runtime for both steps on `dorsogna-1k` when vectorizing 0-, 1- and 2-dimensional persistence diagrams: | | Time per diagram | Overall | | --- | --- | --- | | (i) Parameter fitting | n/a | $\approx$ 27 s | | (ii) Mapping of PDs to vectors | 0.0052 s | $\approx$ 52 s | *PH computation & PD vectorization are both necessary steps for our approach, for Crocker Stacks, as well as for the PSK approach*. ## Runtime comparison to PRIOR WORK (using topological summaries) Below, we list the overall **training times** for our approach, Crocker Stacks, and the PSK method. Runtime for pre-processing (see above) is *excluded* from these measurements. Also, PSK and Crocker Stack timings do include hyperparameter optimization, as suggested in the corresponding references. Importantly, this is *not* optional but required to obtain decent regression performance: | | Time | | --- | --- | | **Crocker stacks** | 24600 s (6 hrs 50 min) | | **PSK** | 646 s | | **Ours** (PH-only) | 190 s | Notably, PSK kernel computation scales quadratically with the number of sequences $n$, and kernel-SVR training takes time somewhere between quadratic and cubic in $n$; hence, scaling up the number of training sequences quickly becomes computationally prohibitive for the PSK method, especially in light of the required hyperparameter tuning. Finding suitable hyperparameters is also the main bottleneck for Crocker Stacks (which rely on a linear SVR). 
## Runtime comparison to the BASELINE model (w/o dynamics) We also compare the runtime of our approach to the *baseline* model which does not explicitly model any dynamics via a latent ODE. | | Time | | --- | --- | | **Ours** (PH only) | 190 s | | **Ours** (PointNet++ only) | 3780 s | | **Ours** (PH & PointNet++) | 4100 s | | | | | **Baseline (w/o dynamics)** (PH only) | 50 s | | **Baseline (w/o dynamics)** (PointNet++ only) | 525 s | | **Baseline (w/o dynamics)** (PH & PointNet++) | 600 s | We will include such a runtime study (across datasets) in our appendix.
NeurIPS_2024_submissions_huggingface
2024
Improving Deep Learning Optimization through Constrained Parameter Regularization
Accept (poster)
Summary: The authors propose a form of regularization which adjusts the regularization strength based on the weights being inside or outside of a certain norm bound. A violation of the bound results in an increasing penalty and coefficient, while conforming to the bound results in a decreasing or zero regularization penalty. Strengths: - The method proposes a way to adaptively control the strength of regularization for weight groups within a model - The model intuitively makes sense, as the regularization term is meant to constrain weights to within a certain region. However, current optimizers leave the regularization in place regardless of how well the weights are conforming to it. Weaknesses: - The statement on L123-124 is hard to accept, and I think it needs more explanation. Why is $F(x)$ not suitable for gradient based optimization? It seems that if $F$ and $c$ are both differentiable, then it indeed would provide useful information to restore the feasibility of an infeasible $x$. Am I missing something here? - It would be interesting to know what effect the training has on the robustness to OOD inputs or noise perturbed inputs. Is it possible to run some tests on the trained models related to CIFAR100-C and AdvSQUAD or AdvGLUE (or other suitable language dataset to measure robustness)? I ask this because I suspect that the smaller weight norm maintained throughout training would have the effect of not being overly reliant on particular weights in the model, which would likely result in a model which is more robust to OOD and adversarial inputs. - The minor runtime overhead mentioned in the paper is actually on the order of 5-6%. The definition of minor is subjective, and I would not consider a 5% increase in runtime minor. I think it would be better to state the actual runtime increase within the main text of the paper to give the reader a better idea of the cost. 
## Minor - L70: pertaining --> pretraining Technical Quality: 3 Clarity: 3 Questions for Authors: Can you add the OOD or adversarial tests mentioned above? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. We appreciate your positive assessment of our method's intuition and potential for adaptive regularization. We have carefully considered your comments and would like to address them as follows: > The statement on L123-124 is hard to accept, and I think it needs more explanation. Why is $F(x)$ not suitable for gradient based optimization? It seems that if $F$ and $c$ are both differentiable, then it indeed would provide useful information to restore the feasibility of an infeasible $x$. Am I missing something here? Even if $f(x)$ and $c(x)$ are differentiable, $F(x)$ is not suitable for gradient-based optimization because of the inner maximization over $\lambda$ in $$\underset{x}{\operatorname{minimize}} \; F(x) \quad \text{with} \quad F(x) = \max_{\lambda \ge 0} \; f(x) + \lambda \cdot c(x).$$ If $c(x) > 0$, in other words, if $x$ is infeasible, the inner maximization sends $\lambda \to \infty$, so $F(x)$ jumps to $\infty$. On the other hand, if $c(x) \le 0$, i.e., if $x$ is feasible, the maximization over $\lambda$ yields $\lambda = 0$. We can thus alternatively write $F(x)$ as $$F(x) = \begin{cases} f(x) & \text{if } c(x) \le 0, \\ \infty & \text{if } c(x) > 0. \end{cases}$$ From this formulation, it is evident that we cannot run gradient-based optimization on this objective: as soon as we encounter an infeasible $x$, the objective value jumps to $\infty$ and provides no gradient information. The smoothed approximation $\hat{F}$ in Equation 1 addresses this by adding a quadratic term for $\lambda$, which prevents the maximization from returning $\infty$. Does this clarify our statement in L123-124? However, we agree that this point was not well-articulated. We will revise this section to provide a more precise and accurate explanation of the optimization challenges and how our approach addresses them. 
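To make the smoothed objective's mechanics concrete, here is a toy numerical sketch on $f(x)=(x-2)^2$ with constraint $c(x)=x-1\le 0$ (constrained optimum $x^\*=1$, multiplier $\lambda^\*=2$). The clipped multiplier update below is a generic augmented-Lagrangian scheme and only an assumed stand-in for the paper's exact Equation 1:

```python
# Toy problem: minimize f(x) = (x - 2)^2  subject to  c(x) = x - 1 <= 0.
f_grad = lambda x: 2.0 * (x - 2.0)
c = lambda x: x - 1.0

x, lam, lr, mu = 0.0, 0.0, 0.05, 1.0
for _ in range(2000):
    lam_hat = max(0.0, lam + mu * c(x))    # clipped multiplier of the smoothed objective
    x -= lr * (f_grad(x) + lam_hat * 1.0)  # dc/dx = 1
    lam = lam_hat                          # multiplier carries over to the next step
```

While $x$ is feasible the multiplier decays toward zero and plain descent on $f$ takes over; once $c(x)>0$ the multiplier grows and pushes $x$ back toward the boundary, so the iterates settle at the constrained optimum instead of diverging to $\infty$.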
> It would be interesting to know what effect the training has on the robustness to OOD inputs or noise perturbed inputs. Is it possible to run some tests on the trained models related to CIFAR100-C and AdvSQUAD or AdvGLUE (or other suitable language dataset to measure robustness)? I ask this because I suspect that the smaller weight norm maintained throughout training would have the effect of not being overly reliant on particular weights in the model, which would likely result in a model which is more robust to OOD and adversarial inputs. Can you add the OOD or adversarial tests mentioned above? We performed an additional experiment on CIFAR100-C, see the general response and Rebuttal PDF Figure 2. We found that AdamCPR with Kappa-WS outperforms AdamW, which could indicate that CPR leads to more robust optimization; with Kappa-IP, however, AdamCPR does not outperform AdamW. None of the optimizer and hyperparameter configurations leads to outstanding performance on this task, so we would not claim that CPR is particularly good for noisy data. A thorough analysis of robustness is, however, beyond the scope of this paper and could be interesting follow-up work. > The minor runtime overhead mentioned in the paper is actually on the order of 5-6%. The definition of minor is subjective, and I would not consider a 5% increase in runtime minor. I think it would be better to state the actual runtime increase within the main text of the paper to give the reader a better idea of the cost. We appreciate your perspective on the runtime increase. Upon reflection, we agree that characterizing a 5-6% increase as "minor" may be subjective. In the revised version, we will explicitly state the runtime increase in the main text. Minor correction: Thank you for catching the typo on L70. We will correct "pertaining" to "pretraining" in the revised version. Thanks again for your review and useful thoughts. 
Might we kindly ask you to consider raising your score if we have addressed your concerns? --- Rebuttal Comment 1.1: Title: Thank you for the responses. Comment: Thank you for the responses. The authors have adequately answered my questions. As I was already quite positive about the work, I will maintain my current score.
Summary: This paper presents Constrained Parameter Regularization (CPR) as an alternative to traditional weight decay. CPR enforces an upper bound on the L2-norm of individual parameter matrices. It frames learning as a constraint optimization problem solved with the augmented Lagrangian method and can be integrated seamlessly with gradient-based optimizers. During training, CPR dynamically tailors regularization and reduces the need for hyperparameter selection. Empirical results on computer vision and language modeling tasks demonstrate its effectiveness. Strengths: - The proposed CPR is straightforward to implement and can be integrated with existing gradient-based optimization methods. - Experiments demonstrate CPR’s effectiveness in various tasks compared to traditional weight decay. Results also show that the performance of CPR is robust to hyperparameter selection and needs less training budget. - The paper is well-written and easy to understand, making the concepts accessible. Weaknesses: The general idea of using adaptive regularization does not seem very novel. However, the reviewer does not specialize in this area and won't view this as a significant flaw given CPR’s experimental results. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. CPR constrains parameters with an L2 norm upper bound. Will this method impede the model’s learning ability under complex tasks, e.g., very slow convergence or loss of training accuracy? 2. It would be interesting to investigate CPR’s robustness to noise and distributional shifts. How does CPR perform when the training data is noisy or comes from a different distribution than the test data? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. We appreciate your positive comments on CPR's straightforward implementation, effectiveness across tasks, and clear presentation. We'd like to address your questions and concerns: > The general idea of using adaptive regularization does not seem very novel. However, the reviewer does not specialize in this area and won't view this as a significant flaw given CPR’s experimental results. We agree that the idea of adaptive regularization is not new; however, the idea of applying an upper bound to the $L_2$ norm (or any other statistical measure) and enforcing this bound with an augmented Lagrangian is novel. Furthermore, it gives access to a different hyperparameter ($\kappa$), for which more suitable initialization heuristics (see Kappa-WS/IP) can be developed than for AdamW’s static weight-decay hyperparameter. CPR with the Kappa-IP initialization is a hyperparameter-free regularization, and the use of the inflection point of the $L_2$ norm is novel, too. Thereby, CPR outperforms AdamW in our 350M-parameter LLM experiments as well as in ImageNet vision transformer training. No other regularization alternative to weight decay has shown such a successful and extensive evaluation. > CPR constrains parameters with an L2 norm upper bound. Will this method impede the model’s learning ability under complex tasks, e.g., very slow convergence or loss of training accuracy? Similar to weight decay, CPR limits the effective capacity of the model. This naturally interferes with training loss minimization. Nevertheless, regularization, in the form of constraints or penalties (such as weight decay), is usually required to avoid overfitting and increase generalization. We trained on multiple complex tasks in our experiments, such as language modeling (GPT2-s/m on OpenWebText) and image classification (ViT on ImageNet), and CPR surpassed traditional weight decay. 
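A schematic, per-group version of the constrained update discussed above could look as follows. This is a hedged sketch under assumed hyperparameters (`lr`, `mu`, `kappa`), not the authors' exact CPR implementation:

```python
import numpy as np

def cpr_step(params, grads, lambdas, lr=0.1, mu=1.0, kappa=1.0):
    """One schematic CPR-style update (an illustrative sketch, not the
    paper's exact algorithm): each parameter group W gets its own
    multiplier lam enforcing c(W) = ||W||_2^2 - kappa <= 0."""
    new_params, new_lambdas = [], []
    for W, g, lam in zip(params, grads, lambdas):
        con = float(np.sum(W * W)) - kappa      # constraint violation
        lam_hat = max(0.0, lam + mu * con)      # clipped multiplier update
        W = W - lr * (g + lam_hat * 2.0 * W)    # gradient of f + lam_hat * c
        new_params.append(W)
        new_lambdas.append(lam_hat)
    return new_params, new_lambdas

# With zero task gradient, a group starting at squared norm 4 is pushed
# back into the feasible region ||W||^2 <= kappa = 1.
params, lambdas = [np.full((2, 2), 1.0)], [0.0]
for _ in range(50):
    params, lambdas = cpr_step(params, [np.zeros((2, 2))], lambdas)
s_final = float(np.sum(params[0] ** 2))
```

With these aggressive toy settings the norm overshoots well below the bound before the multiplier decays to zero; the key qualitative behavior is that regularization pressure is only exerted while a group's norm exceeds its bound.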
> It would be interesting to investigate CPR’s robustness to noise and distributional shifts. How does CPR perform when the training data is noisy or comes from a different distribution than the test data? To test the performance of CPR on noisy data, we performed an additional experiment using the noisy CIFAR100-C dataset [1], as mentioned in the general response. The results can be found in the Rebuttal PDF, Figure 2. We see that AdamCPR outperforms AdamW with the Kappa-WS initialization, but not with Kappa-IP, which could indicate that CPR leads to a more robust optimization. However, none of the optimizer and hyperparameter configurations leads to outstanding performance on this task, so we would not claim that CPR is particularly good for noisy data. A thorough analysis of robustness would warrant a separate paper and could be interesting follow-up work. We appreciate your openness about your expertise, your fair assessment of CPR's contributions, and your questions. We believe that we have addressed them and that adding the results on noisy data (and ImageNet) will make the paper stronger. If you agree, might we kindly ask you to increase your score? We would be happy to provide any additional information or clarifications if needed. ___ [1] Hendrycks, Dan, and Thomas Dietterich. "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations." International Conference on Learning Representations. 2018. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 4Jjm Comment: Thanks for the detailed response. I have no further questions and I have raised my score to 6.
Summary: This paper introduces a new training algorithm to improve the weight decay strategy. Instead of giving the same strength to all weights as in weight decay, the proposed method penalizes only the elements that are larger than a threshold. Based on extensive evaluation, the proposed method performs better than AdamW/SGD+weight decay. Strengths: 1. The experiments are strong enough to support the proposed method. 2. The intuition is easy to understand; that is, instead of using the same weight decay strength for all weights, the strength should differ across weights since their importance is not the same. Weaknesses: 1. As mentioned in the paper, the computational cost is higher. 2. The evaluation of SGD can be improved since it is well known that the best test accuracy can usually be achieved when using SGD with weight decay on image classification tasks. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and feedback on our paper. We appreciate your positive assessment of the strength of our experiments and the intuitive nature of our method. We would like to address the weaknesses you identified as follows: > As mentioned in the paper, the computational cost is higher. We acknowledge that our method incurs a small computational overhead depending on the model size, as we analyze in Appendix H “Runtime Analysis on LLM training” of the original submission. However, the benefits in performance clearly outweigh this modest increase in cost, as detailed in the following. For smaller models and larger batch sizes, the overhead is negligible (less than 1%). Even for our largest tested model (GPT2-XL with 1.5B parameters), the overhead was only 5.76% with a batch size of 1 and decreased to 2.44% at the maximum possible batch size. Importantly, this small increase in compute time translates to significant improvements in model performance and reduced total training time. For example, with GPT2s, we achieved the same performance as AdamW in only 2/3 of the training budget (Figure 1). > The evaluation of SGD can be improved since it is well known that the best test accuracy can usually be achieved when using SGD with weight decay on image classification tasks. We thank you for this point and, as mentioned in the general response, we performed additional experiments with SGD and SGD with CPR on the CIFAR100/ResNet18 task. We trained each configuration three times with random seeds and report the mean percentage of correct labels and the standard deviation across runs. The results are in the Rebuttal PDF in Figure 1. We find that SGD with CPR outperforms SGD with weight decay when using the Kappa-WS initialization. However, the IP initialization does not seem to work with SGD, probably due to its different convergence behavior compared to Adam. We will add this experiment and these findings to our paper.
Thanks again for your constructive feedback and the opportunity to improve our work. Might we kindly ask you to increase your score in case we address your concerns? We would be happy to provide any additional information or clarifications if needed. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I don't have other concerns so I keep my score.
Summary: The paper introduces Constrained Parameter Regularization (CPR), a regularization technique for deep learning that dynamically tailors regularization to individual parameters. CPR sets upper bounds on statistical measures of parameter matrices and reduces learning to a constrained optimization problem. The authors conduct a series of experiments to evaluate the performance of the proposed method. Strengths: - The article provides an accurate formal description of the background and derivation process of the algorithm, which is clear and straightforward, making it easy to understand. - The article theoretically explains the differences and connections between the CPR algorithm and weight decay algorithms, providing a good summary of existing optimization methods. It designs multiple initialization methods for the constraint upper bound κ and extensively verifies their effectiveness through experiments in various deep learning domains. - The experimental results verify that the proposed method is technically correct. Weaknesses: - The optimization algorithm lacks theoretical analysis on aspects such as convergence, which needs to be further analyzed using more rigorous mathematical language. Additionally, the proposed initialization algorithm lacks theoretical support. - The authors conduct experiments on CIFAR100, which is too small to show the superiority of a training method for deep learning. The authors need to give results on ImageNet or larger datasets. - The proposed method has more learnable parameters $\lambda$ and $\kappa$. The authors need to report the additional computational and memory costs. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the comments on weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your time and effort in reviewing our paper. We appreciate your positive feedback on the clarity of our presentation and the extensive empirical validation across multiple deep learning domains. In the following, we carefully consider your concerns and questions: > The optimization algorithm lacks theoretical analysis on aspects such as convergence, which needs to be further analyzed using more rigorous mathematical language. We acknowledge this limitation and agree that a rigorous theoretical analysis would strengthen the method. While we focused primarily on empirical validation in this initial work, we recognize the importance of theoretical foundations. However, a theoretical analysis of CPR exceeds the scope of this work. In particular, providing a fair analysis of the interplay between momentum methods and the decoupled updates (in the spirit of weight decay) likely requires a different treatment than the analysis of standard augmented Lagrangian methods. What is more, the incomplete optimization (a single update step) of the loss before $\lambda$ is updated further complicates the analysis, as we cannot rely on the classic interpretation of the update of $\lambda$ as a step of dual gradient ascent. Also, since the parameter update with $\lambda \nabla R(\theta)$ is not multiplied by the learning rate, we cannot expect $\lambda$ to converge while the learning rate is still decreasing, as stationarity is only reached when the update direction of the optimizer and $\lambda \nabla R(\theta)$ cancel. Therefore, we focused in this work on the empirical evaluation of the method to provide deep learning practitioners with a powerful alternative to weight decay and, in the case of Kappa-IP, without the need for tuning a regularization hyperparameter (like the weight decay $\gamma$). > Additionally, the proposed initialization algorithm lacks theoretical support.
Both gamma in weight decay and kappa in CPR are hyperparameters to be determined experimentally. This is because any measure of model complexity can only be estimated through the validation data. Without this, the correct model complexity, measured, e.g., through a regularization function, can only be assumed. Such assumptions are implicitly made through the choice of the regularization parameters. For this choice, we propose an initialization heuristic (Kappa-IP) and show that it works well across different tasks, datasets, and architectures. From this, we conclude, based on experimental evidence, that the assumption underlying it generalizes well. We provide empirical evidence for the Kappa-IP initialization in experiments on LLMs (Paper Figure 3, Figure 5) and ImageNet (Rebuttal PDF Table 1). > The authors conduct experiments on CIFAR100, which is too small to show the superiority of a training method for deep learning. The authors need to give results on ImageNet or larger datasets. We performed ImageNet pretraining experiments on a vision transformer (DeiT [1]) in two sizes: small with 22M parameters and base with 86M. Unfortunately, we were only able to finish the small-model experiments by the rebuttal deadline, but we will provide the base results in a comment over the weekend. The results can be found in Table 1 in the Rebuttal PDF. In contrast to [1], we found that a higher weight decay works better. However, AdamCPR outperforms AdamW with both kappa initializations: a tuned warm-start kappa initialization (Kappa-WS) and the hyperparameter-free Kappa-IP. In the small model training, we outperform weight decay by 0.84% without the need for tuning a regularization hyperparameter. We measured a very minor runtime increase of 0.02% when using CPR in comparison to AdamW. We also want to stress that GPT2s (124M parameters) and GPT2m (345M) are trained on the OpenWebText dataset with ~9B tokens, which is not small and a common experimental setting in LLM pretraining.
> The proposed method has more learnable parameters $\lambda$ and $\kappa$. The authors need to report the additional computational and memory costs. Note that $\lambda$ and $\kappa$ are only scalars (one per parameter group / weight matrix). These parameters are not learnable but are used by the Lagrangian optimization. Their updates do not require gradient computations but only the computation of $R(\theta)$. For the computational costs, we would like to refer you to Appendix H “Runtime Analysis on LLM training” of the original submission. We have conducted a detailed analysis of the computational overhead introduced by CPR. For GPT2-small (124M params), CPR introduces only a 0.4% runtime increase compared to AdamW. For GPT2-medium (354M params), the overhead is 2.4%. Importantly, this small increase in compute time translates to significant improvements in model performance and reduced total training time. For example, with GPT2s, we achieved the same performance as AdamW while requiring only 2/3 of the training budget (Figure 1). We thank you again for your constructive feedback and the opportunity to improve our work. Might we kindly ask you to reevaluate your score and increase it in case we have addressed your concerns? If any concerns remain, we are more than happy to discuss them via comments in OpenReview. ___ [1] Touvron, Hugo, et al. "Training data-efficient image transformers & distillation through attention." International conference on machine learning. PMLR, 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. Most of my concerns have been addressed. But as the proposed method is actually a training algorithm/technique for deep neural networks, I think it would be better and necessary to provide some theoretical analysis for it. I increased my rating to 4.
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your thorough and constructive reviews of our paper on Constrained Parameter Regularization (CPR). We greatly appreciate your thoughtful comments and the opportunity to address your concerns. We have prepared a 1-page PDF with additional experimental results that directly address several of the points you raised: - **ImageNet pre-training**: We compare AdamW and AdamCPR for vision transformer [1] pretraining on ImageNet. We train a small DeiT [1] model with 22M parameters and a base model with 86M parameters using the PyTorch Image Models library [2] for 300 epochs, with the configuration from the DeiT paper but also with 10x and 0.1x weight decay values. Unfortunately, we only managed to run the smaller model experiments within the rebuttal week. The base model experiments are still running and we will announce the results in a comment over the weekend. As seen in Table 1 of the Rebuttal PDF, AdamCPR outperforms AdamW in the small DeiT training with both kappa initialization methods. In particular, the hyperparameter-free regularization with Kappa-IP performs best, outperforming the best AdamW run by 0.86%. In the case of this small model, we measured a very minor runtime increase of 0.02% when using CPR in comparison to AdamW (14.85h for AdamW and 14.89h for AdamCPR on 4xA100). - **ImageNet finetuning**: We appreciate the reviewer’s suggestion regarding the evaluation on ImageNet. Following the reviewer's recommendation, we conducted fine-tuning of CLIP’s ViT-B/32 model on the ImageNet dataset. We used the ViT-B/32 model pre-trained by CLIP. The model was fine-tuned for 10 epochs following the hyperparameter choices in [3], with the exception of the special classification head initialization. As in [3], we employ a learning rate of $3 \times 10^{-5}$, default PyTorch AdamW hyperparameters $\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=10^{-8}$, and a cosine-annealing learning rate schedule with 500 warm-up steps.
Due to time and compute constraints, the training was performed on a single GPU with a batch size of 512, compared to the original setup of 8 GPUs with a batch size of 512 each. In Table 2 of the Rebuttal PDF, we compare AdamW with different weight decay values to the proposed AdamCPR in different configurations, where we report the top-1 accuracy after finetuning. From these results, we see that the Kappa-WS initialization also leads to better results in this finetuning setting, comparing favorably to traditional weight decay. - **SGD CPR experiments**: As mentioned by Reviewer M5LW, the best accuracy in image classification can be achieved with SGD and weight decay (probably on CNNs; on ViTs, Adam is used), so we performed additional experiments with SGD and SGD with CPR on the CIFAR100/ResNet18 task. We used SGD with Nesterov momentum of 0.9 and configured the training similarly to the CIFAR100 experiments described in the paper. The results can be found in Figure 1 in the Rebuttal PDF. We trained each configuration three times with random seeds and report the mean percentage of correct labels and the standard deviation across runs. We find that SGD with CPR outperforms SGD with weight decay when using the Kappa-WS initialization. However, the IP initialization does not seem to work with SGD, probably due to its different convergence behavior compared to Adam. - **CIFAR100-C experiments**: To evaluate CPR's robustness to data noise, we include experiments on training a ResNet18 on the noisy CIFAR100-C dataset [4]. The training setup and configuration are similar to the CIFAR100 experiments described in the paper. The results are visualized in Figure 2 in the Rebuttal PDF. We see that AdamCPR performs better than AdamW with Kappa-WS but not with Kappa-IP. None of the optimizer and hyperparameter configurations leads to outstanding performance on this task, so we would not claim that CPR is particularly good for noisy data.
We trained each configuration three times with random seeds and reported the mean percentage of correct labels and standard deviation of the experiments. These additional experiments significantly strengthen our empirical evaluation and address points you raised. We believe they underscore CPR's effectiveness across a broader range of scenarios and dataset scales. We will add all experiments to our paper. We answer individual questions and address further concerns in each response below. ___ [1] Touvron, Hugo, et al. "Training data-efficient image transformers & distillation through attention." International conference on machine learning. PMLR, 2021. [2] Wightman, Ross, "PyTorch Image Models.", github.com/rwightman/pytorch-image-models, GitHub repository 2019. [3] Wortsman, Mitchell, et al. "Robust fine-tuning of zero-shot models." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. [4] Hendrycks, Dan, and Thomas Dietterich. "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations." International Conference on Learning Representations. 2018. Pdf: /pdf/4995d44a7f71a4bd405f55e12b6477d5f30ffae1.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper proposes a new regularization technique, Constrained Parameter Regularization (CPR), to replace weight decay for training deep learning models. Conventional weight decay penalizes significant weight deviation. However, this can be too restrictive since some layers may need a larger deviation. Instead of applying the same regularization strength for all layers as in weight decay, the paper uses different hyperparameters for each layer, resulting in a more flexible regularization strength. Intuitively, a model with a larger weight $||\theta||$ is allowed more deviation under CPR. The method is theoretically motivated by the augmented Lagrangian method and tested on multiple tasks and benchmarks. Strengths: * The method is principledly motivated and intuitively explained. Overall, the paper is very well-written. * The paper proposes multiple alternatives for setting the method's hyper-parameters, taking into account efficiency and tuning flexibility. * The paper demonstrates superior performance on multiple tasks with modern deep learning models, including large language models. Weaknesses: * **The regularization strength (upper bound $k$) relies on empirical observations and intuition**. While the overall method is theoretically motivated, the upper bound $k$ setting relies mainly on intuition. Specifically, the paper does not explain why $k\leftarrow R(\theta)$ is a good strategy. Intuitively, this strategy indicates that a larger weight $||\theta||$ should have less regularization. * **Computer vision tasks are limited**. For classification, computer vision experiments are conducted on relatively small-scale datasets, such as CIFAR100. The paper should consider using moderate-to-large-scale datasets with well-established benchmarks, such as ImageNet, to make the empirical evidence more convincing. For segmentation, popular benchmarks, such as MSCOCO, could be a better choice to establish the superiority of this regularization.
Technical Quality: 3 Clarity: 4 Questions for Authors: * Why does the inflection point of the regularization function $\Delta\Delta R$ reflect a saturation of performance? Shouldn't the inflection point of the loss function $L$ be a better fit for this? * Could the authors report results on ImageNet? If time is constrained, fine-tuning a pre-trained model, such as CLIP's pre-trained ViT-B [1], to ImageNet is a good choice. * Is there any empirical evidence supporting the design choice $k\leftarrow R(\theta)$? It would be great if the authors could discuss this choice in detail. * Prior works in hyper-optimization [2,3] show how to optimize hyper-parameters in optimizers differentiably. This could be a valid future direction for optimizing the upper bound $k$. A new work uses hyper-optimization in an optimizer for fine-tuning [4], which shares a similar spirit to this paper at a high level. [1] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." International conference on machine learning. PMLR, 2021. [2] Baydin, Atilim Gunes, et al. "Online learning rate adaptation with hypergradient descent." arXiv preprint arXiv:1703.04782 (2017). [3] Chandra, Kartik, et al. "Gradient descent: The ultimate optimizer." Advances in Neural Information Processing Systems 35 (2022): 8214-8225. [4] Tian, Junjiao, et al. "Fast trainable projection for robust fine-tuning." Advances in Neural Information Processing Systems 36 (2023). Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: There is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our paper on Constrained Parameter Regularization (CPR). We appreciate your positive feedback on the method's principled motivation, clear presentation, and strong empirical results across multiple tasks. Regarding your summary, we would like to point out that we do not use “different hyperparameters for each layer”. We introduce only one hyperparameter for the regularization, namely the kappa initialization (or no hyperparameter when using Kappa-IP), but our method regularizes each layer individually with an additional scalar variable for each layer. We would also disagree that “a model with a larger weight $||𝜃||$ is allowed more deviation under CPR”; rather, CPR enforces a norm of $||𝜃|| \le \kappa$ earlier in model training. In the following, we address your questions: > Why does the inflection point of the regularization function ΔΔ𝑅 reflect a saturation of performance? Shouldn't the inflection point of the loss function 𝐿 be a better fit for this? The inflection point tries to identify a point at which increasing the model complexity (measured through $R$) starts becoming less relevant for reducing the loss than in previous iterations. In other words, the benefit of increasing the model complexity further starts saturating. Hence, we try to capture the point where $\Delta R$ (the change in $R$ over successive iterations) starts to decrease. Also, note that each parameter group (weight matrix) has its own inflection point. > Could the authors report results on ImageNet? If time is constrained, fine-tuning a pre-trained model, such as CLIP's pre-trained ViT-B [1], to ImageNet is a good choice. We performed both new experiments, pretraining a vision transformer on ImageNet as well as finetuning CLIP, as mentioned in the general response. We performed ImageNet pretraining experiments on a vision transformer (DeiT [1]) in two sizes: small with 22M parameters and base with 86M.
Unfortunately, we were only able to finish the small-model experiments by the rebuttal deadline, but we will provide the base results in a comment over the weekend. The results can be found in Table 1 in the Rebuttal PDF. In contrast to [1], we found that a higher weight decay works better. However, AdamCPR outperforms AdamW with both kappa initializations: a tuned warm-start kappa initialization (Kappa-WS) and the hyperparameter-free Kappa-IP. In the small model training, we outperform weight decay by 0.84% without the need for tuning a regularization hyperparameter. We measured a very minor runtime increase of 0.02% when using CPR in comparison to AdamW. Additionally, we performed finetuning experiments, as you mentioned, with a CLIP ViT-B on ImageNet. The results can be found in Table 2 in the Rebuttal PDF. AdamCPR outperforms AdamW with the use of a tuned Kappa-WS initialization, and the Kappa-IP initialization is on par with the best weight decay hyperparametrization. > Is there any empirical evidence supporting the design choice 𝑘←𝑅(𝜃)? It would be great if the authors could discuss this choice in detail. Note that $\kappa$ is a bound for the value of $R(\theta)$ (Section 4.3). Choosing $\kappa \gets R(\theta)$ for a specific $R(\theta)$ observed in the training run (as done in Kappa-WS and Kappa-IP) ensures that the bound is active, while also not restricting the training so much that it fails to reduce the loss. In particular, it is ensured that when the bound becomes active, a value of $R(\theta)$ is enforced for which healthy training dynamics are expected. We also performed an empirical evaluation of the different initialization methods, which can be found in Appendix E, Figure E.1. While Kappa-kI$_0$ (kappa initialization depending on the initial $R(\theta)$) also leads to good performance, we found Kappa-WS ($\kappa \gets R(\theta)$ after $x$ warm start steps) to perform better over a larger span of different learning rates.
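As a toy illustration of the two initialization heuristics discussed in this thread, the following sketch (our own, with hypothetical function names, not the authors' code) reads off $\kappa$ after a fixed warm-up (Kappa-WS) and locates the inflection point of a recorded $R(\theta)$ curve as the first sign change of the second discrete difference $\Delta\Delta R$ (the Kappa-IP idea, simplified to a plain sign-change rule):

```python
def kappa_warm_start(r_history, warm_steps):
    """Kappa-WS sketch: set kappa to R(theta) observed after `warm_steps`."""
    return r_history[warm_steps]

def kappa_inflection_step(r_history):
    """Kappa-IP sketch: first step where the growth of R(theta) slows down,
    i.e. the second discrete difference of R turns negative."""
    d1 = [b - a for a, b in zip(r_history, r_history[1:])]  # delta R
    d2 = [b - a for a, b in zip(d1, d1[1:])]                # delta delta R
    for t, v in enumerate(d2):
        if v < 0:
            return t + 1  # index into r_history where delta R starts shrinking
    return None  # no inflection observed yet
```

On an S-shaped norm curve such as `[0, 1, 3, 6, 10, 13, 15, 16]`, the sketch returns step 4, the point where the increments 1, 2, 3, 4, 3, 2, 1 begin to shrink; $\kappa$ would then be set to the value of $R(\theta)$ recorded at that step, separately per weight matrix.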
> Prior works in hyper-optimization show how to optimize hyper-parameters in optimizers differentiably. This could be a valid future direction for optimizing the upper bound 𝑘. A new work uses hyper-optimization in an optimizer for fine-tuning, which shares a similar spirit to this paper at a high level. We thank you for this suggestion. Optimizing for $\kappa$ could be an interesting direction. One way to get gradients for $\kappa$ could be to use implicit differentiation via the KKT conditions of the CPR problem (see, e.g., [2]). However, this would likely not be straightforward, and while this is an interesting avenue for future extensions of this work, we hope to have demonstrated in the paper that our heuristics, in particular choosing the inflection point for setting $\kappa$, already provide strong results at negligible computational overhead. Thanks again for your review and useful thoughts. We believe that we have addressed your concerns and that the additional positive results on ImageNet training strengthen our paper. Might we kindly ask you to increase your score in case we have addressed your concerns? We would be happy to provide any additional information or clarifications if needed. ____ [1] Touvron, Hugo, et al. "Training data-efficient image transformers & distillation through attention." International conference on machine learning. PMLR, 2021. [2] Blondel, M., “Efficient and Modular Implicit Differentiation”, 2021. doi:10.48550/arXiv.2105.15183. --- Rebuttal Comment 1.1: Comment: Thank the authors for their new experiments, and thank you for clarifying my misunderstandings of the paper. * I want to mention that ''additional scalar variables for each layer'' is what I meant by ''different hyperparameters for each layer''. * In the summary, I summarized the method as "a larger weight is allowed more deviation". This is my logic.
The goal is to keep $\|\theta\| < \kappa$ (line 160), where $\kappa\propto R(\theta)$ (sec. 4.3) and $R(\theta) = \frac{1}{2}\|\theta\|^2_2$ (line 107). In other words, a weight with larger norm $\|\theta\|^2_2$ will have smaller regularization because $\kappa$ is large (a larger upper bound). Hence, it is allowed more deviation. Please let me know where I misunderstood the method. Thanks. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response and clarification. - When we read “hyperparameters”, we think of parameters to tune (beforehand). Since this is not the case, we criticized the wording; we simply misunderstood it. - In principle, the degree of allowed deviation in our method corresponds to the value of $\kappa$, and $\kappa$ depends on the $\kappa$ initialization method. For example, if one uses the $\kappa$ initialization method that has a fixed $\kappa$ for all weight matrices (Kappa-K), then it does not depend on the weights $\theta$ and $\kappa \not\propto R(\theta_t)$. If one uses a $\kappa$ initialization method which is dependent on $R(\theta_t)$, like Kappa-WS, then $\kappa \propto R(\theta_t)$ and we agree that the entries of weight matrices with larger $R(\theta_t)$ are allowed more deviation. Or in other words, a model with a larger weight $||\theta||$ is allowed more deviation under CPR when $\kappa$ is initialized with a method for which $\kappa \propto R(\theta_t)$.
GeNIe: Generative Hard Negative Images Through Diffusion
Reject
Summary: The paper proposes to employ text-to-image latent diffusion models to augment images through a controlled modification such that the resultant class is different from the source class. Such augmented images are referred to as hard negative images. Building upon SDEdit-style image modification, the paper controls the extent of modification by adaptively determining the appropriate noise-scale for each image separately. The benefits of this type of augmentation have been demonstrated on few-shot and long-tailed ImageNet classification tasks. Strengths: - The paper is very well written presenting the core idea of generating hard-negative images by modifying an image with a caption of another class. This idea is simple, intuitive and interesting. - Furthermore, the algorithm to determine the optimal noise-level for each image adaptively is not only simple and intuitive but also effective in eliminating dependence on hyperparameters. I feel that a connection can be made to the recent work [1] on phase-transition in diffusion models since this algorithm is attempting to find the diffusion-time when phase-transition occurs. - The evaluation is comprehensive considering a variety of diffusion-augmentation baselines as well as traditional augmentations. - The paper illustrates the effectiveness of the adaptive search procedure through separate experiments with DINO-v2 and visualisations. - In many cases, synthetic data generation with a diffusion model may be replaced by a simpler retrieval baseline [2]. However, the goal of this work is to use a diffusion model to search for and generate hard negatives, which is an interesting deviation from some of the previous synthetic data augmentation approaches. [1] Sclocchi, Antonio, Alessandro Favero, and Matthieu Wyart. "A phase transition in diffusion models reveals the hierarchical nature of data." arXiv preprint arXiv:2402.16991 (2024).
Weaknesses: - From the various results in the paper, it seems that the Text2Image, GeNIe, and GeNIe-Ada achieve comparable performance with respect to each other on average. This seems to suggest that the majority of the gains can be attributed to the increased number of _distinct_ examples --- as compared to regular augmentations which simply apply different transformations to the same image --- for each class rather than the hard-negatives in GeNIe/GeNIe-Ada. - Additionally, it seems that beyond some threshold, any value of $r$ that changes the source-image to the target image yields comparable performance indicating that it may be sufficient to generate an augmentation that is similar to source-image and it need not specifically be a _hard-negative_. It may be useful to consider some other applications where images lying in the boundary of the classifier may be informative: for example, see recent work on generating outliers [1] for OOD detection. - GeNIe-Ada algorithm is compute-intensive as compared to a simple Text2Image augmentation since it requires generating several augmentations for each source image before selecting one optimal augmentation that lies on the decision boundary. Given how close the text2image and genie-ada performances are in some cases, it may be possible that we could generate more augmentations using text2image in the same compute budget and improve over GeNIe. - (minor) GeNIe is applicable to the fine-tuning stage rather than the pretraining stage. [1] Du, X., Sun, Y., Zhu, J. and Li, Y. Dream the impossible: Outlier imagination with diffusion models. NeurIPS 2024. Technical Quality: 4 Clarity: 4 Questions for Authors: See weaknesses. - How can we apply GeNIe for classification involving a larger number of classes? 
Could you please elaborate on this statement in the paper: _For GeNIe, instead of randomly sampling the source images from other classes, we use a confusion matrix on the training data to find the top-4 most confused classes and only consider those classes for random sampling of the source image. The source category may be from “Many”, “Med”, or “Few sets”._ If the training converges, how can the confusion matrix on train data be informative? - If the class of the source image and target class are not semantically compatible, what does the modification look like? - Do you have any examples of failure cases of Algorithm 1? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: Yes, limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[hT8y][W1]: seemingly comparable performance of Txt2Img and GeNIe:** - Thanks for this comment. Indeed, diffusion-based augmentation techniques offer a notable margin compared to traditional approaches (such as Cutmix and Mixup). We offer an enhancement of a diffusion-based approach by proposing GeNIe and further automating and enhancing it through GeNIe-Ada. - Besides, please note that GeNIe and GeNIe-Ada almost consistently outperform Txt2Img; a few examples are summarized in the following: (i) Few-shot classification scenarios, as demonstrated in Section 4.1 - Table 1 and Table 2. On miniImagenet, GeNIe and GeNIe-Ada offer improvements in the range of 1.3% - 3.1%, and in the range of 0.8% - 0.9% on tieredImagenet. (ii) Fine-grained classification, as demonstrated in Section A2 - Table A1. Here, GeNIe and GeNIe-Ada offer improvements in the range of 5.3% on the Aircrafts dataset and 6.3% on Cars196. - Furthermore, our newest experiments in Table X1 and X2 (attached PDF) further corroborate the superiority of GeNIe compared to Txt2Img when adopting more recent and smaller diffusion models (i.e., Stable Diffusion v3 and SDXL-Turbo). **[hT8y][W2]: consistently choosing a large $r$; considering other applications such as OOD:** - Fig. 6 and Table 4 of Section 4.2, taken together, tell the story a bit better. Even though it might seem a larger $r$ would be a sensible option to opt for (at least on average), in practice such a choice will distance the model from capturing the context and low-level features of the source category. So, striking a balance between as low as possible a choice of $r$ (to capture low-level source features) and as high as possible to result in a high Oracle score (confirming the correct target category) is crucial, and this is what GeNIe-Ada does. - Note that the margin between GeNIe-Ada and GeNIe (for instance, with $r\geq 0.9$) becomes more pronounced when taking the $1$-shot setting into account, in all few-shot settings. 
- Considering GeNIe for OOD applications is a great idea, and we have already started looking into it, following your remark. While we will definitely cite [Your Ref 1, on OOD] in the revised paper, we feel this requires more work than we can afford for this revision and would fit best into our future work. **[hT8y][W3]: Txt2Img with a larger number of augmentations to make up for its relatively lower computation complexity:** - Thanks for the insightful remark. Given that GeNIe-Ada searches for the best hard-negative over multiple noise ratios $r$, it naturally requires a higher compute budget than Txt2Img, which only uses $r=1$. For this experiment, we use GeNIe-Ada with $r \in \{0.6, 0.7, 0.8\}$ to compare with Txt2Img. Based on this, we only have $3$ paths (with steps of $0.1$), and for each of them we go through a partial reverse diffusion process. E.g. for $r=0.6$ we do $30$ steps instead of the standard $50$ steps of Stable Diffusion. This practically brings the total run-time of GeNIe-Ada to approximately $2$ times that of the standard reverse diffusion (GeNIe-Ada: total $r = 0.6 + 0.7 + 0.8 = 2.1$ vs Txt2Img total $r = 1$). Thus, to be fair, we generate twice as many Txt2Img augmentations as compared to GeNIe-Ada to keep a constant compute budget across the methods, following your suggestion. The results are shown in Table X3. As can be seen, even in this new setting, GeNIe-Ada offers a performance improvement of $0.8\%$ to $1.9\%$ across different backbones. - Note that GeNIe itself (due to the partial reverse process) is actually faster than Txt2Img, and yet consistently more accurate. **[hT8y][W4]: GeNIe is applicable to the fine-tuning stage rather than the pretraining stage:** - We agree, even though theoretically nothing stops one from applying GeNIe at the pretraining stage, especially in an offline augmentation setting where latency is of no concern. 
Note that data augmentation is mostly beneficial in data-deficient settings, rather than at the pretraining stage, where data is typically abundant. **[hT8y][Q1-p1]: classification involving a larger number of classes:** - GeNIe can be applied to a large number of classes, as we already do in the long-tail ImageNet experiment (Section 4.2). However, as discussed in the paper, we use the confusion matrix to sub-select the source-target pairs efficiently out of the large pool of classes. **[hT8y][Q1-p2]: using confusion matrix post training convergence:** - Thanks for highlighting this. In our ImageNet-LT experiments, we compute the confusion matrix on a held-out set split from the full training set. We will clarify this in the revised draft. **[hT8y][Q2]: incompatible source-target classes:** - Thanks for this comment. Semantic compatibility is indeed a difficult matter to quantify. That said, following your remark, Fig. X1 (see attached PDF) summarizes a few examples where source and target are incompatible. As can be seen, GeNIe starts from a pizza image and gradually transforms it towards the volcano while capturing the low-level features of the pizza image. This transformation (semantic switch), however, in such incompatible cases seems to occur at larger noise ratios $r$ when compared to easier cases. We will include this figure in the camera ready version and elaborate on this point in the main text of the revised draft. **[hT8y][Q3]: failure cases of the algorithm:** - Great point. We are conscious that there are _exceptional_ failure cases (where the right choice of $r$ is ambiguous due to the presence of a mixture of both source and target predominant visual features - e.g. in Fig. X1, the top row) in which the automated noise selection process does not return an ideal outcome. We will discuss this point further in the Limitations Section in the revised draft. 
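The per-sample noise selection discussed in these responses can be sketched as follows. This is an illustrative simplification, not the paper's Algorithm 1: `generate` and `oracle` are hypothetical stand-ins for the partial reverse-diffusion call and the classifier, and the selection rule shown (smallest $r$ whose generation the oracle assigns to the target class) is one way to balance a low $r$ (preserving source features) against a confirmed semantic switch.

```python
def genie_ada_select(generate, oracle, target, ratios=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """Return the smallest noise ratio r whose generated image the oracle
    labels as the target class, together with that image.

    generate(r) -> image: hypothetical partial reverse-diffusion call.
    oracle(image) -> label: hypothetical classifier used as the oracle.
    Returns (None, None) if the semantic switch never happens.
    """
    for r in sorted(ratios):
        img = generate(r)
        if oracle(img) == target:
            return r, img
    return None, None
```

The failure cases mentioned in the rebuttal (a mixture of source and target visual features) would correspond to oracle decisions that are unstable across $r$, where "smallest accepted $r$" is no longer well defined.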
We do hope this addresses all your concerns, please do not hesitate to let us know if you have any further suggestions. --- Rebuttal Comment 1.1: Title: Thank you! Comment: Thank you very much for your additional experiments and responses; I have raised my rating supporting acceptance! --- Reply to Comment 1.1.1: Title: Much Appreciated! Comment: We are pleased that you are happy with our additional experiments, and responses. Thank you for raising your score!
Summary: This paper introduces GeNIe, a data augmentation method for training vision models using synthetic images. GeNIe generates images by combining a source category image with a target category text prompt, selecting those that feature source characteristics but belong to the target category as negative samples. Experimental results show that GeNIe improves performance in both few-shot and long-tail distribution settings. Strengths: * The proposed GeNIe improves the performance in few-shot and long-tail distribution settings. * The paper provides extensive experiments to support the claims, including the selection of noise levels. * The paper is well-written and easy to follow. Weaknesses: The key idea of GeNIe is to use image editing to combine features from two categories. Here are several questions: * Regarding controllable image augmentation * Line 9 mentions that GeNIe "retains low-level and background features from the source image." How does GeNIe control which features are retained or changed? * To combine features from different categories, how about adding the attribute from the target category to the prompt? For example, a "[dog] with [wings]". This method does not require careful selection of denoising steps. * Other image editing methods, such as those in [1] and [2], efficiently control image changes using prompts or user instructions. For example, they can transform a car into a motorcycle in Figure 2, while keeping the background unchanged for more challenging negative samples. What advantages does GeNIe offer over these methods? * GeNIe generates images "using images from all other classes as the source image" (line 227). Will all (source image, target prompt) pairs lead to effective image generation? Which types of pairs contribute the most to the final accuracy? 
[1] Prompt-to-Prompt Image Editing with Cross-Attention Control [2] InstructPix2Pix: Learning to Follow Image Editing Instructions, CVPR 2023 Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the above weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The paper has discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{[3KAK][W1]}$: $\textbf{How does $\texttt{GeNIe}$ control which features are retained or changed}$: We instruct the diffusion model to generate an image by combining the latent noise of the source image with the textual prompt of the target category. This combination is controlled by the amount of added noise and the number of reverse diffusion iterations. This approach aims to produce an image that aligns closely with the semantics of the target category while preserving the background and features from the source image that are unrelated to the target. - To demonstrate this, we have prepared Fig. X3 in the attached PDF. Here, we progressively build up the two key components of $\texttt{GeNIe}$: (i) a careful choice of $r$ and (ii) a contradictory prompt. The input image is a bird in a cage. The top row shows a Stable Diffusion model, unprompted. As can be seen, such a model can generate anything (irrespective of the input image) with a large $r$. Now prompting the same model with "a photo of a bird" allows the model to preserve low-level and contextual features of the input image (up to $r = 0.8$ and $0.9$); then, for $r = 1.0$, it returns a bird but the context has nothing to do with the source input. This illustrates how a careful choice of $r$ can help preserve such low-level features, and is a key idea behind $\texttt{GeNIe}$. However, we also need a semantic switch to a different target class, as shown in the last row, where a rarely seen image of a dog in a cage is generated by a combination of a careful choice of $r$ and the contradictory prompt, leading to the full mechanics of $\texttt{GeNIe}$. This sample now serves as a hard negative for the source image (bird class). $\textbf{[3KAK][W2]}$: $\textbf{adding attributes from the target category ... a ``[dog] with [wings]''}$: - Based on the reviewer's example, it appears they are suggesting adding attributes from the source category to the prompt. 
We should clarify the distinction between ambiguous examples and hard examples. For instance, using a prompt like "[dog] with [wings]" results in an image of a dog with wings (please see Fig. X2-[C]), but the label for this example remains unclear due to its ambiguity. Such ambiguous examples could potentially confuse the training process. In contrast, hard examples are those where the label is clearly defined. For example, a prompt like "dog in a cage" provides a clear context and should be labeled as "dog" for an animal classification task. We aim to generate these hard examples, where the correct label is unambiguous, to improve the clarity and effectiveness of the training. - We suspect the main point of the reviewer's remark is to assess whether the design engineering of $\texttt{GeNIe}$ can be replaced by a standard Stable Diffusion model given a more elaborate prompt reflecting the low-level contextual information of the source image. This is a great suggestion, even though we need to highlight that giving a contradictory prompt opposing the source image is by itself part of $\texttt{GeNIe}$'s proposition and novelty. That said, Fig. X2 does exactly that: comparing a standard Stable Diffusion with a more elaborate prompt "a dog in a cage" and $\texttt{GeNIe}$. As can be seen, the former can result in a dog in a cage where neither the dog nor the cage resembles those in the source image. On the contrary, $\texttt{GeNIe}$ does preserve the contextual features of the source image, as such generating effective/challenging hard negatives for the given source image. $\textbf{[3KAK][W3]}$: $\textbf{advantage of $\texttt{GeNIe}$ over image editing approaches in hard negative generation}$: - According to our main contributions stated in line 57 of the main manuscript, we do not claim novelty on using diffusion models for image editing. Instead, our primary contribution lies in leveraging these tools to generate $\textit{hard negatives}$ for training. 
Note that advances in image editing techniques ([Your Ref 1] and [Your Ref 2]) are orthogonal to our contribution, and we believe that improvements in image editing techniques over time could further enhance our results and increase the effectiveness of our approach. In that light, other diffusion-based image editing techniques can also be used as the backbone engine of $\texttt{GeNIe}$, where the novelty is (i) to find the right noise threshold and (ii) to provide a contradictory prompt. - Notably, we will cite both [Your Ref 1] and [Your Ref 2] in the revised draft as alternatives for an image editing backbone. $\textbf{[3KAK][W4]}$: $\textbf{choice/effectiveness of source-target class pairs}$: - You are right. Not all pairs will lead to examples informative for training. For the very same reason, in Lines 136 and 258, we discuss choosing the pairs of source and target using the confusion matrix of an initially trained model (e.g. for long-tail distributions). We do hope this addresses all your concerns, please do not hesitate to let us know if you have any further suggestions. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal, which addressed my concerns well. I will be increasing my rating. --- Reply to Comment 1.1.1: Title: Much appreciated Comment: Dear Reviewer 3KAK, We are pleased to hear we have successfully addressed your concerns; many thanks for raising your final score. Best regards, Authors
Summary: In this paper, the idea is to generate data for data augmentation by utilizing a pre-trained diffusion model. The method employs different text prompts and an adjusted noise scheduler to generate hard negative samples for the source distribution. "GeNIe" creates new augmentations using diffusion by leveraging source images and contradictory target prompts. "GeNIe-Ada" adjusts noise levels on a per-sample basis, using the classifier's decision boundary to select the right threshold. Strengths: - The method offers infinite possibilities to separate the source from the target. - The idea is simple, original, and convincing. - The ablation studies and experiments demonstrate strong performance. Weaknesses: - The method is slow, particularly GeNIe-Ada, as it requires generating an image through multiple forward passes of a diffusion model and using a classifier to select the appropriate threshold $r$. - The number of steps required to retain low-level features is crucial for optimizing the method's performance. - The method relies on access to a foundational text-to-image model trained on billions of images. Technical Quality: 3 Clarity: 3 Questions for Authors: - Stable diffusion utilizes data scraped from the web (LAION), and there is a high probability that the image from validation set of ImageNet is included in the LAION training set. Moreover, stable diffusion tends to replicate the training set [1]. How do you ensure that the augmented images used are not also present in our testing set? - How does the method perform when using a diffusion model trained exclusively on ImageNet or other diffusion models besides Stable Diffusion? - How do you ensure that the generated images still resemble natural images? Some prompts could diverge significantly from the source distribution, resulting in images that may be far from the original distribution. 
[1] Nicholas Carlini et al.: "Extracting Training Data from Diffusion Models" Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: / Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{[Q6Ew][W1]}$: $\textbf{slowness of \texttt{GeNIe-Ada}}$: - Thanks for this remark. As we highlight in our limitations, we acknowledge that $\texttt{GeNIe}$ is comparatively slower than traditional augmentation methods, while standing on par with other generative methods (or even running faster, in the case of barebone $\texttt{GeNIe}$, due to the partial reverse process). Optimization/efficiency of diffusion-based models is an active line of research, making approaches like ours more favorable in the future due to their superior performance and capacity. - When it comes to $\texttt{GeNIe-Ada}$, in practice we only probe $r \in [0.5, 0.9]$ (i.e., $5$ paths, with steps of $0.1$), and for each of them we go through a partial reverse diffusion process. E.g. for $r=0.5$ we do $25$ steps instead of the standard $50$ steps of Stable Diffusion. This practically brings the total run-time of $\texttt{GeNIe-Ada}$ to roughly $2-3$ times that of any standard reverse diffusion. - Lastly, as discussed in Section 5, this latency might even be irrelevant/negligible when it comes to offline augmentation scenarios. $\textbf{[Q6Ew][W2]}$: $\textbf{crucial importance of $r$}$: - We agree, and that is the reason behind automating this critical parameter (through $\texttt{GeNIe-Ada}$) in favor of efficiency. $\textbf{[Q6Ew][W3]}$: $\textbf{relying on accessing a foundation model}$: - We believe that with the current upsurge of interest, such foundation models are going to become commodities in the near future, further underscoring the importance of methodologies like ours when it comes to data augmentation. Notably, $\texttt{GeNIe}$ outperforms other diffusion-based competitors adopting a similar engine, as we illustrate in Sections 4.1 and 4.2. $\textbf{[Q6Ew][Q1]}$: $\textbf{ensuring augmented samples are not present in the test set}$: - This is a great comment. To substantiate our understanding, we have run a set of experiments on the few-shot setting. 
To set the scene, we use a pretrained image encoder (DeiT-Base) as an oracle to extract the latent embeddings corresponding to the train (i.e. support) set, test (i.e. query) set, and augmentations generated by $\texttt{GeNIe}$. Fig. X4 demonstrates the distribution of distances between train-test and augmentation-test pairs across $600$ episodes. As can be seen, the (mean of the) distribution of augmentation-test pairs is higher than that of train-test pairs, indicating that the augmented samples are indeed different from the test sets (based on the strong assumption of train and test sets being mutually exclusive). This is further illustrated in the last column of Fig. X4 on a UMAP embedding plot of a random episode, where the embeddings of train, test and augmentation samples are plotted. Here again there is a noticeable separation between the augmentation and test samples as compared to train and test samples. $\textbf{[Q6Ew][Q2]}$: $\textbf{performance on a different diffusion model or one trained on ImageNet}$: - Using a much larger dataset such as LAION seems to be the prevalent choice for training diffusion models. We had two reasons for not adopting models trained on ImageNet: (i) to avoid limiting the taxonomy of classes to the ones present in ImageNet, in favor of a much larger set (present in LAION); (ii) also considering the fact that some of our few-shot settings (e.g. tiered/mini-ImageNet) are derivatives of ImageNet itself. - Following your suggestion, we have tried experimenting with both smaller and more recent diffusion models (please see the PDF, therein Table X1 and X2). More specifically, we have used Stable Diffusion XL-Turbo to generate hard-negatives through $\texttt{GeNIe}$ and $\texttt{GeNIe-Ada}$. Few-shot classification results on miniImagenet with these augmentations are shown in Table X1. The accuracies follow a similar trend to that of Table 1 in the main manuscript, where Stable Diffusion 1.5 was used to generate augmentations. 
$\texttt{GeNIe-Ada}$ improves UniSiam's few-shot performance the most as compared to $\texttt{GeNIe}$ with different noise ratios $r$, and even when compared to $\texttt{Txt2Img}$. This empirically indicates the robustness of $\texttt{GeNIe}$ and $\texttt{GeNIe-Ada}$ to different diffusion engines. Note that Stable Diffusion XL-Turbo by default uses $4$ steps for the sake of optimization, and to ensure we can have the right granularity for the choice of $r$ we have set the number of steps to $10$. That is already 5 times faster than the standard Stable Diffusion v1.5 with $50$ steps used throughout the original submission. Our experiments with Stable Diffusion v3 (which is a totally different model with a Transformers backbone), reported in Table X2, also convey the same message. As such, we believe our approach is generalizable across different diffusion models. It is to be noted that for SDv3.0 (Table X2), we test the few-shot classification accuracies on 200 episodes instead of the standard setting of 600 episodes. This leads to a higher standard deviation in the reported accuracy scores. We will report accuracy scores using the complete 600 episodes in the final draft of the camera ready version. $\textbf{[Q6Ew][Q3]}$: $\textbf{ensuring generated images resemble natural ones}$: - We agree that it is almost impossible to ensure every generated image lies in the manifold of natural images; however, the generated images are definitely closer to natural images when compared to traditional methods such as Cutmix and Mixup. We will modify the text to reflect on this accordingly. - Regardless of whether the generated augmentations lie in the manifold of natural images, we demonstrate the efficacy of $\texttt{GeNIe}$ on $7$ different datasets, irrespective of whether or not the downstream datasets have semantic overlap with the pretraining data of the diffusion model. 
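The run-time accounting used in the slowness discussion above (partial reverse diffusion running only the last $r$-fraction of the schedule, e.g. $r=0.5$ meaning 25 of 50 steps) reduces to simple arithmetic. The helper names below are ours, not the paper's, and this is only a cost sketch, not a diffusion implementation:

```python
import math

def reverse_steps(r, total_steps=50):
    # Partial reverse diffusion: only the last r-fraction of the
    # denoising schedule is run, e.g. r=0.5 -> 25 of 50 steps.
    return math.ceil(r * total_steps)

def relative_cost(ratios, total_steps=50):
    # Cost of probing several noise ratios (as GeNIe-Ada does),
    # relative to one full (r=1.0) text-to-image pass.
    return sum(reverse_steps(r, total_steps) for r in ratios) / total_steps
```

For the equal-compute comparison in the rebuttal to reviewer hT8y, `relative_cost((0.6, 0.7, 0.8))` gives 2.1, i.e. roughly twice the budget of one Txt2Img pass, which is why twice as many Txt2Img augmentations were generated there.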
We do hope this addresses all your concerns, please do not hesitate to let us know if you have any further suggestions. --- Rebuttal Comment 1.1: Title: Follow Up - Deadline Approaching Comment: Dear Reviewer Q6Ew, Firstly, many thanks for your rigorous, positive and constructive feedback. The deadline for reviewer-author discussion is approaching soon. We have put in tremendous effort in compiling a detailed response trying our very best to address all your concerns (please see the attached PDF and our P2P responses). If you are convinced and happy with our responses, please kindly consider re-evaluating/raising your *final score*; please also let us know if you have any further questions or concerns; we'll be more than happy to address those. Many thanks for your insightful feedback. Best regards, Authors.
Summary: This paper introduces a novel augmentation method based on diffusion models. A latent diffusion model conditioned on a text prompt generates hard negatives, by adjusting the noise level. The hard negatives can be used as challenging augmentations. The authors demonstrate the effectiveness of their approach on long-tail and few-shot settings. Strengths: - Well-written paper with clear contributions and presentation. - Extensive experiments and evaluation. - Interesting and useful idea. - Code included in the supplementary. Weaknesses: I am generally happy with the paper, experiments, and presentation. A weakness seems to be the selection of the noise ratio r. The authors propose an algorithm for this. However, I am concerned how sensitive it is for different datasets or classification settings. This might affect performance in other settings or in real-world scenarios. If this is true, it might degrade the overall method's usefulness. Technical Quality: 4 Clarity: 4 Questions for Authors: Can the authors comment on the above weakness, regarding the selection of r and its sensitivity to the dataset or setting? Also, have the authors considered a different latent diffusion model? Would a smaller diffusion model and/or trained on a smaller amount of data give similar results/benefit? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have added a section for limitations and a section for broader impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{[qsxy][W1]}$: "$\textit{happy with the paper, experiments, and presentation}$", $\textbf{selection of noise ratio $r$ across different datasets}$: - We are pleased with the reviewer's positive feedback, and for finding our proposed ideas interesting. - Thanks for the interesting remark. Indeed, in Appendix A2 Table A1 we provide further experimentation on fine-grained classification (as an example of other $\textit{benchmarks}$), which we think corroborates that $\texttt{GeNIe-Ada}$ can handle such unforeseen circumstances to a good extent. Notably, throughout the paper we investigate the impact of GeNIe-Ada on $7$ different $\textit{datasets}$ (tiered- and miniImagenet, CUB200, Cars196, Imagenet-LT, Food101, Aircraft), demonstrating the robustness of $\texttt{GeNIe-Ada}$ across datasets. That said, we are conscious that there are $\textit{exceptional}$ failure cases (where the right choice of $r$ is ambiguous due to the presence of a mixture of both source and target predominant visual features - Fig. X1, top row) in which the automated noise selection process does not return an ideal outcome, as is already explained in our response to reviewer $\textbf{hT8y}$. We will discuss this point further in our revised draft. $\textbf{[qsxy][Q1]}$: $\textbf{different diffusion model and/or smaller dataset}$: - Regarding smaller datasets, the smallest dataset we are aware of being adopted for training diffusion models is ImageNet, and using a much larger dataset such as LAION seems to be the prevalent choice. We had two reasons for not adopting ImageNet: (i) to avoid limiting the taxonomy of classes to the ones present in ImageNet, in favor of a much larger set (present in LAION); (ii) also considering the fact that some of our few-shot settings (e.g. tiered/mini-ImageNet) are derivatives of ImageNet itself. - Following your suggestion, we have tried experimenting with both smaller and more recent diffusion models (please see the PDF, therein Table X1 and X2). 
More specifically, we have used Stable Diffusion XL-Turbo to generate hard-negatives through $\texttt{GeNIe}$ and $\texttt{GeNIe-Ada}$. Few-shot classification results on miniImagenet with these augmentations are shown in Table X1. The accuracies follow a similar trend to that of Table 1 in the main manuscript, where Stable Diffusion 1.5 was used to generate augmentations. $\texttt{GeNIe-Ada}$ improves UniSiam's few-shot performance the most as compared to $\texttt{GeNIe}$ with different noise ratios $r$, and even when compared to $\texttt{Txt2Img}$. This empirically indicates the robustness of $\texttt{GeNIe}$ and $\texttt{GeNIe-Ada}$ to different diffusion engines. Note that Stable Diffusion XL-Turbo by default uses $4$ steps for the sake of optimization, and to ensure we can have the right granularity for the choice of $r$ we have set the number of steps to $10$. That is already 5 times faster than the standard Stable Diffusion v1.5 with $50$ steps used throughout the original submission. Our experiments with Stable Diffusion v3 (which is a totally different model with a Transformers backbone), reported in Table X2, convey the same message. As such, we believe our approach is generalizable across different diffusion models. It is to be noted that for SDv3.0 (Table X2), we test the few-shot classification accuracies on 200 episodes instead of the standard setting of 600 episodes. This leads to a higher standard deviation in the reported accuracy scores. We will report accuracy scores using the complete 600 episodes in the final draft of the camera ready version. We do hope this addresses all your concerns, please do not hesitate to let us know if you have any further suggestions. --- Rebuttal Comment 1.1: Title: Follow-up on our responses Comment: Dear Reviewer qsxy, Firstly, many thanks for your rigorous, positive and constructive feedback. The deadline for reviewer-author discussion is approaching soon. 
We have put in tremendous effort in compiling a detailed response trying our very best to address all your concerns (please see the attached PDF and our P2P responses). If you are convinced and happy with our responses, please kindly consider re-evaluating/raising your *final score*; please also let us know if you have any further questions or concerns; we'll be more than happy to address those. Once again, thank you for your insightful feedback. Best regards, Authors.
Rebuttal 1: Rebuttal: We do appreciate the reviewers' constructive feedback, which helped to further improve the quality and clarity of the paper. We are pleased by the positive feedback from reviewers [$\textbf{qsxy}$ and $\textbf{Q6Ew}$] for finding our proposed ideas "interesting", "original" and "convincing"; we also thank $\textbf{ALL reviewers}$ for finding our narrative "well-written" and our experimentation and ablation studies "extensive/comprehensive" and supportive of the proposed ideas. After perusing the reviewers' remarks and recommendations, we have put in tremendous effort to provide further evidence (new experimentation and qualitative demonstrations) to corroborate the efficacy of $\texttt{GeNIe}$, as summarized below: - In response to reviewers [$\textbf{qsxy}$ and $\textbf{Q6Ew}$], we present two sets of new experimentation with $\textit{different}$ and $\textit{smaller}$ diffusion models. - In response to reviewer $\textbf{Q6Ew}$, regarding ensuring discrepancy between the generated augmentations and the test set, we present statistical results on image embeddings. - In response to reviewer $\textbf{3KAK}$, we present two new sets of qualitative results elaborating on how $\texttt{GeNIe}$ preserves low-level features as well as on the impact of using a more elaborate prompt instead of $\texttt{GeNIe}$. - In response to reviewer $\textbf{hT8y}$, we present new sets of results comparing $\texttt{Txt2Img}$ with a larger number of augmentations against $\texttt{GeNIe}$, and qualitative results on source-target incompatibility as well as potential failure cases of $\texttt{GeNIe-Ada}$. $\textbf{Remark}$: Please find attached a PDF summarizing all our new experimentation and qualitative demonstrations. We will be referring to this PDF throughout our point-by-point responses to each reviewer. We do hope this addresses the reviewers' concerns and questions, and we look forward to engaging further during the reviewer-author discussion period. 
Pdf: /pdf/6a46a21dc8e6083a67151a723d32191176041992.pdf
NeurIPS_2024_submissions_huggingface
2024
DOPPLER: Differentially Private Optimizers with Low-pass Filter for Privacy Noise Reduction
Accept (poster)
Summary: The paper looks at the sequence of DP-SGD noisy gradients from a signal processing perspective, and argues that the exact gradients are likely a low-frequency signal, while the noise has higher frequencies. As a result, the paper proposes filtering out the high frequencies with a low-pass filter to improve the signal-to-noise ratio. The filter is a post-processing of the noisy gradients, so there is no extra privacy cost. The paper then studies the convergence rate of DP-SGD with the filter under standard assumptions, and finds that the filter can improve the constants of the convergence rate if the hyperparameters of the filter are chosen well. The paper also contains experiments comparing several DP first-order optimisers with and without the filter on several datasets and models. The filtered variant consistently outperforms the unfiltered one. Strengths: Investigating the behaviour of DP-SGD in the frequency domain is interesting, and to my knowledge novel. The proposed low-pass filter is justified theoretically and consistently improves performance across several models, datasets and optimisers. Since the filter is just a post-processing of the noisy gradients, the privacy analysis does not change, and the filter should be easy to implement with different variants of DP-SGD. Weaknesses: Many of the signal processing concepts are introduced too briefly, considering the audience, which makes fully understanding the theory difficult. Adding a section to the appendix explaining them clearly would make the theory much easier to understand for people not familiar with signal processing. The method used to compute the privacy bounds and the subsampling method in the experiments are not mentioned clearly in the paper. "Uniformly draw minibatch" or "randomly draw minibatch" from the algorithm listings is ambiguous, and could mean any of Poisson, with replacement or without replacement subsampling. 
Minor points: - Line 43: 250K steps for LLAMA training sounds like a typo. - It would be good to mention that there are multiple ways to interpret "neighbourhood" (substitute or add/remove), and that the paper's results do not depend on the precise definition. - As far as I know, the Gaussian mechanism bound in Definition 2 has only been proven for $\epsilon \leq 1$. - It is not clear how Algorithm 3 should be read for DPAdam or LP-DPAdam. The update on line 17 uses $\mathbf{d}_t$, but the Adam lines only compute $\tilde{\mathbf{d}}_t$. Technical Quality: 4 Clarity: 3 Questions for Authors: - Do the adaptively selected filter coefficients from Section 5.3 have the same convergence guarantee? - Did you experiment with adaptively selected filter coefficients from Section 5.3? - Why does the performance of LP-DPSGD drop with larger epsilons when fine-tuning on CIFAR-10 in Figure 10? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The paper clearly mentions the most important limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
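The mechanism described in this review — treating the sequence of privatized gradients as a noisy signal and low-pass filtering it as pure post-processing — can be sketched in a few lines. The following is a hypothetical NumPy illustration with made-up filter coefficients, not the authors' implementation; it only demonstrates the SNR argument on a synthetic constant gradient:

```python
import numpy as np

def lp_filter(m_prev, g, g_prev, a1=-0.9, b0=0.05, b1=0.05):
    """First-order IIR low-pass filter on the privatized gradient:
    m_t = -a1 * m_{t-1} + b0 * g_t + b1 * g_{t-1}.
    Pure post-processing of the noisy gradient, so no extra privacy cost.
    DC gain (b0 + b1) / (1 + a1) = 1, so the exact gradient passes unchanged."""
    return -a1 * m_prev + b0 * g + b1 * g_prev

rng = np.random.default_rng(0)
true_g = np.ones(50)               # stand-in for a slowly varying exact gradient
m = np.zeros(50)
g_prev = np.zeros(50)
raw_mse = filt_mse = 0.0
for t in range(1000):
    g = true_g + rng.standard_normal(50)   # "privatized" gradient: signal + noise
    m = lp_filter(m, g, g_prev)
    g_prev = g
    if t >= 100:                           # skip the filter's burn-in
        raw_mse += np.mean((g - true_g) ** 2)
        filt_mse += np.mean((m - true_g) ** 2)
print(filt_mse < raw_mse)   # the filtered estimate has a higher SNR
```

Because the filter acts only on already-privatized gradients, the DP guarantee is untouched; the filter trades a short burn-in for a large reduction in per-step noise variance.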
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and providing constructive feedback to us. Below, we answer your questions. Please take a look at them and let us know if there are any remaining questions, and we would be more than happy to continue the discussion. $\quad$ > Many of the signal processing concepts are introduced too briefly, considering the audience, which makes fully understanding the theory difficult. Adding a section to the appendix explaining them clearly would make the theory much easier to understand for people not familiar with signal processing. We agree with the reviewer that a discussion of the signal processing background would be very helpful to readers with no background in the signal processing domain. We provided a more detailed discussion of the background in our general response to the reviewers. We will expand and add this discussion to the revised manuscript of our paper. $\quad$ > The method used to compute the privacy bounds and the subsampling method in the experiments are not mentioned clearly in the paper. "Uniformly draw minibatch" or "randomly draw minibatch" from the algorithm listings is ambiguous, and could mean any of Poisson, with replacement or without replacement subsampling. Great point! To clarify, we are using the subsampling without replacement strategy. The privacy guarantee is discussed in Balle et al., 2018. We will clarify this in our revision. $\quad$ > Minor points > Line 43: 250K steps for LLAMA training sounds like a typo. According to Touvron et al., 2023, LLAMA-7B/13B are pre-trained with 1T tokens, with a batch size of 4M tokens. Therefore, the total number of pre-training steps is 1T/4M = 250K. $\quad$ > It would be good to mention that there are multiple ways to interpret "neighborhood" (substitute or add/remove), and that the paper's results do not depend on the precise definition. Agreed! We will discuss this point in our revised paper.
$\quad$ > As far as I know, the Gaussian mechanism bound in Definition 2 has only been proven for $\epsilon\leq 1$. You are correct that in the original proof in Dwork and Roth, 2014, the guarantee for the Gaussian mechanism is provided only for the case $\epsilon \in (0,1)$; the refined proof for general $\epsilon > 0$ and improved bounds are given by Abadi et al., 2016, and Mironov et al., 2019. We will update the citations in the revised version. $\quad$ > It is not clear how Algorithm 3 should be read for DPAdam or LP-DPAdam. The update on line 17 uses $d_t$, but the Adam lines only compute $\tilde{d}_t$. We apologize for the confusion. If only LP-DPAdam or DPAdam is used, then $\tilde{g}_t = g_t$ in line 9, and $d_t =\tilde{d}_t$ in line 16 in Algorithm 3. We will revise the description of the algorithm. $\quad$ > Do the adaptively selected filter coefficients from Section 5.3 have the same convergence guarantee? Did you experiment with adaptively selected filter coefficients from Section 5.3? This is a great question. Currently, we have neither theoretical results nor numerical experiments for the optimal FIR filter approach proposed in Sec. 5.3. The major focus of this paper is to propose the frequency domain perspective and analysis for DP noise reduction. The filters investigated and analyzed in the paper are *time-invariant*, while the adaptive filter is *time-varying*. We will discuss this briefly in the paper and will leave it as a potential and promising future direction. $\quad$ > Why does the performance of LP-DPSGD drop with larger epsilons when fine-tuning on CIFAR-10 in Figure 10? Good observation! Notice that the test accuracy drop at such a high accuracy level for fine-tuning is quite small (only 0.1%), and the training accuracy in such cases did not drop. Therefore, the performance drop should be attributed to overfitting. $\quad$ Thank you for reading our rebuttal. 
If you have any questions, we would be more than happy to continue discussing them with you. --- Rebuttal Comment 1.1: Comment: Thank you for the response. You have addressed most of my concerns. I especially appreciate the background to signal processing you have in the general response. > While the refined proof for general $\epsilon > 0$ and improved bounds are given by Abadi et al., 2016, and Mironov et al., 2019. We will update the citations in the revised version. Can you point out which of their results you mean? I looked through them quickly and did not find a result which has the same constants as your Definition 2. I'm assuming that by Mironov et al. (2019) you are referring to "Rényi Differential Privacy of the Sampled Gaussian Mechanism", since there is no Mironov et al. (2019) in your bibliography. --- Rebuttal 2: Title: Further response to Reviewer WMCW Comment: Thank you for your kind response and for providing additional feedback. We apologize that we misunderstood your question on $\epsilon < 1$ for the DP guarantee. Our original response was on Thm. 1, which is correct and used in the later proofs of our paper. You are correct that Def. 2 of the Gaussian mechanism requires $\epsilon < 1$. A refined analysis of the Gaussian mechanism is given in Thm. 2 of [R1], which states that $\sigma = \frac{\Delta}{\epsilon}\cdot \frac{\sqrt{2}(a+\sqrt{a^2+\epsilon})}{2},$ with $\mathrm{erfc}(a) - e^\epsilon \mathrm{erfc}(\sqrt{a^2+\epsilon}) = 2\delta.$ This bound works for all $\epsilon > 0, \delta < 1.$ We will follow your comment and fix the error in our Def. 2. We would like to point out that we did not use this result in our experiments or proofs in the paper; we just used this definition to introduce the Gaussian mechanism, so we can easily fix this issue and cite proper references. Please also let us know if other references should be included for such a revision. [R1] Zhao, J., Wang, T., Bai, T., Lam, K. Y., Xu, Z., Shi, S., ... & Yu, H. (2019). 
Reviewing and improving the Gaussian mechanism for differential privacy. arXiv preprint arXiv:1911.12060. Again, thank you for pointing out this issue.
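For context on the exchange above: the classical Dwork–Roth calibration of the Gaussian mechanism, the bound that is proven only for $\epsilon \in (0,1)$, is straightforward to state in code. This is a generic sketch, not tied to the paper; `sensitivity`, `eps`, and `delta` are illustrative inputs:

```python
import math

def gaussian_mechanism_sigma(sensitivity, eps, delta):
    """Classical Gaussian-mechanism calibration (Dwork & Roth, 2014):
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / eps.
    The (eps, delta)-DP guarantee is proven only for eps in (0, 1),
    which is the restriction discussed in the exchange above."""
    if not 0 < eps < 1:
        raise ValueError("classical bound holds only for eps in (0, 1)")
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / eps

sigma = gaussian_mechanism_sigma(sensitivity=1.0, eps=0.5, delta=1e-5)
```

Refined analyses such as [R1] remove the $\epsilon < 1$ restriction at the cost of an implicit equation in the erfc function.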
Summary: This paper augments DP-SGD with a low-pass filter-based postprocessing on the iterates of DP-SGD. This design is based on the intuition that the noise contributes more to the high frequencies, while the gradients (assuming sufficient smoothness of the objective) contribute more to the lower frequencies. The paper gives a theoretical convergence analysis, showing an improvement corresponding to a signal-to-noise ratio factor depending on the gradient auto-correlation. Importantly, precise knowledge of the gradient autocorrelation is not needed to run the algorithm, but it helps design better filters. The paper also shows broad empirical improvements from the proposed techniques in the vision setting. Strengths: - A strength of the proposed method is its simplicity: it gives a simple add-on to the iterates of DP-SGD. It includes momentum as a special case but allows for more sophisticated running averages of the gradients. - The paper is well-written: the main idea comes across easily (although clarity can be improved; suggestions to follow). - The theoretical analysis looks sound. - I think the presented results are significant: The experiments demonstrate a clear win from the proposed approach. They also nicely corroborate the intuition behind the auto-correlation of the gradients vs. noise. - The first-order filters that work quite well empirically only need a modest additional storage cost (two extra buffers) over DP-SGD. Weaknesses: While I really like this paper, my opinion is that it is currently lacking in some aspects. Given below are some suggestions to improve the paper along these dimensions. 1. Error bars: DP is a noisy process, so it would be good to see error bars across multiple repetitions of the experiments. This is especially important since the gaps appear to be small. 
Further, it would be good to see the final accuracies for the vision experiments in a table, as it is hard to judge how large the gap between the proposed method and the baseline is. 1. Autocorrelation assumption: The general form of Assumption A4 is quite natural given the intuition developed in the preceding sections. However, it is not clear why $c_\tau$ is unconstrained while $c_{-\tau}$ is required to be non-negative. This looks like a trick for the proof to go through, but further justification and empirical evidence (potentially in toy problems) are necessary. 1. Clarity: There is much scope for improvement to make the paper more reader-friendly (especially for those unfamiliar with signal processing). Examples: - A review section in the appendix recapping the basic ideas of the signal processing tools used - Example coefficients used in typical low-pass filters on page 5, and the kinds of weighted averages (the $\kappa_\tau$ coefficients) they would lead to. Also, a few figures to visualize the types of filters that work well with different types of auto-correlations could be helpful. - Theorem 2 could be restructured to make it easier to parse. A lot of quantities are used before their definitions. - The partial fraction decomposition of eq (7) is not guaranteed to be real. It would be worth clarifying that complex coefficients are allowed. Also, how are the $\kappa_\tau$ coefficients real? - What constraints are necessary on the filter coefficients so that the $\kappa_\tau$ coefficients define a proper weighted average (i.e. non-negative and sum to one)? 1. Parameter tuning: How are the filter coefficients tuned? It is only mentioned that the choices are "empirical" in Line 298. Ideally, for momentum SGD, we would tune the momentum and learning rate together; I would expect that tuning the filter coefficients and learning rate together would help. 1. 
Originality: the proposed approach is a straightforward application of classical ideas from signal processing to private optimization. While this can be viewed as a strength, I would have liked to see a more detailed investigation into the use of low-pass filters. For example: - An empirical investigation of the proposed adaptive filter. - A detailed empirical study on the choice of the filter coefficients. Only 3 different choices are explored (apart from DP-SGD and momentum), but it would be good to see the effect of changing filter coefficients for a first-order filter. - A plot of the performance vs. the order of the filter. - Some investigation on observed auto-correlation vs. choice of filter coefficients. 1. Minor comments and Potential typos: - what is $t$ in lines 263 and 265? - Top of page 13, in equation (c): should $\kappa_\tau \| \nabla F(x_{t-\tau})\|^2$ be $\kappa_\tau^2$ instead? - Top of page 13, eq 10: Extra $+$ sign between $\frac{L\eta^2}{2}$ and what follows. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. It would be nice to have a more detailed comparison to correlated noise approaches for DP optimization e.g. [KMST+](https://arxiv.org/abs/2103.00039) or [DMRST](https://arxiv.org/abs/2202.08312). For instance, can those approaches be interpreted as some filtering of the noise? Also [CDPG+ (Fig. 1 right)](https://arxiv.org/abs/2310.06771) appear to use a high-pass filter on the noise for DP optimization. This seems to be similar in spirit to the proposed low-pass filter approach and a detailed comparison would be nice. 1. There is no batch size factor in the privacy-utility tradeoff of Theorem 3. How? 1. How does the low-pass filter work on the iterates of DP-SGD instead of the gradients? For instance, it is common to obtain a sequence $x_1, x_2, ...$ from SGD but use the average $\frac{x_1 + \cdots + x_t}{t}$ for inference. 
I wonder if a sophisticated average from the low-pass filter can help for inference alone (without using it for training). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, but a more detailed empirical investigation would be good, as detailed above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
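The reviewer's question about the reality of the $\kappa_\tau$ coefficients can be checked numerically: $\kappa_\tau$ is the impulse response of an IIR filter with real coefficients $(a_\tau, b_\tau)$, so it stays real even when the roots $p_{a,\tau}$ of $1 + \sum_\tau a_\tau x^\tau = 0$ form a complex-conjugate pair. A sketch with illustrative second-order coefficients (not taken from the paper):

```python
import numpy as np

def impulse_response(a, b, n):
    """kappa_tau for m_t = -sum_tau a[tau]*m_{t-tau} + sum_tau b[tau]*g_{t-tau},
    obtained by feeding a unit impulse (g_0 = 1, g_t = 0 for t > 0).
    a = [a_1, ..., a_{n_a}], b = [b_0, ..., b_{n_b}]."""
    kappa = []
    m_hist = [0.0] * len(a)            # [m_{t-1}, m_{t-2}, ...]
    for t in range(n):
        m = -sum(a_tau * m_prev for a_tau, m_prev in zip(a, m_hist))
        m += b[t] if t < len(b) else 0.0   # the impulse contributes b_t at step t
        kappa.append(m)
        m_hist = [m] + m_hist[:-1]
    return np.array(kappa)

a = [-1.2, 0.72]        # roots of 1 - 1.2x + 0.72x^2 are 0.83 +/- 0.83i (complex pair)
b = [0.13, 0.26, 0.13]  # scaled so the DC gain sum(b) / (1 + sum(a)) equals 1
kappa = impulse_response(a, b, 30)
```

The same sequence is what `scipy.signal.lfilter([0.13, 0.26, 0.13], [1, -1.2, 0.72], impulse)` would return for a unit-impulse input: despite the complex poles, every $\kappa_\tau$ is real, and here they sum to (approximately) one, giving a proper weighted average up to truncation.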
Rebuttal 1: Rebuttal: Thank you for providing feedback to us. We are glad that you found our paper well-written, the results significant, and the theory meaningful. We are also very excited to hear that you really liked the paper. Below, we respond to your main comments: > Error bars The attached PDF provides the error bar for the experiments in Fig.3 (a). We will repeat the other single-run experiments and report all error bars in our revised manuscript. > Autocorrelation assumption Great question! The way to look at this assumption is as follows: since there is flexibility in choosing $c_\tau, c_{-\tau}$, we can always choose a small $c_\tau$ so that $c_{-\tau}$ is non-negative, and our theory applies. As you correctly pointed out, the constraint on $c_{-\tau}$ is not restrictive, and is only necessary for simplifying the proof. > Clarity Thank you for providing such detailed constructive feedback. We will address these points as discussed next: > * A review section in the appendix We will do it as described in our general response to the reviewers. > * Example coefficients for low-pass filters We provided the filter coefficients in Table 2 of the paper. A more detailed discussion on filter design is given in our general response to the reviewers and will be included in the revised manuscript. > * Restructuring Thm. 2 Agreed. We will put the definition of quantities ($\underline{SNR}$ and $\kappa$) before the theorem. > * Construction of $\kappa, p_{a,\tau}$ Indeed, $p_{a,\tau}$, as the solutions to $1 + \sum a_\tau x^\tau = 0$, might not be real. But the resulting weights $\kappa_\tau$ are guaranteed to be real. This is because $\kappa_\tau$ are the weights of the past gradients by recursively expanding $m_t = -\sum_{\tau=1}^{n_a}a_\tau m_{t-\tau} + \sum_{\tau=0}^{n_b}b_\tau g_{t-\tau} = \sum_{\tau=0}^{t}\kappa_\tau g_{t-\tau}.$ Since $a_\tau, b_\tau$ are real, $\kappa_\tau$'s are also real. 
We will add this discussion in our revised definition of $\kappa$ before Thm. 2. > * Constraints on the filter coefficients Great question! The design of the filter coefficients and the constraints are provided in our general response to the reviewers. In short, our constraints imply that the filter is stable and has a unit gain. > Parameter tuning... The design of the filter coefficients and the constraints are provided in the general response. The other hyper-parameters are tuned using grid search (see App. B.1, Tab. 1 in the original paper) jointly to achieve the optimal performance. > Originality: the proposed ... For example: > * Empirical investigation of the adaptive filter We agree that adaptive filters are worth further investigation. However, we leave this investigation to future work. This is because this paper focuses on frequency domain analysis, and the investigated filters are *time-invariant*; adaptive time-varying filters require a different set of analysis tools. > * On the choice of the filter coefficients In our PDF response, we further investigated different choices of first-order filter coefficients, in Fig. 2. We will expand this and include it in the paper. > * A plot of the performance vs. the order of the filter Fig. 6 in the paper investigates the performance of filters of different orders. We observe that a first-order filter performs better than no filter or a second-order filter. A refined version is included in the PDF response, where we compare filters with different combinations of $n_a$ and $n_b$. > Minor points > * $t$ in lines 263 and 265 From the definition of $\kappa_\tau: m_t = \sum_{\tau=0}^{t}\kappa_\tau g_{t-\tau},$ we see that $\kappa_\tau$ is also a function of $t.$ We apologize for the confusing choice of notation; we will replace $\kappa_\tau$ with $\kappa_{t,\tau}$ in the revised paper. > * should $\kappa_\tau |\nabla F(x_{t-\tau})|^2$ be $\kappa_\tau^2$ instead? 
We used Jensen's inequality: $\|\sum\kappa_\tau\nabla F(x_{t-\tau})\|^2 \leq \sum\kappa_\tau\|\nabla F(x_{t-\tau})\|^2,$ with $\sum \kappa_\tau \leq 1, \kappa_\tau \geq 0.$ > * Top of page 13, eq 10, extra $+$ sign Agreed, we will fix it. > It would be nice to have a more detailed comparison to correlated noise approaches for DP optimization Thanks for bringing up these related references. Roughly speaking, these methods (KMST+, DMRST, and CDPG+) can be viewed as releasing a *weighted prefix sum* with DP noise, i.e., $A(G_{0:t}+W_{0:t}),$ where $A$ is the prefix sum matrix and $W_{0:t}$ is the i.i.d. DP noise. KMST+ and DMRST apply a certain decomposition $A = BC$ and change the update to $B(CG_{0:t}+W_{0:t}) = AG_{0:t}+BW_{0:t},$ and CDPG+ provides a theoretical justification that when $B$ is a high-pass filter, and $g_t$ are correlated, the algorithm outperforms the original DPSGD. In contrast, our method can be written as $AM(G_{0:t}+W_{0:t})$, where $M$ is a low-pass filter. We will discuss these methods in our revised paper. Due to the response length limitation, the detailed discussion is given in the comment below. > No batch size in Thm. 3 From eq. (5) in Theorem 2, we see that the only place $B$ appears in the proof is in the last term $\sigma^2_{SGD}/B$, which is dominated by the previous term $d\sigma^2_{DP}$. Therefore, the choice of $B$ does not play a critical role in the privacy-utility trade-off in Thm. 3. Similar results can also be found in, e.g., Bassily et al., 2014, where $B$ also does not appear in the final bound. > LP filter on iterates of DP-SGD? Thank you for pointing out this interesting direction. The exponential averaging version of your suggestion has been tried in De et al., 2022. Since our paper aims at gradient denoising, an LP filter on the model for inference is out of the scope of the current paper, and we would like to leave it as a possible future direction. $\quad$ Thank you for reading our rebuttal. 
We hope the above responses address your concerns, and if there are any remaining questions, we would be more than happy to continue the discussion with you. --- Rebuttal Comment 1.1: Title: Additional comment to Reviewer vvxz Comment: > Detailed comparison to correlated noise approaches for DP optimization We would like to provide a detailed discussion of our method versus the correlated noise methods (KMST+, DMRST, and CDPG+) in this comment. As discussed in the above response, the update of the correlated noise methods can be written as $B(CG_{0:t}+W_{0:t}) = AG_{0:t}+BW_{0:t},$ where $B$ is a high-pass filter, and our method is $AM(G_{0:t}+W_{0:t})$, where $M$ is a low-pass filter. **Connection:** The correlated noise methods and our proposed method can all be viewed as processing the signal in the frequency domain to "separate" the noise and the gradient. **Differences:** The existing correlated noise methods 1) pre-process the gradient/noise to separate the gradient and the DP noise in the frequency domain, and therefore require careful design of the matrices $B, C$ for each problem and optimizer, 2) require extra memory ($O(d\log(t))$ to $O(dt)$), which is unrealistic for large-scale training, and 3) only work for the SGD update, since Adam cannot be written as such a prefix sum of privatized gradients. In contrast, our method 1) post-processes the noisy signal to extract the gradient from the noise in the frequency domain, 2) only requires $O(d)$ extra memory, which is *independent* of $t$, and 3) is compatible with any first-order optimizer since it just post-processes the gradient. --- Rebuttal 2: Title: Further response to Reviewer vvxz Comment: Thank you for your timely response and for increasing your score! We sincerely appreciate the time you spent reading our response and providing additional feedback. We apologize for not elaborating further in our original response. 
We mistakenly assumed that you are asking about the power spectral density (PSD) plots, which are related to the auto-correlation coefficients and show the low-frequency property of the stochastic gradients in the frequency domain (as shown in Fig. 2 in the original manuscript and Fig. 4 in the PDF response). This was the reason we added Fig. 4 in our rebuttal. The PSD plots are obtained by applying FFT to the auto-correlation coefficients, so they are directly related to the auto-correlation plots. Having said that, unfortunately, we are unable to include more figures in the response, but by applying inverse FFT (iFFT) to the PSD, we can easily obtain the auto-correlation coefficients, and $c_\tau, c_{-\tau}.$ We will follow your suggestion and put the original auto-correlation coefficients directly showing $c_\tau$ and $c_{-\tau}$ in our revised paper. Regarding your comment on the investigation of the auto-correlation vs. filter coefficients, a frequency-domain illustration is given in Fig. 2(b) in the original paper, which plots the auto-correlation of the stochastic gradients after applying the filter. By an inverse FFT on it, we can observe the relation between the coefficients of the low-pass filter and the resulting auto-correlation coefficients. To further address your question, we copy the first 10 coefficients of the auto-correlation of the stochastic gradients, and $\kappa_\tau$ of the filter:

| $\tau$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| - | - | - | - | - | - | - | - | - | - | - |
| Auto-correlation | 0.0436 | 0.0273 | 0.0194 | 0.0151 | 0.0131 | 0.0058 | 0.0026 | 0.0021 | 0.0005 | 0.0004 |
| Filter coefficients $\kappa_\tau$ | 0.0909 | 0.1652 | 0.1352 | 0.1106 | 0.0905 | 0.0740 | 0.0606 | 0.0495 | 0.0405 | 0.0331 |

It can be observed that the auto-correlation and the filter coefficients all gradually decrease as $\tau$ increases. We will include these discussions in our revision. Please let us know if our response answers your question. Thank you again for your invaluable feedback. 
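The FFT/iFFT relationship between the PSD and the auto-correlation coefficients invoked in this exchange can be demonstrated on a synthetic trace (an AR(1) stand-in for one positively correlated gradient coordinate; illustrative data, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2048
x = np.zeros(T)                    # AR(1) stand-in for one gradient coordinate
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

X = np.fft.fft(x)
psd = np.abs(X) ** 2 / T           # periodogram: the frequency-domain view
r = np.fft.ifft(psd).real          # iFFT of the PSD -> circular auto-correlation

# Round trip: the FFT of the auto-correlation recovers the PSD.
assert np.allclose(np.fft.fft(r).real, psd)
lag1 = r[1] / r[0]                 # lag-1 auto-correlation coefficient
```

The periodogram is the FFT view of the series, its iFFT is the circular auto-correlation, and the lag-1 coefficient recovers the AR(1) parameter ($\approx 0.9$), i.e., positive $c_\tau$ at small lags.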
--- Rebuttal Comment 2.1: Title: Response Comment: Thank you for the response and the additional details. I'm in favor of acceptance: I will maintain my score as I feel that it is a fair assessment of the paper. Some additional suggestions: Further justification of the autocorrelation assumption would be nice. Why is it always possible to choose $c_{-\tau}$ to be positive? I would have expected some sort of symmetry in $c_{\tau}$ and $c_{-\tau}$. Further details about this, and exploration for toy examples (e.g. simple quadratic functions or logistic regression for a slightly harder problem) would be quite helpful for a reader.
Summary: This paper suggests the effects of a low-pass filter for private training with DP-SGD. After investigating the noise and true gradients during training, the authors propose using previous gradients and momentum to distinguish effective gradients from random noise. They empirically prove their idea across various settings, including different optimizers and datasets. Strengths: • The paper presents a very simple and effective theory based on the optimization of DP-SGD and the similarities between gradients $\nabla F(x_t)$. • The authors investigate the optimization dynamics in terms of the frequency domain rather than the time domain. • The paper conducts experiments on various datasets and existing well-known optimizers, demonstrating the effectiveness of the frequency-aware optimization. • The authors establish an interesting relationship between SNR and frequency domain approaches. Weaknesses: Please refer to the Questions section. Technical Quality: 2 Clarity: 4 Questions for Authors: I will happily increase the score if the authors can address the following questions: • The authors use assumptions of bounded variance and gradient norm. However, these assumptions might not be true in real DP-SGD situations. The strong underlying assumption of the authors is that the whole-batch gradient directions are similar between timestamps $\nabla F(x_t)$ and $\nabla F(x_{t-\tau})$. This is widely known in GD settings, but it may not be obvious in SGD (and DP-SGD) settings. Can the authors provide evidence with mini-batch gradients $\nabla f(x_{t})$? [1,2,3] • Using a low-frequency signal typically requires the use of FFT and iFFT to distinguish patterns in the frequency domain. While I understand that the authors try to investigate the auto-correlation and PSD of gradients (as shown in Figure 1), I cannot agree that this approach is orthogonal to conventional approaches depending on timestamps. 
As the authors mentioned momentum, this approach seems somewhat similar to the learning dynamics of DP-SGD, rather than the frequency domain. The authors should clarify this issue. • There are some existing papers that investigated the use of previous gradients in DP-SGD that are missing from the discussion. [1] investigated the use of previous gradients and their momentum approaches for sharpness-aware training. They explored the correlation between previous private gradients and the current gradient during optimization. • For the momentum approaches, why don’t you use $\alpha$ and $1-\alpha$ for the popular setup in EMA? In the appendix, I saw that tuning both $\alpha$ and $\beta$ requires quite a large search space. I wonder if the effectiveness of your methods comes from the enlarged search space. Could you provide an ablation study for this search space? • The authors try to make a theoretical analysis (line 233), however, it may not be true in the real application. The learning rate of private training is much larger than standard training, where it cannot be lower than $O(\sqrt{1/\tau})$ as far as I know. [1] Explicit loss asymptotics in the gradient descent training of neural networks, NeurIPS 21 [2] Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians, ICML 18 [3] Implicit Jacobian regularization weighted with impurity of probability output, ICML 23 [4] Differentially Private Sharpness-Aware Training, ICML 23 Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: The authors clarify the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
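Regarding the reviewer's EMA question: one illustrative way to see why the general $(b_0, b_1)$ parameterization can outperform the plain $(\alpha, 1-\alpha)$ EMA is to compare magnitude responses. The extra $b_1$ term places a zero in the numerator that can completely null the highest frequency, which a pure EMA cannot do. A sketch with made-up coefficients (not the paper's), both filters normalized to unit DC gain:

```python
import numpy as np

def mag_response(a1, b0, b1, w):
    """|H(e^{jw})| of the first-order filter m_t = -a1*m_{t-1} + b0*g_t + b1*g_{t-1}."""
    z = np.exp(-1j * w)
    return np.abs((b0 + b1 * z) / (1 + a1 * z))

w = np.linspace(0.0, np.pi, 512)
alpha = 0.9
H_ema = mag_response(-alpha, 1 - alpha, 0.0, w)  # EMA: the (alpha, 1 - alpha) setup
H_gen = mag_response(-alpha, 0.05, 0.05, w)      # same pole, extra zero at w = pi
```

At $\omega = \pi$ the EMA still passes about 5% of the noise magnitude, while the two-tap numerator attenuates it completely; whether this matters in practice of course depends on the actual gradient spectrum.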
Rebuttal 1: Rebuttal: Thank you for recognizing the contribution of our paper and providing detailed feedback to improve the paper. Our responses to your specific comments are listed below. > The authors use assumptions of bounded variance and gradient norm. However, these assumptions might not be true in real DP-SGD situations. The strong underlying assumption of the authors is that the whole-batch gradient directions are similar between timestamps ... Let us review our main assumptions: Assumptions A.2 and A.3 require bounded gradient and bounded variance. These assumptions are standard for non-convex optimization and for DPSGD analysis, as discussed at the end of Sec. 2.1 of the manuscript. Notice that as long as the input is finite (e.g., pixels with finite RGB values), the variance is guaranteed to be bounded. The bounded gradient assumption is widely used in the analysis of DPSGD, and when the model parameters have finite values, the gradient is always bounded. As you pointed out, A.4 is another major assumption that requires certain properties of the auto-correlation of the gradient. Notice that this assumption is on the gradient itself (and not its noisy/stochastic versions). We provided a theoretical justification for the *worst case*, in Sec. 3 and lines 232-235, that when the learning rate is sufficiently small, A.4 always holds. Moreover, we have provided empirical verification in Fig. 2, where the mini-batch gradients are correlated. We provided more figures illustrating the auto-correlation of the mini-batch gradients in the attached PDF. Please let us know if this addresses your concern. > Using a low-frequency signal ... I cannot agree that this approach is orthogonal to conventional approaches depending on timestamps... We believe there is some misunderstanding about our claim of being *orthogonal to noise reduction methods in the time domain*. 
First, we agree with the reviewer that our approach is *not* orthogonal to the momentum method and other possible methods that depend on timestamps, as we discussed its connection with the momentum method in Sec. 4, lines 218-221, and Sec. 5.3, lines 265-268. Instead, our approach **is orthogonal** to the approaches that do not rely on timestamps, e.g., modifying the clipping operation, changing model structures, and using a noise scheduler. These approaches aim at reducing the impact of the DP noise at each step, independently of other steps, and DOPPLER can be combined with them. We will further clarify this point in our revision. > There are some existing papers that investigated the use of previous gradients in DP-SGD that are missing from the discussion... Thank you for bringing up this relevant literature. We will cite these papers and discuss them in our revised version. Notice that the philosophy and motivation behind DP-SAT and DOPPLER are different: DP-SAT aims at finding a "flat" minimizer. Moreover, their approach is different. In particular, the momentum used in DP-SAT is only an estimation of the perturbation direction for SAM, and it is not used for DP noise reduction. In contrast, DOPPLER aims to denoise the gradient using the past gradients with a low-pass filter. > For the momentum approaches, why don’t you use $\alpha$ and $1-\alpha$ for the popular setup in EMA? In the appendix, I saw that tuning both $\alpha$ and $\beta$ requires quite a large search space. I wonder if the effectiveness of your methods comes from the enlarged search space. Could you provide an ablation study for this search space? Thank you for this nice and critical comment. We would like to clarify how the filters are designed: 1. As discussed in Sec. 4 and 5.3, Momentum-SGD (choosing $\alpha, 1-\alpha$) is a special case of the low-pass filter. However, such a choice might not achieve the best performance. 
Therefore, filters with more coefficients are needed, which admit a larger search space and possibly better performance. 2. The effectiveness of the method indeed comes from the enlarged search space and the frequency-domain viewpoint used to efficiently design the filter coefficients. As discussed in the general response, there exists a series of mature methods to choose the filter coefficients. 3. In Fig. 6 in the original paper, we provide an ablation study on the choice of filter coefficients, and more choices of filter coefficients are provided in the PDF response. > The authors try to make a theoretical analysis (line 233), however, it may not be true in the real application. The learning rate of private training is much larger than standard training, where it cannot be lower than $O(\sqrt{1/\tau})$ as far as I know. Thank you for your detailed feedback on our paper. In our paper, the requirement on the learning rate is $\eta = O(\sqrt{1/\tau})$, which ensures that the auto-correlation is positive in the **worst case**. This requirement aligns with what the reviewer has suggested. Moreover, in real-world applications, the gradients can still be positively correlated for larger values of the learning rate. For example, for the quadratic problem $\frac{1}{2}\|Ax+b\|^2$, by choosing $\eta \leq 1/\|A^\top A\|,$ it is guaranteed that the gradients are positively correlated. This does not invalidate our result, as we only provide a bound. Moreover, as illustrated in Fig. 2 (blue line), the PSD of the stochastic gradient is low-frequency, indicating the positive correlation of the stochastic gradients in real-world applications. Similar observations were also made in other papers, e.g., [DPDR] Liu et al., 2024, Fig. 1. $\quad$ Finally, we would like to thank you for reading our rebuttal and providing detailed feedback to us. We did our best to respond to all your comments. 
Please let us know if there are still any remaining questions, and we would be more than happy to continue the discussion. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I carefully read the rebuttal of the authors, including the general response and answers to my questions and those of the other reviewers. Although I thought some of the mathematical support was still limited, I understand the authors' novelty points and the experimental results of the proposed method. In my view, the authors effectively addressed the questions, including mine and those of the other reviewers. Thus, I have raised the score. --- Reply to Comment 1.1.1: Comment: Thank you for your kind response and for increasing your score! We sincerely appreciate the time you spent reading our response. Please let us know if you have any further comments and suggestions, and we would be more than happy to include them in our revised manuscript.
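As a side note for readers, the quadratic example from the rebuttal above (gradient descent on $\frac{1}{2}\|Ax+b\|^2$ with $\eta \leq 1/\|A^\top A\|$ yielding positively correlated consecutive gradients) can be checked numerically. The following is a minimal sketch, not from the paper; the dimension, seed, and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
A = rng.normal(size=(d, d))
b = rng.normal(size=d)

# Spectral norm of A^T A; choosing eta <= 1/L makes I - eta * A^T A
# positive semi-definite.
L = np.linalg.norm(A.T @ A, 2)
eta = 1.0 / L

x = rng.normal(size=d)
g_prev = A.T @ (A @ x + b)
dots = []
for _ in range(50):
    x = x - eta * g_prev
    g = A.T @ (A @ x + b)       # equals (I - eta * A^T A) @ g_prev
    dots.append(g_prev @ g)     # inner product of consecutive gradients
    g_prev = g

min_dot = min(dots)             # non-negative since I - eta * A^T A is PSD
```

The key identity is $g_{t+1} = (I - \eta A^\top A)\,g_t$, so each inner product is a quadratic form of a positive semi-definite matrix and cannot be negative.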
Summary: This paper proposes the DOPPLER mechanism to post-process DP gradients and reduce noise in them, before updating the model. The work makes an observation that, in the frequency domain, there is a clear distinction between the distributions of SGD gradients and DP noise; the former lies in a small window of lower frequencies while the latter is distributed uniformly over all frequencies. Based on this observation, the DOPPLER mechanism uses a low-pass frequency filter that removes all high-frequency signals; hence, most of the noise is filtered out while only some of the useful gradient is lost, thereby increasing the overall SNR. The paper presents a theoretical connection between SGD gradients and DP noise in the time and frequency domains, a privacy analysis, and an empirical evaluation on standard benchmarks to showcase the efficacy of DOPPLER. Strengths: - Interesting signal processing perspective for the DP optimization problem and observation about gradients and noise in the frequency domain - DOPPLER is a post-processing method, hence can be used with SOTA DP methods - Experimental results show that in training-from-scratch settings DOPPLER is useful Weaknesses: - Connection between time/frequency domains is difficult to understand; adding some background might be useful - Motivation of the paper is to enable DP training for large, foundation models, but experiments are performed on relatively small models - DOPPLER can be combined with any SOTA DP training, but results with such SOTA methods are missing Technical Quality: 3 Clarity: 2 Questions for Authors: Section 3: - Can you provide some intuition about what it means to convert a series of gradients from the time to the frequency domain? It might be useful to have this somewhere (even if it’s in the appendix) to help readers understand the approach better. - Line 167-168: In Figure 1a, the auto-correlation coefficient first increases and then decreases with time, but the description seems to state something else. Can you clarify? 
- Line 168-169: How do you compute the PSD? How do you go from the first equation in Section 3 to drawing Figure 1b, 1c? - Line 187-188: Why can a linear low-pass filter be written as in the first equation of Section 3.1? Section 6: - What are the sizes of the models used? - Why do you not compare with SOTA methods, e.g., De et al. 2022 or Shejwalkar et al. 2022? Shejwalkar et al., Recycling Scraps: Improving Private Learning by Leveraging Intermediate Checkpoints, arXiv 2022 Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: - The proposed method, although motivated by enabling DP for large models, cannot be used for large models due to computational inefficiency. - The current set of results lacks depth and should be improved. - Paper writing/clarity can be improved (see questions/weaknesses). - Equations are not properly numbered. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the contribution of our signal processing perspective, and for the valuable advice on improving the presentation. We would like to address the reviewer's concerns as follows. $\quad$ > Can you provide some intuition about what it means to convert a series of gradients from the time to the frequency domain? It might be useful to have this somewhere (even if it's in the appendix) to help readers understand the approach better. Great suggestion! We agree that the paper can benefit from further discussion of the background. Please take a look at our proposed new appendix section in our general response to all reviewers and let us know if you would like to see further discussions on any specific background. The intuition of converting the gradient sequence from the time domain to the frequency domain is that the frequency-domain representation helps us capture and utilize certain properties of the gradient that are hard to observe in the time domain, e.g., the separability of the noise and gradient as discussed in Sec. 3. > Line 167-168: In Figure 1a, the auto-correlation coefficient first increases and then decreases with time, but the description seems to state something else. Can you clarify? Let us clarify that in Fig. 1(a) (and also in Fig. 1(b) and 1(c)), we shift the x-axis so that $\tau = 0$ ($\nu = 0$) is in the middle of the plots, because the auto-correlation coefficients are symmetric, i.e., $\mathbb{E}[\nabla F(x_t)^\top\nabla F(x_{t+\tau})] = \mathbb{E}[\nabla F(x_t)^\top\nabla F(x_{t-\tau})].$ Therefore, the auto-correlation coefficients decrease as the time-lag $|\tau|$ increases, rather than first increasing and then decreasing. > Line 168-169: How do you compute the PSD? How do you go from the first equation in Section 3 to drawing Figure 1b, 1c? 
As explained in lines 162-163 in the original manuscript, the PSD of the gradient is computed by applying a Fourier transform to the auto-correlation coefficients in Fig. 1(a), i.e., $P(\nu) = \mathcal{F}\{\phi(\tau)\}.$ The explicit transform can be found in the general response. > Line 187-188: Why can a linear low-pass filter be written as in the first equation of Section 3.1? A linear filter is a filter whose output is a **linear combination** of the past input signals, which can be written in the general form of $$m_t = -\sum_{\tau=1}^{n_a}a_\tau m_{t-\tau} + \sum_{\tau=0}^{n_b}b_\tau g_{t-\tau}.$$ The choice of the filter coefficients $\{a_\tau\}, \{b_\tau\}$ determines whether the filter is a high-pass, low-pass, band-pass, or band-stop filter. This should be clear after we add the suggested new appendix on the background. > What are the sizes of the models used? In the experiments, we report the results on a 5-layer CNN (223K), ViT-small (30.1M), EfficientNet (5.33M), and ResNet-50 (25.6M) for the CV experiments (in Appendix B.4 in the original manuscript), and a RoBERTa-base model with 125M parameters. Note that for DP pre-training, these models are considered **large models** compared with the ones used in the existing literature. > Why do you not compare with SOTA methods, e.g., De et al. 2022 or Shejwalkar et al. 2022? We would like to clarify that the SOTA methods, De et al. 2022 and Shejwalkar et al. 2022, implement a series of engineering tricks to improve the performance of DP training, as discussed in Sec. 2.3 in the paper. We implemented several methods proposed by these papers in our experiments, including model design, group normalization, bounded activation, and large batch size. However, the rest of the engineering tricks (in JAX) are not memory/time efficient and are incompatible with our implementation in PyTorch. 
Therefore, although we did not directly compare these SOTA methods with their DOPPLER versions, our numerical experiments *implicitly* adopt part of these SOTA methods in our comparison. > Proposed method, although motivated from enabling DP for large models, cannot be used for large models due to computation inefficiency. We believe that the reviewer's comment on computational inefficiency is not entirely accurate. We agree with the reviewer that the proposed method requires more memory than the standard DP optimizer. However, the computational cost of applying the low-pass filter is *at the same level* as momentum-SGD, with an overhead of combining the past gradients. This cost is negligible compared with the backward and clipping steps. For example, when using DPSGD to train a ResNet on Cifar-10, the backward and clipping steps take 16240ms (406ms/50 samples) for one minibatch (2000 samples), while the SGD update takes 52ms, and for LP-DPSGD, the low-pass filter only takes an extra 74ms, i.e., $\sim0.5\\%$ overhead in computation. > Current set of results lacks depth and should be improved. We believe our current results cover a wide range of algorithms, models, and datasets. We also provide more numerical results in the PDF response for an in-depth investigation of the filter design. However, we understand that this is a somewhat subjective matter and readers may expect different/additional experiments. We would gladly add more if you could please tell us what is missing in the experiments, and how we can address it. > Paper writing/clarity can be improved (see questions/weaknesses). Equations are not properly numbered. We agree with the reviewer and appreciate the valuable feedback, and we will add an additional section to the appendix covering the background. Also, we will revise the main manuscript as the reviewer suggested to clarify these points. We will re-label the equations. However, we believe that in the current manuscript, all referred equations have been labeled. 
$\quad$ Thank you for reading our rebuttal. We hope the above responses have addressed your concerns. If there are still any questions, we would be more than happy to continue discussing with you. --- Rebuttal Comment 1.1: Comment: Dear reviewer, We would like to thank you again and kindly remind you that the discussion period will end soon. We hope our response has addressed your concerns. Thank you very much for the time you spent reviewing our work and providing constructive feedback to us.
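For readers less familiar with the PSD computation discussed in this thread (a Fourier transform of the auto-correlation coefficients, equivalently $|\mathrm{FFT}(x)|^2/T$ by the Wiener-Khinchin theorem), here is a minimal numpy sketch. The low-frequency "gradient" signal and white "DP noise" below are illustrative stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 512
t = np.arange(T)

# Stand-ins: a low-frequency "gradient" signal and white "DP noise".
signal = np.sin(2 * np.pi * 4 * t / T)   # exactly 4 cycles over the window
noise = rng.normal(size=T)

def psd(x):
    # Wiener-Khinchin: the FFT of the auto-correlation equals |FFT(x)|^2 / T,
    # so the PSD can be computed directly from the FFT of the signal.
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

P_sig, P_noise = psd(signal), psd(noise)

# The sine's power concentrates in the lowest frequency bins, while the
# white noise spreads its power roughly evenly over all frequencies.
low_frac_sig = P_sig[:8].sum() / P_sig.sum()
low_frac_noise = P_noise[:8].sum() / P_noise.sum()
```

A low-pass filter keeps only the low bins, which is why it removes most of the (flat-spectrum) noise while preserving most of the (low-frequency) gradient.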
Rebuttal 1: Rebuttal: # Response to all reviewers We would like to thank the reviewers for their constructive and detailed feedback. We are glad that the reviewers found our approach novel and effective despite its simplicity. Before responding to the individual reviewers' questions, we would like to thank the reviewers for their suggestion of adding a discussion on the signal processing background. We completely agree that such a discussion would improve the paper. Therefore, in our revised version, we plan to include an additional appendix discussing the background. Below is our suggested appendix on the background: ## Background in signal processing ### Frequency domain analysis * **What is frequency domain analysis.** In signal processing, frequency domain analysis is used to analyze the periodic or long-term behavior of a (time series) signal/data. In frequency domain analysis, we use the frequency $\nu$ as the index of the signal, e.g., $\{X(\nu)\}, X(\nu)\in \mathbb{C}$, where each term $X(\nu)$ records the amplitude and phase of the sine wave of frequency $\nu$ that composes the signal; in contrast, in the time domain, we use time $t$ as the index of a signal, e.g., $\{x_t\}$, where each term $x_t$ records the value of the signal at a given time $t$. In the paper, we treat each coordinate $i \in [1, \dots, d]$ of the privatized gradient over the iterates as an individual signal, i.e., $\{g_1[i], g_2[i], \dots, g_T[i]\}$. Thus, the gradients over the iterates give us $d$ one-dimensional signals, and we can look at the frequency-domain representation of each signal. * **Why converting a signal to the frequency domain is beneficial.** 1) Certain properties of a signal can be hard to observe/characterize in the time domain. For example, a long-time correlation or a cyclic behavior of the signal is not easy to directly observe in the time domain. By converting the signal to the frequency domain, such properties can easily be captured and analyzed. 
For example, the signal $x_t = \sin( t)$ has nonzero entries at almost all times. However, the frequency domain representation of this signal has only one nonzero entry, at the frequency of the sine wave, and all other entries are zero. This means $x_t$ contains only one periodic component. 2) Certain mathematical analyses can be significantly simplified in the frequency domain. For example, linear differential equations in the time domain become algebraic equations in the frequency domain; filters, which are convolutions in the time domain, become point-wise multiplications in the frequency domain. These properties greatly simplify the analysis of the signals and the filters' dynamics. See Sec. 3.7, 10.5 in Oppenheim et al., 1996 for a detailed discussion. * **How to obtain a frequency domain representation of a signal.** To obtain a frequency domain representation of a discrete signal, one can apply the Discrete Fourier transform (DFT) ($\mathcal{F}\{x_t\}: X(\nu) = \sum_{t=0}^{T-1} x_t e^{-\frac{2\pi i t \nu}{T}}$) to the signal. By directly applying the DFT to a signal and obtaining $\{X(\nu)\}$, one can identify how the signal is composed of sine waves of different frequencies $\nu$ with their amplitudes and phases. In the paper, we apply the DFT to the auto-correlation of a signal and obtain its power spectral density (PSD). The PSD of a signal shows the distribution of the power of the signal over different frequencies. For example, the PSD of $x(t) = \sin(t)$ is $P(\nu) = 1/2$ for $\nu = \pm\frac{1}{2\pi}$ and 0 elsewhere. ### Low-pass filter * **Frequency filter.** A frequency filter is a transformation of a signal that only allows certain frequencies to pass and blocks/attenuates the remaining frequencies. For example, for a signal $x(t) = \sin(t) + \sin(10t),$ we can apply an (ideal) low-pass filter $F(\nu) = 1$ when $|\nu| \leq \frac{1}{2\pi}$ and $0$ otherwise. 
Then, after applying the filter, $F*x(t) = \sin(t)$: the output signal keeps only the low-frequency component. * In this work, we use (time-invariant) linear filters for DP noise reduction. A linear filter attenuates certain frequencies by using a linear combination of the input signal. Considering $g_t$ as the time signal, the general form of a linear filter on $g_t$ is $$m_t = \sum_{\tau=0}^t \kappa_\tau g_{t-\tau} = -\sum_{\tau=1}^{n_a}a_\tau m_{t-\tau} + \sum_{\tau=0}^{n_b}b_\tau g_{t-\tau},$$ where $\kappa_\tau$ are the filter coefficients. The second formula is a recursive way of writing the filter. * **Filter design.** The property of the filter depends on the choice of the filter coefficients. Designing a filter consists of the following steps: 1. Decide the filter order/taps $n_a, n_b$. Larger $n_a, n_b$ give the filter more flexibility and better possible performance, at the cost of more memory consumption. In our experiments, we tested 0th-3rd order filters, i.e., $\max\{n_a, n_b\} \leq 3$. 2. Decide the filter coefficients $\{a_\tau\}, \{b_\tau\}$. Filter design can in general be a complex procedure, and it involves deciding on trade-offs among different properties of the filter. Two standard constraints on the filter coefficients are: a) $-\sum a_\tau + \sum b_\tau = 1$, to ensure the filter has unit gain, i.e., the mean of the signal remains unchanged; and b) the solutions $x$ to $1 + \sum a_\tau x^\tau = 0$ satisfy $|x|<1,$ to ensure the filter is stable, i.e., $\sum|\kappa_t| < \infty.$ In the paper, we directly follow the designs of the Chebyshev filter and Butterworth filter, and tune their cut-off frequency (and ripple) to achieve the best performance while maintaining these properties. See Winder, 2002 for a detailed discussion. * **Plot of the frequency response of the filters.** In the PDF response, we provide the time response ($\kappa_\tau$) and frequency response of the filters used in Tab. 2 in the original paper and additional experiments. 
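To make the linear-filter recursion above concrete, here is a minimal first-order sketch: the momentum/EMA special case with $a_1 = -\alpha$, $b_0 = 1-\alpha$, which satisfies the unit-gain constraint $-\sum a_\tau + \sum b_\tau = 1$. The constant "clean gradient" and Gaussian noise below are illustrative stand-ins for the gradient and DP noise, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
clean = np.ones(T)                              # a low-frequency (here constant) "gradient"
noisy = clean + rng.normal(scale=2.0, size=T)   # additive white "DP noise"

def low_pass(g, alpha=0.9):
    # First-order unit-gain low-pass filter: m_t = alpha * m_{t-1} + (1 - alpha) * g_t.
    # This is the n_a = 1, n_b = 0 special case of the general linear filter.
    m = np.empty_like(g)
    m[0] = g[0]
    for i in range(1, len(g)):
        m[i] = alpha * m[i - 1] + (1 - alpha) * g[i]
    return m

filtered = low_pass(noisy)

# The filter passes the low-frequency clean component but attenuates the
# white noise, so the mean-squared error to the clean gradient drops.
mse_noisy = np.mean((noisy - clean) ** 2)
mse_filtered = np.mean((filtered - clean) ** 2)
```

For white noise of variance $\sigma^2$, the steady-state output variance of this filter is $\frac{1-\alpha}{1+\alpha}\sigma^2$, i.e., roughly a 19x noise reduction at $\alpha = 0.9$, which is the SNR gain the rebuttal's frequency-domain argument predicts.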
Pdf: /pdf/a47539227433cb84decb5a6c20a40fe775c90b60.pdf
NeurIPS_2024_submissions_huggingface
2024
Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum
Accept (poster)
Summary: This paper proposes to carefully segment the training data so that different documents won't be mixed together. They also propose a Grow-P2 curriculum that increases training efficiency and stability. Strengths: The proposed Grow-P2 curriculum is useful to practitioners if they want to pretrain large language models. Weaknesses: This paper presents only empirical results. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The authors only explore the training curriculum regarding lengths. There are other dimensions when it comes to curriculum design. For example, should we train the model on simpler topics first? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank reviewer uip1 for their feedback. We respond to the reviewer’s concerns and questions below and kindly request that you let us know if further clarification is needed. --- > This paper presents only empirical results. We appreciate that our work is considered an empirical contribution, yet valuable to the community. We respectfully do not consider this a weakness. --- > other dimensions for curriculum As suggested by the reviewer, different notions of difficulty can be explored when designing a curriculum. In this work, our focus is on sequence length-based curriculum. We show that such a curriculum results in both training efficiency (through variable sequence length training) and performance gain. --- Rebuttal Comment 1.1: Title: I have read the rebuttal. Comment: Thank you for providing additional details. I'll keep my positive score at 5. --- Reply to Comment 1.1.1: Title: Appreciate your response Comment: We would like to thank Reviewer *uip1* again for their positive feedback.
Summary: The paper explores dataset decomposition for LLM pre-training. The method decomposes documents into subsequences and organizes them into buckets. Sequences of similar lengths are grouped in the same bucket, and different buckets have different lengths. This amounts to more efficient training. The paper investigates various mixtures of lengths separately for their impact on performance. The paper also explores length-based cyclic curriculum learning, treating smaller-length buckets as "easy" examples and larger ones as "hard" examples. Curriculum learning can improve the results a bit. Strengths: * The high-level ideas are reasonably motivated. * Reasonably extensive experimental analyses of the methods are provided along with baseline dataset structuring methods for pre-training. * Results are generally promising. Generally, it results in better task performance while taking less training time. Weaknesses: On the one hand, the proposals can be seen as important in exploring some unorthodox training structures for the specific context of LLMs and informing future pre-training. However, on the other hand, the paper seems to be mainly an exploration of hyperparameter tuning. The main proposed techniques seem like extensions of existing strategies (bucketing and curriculum learning) for LLMs. Bucketing is already understood to make things efficient, and curriculum learning has some positive results in NLP in general in earlier papers. Cyclic curriculum learning was also used in earlier works [1]. [1] Cyclical Curriculum Learning, Kesgin et al., arXiv 2022 Also, if I understand correctly, the training speed gain may be less significant with flash-attention-based LLMs and alternative models (Mamba, linear Transformers). Technical Quality: 3 Clarity: 2 Questions for Authors: Question: 1. Can cross-document attention be restricted with an attention mask? Can that be explored? 2. 
Wouldn't a cyclic curriculum still be a problem if the learning rate starts to degrade before the first curriculum cycle? If I understand correctly, you have to run the cycles quicker than the learning rate decay? More discussion on this can be helpful. 3. Can you elaborate more on the pacing strategy (see more on my confusion below in the suggestions)? Suggestions: * I did not quite understand the exact details of the pacing mechanism for the cyclic curriculum. It would be helpful to provide a pseudo-code or explain it through mathematical formalisms. The table shows a static distribution of sampling odds, but if the model is shifting from easy to hard examples, then the sampling odds should be changing, no? It's not clear to me how exactly that is being done. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer H2jF for their time and feedback. We are glad that the reviewer finds our work to be reasonably motivated, and the results to be extensive and promising. We respond to the reviewer’s concerns and questions below, address all of them in the revised paper, and kindly request that you let us know if further clarification is needed. --- > The paper seems to be mainly an exploration of hyperparameter tuning We respectfully disagree with this comment. We do not tune any hyperparameters in this work. The dataset (RefinedWeb), models, and training hyperparameters are all based on the publicly available OpenLM repo, and are used for both the baselines and our method without any tuning. --- > The main proposed techniques seem like extensions of existing strategies While some components introduced in this work (multi-stage training based on sequence length, length-based bucketization, and cyclic schedules) have been used by previous works in different contexts, this is **the first work to show the efficacy of a length-based curriculum for autoregressive LLM pre-training**, both in terms of training efficiency (faster training) and training performance (final model accuracy, especially for long-context metrics). We would like to emphasize that given the significant cost associated with LLM pretraining, the savings provided by the proposed method are very substantial (up to 45% faster training for the models considered in the paper). In addition, while our bucketing and curriculum (e.g., [1]) may look similar to prior works in different domains and setups, they are novel and have major differences, as explained below: * **Bucketing**: We introduce binary decomposition (Section 2.1), a novel method to preserve document length, form buckets with fixed sequence lengths without using pad tokens, and avoid forming multi-document sequences (thus achieving no cross-document attention without attention masking). 
This is different from existing bucketing methods in different domains, where cross-document attention may still occur (albeit with reduced chances) without padding. * **Sequence length-based curriculum**: Our analysis shows the importance of mixing sequences with different lengths during training and curriculum (Table 2), beyond the simple multi-stage training considered by previous works. Please also see [our response to reviewer bcif on the novelty](https://openreview.net/forum?id=r8M9SfYMDi&noteId=TtzwitydGc). [1] Cyclical Curriculum Learning, Kesgin et al., arXiv 2022 --- > The training speed gain may be less significant with flash-attention-based LLMs and alternative models All results presented in this paper are with FlashAttentionV2, and we achieve up to 45% faster training and more accurate models (see Fig 1b) compared to the baseline, which also uses FlashAttentionV2 for sequence lengths up to 8192. Training speed gains will be even larger when training on longer sequences. Please note that the proposed method is designed to speed up the training of transformers and further increase their performance without any architectural changes. Alternative architectures, such as state-space-based models (e.g., Mamba) or approximations to attention (e.g., linear attention), do not suffer from quadratic attention costs but come with performance limitations. We would like to emphasize that transformers with multi-head attention are still the predominant architecture in most large-scale language models in the community. --- > Can cross-document attention be restricted with an attention mask? Can that be explored? Yes, we have already included results with attention masking to avoid cross-document attention. Baseline-8k-DM and Pack-8k+DM in Table 5 refer to models trained with Document-Masking (DM). Applying document masking mainly improves the regular evaluation, making it closer to our results. 
However, document masking does not provide the computational benefits (training speed) of our proposed method. Please **see Tables 1 and 2 in the rebuttal PDF**. --- > Cyclic curriculum vs learning rate schedule In all our experiments, we use a learning rate schedule with a short initial warmup followed by a one-cycle cosine learning rate decay, as in the OpenLM repository. For the length curriculum, we analyze both one-cycle and multi-cycle curricula. In all cases, the warmup period for the learning rate schedule is shorter than a single length-curriculum cycle. We did not observe any stability or convergence problems in any of the setups. In fact, in Appendix E, we show that our proposed curriculum significantly improves training stability compared to the baseline. To further clarify our cyclic length curricula with respect to the learning rate schedule, please **see Figure 2 in the rebuttal PDF** for a visualization overlapping both schedules (to be also included in the revised paper). --- > Details of the pacing mechanism We apologize for the confusion. We provide further clarification on our mixture and curriculum implementation below with pseudo-code, as suggested (to be also included in the revised paper). We will also **release the full code** (a small patch on top of the OpenLM repo), which should further clarify the details. ``` # {D_i}: list of buckets such that D_i includes sequences of length 2^i # {n_i}: total number of tokens to be picked from each bucket (see Table 1 of paper) # {o_i}: sampling odds for each bucket (see Table 2 of paper) # c: number of cycles # b: number of tokens per optimization step # Form c non-overlapping random subsets from each bucket D_i: s_{i,j} = random subset of D_i with n_i/c tokens # for j = 1, 2, ..., c for j in [1, 2, ..., c]: # loop over cycles while at least one s_{i,j} is not empty: odds = [o_i if s_{i,j} not empty else 0 for i = 1, 2, 3, ...] 
probs = odds / odds.sum() randomly sample index i with probability probs[i] sample b/2^i sequences from s_{i,j} without replacement and use them for training ``` --- Rebuttal 2: Title: Response Comment: Thank you for the rebuttal. Overall, I increased the score to 6. > We respectfully disagree with this comment. We do not tune any hyperparameters in this work. The dataset (RefinedWeb), models, and training hyperparameters are all based on publicly available OpenLM repo, and are used for both the baselines and our method without any tuning. I meant in the sense that training scheduling strategies and data sampling can be seen as a form of hyperparameter, and exploring small changes in the details of established strategies (bucketing, curriculum learning) can in itself be seen as hyperparameter tuning. I am not saying that there is any technical issue here, just that the technical novelty may appear limited as a result. > All results presented in this paper are with FlashAttentionV2, and we achieve up to 45% faster training and more accurate models (see Fig 1b) compared to the baseline, which also uses FlashAttentionV2 for sequence length up to 8192. Training speed gains will be even more when training on longer sequences. Thank you for the clarification. > In all our experiments, we use a learning rate schedule with a short initial warmup followed by a one-cycle cosine learning rate decay, as in the OpenLM repository. For the length curriculum, we analyze both one-cycle and multi-cycle curricula. In all cases, the warmup period for the learning schedule is shorter than a single length curriculum. We did not observe any stability or convergence problems in any of the setups. In fact, in Appendix E, we show that our proposed curriculum significantly improves training stability compared to the baseline. 
To further clarify our cyclic length curricula with respect to the learning rate schedule, please see Figure 2 in the rebuttal PDF for a visualization overlapping both schedules (to be also included in the revised paper). My point is more related to the theoretical motivation you supplied for the cyclic curriculum: that due to learning rate decay, the learning rate is already small by the time the model starts encountering hard examples. "Due to the presence of other hyperparameter schedules during the course of training (e.g., learning rate and weight decay), a curriculum on length may result in a potential implicit bias. For example, if we only see long sequences toward the end of training, long sequence learning occurs only when the learning rate is too small. To address this potential issue, we also explore cyclic curricula, where a curriculum is applied in cycles (similar to cyclic learning rate schedules [52])." My point is that the cyclic curriculum itself doesn't seem to be a complete solution, but it has to be balanced properly with learning rate scheduling if this motivation applies at all. Following this motivation, if you have a monotonic decay of learning rates, and it decays faster than the first cycle, the same problem arises. Also, on the other hand, if you have cyclic learning rates, the theoretical motivation for a cyclic curriculum seems to diminish because it can allow harder examples to experience higher learning rates near the end without cycles. More can be discussed here in this regard. > We apologize for the confusion. We provide further clarification on our mixture and curriculum implementation below with a pseudo-code, as suggested (to be also included in the revised paper). We will also release the full code (a small patch on top of the OpenLM repo), which should clarify the details further. Thank you. That brought some much-needed clarity on the method. 
It seems you don't strictly have any explicit pacing function to directly control the odds of sampling easier samples (starting from high odds) as time goes on. Instead, you do sampling without replacement, so once the easy samples are mostly consumed early on, mainly harder samples remain to sample from near the end. I think this should be discussed more in the text. --- Rebuttal Comment 2.1: Title: Appreciate your response Comment: We would like to thank Reviewer H2jF again for their time and positive feedback. As suggested by the reviewer, we will include more discussion in the revision on our investigation of the length curriculum in relation to other schedules, such as learning rates. Regarding the method details: The reviewer’s understanding is correct. We do not explicitly control the pace; instead, we sample without replacement using fixed probabilities (that determine the curriculum). When a bucket is empty for the current cycle, it is excluded from sampling. Consequently, as the reviewer mentioned, mostly long sequences remain toward the end. Note that even when we train on a sequence of length $n$, we have $n$ next-token prediction losses (applied in parallel) with context lengths $0, 1, 2, …, n-1$. This implies some mixing: when training on a hard example (i.e., a long sequence), we also have easy examples (its shorter sub-sequences). Therefore, even toward the end of each cycle, we still have some losses with short contexts. As suggested, we will augment the text with additional discussion in the revision.
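To illustrate the binary decomposition discussed in this thread, here is a minimal sketch: a document is split into power-of-two-length chunks following the binary representation of its length. The `max_pow` cap at $2^{13} = 8192$ tokens is our assumption based on the 8k sequence length used in the experiments, and the helper name is ours, not from the paper:

```python
def binary_decompose(doc_len, max_pow=13):
    # Greedily take the largest power-of-two chunk (capped at 2**max_pow)
    # until the document is exhausted; for doc_len <= 2**max_pow the chunk
    # lengths are exactly the binary digits of doc_len.
    lengths = []
    remaining = doc_len
    while remaining > 0:
        p = min(max_pow, remaining.bit_length() - 1)
        lengths.append(2 ** p)
        remaining -= 2 ** p
    return lengths

# Example: a 1,344-token document splits into chunks of 1024, 256, and 64
# tokens, which go to the buckets D_i with i = 10, 8, and 6, respectively.
chunks = binary_decompose(1344)
```

Because every chunk comes from a single document and every chunk length matches its bucket exactly, no padding and no multi-document sequences are needed, which is the property the rebuttal contrasts with ordinary bucketing.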
Summary: The paper introduces 'Dataset Decomposition' (DD), a method to enhance the pre-training of Large Language Models (LLMs). Contrary to the traditional 'concat-and-chunk' approach, which can lead to unwanted cross-document attention and computational inefficiency, DD organizes datasets into buckets with sequences of uniform length from individual documents, enabling variable sequence length training. This method is demonstrated to reduce training time, improve model performance, and scale effectively with the size of the dataset. Strengths: 1. Comprehensive Experiments: The authors conduct extensive experiments across various datasets and LLMs, with observations and analyses that are noteworthy. 2. Simplicity and Effectiveness: The proposed method is straightforward and efficacious. Weaknesses: The paper's writing requires improvement, as exemplified by the following: Line 14: "Our proposed method incurs a penalty proportional to the actual document lengths at each step, resulting in significant savings in training time." Even after careful reading, it remains unclear what is meant by "penalty" here. Line 158 (and many other instances): "We follow the exact setup as in [32]." When using citations within sentences, it is customary to use the \citet command, resulting in "We follow the exact setup as in Liu et al. (2024)," rather than using \cite or \citep. Table 1: It would be beneficial to clearly define the meanings of the elements in the first column within the main body of the paper to prevent confusion. Additionally, while it adds value to the paper to provide extensive discussions of various aspects through experiments, the overall organization of the paper should be enhanced to make the main focus of the paper clearer. In Section 3.6, DD shows a significant advantage over other methods in long-context scenarios. However, the authors do not offer a detailed analysis of why this is the case. 
In my understanding, with a fixed number of training tokens per gradient update, DD increases efficiency by enlarging the batch size and reducing sequence length (<8k). In contrast, the baseline and ICLM maintain an 8k sequence length. It is unclear why DD performs better, as most of its training samples are actually shorter in length. Technical Quality: 2 Clarity: 3 Questions for Authors: The same as the weaknesses Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank reviewer z2w6 for their time and feedback. We are happy that the reviewer finds our experiments comprehensive, analyses noteworthy, and the method simple and effective. We address the concerns and questions raised by the reviewer below and kindly request to be informed if further clarification is needed. --- > Paper's writing requires improvement We thank the reviewer for the editorial points. We have applied all of them in the revised manuscript including additional discussion and apologize for any potential inconvenience during the review. --- > It remains unclear what is meant by penalty We apologize for the confusion. By "penalty," we meant "computational cost." We fix this in the revised manuscript. --- > Significant advantage over other methods in long-context scenarios We provide a more detailed discussion here, and revise the paper accordingly. As pointed out by the reviewer, pre-training context length is an important factor in determining a model’s long-context performance. We empirically validate this in the results shown in Fig. 5a of the paper, where models trained on longer sequences perform better on multi-document QA. For the context length from the same document (i.e., the number of tokens from the same document a token can attend to), our proposed method has an average context length of 1,344 for the RefinedWeb dataset (as defined in equation 2 in Appendix F), compared to 930 for the baseline (see Figure 3c of the paper) and 1,064 when bin-packing [1] is applied. This explains why the dataset decomposition mixture, even without any length-based curriculum (the first row in Table 2 of the paper), outperforms Baseline-8k-DM and Pack-8k+DM (second and third rows in Table 5 of the paper). Here, DM refers to applying document masking to avoid cross-document attention. One can increase context length by concatenating different documents and putting them in the context, as in the Baseline-8k result. 
However, simply increasing the context length by filling it with multiple documents does not necessarily lead to long-range attention during training (and hence improved long-context capability of the model). Nevertheless, as discussed in the response to reviewer bcif, a multi-document context encourages the model to learn to discern and disregard irrelevant information. Comparing Baseline-8k and Baseline-8k-DM multi-document QA results in Table 5 of the paper shows such benefit. Baseline-8k multi-document QA performance is even slightly better than our proposed dataset decomposition mixture when used without length-based curriculum (first row of Table 2 of the paper). In-context pre-training LMs (ICLM [2]) proposes to put semantically relevant documents into the context. We observe that ICLM results in slightly better multi-document QA performance when 30 documents are in the context compared with Baseline-8k (22.0% vs. 20.5%). However, we do not observe such gains in shorter multi-document QAs (i.e., with fewer distractor documents in the context) and regular evaluations. Finally, in Table 2 of the paper, we show the importance of length-based curriculum. Note that the data mixture (and hence the average context length) is the same for all rows in Table 2 of the paper, differing only by the curriculum (i.e., the order in which different examples are seen during training). We show that using our proposed cyclic length-based curriculum, for example, Grow-P2 with 8 cycles, results in a significant improvement in the model’s long-context capability. For instance, multi-document QA with 30 documents improves from 19.6% with no curriculum (first row in Table 2 of the paper) to 24.6%. It is worth noting that the effect of length-based curriculum on regular metrics is less significant, with the average metric improving from 53.8% with no curriculum to 54.4% for Grow-P2 with 8 cycles curriculum. 
Please **see Table 1 of the rebuttal PDF** for a summary of all of the above contributing factors. [1] Fewer truncations improve language modeling, ICML 2024 [2] In-context pretraining: Language modeling beyond document boundaries, ICLR 2024 --- Rebuttal Comment 1.1: Title: Appreciate your response Comment: Thank you for your detailed response. I'm glad to see that the writing has been improved. I have changed the score accordingly. --- Reply to Comment 1.1.1: Title: Appreciate your feedback Comment: We would like to thank the Reviewer *z2w6* for their positive response.
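The "average context length from the same document" notion used in the rebuttal above can be illustrated with a small sketch. This is a hypothetical formalization (the paper's exact definition is its equation 2 in Appendix F): assume a token at position i within a same-document chunk can attend to the i same-document tokens before it.

```python
def avg_same_doc_context(chunk_lengths):
    """Average number of preceding same-document tokens a token can
    attend to, given the lengths of the contiguous same-document
    chunks the corpus was split into. A chunk of length c contributes
    0 + 1 + ... + (c - 1) attendable-token pairs."""
    total_tokens = sum(chunk_lengths)
    total_context = sum(c * (c - 1) // 2 for c in chunk_lengths)
    return total_context / total_tokens
```

Under this toy measure, splitting documents at chunk boundaries (as concat-and-chunk does) shortens chunks and lowers the average, which is the intuition behind the 1,344 vs. 930 figures quoted above.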
Summary: The paper introduces a method called dataset decomposition for training large language models (LLMs) more efficiently. Traditional LLM training processes use fixed-length token sequences, leading to inefficiencies such as unnecessary computational costs from cross-document attention. The proposed method tackles this by organizing the training dataset into various "buckets," each containing sequences of a fixed length from a unique document. This allows for variable sequence length training, where different buckets can be sampled during training based on a curriculum that adjusts for sequence length. The approach significantly reduces the attention computation overhead, leading to faster training times and improved model performance across various language understanding benchmarks. This method enables efficient and scalable LLM pretraining on large datasets, with experimental results showing up to three times faster attainment of target accuracies compared to traditional methods. Strengths: 1: The paper is well organized and easy to follow. 2: The motivation is clear and the proposed method looks simple yet effective. 3: This paper conducted massive experiments and provided many valuable empirical results, which can support both the claim of this paper as well as many long existing guesses in the community. Weaknesses: 1. It might be more accurate to consider that "the cross-document attention allocates significant computational resources to attending to unrelated tokens that may not directly contribute to learning." However, it's also valuable for models to develop the ability to discern and disregard irrelevant information. 2. The concept of a length-based curriculum, while insightful, isn't entirely novel. It has been explored in previous studies, such as those detailed in references [1] and [2]. Furthermore, numerous models, including BERT, employ similar curriculum learning strategies, albeit without specific emphasis on this aspect. 
[1] World Model on Million-Length Video and Language with RingAttention [2] GrowLength: Accelerating LLMs Pretraining by Progressively Growing Training Length Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How can we ascertain the outcomes when models are trained with sufficient data? Will there remain a gap between the proposed method and the baseline? In a short summary, my major concern is the originality of the proposed strategy. Nonetheless, the empirical findings presented in this paper certainly add valuable contributions to the community. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer, bcif, for their time and feedback. We are glad the reviewer finds the paper well-organized and motivated, the method effective, and the experiments valuable to the community. In the following, we address the concerns and questions raised by the reviewer and kindly request to be informed if further clarification is needed. --- > It's also valuable for models to develop the ability to discern and disregard irrelevant information We agree with the reviewer that, in principle, the model may benefit from attending to irrelevant documents during pre-training to develop a discerning capability. We mentioned this in the paper (line 291). Our results also confirm this. In Table 5, when comparing baseline-8k to baseline-8k with document masking (which stops cross-document attention), adding document masking improves the regular metric (51.5% → 52.4%). However, it results in weaker discerning capability, as seen in multi-document QA results with 30 documents in the context (performance drops from 20.5% → 16%). We will clarify this point further in the revision. We would also like to point out that our proposed method surpasses all baselines in discerning capability, as seen from its superior performance on multi-document QA. This capability emerges when a length-based curriculum is deployed (compare the multi-document QA performance with a uniform curriculum and, for example, the Grow-P2 curriculum in Table 2). Please **see the summary in Table 1 of the rebuttal PDF**. --- > Length-based curriculum, while insightful, isn't entirely novel We thank the reviewer for pointing out two relevant works. Both are **concurrent** with ours **and not published** in a peer-reviewed venue at the time of this rebuttal. We also appreciate the reference to the length-based curriculum in BERT [3]. We will include them in the revised paper. 
These works highlight the computational benefits of length-based batching; however, they differ significantly from our work, as explained below: * We are the first to show a length-based curriculum for autoregressive LLM pre-training. Given the huge cost of LLM pre-training, the savings from our proposed method are significant (more than 6x speed-up to reach the best accuracy of the baseline, as shown in Figure 1 of the rebuttal PDF). The related work [1] is a continual learning setup (starting from pre-trained Llama2) only for context-length extension using book data. GrowLength [2] does not show any results on large language models. BERT [3] is only for masked-language modeling. * Unlike [1-3], we introduce binary decomposition, a novel method to preserve document length, form buckets with fixed sequence lengths without using pad tokens, and avoid forming multi-document sequences (thus achieving no cross-document attention without attention masking). * Unlike [1-3], we analyze different forms of curriculum (not just a simple multi-stage training from short to long sequences) in Table 2, and show that mixing (i.e., a mixture of long and short sequences with a changing mixture) is important for the best results. Our analysis further shows that the cyclic schedule we introduce is key to these results. * Unlike [1-3], we systematically show the effect of pre-training sequence length on model performance for different tasks, including those requiring long context (Sections 3.2 and 3.3). 
[1] World Model on Million-Length Video and Language with RingAttention, 2024 [2] GrowLength: Accelerating LLMs Pretraining by Progressively Growing Training Length, 2024 [3] Pre-training a BERT with curriculum learning by increasing block-size of input text, RANLP 2021 --- > The outcomes when models are trained with sufficient data Note that the benefits of using the proposed method are twofold: * **Computational benefit**: Using variable sequence length training (Section 2.2), the training cost per seen token is reduced, i.e., we reach a certain performance faster compared to the baseline. * **Performance benefit**: For the same total number of seen tokens, DD with a curriculum results in a more accurate model than the baseline (for both regular and long-context evaluations). To further demonstrate the above facts and the scalability of our results, we trained a 410M model with our proposed method and the baseline up to 1.1 trillion tokens. This is 128 times more tokens than the "optimal" number recommended based on the number of parameters of the model in the Chinchilla paper [4]. Furthermore, 1.1 trillion tokens exceeds the number used in recent state-of-the-art open LLMs (e.g., smolLM [5] uses 600 billion tokens to train their 360M model). We show **+2.4 accuracy improvement compared to the baseline even at 1.1 trillion tokens**, where the baseline shows a plateau in accuracy (indicative of sufficient data). Furthermore, we show **more than 4x data efficiency** and **more than 6x speed-up** to reach the baseline's best accuracy. Please **see Figure 1 of the rebuttal PDF**. [4] Training compute-optimal large language models, NeurIPS 2022 [5] SmolLM - blazingly fast and remarkably powerful, 2024 --- > The originality of the proposed strategy We would like to emphasize again that this is the first work demonstrating the computational and performance benefits of length-based curricula (and beyond simple multi-stage training) for LLM pre-training. 
Furthermore, the proposed binary dataset decomposition method is entirely new. As mentioned by the reviewer, we provide extensive experiments and insights on the importance of length in the mixture, training stability, and generalization to different model sizes and datasets. We believe the combination of our proposed binary decomposition method, extensive empirical results, and open-source code and models will be a useful contribution to the community. --- Rebuttal Comment 1.1: Comment: Thanks for the response; it resolves my concerns. Although the novelty is still questionable, I believe some of the empirical results from this paper are valuable to the community. I hope more discussion of those related works can be added to the revised version. I have raised my score to 5. --- Reply to Comment 1.1.1: Title: Appreciate your response Comment: We are glad that Reviewer *bcif*'s concerns have been addressed and would like to thank them again for their positive feedback.
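The binary decomposition referred to in this exchange can be sketched as follows. This is an illustrative reconstruction from the rebuttal's description, not the authors' code: a document of length n is split into power-of-two chunks following the binary expansion of n (capped at a maximum sequence length), so each chunk fills a fixed-length bucket exactly, with no pad tokens and no multi-document sequences.

```python
def binary_decompose(doc_tokens, max_len=8192):
    """Split a document into power-of-two chunks given by the binary
    expansion of its length (capped at max_len). Every chunk has a
    power-of-two length and can be placed in the matching bucket
    without padding or cross-document concatenation."""
    chunks, i, n = [], 0, len(doc_tokens)
    while n > 0:
        # largest power of two not exceeding the remaining length
        size = min(1 << (n.bit_length() - 1), max_len)
        chunks.append(doc_tokens[i:i + size])
        i += size
        n -= size
    return chunks
```

For example, a 13-token document decomposes into chunks of lengths 8, 4, and 1; in practice the paper's mixture may drop or down-weight very short chunks, which this sketch does not model.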
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time and feedback. We are pleased with the positive comments from the reviewers: Reviewer *bcif* finds the paper **well-organized and motivated**, the **method effective**, and the **experiments valuable to the community**; Reviewer *z2w6* finds our **experiments comprehensive**, our **analyses noteworthy**, and the **method simple and effective**; Reviewer *H2jF* finds our work **reasonably motivated** and the **results extensive and promising**; and Reviewer *uip1* finds our work **useful to practitioners**. We provide multiple tables and figures in the **one-page PDF rebuttal** to further support the contributions of the paper and clarify questions and concerns raised by the reviewers. We show **more than 4x data efficiency** and up to **more than 6x training speed-up** compared to the baseline for **large-scale training runs up to 1.1 trillion tokens**. We would like to emphasize that all contributions of this paper, including code and all model checkpoints, will be **open-sourced** to facilitate follow-up work. We address individual reviews below and kindly request that you let us know if any further questions or concerns remain unaddressed. Pdf: /pdf/0c3500f8bc565bc6f2e782727347771d4c841b4d.pdf
NeurIPS_2024_submissions_huggingface
2024
SuperEncoder: Towards Iteration-Free Approximate Quantum State Preparation
Reject
Summary: This paper introduces SuperEncoder, a novel approach to Quantum State Preparation (QSP) that aims to combine the scalability of Approximate Amplitude Encoding (AAE) with the speed of traditional Amplitude Encoding (AE). SuperEncoder uses a pre-trained neural network to directly estimate the parameters of a Parameterized Quantum Circuit (PQC) for any given quantum state, eliminating the need for iterative parameter tuning during runtime. The authors explore different loss functions for training SuperEncoder, finding that state-oriented training using fidelity as a metric (L3) performs best. They evaluate SuperEncoder on synthetic datasets and downstream tasks like Quantum Machine Learning and the HHL algorithm, comparing it to AE and AAE. Results show that SuperEncoder achieves runtime similar to AE while maintaining the scalability of AAE, but with some degradation in fidelity. The impact of this fidelity loss varies across applications, being more tolerable in QML tasks than in precise algorithms like HHL. Strengths: Originality: The paper presents a novel approach to Quantum State Preparation with SuperEncoder, which innovatively combines the strengths of existing methods (AAE and AE). The idea of using a pre-trained neural network to directly estimate quantum circuit parameters is a nice solution to the QSP problem. Quality: The research demonstrates high quality through its comprehensive experimental design. The authors explore different loss functions, provide detailed analysis of their landscapes, and evaluate the method on both synthetic datasets and real-world applications. The comparison with existing methods (AE and AAE) across multiple metrics (runtime, scalability, and fidelity) shows a rigorous approach to validation. Clarity: The paper is well-structured and clearly written. Complex concepts are explained in an accessible manner, with helpful diagrams (like Figures 2 and 3) to illustrate key ideas. 
Significance: SuperEncoder potentially represents a step towards more efficient QSP, which is crucial for many quantum algorithms. Weaknesses: 1. The gradient evaluation of the loss function (e.g. Eq. 1) requires computing the derivative of the state $\rho$ with respect to model parameters. As the authors acknowledge, this could become complicated on real devices due to the enormous cost of quantum state tomography. The authors work around this by using the parameter-shift rule to compute the gradient. However, the parameter-shift rule does not scale as well as classical backpropagation with autodiff (see https://openreview.net/forum?id=HF6bnhfSqH -- I guess a citation to this work would be relevant here). This casts doubt on the whole scalability of this method. 2. Again related to scalability, the number of input neurons to the model has to be $2^n$. This again doesn't look too scalable past 20 qubits, which can already be realized experimentally. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What are the expected compute/memory/time costs in training SuperEncoder with larger qubit numbers? Is training with more than 10 qubits feasible? Minor: - In line 284: Is $m$ for the number of entangling layers the same $m$ that appears in line 242? - In line 335: could you add a citation to the work of Li et al that you are referring to the first time it appears? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations have been discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. We are elated that the reviewer found our idea a nice solution, the presentation clear, and the evaluation comprehensive. Following are our responses to each individual comment (which are highlighted in italics). > *The gradient evaluation of the loss function (e.g. Eq. 1) requires computing the derivative of the state $\rho$ with respect to model parameters. As the authors acknowledge, this could become complicated on real devices due to the enormous cost of quantum state tomography. The authors work around this by using the parameter-shift rule to compute the gradient. However, the parameter-shift rule does not scale as well as classical backpropagation with autodiff (see https://openreview.net/forum?id=HF6bnhfSqH -- I guess a citation to this work would be relevant here). This casts doubt on the whole scalability of this method.* We argue that the parameter-shift rule is not a limiting factor for the scalability of our method, for two reasons. 1. The analysis in lines 194\~216 is more of a feasibility analysis. We simply want to show that *if* one wishes to train SuperEncoder based on states obtained on real devices, it is possible to do so. However, it is not mandatory to train SuperEncoder on real devices; the training can be done on classical devices through noisy quantum circuit simulation. 2. The parameter-shift rule is also not mandatory for calculating $\frac{\partial L}{\partial \theta}$. A recent study \[R0\] has introduced a hybrid method, which obtains $\hat{\rho}$ from real devices based on quantum tomography but calculates gradients based on classical backpropagation. We agree that tomography will be a bottleneck if we have to train on a real device, but tomography itself is an active research field. As more advanced tomography methods are proposed, we believe the training efficiency of SuperEncoder on real devices will be significantly improved. 
We thank the reviewer for the constructive feedback and will enhance our paper with more discussion and citations regarding the overhead of gradient evaluation. We agree that such a discussion will provide a more comprehensive view for readers to understand our method. \[R0\] Wang, Hanrui, et al. "Robuststate: Boosting fidelity of quantum state preparation via noise-aware variational training." arXiv preprint arXiv:2311.16035 (2023). > *Again related to scalability, the number of input neurons to the model has to be 2^n. This again doesn't look too scalable past 20 qubits, which can already be realized experimentally.* We disagree that input size is a scalability issue. As stated in our paper (Sec. 2.2), the Quantum State Preparation (QSP) discussed in our paper refers to a process of **loading classical data into a quantum state**. Therefore, an implicit setting is that the classical data to be prepared has already been stored in classical systems, i.e., the state being prepared is within the capacity of classical storage space. In fact, the input to SuperEncoder is also the input to our baselines (AE/AAE). If input size is a problem, it is a challenge for our baselines as well as for the research field as a whole. Taking QML as an example, the role of QSP is loading classical image/language embeddings into quantum states. Thus the practical input size is the same as in other classical ML tasks, including CV and NLP, and the number of input neurons matches that of models in these classical fields. If the input dimension is a problem, it will be a problem for all these classical CV/NLP models. In fact, the input size is bounded by classical simulation power, since quantum circuit simulation is strictly bounded only by memory capacity. Consider an extreme case where the batch size is set to 1: simulating a 30-qubit circuit requires a minimum of 32 GB of memory (>16 GB), which can be accommodated by most modern GPUs. 
We believe this is already an enormous vector space that is capable of encoding most classical data. > *What are the expected compute/memory/time costs in training SuperEncoder with larger qubit numbers? Is training with more than 10 qubits feasible?* As stated in the previous response, training with more than 10 qubits is absolutely feasible. We conducted experiments on a Linux server with an A100 GPU that has 80 GB of memory (see lines 231\~234). The compute/memory/time costs of training with larger qubit numbers are measured as follows. | Number of Qubits | Memory | Time | | - | - | - | | 10 | 960 MB | \~5 h | | 12 | 2520 MB | \~6 h | | 14 | 9722 MB | \~7.5 h | > *In line 284: Is $m$ for the number of entangling layers the same $m$ that appears in line 242?* We apologize for the confusing usage of the same symbol. These two $m$ have different meanings. $m$ in line 284 denotes the number of entangler layers in the QNN model, which is a downstream task of our proposed QSP method. $m$ in line 242 is the output dimension of SuperEncoder, i.e., the classical model we use for QSP. We will distinguish the use of symbols in our paper to avoid confusion. > *In line 335: could you add a citation to the work of Li et al that you are referring to the first time it appears?* We apologize for the confusing citation. We will move this citation to the end of line 335 for better readability. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. Now I understand that the scope of this work does not require large qubit numbers. I would still have a follow-up question (probably more related to QML, which is the motivation to develop this technique). Let's take OpenAI's text-embedding-3-large, which in principle can be accommodated by 12 qubits. Why would we use a quantum computer (or rather QML) at this scale? Since we can simulate 12 qubits classically, what is the value of using a quantum computer here? 
Are you thinking that these 12 qubits are only a subset of more qubits within a single QML pipeline? Or are you rather implying that the quantum-like architecture of a layered circuit can be a useful classical model in itself? --- Reply to Comment 1.1.1: Comment: > _Are you thinking that these 12 qubits are only a subset of more qubits within a single QML pipeline?_ We have found that this is truly an issue worth considering for the entire QML field. Taking ChatGPT as an example, the embedding we mentioned (3072-dim) corresponds to one token. The question then is: (a) Should we encode all tokens using one data loading quantum subroutine? (b) Or should we encode each token using a different subroutine? We have found some studies that employ the latter design (e.g., \[R0\]). In this scenario, there may be many 12-qubit data loading subroutines; if we create entanglement among all these qubits after data loading, there will be a large number of qubits (>100), and the complete system is far beyond the storage capacity of classical systems. Currently, most QML research utilizes the same models as those demonstrated in our paper, which contain only one data loading block, i.e., akin to design (a). In this scenario, we may load a complete sequence of tokens using one data loading subroutine. The required number of qubits will not be very large. Which design is better is indeed an open question. However, we believe it is very safe to say: the data loading quantum subroutine will not involve a very large number of qubits beyond the simulation power of classical computers. Therefore, we believe that the scope of our work is reasonable, and our work has practical value. \[R0\] G. Li, X. Zhao, and X. Wang, “Quantum Self-Attention Neural Networks for Text Classification,” May 11, 2022, arXiv: arXiv:2205.05625. Accessed: Jun. 03, 2022. --- Rebuttal 2: Comment: Thank you for your reply and thanks for raising this interesting question. 
> _Why would we use a quantum computer (or rather QML) at this scale? Since we can simulate 12 qubits classically, what is the value of using a quantum computer here?_ In QML, or more precisely the "quantum learning for classical data" problem, _data loading_ has long been considered a significant obstacle \[R0\]\[R1\]\[R2\], which motivates us to conduct this study. In all these previous studies of QML, the data to be loaded are classical and thus can be accommodated by classical systems. If the qubits for loading classical data are all the qubits used in the complete quantum pipeline, then it is true that all these QML circuits can be classically simulated. However, the advantage of quantum computing lies not only in its information storage capacity but also in its **information processing capabilities**. As discussed in the review by Biamonte, Jacob, et al.: "The input problem. Although quantum algorithms can **provide dramatic speedups for processing data**, they seldom provide advantages in reading data. This means that the cost of reading in the input can in some cases dominate the cost of quantum algorithms. Understanding this factor is an ongoing challenge." (The statement also highlights the importance of data loading.) In other words, in QML, quantum computers process **the same data** as classical processors like modern GPUs. What we anticipate is a **faster processing speed** when quantum computers become more powerful. Other advantages of QML may include (1) better performance with the same number of parameters \[R4\]; (2) reaching the same performance as classical models with less training data \[R6\]. In fact, fully understanding the advantages of QML is still an active research area. All these "quantum learning for classical data" problems assume the data can be classically stored and thus do not rely on storage capacity beyond that of classical systems for data loading. 
**Or we can say, QML does not enforce large qubit numbers (at least for data loading) that are beyond the simulation capacity of classical computers.** > _Are you thinking that these 12 qubits are only a subset of more qubits within a single QML pipeline?_ This is a very interesting question; it may be a possible direction to develop certain QNN architectures where only a subset of qubits is responsible for data loading, with many additional qubits responsible for data processing. The complete circuit may then be beyond the simulation capacity of classical computers. While we do not know of many QML studies that implement circuits with such a structure, some algorithms like HHL \[R3\] do have many more ancilla qubits in addition to data loading qubits. However, we do not assume "these 12 qubits are only a subset of more qubits" for QML in our study. > _Or are you rather implying that the quantum-like architecture of a layered circuit can be a useful classical model in itself?_ This is another interesting question. While we do not assume the quantum-like architecture to be a useful classical model, we are not sure it is entirely true that such architectures are not useful. We have indeed seen some quantum-inspired designs for classical ML \[R5\]. \[R0\] Biamonte, Jacob, et al. "Quantum machine learning." Nature 549.7671 (2017): 195-202. \[R1\] Caro, Matthias C., et al. "Encoding-dependent generalization bounds for parametrized quantum circuits." Quantum 5 (2021): 582. \[R2\] Li, Guangxi, et al. "Concentration of data encoding in parameterized quantum circuits." Advances in Neural Information Processing Systems 35 (2022): 19456-19469. \[R3\] A. W. Harrow, A. Hassidim, and S. Lloyd, “Quantum Algorithm for Linear Systems of Equations,” Phys. Rev. Lett., vol. 103, no. 15, p. 150502, Oct. 2009, doi: 10.1103/PhysRevLett.103.150502. \[R4\] L'Abbate, Ryan, et al. "A quantum-classical collaborative training architecture based on quantum state fidelity." 
IEEE Transactions on Quantum Engineering (2024). \[R5\] A. Panahi, S. Saeedi, and T. Arodz, “word2ket: Space-efficient Word Embeddings inspired by Quantum Entanglement,” Mar. 03, 2020, arXiv: arXiv:1911.04975. Accessed: Dec. 08, 2022. (ICLR'20) \[R6\] Caro, Matthias C., et al. "Generalization in quantum machine learning from few training data." Nature communications 13.1 (2022): 4919.
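For reference, the parameter-shift rule debated in the thread above yields exact gradients of expectation values for Pauli-generated rotations from two extra circuit evaluations. A minimal single-qubit sketch (a toy RY/⟨Z⟩ example for illustration, not the paper's setup):

```python
import numpy as np

def expect_z(theta):
    # <Z> after applying RY(theta) to |0>; analytically equals cos(theta)
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi[0] ** 2 - psi[1] ** 2

def param_shift_grad(f, theta):
    # Parameter-shift rule for a gate generated by a Pauli operator:
    # dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2, exact,
    # using only two evaluations of the circuit itself (no autodiff).
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))
```

Here the shifted-evaluation gradient matches the analytic derivative -sin(theta) to machine precision; the scalability concern is that real circuits need two such evaluations per parameter, whereas classical backpropagation reuses one pass for all parameters.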
Summary: In this paper, the authors propose a model, namely SuperEncoder, to solve the quantum state preparation problem. Instead of evolving the parameterized gates to generate the target quantum state, they train a model to predict the rotation parameters from the target states. Strengths: Solve the quantum state preparation problem from a new perspective. Weaknesses: 1. Poor results. The results seem ok with four qubits but decrease way too fast when increasing the number of qubits. The proposed method is not comparable to previous methods. 2. It is actually impossible to use an ML model to predict the parameters. Since training the AAE ansatz is a non-convex optimization problem, finding the optimal parameter is indeed an NP-hard problem. There are infinitely many pairs of quantum states and parameters, and I wonder how the size of the training set would scale with the number of qubits. 3. The training overhead is non-negligible. If we are preparing a quantum state that is beyond the simulation power of classical devices, the evaluation methods based on state fidelity would need an enormous number of quantum circuit executions, which I suspect would not be much less than training the AAE. Technical Quality: 2 Clarity: 2 Questions for Authors: No questions Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: Naive ideas with poor experimental results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Following are our responses to each individual comment (highlighted in italics).

> *Poor results. The results seem ok with four qubits but decrease way too fast when increasing the number of qubits. The proposed method is not comparable to previous methods.*

This is an inaccurate conclusion. We compare the *results* of SuperEncoder with baselines along *three dimensions*: (1) scalability, (2) runtime, (3) fidelity. Not all results decrease when increasing the number of qubits.

- Scalability: SuperEncoder is comparable with AAE and is significantly better than AE (Fig. 7(a)).
- Runtime: SuperEncoder is comparable with AE and is significantly better than AAE (Table 1 in the PDF attached to the global response).
- Fidelity: SuperEncoder is worse than AE and AAE under ideal simulation (Table 7), but better than AE on real devices (Fig. 7(b)).

Thus, the correct conclusion is twofold.

- In terms of scalability and runtime, SuperEncoder remains comparable to or better than the baselines when increasing the number of qubits.
- However, the fidelity of SuperEncoder is worse than the baselines.

We have acknowledged the loss of fidelity (Sec. 4.4), but we argue that this limitation does not negate the immense value and potential of this work, for the following reasons.

1. **We have successfully addressed the main challenges highlighted in the paper.** As emphasized in the paper (lines 51\~57), our goal is to realize a QSP method that is both *fast* and *scalable*. Here "scalable" means that the circuit depth does not grow exponentially with respect to the number of qubits. Specifically, we have addressed a significant drawback of AAE: the long runtime overhead introduced by iterative online optimization.
This drawback is particularly problematic in practical scenarios, such as using a QML model for image classification, where it is unacceptable for the data-loading phase to consume the majority of the time (see lines 41\~50). Therefore, addressing this issue has significant practical value.

2. **Our loss in fidelity is not catastrophic.** Although SuperEncoder sacrifices fidelity, we demonstrate that it does not compromise the performance of important downstream tasks, such as QML, as shown in Figure 8. In fact, SuperEncoder retains strong QML performance as the number of qubits increases (see the PDF attached to the global response).

3. **Further improving the fidelity of SuperEncoder is an open challenge that will draw increased attention from our community and inspire innovative solutions.** The ultimate goal is a classical-ML-assisted QSP method that is fast, scalable, and high-fidelity. Achieving this goal would address a significant challenge in quantum computing and open another significant application of classical machine learning.

> *It is actually impossible to use an ML model to predict the parameters. Since training the AAE ansatz is a non-convex optimization problem, finding the optimal parameter is indeed an NP-hard problem. There are infinitely many pairs of quantum states and parameters, and I wonder how the size of the training set would scale with the number of qubits.*

This is a misunderstanding.

- Firstly, **our framework does not involve any training of the AAE ansatz**. The training set only contains quantum states, so there are no "pairs of quantum states and parameters". We adopt a "state-oriented" training methodology as described in lines 172\~185. Perhaps the reviewer is discussing parameter-oriented training (lines 159\~171); this method is not adopted in our framework.
In fact, we identify and address its limitation, as stated in lines 170\~171: "Consequently, required is a more effective loss function design without involving AAE." Please refer to Sec. 3.2 for more details.

- Secondly, our empirical study **has shown that a learnable mapping exists between target states and circuit parameters, demonstrating the possibility of using an ML model to predict the parameters**. We acknowledge that the current methodology may not be optimal, and we will continue to advance this direction in the future. Our research demonstrates that learning this mapping is non-trivial. We believe that identifying a machine learning problem that is both significantly valuable and challenging is meaningful. We thank the reviewer for the feedback and will clarify this in our paper.

> *The training overhead is non-negligible. If we are preparing a quantum state that is beyond the simulation power of classical devices, the evaluation methods based on state fidelity would need an enormous number of quantum circuit executions, which I suspect would not be much less than training the AAE.*

As stated in our paper (Sec. 2.2), the QSP discussed in our paper refers to a process of **loading classical data into a quantum state**. An implicit setting is that the classical data to be prepared is already stored in classical systems, i.e., the state being prepared is within the capacity of classical storage. As such, the assumption of "preparing a quantum state that is beyond the simulation power of classical devices" does not hold. Besides, the key distinction between SuperEncoder and AAE is that **AAE requires training at runtime**, while SuperEncoder is trained offline. For example, when using ChatGPT, the cost of model training is certainly not a concern for the user.

--- Rebuttal Comment 1.1: Comment: Let's first find some common ground that we both agree on.
The essence of this paper is to train a neural network that is used to map the target quantum state to the parameters in the ansatz. The authors claim the proposed NN can achieve comparable results with much shallower circuits compared to the previous method. (I don't think that I have any misunderstanding in the original review) Then comes the disagreements. 1. We have infinitely many target states for any number of qubits, which means we have infinitely many pairs of target states (training data) and parameters (labels). I found it really hard to believe that such a mapping exists. Firstly, training a given quantum ansatz, as VQE does, is already a non-convex problem, and you are saying that you can "predict" all the parameters without onsite training. Secondly, we can use different ansatz (with different amounts of parameters) to achieve the same target state, and you are saying that you can map the target states to parameters in entirely different spaces. If this is possible, why do we need VQE anymore? Since you can map an arbitrary state to the ansatz parameters with your trainable neural networks. 2. I've checked with other reviewers, and it seems common sense that the proposed method lacks scalability. How can you map a quantum state with $1\times 10^9$ dimension to the parameter vector with $1\times 10^9$ dimension? 3. You are saying that the proposed method can achieve similar results with a shallower circuit, and I would like to point out that this is not an advantage at all. If you please try to alter the ansatz used in your experiment with extreme depth (extremely large number of parameters) and I suspect that you will find out the proposed NN is not able to map the state to such a large parameter space. The proposed method is only possible with a toy scale (including the state space and parameter space). I intend to keep my score. --- Rebuttal 2: Comment: We thank the reviewer for the quick response, and understand that our work is counter-intuitive. 
Following are our responses to your additional comments (highlighted in italics).

> *I found it really hard to believe that such a mapping exists. Firstly, training a given quantum ansatz, as VQE does, is already a non-convex problem, and you are saying that you can "predict" all the parameters without onsite training. Secondly, we can use different ansatz (with different amounts of parameters) to achieve the same target state, and you are saying that you can map the target states to parameters in entirely different spaces. If this is possible, why do we need VQE anymore? Since you can map an arbitrary state to the ansatz parameters with your trainable neural networks.*

The reviewer's argument is: if the methodology of SuperEncoder were feasible, we could predict the parameters of VQE and would not need VQE anymore; therefore, the methodology of SuperEncoder is not feasible. We believe this is a misunderstanding. We argue that AAE is fundamentally different from Hamiltonian-oriented Variational Quantum Algorithms (VQAs) such as VQE. That is, the final state that we want the system to evolve to is **NOT known** for typical VQAs, whereas the final state is known for AAE. Essentially, AAE belongs to QSP, but VQE does not. Thus, it is certainly impossible to use SuperEncoder to predict parameters for these VQAs, but this does not mean that our methodology is infeasible.

We argue that our methodology is feasible. Besides the empirical evidence provided in our paper, we would like to illustrate it from the following perspective, which we have also elaborated in lines 136\~148.

- In AE, i.e., the precise QSP method, we use exactly the same procedure to generate the required QSP circuit for an arbitrary target state. That is, for any given state $|\psi\rangle$, there exists a universal mapping $f: |\psi\rangle \to U_\theta$, such that $U_\theta |0\rangle = |\psi\rangle$.
- In this paper, we simply take one step further and ask the following question: given a quantum state, is there a deterministic mapping between this state and a QSP circuit that could *approximately* prepare the state? We argue that our intuition is natural and reasonable.

> *I've checked with other reviewers, and it seems common sense that the proposed method lacks scalability. How can you map a quantum state with $1\times 10^9$ dimension to the parameter vector with $1\times 10^9$ dimension?*

Based on our understanding, the concerns are more about the input size and the training efficiency when the number of qubits is large. As we emphasized in the global response, we refer to QSP as a process of loading classical data into quantum states. Realistic classical data such as image/text embeddings typically do not have an exceedingly large number of dimensions. According to [OpenAI documentation](https://openai.com/index/new-embedding-models-and-api-updates/), its latest embedding model `text-embedding-3-large` creates embeddings with up to 3072 dimensions, which can be accommodated by 12 qubits. Moreover, it is certainly possible to use a neural network to map large vectors from one space to another; classical text-to-image tasks are great examples.

> *which means we have infinitely many pairs of target states (training data) and parameters (labels)*

Is "infinitely many pairs of inputs and outputs" really a problem in machine learning? As long as the mapping between inputs and outputs is learnable, it is definitely possible to construct a dataset with a finite number of data points. Virtually every classical ML problem verifies this.

> *You are saying that the proposed method can achieve similar results with a shallower circuit, and I would like to point out that this is not an advantage at all.
If you please try to alter the ansatz used in your experiment with extreme depth (extremely large number of parameters) and I suspect that you will find out the proposed NN is not able to map the state to such a large parameter space. The proposed method is only possible with a toy scale (including the state space and parameter space).*

We disagree with this point. Large circuit depth is a well-known challenge in QSP, so we definitely prefer shallower circuits. We sincerely thank the reviewer for the feedback and look forward to any further questions.

--- Rebuttal Comment 2.1: Comment: If the NN model can predict the parameters from the target state, then there will be an inverse model that can predict the target state from the parameters. Consider the random circuit sampling problem. We can pre-obtain a dataset with different circuit parameters and the final state. Under your assumption, can we train an NN model based on this dataset and predict the final state?

--- Rebuttal 3: Comment:

> If the NN model can predict the parameters from the target state, then there will be an inverse model that can predict the target state from the parameters

This assumption is almost equivalent to: for any machine learning model trained on a dataset $(x,y)$, where $x$ refers to the input and $y$ refers to the output, we can train another model using a dataset $(y,x)$, with $y$ the input and $x$ the output. We are not sure why this assumption holds.

--- Rebuttal 4: Comment: Dear Reviewer, before making an ultimate judgment on the feasibility of the methodology behind SuperEncoder, we would like to humbly ask you to think about the following questions.

- What is the essence of AAE? It constructs a circuit with a fixed structure and trainable parameters, then iteratively updates its parameters to approximate the target state.
Indeed, this implies that, given an arbitrary quantum state, it is possible to utilize the same procedure to construct a QSP circuit that approximately generates this state. However, this procedure follows a trial-and-error methodology, and its iterative optimization at runtime becomes a bottleneck.

- What is the essential goal of SuperEncoder? The goal is to build an AI designer that directly generates a QSP circuit that can approximately prepare an arbitrary quantum state, while minimizing online iterations to ensure efficiency.

Our current work is just an initial exploration; we argue that there is significant room for further improvement. However, we may need to address some interesting but challenging research questions, which may require sustained engagement from the community over a long time. We list some of these research questions below (part of them are our ongoing work; we reveal these ideas here for clarification).

- Because the procedure of precise QSP (i.e., AE) is deterministic, essentially arithmetic decomposition, how can we let an ML model learn from this procedure? If we can make a model understand some fundamental principles and methods of QSP, is it possible for this model to find a universal, non-iterative QSP circuit construction method while limiting circuit depth under a given approximation ratio?

- Currently we use a fixed circuit structure and predict its parameters. We ask: is it possible to train a model to generate a different circuit structure (as well as the associated parameters) for different target states? The freedom of circuit structure, i.e., what type of gates to use and where to put them, can potentially enhance circuit expressibility. However, we may need to find an effective training methodology for this kind of model.

The above discussions may be beyond the scope of our paper. We just want to emphasize that there are many possibilities for exploration in this direction.
Since we have shown the feasibility of our methodology with strong empirical evidence at the scale of 4\~8 qubits, we disagree that it is reasonable to conclude that our methodology definitely cannot be extended to larger quantum states.

--- Rebuttal 5: Comment: We have found a concurrent, already published work \[R0\] that explores a direction similar to ours. In short, their method is also NN-based arbitrary QSP, but with a focus on low-level quantum control. Specifically, the authors proposed to "use a large number of initial and target states to train the neural network and subsequently use the well-trained network to generate the pulse sequence". Since our study started before this paper was published, we were unaware of this work. We will incorporate relevant discussions in Sec. 5. This paper serves as strong evidence of the feasibility of our work.

\[R0\] Li, Chao-Chao, Run-Hong He, and Zhao-Ming Wang. "Enhanced quantum state preparation via stochastic predictions of neural networks." Physical Review A 108.5 (2023): 052418.
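For readers following the thread above, the "state-oriented training" methodology under debate can be illustrated with a deliberately tiny, hypothetical sketch: a single-qubit RY "circuit", a linear "encoder" mapping a target state vector to a rotation angle, and an infidelity loss trained without any (state, parameter) label pairs. The one-qubit setup, the linear encoder, and all names are illustrative assumptions of ours, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def circuit_state(theta):
    # Fixed "ansatz": RY(theta)|0> = [cos(theta/2), sin(theta/2)] (real amplitudes).
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def loss_and_grads(w, b, target):
    # Hypothetical linear encoder: target state vector -> rotation angle.
    theta = w @ target + b
    # State-oriented infidelity loss: L = 1 - |<psi(theta)|target>|^2.
    overlap = circuit_state(theta) @ target
    loss = 1.0 - overlap ** 2
    # For target = [cos(a/2), sin(a/2)]: L = sin^2((theta - a)/2),
    # so dL/dtheta = 0.5 * sin(theta - a); chain rule gives dL/dw, dL/db.
    a = 2.0 * np.arctan2(target[1], target[0])
    dtheta = 0.5 * np.sin(theta - a)
    return loss, dtheta * target, dtheta

# Train on random target states only -- no parameter labels are ever needed.
w, b = rng.normal(size=2), 0.0
for _ in range(2000):
    a = rng.uniform(0.0, np.pi)
    target = np.array([np.cos(a / 2.0), np.sin(a / 2.0)])
    _, dw, db = loss_and_grads(w, b, target)
    w -= 0.5 * dw
    b -= 0.5 * db

# The trained encoder generalizes to an unseen target state.
test_target = np.array([np.cos(1.234 / 2.0), np.sin(1.234 / 2.0)])
test_loss, _, _ = loss_and_grads(w, b, test_target)
print(f"test infidelity: {test_loss:.4f}")
```

Note how the quantum circuit appears only inside the loss as a fixed differentiable map, matching the rebuttal's description; only the encoder weights `w` and `b` are trained.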
Summary: The paper addresses the problem of Quantum State Preparation (QSP), which is critical for quantum computing but requires a circuit depth that scales exponentially with the number of qubits, making it impractical for large-scale problems. The authors propose SuperEncoder, a pre-trained classical neural network model designed to estimate the parameters of a Parameterized Quantum Circuit (PQC) for any given quantum state. This approach eliminates the need for iterative parameter tuning, making it a significant advancement towards iteration-free approximate QSP. Contributions 1. Introduction of SuperEncoder, which pre-trains a classical neural network to estimate PQC parameters directly, bypassing the need for iterative updates. 2. Provides empirical evidence that SuperEncoder significantly reduces the runtime for quantum state preparation compared to traditional methods, thus enhancing the efficiency of quantum algorithms. Strengths: See Contributions. Weaknesses: 1. [Scalability Issue] The most significant drawback of this work is its poor scalability. Since the input to the SuperEncoder is $2^n$ dimensional, the number of qubits cannot be too high, such as exceeding 20 qubits. This limitation severely restricts the applicability of the SuperEncoder to larger quantum systems. Discussing potential strategies to overcome this drawback would greatly enhance the practical value of the SuperEncoder. 2. [Barren Plateau Problem] Another major issue is that, even within a reasonable range of qubit numbers (e.g., 10-20), training the SuperEncoder is challenging due to the barren plateau problem. Consequently, the SuperEncoder is likely only suitable for situations involving fewer than 10 qubits. In these cases, the time difference between AAE and SuperEncoder is not as significant as one might expect, which greatly limits the potential impact of this work. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Additionally, 1. 
[Target Parameters Acquisition] In the parameter-oriented training section, the process for obtaining the target parameters $\theta$ is not sufficiently clarified. If I understand correctly, these parameters are initially derived using the Variational Quantum Eigensolver (VQE) method. This approach inherently introduces errors and is prone to local minima, which could negatively impact the effectiveness of the SuperEncoder. It would be beneficial for the authors to address these issues and discuss the implications of using VQE-derived parameters. 2. [Gradient Calculation Complexity] The gradient analysis section appears overly complex. Specifically, the calculation of gradients could be simplified by using the parameter shift rule to directly compute the gradient of $L_3$ with respect to $\theta$, rather than calculating the gradient with respect to $U$. 3. [Runtime Clarification] In Table 3, the term "Runtime" needs clarification. It is unclear whether this refers to the training time required for the SuperEncoder or the inference time once the model is trained. Providing a clear distinction between these two would help in accurately assessing the efficiency and practicality of the proposed method. 4. [Data Distribution Scope] The distribution characterized by the SuperEncoder seems to be specifically tailored to a particular dataset. Theoretically, the SuperEncoder should be capable of characterizing vectors across the entire space. Have the authors tested the SuperEncoder on a broader range of vectors? If so, what were the results? Addressing this question could provide valuable insights into the versatility and generalizability of the SuperEncoder. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. Following are our responses to each individual comment (highlighted in italics).

> *[Scalability Issue] The most significant drawback of this work is its poor scalability. Since the input to the SuperEncoder is 2^n dimensional, the number of qubits cannot be too high, such as exceeding 20 qubits. This limitation severely restricts the applicability of the SuperEncoder to larger quantum systems.*

We disagree that scalability is a weakness. As stated in our paper (Sec. 2.2), the Quantum State Preparation (QSP) discussed in our paper refers to a process of **loading classical data into a quantum state**. Therefore, an implicit setting is that the classical data to be prepared is already stored in classical systems, i.e., the state being prepared is within the capacity of classical storage. In fact, the input to the SuperEncoder is also the input to our baselines (AE/AAE). If the input size is a problem, it is a challenge for our baselines as well as for the research field as a whole.

> *[Barren Plateau Problem] Another major issue is that, even within a reasonable range of qubit numbers (e.g., 10-20), training the SuperEncoder is challenging due to the barren plateau problem. Consequently, the SuperEncoder is likely only suitable for situations involving fewer than 10 qubits.*

This is a misunderstanding. Barren plateaus do not affect SuperEncoder. The workflow of SuperEncoder is as follows.

1. Build a *classical neural network*. The input is the target state vector; the output is the parameter vector of the QSP circuit.
2. Train this *classical neural network*. The loss is designed to be the divergence between the state prepared by the QSP circuit and the target state.

The *barren plateau* phenomenon occurs when **optimizing the parameters of quantum circuits** \[R0\].
However, **the parameters being optimized in SuperEncoder only include the weights of the classical neural network**. More specifically, the quantum circuit employed in SuperEncoder serves as a fixed tensor transformation, which maps the parameter vector generated by the classical NN model to the prepared state vector and contains no trainable parameters.

> *[Target Parameters Acquisition] If I understand correctly, these parameters are initially derived using the Variational Quantum Eigensolver (VQE) method. This approach inherently introduces errors and is prone to local minima, which could negatively impact the effectiveness of the SuperEncoder.*

This is a misunderstanding. The parameters are *not* derived using VQE. They are derived using AAE \[R1\] (our baseline). Although both AAE and VQE have the drawbacks of inherently introducing errors and being prone to local minima, parameter-oriented training is **NOT** employed in our framework. Instead, we use *state-oriented training* without involving AAE, thereby avoiding these drawbacks. In state-oriented training, we do not need to acquire target parameters. Please refer to Sec. 3.2 for more details.

> *[Gradient Calculation Complexity] the calculation of gradients could be simplified by using the parameter shift rule to directly compute the gradient of $L_3$ with respect to $\theta$*

We disagree with this point. $L_3$ is defined as $1 - \langle \psi | \hat{\rho} | \psi \rangle$ (line 196), where $|\psi\rangle$ is the target state, i.e., a constant state vector, and $\hat{\rho}$ denotes the density matrix of the prepared state; thus we can focus on $\hat{\rho}$.
The density matrix as a function of $\theta$ can be written as

$$ \hat{\rho} = f(U(\theta)), $$

thus

$$ \frac{\partial \hat{\rho}}{\partial \theta} = \frac{\partial f}{\partial U} \cdot \frac{\partial U}{\partial \theta} = \frac{\partial f}{\partial U} \cdot \frac{1}{2} \left( U(\theta_{+}) - U(\theta_{-}) \right). $$

If it were possible to apply the parameter shift rule to $L_3$, we would have

$$ \frac{\partial f}{\partial U} \cdot \frac{1}{2} \left( U(\theta_{+}) - U(\theta_{-}) \right) = \frac{1}{2} \left( f(U(\theta_{+})) - f(U(\theta_{-})) \right). $$

This would require $f(U(\theta_{+})) = \frac{\partial f}{\partial U} \cdot U(\theta_{+})$, i.e., $f$ acting linearly on $U$. However, the relationship between $\hat{\rho}$ and $U$ can be nonlinear due to the complexity of obtaining $\hat{\rho}$, so $\frac{\partial L_3}{\partial \theta}$ cannot be directly calculated using the parameter shift rule.

> *[Runtime Clarification] It is unclear whether this refers to the training time required for the SuperEncoder or the inference time once the model is trained.*

Sorry about the confusion. Runtime refers to the inference time. SuperEncoder is a pre-trained model that can generate QSP circuit parameters for arbitrary target states. Training SuperEncoder is done offline and does not belong to runtime. We will clarify this definition in our paper.

> *[Data Distribution Scope] The distribution characterized by the SuperEncoder seems to be specifically tailored to a particular dataset.*

This is a misunderstanding. SuperEncoder is not tailored to a particular dataset. It is trained using FractalDB, which contains artificial images. Instead of splitting FractalDB into a training set and a test set, we construct a test set that is independent of the training set, thereby ensuring generalizability (Sec. 4.1). Specifically, the test set is composed of various distributions covering a wide range of the vector space. Notably, it contains state vectors sampled from a uniform distribution, which can be considered randomized states.
The test fidelity on these states is 0.9731, affirming the generalizability of SuperEncoder. \[R0\] McClean, Jarrod R., et al. "Barren plateaus in quantum neural network training landscapes." Nature communications 9.1 (2018): 4812. \[R1\] Nakaji, Kouhei, et al. "Approximate amplitude encoding in shallow parameterized quantum circuits and its application to financial market indicators." Physical Review Research 4.2 (2022): 023136. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed rebuttal. While I appreciate your responses, I still have a few disagreements. 1. Barren Plateaus: I respectfully disagree that my concern about the Barren Plateaus was a misunderstanding. I have conducted similar experiments, and while it is true that the trainable parameters are within the classical neural network (NN), the output of the NN (denoted as y) ultimately serves as the rotation angles in the quantum circuit. Applying the chain rule for gradients, we first need to compute the gradient of the loss with respect to y before computing the gradient of y with respect to the NN parameters. While the latter part is free from the barren plateau issue, the former is indeed subject to it. Therefore, the barren plateau phenomenon still poses a challenge in this context. 2. Data Distribution Scope: After considering the feedback from other reviewers, I tend to agree that the SuperEncoder may not fully cover the entire space. The primary contribution of this paper seems to be that the SuperEncoder can effectively handle certain particular datasets. However, the performance on these datasets may have led the authors to an optimistic view that it can scale to any qubit count, which may not be entirely accurate. In summary, while the SuperEncoder presents some notable contributions, I do not believe it meets the bar for acceptance at NeurIPS. Therefore, I maintain my original score. --- Rebuttal 2: Comment: Thank you for your constructive feedback. 
We are pleased that you acknowledged the contributions of our work, and we respect your overall evaluation of this study. However, we believe it is worth further discussing your two remaining concerns. In particular, we look forward to your further response on the barren plateau problem; we appreciate your time and sincerely hope to learn more during the discussion phase.

- **Data Distribution Scope**: In our test sets, we have randomized quantum states covering a wide range of the vector space. In this context, SuperEncoder is not tailored to any particular data distribution. However, we do acknowledge that the current SuperEncoder's performance degrades as the number of qubits increases. Fully covering the entire space when increasing the number of qubits is an open challenge; we will definitely head in this direction.

- **Barren Plateaus**: The gradient of the loss $L$ w.r.t. the weights $W$ of the MLP is given by $\frac{\partial L}{\partial W} = \frac{\partial L}{\partial y} \cdot \frac{\partial y}{\partial W}$. Here we let $y$ be the output of the MLP, i.e., the parameters of the quantum circuit, to be consistent with your previous comment. If we understand correctly, you believe that as long as the term $\frac{\partial L}{\partial y}$ exists, it will become zero as the number of qubits increases and we will experience the barren plateau problem. We respectfully argue that this is questionable. As illustrated in the original barren plateau paper \[R0\], barren plateaus occur under the assumption that the circuit is initialized as a Haar-random unitary, so that the variance of gradients decreases exponentially in the number of qubits. But in our framework, we do not explicitly initialize a random quantum circuit, since the circuit contains no trainable parameters. Because the parameters are the outputs of the MLP, they carry the pattern of the inputs and do not necessarily correspond to a randomly initialized quantum circuit.
In our framework, the quantum circuit acts more like a differentiable activation function. We have verified that the training loss with 10+ qubits converges well (however, it seems we cannot attach images here). Moreover, there have been extensive efforts on avoiding barren plateaus; we believe this concern, common to the entire field, is not a significant drawback of our work.

\[R0\] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, "Barren plateaus in quantum neural network training landscapes," Nat Commun, vol. 9, no. 1, p. 4812, Nov. 2018, doi: 10.1038/s41467-018-07090-4.
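As a side note on the gradient debate above: for a plain Pauli-rotation expectation value, the parameter-shift rule does hold exactly; the contention in this thread concerns composing it through a nonlinear fidelity loss, not the rule itself. A minimal single-qubit check (a toy example of ours, not the paper's $L_3$):

```python
import numpy as np

def expect_z(theta):
    # <Z> after RY(theta)|0> = [cos(theta/2), sin(theta/2)]; analytically cos(theta).
    psi = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
    return psi[0] ** 2 - psi[1] ** 2

theta = 0.7
# Parameter-shift rule for a Pauli rotation:
# dE/dtheta = (E(theta + pi/2) - E(theta - pi/2)) / 2.
shift_grad = 0.5 * (expect_z(theta + np.pi / 2) - expect_z(theta - np.pi / 2))
analytic_grad = -np.sin(theta)  # d/dtheta of cos(theta)
print(shift_grad, analytic_grad)
```

The two gradients agree to floating-point precision, which is exactly what the rule guarantees for expectations of this form; losses that are nonlinear in the circuit output are a different matter.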
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and comments, which have helped improve our paper. We are pleased that the reviewers acknowledged our contributions and found our idea a novel solution. Following are some of the main critiques; afterwards, we address each reviewer's comments individually.

- Most reviewers had concerns over the scalability of the proposed method because the input size is $2^n$. We argue that this is not a problem, because the definition of QSP in this paper is: *loading classical data into a quantum state* (lines 100\~101). The input can be any data in the classical world (e.g., embedding vectors of images, texts, or videos). These inputs are already stored in classical systems and are assumed to fit within available storage space. As such, the value of $n$ will not be exceedingly large, so the input size cannot be a limiting factor of our method. In fact, all QSP methods discussed in our paper (i.e., AE and AAE) have the same inputs. If the input size were a problem, it would be a problem for the research field as a whole. We will incorporate a clearer problem setting in our paper to enhance clarity.

- Reviewers dsnA and 4Ck1 misunderstood our approach. We do not rely on acquiring parameters through AAE or constructing a training dataset with pairs of states and parameters. In fact, we refer to this approach as *parameter-oriented training* in our paper, an unsuccessful approach initially explored in our study. We have identified its issue (lines 170\~171) and proposed to address it by using state-oriented training.

- Reviewer 4Ck1 expressed concerns about the performance of SuperEncoder. We have acknowledged the degradation of fidelity and discussed this limitation (Sec. 4.4). However, we argue that this limitation does not overwhelm the immense value of our work. Firstly, the major challenges highlighted in this paper have been successfully addressed.
That is, the huge overhead of iterative online optimization in AAE has been eliminated. We argue that this drawback of AAE significantly hinders its practical value, and addressing it is of great significance. Secondly, our fidelity degradation is not catastrophic. In particular, SuperEncoder achieves excellent performance in important downstream tasks, such as QML. We include more QML results in the attached PDF, showing that SuperEncoder can achieve performance comparable with the baselines when increasing the number of qubits. Finally, we will continue to investigate solutions to further enhance the fidelity of SuperEncoder in the future. We argue that SuperEncoder initiates an interesting but also challenging machine learning problem, which is of great significance for both quantum computing and machine learning. To summarize, we believe all concerns have been addressed. We will incorporate part of this discussion into our paper to make it easier for readers to understand our problem setting and methodology. Pdf: /pdf/7fd00323d10efdfe811c30df85c24b7dc7a9d713.pdf
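The global response's point that data-loading qubit counts stay moderate for classical inputs can be made concrete: amplitude-encoding a $d$-dimensional classical vector needs only $\lceil \log_2 d \rceil$ qubits, after zero-padding to the next power of two and L2-normalizing. A minimal sketch (the helper name is ours, not from the paper):

```python
import math
import numpy as np

def amplitude_encode(vec):
    """Zero-pad a classical vector to the next power of two and L2-normalize,
    yielding a valid amplitude vector and the number of qubits it occupies."""
    d = len(vec)
    n_qubits = max(1, math.ceil(math.log2(d)))
    padded = np.zeros(2 ** n_qubits)
    padded[:d] = np.asarray(vec, dtype=float)
    norm = np.linalg.norm(padded)
    if norm == 0.0:
        raise ValueError("the zero vector is not a valid quantum state")
    return padded / norm, n_qubits

# The 3072-dimensional embedding case discussed in the rebuttals fits in
# 12 qubits, since 2**12 = 4096 >= 3072.
state, n = amplitude_encode(np.ones(3072))
print(n, state.shape[0])
```

This is why the response argues the $2^n$ input dimension is bounded by classical storage rather than by the number of qubits: $n$ grows only logarithmically in the classical data dimension.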
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Multi-Scale VMamba: Hierarchy in Hierarchy Visual State Space Model
Accept (poster)
Summary: This paper presents a multi-scale VMamba model, which incorporates multi-scale information into the design of the VMamba architecture. Additionally, the authors analyze how the attenuation coefficient between tokens increases as the modeled distance in VMamba grows, whereas MSVMamba alleviates this attenuation issue by reducing the sequence length. The authors have conducted extensive experiments to thoroughly validate the effectiveness of MSVMamba. Moreover, the authors have also employed SE modules and ConvFFN to further enhance the model's performance. Strengths: 1. The authors' writing is clear and easy to follow. 2. The authors analyzed the issue of long-distance forgetting and proposed an effective method to address this problem. 3. The authors conducted comprehensive ablation experiments to validate the role of each module (MS2D, SE, ConvFFN) in MSVMamba for classification tasks. Weaknesses: 1. Although MSVMamba has achieved better results compared to other VMamba models, its scalability has not yet been validated. Could the authors provide training results of MSVMamba on larger models, such as those with 50M and 90M parameters, to demonstrate the model's scalability? 2. The authors' ablation experiments only validated the performance of the modules in classification tasks. It would be better to further verify the roles of each module in more fine-grained tasks such as detection and segmentation. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and for the time you have dedicated to reviewing our manuscript. We greatly appreciate your feedback, which has been instrumental in enhancing the quality of our paper. Your concerns about the **scalability (Q1)** and the **ablation of our model in more fine-grained tasks (Q2)** are addressed below. ### **Q1: Scalability.** Thanks for this valuable suggestion! It is truly important to validate the scalability of the proposed model. To address this, we have added results for the proposed model scaled up to small and base sizes on ImageNet-1K. Please check further details in **Table 1** of the uploaded PDF. MSVMamba-S and MSVMamba-B consistently outperform the VMamba baseline by 0.6% top-1 accuracy each. Additionally, we are conducting further experiments on downstream tasks for the small and base size models, including COCO detection and ADE20K segmentation. Due to time constraints, these results are still pending, but we commit to including them in the revised paper. ### **Q2: Ablation on more fine-grained tasks.** Thanks for this suggestion. As suggested, we have conducted an ablation study to evaluate the impact of each module on the detection and instance segmentation tasks on the COCO dataset, utilizing the Mask R-CNN framework. Detailed results of this study can be found in **Table 4** of the uploaded PDF. All experiments employed an ImageNet-1K pretrained backbone with a 100-epoch training schedule for initialization. The MS2D module alone brings improvements of 1.0% in box AP and 0.7% in mask AP compared to the VMamba baseline. Integrating the additional components further enhances performance. The results on ADE20K segmentation are still in progress due to time constraints. We will include all of the ablation details on fine-grained tasks in our revised paper. We hope that these revisions address your concerns. 
**We thank you once again for your constructive feedback, which has significantly contributed to the improvement of our work.** --- Rebuttal Comment 1.1: Title: final reviewer Comment: Thanks for the authors' efforts. My concerns are well addressed. I will keep my initial rating (7).
Summary: This paper introduces a Multi-Scale Vision Mamba (MSVMamba) for computer vision tasks. It uses a multi-scale 2D scan operation on both the original and sub-sampled features to preserve long-range information and reduce computational costs. In addition, they address the problem of channel mixing in Mamba-based models by introducing a Convolutional Feed-Forward Network (ConvFFN) module. The resultant model achieves favorable performance on image classification and a variety of downstream tasks such as detection and segmentation. Strengths: 1. The paper is well-written and easy to follow. 2. It attempts to address an important issue in making Mamba-based vision models more suitable for computer vision tasks. To achieve this, it introduces a hierarchical architecture, which, although not novel, is combined with modules such as the Convolutional Feed-Forward Network (ConvFFN) to further enhance performance. 3. The analysis of the effectiveness of the 2D selective scan approach, initially introduced in VMamba, is interesting and provides further insights on how to improve scanning for vision tasks. Weaknesses: 1. The major issue with this work is its lack of novelty. Hierarchical Mamba-based vision models have already been introduced in VMamba and its other variants. The MSVSS block seems to be a minor improvement over the existing VMamba block. In addition, the role of ConvFFN seems to be quite marginal. This is due to the fact that MLP blocks themselves can inherently perform channel mixing to a great extent. 2. Experiments are insufficient. This work only presents three small variants, MSVMamba-N, MSVMamba-M and MSVMamba-T, with the biggest model having only 33M parameters. Hence, it is not really clear how the proposed approach scales to mid- to larger-sized models, which have better accuracies. It cannot even be compared to the small variants of many models (e.g., Swin-S) due to its contrived setting. 3. 
The paper only focuses on the number of FLOPs as a representative of efficiency. However, a more practical scenario involves measuring throughput (or latency) on different devices (GPU, TPU, etc.). In particular, it is important to understand whether the 2D selective scan approach introduces any significant overhead. Post-rebuttal: The following issues and weaknesses were revealed during rebuttal after interactions with the authors: 1. The proposed MSVMamba is slower than models such as ConvNeXt at both smaller and higher resolutions in terms of throughput (see Table 1 and Table 2 in the rebuttal). This limits the practical usage of this work due to its lower throughput and can present significant challenges. 2. The authors deliberately presented results from the first version of VMamba, which is not optimal. Although the second version (https://arxiv.org/abs/2401.10166v2) was released 42 days before the submission deadline, the authors claim that it should be considered concurrent. This argument is not well-founded since we need to fairly evaluate the contribution of this work against VMamba and other methods. 3. The issue of limited novelty presents itself when comparing against VMamba (either the first or second iteration). As expected, MSVMamba does not significantly improve the results -- the authors claimed that MSVMamba addressed the long-distance forgetting issue in VMamba, which is not backed up by these results. Considering these issues, I lower my score to strong reject (2). I encourage the authors to revise their manuscript, include the best results from VMamba, and try to evaluate the contributions of their work quantitatively. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How does the model compare in terms of image throughput to other Mamba-based vision models as well as CNN-based and ViT variants on a GPU? 2. Is the 2D selective scan approach faster than the naive selective scan originally introduced in Mamba? 3. 
Did the authors try to scale up the model size to see whether performance is comparable to other models? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and for the time you have dedicated to reviewing our manuscript. We appreciate your feedback, which has helped us improve the quality of our paper. We address your concerns point by point below: ### **Q1: Concerns About Novelty.** Thank you for raising this concern. We agree that the hierarchical architecture is not new in itself, as similar structures have been utilized in VMamba and various Vision Transformers. However, our contribution lies in the introduction of **an additional hierarchy within a single layer**, which we refer to as **"Hierarchy in Hierarchy"** in our title and which is distinct from previously explored hierarchical designs. Specifically, traditional hierarchical models primarily focus on creating a feature pyramid across stages. Our method conducts the scanning process on full-resolution and downsampled feature maps simultaneously, which introduces **a hierarchy inside one layer** or one stage. This **"Hierarchy in Hierarchy"** has not been explored in previous Vision Mamba-based works. Besides, unlike **traditional multi-scale strategies** that primarily **enhance hierarchical feature learning**, our **MS2D** module is motivated by the need to **mitigate the long-range issue** prevalent in Mamba models. This approach is not merely structural but is specifically designed to tackle the long-range problem in selective scanning, a critical issue for Mamba-based vision models. The multi-scale 2D scan, though a straightforward strategy, effectively addresses this issue and achieves notable improvements. To further demonstrate its efficacy, we have conducted additional experiments on different scanning strategies and fine-grained tasks, the results of which are detailed in **Tables 3 and 4** of the uploaded PDF. Please check them for further details. 
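To make the token-count argument behind MS2D concrete, here is a minimal NumPy sketch of the accounting described above: one full-resolution scan plus three scans on a 2x-downsampled map, versus four full-resolution scans as in VMamba's cross scan. The helper names (`downsample_2x`, `scan_token_counts`) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def downsample_2x(feat):
    """2x average-pool an (H, W, C) feature map, halving the sequence
    length along each spatial dimension (a stand-in for the
    downsampling branch used by MS2D)."""
    H, W, C = feat.shape
    return feat.reshape(H // 2, 2, W // 2, 2, C).mean(axis=(1, 3))

def scan_token_counts(H, W):
    """Tokens scanned per layer: 4 full-resolution scans (cross scan)
    vs. MS2D's 1 full-resolution scan + 3 downsampled scans."""
    cross = 4 * H * W
    ms2d = H * W + 3 * (H // 2) * (W // 2)
    return cross, ms2d

feat = np.random.rand(14, 14, 96)
small = downsample_2x(feat)              # shape (7, 7, 96)
cross, ms2d = scan_token_counts(14, 14)  # 784 vs. 343 tokens
```

Under this accounting the per-layer scan cost drops from 4HW to 1.75HW, which is consistent with the FLOPs reduction and speedups reported in the rebuttal tables.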
Furthermore, ablations on tiny-size models with a 100-epoch training schedule are also reported **in the table below**: | Model | Param(M) | GFLOPs | Top-1 Acc(%) | Thru. (imgs/sec) | Train Thru. (imgs/sec) | Memory (MB) | |----------|----------|--------|--------------|------------------|------------------------|-------------| | VMamba | 23 | 5.6 | 80.3 | 603 | 151 | 6639 | | +MS2D | 24 | 4.8 | 80.9 | 866 | 205 | 4780 | | +Others | 33 | 4.6 | 81.4 | 1092 | 331 | 4532 | Our findings indicate that the proposed MS2D module contributes an improvement of 0.6% in Top-1 accuracy for the tiny-size model. The other components of our model collectively contribute an additional 0.5% increase in accuracy. This ablation reveals that MS2D contributes more to the accuracy gain than the other components on the tiny-size model. Furthermore, the MS2D module not only enhances performance but also contributes further speed gains and reductions in memory usage. As for the ConvFFN in our model, it is intended to maintain consistency with established methodologies, as detailed in lines 250 to 253 of our manuscript. We acknowledge that using an MLP is also a viable alternative. We apologize for any confusion this may have caused and will clarify this point more explicitly in the revised version. We hope this explanation helps to clarify the innovative aspects of our work and the specific challenges it addresses. ### **Q2: Insufficient experiments.** Thank you for this suggestion! It is important to validate the scalability of the proposed model. To address this, we have added results for the proposed model scaled up to small and base sizes on ImageNet-1K. Please check further details in Table 1 of the uploaded PDF. Concretely, MSVMamba-S and MSVMamba-B outperform VMamba by 0.6% top-1 accuracy each. More experiments on downstream tasks are still in progress due to time constraints. We will include more downstream-task results in the revised version. 
### **Q3: Efficiency Comparison.** Thanks for this valuable suggestion! We apologize for the initial omission of a detailed efficiency comparison in our manuscript. To address this, we have supplemented the efficiency comparison, including training/inference FPS and memory usage, against our baseline VMamba and the widely-used Swin Transformer and ConvNeXt in **Tables 1 and 2** of the uploaded PDF for your reference. Compared to our baseline, our proposed model achieves nearly **1.5x speedup in inference** and **2.0x speedup in training**. Additionally, it requires approximately **30% less memory**. The efficiency of our models at a 224x224 image resolution does not match that of well-established architectures such as the Swin Transformer. However, when the image resolution is increased, our model achieves comparable efficiency to the Swin Transformer, as shown in Table 2 of the uploaded PDF. In addition, we also supplemented the efficiency comparison of the tiny-size model between different scanning strategies, namely the 2D selective scan and the naive selective scan in Mamba, under the same setting as Table 1 of the uploaded PDF: | Model | Param | Thru. (imgs/sec) | Train Thru. (imgs/sec) | Memory (MB) | |-------------|-------|------------------|------------------------|-------------| | Vanilla Scan| 22.9 | 602 | 151 | 6623 | | Cross Scan | 22.9 | 603 | 151 | 6639 | | MS2D | 24.2 | 866 | 205 | 4780 | It is worth noting that **no** FFN or ConvFFN is introduced in this comparison. As we can see, the cross scan in VMamba yields the same efficiency as the vanilla scan in Mamba, while our multi-scale 2D scan further improves efficiency. We hope these revisions address your concerns and **thank you once again for your constructive feedback**! --- Rebuttal Comment 1.1: Title: Reviewer's Response to Rebuttal Comment: I would like to thank the authors for providing responses to my feedback as well as uploading the rebuttal. I have the following concerns: 1. 
As mentioned in my other comment, the reported Top-1 accuracy from VMamba seems to be lower than their existing benchmarks. The authors claim that they have used the results from the first iteration of the VMamba arXiv submission. However, the benchmarks I refer to can also be found in the 2nd iteration, which has been available since Apr 10: https://arxiv.org/pdf/2401.10166v2 Can the authors provide an updated version of Table 1 for VMamba and MSVMamba models alone (Tiny, Small and Base) with the updated Top-1 accuracy as reported in the above arXiv submission? It should suffice to just reply to my comment with this table. 2. The reported numbers in Table 2 for the efficiency comparison are for batch size 32. Why not report numbers for the standard 224x224 resolution with batch size 128, which matches the setup in Table 1? Our goal here is to have a fair comparison to previous models to further understand the contributions of the proposed effort. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback; we greatly appreciate this opportunity for further clarification. We will do our best to **achieve the goal of comparison to previous models to further understand the contributions of the proposed effort**. ### Q1: Why not include the results of the second version of VMamba for comparison? We thank the reviewer for raising this reasonable concern. As you mentioned, the second version of VMamba (VMambav9) has been available since **Apr 10**, and the submission deadline was **May 22**. According to the NeurIPS 2024 official instructions, the second version of VMamba is not expected to be included in our comparison. The detailed evidence is provided by the NeurIPS 2024 official instructions (on the NeurIPS-FAQ page). 
In the section **Submission format and content**, one question is “What is the policy on comparisons to recent work?”, and the given answer is “**Papers appearing less than two months before the submission deadline are generally considered concurrent to NeurIPS submissions. Authors are not expected to compare to work that appeared only a month or two before the deadline.**” Thus, given the timeline of VMamba, we only include the first version of VMamba in our comparison. Besides, even if we take the contribution of VMambav9 into consideration, our core contribution MS2D discusses and tackles a general long-range forgetting problem of VMamba, which is orthogonal to the contributions of other versions of VMamba. Thus, the appearance of existing VMamba variants does not weaken our contributions. ### Q2: Updated version of Table 1 for VMamba and MSVMamba models alone. As suggested, we add the updated version of Table 1 below for your reference, which updates the VMamba results with **VMambav9** from the v2 version of the VMamba paper. | Model | Param(M) | GFLOPs | Top-1 Acc(%) | Thru. (imgs/sec) | Train Thru. (imgs/sec) | Memory (MB) | |-------------|----------|--------|--------------|------------------|------------------------|-------------| | VMambav9-T | 31 | 4.9 | 82.5 | 1135 | 313 | 5707 | | MSVMamba-T | 33 | 4.6 | 82.8 | 1092 | 331 | 4532 | | VMambav9-S | 50 | 8.7 | 83.6 | 749 | 207 | 5785 | | MSVMamba-S | 51 | 9.2 | 84.1 | 665 | 232 | 4910 | | VMambav9-B | 89 | 15.4 | 83.9 | 542 | 151 | 7918 | | MSVMamba-B | 91 | 16.3 | 84.3 | 476 | 127 | 6347 | As we can see, **VMambav9** exhibits **higher inference speed** and **similar training speed** compared to our model. However, its **Top-1 accuracy** on ImageNet still **lags behind ours by 0.3%, 0.5% and 0.4%** for the tiny, small and base models respectively. ### Q3: Why does Table 2 adopt a batch size of 32 instead of the standard 128 for comparison? 
We apologize for this confusion; this explanation should have been included in the caption of Table 2. When testing the throughput, we follow the code provided in the official VMamba repo (L580 in utils.py of analysis) and adjusted the batch size downwards to fit the GPU's memory constraints. As the input resolution increases, the memory cost for models also increases significantly. Taking Swin-Tiny as an example, it requires more than 20000 MB of memory with an input resolution of 768 and a batch size of 32. Adopting a batch size of 128 or 64 would cause Out-Of-Memory errors in some models. To make all comparisons under the same configuration, we adopt a batch size of 32 in Table 2 of the uploaded PDF as a compromise. We hope that these revisions address your concern and we sincerely look forward to your feedback. --- Reply to Comment 1.1.2: Comment: Table: Comparison of different versions of VMamba and our models. | Model | Param(M) | GFLOPs | Top-1 Acc(%) | Thru. (imgs/sec) | Train Thru. (imgs/sec) | Memory (MB) | |-------------|----------|--------|--------------|------------------|------------------------|-------------| | VMamba-T [01-18] | 23 | 5.6 | 82.2 | 603 | 151 | 6639 | | VMambav9-T [04-10] | 31 | 4.9 | 82.5 | 1135 | 313 | 5707 | | MSVMamba-T | 33 | 4.6 | 82.8 | 1092 | 331 | 4532 | | VMamba-S [01-18] | 44 | 11.2 | 83.5 | 425 | 106 | 6882 | | VMambav9-S [04-10] | 50 | 8.7 | 83.6 | 749 | 207 | 5785 | | MSVMamba-S | 51 | 9.2 | 84.1 | 665 | 232 | 4910 | | VMamba-B [01-18] | 76 | 18.0 | 83.7 | 314 | 77 | 8853 | | VMambav9-B [04-10] | 89 | 15.4 | 83.9 | 542 | 151 | 7918 | | MSVMamba-B | 91 | 16.3 | 84.3 | 476 | 127 | 6347 | --- Rebuttal 2: Title: Comment by Reviewer Comment: Thank you for your response. In this case, as shown in Table 1, there exists a substantial gap between the throughput of well-established models such as Swin and ConvNeXt and the proposed model. 
Even at larger resolutions, models such as ConvNeXt-T are significantly faster than the proposed MSVMamba-T. This may complicate the usage of this model in settings where throughput is important (which is almost all use cases at this time). And to add to this, ConvNeXt itself is not a very fast model in terms of throughput. Regarding the comparison between different versions of VMamba (released less than two months apart) and MSVMamba, the authors initially claimed that the 2nd iteration (available 42 days before the submission deadline) should be disregarded. However, a later comment provided the performance for both models. Upon closer examination, we observe that MSVMamba does not significantly improve the Top-1 performance of the VMamba model. I appreciate the authors' efforts in this stage of the rebuttal. I believe I have all the information I need to make a final decision. --- Rebuttal 3: Title: Post-Rebuttal Score Update Comment: I thank the authors for providing the rebuttal and their engagement during this period. Considering all aspects, I have decided to lower my score to strong reject (2). Here are the key reasons behind this decision: 1. The proposed MSVMamba is slower than models such as ConvNeXt at both smaller and higher resolutions in terms of throughput (see Table 1 and Table 2 in the rebuttal). This limits the practical usage of this work due to its lower throughput and can present significant challenges. 2. The authors deliberately presented results from the first version of VMamba, which is not optimal. Although the second version (https://arxiv.org/abs/2401.10166v2) was released 42 days before the submission deadline, the authors claim that it should be considered concurrent. This argument is not well-founded since we need to fairly evaluate the contribution of this work against VMamba and other methods. 3. The issue of limited novelty presents itself when comparing against VMamba (either the first or second iteration). 
As expected, MSVMamba does not significantly improve the results -- the authors claimed that MSVMamba addressed the long-distance forgetting issue in VMamba, which is not backed up by these results. I encourage the authors to revise their manuscript, include the best results from VMamba, and try to evaluate the contributions of their work quantitatively. --- Rebuttal Comment 3.1: Comment: We thank Reviewer QVsi for the detailed summary in the post-rebuttal, where they pointed out three issues during the discussion. We would like to clarify each issue in detail. Q1: The proposed MSVMamba is slower than models such as ConvNeXt. Note that our model design is specifically tailored for the Mamba architecture, allowing any subsequent optimizations related to efficiency to be directly inherited. Thus, we focused on comparing against the VMamba baseline. It is important to note that while our baseline VMamba is indeed much slower than ConvNeXt, it remains highly valuable and has inspired many subsequent works. For example, the first version of VMamba was released on 2024.01.18 and currently has over 300 citations, which highlights its importance. Given the observation that our baseline VMamba is much slower than existing CNNs, we are motivated to improve its efficiency. Our proposed model achieves significant improvements, with nearly 1.5x speedup in inference and 2.0x speedup in training compared to our baseline VMamba. Taking the other versions of VMamba into consideration, our contribution to efficiency is orthogonal to theirs, and the two could be integrated to achieve better efficiency. Concretely, our core contribution, MS2D, focuses on the optimization of CrossScan in our baseline VMamba. In all three versions of VMamba, CrossScan inherently exists, and the speedup mainly comes from implementation optimizations and hyper-parameter adjustments in the Mamba block. Thus, our improvement is orthogonal to the subsequent techniques used in VMamba and can be integrated with them seamlessly. 
Q2: The authors deliberately presented results from the first version of VMamba which is not optimal. As clarified in our previous comments, the second version of VMamba was released 42 days before the submission deadline and should be considered concurrent work. This argument is well-founded, based on the NeurIPS 2024 official instructions (https://neurips.cc/Conferences/2024/PaperInformation/NeurIPS-FAQ). You can find the question “**What is the policy on comparisons to recent work?**” in the section Submission format and content, and the corresponding answer is “**Papers appearing less than two months before the submission deadline are generally considered concurrent to NeurIPS submissions. Authors are not expected to compare to work that appeared only a month or two before the deadline.**” Since 42 days is obviously less than 2 months, we do not include the second version of VMamba in our comparison and focus on the comparison with the first version. Although the second version of VMamba is defined as concurrent work based on the NeurIPS 2024 official instructions, we provided a detailed comparison with the second version as a reference during the rebuttal discussion because it was strongly recommended by Reviewer QVsi. Concretely, the Top-1 accuracy of the second version of VMamba on ImageNet still lags behind our model by 0.3%, 0.5% and 0.4% for the tiny, small and base models respectively. This comparison highlights the superiority of our method over the concurrent work. Q3: The issue of limited novelty presents itself when comparing against VMamba (both first and second iterations). In comparison with VMamba (the first version), our models not only exhibit a 0.6% improvement in Top-1 accuracy on ImageNet across different model sizes but also show nearly 1.5x speedup in inference and 2.0x speedup in training FPS. 
The Multi-Scale 2D (MS2D) module, as the core contribution of our work, is well motivated by the need to mitigate the long-range issue prevalent in Mamba models. This is a critical challenge that has not been adequately addressed by existing methods. The subsequent optimizations in VMamba focus on implementation optimizations and hyper-parameter adjustments in the Mamba block; thus, the long-range issue remains. VMamba utilizes a multi-scan strategy with redundant FLOPs to alleviate this issue, while MS2D utilizes a multi-scale strategy to tackle it efficiently. As our contributions to accuracy and efficiency are also orthogonal to the subsequent versions of VMamba, these techniques could be integrated for further improvement. We hope this clarifies our position and we appreciate your understanding.
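As a rough illustration of how throughput (imgs/sec) figures like those debated in this thread are typically measured, here is a minimal, framework-agnostic sketch: warm-up passes first, then timed passes. The helper and `dummy_model` are hypothetical stand-ins, not the authors' benchmarking code (which, per the rebuttal, follows the official VMamba repo).

```python
import time

def measure_throughput(model, batch, n_warmup=3, n_iters=10):
    """Run warm-up iterations (to amortize one-time costs such as CUDA
    initialization or caching), then time n_iters forward passes and
    report images per second. `model` is any callable taking a batch;
    `batch` is a list standing in for a tensor of images."""
    for _ in range(n_warmup):
        model(batch)
    start = time.perf_counter()
    for _ in range(n_iters):
        model(batch)
    elapsed = time.perf_counter() - start
    return n_iters * len(batch) / elapsed

# A dummy "model" standing in for a real network forward pass.
def dummy_model(batch):
    return [sum(x) for x in batch]

thru = measure_throughput(dummy_model, [[1.0, 2.0]] * 32)
```

Note that, as the batch-size discussion above shows, the reported number depends on batch size, resolution, and device memory limits, so fair comparisons must fix all three.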
Summary: This paper presents a multi-scale vision mamba aimed at improving the performance of state space models (SSMs) in vision tasks while maintaining efficiency. The motivation stems from analyzing the multi-scan strategy in vision mamba, where the authors link its success to alleviating the long-range forgetting issue of SSMs in vision tasks. To address this problem effectively, the authors propose a multi-scale 2D scanning technique on both original and downsampled feature maps, which reduces the number of tokens in the multi-scan strategy. This method enhances long-range dependency learning and cuts down computational costs. Additionally, a ConvFFN is incorporated to overcome channel mixing limitations. Experimental results across various benchmarks validate the proposed multi-scale vision mamba's effectiveness. Strengths: The analysis of the multi-scan strategy's success in vision mamba is intriguing and could be valuable to the research community. The proposed approach is straightforward yet effective, addressing the long-range forgetting problem while significantly reducing computational costs. The approach strikes a better balance between performance and FLOPs, as demonstrated in the experiments. The paper includes comprehensive experiments on widely-used datasets and tasks, with comparisons to leading neural architectures showing the proposed networks' superiority. Weaknesses: The authors should consider including some simple baselines in additional ablation studies. For instance, besides the half-resolution branches in the proposed MSVMamba, reducing the scanning number in vision mamba would be a useful baseline. Incorporating these simple baselines could further highlight MSVMamba's effectiveness. The paper outlines that the proposed MSVMamba uses 3 half-resolution branches and 1 full-resolution branch (Equations 8-11). However, the scanning direction for these branches is not clearly described. 
There is a lack of discussion on how the scanning direction for the full-resolution branch is chosen and how this choice impacts performance. Technical Quality: 3 Clarity: 3 Questions for Authors: Could you provide more comparisons with simple baselines? Could you clarify the selection of different branches concerning the scanning direction and discuss its impact? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Reflecting on the weaknesses mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and for the time you have dedicated to reviewing our manuscript. We greatly appreciate your feedback, which helps enhance the quality of our paper. Below, we address your concerns regarding the **baseline involving the reduction of scanning numbers (Q1)** and the **scanning direction for the full-resolution branch (Q2)**. For clarity, we define the scanning directions as follows: Scan1 refers to the horizontal scan from the top-left corner to the bottom-right corner, and Scan2 refers to the vertical scan from the top-left corner to the bottom-right corner. Conversely, the reverse directions of Scan1 and Scan2 are denoted as Scan3 and Scan4, respectively. ### **Q1: Reducing the scanning number as additional baselines.** Thanks for the valuable suggestion. In response, we have added new baselines that involve only Scan1 (Uni-directional Scan) and a combination of Scan1 and Scan3 (Bi-directional Scan). These are now presented in **Table 3** of the uploaded PDF. Concretely, our MS2D outperforms the **Uni-directional Scan** and **Bi-directional Scan** baselines by **3.0%** and **2.4%** top-1 accuracy, respectively. We will include these results in the revised paper. ### **Q2: Ablation of the scanning direction for the full-resolution branch.** Thanks for your suggestion. First of all, we apologize for the initial lack of clarity regarding the scanning direction in the full-resolution branch. In the original experiments, Scan1 was used for the full-resolution branch. To thoroughly explore the impact of different scanning directions, we conducted additional ablation studies, shown in the **table below**. The results indicate that the different scans yield similar accuracy; Scan1 was selected for its marginally superior and more consistent performance. 
| Full-res Scan | Scan1 | Scan2 | Scan3 | Scan4 | |---------------|-------|-------|-------|-------| | Top-1 Acc(%) | 71.9 | 71.8 | 71.8 | 71.9 | These findings and the rationale for our choice of scanning direction will be updated in the revised paper to ensure clarity. We hope that these revisions adequately address your comments. **We thank you once again for your constructive feedback, which has significantly contributed to the improvement of our work**. --- Rebuttal Comment 1.1: Title: Thanks for addressing my concerns Comment: I'd like to thank the authors' response to my question. I think the response is clear enough to address my confusion.
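The four scanning directions (Scan1-Scan4) defined in the rebuttal above can be written down as index orderings over an H x W token grid. This is one plausible reading with hypothetical helper names, assuming NumPy, not the paper's exact implementation:

```python
import numpy as np

def scan_orders(H, W):
    """Scan1 = horizontal (row-major) and Scan2 = vertical (column-major),
    both starting at the top-left corner; Scan3/Scan4 are their reverses,
    mirroring the four directions of a cross scan."""
    idx = np.arange(H * W).reshape(H, W)
    scan1 = idx.flatten()      # horizontal, top-left -> bottom-right
    scan2 = idx.T.flatten()    # vertical,   top-left -> bottom-right
    scan3 = scan1[::-1]        # reverse of Scan1
    scan4 = scan2[::-1]        # reverse of Scan2
    return scan1, scan2, scan3, scan4

# On a 2 x 3 grid, Scan1 visits tokens 0,1,2,3,4,5 while
# Scan2 visits 0,3,1,4,2,5.
s1, s2, s3, s4 = scan_orders(2, 3)
```

Each ordering defines a different 1D sequence fed to the SSM, which is why the choice of direction for the single full-resolution branch is worth ablating.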
Summary: The paper introduces a novel vision backbone model, MSVMamba, which incorporates State Space Models (SSMs) to address limitations in computational efficiency and long-range dependency capture in vision tasks. The model utilizes a multi-scale 2D scanning technique and a Convolutional Feed-Forward Network (ConvFFN) to improve performance with limited parameters. Strengths: The paper is easy to follow. Weaknesses: 1. Lack of novelty. The paper proposes MSVMamba; the main contribution is shown in Table 4. MS2D, SE, and ConvFFN have already been proposed in previous papers. The main improvements come from existing knowledge. 2. Speed. The paper does not systematically measure the speed of the model on all tasks and scales. The training and inference of MSVMamba could be slow compared with current models, like CAFormer, Conv2Former, and CSwinTransformer. 3. Scalability. The models in the experiments are relatively small. Models with more than 300M or 600M parameters could show whether MSVMamba actually performs better than current SOTA models. For now, the experiments just show that the model converges quickly w.r.t. model size. Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the weaknesses. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: Please refer to the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and the time you have dedicated to reviewing our manuscript. We appreciate your feedback, which helps us improve the quality of our paper. We address your concerns in a few points as described below: ### **Q1: Lack of novelty.** We agree that multi-scale strategy, SE, and ConvFFN have been introduced in previous CNN or ViT-based works. However, the utilization of these techniques, especially the multi-scale strategy, is well-motivated by the analysis and empirical observation in this paper, which could provide insights to Mamba model design for vision tasks. Specifically, we take our core contribution MS2D as an example. To begin with, **MS2D is first introduced in this paper**, which is carefully designed for Mamba in vision tasks. Unlike traditional multi-scale strategies that primarily enhance hierarchical feature learning, our **MS2D module is motivated by the need to mitigate the long-range issue** prevalent in Mamba models. This is a critical challenge that has not been adequately addressed by existing methods. As detailed in lines 151 to 161 of our manuscript, our analysis demonstrates that the contribution of tokens significantly decays with increased scanning distance. To address this, we innovatively downsample the feature map to effectively shorten the sequence length. This adaptation not only introduces a multi-scale design within a single layer but also specifically targets the reduction of computational complexity and improves the efficiency of long-range interactions. Figure 4 and Table 4 in our paper provide compelling evidence of the effectiveness of our approach. The multi-scale design significantly alleviates the long-range problem, as visually demonstrated in Figure 4. Furthermore, the quantitative ablations presented in Table 4 underscore the substantial improvements our model achieves over existing techniques. 
Thus, **the introduced multi-scale method, although similar to previous works, originates from a different motivation and addresses a distinct problem**. We hope this explanation helps to clarify the innovative aspects of our work and the specific challenges it addresses. ### **Q2: Efficiency Comparison.** Thanks for the suggestion. We have supplemented the efficiency comparison as suggested, including training/inference FPS and memory usage, against our baseline VMamba and the widely used Swin Transformer and ConvNeXt in **Tables 1 and 2** of the uploaded PDF for your reference. We acknowledge that at the time of submission, the efficiency of our models at a 224x224 image resolution did not match that of well-established architectures such as Swin Transformer and ConvNeXt. However, it is important to highlight that Mamba-based models still serve as a crucial backbone family that achieves a competitive trade-off across various settings and is currently under continuous optimization. For instance, when the image resolution is increased, our model achieves efficiency comparable to the Swin Transformer, as shown in Table 2 of the uploaded PDF. This is further supported by related experiments in ViM [1], which demonstrate the efficiency of the Mamba block in downstream tasks like detection. Our model design is specifically tailored for the Mamba architecture, allowing any subsequent efficiency-related optimizations to be directly inherited. In this work, our focus was primarily on comparing against Mamba-based baselines. Our proposed model achieves significant improvements, with nearly **1.5x speedup in inference** and **2.0x speedup in training** compared to VMamba. We understand the importance of systematic speed measurements across all tasks and scales. While our current manuscript may not cover all possible configurations, we are committed to extending our evaluations and will consider including more comprehensive speed comparisons in future revisions or subsequent works.
### **Q3: Scalability** Thanks for the suggestion! We have updated the results of the proposed model when scaling up to small and base sizes on ImageNet-1K. Please check further details in Table 1 of the uploaded PDF. MSVMamba-S and MSVMamba-B consistently outperform the VMamba baseline by 0.6% top-1 accuracy each. Regarding model sizes exceeding 300M parameters, we acknowledge the potential benefits of testing larger models to compare against current state-of-the-art (SOTA) models. However, it is important to note that most existing works on vision Mamba models [1,2,3,4,5] operate under 100M parameters. In this work, we strictly follow the settings of [1,2,3] to conduct evaluation and comparison. For example, the largest model in VMamba has 76M parameters. Thanks for your valuable suggestion. We will include the evaluation of Mamba models with much larger sizes in future works. We hope these revisions could satisfactorily address your concerns and **thank you once again for your feedback**! **References:** [1] Zhu, Lianghui, et al. "Vision Mamba: Efficient visual representation learning with bidirectional state space model." arXiv preprint arXiv:2401.09417 (2024). [2] Liu, Yue, et al. "VMamba: Visual State Space Model." arXiv preprint arXiv:2401.10166 (2024). [3] Huang, Tao, et al. "LocalMamba: Visual state space model with windowed selective scan." arXiv preprint arXiv:2403.09338 (2024). [4] Pei, Xiaohuan, Tao Huang, and Chang Xu. "EfficientVMamba: Atrous selective scan for light weight visual mamba." arXiv preprint arXiv:2403.09977 (2024). [5] Yang, Chenhongyi, et al. "PlainMamba: Improving non-hierarchical mamba in visual recognition." arXiv preprint arXiv:2403.17695 (2024).
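The downsampling idea behind MS2D discussed in Q1 (shortening the 1D scan sequence, and hence the maximum token-to-token scanning distance, by pooling the feature map before flattening) can be illustrated with a minimal sketch. This is not the paper's implementation: the 2x average pooling, the single raster-order scan, and all names here are simplifying assumptions for illustration only.

```python
import numpy as np

def multi_scale_scan(feat, pool=2):
    """Sketch of the MS2D downsampling idea: flattening a pooled feature
    map yields a scan sequence pool**2 times shorter, so the maximum
    scanning distance between any two tokens shrinks accordingly."""
    C, H, W = feat.shape
    full_seq = feat.reshape(C, H * W)  # full-resolution raster scan, length H*W
    # 2x2 average pooling stands in for the downsampling step
    low = feat.reshape(C, H // pool, pool, W // pool, pool).mean(axis=(2, 4))
    low_seq = low.reshape(C, (H // pool) * (W // pool))  # length H*W / pool**2
    return full_seq, low_seq

feat = np.random.rand(8, 16, 16)  # (C, H, W) toy feature map
full_seq, low_seq = multi_scale_scan(feat)
print(full_seq.shape[1], low_seq.shape[1])  # 256 64
```

In the actual MS2D module, four scan directions are used and the low-resolution branch is processed by the SSM alongside the full-resolution one; this sketch only shows why the pooled branch makes long-range interactions cheaper.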
Rebuttal 1: Rebuttal: Dear reviewers and ACs: First and foremost, we wish to express our sincere gratitude for the time and effort you have dedicated to reviewing our manuscript. Your insightful suggestions and comments help us further enhance the quality of this paper. We have conducted additional experiments to address the key points raised by the reviewers, as detailed in the uploaded PDF. Specifically: - Table 1 addresses scalability concerns. - Tables 1 and 2 complement the detailed efficiency comparison. - Table 3 compares our approach with additional baselines. - Table 4 explores ablations on fine-grained tasks. Please refer to these tables for more detailed information. Thank you once again for your constructive feedback! Pdf: /pdf/5815470449c25cb7d44cdbe032f3802c66a9262b.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper introduces Multi-Scale VMamba (MSVMamba), a novel vision backbone model that leverages State Space Models (SSMs) to address the challenges of quadratic complexity in Vision Transformers (ViTs). The proposed Multi-Scale 2D Scanning (MS2D) and Convolutional Feed-Forward Network (ConvFFN) contribute to the final performance. Strengths: - This paper presents a multi-scale design of VMamba, reducing the sequence length by downsampling the input features. - The SSM module proposed in this paper incorporates the strengths of residual connections, depthwise convolution, SSM, and SE blocks. - This paper achieves good results on classification and dense prediction benchmarks. Weaknesses: - This paper lacks an efficiency comparison of training/inference FPS and memory usage on real GPU devices. - The proposed multi-resolution processing seems to contain too many inductive biases. I wonder if the gains would be reduced when applied to a standard-ablation-size model, i.e., VMamba-T. Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the weaknesses. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and for the time you have dedicated to reviewing our manuscript. We appreciate your feedback, which helps us improve the quality of our paper. Below, we address your concerns regarding the **efficiency comparison (Q1)** and the **ablation of our model on a tiny-size variant (Q2)**. ### **Q1: Efficiency Comparison Among Training/Inference FPS and Memory Usage.** Thanks for the suggestion. As suggested, we have supplemented the efficiency comparison, including training/inference FPS and memory usage, against our baseline VMamba and the widely used Swin Transformer and ConvNeXt in **Table 1** and **Table 2** of the uploaded PDF for your reference. Compared to our baseline, our proposed model achieves nearly **1.5x speedup in inference** and **2.0x speedup in training**. Additionally, it requires approximately **30% less memory.** The efficiency of our models at a 224x224 image resolution did not match that of well-established architectures such as Swin Transformer. However, when the image resolution is increased, our model achieves efficiency comparable to the Swin Transformer, as shown in Table 2 of the uploaded PDF. ### **Q2: Ablation for the tiny-size model.** Thank you for raising this concern. In response to the query about the performance of our model at a standard ablation size, we have conducted additional ablation studies with a 100-epoch training schedule. The results are detailed in the **table below** for your reference:

| Model | Param(M) | GFLOPs | Top-1 Acc(%) | Thru. (imgs/sec) | Train Thru. (imgs/sec) | Memory (MB) |
|----------|----------|--------|--------------|------------------|------------------------|-------------|
| VMamba | 23 | 5.6 | 80.3 | 603 | 151 | 6639 |
| +MS2D | 24 | 4.8 | 80.9 | 866 | 205 | 4780 |
| +Others | 33 | 4.6 | 81.4 | 1092 | 331 | 4532 |

Our findings indicate that the proposed **Multi-Scale 2D (MS2D)** module contributes an improvement of **0.6%** in Top-1 accuracy for the tiny-size model. The **other components** (SE, ConvFFN, and N=1 in Table 4 of our paper) collectively contribute an additional **0.5%** increase in accuracy. This ablation reveals that MS2D contributes more to the accuracy gain than the other components on the tiny-size model. Furthermore, the MS2D module not only enhances performance but also brings further speed gains and reductions in memory usage. We hope that these revisions could satisfactorily address your comments and **thank you once again for your constructive feedback**. --- Rebuttal Comment 1.1: Comment: Thanks for your response. Your responses solve my concern to some extent. I maintain my score.
null
null
null
null
null
null
Neural Signed Distance Function Inference through Splatting 3D Gaussians Pulled on Zero-Level Set
Accept (poster)
Summary: This paper aims to infer neural signed distance functions (SDFs) for Gaussian Splatting. To this end, the paper introduces an MLP to represent the SDF. To learn the SDF from sparse and non-uniform Gaussian points, the paper introduces a differentiable pulling operation from Neural-Pull to align Gaussians on the zero-level set of the SDF and updates the SDF by pulling neighboring space to the pulled 3D Gaussians. The paper designs a tangent loss, a pull loss, and an orthogonal loss to encourage the above operations. Experiments on the DTU and Tanks and Temples datasets show that the proposed method achieves SOTA reconstruction performance. Strengths: * This work combines a differentiable pulling operation with Gaussian Splatting for surface reconstruction, which helps to learn a neural SDF for GS. This is interesting. * The approach designs the tangent loss to encourage the pulled Gaussians to be tangent planes on the zero-level set. * To tackle the sparse and non-uniform Gaussian distribution, the method proposes to pull randomly sampled points to disks. * The method achieves promising surface reconstructions on the Tanks and Temples dataset. Weaknesses: * The proposed method cannot reconstruct geometric details well, as shown in Figures 4, 13 and 14. * The ablation results in Table 4 are not consistent with the results in Table 2. * The ablation study is bad and confusing. The quantitative results are conducted on Tanks and Temples but the qualitative results are conducted on DTU. * The rendering evaluation setting is not consistent with prior works, such as 2DGS and GOF. Using lower-resolution training images can improve rendering results. However, the training setting for NeRF-based methods still uses the original setting. Technical Quality: 2 Clarity: 2 Questions for Authors: * Gaussian splatting will generate many points that are far away from surfaces; how does the pulling operation tackle these points?
* When encouraging the thin disk to be a tangent plane on the zero-level set, why not make $f(\mu'_j)\approx0$? * For Table 4, can you explain the difference from Table 2 in terms of the performance of the full model? Moreover, can you explain the detailed operations of w/o Pull GS? I wonder how the SDF is trained without Pull GS. * As the proposed method uses the densification from GOF, it is better to compare the rendering performance of the proposed method with GOF. * How many Gaussian points and random points are sampled to train the SDF during each optimization step? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors have discussed the limitations and their future works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
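The differentiable pulling operation from Neural-Pull that the review refers to can be sketched in a few lines. This is an illustrative example only: an analytic sphere SDF stands in for the learned MLP, the gradient is written in closed form rather than obtained by autodiff, and all names are assumptions, not the paper's code.

```python
import numpy as np

def sphere_sdf(q, r=1.0):
    # Analytic SDF of a sphere of radius r, standing in for the learned MLP f.
    return np.linalg.norm(q, axis=-1) - r

def sphere_sdf_grad(q):
    # Closed-form gradient of the sphere SDF; a learned f would use autodiff.
    return q / np.linalg.norm(q, axis=-1, keepdims=True)

def pull_to_zero_level_set(q):
    """Neural-Pull-style projection: move each query along the SDF gradient
    by its signed distance, q' = q - f(q) * grad f(q) / |grad f(q)|."""
    f = sphere_sdf(q)[..., None]
    g = sphere_sdf_grad(q)
    g = g / np.linalg.norm(g, axis=-1, keepdims=True)  # unit direction
    return q - f * g

queries = np.random.randn(100, 3) * 2.0  # random points around the sphere
pulled = pull_to_zero_level_set(queries)
print(np.allclose(np.linalg.norm(pulled, axis=-1), 1.0))  # True: all on the zero-level set
```

Because the projection is differentiable in both the query and (with an MLP) the network weights, the same step can be used to move Gaussian centers toward the zero-level set during optimization, which is the sense in which the paper "pulls" Gaussians.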
Rebuttal 1: Rebuttal: **1. Reconstructing details.** Since we use the marching cubes algorithm to extract the zero-level set as a mesh surface, our results are limited by the resolution used by marching cubes. In contrast, 2DGS uses multi-view depth images to reconstruct meshes with Poisson surface reconstruction, which achieves higher resolution and recovers more geometric details. We additionally use the same reconstruction procedure as 2DGS and show a visual comparison in **Fig. F** in our rebuttal PDF, where our reconstruction shows more geometric details than 2DGS. **2. Results between Tab. 2 and Tab. 4.** Thanks for pointing this out. Before the deadline, we updated our results frequently but forgot to update the latest results everywhere in the paper. We wrongly reported an older version of the ablation study, which was conducted on the first four scenes of the TNT dataset. We will update the ablation study with the final version, conducted on all of the TNT scenes, as reported in **Tab. B** in the rebuttal PDF. **3. About the ablation study.** The TNT scenes are large and complex, so we believe the quantitative results on the TNT dataset are more convincing than those on the DTU dataset. However, the qualitative differences are more distinct on small object scenes such as those in the DTU dataset, so we chose the DTU dataset for visualization. Due to the time limit, we selected the first four scenes of the DTU dataset to conduct an additional ablation study to demonstrate the effectiveness of our method, as shown in **Tab. C** in our rebuttal PDF. We will update the results with all scenes of DTU in our revision. **4. Rendering evaluation setting.** For a fair comparison, we directly borrowed the results of all baseline methods from Table 4 in the 2DGS paper. Due to memory limits, current methods, including NeRF-based and 3DGS-based ones, coherently train outdoor scenes with a down-sampling factor of 4 and indoor scenes with a factor of 2.
Our experiments keep the same settings as the baseline methods. We will clarify this in our revision. **5. About far-away points.** Indeed, most of the Gaussian points are distributed around the ground-truth surfaces and can be seen as a noisy point cloud. The Gaussians that are significantly far away from the surfaces usually have small opacity values. In our implementation, we apply a filtering strategy before the pulling operation to remove Gaussians with very small opacities, similar to what SuGaR does at epoch 9000. Through this strategy, almost all of the Gaussian points are distributed around the surfaces. If several points are still far away, their loss gradients will be averaged out by the other points, so they are effectively ignored by the learned SDF. **6. Why not make $f(\mu_j')=0$.** Our preliminary experiments show that directly constraining the Gaussians to lie on the zero-level set hurts the learning of the SDF, as shown in **Fig. G** in our rebuttal PDF. The reason is that the signed distance field has large uncertainty during the optimization, and pulled queries produced by inaccurate pulling will lead to an inaccurate zero-level set. Directly using a hard constraint makes the optimization inefficient. Our results show that implicitly aligning Gaussian disks to the zero-level set resolves this problem. **7. About w/o Pull GS.** We train the SDF by pulling randomly sampled queries to their nearest Gaussians on the zero-level set. The setting of "w/o Pull GS" means that we do not pull Gaussians onto the zero-level set and just pull randomly sampled queries onto their nearest Gaussians. The full model with pulling GS aligns the Gaussians with the zero-level set of the SDF, which means that the positions of the Gaussians are encouraged to move towards the zero-level set. **8. Comparison with GOF.** Our method shows rendering results comparable to GOF, as reported in **Tab. D** in our rebuttal PDF. **9. 
Number of Gaussian points.** We sample 100,000 points to train the SDF in each epoch. Because we need to calculate the nearest Gaussian for each sampled point, the number of sampled Gaussian points per epoch is also 100,000. --- Rebuttal 2: Comment: Thank you for the response. I have also read the other comments and responses. I have some follow-up questions. **1. Reconstruction details.** 2DGS uses TSDF fusion to reconstruct meshes. However, the proposed method uses Poisson surface reconstruction to reconstruct meshes from Gaussian points to show the details. This is unfair. Moreover, it is easy to extract higher-resolution meshes from an SDF, as shown in MonoSDF [1]. Can you show higher-resolution meshes on the DTU or TNT dataset? In fact, DTU is a small-scale object-centric dataset; however, the proposed method cannot reconstruct details well on it. [1] Yu, Zehao, et al. "MonoSDF: Exploring monocular geometric cues for neural implicit surface reconstruction." Advances in Neural Information Processing Systems 35 (2022): 25018-25032. **3. About the ablation study.** Since the TNT dataset is more challenging, it is better to show qualitative ablation results on this dataset. Since the proposed method cannot handle details well, I would like to know how the proposed strategies improve surface reconstruction on the TNT dataset. **6. Why not make $f(\mu'_j)\approx0$?** In fact, I wonder if the **$f(\mu'_j)\approx0$** operation can help reconstruction instead of $f(\mu'_j)=0$. Moreover, can you try this using the filtered Gaussian points as you mentioned in *5. About far away points*? **8. Comparison with GOF.** The results shown in Table D are not consistent with those in the main paper. Can you check them carefully? In addition, since the proposed method is efficient, why use only 4 scenes to conduct ablation studies on the DTU dataset? I am curious whether the results in Table C are convincing. --- Rebuttal Comment 2.1: Comment: Dear reviewer ZhFP, Thanks for your comments.
We are happy to answer your questions. Unfortunately, we are not allowed to either 1) show you any reconstruction visualization or visual comparison during this reviewer-author discussion period or 2) provide any URL to such visual results, according to the rebuttal policy. 1. Reconstructing details. We are sorry for missing the details of 2DGS; we confused 2DGS with GaussianSurfels, one of which uses TSDF fusion to extract the zero-level set while the other uses Poisson surface reconstruction. We just ran TSDF fusion on our results and obtained results comparable to those from Poisson surface reconstruction. This is because we use the same depth as input, and the resolutions used in Poisson reconstruction and TSDF fusion are similar. Indeed, we can reconstruct details at a higher resolution, but enhancing these details does not significantly improve the numerical results. As illustrated in Figure C in our rebuttal PDF, though 2DGS contains more geometric details, it achieves lower numerical results than our method. Our superior results come from the way we estimate a more accurate zero-level set. 3. How we improve accuracy on TNT. As we mentioned in the reply to "2. Visual results." for reviewer SW8k, we improve the accuracy on TNT by estimating a more accurate zero-level set. Although current methods can reconstruct geometric details, the reconstructions are usually fat, which means their surfaces drift away from the real zero-level set. Our method can pull 3D Gaussians onto the zero-level set, which enables us to impose constraints directly on the zero-level set. This really helps us estimate a more accurate zero-level set. Please refer to Figure C in the rebuttal PDF for comparisons of error maps between 2DGS and our method. Additionally, we will provide more quantitative ablation results on the TNT dataset in our revision. 6. Constraints on pulled Gaussians.
We did not find a proper way to directly implement the constraint of "approximately equal to" in the code. When we were trying to verify whether "equal to" helps, we tried different weights on this loss term. When the weight is small, we can achieve the same effect as "approximately equal to", while a large weight turns the loss into the hard constraint "equal to". But neither of them can work with the uncertainty in the signed distance field during optimization. If you have any better ideas on how to impose a constraint of "approximately zero," please let us know, as we are willing to make further attempts. Additionally, our results in Fig. G in our rebuttal PDF were produced after removing far-away Gaussians in the same way as explained in "5. About far away points". 8. Comparison with GOF. Results in Tab. D are averaged over indoor scenes only, not over both indoor and outdoor scenes. The full comparison is provided in the following.

| | **Indoor Scene** | | | **Outdoor Scene** | | |
|-------|-------------|-------------|-------------|-------------|-------------|-------------|
| | **PSNR$\uparrow$** | **SSIM$\uparrow$** | **LPIPS$\downarrow$** | **PSNR$\uparrow$** | **SSIM$\uparrow$** | **LPIPS$\downarrow$** |
| **GOF** | 30.79 | 0.924 | 0.184 | 24.82 | 0.750 | 0.202 |
| **Ours** | 30.78 | 0.925 | 0.182 | 23.76 | 0.703 | 0.278 |

As we mentioned in the rebuttal, we cannot report the results on all scenes in DTU due to the time limit. We are trying our best to report the results on all scenes before the discussion ends. We are limited to a handful of computational resources to finish so many results in the rebuttal PDF. --- Rebuttal 3: Comment: Dear reviewer ZhFP, Thanks for your questions. 1. Reconstruction Details If compared methods can recover comparable surface positions, then more geometric details definitely produce better numerical results.
However, if one method fails to recover accurate surface positions, then more geometric details cannot improve the numerical results much. If a reconstructed surface does not come from an accurate zero-level set, its surface position drifts far away from the GT surface, which hurts the numerical results. To show this, we draw a figure below to illustrate the comparison. This figure shows that our surface (▲) is nearer to the ground truth (*), although the marching cubes algorithm does not recover many geometric details. In contrast, the other surface (■) has much more geometric detail, which also looks quite similar to the ground truth, but fails to recover an accurate surface position, drifting far away from the ground truth, which hurts the numerical results a lot. This is also indicated quite clearly in Fig. C in our rebuttal PDF, where the error map across the whole surface obtained by 2DGS looks much darker (bigger errors) than ours.

```text
*: GT surface, ▲: Our surface, ■: 2DGS surface
---------------------------------------------------
Our surface vs. GT
 * *** *▲* *▲ *▲ ▲▲▲ ▲▲ ▲▲▲▲ ▲▲▲ ▲▲ ▲▲▲▲ ▲▲ * *▲ **
---------------------------------------------------
2DGS surface vs. GT
 * *** * * * ** ** * * * **** * * ** **
 ■ ■ ■ ■■■ ■ ■■ ■ ■■ ■ ■ ■ ■ ■ ■ ■ ■■ ■ ■■
---------------------------------------------------
```

Moreover, we carefully checked our code, especially the evaluation part, and did not find bugs. For fair evaluations on TNT, we use the official GitHub code released with the dataset of the original paper "Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction". 8. Comparison with GOF. We achieved rendering quality comparable to GOF in indoor scenes. We also noticed that our rendering quality is worse than GOF in outdoor scenes. The main reason is that outdoor scenes include lots of huge Gaussians, each of which covers a large area. This may be caused by the lack of view coverage.
These huge Gaussians usually struggle to meet all the constraints that we impose for SDF inference. For example, they cannot be aligned well with the gradient at the zero-level set, and their large size amplifies a small normal adjustment into a large impact on the rendering, since a single Gaussian cannot take all the area it covers into account. This also raises a future work direction: how to control the size of Gaussians during our inference so that we can maintain the good rendering quality of GS. --- Rebuttal Comment 3.1: Comment: Thanks for your further response. I still have the following questions. **Reconstruction details.** As you explained, the proposed method cannot recover potential details like 2DGS, although these details are not accurate. The current SOTA methods achieve ~0.6 performance on DTU (lower is better) and ~0.5 performance on the TNT dataset (higher is better), as they can recover accurate details. Since Gaussian points are usually dense in detailed regions, this should help recover potential details; I wonder what limits the proposed method from recovering them. **Comparison with GOF.** Thanks for your explanation. Can you also conduct a rendering ablation study on MipNeRF 360 for the proposed losses, like Table B? I wonder which component influences the rendering. --- Rebuttal 4: Comment: Dear reviewer ZhFP, Thanks for your questions. 1. Reconstruction details Since our method infers signed distances from 3D Gaussians, we believe the 3D Gaussians are the key to recovering geometric details. Although 3D Gaussians are dense (most of the time, though we still have some sparse cases), they are not as dense as point cloud scans for recovering fine geometry. This is because a 3D Gaussian is not a point but a sphere or a plane that covers some space.
This is fine for rendering, since the pixel color is usually produced by several overlapping Gaussians, but it is obviously hard to infer fine geometry in the area covered by a single Gaussian, especially when the Gaussian is very large, which makes Gaussians much different from points back-projected from the rendered depth. Just like the extreme case raised by reviewer SW8k, the large holes in the middle of Figure 15 in the outdoor scene, we observed huge Gaussians in these areas. Although these huge Gaussians may work well in rendering, they cannot recover any geometry covered by them. Thus, how to control the size of Gaussians for SDF inference could be an interesting future work direction. Another reason is, as we mentioned in the limitations, that the MLP we use to estimate an SDF prefers low-frequency information, which also affects the performance of recovering fine geometry. This can also be resolved in our future work by capturing high-frequency geometry either using high-frequency positional encoding or without using an MLP. We will add this discussion in our revision. 2. MipNeRF360 ablation We conduct additional ablation studies on the MipNeRF360 dataset, as shown in the following table. Due to the time limit, we select two scenes from indoor and outdoor scenes, respectively. The terms that are irrelevant to rendering are omitted. We are trying our best to report the results on all scenes along with the ablation results on DTU before the discussion ends. We would like to clarify in advance that ablations related to rendering metrics do not show distinctions as clear as those related to reconstruction metrics. This is because the rendering metrics are usually more stable than reconstruction metrics, which is evident in previous works. The term "w/o $L_{Tan}$" has the most significant impact on rendering quality because we align the Gaussian normals with the surface normals to obtain consistently oriented Gaussians, which enhances the rendering quality.
The term "w/o Pull GS" has the second most significant impact on rendering metrics, because we optimize the Gaussian positions by pulling them towards the zero-level set to achieve better rendering quality. The other terms, "w/o $L_{Thin}$" and "w/o $L_{Oth}$", are mainly designed for learning the SDF and yield only minor improvements on the rendering metrics.

| | **Indoor** | | | **Outdoor** | | |
|-------------|----------------------|-----------|-----------|----------------------|-----------|-----------|
| | **PSNR↑** | **SSIM↑** | **LPIPS↓**| **PSNR↑** | **SSIM↑** | **LPIPS↓**|
| **w/o Pull GS** | 29.74 | 0.920 | 0.207 | 23.97 | 0.730 | 0.255 |
| **w/o $L_{Thin}$** | 30.07 | 0.919 | 0.190 | 24.11 | **0.747** | **0.237** |
| **w/o $L_{Tan}$** | 29.35 | 0.913 | 0.213 | 22.75 | 0.702 | 0.261 |
| **w/o $L_{Oth}$** | 30.18 | 0.927 | 0.190 | 24.11 | **0.747** | 0.239 |
| **Full model** | **30.19** | **0.929** | **0.189** | **24.12** | **0.747** | **0.237** |

Please feel free to let us know if you have any more questions. Best, The authors --- Rebuttal Comment 4.1: Comment: Thank you very much for your response. However, the answer on **Reconstruction details** does not address my concern. In many detailed regions, the Gaussian points have small scales; however, the pulling operation cannot reflect these details. As shown in Figure F, when applying Poisson reconstruction to extract meshes from Gaussian points, the dotted details on the front of the Lego can be reconstructed well. This demonstrates that the Gaussian points can represent these details better. However, the proposed method cannot reconstruct these details. In addition, high-frequency positional encoding and Instant-NGP are commonly used with MLPs to reconstruct surfaces. I wonder why the proposed method does not leverage these techniques. Can you combine these techniques with the proposed method to test the Ignatius scene of the TNT dataset and show the evaluation results?
--- Reply to Comment 4.1.1: Comment: Thanks for your comments. We are pleased to discuss the details further. 1. Ours (Screened Poisson) in Fig. F in our rebuttal PDF was obtained using points back-projected from rendered depth rather than 3D Gaussians. Reviewer ZhFP misunderstood how our mesh reconstruction was produced by screened Poisson. As we mentioned in our response "1. Reconstruction details" posted at 14:53 on Aug. 7, we used points back-projected from rendered multi-view depth maps to reconstruct the mesh surface with Poisson reconstruction, rather than the raw Gaussian points. Points from depth maps have been used by SuGaR and GaussianSurfels to reconstruct surfaces with Poisson reconstruction, and we have not seen methods directly using Gaussian positions to reconstruct surfaces so far. The reasons are twofold. One is that Gaussians are not guaranteed to be dense everywhere in a scene, since they are only responsible for good rendering quality; some poor reconstructions caused by sparse Gaussians can be found in the large holes in the middle of Fig. 15 in the outdoor scene in our paper. The other key reason, we believe, is that Gaussian positions do not accurately represent the surface, even though previous methods and we have tried various constraints, such as our pulling, to locate Gaussian positions on the estimated surface. However, splatting makes a difference. When we splat these Gaussians on a 2D plane and render them, the Gaussians in the view frustum work together to approximate more accurate geometry and also surface details inferred from multi-view photometric consistency, just like what we show in Fig. F in our rebuttal PDF. Thus, Gaussian positions do not represent geometric details quite well, but depth maps rendered by splatting Gaussians do. We believe pulling works well in recovering geometric details, since it has been justified by high-fidelity reconstructions in Neural-Pull [1] and its following works, such as [2-4]. 2.
Why not Instant-NGP. As we explained in our previous post, using Instant-NGP-like hash encoding and high-frequency positional encoding will be our future work. We did not use them in this project, since we have seen pretty stunning reconstructions obtained by previous methods using pulling with an MLP. Although most of these works operate on (non-learnable) point clouds, we thought this mature framework could also recover geometric details from (learnable) 3D Gaussians with proper additional designs. Thus, this is a straightforward and intuitive attempt, but definitely a good start for us to keep working in this direction. We will start integrating hash encoding with the SDF representation in our current method immediately, and try our best to post results as you requested before the reviewer-author discussion period ends. Our implementation may take some time. If we are unable to complete it before the discussion period ends, we guarantee that we will further discuss this point in the revision. [1]. Ma B, Han Z, Liu Y S, et al. Neural-Pull: Learning Signed Distance Function from Point Clouds by Learning to Pull Space onto Surface. International Conference on Machine Learning. PMLR, 2021: 7246-7257. [2]. Ma B, Liu Y S, Zwicker M, et al. Surface Reconstruction from Point Clouds by Learning Predictive Context Priors. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6326-6337. [3]. Ouasfi A, Boukhayma A. Few-Shot Unsupervised Implicit Neural Shape Representation Learning with Spatial Adversaries. Forty-first International Conference on Machine Learning. [4]. Chou G, Chugunov I, Heide F. GenSDF: Two-Stage Learning of Generalizable Signed Distance Functions. Advances in Neural Information Processing Systems, 2022, 35: 24905-24919. --- Rebuttal 5: Comment: We tried two implementations to add high-frequency information. The first is appending positional encoding to the input XYZ coordinates, like the original NeRF.
The other is replacing the input 3D points with multi-resolution hash encoding, like Instant-NGP. All parameter settings are consistent with those of the original implementations. The experiment is conducted on the Ignatius scene of the TNT dataset as requested. Under the same experimental setting, as reported below, both positional encoding and the hash feature grid from Instant-NGP slightly improve the reconstruction, visually and numerically. We did not have time to try more parameter options, such as the number of frequencies in the positional encoding and the grid resolution of the hash feature grid, but we will report them in our revision.

- Ours: 0.71
- + positional encoding: 0.73
- + hash encoding: 0.75

--- Rebuttal Comment 5.1: Comment: Thank you very much for your feedback. I have increased my score!
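The first variant tried above (NeRF-style positional encoding appended to the raw XYZ input) can be sketched as follows; the number of frequency bands here is illustrative, not the setting used in the experiment:

```python
import numpy as np

def positional_encoding(xyz, num_freqs=4):
    """NeRF-style encoding: append sin/cos of the coordinates at
    exponentially growing frequencies to the raw input."""
    feats = [xyz]
    for k in range(num_freqs):
        for fn in (np.sin, np.cos):
            feats.append(fn((2.0 ** k) * np.pi * xyz))
    return np.concatenate(feats, axis=-1)

pts = np.random.rand(1024, 3)   # query points in [0, 1]^3
enc = positional_encoding(pts)  # shape (1024, 3 + 3 * 2 * 4) = (1024, 27)
```

The hash-encoding variant replaces this fixed mapping with learned multi-resolution grid features, as in Instant-NGP.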
Summary: This paper combines 3D Gaussian Splatting and NeuralPull to extract surfaces. In this way, it can use the standard Marching Cubes algorithm to extract the zero-level set as a mesh surface. Strengths: 1. With a neural SDF network, this paper can utilize the Marching Cubes algorithm to extract the surface instead of TSDF fusion or Poisson reconstruction. 2. The paper uses three datasets to validate the proposed method. Weaknesses: 1. Lack of explanation of how this method can work. As we all know, NeuralPull needs point clouds as ground truth for a neural SDF to learn the surface, while the pseudo ground truth provided by 3D Gaussians is noisy, so how can your method learn? In addition, based on your description, you jointly train the 3D Gaussians and the neural SDF and use Eq.2 to pull the 3D Gaussians onto the zero-level set. However, at the beginning of training, the neural SDF will not provide good guidance and may even provide a poor direction that pushes the 3D Gaussians away from the surface, leading to catastrophe. 2. Unsatisfactory visual results on TNT. Although your F1-score is better than 2DGS, in Fig.14 your results look over-smooth and lack details. This figure only shows your results; can you provide more comparisons with other methods, like 2DGS? Also, in Fig.15, why are there many holes in the mesh in the middle? 3. The results of 2DGS are worse than in the original paper. For example, on the TNT dataset, 2DGS is 0.3 in the original paper, while it is 0.26 in Fig.2 of your paper. Also, the qualitative results of 2DGS are better than yours. In detail, the mesh shown in Fig.10 of 2DGS's paper has more details than yours, and 2DGS's results are shown in Fig.4 of your paper. 4. Your quantitative results are not consistent between Tab.2 and Tab.4 on TNT. In detail, in Tab.2 the full-model F1-score is 0.43, while in Tab.4 it is 0.46, which raises doubts about the accuracy of your results. Can you explain it? 5.
Lack of correct citation. For example, your first constraint $L_{Thin}$ is actually $L_s$ in NeuSG, but you do not state its origin. Technical Quality: 1 Clarity: 3 Questions for Authors: Some questions have been listed in the weaknesses. I still have other questions here: 1. According to [1,2], SDFs with Neural Implicit Functions (NIFs) can only reconstruct closed surfaces. This limitation prevents NIFs from representing most real-world objects. However, on the TNT dataset, some scenes are not closed surfaces, like 'Barn' with the ground and 'Meetingroom' with windows. How do you solve this problem using SDFs with NIFs? 2. For Eq.4 and Eq.6, you use an L1 loss to regularize vectors; however, it is more common to use the cosine distance for vectors. Can you explain why you use the L1 loss instead of the cosine distance? 3. For rendering, you only use the L2 loss; however, D-SSIM is also used to optimize Gaussian splatting. Can you explain why you don't use it in your method? 4. The method is based on NeuralPull, which learns a neural SDF from ground-truth point clouds, while this method optimizes a neural SDF from 3D Gaussian splatting. Therefore, the original NeuralPull is the upper bound of this method. Can you provide an experiment that trains NeuralPull on the TNT dataset and compare with it to show the upper bound of the proposed method? 5. How do you deal with the surface of the 'ground'? For example, in Fig.4 (right, 'Truck') and Fig.14 (left, 'Barn'), the ground is reconstructed. However, in Fig.14 (middle, 'Caterpillar'), the ground is missing. Do you manually delete it or something else? In your evaluation, do you also delete it? As far as I know, on the TNT dataset, the extracted meshes are not modified manually for evaluation. [1] CAP-UDF: Learning Unsigned Distance Functions Progressively from Raw Point Clouds with Consistency-Aware Field Optimization. [2] Neural Unsigned Distance Fields for Implicit Function Learning.
Confidence: 5 Soundness: 1 Presentation: 3 Contribution: 2 Limitations: In Fig.12, the paper shows that this method cannot reconstruct the details of 'lego'. Can the authors show the comparison between this method and SuGaR as well as 2DGS? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Explanation of the method.** When training 3DGS, the 3D Gaussians progressively approach the zero-level set. At the same time, we pull Gaussians to align with the zero-level set of the SDF. Under the joint optimization of these two processes, along with our novel pulling operation and constraints, the Gaussian point cloud pulled onto the zero-level set gradually becomes clean, and can then serve as the pulling target for inferring signed distances in the field, so that the surface fitted by the SDF gradually recovers detailed and accurate geometry. We agree that the learned SDF field is coarse at the very beginning, which is not a good guidance for Gaussians. Therefore, we first train 3DGS for 7000 epochs, leading to relatively stable Gaussians, then start to pull queries onto Gaussians to estimate a rough SDF, and finally pull Gaussians onto the zero-level set, with queries then following to get pulled onto the pulled Gaussians. **2. Visual results.** The surfaces learned by 2DGS are usually fat and drift slightly away from the ground-truth surfaces, although their meshes seem to show more details. Our method is able to capture more accurate surfaces by using 3D Gaussians pulled onto the zero-level set while pulling query points onto the Gaussian disks, leading to a much more accurate zero-level set. We additionally report comparisons of error maps on meshes obtained by 2DGS and ours in **Fig. C** in the rebuttal PDF, which highlights our superiority in terms of the accuracy of the extracted surfaces. Additionally, the holes in the middle of Figure 15 are due to the ill-conditioned distribution of Gaussians, which makes it difficult to distinguish geometric structures. **3. 2DGS numerical results.** 2DGS wrongly reported the mean result on TNT in its original paper. The calculated average value is indeed 0.26. **4. Results between Tab.2 and Tab.4.** Thanks for pointing this out.
Before the deadline, we updated our results frequently, but we forgot to update the latest results everywhere in the paper. We wrongly reported an older version of the ablation study, which was conducted on the first four scenes of the TNT dataset. We will update the ablation study with the final version, which was conducted on all of the TNT scenes, as reported in **Tab. B** in the rebuttal PDF. **5. $L_{Thin}$.** We did mention NeuSG in Line 175. We will make it clearer in our revision that the thin loss is inspired by NeuSG. **6. Closed surfaces.** SDFs can also represent open structures; as a result, they reconstruct double-layer surfaces. You can refer to the second and third rows of "NeuS" in Figure 4 of the NeuralUDF paper (CVPR 2023), where the SDFs successfully distinguish the collar and cuffs, but the reconstructed cloth has tight double-layer surfaces. To avoid the influence of double-layer surfaces on evaluation accuracy, we practically delete the back faces according to the visibility of each face under each camera view. In this way, we can accurately reconstruct open structures with single-layer surfaces. Additionally, although a UDF can reconstruct open surfaces, extracting the zero-level set from a UDF as a mesh surface is still a challenge, resulting in artifacts and outliers on the reconstructed meshes. Our method can also learn a UDF, and we additionally conducted an experiment using the same setting to learn a UDF and compare the surfaces extracted from the SDF and the UDF, as shown in **Fig. D** in the rebuttal PDF, which is a corner of "Barn", showing the shortcomings of UDF learning. **7. Normal loss.** We also tried cosine similarity as the normal loss. We did not see any difference in performance between these two normal constraints. **8. D-SSIM loss.** We also tried the D-SSIM loss, but we did not see any difference in performance. For simplicity, we do not include it in the loss function. **9.
Performance of NeuralPull.** In this paper, we resolve the problem of learning an SDF from multi-view images through sparse and noisy Gaussians by innovatively pulling queries onto the Gaussian disks on the zero-level set of the SDF. Although we use pulling, our pulling is much different from NeuralPull. The difference lies in that NeuralPull pulls a query to a point, while we pull a query onto a Gaussian disk. 3D Gaussians are constrained to a disk-like shape that covers an area rather than a point during optimization, which inspired us to introduce this pulling variant. Therefore, the performance of our method is not limited by the bottleneck of NeuralPull. Figure 2 in our paper shows that merely pulling queries to the point cloud, like NeuralPull, does not work well with 3D Gaussians due to their sparsity. Additionally, we conducted an experiment using a sparse ground-truth point cloud of the "Ignatius" scene to train both NeuralPull and our pulling operation. The point clouds are initialized as spheres to simulate the Gaussians used by our method. The results in **Fig. E** in our rebuttal PDF highlight the superiority of our proposed method over NeuralPull. **10. Ground surface.** The ground truth of the TNT dataset provides bounding boxes for culling the reconstructed meshes. For example, the ground-truth points of "Truck" and "Barn" contain the ground while the points of "Caterpillar" do not. We first crop the meshes using the provided bounding box and then visualize them. **11. Reconstructing details.** Since we use the Marching Cubes algorithm to extract the zero-level set as a mesh surface, our results are limited by the resolution used by Marching Cubes. 2DGS instead uses multi-view depth images to reconstruct meshes with Poisson surface reconstruction, which achieves higher resolution and recovers more geometric details. We additionally use the same reconstruction procedure as 2DGS, and show a visual comparison in **Fig.
F** in our rebuttal PDF, where we can see that our reconstruction shows more geometric details than 2DGS. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I still have some questions: 1. Why do you say '2DGS wrongly reported the mean result on TNT in its original paper'? Can you explain it or give some proof? 2. About 'Performance of NeuralPull': Fig. E in your rebuttal PDF only shows one scene. Can you give a table across the six scenes of the TNT dataset to show your method is better? Also, the NeuralPull used in your experiment is trained on sparse point clouds from COLMAP, right? Since TNT has ground-truth point clouds, can you use the ground-truth point clouds to train NeuralPull? I just want to see the upper bound of your method, which is NeuralPull trained with ground truth. It's okay if NeuralPull trained with ground truth is better than your method. 3. About 'the holes in the middle of Figure 15 are due to the ill-conditioned distribution of Gaussians': why is there not a hole in the right figure of Fig. 15 of your paper? What's the difference between the middle and the right? 4. About 'UDF': Fig. D in your rebuttal PDF only shows one scene. Can you give a table across the six scenes of the TNT dataset to show your method is better? --- Reply to Comment 1.1.1: Comment: Dear reviewer SW8k, Thanks for your comments. We are happy to answer your questions. 1. About 2DGS numerical results. According to Table 2 in the 2DGS paper, if we average the per-scene results provided in the table, we get (0.36+0.23+0.13+0.44+0.16+0.26)/6=0.26, but 2DGS reported the mean value as 0.30 instead of 0.26. Thus we say '2DGS wrongly reported the mean result on TNT in its original paper'. 2. About "Performance of NeuralPull". We trained both NeuralPull and our method on sparse *ground truth* point clouds, instead of COLMAP point clouds, in our rebuttal (see our response "9. Performance of NeuralPull").
Following your advice, we additionally conducted experiments on all six scenes of the TNT dataset, following the same setting used in our response "9. Performance of NeuralPull." The numerical results in terms of F-score are reported in the following table.

| Methods | Barn | Caterpillar | Courthouse | Ignatius | Meetingroom | Truck | Mean |
|------------|------|-------------|------------|----------|-------------|-------|------|
| NeuralPull | 0.70 | 0.40 | 0.51 | 0.51 | 0.56 | 0.73 | 0.57 |
| Ours | **0.78** | **0.58** | **0.66** | **0.66** | **0.65** | **0.76** | **0.68** |

The comparison indicates that our method is a variant of NeuralPull, but not exactly the same as NeuralPull. Thus, we can produce much better accuracy on points than the upper bound of NeuralPull, owing to the ability to pull queries onto planes rather than points. 3. About the holes. Due to overly complex geometric structures and a lack of view coverage, there is a significant under-fitting issue in the flowerbed area of the middle scene. This results in a set of extremely sparse, huge, and unevenly distributed Gaussians, which makes the Gaussians thick, ellipsoid-like shapes rather than relatively thin planes, leading to a poor sense of the surface. This issue does not occur in the other scenes. 4. About learning UDF. Following your advice, we also additionally conducted experiments on learning UDFs on all six scenes of the TNT dataset, as reported in the following table, in terms of F-score. The superior results of learning an SDF over learning a UDF demonstrate the drawbacks of UDFs.
| Methods | Barn | Caterpillar | Courthouse | Ignatius | Meetingroom | Truck | Mean |
|------------|------|-------------|------------|----------|-------------|-------|------|
| NeuralPull | 0.55 | 0.34 | 0.15 | 0.60 | 0.17 | 0.46 | 0.38 |
| Ours | **0.60** | **0.37** | **0.16** | **0.71** | **0.19** | **0.52** | **0.43** |

The comparison in the above table indicates that our method can learn not only an SDF but also a UDF, and also shows advantages over NeuralPull in the learning of UDFs.
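For context, the F-scores reported in these tables follow the usual point-cloud evaluation: precision and recall at a distance threshold, combined by a harmonic mean (as in the TNT benchmark). A brute-force sketch with an illustrative threshold; the official protocol uses efficient nearest-neighbor search and benchmark-specific thresholds:

```python
import numpy as np

def f_score(pred, gt, threshold=0.05):
    """F-score between two point sets: harmonic mean of precision
    (fraction of pred points within threshold of gt) and recall
    (fraction of gt points within threshold of pred)."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = (d.min(axis=1) < threshold).mean()
    recall = (d.min(axis=0) < threshold).mean()
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

gt = np.random.rand(200, 3)
perfect = f_score(gt, gt)  # identical sets give precision = recall = 1
```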
Summary: This paper focuses on the challenge of inferring a signed distance function (SDF) for multi-view surface reconstruction from 3D Gaussian splatting (3DGS), which is hindered by the discreteness, sparseness, and off-surface drift of the 3D Gaussian. To overcome these challenges, the authors propose a method that seamlessly integrates 3DGS with the learning of neural SDFs. This approach constrains SDF inference with multi-view consistency by dynamically aligning 3D Gaussians on the zero-level set of the neural SDF and then rendering the aligned 3D Gaussians through differentiable rasterization. Through the utilization of both differentiable pulling and splatting, the approach jointly optimizes 3D Gaussians and neural SDFs with both RGB and geometry constraints, resulting in the generation of more accurate, smooth, and complete surfaces. Extensive experimental comparisons on various datasets demonstrate the superiority of the proposed method. Strengths: - The approach of optimizing the neural Signed Distance Function (SDF) using only the regularization from 3DGS, without any monocular geometry supervision, is both novel and compelling. It effectively circumvents the time-consuming point sampling process for volume rendering, commonly utilized in prior studies for SDF learning. The method's natural balance between 3DGS and the neural SDF is simple and intuitive, showcasing the potential for seamless integration of explicit 3DGS and implicit neural fields. - The proposed method has demonstrated state-of-the-art reconstruction results on the DTU and TNT datasets, with each component of the approach undergoing thorough examination in ablation studies. - This paper is well-structured and easy to follow. Weaknesses: - It seems that the authors did not incorporate the Eikonal loss, which is commonly used to regularize SDF values in space during optimization. Could the authors provide specific reasons for this omission? 
Additionally, it is not entirely clear to me how the pull loss ensures that the optimized field satisfies the properties of an SDF. - In Eq. 8, the weight of the splatting loss appears to be substantially larger than the others. Could the authors explain how these weights are determined? - I noticed that the densification operation and the pull/constraint losses are employed at different stages. How does the simultaneous use of densification and the pull loss affect the final result? Stopping the densification operation in the second stage seems unreasonable because it may limit the performance of 3DGS. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses for details. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have thoroughly discussed the limitations of the proposed method. I believe that incorporating monocular priors during optimization would be beneficial to the learning of the SDF. Additionally, I think that further exploration of the relationship between densification and SDF values would be an interesting and valuable research pursuit. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Eikonal loss.** We believe pulling-based methods may not need an Eikonal loss to guarantee that the learned distance field is an SDF. This is because we use a normalized gradient and a predicted signed distance when pulling a query. It has been shown by NeuralPull[1] that adding the Eikonal loss significantly degrades the learning of the SDF, as reported under ``Gradient constraint'' in Table 8 of the original NeuralPull paper. The reason is that NeuralPull depends on both predicted SDF values and gradient directions to optimize the SDF field; adding an additional constraint on the gradient length makes the optimization even more complex. We additionally report a result with an Eikonal loss in **Fig. B** in our rebuttal PDF. The comparison indicates that the results degrade significantly with an Eikonal loss. **2. SDF property.** It has been proved that an MLP trained with a pulling loss can converge to a signed distance function. The reason is that the sign of the distances needs to flip when crossing the surface points so that the pulling loss can be minimized on both sides of the surface points. Please see Theorem 1 in the original NeuralPull paper for more details. **3. Weight of the splatting loss.** We set the weight of the splatting loss to 1.0, following all 3DGS baseline methods, and we set the weights of the thin loss, tangent loss, pull loss, and orthogonal loss to 100, 0.1, 1, and 0.1, respectively. The weight of the thin loss is larger than the others because the scaling factor is usually small and we want the smallest scaling factor to be as small as possible, which is the same as NeuSG[2]. A smaller weight on the thin loss will leave Gaussians not flat enough, leading to an inaccurate alignment of Gaussians to the zero-level set. **4. Densification operation and pull loss.** There are three reasons why we separate the densification operation and the pulling operation.
**(1)** The 3D Gaussians are sparse and unstable at the early stage of training. Current works usually start to add additional losses after a certain number of training epochs of 3DGS, such as the distortion loss (Eq.13) in 2DGS[3] and the regularization loss (Eq.8) in SuGaR[4]. **(2)** The densification operation causes the number of Gaussians to grow rapidly, leading to memory overflow. Therefore, current works generally stop densification after a certain number of training epochs. For example, the original 3DGS stops densification at epoch 15000, while SuGaR stops it at epoch 7000. Here we follow the setting of SuGaR. **(3)** The pulling operation needs to sample query points for each Gaussian, which is time-consuming. Therefore, for efficiency we only sample query points for all Gaussians once, before adding the pulling loss. If the number of Gaussians changed during densification, we would need to re-sample query points, bringing additional time cost. We conducted an ablation study on different schedules for stopping densification and starting pulling, on the DTU scan37 scene, as reported in **Tab. A** in our rebuttal PDF. The first variant starts the pulling loss from the beginning, which significantly degrades performance and slows convergence. The other keeps the densification operation until the end of training, which takes much longer and also degrades performance. [1]. Ma B, Han Z, Liu Y S, et al. Neural-Pull: Learning Signed Distance Function from Point Clouds by Learning to Pull Space onto Surface. International Conference on Machine Learning. PMLR, 2021: 7246-7257. [2]. Chen H, Li C, Lee G H. NeuSG: Neural Implicit Surface Reconstruction with 3D Gaussian Splatting Guidance. arXiv preprint arXiv:2312.00846, 2023. [3]. Huang B, Yu Z, Chen A, et al. 2D Gaussian Splatting for Geometrically Accurate Radiance Fields. ACM SIGGRAPH 2024 Conference Papers. 2024: 1-11. [4]. Guédon A, Lepetit V.
SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 5354-5363. --- Rebuttal Comment 1.1: Comment: Thanks for the efforts of the authors. Most of my concerns have been addressed. I am willing to improve my rating. --- Reply to Comment 1.1.1: Title: Thanks for the accept recommendation Comment: Dear reviewer 5rKf, Thanks for your time and expertise. We are glad to know that our rebuttal addressed your concerns and you are willing to raise your rating. We really appreciate the accept recommendation. We will also follow your advice to revise our paper accordingly. Best, The authors
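For readers unfamiliar with it, the Eikonal term debated in point 1 of the rebuttal above penalizes deviation of the SDF gradient norm from one, $E[(\|\nabla f\|-1)^2]$. A finite-difference sketch on an analytic sphere SDF, for which the term is already near zero (a true SDF has unit gradient norm almost everywhere); this is an illustration, not the rebuttal's experiment:

```python
import numpy as np

def sphere_sdf(pts):
    """Analytic SDF of the unit sphere for a batch of points."""
    return np.linalg.norm(pts, axis=-1) - 1.0

def eikonal_loss(sdf, pts, eps=1e-4):
    """E[(|grad f| - 1)^2], gradients via central finite differences."""
    grads = np.stack([
        (sdf(pts + eps * e) - sdf(pts - eps * e)) / (2.0 * eps)
        for e in np.eye(3)
    ], axis=-1)
    return np.mean((np.linalg.norm(grads, axis=-1) - 1.0) ** 2)

rng = np.random.default_rng(0)
pts = rng.standard_normal((256, 3)) * 2.0
loss = eikonal_loss(sphere_sdf, pts)  # near zero for a true SDF
```

In practice the term is evaluated on a learned network via automatic differentiation; the rebuttal's point is that adding it on top of the pulling loss over-constrains the optimization.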
Summary: The paper proposes an extension to 3D Gaussian Splatting by making it consistent with a neural SDF that is learnt along with the 3D Gaussians. To make them consistent, the Gaussians are projected to the zero-level set, and the neural SDF is optimized to represent the SDF of the surface implied by the Gaussians. As the latter requires the Gaussians to be thin (essentially discs), they encourage this with a loss, and further encourage other properties that hold when the two representations are consistent. Strengths: - Simple but effective method with well thought out losses - Good visualisations to explain the benefits of components (Figs 2,6-9) Weaknesses: - The proposed method uses a loss to make the Gaussians flat, so why not directly use 2D Gaussians/surfels? Those papers mention that direct parameterisation is much better than using a loss to flatten Gaussians; why does your method not suffer from issues from not having exactly flat Gaussians? It would be interesting to have a comparison of your method with 2D Gaussians. However, I would expect at least some discussion of this (whether it is actually important to have exactly flat Gaussians or not). - While I like this way of constraining the Gaussians better than previous methods, the novelty is a bit limited since there are similar methods (improving GS using SDFs and having Gaussians be flat) Technical Quality: 3 Clarity: 4 Questions for Authors: - For $L_{Tangent}$, how do you determine the orientation of the normal? - Ablation study results don't line up with each other and with the TNT results (0.41 vs 0.42 vs 0.43 vs 0.46)? A little confused/concerned here. - End of page 5, should $n_j$ be $\bar{n}_j$? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Addressed in the appendix/supplemental Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
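Both this review and the rebuttal below refer to the flattening constraint ($L_{Thin}$, which drives each Gaussian's smallest scaling factor toward zero so it collapses into a disc). A minimal sketch of such a term; the scale values are illustrative, not from the paper:

```python
import numpy as np

def thin_loss(scales):
    """Penalize the smallest per-Gaussian scaling factor so each
    3D Gaussian collapses toward a flat, disc-like shape."""
    return np.abs(scales).min(axis=-1).mean()

scales = np.array([[0.30, 0.25, 0.02],   # already nearly flat
                   [0.20, 0.18, 0.15]])  # still ellipsoidal
loss = thin_loss(scales)  # (0.02 + 0.15) / 2 = 0.085
```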
Rebuttal 1: Rebuttal: **1. Comparison between the flattening loss and surfels.** Our method requires calculating the inverse of a three-dimensional covariance matrix (Eq.(5) in our paper) to determine the distribution probability of a query point within its nearest Gaussian ellipsoid. This allows us to maximize that probability when pulling query points towards their nearest Gaussians. The surfel setting, however, provides only two scaling factors, which does not meet our requirement. In general, pushing Gaussians to be flat or setting Gaussians as surfels serves a similar purpose in mitigating the bias of the rendered depth and determining Gaussian normals, so the flattening loss is a reasonable choice. As evidence, we replaced our flattening loss (Eq.(3) in our paper) with the surfel setting from GaussianSurfels[1], as shown in **Fig. A** in the rebuttal PDF. The optimization fails at the start with the surfel setting, which demonstrates that it is not a good choice in the differentiable pulling operation. **2. About $L_{Tangent}$.** We take the direction of the smallest scaling factor as the Gaussian normal, following previous works[2,3]. The loss pushes the normal to align with the gradient on the zero-level set. **3. About $n_j$.** It should be $\bar{n}_j$. We will correct it in our revision. [1]. Dai P, Xu J, Xie W, et al. High-quality Surface Reconstruction using Gaussian Surfels. ACM SIGGRAPH 2024 Conference Papers. 2024: 1-11. [2]. Guédon A, Lepetit V. SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 5354-5363. [3]. Chen H, Li C, Lee G H. NeuSG: Neural Implicit Surface Reconstruction with 3D Gaussian Splatting Guidance. arXiv preprint arXiv:2312.00846, 2023. --- Rebuttal Comment 1.1: Comment: My weaknesses/questions have been sufficiently addressed.
However, looking at the other reviewers' comments and the authors' rebuttals to them, I have the following concerns: - I was not aware of NeuSG, and after looking into it, their method seems highly related. NeuSG is essentially GS losses + IGR losses + thin loss + consistency losses, while the proposed method is GS losses + NP losses + thin loss + consistency losses. However, NeuSG is barely mentioned: the authors respond that it is mentioned in line 175, but that is a citation about the general trend of flattening Gaussians. There should be at least a sentence dedicated to NeuSG given it is very similar. Either the work is considered concurrent work and should be stated as such, or it is considered a work that inspired your work (including having first introduced the thin loss which you borrow from them) and this should be made clear in the related work. Furthermore, in the latter case NeuSG should also be compared to in the results!! Their main results are on TNT and it looks like your method performs better, but can you check they use the same settings and then put it into your paper? - As for the other comments, while some seem quite concerning, the authors have addressed most of them sufficiently. The things that are left are: - Normal loss: since cosine similarity is what NeuSG does and you change it, there should be an ablation on it (in the final version of the paper; I am not suggesting you get results in time to show us) - '2DGS wrongly reported the mean result on TNT in its original paper': I also want an explanation of this statement; did you consult the authors and figure this out, or are you saying there is a numerical error in the mean operation (doesn't seem to be?) --- Reply to Comment 1.1.1: Comment: Dear reviewer CCgx, Thanks for the comments. It is a pleasure to further clarify some points. **1.
We are much different from NeuSG** *i) The task.* Our method directly learns to capture the geometry inherently represented by 3D Gaussians, while NeuSG aims at utilizing Gaussians as guidance to improve neural implicit reconstruction. *ii) The framework.* Our method is based on 3DGS (learning radiance fields from multi-view images) and NeuralPull (learning an SDF from points), and we train SDFs by unsupervisedly pulling query points onto the Gaussian disks. In contrast, NeuSG is based on 3DGS (learning radiance fields from multi-view images) and NeuS (learning a NeRF and an SDF from multi-view images), so it trains SDFs through classical neural rendering techniques, including ray marching, point sampling, and volume rendering. This makes the two frameworks very different from each other. *iii) Losses.* We would like to clarify that we introduce a variant of NeuralPull, which pulls queries onto Gaussian planes rather than onto points as in the original NeuralPull. To achieve this, our pulling loss is different from the NeuralPull loss. In addition, the idea of Gaussian normal constraints dates back to the era of Surface Splatting[1], and the idea of constraining gradients of the SDF is also not new since NeuralPull[2] came out; methods like DiGS[3] and Towards Normal Consistency[4] also followed this idea. Thus, manipulating gradients and Gaussian normals has become a commonly used strategy, which does not originate from NeuSG. Our reference to NeuSG in Line 175 is just for the flattening of Gaussians. Since we focus on Gaussians on the zero-level set, the implementations of our constraints on Gaussian normals and SDF gradients are much different from NeuSG. For example, we need to pull Gaussians onto the zero-level set, calculate SDF gradients at the pulled Gaussians on the zero-level set, and align the Gaussian normals to these gradients (our tangent loss in Eq.4 of the original paper).
Moreover, we also pursue alignment across different level sets by aligning gradients at queries (which may lie on any level set in the field) with the normals of Gaussians on the zero-level set (our orthogonal loss in Eq. 6 of the original paper), which makes the gradients on other level sets orthogonal to the corresponding Gaussian disk on the zero-level set. We will make sure to include these discussions and NeuSG in our revision. **2. We are also more efficient than NeuSG** Since NeuSG is based on NeuS, which needs to sample points along a ray for integration, it inherits the drawback of NeuS, i.e., poor efficiency. On a scene, we need only about half an hour, while NeuSG needs about 16 hours. Obviously, our solution leverages the advantages of 3DGS better than NeuSG does. **3. Normal loss** Using either cosine similarity or L1 to constrain normals is also widely used in point cloud normal estimation. As we stated in our response "7. Normal Loss" to reviewer SW8k, we did not see any difference in performance between these two normal constraints. We will include this ablation study in our revision. **4. 2DGS results** There is a numerical error in the mean operation. According to Table 2 in the 2DGS paper, if we average the per-scene results provided in the table, we get (0.36+0.23+0.13+0.44+0.16+0.26)/6 = 0.26, but 2DGS reported the mean value as 0.30 instead of 0.26. [1] Zwicker M, Pfister H, Van Baar J, et al. Surface Splatting. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. 2001: 371-378. [2] Ma B, Han Z, Liu Y S, et al. Neural-Pull: Learning Signed Distance Function from Point Clouds by Learning to Pull Space onto Surface. International Conference on Machine Learning. PMLR, 2021: 7246-7257. [3] Ben-Shabat Y, Koneputugodage C H, Gould S. DiGS: Divergence Guided Shape Implicit Neural Representation for Unoriented Point Clouds. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 19323-19332. [4] Ma B, Zhou J, Liu Y S, et al. Towards Better Gradient Consistency for Neural Signed Distance Functions via Level Set Alignment. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 17724-17734.
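The averaging claim in point 4 is easy to verify directly from the per-scene values quoted above (a quick check, not part of the original rebuttal):

```python
# Per-scene Chamfer distances as quoted from Table 2 of the 2DGS paper
# in the rebuttal above.
per_scene_cd = [0.36, 0.23, 0.13, 0.44, 0.16, 0.26]

# The arithmetic mean of the listed values.
mean_cd = sum(per_scene_cd) / len(per_scene_cd)
print(round(mean_cd, 2))  # 0.26, not the 0.30 reported as the table's mean
```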
Rebuttal 1: Rebuttal: We thank the reviewers for their comments and for highlighting our ***simple and interesting idea*** (Reviewers CCgx, 5rKf, ZhFP), ***good performance and visualization*** (Reviewers CCgx, ZhFP), and ***well-written manuscript*** (Reviewer 5rKf). We have provided detailed responses to each reviewer's comments. All the figures and tables referenced are included in the rebuttal PDF. We gratefully thank the reviewers for taking the time to review our paper and for their valuable suggestions, and we look forward to the feedback. Pdf: /pdf/5a686bb7d1d6ebd09c331a26b87dfcd9c43ceea2.pdf
NeurIPS_2024_submissions_huggingface
2024
GSDF: 3DGS Meets SDF for Improved Neural Rendering and Reconstruction
Accept (poster)
Summary: The paper introduces GSDF, a new approach for both novel view synthesis and surface reconstruction that relies on a dual-branch architecture combining 3D Gaussian Splatting (3DGS) with neural Signed Distance Functions (SDF). **Details:** The paper aims to improve the quality of both the rendering compared to vanilla 3DGS and the reconstructed surface compared to neural SDF approaches like NeuS. To this end, it uses two branches: 1. **3DGS Branch**: Inspired by Scaffold-GS, this branch outputs RGB images with splatting-based, rasterized rendering. 2. **Neural SDF Branch**: Inspired by Instant-NSR, a custom implementation of NeuS using hash grids (similar to Instant-NGP) to accelerate optimization. This branch outputs RGB images using volumetric rendering. The method establishes mutual guidance and joint supervision between the two representations, relying on three main ideas: 1. **Efficient Sampling**: Using depth maps rasterized with Gaussian splats to guide the volumetric rendering in the SDF branch, allowing for much more efficient sampling along rays. In practice, the sampling range for a ray $r$ going through a pixel $p$ is computed using the SDF value of the point $x$ resulting from the backprojection of the 3DGS depth map at pixel $p$. 2. **Guided Densification and Pruning**: Using the SDF branch to guide the densification and pruning processes in the 3DGS representation. The underlying idea is straightforward: Densification should be stronger near the surfaces, and Gaussians far from the surfaces should be pruned. In practice, the paper proposes to compute a criterion depending on both the gradient of Gaussian primitives (similarly to Scaffold-GS) and the SDF value of the center of a Gaussian. The higher the gradient and the closer the Gaussian is to the surface, the more likely it is to be densified. In the meantime, Gaussians located far from the surface are pruned. 3. 
**Geometric Alignment**: Aligning geometric properties (depth and normal maps) of both branches to ensure the consistency of the two branches as well as better geometric accuracy of the 3DGS representation, as Gaussians generally tend to cheat on the geometry to allow for better rendering. The proposed approach limits floaters and ensures better alignment of 3D Gaussian primitives with the underlying surface. The overall geometric quality and details of the 3DGS representation are improved, reaching higher performance than concurrent state-of-the-art methods. Moreover, the paper explains that the 3DGS branch allows for accelerated convergence (and better performance) of the SDF branch thanks to much more efficient sampling along rays. Strengths: 1. The paper is clear and well-written. The details are easy to follow. 2. The proposed approach shows excellent performance on quantitative benchmarks, effectively outperforming concurrent state-of-the-art works. 3. The qualitative results are impressive and clearly demonstrate the improvement in geometric accuracy and surface details brought by the approach. 4. As the paper explains, *“[The] framework is versatile and can be easily adapted to incorporate future advanced methods for each branch”* (lines 121-122). In other words, from a high-level perspective, I believe the paper offers a simple plug-in strategy that may allow jointly optimizing any Gaussian-based approach with any Neural SDF approach. I would be very curious to know if the authors tried their strategy with a regular, vanilla 3DGS representation rather than Scaffold-GS; I have no doubt that the results would be worse, but for generalization purposes, it would be interesting to know if the Scaffold-GS structural properties are essential to the approach, or if it can easily extend to 3DGS-based representations that do not rely on anchor points and MLP-decoders for the parameters. Weaknesses: 1. 
**Concerning Speedup Contribution**: It is undeniable that the proposed approach allows for better reconstruction quality. However, it is unclear how the dual-branch system helps to make the SDF optimization faster, as claimed in the paper (see subsection 4.2.2, for example, line 246: “*our method optimized the SDF field significantly faster than previous methods*”). Although the paper claims a speedup over NeuS (which requires 8 hours for DTU scenes), the proposed approach uses a custom implementation of NeuS, called Instant-NSR, that is supposed to be able to train NeuS models in 10 minutes on some scenes. Is the speedup over NeuS really due to the dual-branch system (and the benefits each approach brings to the other), or is it due to the Hash-grid architecture? In the end, it appears that the proposed approach actually slows down the optimization compared to a single branch relying on Instant-NSR (or a single branch relying on Gaussians, as the optimization time is longer than the concurrent 2DGS approach). Theoretically, the speedup claimed in the paper would make sense; however, in practice, it is not clear how using two branches allows for establishing a virtuous circle and speeding up optimization. On the contrary, it might slow down optimization compared to using a method relying on a single branch. 2. **Surface extraction method**: I did not see any discussion about the mesh extraction method for evaluating the surface reconstruction. Since the approach relies on two different representations that are supposed to become consistent with each other, I assume it would be possible to leverage one representation or the other (or both) to extract a surface mesh. I suppose the authors use a Marching Cubes algorithm on the neural SDF, but I might be wrong. 
If Gaussians are indeed consistent with the SDF and well-aligned with the surface, would it be possible to extract a mesh using Poisson reconstruction, similar to SuGaR, or TSDF, similar to 2DGS (although it would not scale well to background regions)? Would it be worse than applying a marching algorithm on the SDF? An ablation regarding the surface extraction method would have been very interesting, especially in a setup where two representations are available at the same time. 3. **Memory Footprint**: 3DGS has a pretty high memory footprint in itself. Scaffold-GS might help reduce this footprint, but what about the proposed dual-branch approach? The approach sounds very heavy memory-wise, as it simultaneously optimizes two NVS models. The paper explains that a single NVIDIA A100 GPU with 80GB of VRAM (which is a lot) was used, but does not give details about the memory consumption of the approach. What is the minimum memory requirement to optimize the proposed model? Memory (and time) consumption are very important criteria for graphics-based applications. 4. **Novelty of the Approach**: The approach heavily relies on two existing works (Scaffold-GS and Instant-NSR), and might look like a combination of these works with limited novelty. However, I think the combination of both works is not trivial, and the paper proposes a satisfying strategy to make each model benefit from the other. The proposed work is also quite reminiscent of NeuSG, a CVPR 2024 paper that proposes to jointly optimize a set of 3D Gaussians and a neural SDF. Even though the novelty of the proposed work might not be so great, this is no point for rejection in my opinion, as the work is technically solid and the results are of high quality. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Why rely on Scaffold-GS? 
I understand that the underlying voxel+anchor-based structure of Scaffold-GS helps enforce a better structure of 3D Gaussians as well as quickly identifying which Gaussians could be pruned or densified. However, it would be really interesting to have an ablation using vanilla 3DGS rather than Scaffold-GS for the 3DGS branch, to know if the underlying regularization coming from the Scaffold-GS structure is essential to the overall approach, or if it can generalize to unstructured 3D Gaussians. 2. What resources (time and memory) are needed for optimizing the model? The paper explains that the approach requires 2 hours on DTU compared to 8 hours with NeuS. However, the paper uses Instant-NSR’s implementation, which makes NeuS converge much faster (10 minutes for scenes from the Blender dataset, for example). Is the speedup really due to the proposed method, or is it due to the hash-grid implementation from Instant-NSR? 3. What is the optimization time (and memory requirement) for unbounded scenes, like Mip-NeRF 360 or Tanks&Temples? 4. What surface extraction method is used? Is it possible to use the Gaussians for extracting the surface? Or both branches? 5. (*Bonus question!*) Looking at Figure 4, it is undeniable that the proposed method achieves sharper results than the concurrent 2DGS approach. However, in the particular case of the Truck scene from Tanks&Temples (2nd row of Figure 4), it seems that 2DGS better reconstructs fine holes in the topology, looking at the back and the top of the truck. I’m curious to know if the authors have an idea of what component in their approach causes such a limitation in this particular example. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors addressed most of the limitations of their work. However, they do not discuss the memory requirements of the approach (relying on two different models at the same time), which might be much higher than concurrent approaches such as 2DGS, for example. 
Moreover, the limitations are only discussed in the supplementary material, which is a problem in my opinion. I encourage the authors to try to move the limitations to the main paper in the final version. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
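The efficient-sampling mechanism summarized in point 1 of this review (restricting volumetric-rendering samples to a band around the back-projected GS depth, sized by the SDF value there) can be sketched roughly as follows. All names, constants, and the exact interval rule here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def depth_guided_interval(ray_o, ray_d, gs_depth, sdf_fn, k=2.0, min_half=0.05):
    """Illustrative sketch of depth-guided sampling: back-project the
    GS-rasterized depth to a 3D point, query the SDF there, and sample only
    inside a band around that point.  A larger |SDF| (depth and field still
    disagree) widens the band; `min_half` keeps the interval non-degenerate."""
    x = ray_o + gs_depth * ray_d          # back-projected depth point
    half = max(k * abs(sdf_fn(x)), min_half)
    near = max(gs_depth - half, 0.0)
    far = gs_depth + half
    return near, far

# Toy example: unit-sphere SDF, camera 3 units away looking at the center,
# with a slightly imprecise rasterized depth of 2.1 (true depth is 2.0).
sphere_sdf = lambda p: np.linalg.norm(p) - 1.0
o = np.array([0.0, 0.0, -3.0])
d = np.array([0.0, 0.0, 1.0])
near, far = depth_guided_interval(o, d, gs_depth=2.1, sdf_fn=sphere_sdf)
print(round(near, 3), round(far, 3))  # 1.9 2.3
```

The full ray would otherwise need samples over its entire extent; here the band of width 0.4 around the coarse depth still contains the true surface at depth 2.0.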
Rebuttal 1: Rebuttal: Thanks for your efforts and valuable comments. Below we address the concerns for each question. Common concerns are addressed in detail in the global rebuttal. Additional figures and tables are provided in the attached PDF, indexed as *Figure A-D* and *Table A-B*. **Q1. Why use Scaffold-GS; generalization of the framework.** **A1**: We used Scaffold-GS because it manages 3D Gaussians more efficiently, resulting in less memory consumption, more accurate depth predictions, and better rendering quality. Following the reviewer's suggestion, we tested our framework's generalization by switching the GS branch to vanilla 3DGS. As shown in *Figure D*, GSDF_3DGS yields better reconstruction quality than using only the SDF branch (Instant-NSR). Additionally, the rendering quality improved from 28.21 to 28.31, as shown in *Table B*. **Q2. Time consumption and memory usage.** **A2**: Thanks for pointing this out. Indeed, the speedup partially comes from the hash-grid implementation. We additionally compared the reconstruction quality of GSDF and the SDF branch alone (i.e., Instant-NSR) with regard to training time, as illustrated in *Figure A*. It shows that GSDF can achieve higher reconstruction quality than the baseline when trained for the same amount of time and the same number of iterations. Please also refer to the global response. Regarding resources, we recorded the memory usage of our method and other methods in *Table A*. GSDF indeed uses more memory; we will clarify this limitation and move the limitations from the supplementary material to the main paper. **Q3. The surface extraction method and the geometry of the GS-branch.** **A3**: We extract the mesh using Marching Cubes on the SDF branch. As described in the paper (L26-30), we identified that enforcing Gaussian primitives to align with scene surfaces by restricting their shape and position can lead to a compromise in rendering. 
Therefore, we keep the diversity of 3D Gaussians in pursuit of better rendering quality. In addition, the Gaussian primitives of our GS-branch are placed closer to the potential surfaces than those of the GS branch trained alone, as illustrated in *Figure B*. **Bonus Question: fine holes in 2DGS.** **Answer**: 2DGS optimizes discrete 2D Gaussians, while the SDF optimizes a continuous field. Compared to a global representation, discrete primitives are more flexible in representing holes, while a global representation is better at capturing continuous surfaces. Additionally, we introduced a curvature term in the loss function when optimizing the SDF to avoid incorrect holes. However, this term can lead to over-smoothing, which may explain the missing fine holes in the Truck scene. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the clarifications as well as the efforts they made during the rebuttal. The rebuttal provides convincing additional experiments and addressed all of my concerns. While the overall strategy of the paper is not particularly novel (combining 3DGS and SDF branches for surface reconstruction and high-quality rendering), I believe the authors presented a technically solid work with convincing quantitative and qualitative results. I also appreciate the additional experiment consisting of replacing Scaffold-GS with vanilla 3DGS; the high-quality extracted surface obtained with vanilla 3DGS (see the Barn reconstruction in the rebuttal PDF) shows that the proposed regularization may indeed act as a more general pipeline for combining neural SDFs and 3DGS-based radiance fields, and inspire future work. For these reasons, I decided to increase my rating. --- Reply to Comment 1.1.1: Comment: Thanks for your comment; we appreciate your effort to help us validate the generalization of the proposed framework.
Summary: This paper proposes to jointly optimize 3D Gaussian Splatting (3DGS) and an SDF (like NeuS). During GS optimization, the Gaussians are aligned to the zero-level set (and normals) of the SDF. During the NeuS-like optimization, Gaussians are used to limit the range of ray sampling, resulting in efficient optimization. Experiments show that the proposed method achieves better reconstruction and rendering compared with SDF-based methods (e.g., NeuS) and GS-based methods, including the recent SuGaR and 2D-GS, which explicitly align Gaussians to surfaces. Strengths: + This paper proposes a nice combination of 3DGS and SDF (but not the first to do so, which I'll detail in the weaknesses section). Both the Gaussian and SDF representations provide merit to each other, resulting in accurate reconstruction and rendering. + Experiments are well done. Very recent methods, like 2D Gaussian Splatting and SuGaR, are adequately evaluated, showing the merit of jointly optimizing Gaussians and SDF, not just aligning Gaussians to surfaces. Weaknesses: ## Related work This method is not the first one combining 3DGS and SDF, as mentioned in the related work (NeuSG [5]). Regarding the NeurIPS CfP, "papers that appeared online within two months of a submission will generally be considered "contemporaneous" in the sense that the submission will not be rejected on the basis of the comparison to contemporaneous work." In other words, NeuSG, which was submitted to arXiv on Dec 1, 2023, has to be regarded as an "official" existing work. Thus, the paper should discuss the technical differences from NeuSG in more detail. I basically agree with the argument in L81-86 that mentions the subjective merit of the proposed method compared to NeuSG; the proposed method does significantly more for tighter integration of 3DGS and SDF. Meanwhile, readers will wonder whether the proposed method achieves better results than NeuSG or not. 
Unfortunately, the code of NeuSG does not seem to be available, and an official comparison may be difficult. I did not go through the NeuSG paper in depth and may be mistaken about something, but I wonder if the authors could show a quick ablation study using a baseline that somewhat mimics a simplified version of NeuSG by only using normal supervision by SDF and forcing the Gaussians flat. Technical Quality: 3 Clarity: 3 Questions for Authors: - Although I may be missing some information, an interesting potential merit of the proposed method compared to SDF-based methods is the efficient ray sampling, while the overall optimization should take longer than vanilla 3DGS. I would like to see a discussion of training time compared to those methods. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I did not find notable negative social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
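The simplified NeuSG-style baseline the reviewer suggests ("normal supervision by SDF" plus "force the Gaussians flat") could be sketched with two losses along the following lines. Both function forms are illustrative assumptions for this baseline, not NeuSG's actual formulation:

```python
import numpy as np

def flatten_loss(scales):
    """One plausible flattening regularizer: push the smallest scale axis
    of each Gaussian toward zero, so each Gaussian becomes a flat disk.
    `scales` has shape (N, 3): per-axis extents of N Gaussians."""
    return np.abs(np.sort(scales, axis=1)[:, 0]).mean()

def normal_consistency_loss(gauss_normals, sdf_grads):
    """Orientation-agnostic cosine variant: 1 - |cos| between each Gaussian's
    normal and the (normalized) SDF gradient at its center.  Zero when the
    normals and gradients are parallel or anti-parallel."""
    n = gauss_normals / np.linalg.norm(gauss_normals, axis=1, keepdims=True)
    g = sdf_grads / np.linalg.norm(sdf_grads, axis=1, keepdims=True)
    return (1.0 - np.abs((n * g).sum(axis=1))).mean()

# A nearly flat Gaussian with a normal aligned to the SDF gradient
# incurs near-zero loss under both terms.
scales = np.array([[0.5, 0.3, 0.01]])
normals = np.array([[0.0, 0.0, 1.0]])
grads = np.array([[0.0, 0.0, -2.0]])  # anti-parallel still counts as aligned
print(flatten_loss(scales), normal_consistency_loss(normals, grads))
```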
Rebuttal 1: Rebuttal: Thanks for your efforts and valuable comments. Below we address the concerns for each question. Common concerns are addressed in detail in the global rebuttal. Additional figures and tables are provided in the attached PDF, indexed as *Figure A-D* and *Table A-B*. **Q1. Comparison to NeuSG.** **A1**: Please refer to the common responses for a detailed explanation. Following the reviewer's recommendation ("by only using normal supervision by SDF and force the Gaussians flat"), we implemented this simplified version of NeuSG. *Figure D* shows that GSDF achieves better reconstruction than NeuSG. Additionally, NeuSG's rendering quality decreased from 28.77 to 28.63 PSNR, while GSDF increased PSNR to 28.93. **Q2. Training time.** **A2**: Please refer to the common responses for a detailed explanation. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttals Comment: Thanks for the rebuttals and additional results. I would also like to discuss these with the other reviewers. --- Reply to Comment 1.1.1: Comment: Thanks for your comment; we appreciate your effort.
Summary: This paper tackles the challenge of representing 3D scenes from multiview images by introducing a novel dual-branch architecture named GSDF, which combines 3D Gaussian Splatting and neural Signed Distance Fields. This architecture enhances both rendering and reconstruction through mutual guidance and joint supervision. By aligning Gaussian primitives with potential surfaces and speeding up SDF convergence, the method achieves finer geometric reconstructions and minimizes rendering artifacts. Demonstrating robustness and accuracy, the approach is effective in both synthetic and real-world scenarios. Strengths: The paper employs a dual-branch approach for simultaneous scene rendering and mesh reconstruction. It utilizes an SDF (Signed Distance Field) branch to guide the geometric optimization of the Gaussian branch, including operations such as adding and removing points. By leveraging bidirectional optimization, the method simultaneously enhances the reconstruction and rendering quality of both branches. This approach ensures high rendering quality while effectively reconstructing the scene's mesh, achieving smooth reconstruction results on both object-level and scene-level datasets. Additionally, the method has also demonstrated commendable performance in novel view synthesis tasks. Weaknesses: It appears that in the reconstruction and rendering branches, the authors have merely pieced together appropriate solutions. Although bidirectional optimization has been applied to both branches, the unity of the method seems lacking. The interrelation between the methods is weak, resembling more of an incremental improvement rather than a cohesive, integrated advancement. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The rendering and reconstruction results mentioned in the paper are derived from which branch? 
Are both rendering and reconstruction results obtained from the 3DGS branch, or is rendering performed by the 3DGS branch while the mesh is obtained from the Instant-NSR branch? 2. Can using pseudo-depth and normals (obtained from other estimation algorithms) directly enhance Instant-NSR or the Gaussians, potentially achieving good reconstruction and rendering results without the need for bidirectional optimization? 3. Which branch's rendering results are displayed in the ablation study? If it is the Instant-NSR branch, then different strategies for adding and removing points should not affect its rendering results. Similarly, if it is the 3DGS branch, should depth-guided sampling not affect its rendering outcomes? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your efforts and valuable comments. Below we address the concerns for each question. Common concerns are addressed in detail in the global rebuttal. Additional figures and tables are provided in the attached PDF, indexed as *Figure A-D* and *Table A-B*. **Q1. Output of each branch.** **A1**: In our framework, we use the SDF-branch to reconstruct accurate geometry and the GS-branch to render the images. Please refer to the common responses for a detailed explanation. **Q2. Pseudo-depth and normals from other estimation algorithms.** **A2**: Using pseudo-depth and normals from other estimation algorithms is possible. However, it would make the method highly dependent on the quality of those algorithms. In contrast, our framework allows the two branches to mutually promote each other, resulting in a more robust approach. **Q3. Ablations on the sampling process.** **A3**: The rendering results in the ablation study are from the GS branch, and the reconstruction results are from the SDF branch. Depth-guided sampling and the other operations affect both branches due to the 'Mutual Geometry Supervision' mechanism. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the clarifications as well as the efforts they made during the rebuttal. --- Reply to Comment 1.1.1: Comment: Thanks for your comment; we appreciate your effort.
Summary: This paper introduces GSDF, which jointly optimizes a GS-branch and an SDF-branch to constrain the inherent geometric issues of 3DGS. Furthermore, it proposes three mutual guidances to ensure satisfactory outcomes in both rendering and reconstruction. Extensive experiments on datasets such as DTU, Tanks and Temples (T&T), and Mip-NeRF 360 demonstrate that the method can achieve high rendering quality while obtaining better geometric results. Strengths: 1. The article is clearly written, allowing one to easily understand the motivation behind the two-branch design. 2. The authors conducted extensive experiments across various datasets to illustrate the high quality of the rendering and geometric results achieved by GSDF. 3. Utilizing the SDF-branch to supervise the inaccurate geometry or depth of 3DGS seems reasonable. Weaknesses: 1. Instead of relying on the predicted SDF for sampling, GSDF samples near the depth from the GS branch. However, I suspect the imprecise depth of 3DGS could influence the SDF-branch. I am curious whether there is a difference between the results of the SDF-branch and the baseline's SDF-branch results. 2. Which branch outputs the final result of the GSDF method? If the GS branch outputs both rendering and geometry results, what about the rendering and geometry outputs of the other branch? 3. It seems the time consumption is missing from Tables 1 and 2, and the quantitative result of NeuS is missing from Figure 4. Additionally, as reported on lines 248-249, the time consumption of GSDF (2 hours) is significantly higher compared to 2DGS (5-10 minutes) and the original 3DGS. This substantial gap may be attributed to the volumetric rendering of the SDF branch; thus, the improved geometry results might be benefiting from this branch. It may be necessary to compare it with some neural implicit reconstruction methods (like Neuralangelo) to better evaluate its performance given the additional computational cost. 4. 
In Table 3 of the ablation study, if depth-guided sampling is removed (w/o depth-guided sampling), one would need to consider whether the SDF-branch is now completely identical to the baseline NeuS. At this point, it would be important to assess whether the results of the SDF-branch are close to those of the baseline SDF-branch. Furthermore, as mentioned in lines 248-249, one should investigate whether the time consumption has significantly increased due to the absence of depth-guided sampling, given that this might affect the efficiency of the volumetric rendering performed by the SDF-branch. 5. Regarding the results in Table 1, the CD results of NeuS and 2DGS are not as good as in their papers. And there might be some ambiguity in the symbolic expressions presented in the text, e.g., is F_sdf in line 154 the same as F_s in line 139? Technical Quality: 2 Clarity: 3 Questions for Authors: please refer to the weaknesses section. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors didn't explicitly address the limitations of their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your efforts and valuable comments. Below we address the concerns for each question. Common concerns are addressed in detail in the global rebuttal. Additional figures and tables are provided in the attached PDF, indexed as *Figure A-D* and *Table A-B*. **Q1. Imprecise depth; SDF-branch vs. the baseline's SDF-branch.** **A1**: As we described in the paper (L124-133), the GS-branch is effective in locating the sampling area, while accurate depth is not a strict requirement. The depth-guided sampling derives the sampling interval from the SDF value at the back-projected depth point, thereby considering both the depth from the GS-branch and the predicted SDF from the SDF-branch. Additionally, to enhance robustness, we warm up the GS-branch to provide a coarse depth. Extensive experiments demonstrate that GSDF remains robust even if the depth from the GS branch is not precise. There are noticeable differences between the results of our SDF-branch and the baseline's SDF-branch. In Figure 4 of the main paper, the first column shows the results of the baseline's SDF-branch and the last column shows the results of our SDF-branch. **Q2. Each branch's output.** **A2**: Note that our framework features a two-branch design, where each branch excels at its respective task while benefiting from mutual supervision: the SDF-branch specializes in accurate geometry reconstruction, while the GS-branch focuses on high-quality image rendering. This mechanism enables our approach to achieve superior results over both individual methods (L24-27). **Q3. Training time; comparison with NeuS and Neuralangelo.** **A3**: We did not include NeuS results in Figure 4 of the main paper because NeuS struggles with reconstructing complex scenes. We present several object-level cases in *Figure C*, showing GSDF consistently outperforming NeuS. We included the time consumption for GSDF and single-branch methods in *Table A*. 
Although GSDF is slower per iteration, it achieves faster SDF convergence compared to the SDF branch alone in terms of both iterations and training time (see *Figure A*). Importantly, GSDF yields significantly better-quality results. Following the reviewer's suggestion, we compared GSDF with Neuralangelo. Neuralangelo requires over 12 hours on 2 GPUs and produces inferior results compared to GSDF, as shown in *Figure D*. Additionally, the actual time for 2DGS in real scenes is longer than 5-10 minutes, and its rendering quality is degraded, as shown in *Tables A* and *B*. **Q4. Ablation of the sampling process.** **A4**: In Table 3 of the main paper, when depth-guided sampling is removed, we switch to SDF-guided sampling, as used in most NeuS-based reconstruction methods. We compared the SDF convergence speed between GSDF and our SDF branch alone. As shown in Figure A, GSDF achieves faster SDF convergence in both training iterations and time. Specifically, following [1], we use kernel size as an indicator of reconstruction quality, where a smaller kernel size indicates better geometry. Our method consistently achieves better results with faster convergence. *Reference*: [1] Wang Z, Shen T, Nimier-David M, et al. Adaptive shells for efficient neural radiance field rendering. ACM Trans. Graph., 42, 2023. **Q5. The configuration of the reported results and notation.** **A5**: Since our method is not purely a reconstruction method, we split the dataset into training and test sets, with 1 test view for every 8 views. This differs from the default settings in NeuS and 2DGS, which used all images for training. Therefore, we used their officially released code to train the models with our settings. Regarding the notation, F_sdf and F_s are indeed the same MLP. We will clarify this in the revised version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. My concerns are partially resolved, and I raised my rating. 
--- Reply to Comment 1.1.1: Comment: Thanks for your comment, we appreciate your effort.
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable feedback. We are encouraged that reviewers find - our two-branch design novel and effective in boosting reconstruction and rendering quality simultaneously; - our analysis and experiments useful and comprehensive. We will release our code for reproduction and future research. Here we address some common concerns. **1. Function of each branch** Note that our framework features a two-branch design, where each branch excels at its respective task while benefiting from mutual supervision: the SDF-branch specializes in accurate geometry reconstruction, while the GS-branch focuses on high-quality image rendering. We emphasize that achieving good quality in both rendering and reconstruction is extremely hard, and to the best of our knowledge no recent method attains this goal. As we described in the paper (lines 24-26), a single representation struggles to achieve good quality in both reconstruction and rendering. Through thorough analysis, we noticed that a naive integration (such as guidance merely through losses) hardly balances the learning priorities during training; thus, it can only boost the quality of either rendering or reconstruction at the cost of sacrificing the other. Instead, we dug deeper into the architectural characteristics of the two branches and propose tight guidance at the model-architecture level, e.g., using the predicted depth from the GS-branch to guide the ray sampling of the SDF-branch (significantly speeding up the convergence of the SDF branch) and using the SDF field to guide the densification of the GS-branch (geometry-based density control rather than a heuristic strategy). The effectiveness is confirmed through comprehensive evaluations, showing improved reconstruction and rendering quality. **2. Time consumption** *Inference time* remains unaffected, as each branch can be used individually. 
Regarding *training time*: - In each iteration, the training overhead of our two-branch design is higher than that of single-branch designs (Tab A). However, GSDF achieves better reconstruction quality compared to the SDF-branch alone when trained for the same amount of time/iterations, as illustrated in Fig A. - The improvements in both rendering and reconstruction quality are non-negligible. The effectiveness of our proposed framework comes from the combination of local and global optimization. By introducing mutual optimization, the training time will inevitably increase. However, without much overhead on training efficiency, we achieve improvements in both rendering and reconstruction, while our design does not affect the inference stage, guaranteeing efficiency in downstream applications. - The core contribution is our design of the two-branch (geometry + rendering) framework, where each branch can be updated to a more efficient version in the future. We demonstrated the generalizability of our framework by switching the GS-branch from Scaffold-GS to 3DGS, which still exhibited superior rendering and reconstruction quality (Fig D, Tab B). **3. Comparison with Concurrent work** The concurrent work NeuSG aims for improved reconstruction: it augments the SDF branch with an SDF loss encouraging the SDF values of Gaussian points and MVS points to be 0, and a normal loss encouraging normal consistency between the SDF and the Gaussians. Extra regularization losses (including flattening the Gaussian shape) are introduced to make the Gaussians more geometry-friendly, regardless of the potential sacrifice in rendering quality. GSDF has critical differences compared to NeuSG: - Instead of only focusing on reconstruction, GSDF aims to improve both geometry and rendering, whose effectiveness has been verified by extensive experiments. 
- Beyond loss-based guidance, we investigate the combination inside the model architecture and propose a tightly-coupled two-branch design including depth-guided sampling and SDF-guided densification. - Moreover, GSDF does not require an extra MVS process to provide accurate geometry, making ours a more versatile method requiring only coarse initialization. Pdf: /pdf/59c04dbe9ec058ff0fd24a32bf1124d74a0b5315.pdf
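The depth-guided sampling idea described in the rebuttal (using the GS-branch's predicted depth to guide the ray sampling of the SDF-branch) can be sketched as a toy example. The function name, the relative band width, and the uniform spacing are illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

def depth_guided_samples(ray_origin, ray_dir, pred_depth, n_samples=16, band=0.2):
    """Toy depth-guided ray sampling: instead of sampling along the whole ray,
    concentrate samples in a narrow band around the depth predicted by the
    GS-branch (band width and spacing here are assumptions)."""
    t_near = pred_depth * (1.0 - band)
    t_far = pred_depth * (1.0 + band)
    t = np.linspace(t_near, t_far, n_samples)  # distances along the ray
    return ray_origin[None, :] + t[:, None] * ray_dir[None, :]

# A ray from the origin along +z with a predicted depth of 2.0:
pts = depth_guided_samples(np.zeros(3), np.array([0.0, 0.0, 1.0]), pred_depth=2.0)
print(pts.shape)  # (16, 3)
```

Concentrating samples near the predicted surface is what lets the SDF-branch converge in fewer iterations than uniform or SDF-guided sampling over the full ray.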
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
RG-SAN: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation
Accept (oral)
Summary: This paper presents RG-SAN, a new method for 3D Referring Expression Segmentation. It combines spatial reasoning with textual cues to segment 3D objects accurately. RG-SAN uses a Text-driven Localization Module and a Rule-guided Weak Supervision strategy. It outperforms existing methods on the ScanRefer benchmark. It handles spatial ambiguities well and sets a new precision standard for 3D scene understanding. Extensive ablation studies and visualizations validate the effectiveness of the proposed modules. Strengths: 1) The motivation of this paper is clear. Figure 1 shows how spatial relationship info in natural language matches with 3D scenes, making the motivation obvious. Figure 4 reinforces this motivation from a quantitative perspective through statistical analysis. 2) I agree that spatial relationship reasoning is crucial for understanding 3D scenes. The complex spatial relationships in natural language give important spatial clues. The proposed Text-driven Localization Module (TLM) performs spatial reasoning explicitly, aligning with how humans understand 3D scenes, which makes sense to me. 3) The Rule-guided Weak Supervision (RWS) module enables localization and segmentation of auxiliary object nouns in natural language without any supervision. This aspect is interesting and shows the model's generalization capability. 4) The appendix includes an analysis of how well large language models (LLMs) can localize target words, comparing this with the RWS module's results. This comparison further validates RWS and offers new insights into using LLMs for the 3D-RES task. 5) The paper reports extensive experiments on common 3D-RES datasets like ScanRefer and ReferIt3D, achieving state-of-the-art performance. Ablation studies also show the effectiveness of the TLM and RWS modules. Weaknesses: 1) The paper does a good job comparing 3D-RES methods. But traditional 3D Visual Grounding methods using bounding boxes are more mature. 
I'd like to see how these older methods perform on this task. This would give a more complete quantitative comparison with the proposed method. 2) The appendix talks about LLMs for target word localization. But it doesn't compare them directly with 3D Visual Grounding methods based on LLMs. Could LLM-based approaches be better? I'd like to see a comparison between specialized lightweight models and general LLMs. 3) The paper should provide details on the superpoint feature extraction mentioned in line 119. I'm curious if superpoint features cover all targets. If not, what's the missing rate? 4) The layout of Tables 2 and 3 is off, and the font in Figure 2 is too small. This makes it hard to read the details. 5) The text processing procedure isn't detailed enough. For example, the interaction process of the DDI module. I'd recommend including this description in the main text. Technical Quality: 3 Clarity: 4 Questions for Authors: 1 Will the code for RG-SAN be open-sourced in the future? 2 Could the authors provide a quantitative comparison of RG-SAN with a broader range of 3D visual grounding methods? Others please look at weaknesses. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Limitations are discussed [line 576]. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and recognition of our contributions. We greatly appreciate your commendation of our clear articulation of motivation and quantitative analysis. We're pleased you acknowledged our method's effectiveness in spatial relation reasoning for 3D scenes and expressed interest in our rule-guided weak supervision strategy. Now, let's address the specific concerns you've raised and offer further clarification: --- > **Q1: The paper does a good job comparing 3D-RES methods. But traditional 3D Visual Grounding methods using bounding boxes are more mature. I'd like to see how these older methods perform on this task. This would give a more complete quantitative comparison with the proposed method.** > A1: Thank you for your insightful suggestion. Based on your advice, we adapted the high-performing methods 3DVG-Transformer and 3D-SPS from 3D-REC for 3D-RES and tested their performance, as shown in Table A. Our method still demonstrates a significant advantage, outperforming by over 10 points. | Method | Unique mIoU | Multiple mIoU | Overall mIoU | | --- | --- | --- | --- | | 3DVG-Transformer [a]* | 49.9 | 27.0 | 31.4 | | 3D-SPS [b]* | 54.7 | 26.7 | 32.1 | | RG-SAN (Ours) | 74.5 | 37.4 | 44.6 | Table A: Comparison with 3D Visual Grounding methods. * We reproduced the results by extracting points within the boxes as segmentation mask predictions using their official codes. [a] 3DVG-Transformer: Relation modeling for visual grounding on point clouds. ICCV 2021 [b] 3D-SPS: Single-stage 3D visual grounding via referred point progressive selection. CVPR 2022 --- > **Q2: The appendix talks about LLMs for target word localization. But it doesn't compare them directly with 3D Visual Grounding methods based on LLMs. Could LLM-based approaches be better? I'd like to see a comparison between specialized lightweight models and general LLMs.** > A2: Thank you for your valuable suggestion. 
We compared our model with LLM-based 3D RES models SegPoint [c] and Reason3D [d], as shown in Table B, and our model still demonstrates a significant advantage, leading by more than 2.6 points. | Method | Unique mIoU | Multiple mIoU | Overall mIoU | | --- | --- | --- | --- | | SegPoint [c] | - | - | 41.7 | | Reason3D [d] | 74.6 | 34.1 | 42.0 | | RG-SAN (ours) | 74.5 | 37.4 | 44.6 | Table B: Comparison with LLM-based methods. [c] SegPoint: Segment Any Point Cloud via Large Language Model. ECCV 2024 [d] Reason3D: Searching and Reasoning 3D Segmentation via Large Language Model. Arxiv 2024 --- > **Q3: The paper should provide details on the superpoint feature extraction mentioned in line 119. I'm curious if superpoint features cover all targets. If not, what's the missing rate?** > A3: We appreciate your insightful feedback. The superpoint generation mechanism in RG-SAN ensures that no instances are missed, as the superpoints comprehensively cover the entire scene. Superpoints are essentially fine-grained fragments that group semantically similar points together. They do not overlap with each other and collectively constitute the whole scene, guaranteeing that all objects are included within superpoints. This setup ensures that every object is included within the superpoints, as detailed in the upper left corner of Figure 2 in our paper. Therefore, the issue of missing objects does not arise in our framework. --- > **Q4: The layout of Tables 2 and 3 is off, and the font in Figure 2 is too small. This makes it hard to read the details.** > A4: Thank you for your suggestions. To improve readability and facilitate understanding, we will adjust the layout of Tables 2 and 3 and enhance Figure 2, including resizing the font, in the new version. --- > **Q5: The text processing procedure isn't detailed enough. For example, the interaction process of the DDI module. I'd recommend including this description in the main text.** > A5: Thank you for your suggestion. 
We will include the details of DDI interactions in the new version to enhance understanding for readers. --- > **Q6: Will the code for RG-SAN be open-sourced in the future?** > A6: Thanks for your interest. In the paper, we have already provided the code via an anonymous link and also uploaded a copy in the supplementary materials. We also commit to releasing the code once the paper is accepted. --- Rebuttal Comment 1.1: Title: Sincere Request for Further Discussions Comment: Dear Reviewer MnE2, Thanks again for your great efforts and constructive advice in reviewing this paper! With the discussion period drawing to a close, we expect your feedback and thoughts on our reply. We put a significant effort into our response, with several new experiments and discussions. We sincerely hope you can consider our reply in your assessment. We look forward to hearing from you, and we can further address unclear explanations and remaining concerns if any. Regards, Authors --- Rebuttal Comment 1.2: Title: Raise the score to 8 Comment: Thanks to the authors for an excellent rebuttal—I'm pleased to say that all of my concerns have been thoroughly addressed. The inclusion of both traditional visual grounding and the latest LLM-based approaches really strengthens the paper's conclusions. I'm particularly excited about the LLM-based approach, and I believe expanding on this in future versions could really push the field forward. It’s clear that this aspect has a lot of potential to guide future research. I also took the time to review the other reviewers’ comments, and I stand by my initial impression. The authors' exploration of how spatial relationships in natural language correspond with 3D scenes tackles a crucial and challenging area, especially compared to purely visual 3D segmentation. Spatial and relational reasoning is one of the major hurdles in cross-modal 3D vision today, and it’s great to see the authors making strides in this direction. 
I’m confident this work will inspire further progress in embodied intelligence. I will fully support this paper and raise its score. --- Reply to Comment 1.2.1: Comment: Thank you very much for your recognition. We will incorporate everyone’s feedback into the final version and make the code open source for the community to learn from. Once again, we sincerely appreciate your suggestions.
Summary: This paper presents a novel and high-performing 3D referring segmentation network. Specifically, it approaches the problem from both 3D spatial relationships and natural language spatial descriptions, innovatively using explicit spatial position modeling and multimodal interaction. This allows the query corresponding to textual entities to understand both semantics and spatial locations. Additionally, the use of weak supervision techniques enables the model to achieve strong generalization capabilities even under incomplete annotations. Comprehensive experiments further validate the superior performance of the proposed method. Strengths: I commend the authors for their insightful paper, particularly the proposed text-guided spatial perception modeling approach. This method aligns with human cognitive habits and has the potential to significantly advance the field of multimodal 3D perception. Several notable advantages are highlighted: 1. The paper deeply explores text-conditioned 3D spatial perception from both 3D spatial relationships and natural language structure perspectives, advancing the community's exploration of multimodal spatial perception modeling. 2. The proposed TLM module effectively addresses the challenge of explicit spatial reasoning in previous end-to-end segmentation paradigms, significantly improving segmentation performance while maintaining high inference speed. 3. The RWS module demonstrates data efficiency, generalizing capabilities to all entities without requiring mask labels for all textual entities. 4. The experiments are comprehensive, evaluating the model's performance on both the ScanRefer and ReferIt3D datasets, thoroughly validating its robust performance. 5. The ablation studies are detailed, thoroughly analyzing the proposed TLM and RWS modules, as well as the backbone selection and hyperparameter settings. 6. 
The video demo in the open-source link is engaging, and the visualizations in Figure 3 of the paper are intuitive, effectively illustrating the core ideas and powerful performance of the proposed method. Weaknesses: 1. More details can be added regarding the superpoint feature extraction and text feature processing sections. 2. The paper mentions that the Sparse 3D U-Net used as the visual backbone is pre-trained. On which datasets was it pre-trained? Would using different pre-trained backbones result in performance variations? 3. Figure 2 has too many colors, making it somewhat cluttered and potentially confusing for readers. It is recommended to simplify and optimize the color scheme. 4. It is suggested to include some bad cases to enhance the completeness of the work. Technical Quality: 4 Clarity: 4 Questions for Authors: Will the complete code for the paper be open-sourced for additional exploration? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors discuss the limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and recognition of our work. We appreciate your acknowledgment of our exploration of text-conditioned 3D spatial perception and the effectiveness of the TLM and RWS modules. We're also glad you found our video demo and Figure 3 visualizations clear and insightful. Now, let's address your specific concerns and provide further clarification: --- > **Q1: More details can be added regarding the superpoint feature extraction and text feature processing sections.** > A1: We appreciate your valuable suggestions. We will provide a detailed description of superpoint feature extraction and text feature processing in the new version. --- > **Q2: The paper mentions that the Sparse 3D U-Net used as the visual backbone is pre-trained. On which datasets was it pre-trained? Would using different pre-trained backbones result in performance variations?** > A2: Thank you for your insightful question. The 3D U-Net we used has been pre-trained on 3D instance segmentation tasks [45]. Additionally, following your suggestion, we explored alternative backbones, including PointNet++ [39], used by the classic work 3D-VisTA [a], and another superpoint-based backbone, SSTNet [28], as detailed in Table A. Our findings indicate that PointNet++ [39] and our employed SPFormer [45] achieve comparable performance, demonstrating the adaptability and effectiveness of our proposed modules across different backbone architectures. We will include this discussion in the final version. | Visual Backbone | Unique mIoU | Multiple mIoU | Overall mIoU | | --- | --- | --- | --- | | SSTNet [28] | 73.9 | 33.9 | 42.0 | | PointNet++ [39] | 75.5 | 36.1 | 44.0 | | SPFormer [45] | 74.5 | 37.4 | 44.6 | Table A: Ablation study of the Visual Backbones. [a] 3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment. ICCV 2023. --- > **Q3: Figure 2 has too many colors, making it somewhat cluttered and potentially confusing for readers. 
It is recommended to simplify and optimize the color scheme.** > A3: Thank you for your detailed feedback. We will optimize Figure 2 in the next version to enhance its clarity and readability. --- > **Q4: It is suggested to include some bad cases to enhance the completeness of the work.** > A4: Thank you for your valuable suggestions. We will include bad cases and corresponding analyses in the new version. --- > **Q5: Will the complete code for the paper be open-sourced for additional exploration?** > A5: Thank you for your interest. In the paper, we have already provided the code via an anonymous link and also uploaded a copy in the supplementary materials. We also commit to releasing the complete code once the paper is accepted. --- Rebuttal Comment 1.1: Title: Sincere Request for Further Discussions Comment: Dear Reviewer 8gkL, Thanks again for your great efforts and constructive advice in reviewing this paper! With the discussion period drawing to a close, we expect your feedback and thoughts on our reply. We put a significant effort into our response, with several new experiments and discussions. We sincerely hope you can consider our reply in your assessment. We look forward to hearing from you, and we can further address unclear explanations and remaining concerns if any. Regards, Authors --- Rebuttal Comment 1.2: Comment: Dear Reviewer 8gkL, We are grateful for your thorough review and the constructive feedback provided on our submission. Your insights have significantly contributed to the refinement of our paper. We have endeavored to address all the points raised in your initial review comprehensively. As the discussion period for NeurIPS 2024 is drawing to a close, we would appreciate knowing if there are any further clarifications or additional details you might need. We are fully prepared to continue discussions to further enhance the quality of our work. 
With appreciation, Paper 9950 Authors --- Rebuttal 2: Comment: I apologize for the delayed response. I've been quite busy lately, but I wanted to take a moment to wrap things up. First, I’d like to thank the authors for their detailed response. It’s impressive to see that the proposed method performs effectively across different backbones. After carefully reviewing all the discussions, I find this paper to be very valuable. The exploration of text-conditioned 3D spatial perception from both 3D spatial relationships and natural language structure perspectives provides constructive guidance for 3D cross-modal understanding, which is indeed a challenging aspect of human-computer interaction. The authors have elegantly addressed this issue without introducing additional data or annotations, which is truly inspiring. The contributions of this paper have been widely recognized by everyone involved. I also noted Reviewer 6dEW's comments regarding some minor issues with the formula descriptions. In my view, these do not affect the overall readability of the paper and can be easily addressed with minor revisions. Therefore, I believe this paper deserves a strong score, and I am willing to champion it. --- Rebuttal Comment 2.1: Comment: Thank you very much for recognizing our work on text-conditioned 3D spatial perception. We will incorporate your feedback into the final version and make the code open source for the community to learn from. Once again, we sincerely appreciate your suggestions.
Summary: This paper presents the Rule-Guided Spatial Awareness Network (RG-SAN) for 3D referring expression segmentation (3D-RES), offering a novel approach to understanding spatial relationships in the visual-language perception domain. It aligns 3D and linguistic features not only at the semantic level but also within geometric space. The proposed network incorporates modules for textual feature extraction, text-driven localization, and rule-guided weak supervision. In the experimental setup, the model builds upon the efficient Superpoint Transformer for 3D feature extraction, as developed by Sun et al. (2023). The experimental results are promising, particularly in the overall 0.25 threshold setting on the ScanRefer dataset. Extensive ablation studies demonstrate the effectiveness of the proposed modules, while vivid visualizations showcase the method's impressive generalization capabilities. Strengths: 1. RG-SAN approaches 3D-language semantic alignment and spatial perception from a novel perspective. It not only aligns language features with 3D point cloud features at the semantic level but also explicitly assigns spatial positions to textual entity words within the geometric space. This explicit alignment helps address the spatial ambiguity inherent in directional natural language, allowing RG-SAN to achieve a more precise understanding of spatial relationships described in text. 2. The authors conducted comprehensive experiments on the 3D-RES task, with particularly notable performance improvements on the ScanRefer dataset. It is impressive that the method significantly enhances performance while maintaining rapid inference speed, which is beneficial for real-time applications of this task. 3. The authors performed detailed ablation studies on the proposed TLM and RWS modules, thoroughly examining the settings and hyperparameter choices. Additionally, they conducted ablation studies on the visual backbone and text backbone in the supplementary materials. 
These comprehensive ablations help readers understand the efficacy and rationale behind the proposed modules. 4. In the supplementary materials, the authors provided a statistical analysis of the importance of spatial information for the 3D-RES task. This quantitative analysis supports the motivation of the paper and gives readers a clearer understanding of the role of spatial information in this task. 5. The authors' visualizations are illustrative. In particular, Figure 3 demonstrates RG-SAN's text-guided spatial understanding and localization capabilities, showcasing excellent generalization. 6. The authors have committed to open-sourcing their method, providing a link that includes the source code and an engaging video demo. This openness will promote development and knowledge sharing within the community. Weaknesses: 1. RG-SAN adopts superpoints as fundamental visual units for feature extraction and segmentation. While previous works have employed similar approaches, I am interested in understanding the segmentation quality of superpoints themselves. For instance, how many superpoints exist solely within individual objects? This is crucial because if a superpoint spans across two objects, it inevitably affects the segmentation results. 2. RG-SAN trains spatial awareness networks using the centroid coordinates of target objects. Here, does "object center" refer to the geometric centroid or the center of mass (where the former denotes the center of the bounding box and the latter denotes the mean coordinate of all points belonging to the object)? Given the inherent sparsity of point clouds, these two centers may exhibit significant differences. 3. The statistical analysis of the importance of spatial information for 3D-RES should be included in the main text. This will help readers understand the motivation of the paper from both qualitative and quantitative perspectives. 4. 
Although this paper discusses its limitations, it does not provide failure cases or corresponding analyses. Specifically, for the segmentation of plural nouns, it remains unclear whether only one object is recognized or if segmentation fails altogether. It would be beneficial to include either qualitative or quantitative analysis in this regard. Including this part would make the paper more comprehensive and facilitate follow-up research and improvements. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Does "object center" in this paper refer to the geometric centroid (center of the bounding box) or the center of mass (mean coordinate of all points)? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have discussed the limitations, and I think it is somewhat okay for this work. It would be even better if an analysis of bad cases could be included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
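Weakness 1 asks how often a superpoint stays inside a single object. A minimal sketch of how one could measure this (the array layout, function name, and toy data are hypothetical, not the authors' code):

```python
import numpy as np

def single_object_point_fraction(superpoint_ids, object_ids):
    """Fraction of points lying in superpoints that touch exactly one object.

    superpoint_ids[i] -- superpoint index of point i (hypothetical layout)
    object_ids[i]     -- ground-truth object label of point i
    """
    single = 0
    for sp in np.unique(superpoint_ids):
        mask = superpoint_ids == sp
        # A superpoint is "single-object" if all of its points share one label.
        if len(np.unique(object_ids[mask])) == 1:
            single += int(mask.sum())
    return single / len(superpoint_ids)

# Toy scene: superpoint 0 stays inside object 7, superpoint 1 spans objects 7 and 8.
sp = np.array([0, 0, 1, 1])
obj = np.array([7, 7, 7, 8])
print(single_object_point_fraction(sp, obj))  # 0.5
```

Run over a full dataset, this statistic directly answers the reviewer's question about how many superpoints span object boundaries.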
Rebuttal 1: Rebuttal: Thank you for your positive feedback and acknowledgment of our paper's strengths. We're pleased you appreciate our approach of attributing 3D spatial properties to text for 3D multimodal spatial perception modeling, recognize the promising performance of our model on ScanRefer, and underscore the effectiveness of our proposed methods. Now, we'll address the specific concerns you've raised to provide further clarification: --- > **Q1: RG-SAN adopts superpoints as fundamental visual units for feature extraction and segmentation. While previous works have employed similar approaches, I am interested in understanding the segmentation quality of superpoints themselves. For instance, how many superpoints exist solely within individual objects? This is crucial because if a superpoint spans across two objects, it inevitably affects the segmentation results.** > A1: Thank you for your insightful feedback. In practice, due to the fine granularity of superpoints and their tendency to aggregate semantically similar points, most superpoints cover only a single object. To verify this, we conducted a statistical analysis of the superpoints in the ScanRefer [5] dataset. If a superpoint contains points from more than one object, it is classified as containing multiple objects; otherwise, it is categorized as containing a single object. Our analysis reveals that 99.55% of the points are within superpoints that cover a single object, with a missing probability of less than 0.5%. This indicates that the issue of multiple objects within a single superpoint has a negligible impact on the final results and does not warrant special attention. --- > **Q2: RG-SAN trains spatial awareness networks using the centroid coordinates of target objects. Here, does "object center" refer to the geometric centroid or the center of mass (where the former denotes the center of the bounding box and the latter denotes the mean coordinate of all points belonging to the object)? 
Given the inherent sparsity of point clouds, these two centers may exhibit significant differences.** > A2: Thank you for your valuable comments. We use the centroid of all superpoints belonging to an object, representing the average coordinates of these superpoints. We conducted comparative experiments between this center-of-mass setting and the bounding-box center setting. As shown in Table A, the results indicate no significant differences between the two approaches. We will include this discussion in the revised version to enhance the clarity of the paper. | Setting | Unique mIoU | Multiple mIoU | Overall mIoU | | --- | --- | --- | --- | | Center of Box | 74.8 | 37.4 | 44.7 | | Center of Mass | 74.5 | 37.4 | 44.6 | Table A: Comparison of the Center of Box and Center of Mass. --- > **Q3: The statistical analysis of the importance of spatial information for 3D-RES should be included in the main text. This will help readers understand the motivation of the paper from both qualitative and quantitative perspectives.** > A3: Thank you for your suggestion. We will include the statistical analysis on the importance of spatial information for 3D-RES in the main text to enhance the rigor of the paper. --- > **Q4: Although this paper discusses its limitations, it does not provide failure cases or corresponding analyses. Specifically, for the segmentation of plural nouns, it remains unclear whether only one object is recognized or if segmentation fails altogether. It would be beneficial to include either qualitative or quantitative analysis in this regard. Including this part would make the paper more comprehensive and facilitate follow-up research and improvements.** > A4: Thanks for your insightful suggestion. We will include visualizations of failure cases and provide a qualitative analysis in the new version. 
--- > **Q5: Does "object center" in this paper refer to the geometric centroid (center of the bounding box) or the center of mass (mean coordinate of all points)?** > A5: Thank you for your detailed question. We are referring to the centroid, where the center coordinate is defined as the mean of the coordinates of all superpoints belonging to the object. --- Rebuttal Comment 1.1: Title: Sincere Request for Further Discussions Comment: Dear Reviewer p7dx, Thanks again for your great efforts and constructive advice in reviewing this paper! With the discussion period drawing to a close, we expect your feedback and thoughts on our reply. We put a significant effort into our response, with several new experiments and discussions. We sincerely hope you can consider our reply in your assessment. We look forward to hearing from you, and we can further address unclear explanations and remaining concerns if any. Regards, Authors --- Rebuttal Comment 1.2: Comment: Dear Reviewer p7dx, We are grateful for your thorough review and the constructive feedback provided on our submission. Your insights have significantly contributed to the refinement of our paper. We have endeavored to address all the points raised in your initial review comprehensively. As the discussion period for NeurIPS 2024 is drawing to a close, we would appreciate knowing if there are any further clarifications or additional details you might need. We are fully prepared to continue discussions to further enhance the quality of our work. With appreciation, Paper 9950 Authors
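The distinction raised in Q2 and Q5 — the bounding-box center versus the mean coordinate of all points — can be made concrete with a small sketch (toy data; not the authors' implementation):

```python
import numpy as np

def box_center(points):
    # Geometric centroid: center of the axis-aligned bounding box.
    return (points.min(axis=0) + points.max(axis=0)) / 2.0

def mass_center(points):
    # Center of mass: mean coordinate of all points (or of superpoint centers,
    # as the rebuttal describes).
    return points.mean(axis=0)

# A sparse, skewed point cloud: one outlying point pulls the box center far
# from the mass center, illustrating the reviewer's concern.
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [1.0, 1.0, 0.0],
                [10.0, 0.0, 0.0]])
print(box_center(pts))   # [5.   0.5  0. ]
print(mass_center(pts))  # [3.   0.25 0.  ]
```

The gap between the two centers grows with point-cloud sparsity and skew, which is why the rebuttal's Table A comparison of the two settings is informative.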
Summary: The paper proposes a new framework for 3D referring expression segmentation. The main contributions include analyzing the spatial information among objects and rule-guided target selection. Extensive experiments validate the effectiveness of the proposed method. Strengths: The authors develop the method to achieve the state of the art performance on ScanRefer benchmark for 3D Referring Expression Segmentation. The authors conduct detailed experiments and comparison to validate the design. Weaknesses: 1. Both the idea of incorporating the spatial information is not new. For spatial relation, section 3.2.2 and section 3.2.3 are very similar to [22]. Equation (7) to equation (10) are almost identical to equation (5) to equation (7) in [22]. Equation (12) is similar to equation (8) in [22]. What's more, there have been a lot of work on spatial information in 3D visual grounding, such as [6]. 2. The writing needs improvement. Some notations are not well explained. For example, K_i in line 120 and c_i in line 123 are not introduced. P^t in equation (3) with two subscripts is inconsistent with P^t in equation (5). Table t in line 166 and q in equation (7) are not introduced. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. The text-driven localization module is similar to the architecture in [22]. Have you tried to report their performance in ScanRefer in table 1? 2. In equation (2), how do you initialize W_E and W_S to obtain the initial representations? 3. In terms of equation (5), what is the intuition of adding positional encoding to position features? Does the order of the visual feature affects the final attention output? 4. In equation (6) the positional encoding is added while in equation (9) and (10) it is concatenated. Why are these different? 5. For section 3.3.1, do you have separate evaluation on how your algorithm performs in terms of finding the target? 6. 
The explanation for table 4 points out that Top1 tends to select different nodes variably (line 281), but isn't RTS also choosing different nodes? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors address the limitations and societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: Using spatial information is not new. Sec. 3.2.2 and 3.2.3 and Eq. (7) to (12) resemble those in [22]. Spatial information in 3D visual grounding has been explored, like [6]. > A1: Thank you for your valuable feedback. We acknowledge that our use of positional encoding is based on previous methods [51][22]. However, our primary contribution lies in modeling the spatial positions and relationships of noun entities within sentences for the 3D-RES task, which has also been recognized by reviewers p7dx, 8gkL, and MnE2. This approach has two core differences compared to [22]: (1) Unlike [22], which uses zero initialization for queries and random initialization for positional information, our queries and positions are text-driven from the start. This improves mIoU by 4.4 points, as shown in Tab. 2 and the newly added Tab. A. (2) Unlike [22], which supervises the positions of all target instances, in 3D-RES only the core target word is supervised. Our novel RWS method constructs spatial relationships for all noun instances using only the target word's positional information, improving mIoU by 2.3 points, as shown in Tab. 4.

| Method | Initialization method of Queries | Initialization method of Position | Multiple mIoU | Overall mIoU |
| --- | --- | --- | --- | --- |
| MAFT [22] | Zero | Random | 29.7 | 37.9 |
| | Text-driven | Random | 30.1 | 38.8 |
| RG-SAN w/o RWS | Text-driven | Text-driven | 34.7 | 42.3 |
| RG-SAN (Ours) | Text-driven | Text-driven | 37.4 | 44.6 |

Table A: Comparison of MAFT [22] with our RG-SAN on the ScanRefer dataset. **In summary, our core innovation lies in constructing spatial positional information, rather than just using positional encoding, as done in [6].** Using positional encoding is a standard operation once the spatial information has been generated. We also explored other positional encodings, such as 5D Euclidean RPE, achieving similar results in Tab. 2. 
Following your suggestion, we will further compare and analyze our work with the highly relevant and interesting study [22] in the new version to clarify our contributions. --- > **Q2: The writing needs improvement: K_i (line 120) and c_i (line 123) are not defined, P^t in Eq. (3) with two subscripts is inconsistent with P^t in Eq. (5), and Table t (line 166) and q (Eq. 7) are not introduced.** > A2: Thank you for your detailed feedback. We adopted the representation from [28] to formulate our approach as concisely as possible. Following your suggestion, we will make improvements in the new version. --- > **Q3: The TLM module resembles [22]'s architecture. Have you compared their performance on ScanRefer in Tab. 1?** > A3: Thank you for your constructive suggestions. We conducted a detailed comparison with [22] in Q1A1 and reported the suggested performance. Our proposed RG-SAN improves by 6.7 points, demonstrating its effectiveness. We will include this discussion in the new version. Your suggestions will enhance the robustness of our contributions. --- > **Q4: In Eq. (2), how do you initialize W_E and W_S to obtain initial representations?** > A4: Thank you for your attention to the details of our paper. In Eq. (2), W_E and W_S are initialized randomly. We will include this information in the new version to enhance clarity. --- > **Q5: For Eq. (5), what is the intuition of adding positional encoding to position features? Does the order of visual features impact the attention output?** > A5: Thank you for your constructive question. Adding absolute position encoding is common in computer vision [51]. Changing the input order of visual features does not affect the final attention output because (1) the attention mechanism is order-invariant, and (2) the position encoding is tied to the visual tokens' 3D positions (xyz), so altering the input order does not impact these positions. --- > **Q6: Why is positional encoding added in Eq. (6) but concatenated in Eqs. 
(9) and (10)?** > A6: Thank you for your insightful inquiry. Our experiments show that addition and concatenation for positional encoding in Eqs. (6), (9), and (10) yield similar results, as shown in Tab. B. The differences are negligible, so either approach can be used without significantly impacting the outcome.

| Eq. (6) | Eqs. (9), (10) | Unique mIoU | Multiple mIoU | Overall mIoU |
| --- | --- | --- | --- | --- |
| Cat | Cat | 74.6 | 37.4 | 44.6 |
| Cat | Add | 74.7 | 37.4 | 44.7 |
| Add | Cat | 74.5 | 37.4 | 44.6 |
| Add | Add | 75.1 | 37.5 | 44.8 |

Table B: Ablation of positional encoding usage, where "Cat" denotes concatenation and "Add" denotes direct addition. --- > **Q7: Evaluation of RTS for finding the target.** > A7: Thank you for your question. We previously evaluated RTS's ability to find the target using LLAMA2 70B in Sec. F of the supplementary materials, achieving an 80% match rate. However, LLAMA2 is not entirely accurate, making it an unreliable benchmark. To better validate RTS, we annotated the text of 9,508 Val set samples to mark the target word positions. RTS achieved an accuracy of 93.4%, compared to 63.7% for the Top1 method, confirming our algorithm's effectiveness. We will include this evaluation in the new version and open-source the annotations to ensure reproducibility. --- > **Q8: The explanation for Tab. 4 points out Top1 tends to select different nodes variably (line 281), but RTS is also choosing different nodes?** > A8: We apologize for the confusion. In line 281, "different" refers to predicted nodes that are not the target noun word. Top1 often selects nodes other than the target word, such as adjectives or verbs, leading to semantic confusion and an accuracy of only 63.7%. In contrast, RTS accurately identifies the target word based on syntax, regardless of its position, achieving an accuracy of 93.4% as Q7A7 points out. This precise selection enhances semantic accuracy and significantly improves performance, as shown in Tab. 
4. We will revise the description in the new version to improve clarity. --- Rebuttal Comment 1.1: Title: Sincere Request for Further Discussions Comment: Dear Reviewer 6dEW, Thanks again for your great efforts and constructive advice in reviewing this paper! With the discussion period drawing to a close, we look forward to your feedback and thoughts on our reply. We have put significant effort into our response, with several new experiments and discussions. We sincerely hope you can consider our reply in your assessment. We look forward to hearing from you, and we can further address any unclear explanations and remaining concerns. Regards, Authors --- Rebuttal 2: Comment: Thanks for the detailed rebuttal. The authors have addressed some of my concerns. Here is some feedback: For Q2: You should not follow the notations from other work and assume the readers could follow. Please clearly define the notations in the revised version. For Q5: If the order of visual features should not affect the output, then you should **not** add positional encoding to the visual features. The same goes for the positional features. If you add positional encoding, then the attention output would change if the inputs are permuted. This is a technical flaw. For Q6: I hope you could be consistent about how you deal with the positional encoding (if you need to add it). It looks from the new ablation result that using 'add' for both leads to the best performance. I have also read the reviews from other reviewers. I would maintain my original rating as some of my concerns are not well-addressed. --- Rebuttal Comment 2.1: Title: Response to Reviewer 6dEW (part-1) Comment: Thank you very much for your prompt, positive, and clear feedback. We greatly appreciate your recognition of our core motivation and the primary technological innovations. We will now address the remaining concerns you raised, including those related to the presentation and the positional encoding aspect. 
We hope our forthcoming responses will meet your expectations. > Feedback1: For Q2: You should not follow the notations from other work and assume the readers could follow. Please clearly define the notations in the revised version. > **FA1:** Thank you for your constructive feedback. We are committed to addressing your comments and will refine the revised version to improve the clarity of the notations. Your input is invaluable to us, and we appreciate your guidance in helping us strengthen our work. --- > Feedback2: For Q5: If the order of visual features should not affect the output, then you should **not** add positional encoding to the visual features. The same goes for the positional features. If you add positional encoding, then the attention output would change if the inputs are permuted. This is a technical flaw. > **FA2:** Thank you very much for your feedback. To clarify this concern further, we will explain the rationale behind positional encoding to ensure better understanding. **(1) Input Order vs. Positional Information:** Firstly, changing the input order is not equivalent to altering the positional information of the inputs. Therefore, stating that the output remains unaffected by changing the input order does not imply that positional encoding is irrelevant. In fact, as shown in Table 3 of our paper, incorporating appropriate positional encoding can improve performance by 0.6 to 1.2 points. Although this improvement may not be as significant as the gains from our core module, positional encoding remains a classic operation in computer vision. We have retained this module as a byproduct of modeling positional information in our work. We will now provide examples to illustrate the difference between altering the input order and altering the positional information. **(2) 2D Positional Encoding vs. 
3D Positional Encoding:** Unlike 2D positional encoding, which typically uses indices, 3D point clouds are unordered and sparse, making index-based encoding unsuitable. Instead, 3D positional encoding employs Fourier encodings of the 3D coordinates (xyz), where `xyz` represents the spatial position of each point relative to the scene center (0, 0, 0). We will further illustrate the difference between these two approaches and how the input order affects them in the following **part-2**. --- Rebuttal Comment 2.2: Title: Response to Reviewer 6dEW (part-2) Comment: ## **Supplement to FA2:** ### **Example of 2D Positional Encoding:** Assume an input image of size 3x3:

| Patch0 (0, 0) | Patch1 (0, 1) | Patch2 (0, 2) |
| --- | --- | --- |
| Patch3 (1, 0) | Patch4 (1, 1) | Patch5 (1, 2) |
| Patch6 (2, 0) | Patch7 (2, 1) | Patch8 (2, 2) |

In a Vision Transformer (ViT), positional encoding (PosEmb) is based on token indices:

- PosEmb(0, 0) + Patch 0 -> [Final Embedding 0]
- PosEmb(0, 1) + Patch 1 -> [Final Embedding 1]
- PosEmb(0, 2) + Patch 2 -> [Final Embedding 2]
- PosEmb(1, 0) + Patch 3 -> [Final Embedding 3]
- PosEmb(1, 1) + Patch 4 -> [Final Embedding 4]
- PosEmb(1, 2) + Patch 5 -> [Final Embedding 5]
- PosEmb(2, 0) + Patch 6 -> [Final Embedding 6]
- PosEmb(2, 1) + Patch 7 -> [Final Embedding 7]
- PosEmb(2, 2) + Patch 8 -> [Final Embedding 8]

Assume the modified input order is as follows:

| Patch8 (0, 0) | Patch0 (0, 1) | Patch7 (0, 2) |
| --- | --- | --- |
| Patch1 (1, 0) | Patch6 (1, 1) | Patch2 (1, 2) |
| Patch5 (2, 0) | Patch3 (2, 1) | Patch4 (2, 2) |

The final features will be:

- **PosEmb(0, 0) + Patch 8** -> [Final Embedding 0]
- **PosEmb(0, 1) + Patch 0** -> [Final Embedding 1]
- **PosEmb(0, 2) + Patch 7** -> [Final Embedding 2]
- **PosEmb(1, 0) + Patch 1** -> [Final Embedding 3]
- **PosEmb(1, 1) + Patch 6** -> [Final Embedding 4]
- **PosEmb(1, 2) + Patch 2** -> [Final Embedding 5]
- **PosEmb(2, 0) + Patch 5** -> [Final Embedding 6]
- **PosEmb(2, 1) + Patch 3** -> [Final Embedding 7]
- **PosEmb(2, 2) + Patch 4** -> [Final Embedding 8]

If the order of the input tokens is changed, the final embeddings will differ, leading to different outcomes.

### **Example of 3D Positional Encoding:** Assume a point cloud with the following data:

| Point Index | xyz | rgb |
| --- | --- | --- |
| Point 1 | (1.0, 2.0, 3.0) | (255, 0, 0) |
| Point 2 | (4.0, 5.0, 6.0) | (0, 255, 0) |
| Point 3 | (7.0, 8.0, 9.0) | (0, 0, 255) |
| Point 4 | (1.5, 2.5, 3.5) | (255, 255, 0) |
| Point 5 | (4.5, 5.5, 6.5) | (255, 0, 255) |
| Point 6 | (7.5, 8.5, 9.5) | (0, 255, 255) |
| Point 7 | (2.0, 3.0, 4.0) | (128, 128, 128) |
| Point 8 | (5.0, 6.0, 7.0) | (64, 64, 64) |
| Point 9 | (8.0, 9.0, 10.0) | (192, 192, 192) |

**Where:**

- The `xyz` column represents the coordinates (x, y, z) of each point, i.e., its three-dimensional spatial position relative to the scene center (0, 0, 0) in the scene coordinate system.
- The `rgb` column denotes the color (r, g, b) of each point.

An example of the 3D absolute positional encoding is as follows:

| Point Index | xyz | Point Feature | Positional Encoding | Final Embedding |
| --- | --- | --- | --- | --- |
| Point 1 | (1.0, 2.0, 3.0) | PointFeat 1 | PosEmb(1.0, 2.0, 3.0) | PointFeat 1 + PosEmb(1.0, 2.0, 3.0) |
| Point 2 | (4.0, 5.0, 6.0) | PointFeat 2 | PosEmb(4.0, 5.0, 6.0) | PointFeat 2 + PosEmb(4.0, 5.0, 6.0) |
| Point 3 | (7.0, 8.0, 9.0) | PointFeat 3 | PosEmb(7.0, 8.0, 9.0) | PointFeat 3 + PosEmb(7.0, 8.0, 9.0) |
| Point 4 | (1.5, 2.5, 3.5) | PointFeat 4 | PosEmb(1.5, 2.5, 3.5) | PointFeat 4 + PosEmb(1.5, 2.5, 3.5) |
| Point 5 | (4.5, 5.5, 6.5) | PointFeat 5 | PosEmb(4.5, 5.5, 6.5) | PointFeat 5 + PosEmb(4.5, 5.5, 6.5) |
| Point 6 | (7.5, 8.5, 9.5) | PointFeat 6 | PosEmb(7.5, 8.5, 9.5) | PointFeat 6 + PosEmb(7.5, 8.5, 9.5) |
| Point 7 | (2.0, 3.0, 4.0) | PointFeat 7 | PosEmb(2.0, 3.0, 4.0) | PointFeat 7 + PosEmb(2.0, 3.0, 4.0) |
| Point 8 | (5.0, 6.0, 7.0) | PointFeat 8 | PosEmb(5.0, 6.0, 7.0) | PointFeat 8 + PosEmb(5.0, 6.0, 7.0) |
| Point 9 | (8.0, 9.0, 10.0) | PointFeat 9 | PosEmb(8.0, 9.0, 10.0) | PointFeat 9 + PosEmb(8.0, 9.0, 10.0) |

**Where:**

- The **Point Feature (PointFeat)** column represents the features of each point.
- The **Positional Encoding (PosEmb)** column displays the positional encoding for each point, where `PosEmb` denotes the positional encoding function.
- The **Final Embedding (PointFeat + PosEmb)** column is the sum of the point feature and its positional encoding.

In 3D encoding, even if the input order is changed, each point's representation remains consistent, and thus the final output remains unchanged. However, positional information is still inherently embedded within the point cloud. We will incorporate this discussion into the revised supplementary material to make the explanation clearer. 
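The distinction between index-tied and coordinate-tied positional encoding can be checked numerically. Below is a toy sketch of our own (not the authors' implementation; the Fourier encoding and the random attention weights are illustrative assumptions): with a `PosEmb` computed from each point's `xyz`, permuting the input points merely permutes the outputs, so every point keeps its final embedding, whereas a slot-index encoding (the ViT-style 2D case) breaks this.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 9, 12  # 9 points, feature dimension 12

def fourier_pos_emb(xyz, dim):
    # Toy Fourier encoding of 3D coordinates (a stand-in for PosEmb(x, y, z)).
    freqs = 2.0 ** np.arange(dim // 6)            # dim assumed a multiple of 6
    ang = xyz[:, :, None] * freqs                 # (N, 3, dim/6)
    return np.concatenate([np.sin(ang), np.cos(ang)], -1).reshape(len(xyz), dim)

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def self_attention(x):
    # Minimal single-head self-attention; permutation-equivariant by construction.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    a = q @ k.T / np.sqrt(d)
    a = np.exp(a - a.max(-1, keepdims=True))
    return (a / a.sum(-1, keepdims=True)) @ v

xyz = rng.normal(size=(N, 3))    # unordered 3D coordinates
feats = rng.normal(size=(N, d))  # per-point features (PointFeat)
perm = rng.permutation(N)        # same points, fed in a different order

# 3D case: PosEmb travels with the point, so each point keeps its embedding.
out = self_attention(feats + fourier_pos_emb(xyz, d))
out_perm = self_attention(feats[perm] + fourier_pos_emb(xyz[perm], d))
assert np.allclose(out_perm, out[perm])

# 2D/ViT case: PosEmb is tied to the slot index, so permuting the inputs
# changes the per-point embeddings.
idx_pe = rng.normal(size=(N, d))
assert not np.allclose(self_attention(feats[perm] + idx_pe),
                       self_attention(feats + idx_pe)[perm])
```

The first assertion illustrates the rebuttal's point: positional information is still used by the attention, yet reordering the input leaves each point's output unchanged.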
--- Rebuttal Comment 2.3: Title: Response to Reviewer 6dEW (part-3) Comment: > Feedback3: For Q6: I hope you could be consistent about how you deal with the positional encoding (if you need to add it). It looks from the new ablation result that using 'add' for both leads to the best performance. > **FA3:** Thank you for your feedback. Previously, we focused primarily on the Overall mIoU metric, where the differences were indeed minimal. We appreciate you pointing this out, and as a result, we will update both operations to the "Add" setting in our final version. Your suggestion will help make our paper more robust. In summary, your suggestions have been incredibly helpful to us. On a broader scale, your input has clarified our motivation and technical innovations. On a detailed level, your attention to specifics has made our paper more rigorous and solid. We sincerely appreciate your contributions to improving this work. If there are any other questions or areas you'd like to discuss, we welcome further conversation.
Rebuttal 1: Rebuttal: We would like to express our gratitude to the reviewers for their valuable feedback and positive comments on our paper. Their insightful reviews have greatly contributed to improving the clarity and overall quality of our work. We appreciate Reviewer **6dEW** $\color{red}{(Rating:\mathbf{3},\ Confidence:\mathbf{3})}$ for acknowledging the strengths of our paper. Specifically, they mentioned our new framework for 3D-RES and highlighted the analysis of spatial information among objects. They also recognized the excellent performance of our method and the thorough validation of the designed modules. Reviewer **p7dx** $\color{red}{(Rating:\mathbf{7},\ Confidence:\mathbf{5})}$ appreciated our novel perspective of explicitly assigning spatial positions to text for 3D-language modeling and acknowledged our extensive comparative experiments with state-of-the-art methods and detailed ablation studies. Furthermore, they highlighted the clarity of our motivation and the thoroughness of our statistical analysis. Additionally, we are grateful for their recognition of the comparison of qualitative results with previous models. Such acknowledgment reinforces the validity of our research findings. Reviewer **8gkL** $\color{red}{(Rating:\mathbf{7},\ Confidence:\mathbf{5})}$ acknowledged our in-depth exploration of text-conditioned 3D spatial perception, addressing both 3D spatial relationships and natural language structure. They appreciated the TLM module's role in enhancing performance and its high inference speed in explicit spatial reasoning for 3D scenes. Furthermore, they acknowledged the data efficiency and generalization capabilities of our RWS module. Additionally, they noted that our video demo and the visualizations in Figure 3 intuitively demonstrate the capabilities of our model. Reviewer **MnE2** $\color{red}{(Rating:\mathbf{6},\ Confidence:\mathbf{5})}$ commended the clear articulation of our motivation and the quantitative analysis presented. 
They agreed that spatial relation reasoning is crucial for understanding 3D scenes and recognized that our proposed method effectively extracts spatial relationships from complex language descriptions, enabling text-centric spatial reasoning. Additionally, they expressed interest in our rule-guided weak supervision strategy, which demonstrates the ability to perform localization and segmentation using natural language object nouns without any explicit supervision. The reviewer also supported our extensive experiments, which thoroughly validate the effectiveness of the proposed modules. We sincerely thank the reviewers for recognizing these strengths, and we appreciate their positive feedback on the clarity, novelty, and effectiveness of our proposed methods. Their comments have further motivated us to address the concerns and weaknesses pointed out in their reviews, and we provide detailed responses in our rebuttals.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
From Causal to Concept-Based Representation Learning
Accept (poster)
Summary: The paper focuses on recovering human-interpretable concepts from observations. It proposes a concept-based representation learning method, which relaxes causal notions with a geometric notion of concepts. Experiments on synthetic data, multimodal CLIP models, and large language models supplement the results and show the utility of the approach. Strengths: 1. The authors hope to find a middle ground where they can simultaneously identify a smaller set of interpretable latent representations, which is an interesting idea. 2. This work can be interpreted as a new direction for identifiable representation learning, in order to study when interpretable concepts can be recovered from data. 3. Experiments on synthetic data, multimodal CLIP models, and large language models supplement the results and show the utility of the approach. Weaknesses: 1. This approach sacrifices causal semantics. This can be particularly problematic in situations where a deep understanding of causality is crucial, such as in root cause analysis, where the goal is to identify the fundamental reasons behind a problem or an event. Without causal semantics, one might only address the symptoms rather than the core issues, leading to temporary or ineffective solutions. 2. The authors have made numerous assumptions within the article, which could potentially impact the universal applicability of the theory presented. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weaknesses. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review. We address their comments below. > This approach sacrifices causal semantics. This can be particularly problematic in situations where a deep understanding of causality is crucial In this work, we focus on the many applications where a relaxation of causal semantics is allowed or necessary, such as when interventional data is out of reach (L46-50, L118-124). While a deep understanding of causality would be ideal, concept learning is itself a mature field independent of causality and we present a middle ground that is useful in contemporary applications. > The author has made numerous assumptions within the article, which could potentially impact the universal applicability of the theory presented. We would like to highlight that without any assumptions, we cannot have identifiability (L29-31), therefore we make natural assumptions to prove our results. However, we tried to keep the assumptions as general as possible, e.g. we allow non-linear mixing (Assumption 1) and use Gaussian noise (Assumption 3). We welcome additional suggestions to improve the work.
Summary: This paper proposes a theory to identify latent concepts from model representations. In contrast to previous work in the field of concept-based models, concepts are expected to be linearly represented in the model representations, and a linear transformation A is associated with such concepts. The paper is theory-driven and presents rigorous results for the identifiability of such concepts. The authors also present some experiments related to the theory: one verifies the validity of the proposed theory in a synthetic setup, one is on CLIP, and one is on LLMs. Strengths: ### **Originality** The paper proposes a new perspective on concept-based learning by also providing identifiability guarantees. The notion of concepts as linear subspaces in the representation space is interesting although not entirely new, see [1]. The results on identifiability are new and potentially useful for follow-up works: it is a valid idea to learn such concepts from conditional distributions. ### **Quality and Clarity** The paper is of high quality, providing important new results for the identifiability of latent concepts, and addresses the important problem of learning high-level concepts from data. The presentation is somewhat clear, although it can be improved. ### **Significance** Bridging identifiability in (causal) representation learning and concept-based learning is an open problem. The main contribution of the authors is indicating a viable, theoretically grounded route to achieve it. It can also be related to vision-language models, like CLIP, that are typically learned without any concept supervision but, due to interpretability concerns, it is often of interest to know whether concepts are linearly represented and used by the model. The theory seems solid and is of interest for advancing research in concept-based models. [1] Concept Whitening for Interpretable Image Recognition, Chen et al. 
(2020) Weaknesses: ## Major ### **Experiments** One thing that is left particularly unresolved in the paper is the experimental section and the supposed evidence in support of the theory: 1) The experiments on CLIP and LLMs seem to me unrelated to the theory devised by the authors, and rather support, on one hand, that concepts can be found in the representations of CLIP to some extent (which was previously observed, e.g. [2,3]), and on the other hand, that it is possible to improve the steering of LLM predictions with a matrix operation rather than vector addition. How is this related to the theory the authors propose? How is it that CLIP training aligns with the assumptions that would lead to the identifiability of latent concepts? LLMs like LLaMA or GPT are next-token predictors learning with a different objective, see [4]; how are these models even related to the theory proposed? 2) It would have been more useful to provide experimental evidence of the proposed contrastive learning method on semi-synthetic datasets like MPI3D or Shapes3D. Some real-world datasets are of particular interest to the community in concept-based models, being more challenging, like CUB200 and OAI [5], CelebA [6], and many more [7,8]. 3) The synthetic experiment is impossible to understand from the main text, and many details concerning the data, the proposed learning procedure (which is also very detailed), and the metrics are confined to appendices, making it necessary to consult them at length. Nonetheless, it seems that under the working assumptions the model trained in a contrastive manner on the synthetic data and environments captures the right latent concepts. How do you evaluate $A^e$? ### **Assumptions** It would be beneficial to present the assumptions and discuss the intuition behind them. Assumption 1 is fine and is common in studying identifiability, but if the aim is to identify concepts, it seems unnaturally restrictive to consider only invertible functions $f$. 
One could hope to extend the results also to non-invertible functions. However, this is not a serious limitation given the novelty of the results.
Assumption 2 takes the concepts to be linearly independent; what happens if for some of them this is not the case?
Assumption 3 requires a Gaussian distribution for the noise; it is remarked that other distributions work as well (which distributions?), but there is no citation.
How should Assumptions 4 and 5 be understood, and what data are expected to be collected? Are these expensive to obtain in practice? They seem to presuppose a lot of knowledge about which latent concepts should be identified, which may not be available when concepts are not known a priori. ## Minor ### **Conditional distributions in practice** It is a bit puzzling how data should be gathered for the theory to work, and thus how the proposed method scales. The synthetic experiment offers a proof of principle, but it does not show how the model behaves in settings where more concepts (including structured ones, like the color of an object) and more conditional distributions are to be considered. ### **Related work** Causal Representation Learning seems more of an inspiration to the paper than having a tight connection to the theory. The authors consider conditional distributions for latent concepts and the proof techniques are inspired by the iVAE work (2019), so there is no clear link to the causal aspects that should be taken into consideration. It seems rather that works on identifiability in Causal Representation Learning become relevant if one wants to extend the authors' theory to causal variables, not being essential to support their claims and the connection. I was also expecting to see a comparison to current practices in concept-based models, which require dense supervision of the concepts in some cases [1, 7, 8], partial supervision [6], or language guidance [2,3]. 
Some approaches aim to learn concepts only by leveraging supervision on the classification task, see [9], and there is seemingly related work on continuous latent variable identification [10]. ## Summary The experiments do not complement the theory and dilute the message by showing two post-hoc analyses on CLIP and LLMs. I struggle to see how either constitutes valid evidence for the theory proposed. On the other hand, additional experiments on known semi-synthetic datasets or real ones would highlight the extent of the theory to the community in concept-based interpretability. The presentation of the material also requires some clarification around the assumptions the authors make and their validity in practice.

[2] Label-free Concept Bottleneck Models, Oikarinen et al. (2023)
[3] Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification, Yang et al. (2023)
[4] On Linear Identifiability of Learned Representations, Roeder et al. (2021)
[5] Concept Bottleneck Models, Koh et al. (2020)
[6] GlanceNets: Interpretable, Leak-proof Concept-based Models, Marconato et al. (2022)
[7] Concept embedding analysis: A review, Schwalbe (2022)
[8] Concept-based explainable artificial intelligence: A survey, Poeta et al. (2023)
[9] Provable concept learning for interpretable predictions using variational autoencoders, Taeb et al. (2022)
[10] Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning, Lachapelle (2023)

Technical Quality: 4 Clarity: 2 Questions for Authors: **Recovering the concepts without post-hoc analysis?** If I understood correctly (correct me if I am wrong), the identifiability class studied in the paper presupposes recovering the latent concepts (the matrix A) up to an invertible transformation $T$. Thus, the theory only offers guarantees that a linear probe on the latent representations would recover the latent concepts; is that correct? 
In practice, one still has to find the matrix $A$ related to the concept, and that cannot be done without concept annotation. Is that the case? **Typo?** In the definition, it seems that $d_Z$ and $\tilde{d_Z}$ have to be the same to guarantee that the inverse exists (and this is how it is used in the proofs). Is that the case? Would the theory also hold for models with different latent dimensions $d_Z \neq \tilde{d_Z}$? I asked other questions in the weaknesses part. Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: All assumptions are natural limitations of the proposed theoretical results. It is not clear whether foundation models are trained under the conditions that the authors found for assessing the identifiability of the concepts, and the connection to LLMs seems a bit weak. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful review and are glad that they appreciate our contributions towards the important open problem of bridging causal representation learning and concept-based representation learning. > The experiments on CLIP and LLMs seem to me unrelated to the theory...How is this related to the theory the authors propose? The CLIP experiments showcase that the learned representations of concepts indeed lie approximately within an affine hyperplane, which is an illustration of our key hypothesis. Moreover, the training data approximately satisfies our assumptions if we view the caption as the environment, i.e., images with similar/equal captions form a concept conditional distribution and the joint image distribution is the observational distribution. Our experiments also show that the representations of different CLIP models agree up to linear transformation. As noted by the reviewer, a more rigorous connection would be to establish whether CLIP models learn a representation such that concepts are represented by affine hyperplanes and whether concept conditional distributions are approximately given by eq. (2). In such a case, we can apply our identifiability theory to conclude that different CLIP models necessarily learn approximately the same representation up to linear transformation. Proving these facts rigorously for the representation based on the CLIP loss is a challenging problem we leave for future work. The LLM experiment is a bit more exploratory, and the connection to our theory is via our conceptualisation of concepts. Indeed, this helps us to obtain intuition about what the right steering vector should look like, and it guided our construction. > It would have been more useful to provide experimental evidence ... on semi-synthetic datasets like MPI3D or Shapes3D. Some real-world datasets ... like CUB200 and OAI [5], CelebA [6], and many more [7,8]. We appreciate the nice suggestions. 
Since the primary thrust of the work is theoretical and it is currently quite challenging to scale these types of methods to large datasets, we leave this interesting direction for future work. > The synthetic experiment is impossible to understand from the main text and many details ... are confined to appendices...How do you evaluate $A^e$? We apologize that this part is difficult to understand, as details were deferred due to lack of space. In order to evaluate $A^e$ in our synthetic experiments, we compute the linear correlation metrics $R^2$ and MCC between the ground truth $Z$ (restricted to the projection space) and the predicted $Z$ (from our model), which we report in our tables. As outlined to other reviewers, we will use additional space to add experimental details to the main text. About assumptions in general, we believe our assumptions could likely be further relaxed at the expense of more technical work, which we leave for future work. > Assumption 1 is fine and is common in studying identifiability. One could hope to extend the results also to non-invertible functions. However, this is not a serious limitation given the novelty of the results. We agree. > Assumption 2 takes the concepts to be linearly independent; what happens if that is not the case for some of them? Then we cannot, in general, identify the concept matrix; think, e.g., of the case where two concepts are collinear. > Assumption 3 requires a Gaussian distribution for the noise; it is remarked that other distributions work as well (which distributions?) but there is no citation. The result could be extended to exponential families; we will clarify this and add a reference (see, e.g., Khemakhem et al. [50]). > How should Assumptions 4 and 5 be understood, what data are expected to be collected?...how data should be gathered for the theory to work, and thus the scaling of the proposed method. Assumptions 4 and 5 state that the environments need to be sufficiently diverse. 
We do not think that the approach should be used to collect data in practice, because this would indeed require a lot of prior knowledge which could probably be directly used to learn the concepts. Instead, we think that real-world data often comes with subtle heterogeneity that allows us to identify concepts. > Causal Representation Learning seems more of an inspiration to the paper... not being essential to support their claims and the connection. We generally agree with the reviewer and refer, e.g., to the discussions in L43-50, 60-63, for this remark. We're happy to add more details to clarify this. > I was also expecting to see a comparison to Concept-based models' current practices We appreciate the relevant references and are happy to include them in the paper. In short, some of these works study very useful empirical methods whereas we focus on rigorous theoretical contributions, while the others differ in the kind of assumptions and settings studied, i.e., they're related but not directly comparable at a technical level. > Recovering the concepts without post-hoc analysis?...One in practice has still to find the matrix $A$ related to the concept, and that cannot be done without concept annotation. Is that the case? Yes, our identifiability is only up to linear transformations. After having learned the nonlinearity, in order to recover the transformation itself, additional information such as concept annotation may be utilized in practice. > It seems that $d_Z$ and $\tilde{d_Z}$ have to be the same to guarantee the inverse exists (and this is how it is used in the proofs). Is that the case? Would the theory hold also for models with different latent dimensions $d_Z \neq \tilde{d_Z}$? Yes, for technical reasons this is the case. To consider $d_Z\neq \tilde{d_Z}$ we need to relax the injectivity assumptions; otherwise, $d_Z$ has to correspond to the dimension of the data manifold. 
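To make the dimension-matching point above concrete, here is an informal sketch of our reading of why injectivity ties the two latent dimensions together (stated here for illustration only, not taken from the paper's proofs):

```latex
% Informal sketch: injectivity forces equal latent dimensions.
\text{Let } f:\mathbb{R}^{d_Z}\to\mathcal{X} \text{ and } \tilde f:\mathbb{R}^{\tilde d_Z}\to\mathcal{X}
\text{ be injective continuous mixings with the same image, } f(\mathbb{R}^{d_Z})=\tilde f(\mathbb{R}^{\tilde d_Z}).
\text{Then } h:=\tilde f^{-1}\circ f:\mathbb{R}^{d_Z}\to\mathbb{R}^{\tilde d_Z} \text{ is a bijection,}
\text{and if } h \text{ is a homeomorphism, invariance of domain yields } d_Z=\tilde d_Z.
```

This is why relaxing injectivity would be needed to allow $d_Z \neq \tilde{d_Z}$, as the response states.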
We hope the rebuttal answered the questions raised, and are also happy to take additional feedback to improve the writing. --- Rebuttal 2: Title: Reply to authors Comment: Thank you for your clarifications. After reading the reply and other reviews, I feel some parts of the paper still present the limitations we pointed out (experimental section). I have some comments for your rebuttal: > We appreciate the nice suggestions. Since the primary thrust of the work is theoretical and it is currently quite challenging to scale these types of methods to large datasets, we leave this interesting direction for future work. I understand that scaling to real-world datasets is challenging and a bit out of scope. However, it should be shown that the method can be applied to semi-synthetic datasets (even MNIST, which is 28x28-dimensional); otherwise, I am left puzzled about the practical utility of the theory and method proposed. Given the theoretical focus of the authors, I accept their decision to do that in future work. > The CLIP experiments showcase that the learned representations of concepts indeed lie approximately within an affine hyperplane, which is an illustration of our key hypothesis. This is ok. It seems to me more of a verification of an interesting empirical fact rather than a consequence or an explanation that is already captured by the theory. > In such a case, we can apply our identifiability theory to conclude that different CLIP models necessarily learn approximately the same representation up to linear transformation. Proving these facts rigorously for the representation based on the CLIP-loss is a challenging problem we leave for future work. That would be interesting, but I would expect the theory would require several adjustments to consider the captions used to train visual representations. > The LLM experiment is a bit more exploratory, and the connection to our theory is via our conceptualisation of concepts. 
Indeed, this helps us to obtain intuition about what the right steering vector should look like and guided our construction. Similarly, this is an interesting empirical fact, but not explained by the theory. > As outlined to other reviewers, we will use additional space to add experimental details to the main text. Thank you. > We do not think that the approach should be used to collect data in practice, because this would indeed require a lot of prior knowledge which could probably be directly used to learn the concepts. Instead, we think that real-world data often comes with subtle heterogeneity that allows us to identify concepts. What do you mean by real-world data having this heterogeneity? Is it something that could be verified? It seems to me that you first have to assume what your concepts of interest are (being linearly related to generative variables) and then to have those environments. Both points seem challenging. > We generally agree with the reviewer and refer, e.g., to the discussions in L43-50, 60-63, for this remark. We're happy to add more details to clarify this. Thank you. > In short, some of these works study very useful empirical methods whereas we focus on rigorous theoretical contributions, while the others differ in the kind of assumptions and settings studied, i.e., they're related but not directly comparable at a technical level. I also agree with this. Common practice has been shifting towards using concept-supervised examples (which trivially addresses the identifiability of concepts), which is a limitation, and your theory is very relevant in relaxing it. > Yes, our identifiability is only up to linear transformations. After having learned the nonlinearity, in order to recover the transformation itself, additional information such as concept annotation may be utilized in practice. Thank you for the clarification. It would be interesting to connect to [1] to see if the same can be done without concept supervision. 
[1] When are Post-hoc Conceptual Explanations Identifiable? Leemann, UAI 2023. --- Rebuttal Comment 2.1: Comment: Thanks for your comments concurring with the positioning of this paper in relation to the existing literature. We also appreciate your open-mindedness and willingness to allow some of the experimental concerns to be sorted out in more dedicated future work. And yes, the work [1] you cited is along those lines (we briefly highlight this work in L163-165). We again thank the reviewer for their great effort in reviewing the paper and are happy to take additional questions or suggestions.
Summary: This work takes a step toward learning human-interpretable concepts while relaxing the restrictions of (interventional) causal representation learning, and it does so inspired by the linear representation hypothesis. The authors claim that learning the generative process and the "true" causal factors $f^{-1}, Z$ from observations $X$ using interventions has important caveats: 1) One needs many interventions ($\Omega(d_z)$) for identifiability of $f^{-1}, Z$, which might be too much of a requirement in many cases, 2) There's no reason that a priori such latent representations are interpretable, 3) Interventions in many examples might not be possible at all, 4) One might not need the whole $f^{-1}, Z$, and there are cases where we can seek only a handful of interpretable concepts for an application without learning the full encoder and latent representation. The authors then introduce the geometric notion of concepts as linear subspaces in the latent space of $Z$. This is inspired by the abundant evidence on the linear representation hypothesis. Based on this notion, they define the concept conditional distributions as a source of supervision for learning concepts, which will replace interventional distributions as the source of supervision for learning causal representations $Z$. Concept conditional distributions are simply defined by filtering the dataset with samples that are $\textit{perceived}$ to satisfy a concept (see eq. 1). The problem then becomes whether, given an observational distribution $X^0$ and a set of concept-conditional distributions $X^1, \dots, X^m$ corresponding to $m$ concepts, one can identify the linear subspaces $A^mf^{-1}(x)$ corresponding to those concepts. The main theorem then proves the identifiability (according to definition 4) of those concepts given linear independence of concepts, as well as some diversity constraints on the environments. 
The authors then try to validate the claim using 3 experiments: 1) Synthetic experiments with various linear and non-linear mixing $f$, and different dimensions for $Z,X$. 2) Evaluating the linearity of the valuations of the concepts learned via multi-modal CLIP (inspired by the similarity of the CLIP objective to their contrastive algorithm). 3) Showing that concepts can be used to steer LLM outputs. Strengths: - I find this work original and novel. There have been attempts at learning concepts recently, but from my understanding (as well as the authors' mention of the related work) such attempts have been limited to specific domains, while the work at hand seems to be addressing that challenge in a broad way. - Moreover, I believe they have nicely translated the linear representation hypothesis to concept learning, and relaxing the restrictions of interventional causal representation learning is an important endeavor (see questions though), as is also nicely motivated in multiple places in the paper. - I find the theoretical result insightful and important; not only does this work move away from interventions as a not-so-ideal tool, but it also clearly demonstrates the theoretical advantage of concepts vs. interventions (only theoretically though). - The experiments touch upon different modalities, showing the versatility of the claims. - The presentation and arguments are generally well-constructed (up until page 8). Weaknesses: - In the synthetic experiments, I was expecting to see large $d_z$ and small $n$ to match the claims that had been made earlier as to the advantage of concept conditional distributions, but the dimensions are quite small. I can see that they show a proof of concept, but still, it would have been nice to be consistent with the claims made earlier (unless there's a reason why the authors didn't do so). - Could the authors think of any experiment to contrast concept learning to CRL? Maybe with simple datasets like CLEVR or 3d-shapes? 
Isn't there a way to try to learn causal representations and concepts, and show empirically that one is easier to achieve? I understand that the premise was that $Z$ is not always interpretable in the first place, but I think there would exist situations where it would be. If I understand correctly, do the authors think such an experiment would add to the empirical evidence for their method? - I'm also a bit unclear about where this work is taking us, and would have liked it if it was explained better. In particular, are we hoping to change our representation learning towards learning concepts? If so, are you proposing this for reasoning tasks? For alignment? Interpretability? Or what else? The reason I'm asking is that wouldn't we probably still need some causal representations in some reasoning tasks, say in vision? Basically, a short discussion of when we would prefer concepts over causal representations (or else) would be helpful. ---- Writing and Clarity: - The notion of environment in the context of concepts appeared out of the blue on page 7 (same with its notation that followed). - Not a weakness of the method - but the learning method seems like an important component of the paper which is deferred altogether to the appendix, i.e., one would learn about the identifiability, but there is no mention of the actual method to learn such (identifiable) concepts, which might hide the difficulties and challenges associated with it. Please also see the questions. Technical Quality: 3 Clarity: 3 Questions for Authors: - Could you connect/contrast this work with the recent advances in sparse autoencoders (SAEs)? While the problems are somewhat differently motivated, the resulting concepts bear resemblance to the learned sparse features from activations of transformers. Could the authors comment on this please? - If we care about a few features, why not use CRL methods that guarantee weaker identification requiring much less interventions? (See question below). 
- I generally agree with the motivation and direction of the paper, but I feel like the CRL community has been aware of these restrictions and recently took steps to address them; for instance, from what I understand, the multi-node intervention line of work (cited by the authors) alleviates the challenge of perfect interventions and relaxes the requirement of learning all of $Z$; instead, weaker and more general notions of identifiability have been introduced. A proper discussion on this would be helpful (more than what is in the appendix). Wouldn't it make sense to leverage such methods in tasks where we have a prior that there is some underlying causal representation? - Does equation 2 come from the independence of concepts (asking because that is introduced later)? Why is it not $k=\dim(C)$? - Line 294 onwards, is $e$ properly defined before and used here? - Line 304, is $S_n$ a typo? $n$ was used as a superscript before that; here it's used as a subscript. - Could assumption 4 be explained in words as well? Where does it come from? When/in what situations does it break? - How should one interpret table 2 of the appendix? Is there a reason why it's not plotted but presented as numbers? Shouldn't we expect a linear plot? - Line 165, what do you mean by entangled concepts? Is there an experiment to show that? Or do you mean the superposition of atomic concepts? - (Not a question impacting the score) Related to independence of concepts: Can the theory say anything about hierarchical concepts similar to hierarchical representations? Remark: I am willing to increase my score if some of the questions and weaknesses are discussed since I find the direction of this work quite interesting and important. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: See questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and nice summary of the paper. We also appreciate their insightful comments on the importance of this work. > In the synthetic experiments, I was expecting to see large $d_z$ and small $n$ Thanks for the suggestion; we ran additional experiments and included the results in the global response. We kept $n$ as $4$ and increased $d_z$. While the metrics naturally degrade a bit, they're still comparable to those of other nonlinear CRL works. > Could the authors think of any experiment to contrast concept learning to CRL? This is an interesting suggestion and would potentially add to the empirical evidence. However, it is beyond the scope of our work to compare against CRL comprehensively, partly because we're studying settings where CRL is not directly applicable (e.g., due to lack of interventional data). Moreover, our work is primarily theoretical, and we leave extensive empirical comparison of concept learning methods to CRL techniques for future work. > I'm also a bit unclear about where this work is taking us, and would have liked it if it was explained better. In particular, are we hoping to change our representation learning towards learning concepts? Our hope with this work is to provide concept learning with a rigorous footing via the theory of identifiability. We do not focus on a specific task and instead build a conceptual bridge between causal representations and concept learning. While causal representations may be ideal or even necessary for some tasks, our work presents a middle ground for many contemporary settings where this may not be possible (please see also L43-50). As reviewer `ohha` points out, "Bridging identifiability in (causal) representation learning and concept-based learning is an open problem," and this is one of the main motivations of our work. We will include this discussion in the paper. 
> The notion of environment in the context of concepts appeared out of the blue on page 7 Thank you; we will make sure to introduce this terminology carefully. > Not a weakness of the method - but the learning method seems like an important component of the paper which is deferred altogether to the appendix We agree that the contrastive learning method is an important component of the work; however, we have regrettably deferred the details to Appendix F due to lack of space. With the additional page available in the final version, we are happy to include this in the main paper. We next address the questions in order. > Could you connect/contrast this work with the recent advances in sparse autoencoders (SAEs)? We are aware of recent works on SAEs that learn interpretable features in models. However, to the best of our knowledge, they do not provide theoretical identifiability guarantees for learning concepts, which is what our work endeavors to provide. > If we care about a few features, why not use CRL methods that guarantee weaker identification requiring much less interventions?...the multi-node intervention line of work (cited by the authors) alleviates the challenge of perfect interventions and relaxes the requirement of learning all of $Z$ Could the reviewer please clarify which specific work they mean? We're more than happy to comment further and have a proper discussion on this in the paper as well. > Does equation 2 come from the independence of concepts? Why is it not $k = \dim(C)$? Yes, equation 2 arises if the noise for the noisy estimates is independent for all atomic concepts. The product runs from 1 to $\mathrm{dim}(C)$, so our expression is the same as $\prod_{k=1}^{\mathrm{dim}(C)}$. > Line 294 onwards, is $e$ properly defined before and used here? We will clarify that $e$ is just an environment label for a concept conditional distribution. > Line 304, is $S_n$ a typo? $n$ was used as superscript before that, here it's used as a subscript. 
No, here $S_n$ corresponds to the permutation group on $n$ elements; we will use a different font to make the distinction clear. > Could assumption 4 be explained in words as well? Where does it come from? When/in what situations does it break? This condition is similar to other diversity conditions in identifiability theory. It ensures that there are sufficiently many, non-redundant datasets. It breaks, e.g., if there are fewer datasets than atomic concepts of interest or when several of the concept conditional datasets disagree. > How should one interpret table 2 of the appendix? Is there a reason why it's not plotted but presented as numbers? Shouldn't we expect a linear plot? Note that for the hue variables, the different numbers correspond to different colors; they do not correspond to meaningful concept valuations but are just discrete labels for the different colors. Therefore, we do not expect to recover a linear relation between the evaluated concept valuations and the label index. This is different for attributes such as size or orientation, where the value should correspond approximately to the valuation (potentially up to a non-linear transformation). Note, moreover, that it is not clear whether color is an atomic concept as we assume here (e.g., standard color representations are two-dimensional). We nevertheless observe a high correlation coefficient between the representations learned by different models (see Table 5). > Line 165, what do you mean by entangled concepts...Related to independence of concepts: Can the theory say anything about hierarchical concepts similar to hierarchical representations? We would like to clarify that in assumption 2, we talk about linear independence (and not statistical independence) of atomic concepts. 
However, the concepts we actually allow can each consist of multiple atomic concepts, and different non-atomic concepts can overlap (not superposition), which can be interpreted as hierarchical or entangled concepts as well. We thank the reviewer for their suggestions and welcome additional feedback to improve the text. --- Rebuttal Comment 1.1: Comment: Thank you for your efforts and for the clarifications. Also, thanks for carrying out experiments with larger $d_z$ and smaller $n$. Although I originally brought it up so that there would be an experiment aligning with the premise of the paper about very large $d_z$, I find this additional experiment fine as well. I appreciate the following comment: > This is an interesting suggestion and would potentially add to the empirical evidence. However, it is beyond the scope of our work to compare against CRL comprehensively, partly because we're studying settings where CRL is not directly applicable (e.g., due to lack of interventional data). Moreover, our work is primarily theoretical and we leave for future work to extensively compare concept learning methods to CRL techniques empirically. However, since the paper contrasts itself a number of times to CRL requiring many interventions, I don't see it totally beyond the scope of this work to make a minimal effort at contrasting to basic CRL methods. I agree with the authors that the motivation for concept-based learning is to go beyond where CRL is not applicable, but still would find it compelling if there could be a setup where the advantage (in terms of the smaller number of domains required) could be showcased. > Could the reviewer please clarify which specific work they mean? We're more than happy to comment further and have a proper discussion on this in the paper as well. 
The works that I can recall are: https://proceedings.mlr.press/v238/ahuja24a/ahuja24a.pdf, https://arxiv.org/pdf/2311.12267, https://arxiv.org/pdf/2406.05937 (there are probably more) Thanks again for your rebuttal. I like this work and its direction and will maintain my score for now and adjust it if needed after discussion with other reviewers. --- Rebuttal 2: Comment: > The works that I can recall are: https://proceedings.mlr.press/v238/ahuja24a/ahuja24a.pdf, https://arxiv.org/pdf/2311.12267, https://arxiv.org/pdf/2406.05937 (there are probably more) Thanks for clarifying. We have taken another look at these papers, and to the best of our knowledge, these papers still require a number of environments that is lower bounded by the dimension of the latent space $d_z$ (with a few notable exceptions with purely observational data, on which we comment below; please see also the footnote on page 3 where we state this). Thus, these papers do not seem to support the claim that there are "CRL methods that guarantee weaker identification requiring _much less interventions_". If we are mistaken, and the reviewer can pinpoint a specific result in one of these papers that accomplishes this, we would be happy to discuss further. It is true that the work https://proceedings.mlr.press/v238/ahuja24a/ahuja24a.pdf (reference [2] in the paper) and similarly [54, 40] show identifiability from just a single environment, as we state in the footnote in the paper. Note, however, that they all make restrictive assumptions on the mixing function and on the distribution of all variables of the latent space, e.g., in the case of [2] it is assumed that the mixing function is polynomial and the support of the latent variables is the Cartesian product of bounded intervals. Therefore, neither these results nor their techniques extend to general mixing functions or settings where we only make assumptions on the distribution of some of the latent variables. 
This contrasts with our work, where we make minimal assumptions on the mixing function and only make assumptions on the latent distributions with respect to the concepts of interest. > However, since the paper contrasts itself a number of times to CRL requiring many interventions, I don't see it totally beyond the scope of this work to make a minimal effort at contrasting to basic CRL methods. I agree with the authors that the motivation for concept-based learning is to go beyond where CRL is not applicable, but still would find it compelling if there could be a setup where the advantage (in terms of the smaller number of domains required) could be showcased. Thank you for the question. Making an apples-to-apples comparison to CRL is not straightforward because the goals are different. However, we can say the following: - Experimentally, we would like to clarify that in cases where _CRL is also applicable_, our algorithm, which is similar to Buchholz et al. [13], can always be applied. Indeed, we clarify in the paper that our algorithm is inspired by their work; please see L370-371, 1451. However, in the case of a sublinear number of environments in our setting, standard CRL techniques are not applicable whereas our work is applicable, and this is the main motivation for our paper. - Theoretically, if we can intervene as per the concept distribution we work with (as also described in L129-134), then our results show that $2n$ concept interventions suffice to learn $n\ll d_z$ concepts, and existing methods would not handle this setting to the best of our knowledge (please see also Appendix C for related technical details). Thus, in our setting, current CRL results require $\Omega(d_z)$ interventions, whereas we only need $o(d_z)$ interventions. Of course, part of our contribution is to propose a different goal (learning concepts) under different assumptions (conditioning vs. intervention), which makes such comparisons difficult. 
Nonetheless, we hope this offers some insight to understand how our work compares. --- Rebuttal Comment 2.1: Comment: Thanks again to the authors for their helpful clarifications and explanations. If I remember correctly, the results in [2] apply to general diffeomorphisms, and the number of required environments grows more slowly than $\Omega(d_z)$; however, I totally agree that the goals are a bit different, and that is why I liked the paper in the first place. Relaxing the uncovering of the unmixing for the identification of concepts is quite nice, and I agree (with the help of the authors' clarification) that an apples-to-apples comparison might not be possible or fair. Thanks again, and I'm optimistic that the authors will take the various feedback from all the reviewers into account to update the manuscript and the presentation. I'd like to see this work accepted; therefore, I'm raising my score.
Summary: The authors argue for the shift from causal representation learning (CRL) to concept-based representation learning, since the current CRL framework relies on strong requirements such as interventional datasets and stands far from realistic, practical use-cases. The paper formalizes the notion of concepts and establishes a theoretical foundation on the identifiability of concepts. The experiments demonstrate the utility of the framework. Strengths: - The motivation is convincing and the framework is novel. It also provides a rigorous foundation for the notion of concept and its identifiability. - The writing is clear and easy to follow. The paper provides a thorough literature review, which makes it very helpful to understand the paper's positioning and key contributions. - Experimental results on CLIP and LLMs are interesting. They support the paper’s motivation to move from CRL to concept-based representation learning. Weaknesses: The dataset $X^e$ from each environment is associated with a different concept $C^e$ and corresponding valuation $b^e$. The proposed method using contrastive learning requires knowing how the dataset is partitioned into each environment, i.e., $X^0, \cdots, X^m$. This implies that the framework is naturally more useful for **discrete** concepts (i.e., discrete valuation), as showcased in the experiments where the authors use discrete labels. However, as the motivation suggests, concepts could be continuous in many cases (e.g., intensity of the color). Therefore, I have doubts about the practical utility of the proposed framework, since it cannot handle such continuous concepts. In other words, the requirement of data partition $X^0, \cdots, X^m$ goes against the motivation of the proposed framework of handling continuous concept valuations. Technical Quality: 3 Clarity: 4 Questions for Authors: - (line 208) Can you elaborate? I mean isn’t $A$ a projector matrix? 
- Is there any way to quantitatively measure how linearly related the concepts learned by two different models are to each other? - The proposed method using contrastive learning should be described in the main section in more detail. Currently, it appears in the appendix, but I think that the algorithm is a key part of the paper which illustrates the practical utility of the proposed framework. (minor) - (line 1171-1172) Parenthesis is not closed. - (line 370, 1491) “the number of concepts $m$” should be “the number of environments $m$” Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review and are glad they find the paper novel, well-written and rigorous. We address the weakness below. > In other words, the requirement of data partition $X^0, \ldots, X^m$ goes against the motivation of the proposed framework of handling continuous concept valuations. Apologies for the confusion, but we would like to clarify that our setting indeed handles continuous concepts (as claimed) as follows. The actual valuations could be continuous (e.g., we allow Gaussian noise) and our identifiability results do hold in such settings. In other words, the concept conditional distributions do not condition on a fixed value but rather allow for noise. Let us consider this using the example of intensity, which was brought up in the review. Assume that we have datasets consisting of images taken at different times of the day. Then the intensity of the colors will fluctuate within each dataset due to slight variations of the time or weather conditions, but they will fluctuate around different mean valuations for each dataset. This matches our assumptions for this concept with continuous valuations. We also note that the experiments we conduct also involve some continuous valuations, since the latent points lie only in an approximate hyperplane. However, we acknowledge that our methods may not have been fully probed in complex non-discrete settings, and we leave this exciting direction for future work. We now address the questions raised. > (line 208) Can you elaborate? I mean isn’t $A$ a projector matrix? The formal Definition 1 allows $A$ to be any linear transformation (e.g., it can be scaled), but as the reviewer noticed, there is no loss of generality in assuming it's a projector matrix, and we choose this definition for technical convenience. > Is there any way to quantitatively measure how linearly related the concepts learned by two different models are to each other? 
Yes, options include the $R^2$ metric and the Mean Correlation Coefficient (L373-375) that we report in our synthetic experiments (while we compute them against the ground truth, since we know it in the case of synthetic data, they can also be computed across models). Indeed, in our CLIP experiments, we use the correlation coefficient to measure the degree to which the learned concepts of two different models are linearly related (L1177-1188). > The proposed method using contrastive learning should be described in the main section in more detail. Currently, it appears in the appendix, but I think that the algorithm is a key part of the paper which illustrates the practical utility of the proposed framework. We appreciate the reviewer's acknowledgment of the contrastive learning method. However, we regrettably deferred the details to Appendix F due to lack of space. With the additional page available in the final version, we are happy to include this in the main paper. We thank the reviewer for the additional typographic suggestions and welcome additional feedback to improve the paper.
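The Mean Correlation Coefficient mentioned in this exchange, applied to the learned concepts of two models, can be sketched as follows. This is our illustrative implementation, not the authors' code: it matches concepts by brute force over permutations, which is only practical for a small number of concepts.

```python
import itertools
import numpy as np

def mean_correlation_coefficient(z_a, z_b):
    """MCC between two learned concept representations, shape (n_samples, m).

    Concepts are matched by the permutation maximizing total absolute
    Pearson correlation (brute force over permutations, fine for small m),
    making the score invariant to permutation and sign of the concepts.
    """
    m = z_a.shape[1]
    # np.corrcoef treats rows as variables; the upper-right m x m block
    # holds the cross-correlations between z_a's and z_b's concepts.
    c = np.abs(np.corrcoef(z_a.T, z_b.T)[:m, m:])
    best = max(sum(c[i, p[i]] for i in range(m))
               for p in itertools.permutations(range(m)))
    return best / m

# A permuted, sign-flipped copy of the concepts should score near 1,
# while unrelated concepts should score near 0.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 3))
z_related = z[:, [2, 0, 1]] * np.array([1.0, -1.0, 1.0])
assert mean_correlation_coefficient(z, z_related) > 0.99
```

As the rebuttal notes, the same quantity can be computed between two trained models' concept outputs on shared inputs, not just against synthetic ground truth.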
Rebuttal 1: Rebuttal: We appreciate the thoughtful reviews and suggestions by the reviewers. We are glad that the reviewers found our approach original and novel (reviewers KRMD, jSX2, kVe2), well-written and rigorous (reviewers ohha, KRMD) and significant (reviewers jSX2, ohha). The reviewers appreciated our theoretical results and their significance, e.g. "I find the theoretical result insightful and important ... also clearly demonstrates the theoretical advantage of concepts vs. interventions" and "The paper is of high quality, providing new important results for identifiability of latent concepts". We first address repeated comments below. **On additional experiments:** We appreciate the interesting suggestions to extend our method to semi-synthetic datasets and other real-world datasets. Following the suggestion of reviewer jSX2, we have also scaled up our synthetic experiments; please see the attached pdf. However, the main contribution of this work is theoretical. Indeed, not all of the theory community is aware of the empirical support for linearity of representations, and this is a key motivation for our work. We leave it to future work to comprehensively study experimental methods towards concept learning via our framework. **On the relation between theory and experiment:** Our synthetic experiments serve to verify the theory, whereas the other experiments are a bit more exploratory and serve to probe the different assumptions and conclusions of our theory (see also the response to reviewer ohha). In the individual responses, we have addressed the weaknesses and answered the questions raised by the reviewers. Pdf: /pdf/be74c0320c123c9a349a5bbe8683d27725390565.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
How Control Information Influences Multilingual Text Image Generation and Editing?
Accept (poster)
Summary: This article investigates an intriguing issue: how control information affects performance in the process of text image editing and generation. Through a series of observations and experiments, the authors extract valuable insights. Building upon these findings, they craft a novel framework that demonstrates impressive performance in both the generation and editing of text images. Strengths: 1. The experimental design, performance analysis, and underlying motivations are presented in a systematic manner with high-quality writing. Some findings offer valuable guidance for the community. 2. The performance gain in recognition accuracy is substantial. 3. The dataset introduced holds significant value. Weaknesses: 1. There is an absence of detailed quantitative analysis of the internal workings of the designed modules. 2. Although some perspectives are effectively illustrated through visualizations, they lack the backing of quantitative analysis. 3. There are typos in the references, such as "Diffute: Universal text editing diffusion model," and the writing of the experimental section is perfunctory. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Within an equitable comparison framework, the performance improvement of TextGen on the FID metric is less pronounced compared to its advances in recognition accuracy. Are there any further analyses of this issue? 2. The author notes, "Although TG2M... it is highly effective for training and achieves superior performance." Is there any quantitative evidence available to support this claim? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors list the potential limitations and analyze the reasons briefly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable comments and positive attitude toward our paper. Below are our rebuttal and discussion of these questions 😊. **Question 1: Internal workings.** The pipeline contains two control models and a diffusion process. As shown in Figure 4, the early timesteps of the diffusion process use the global control model, and the later timesteps use the local control model. **Question 2: Quantitative analysis for perspectives.** For the control input, the first line in Table 1 presents a general diffusion model using ControlNet, which performs worse than our proposed methods. This indicates that general diffusion cannot generate accurate text. More visualizations will be added in the Appendix in the final version of our paper. For control at different stages, we conducted an additional experiment by removing control at various stages. As shown in the table below, removing early control information results in a slight performance decrease, whereas removing late control information leads to a significant performance decline. This supports our conclusion that early control information is crucial for generating coherence and realism, while late control information ensures text detail and accuracy. For the control output, we calculated the frequency distribution shown in Figure 5 and performed an ablation study presented in Table 1 (lines 4 and 5). | | English | Chinese | |-------------------------|:---------:|:---------:| | Baseline | 60.18 | 61.42 | | Removing early control | 54.36 | 53.28 | | Removing late control | 34.84 | 23.42 | **Question 3: Incorrect citation.** We apologize for the incorrect citation, which may have occurred due to an error during copying from Google Scholar. We will correct this mistake in the final version of the paper. **Question 4: The FID score.** The FID score measures the feature similarity between generated images and target datasets.
We evaluate the FID score using the AnyWord-FID benchmark. 1) A higher FID score does not necessarily indicate lower visual quality of the generated images. This conclusion has been noted in earlier works [1]. Our generated images exhibit greater diversity, such as word art, which is scarce in the AnyWord-FID dataset, resulting in a higher FID score, as described in [1]. 2) The images in the AnyWord-FID benchmark are selected from the AnyWord training set, while we use a different training set, making it reasonable for AnyText to achieve a better FID score. **Question 5: The evidence for dataset effectiveness.** Our TG2M dataset contains only 2.5M images, which is significantly smaller than other training sets. As shown in Table 2, our baseline model is a general diffusion model similar to AnyText [2], trained for only 5 epochs, in contrast to other methods trained for 10 epochs. It can be observed that using our TG2M dataset achieves performance comparable to other methods despite using less data and fewer epochs. Recently, we employed more stringent filtering methods to process and clean the data, resulting in higher-quality data. Such data can further enhance generation performance. We will fully release our entire dataset and code without reservation after the paper is accepted. [1] Li T, Chang H, Mishra S, et al. Mage: Masked generative encoder to unify representation learning and image synthesis[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 2142-2152. [2] Tuo Y, Xiang W, He J Y, et al. Anytext: Multilingual visual text generation and editing[C]//Thirty-seventh Conference on Neural Information Processing Systems. 2023. --- Rebuttal Comment 1.1: Comment: Most of my concerns have been addressed. One remaining issue is that the authors pointed out that FID cannot effectively indicate the visual quality of the generated images.
Is there any other quantitative metric of image quality that can verify the effectiveness of TextGen? --- Reply to Comment 1.1.1: Comment: Thank you for your positive response! **First**, we would like to clarify that the FID metric does assess image generation quality to some extent. However, the evaluation dataset used in our experiments is the FID subset of the publicly available AnyWord dataset, which was selected from the AnyWord training set. Consequently, AnyText naturally yields a lower FID score, so the comparison is inherently unfavorable to us. As shown in Table 2 of our paper, our method achieves a better FID score than the other models when AnyText is excluded. **Additionally**, since FID is widely considered an inadequate measure of image generation quality, as noted by many scholars, the question you have raised is a valuable one and well worth thinking about. Personally, we think that aesthetic scoring offers a better evaluation method. Therefore, we employed Laion's publicly available aesthetic scoring model [1] to evaluate the images generated by our model, and we calculated the average aesthetic score of these images, as detailed in the table below. | | English | Chinese | |------------------|:-------:|:-------:| | AnyText | 4.27 | 5.06 | | Ours | 4.42 | 5.18 | Lastly, we emphasize that your question is crucial and worth deep consideration by everyone. The solution we provided is one approach, and we hope other researchers will also explore improved methods for evaluating generation quality. [1] https://laion.ai/blog/laion-aesthetics/
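For readers following the FID exchange above: FID is the Fréchet distance between Gaussians fitted to Inception-v3 features of the two image sets, which is why it measures similarity to the reference dataset's feature distribution rather than visual quality per se. A minimal numpy sketch of that distance follows; the function name is ours, and a real FID pipeline would first extract Inception feature statistics.

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between two Gaussians -- the quantity FID computes
    on Inception-v3 feature statistics of real vs. generated images:

        ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2})

    Uses Tr((cov1 cov2)^{1/2}) = Tr((A cov2 A)^{1/2}) with A = cov1^{1/2},
    so the matrix under the square root stays symmetric PSD.
    """
    def psd_sqrt(mat):
        # Matrix square root of a symmetric PSD matrix via eigendecomposition.
        vals, vecs = np.linalg.eigh(mat)
        return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

    a = psd_sqrt(cov1)
    covmean_trace = np.trace(psd_sqrt(a @ cov2 @ a))
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(cov1) + np.trace(cov2) - 2.0 * covmean_trace)

# Identical statistics give distance 0; shifting the mean by a unit vector
# while keeping equal covariances gives exactly 1.
mu, cov = np.zeros(2), np.eye(2)
assert abs(frechet_distance(mu, cov, mu, cov)) < 1e-9
assert abs(frechet_distance(mu, cov, np.array([1.0, 0.0]), cov) - 1.0) < 1e-9
```

This makes the rebuttal's point concrete: a model generating diverse styles absent from the reference set shifts the fitted feature Gaussian and raises FID even when individual images look good.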
Summary: This paper introduces a novel framework to enhance the quality of multilingual text image generation and editing by optimizing control information. The authors investigate the impact of control information from three perspectives: input encoding, the role at different stages, and output features. They draw some insightful conclusions and propose a framework (TextGen) that employs Fourier analysis to emphasize relevant information and reduce noise, and uses a two-stage generation framework to align with the different roles of control information at various stages. Furthermore, they introduce an effective and lightweight dataset for training. The method achieves state-of-the-art performance in both Chinese and English visual text generation. Strengths: 1. The paper investigates the influence of control information, which can greatly inspire future work in the community. In summary, this paper is well organized, with reasonable motivation and insights. 2. The use of Fourier analysis to enhance input and output features, along with the two-stage generation framework, offers innovative methodological approaches in the field of text image generation. 3. The creation of a new lightweight yet effective dataset, TG2M, provides a valuable resource for training in visual text generation and editing. Weaknesses: 1. To my best knowledge, there exist other benchmarks, such as the benchmark in TextDiffuser [1]. Although these benchmarks may not be of good quality, it would be better to compare TextGen with other works on these benchmarks or construct a high-quality benchmark based on TG2M. 2. The data building process is not described in detail. Even though there are some differences (the recaptioning using Qwen-VL, the data selection), it looks a bit similar to AnyWord-3M, and the authors need to describe the differences in the paper or Appendix. [1] Chen, Jingye, et al. "TextDiffuser: Diffusion Models as Text Painters" NeurIPS 2023 [2] Tuo, Yuxiang, et al.
"AnyText: Multilingual Visual Text Generation And Editing" ICLR 2024 Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Could TextGen generate other languages, such as Korean? 2. How diverse are TextGen's generations for the same prompt? The authors could provide some visualization examples. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: refer to weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive attitude towards our paper and for their valuable comments 😊. Below are our rebuttal and discussion of the questions. **Question 1: The evaluation.** We conducted evaluations on the AnyWord benchmark, which is the highest-quality benchmark available. Your suggestion is valuable. We recently employed more stringent filtering methods to process and clean the data, resulting in higher-quality datasets. Such data can significantly enhance generation performance, and we have constructed a new benchmark from it. We will fully release our entire dataset and code without reservation after the paper is accepted. **Question 2: The data building pipeline.** We recaptioned the data using both BLIP and Qwen-VL. Due to the input length restriction of the CLIP text encoder, the diffusion model requires conditions shorter than 77 tokens. Qwen-VL often generates longer captions and sometimes hallucinates. Therefore, we generated the initial captions using BLIP, identified failure cases, and then recaptioned those using Qwen-VL. Additionally, we filtered the data using aesthetic scores and a watermark detection model. **Question 3: More languages.** Our training set contains only English and Chinese cases, which enables our model to generate high-quality English and Chinese visual texts. Moreover, by training with the glyph conditions, the model has developed a capacity to adhere to the glyph conditions. For other languages such as Korean and Japanese, the model can still generate them effectively in some cases when the glyph condition is provided. But for more difficult languages such as Arabic, it is hard for our model to generate correct text. We will try to construct a more comprehensive multilingual model in the future. **Question 4: The diversity.** Similar to other diffusion-based models, our model can generate diverse results when given different random seeds.
Due to rebuttal restrictions, we are unable to provide visual examples here. We will include visual results related to diversity in the Appendix of the final version of our paper. --- Rebuttal Comment 1.1: Comment: Thanks for the response. All my concerns have been properly solved. I thus give ACCEPT as my final score. --- Reply to Comment 1.1.1: Comment: Thank you for your appreciation of our work.
Summary: This study explores the advancement of visual text generation using diffusion models within a ControlNet-based framework, focusing on the impact of control information. It examines input encoding, the role of control information during different stages of the denoising process, and the resulting output features. The authors propose TextGen, a method that aims to enhance generation quality by optimizing control information with Fourier analysis and a two-stage generation process. Strengths: 1. Analyzed the impact of three control factors on text image generation. 2. Achieved comparable results in text image generation and editing. Weaknesses: 1. The method lacks novelty, the writing of the paper is unclear, and the experimental results are insufficient. The experimental analysis does not provide new insight into the field. 2. The paper's analysis of control factors for text image generation is limited to three control conditions, which may not be the most effective. For example, character embeddings could be used as input, sentence prompts through an LLM could be used, or text coordinate encodings could be used as control conditions. 3. The necessity of the FEC module is not very clear, as it does not specifically target text image scenarios. Additionally, the computational efficiency of FFT and IFFT needs to be considered. 4. Why are global control and detail control split into two stages? Would it not be possible to consider both global and detail control simultaneously during the whole diffusion process? 5. The improvement of this method over existing methods is not obvious, especially in terms of visual and perceptual quality. Training on a small dataset also affects the method's generalizability. For example, can it still correctly generate text that appears infrequently or not at all in the proposed dataset? 6. 
The results lack demonstrations of “scene” text image generation; most images resemble text printing and have less realism compared to Anytext. This may be due to dataset bias, as shown in Figure 11. Thus, a small dataset is not a contribution of this paper and could be considered a drawback for large models. 7. The ablation experiments are insufficient, and the effectiveness of the modules is not convincingly demonstrated. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Why only consider these three control methods. 2. What is the reason and necessity for introducing Fourier transform. 3. Why can't both global and detail control be considered simultaneously during the diffusion process? Wouldn't it be possible to use a learnable module to control the weights of these two controls. The two seperated control stages proposed in this paper is clearly not optimal. 4. It is recommended to add more ablation experiments, such as without the Fourier module, and considering both global and detail control simultaneously. More visualization results should be provided, such as visualizations of the ablation experiments. 5. How does the method perform on characters that appear infrequently in the dataset. 6. It is recommended to include failure cases and analyze the reasons. Additional results for text image editing should also be provided. 7. Please adequately address the concerns raised in the weaknesses section. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful comments and have carefully considered your questions and suggestions. Below is our rebuttal and discussion of these questions 😊. Due to rebuttal restrictions, we are unable to include images. We will provide the corresponding visual results in the Appendix of the final version of our paper. **Question 1: Why only consider these three control methods.** Our research is based on currently common visual text generation networks such as AnyText [1] and TextDiffuser [2]. These methods typically require glyph images for guidance. Since existing methods do not consider some unique properties of text, we investigated the impact of this control information. Additionally, *we examined the influence of the same control information from three different aspects, rather than studying three control methods.* **Question 2: Fourier transform.** Our motivation for employing FEC is detailed in Sections 1, 3.1.1, and 3.3 of the paper. We analyze it more clearly below. **1)** As shown in Figure 2, general ControlNet conditions primarily affect macroscopic styles and edges and minor texture inaccuracies are acceptable, but *visual text generation requires precise control over textures to avoid content errors and unrealistic details.* **2)** Text glyph conditions are sparse, with most areas being black, acting as noise in standard spatial convolutions. *The Fourier transform filters frequency*, enhancing attention to specific components in detail-rich areas of glyph images. **3)** Direct convolution in the spatial domain is limited by the receptive field, while each point in the frequency domain captures global information of the same frequency. 
*Convolution in frequency domain enables global interactions among similar frequency components.* **Overall, while such an approach is unnecessary for general generation, it is crucial for text generation.** Experimental results in Table 1 further confirm the necessity of utilizing the Fourier transform. **Question 3: Simultaneously global and detail process.** **1)** Our objective is to *investigate the differences between early and late control stages*, as compared to general scene images. This guides us to employ some strategies to promote the model’s learning. **2)** We have implemented a method that combines both local and global control, using MLP to generate control factors that weight both components. However, challenges in balancing these control factors during gradient descent led to noticeable oscillations in training, ultimately resulting in degraded generation quality. **3)** Using both global and local modules would **double the training cost** and significantly decrease the model's speed, thereby limiting its generalization capability. Consequently, the two-stage method proposed in this paper emerges as the optimal solution. **Question 4: More ablation studies.** **1)** The ablation study without the Fourier block has been conducted in Table 1, lines 1 and 2. Results show that our FEC significantly improves generation quality, especially for Chinese text, due to its complex edges and details. **2)** The ablation study considering both global and detailed control simultaneously is shown below. This model requires a weighting mechanism to balance the two conditions, posing a challenge that we discussed in detail in our response to question 3. ||English|Chinese| |-|:-:|:-:| |Two stage model|60.18|61.42| |One stage model|58.64|59.56| **Question 5: Characters frequency.** By training with the glyph conditions, the model has developed a capacity to **adhere to the glyph conditions**. 
For characters that are infrequently represented in the dataset, the model can still generate them effectively when the glyph condition is provided. The model even demonstrates some generation capability for languages that are not present in the training set. We will add relevant visual results in the Appendix. To further support this, we calculated the character generation accuracy of our model for both high-frequency and low-frequency characters, as shown in the table below. The difference in accuracy is not significant. ||high-frequency characters|low-frequency characters| |-|:-:|:-:| | Accuracy|61.95|58.41| **Question 6: Failure cases.** Our model encounters failure cases in generating small text due to VAE limitations, which is not the focus of this paper (detailed in Section 5). Some text generation errors also stem from poor-quality training data. Despite our data being of the highest quality, it contains some incorrect detections and captions, limiting the model's capabilities. We have recently applied stringent filtering methods to improve data quality, which can significantly enhance performance. We will release the entire dataset and code after the paper is accepted. **Question 7: Dataset.** Many recent works, such as LLaVA [3], propose that the quality of training data is essential for a model, even if the quantity is small. We believe it is highly significant to construct high-quality datasets, even if they are relatively small. Additionally, our model demonstrates superior performance on small datasets. When the amount of data is increased, the model can achieve even greater performance. **However, scaling up is not our primary goal.** **Question 8: Scene generation cases.** We list more generated results in the Appendix. As shown in Figure 12, there are many scene cases. We can also generate realistic images. [1] Tuo Y, Xiang W, He J Y, et al. 
Anytext: Multilingual visual text generation and editing[C]//Thirty-seventh Conference on Neural Information Processing Systems. 2023. [2] Chen J, Huang Y, Lv T, et al. TextDiffuser: Diffusion Models as Text Painters[C]//Thirty-seventh Conference on Neural Information Processing Systems. 2023. [3] Liu H, Li C, Li Y, et al. Improved baselines with visual instruction tuning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 26296-26306. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The author addressed most of my concerns. I also read the questions and responses from other reviewers in detail. I have improved my score, but I still think that this article lacks sufficient novelty. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your timely feedback and the strong support for our work. We are committed to incorporating all of the clarifications you suggested in the next version of our paper.
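The frequency-domain rationale given for FEC in the rebuttal above (each spectral coefficient aggregates one global frequency component, so operating on it enables global interactions that local spatial convolutions cannot achieve) can be illustrated with a minimal numpy sketch. The function name and the purely element-wise modulation here are our simplification, not the paper's actual module, which also applies learned convolutions and activations on the spectrum.

```python
import numpy as np

def frequency_enhance(x, weight):
    """Illustrative frequency-domain modulation (a simplification of the
    FEC idea, not the paper's module).

    x:      (H, W) feature map, e.g. a sparse glyph image.
    weight: (H, W // 2 + 1) weights, one per frequency bin of rfft2.
    Because each spectral coefficient summarizes one global frequency
    component, a single weight influences the entire spatial extent,
    unlike a local spatial convolution bounded by its receptive field.
    """
    spectrum = np.fft.rfft2(x)                  # spatial -> frequency
    spectrum = spectrum * weight                # global per-frequency scaling
    return np.fft.irfft2(spectrum, s=x.shape)   # frequency -> spatial

# With identity weights the input is reconstructed exactly (up to floating
# point), confirming the transform pair itself loses no information.
x = np.random.default_rng(0).normal(size=(8, 8))
identity = np.ones((8, 5))
assert np.allclose(frequency_enhance(x, identity), x)
```

Boosting high-frequency bins in `weight` would emphasize edges and strokes, which matches the rebuttal's claim that spectral processing helps with the detail-rich regions of glyph images.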
Summary: This paper analyzes the issue of control in image generation models. Specifically, the article addresses three aspects: input control information, the impact of control information at different stages, and output control information. The model was optimized for two tasks: text-to-image and image editing. Using a method similar to FreeU, the paper conducts Fourier frequency-domain analysis on the input and output features and proposes a two-stage generation model based on previous findings. Strengths: 1. The writing of the article is excellent, with clear and concise sentences and a well-organized structure. 2. The proposed final model FEC+GP+TS+IFE shows significant improvement. 3. The method is quite innovative, with a novel approach. Weaknesses: 1. Some modules lack detailed theoretical analysis, which makes their purpose unclear. 2. It is not clear what task this paper addresses. Please state it prominently in the first chapter. 3. The layout is a bit messy. For example, should Table 2 be placed above Table 1? Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Will the dataset be released? If so, please address the privacy, copyright, and consent of the data. 2. The convolution theorem states that the Fourier transform of a convolution of functions is the product of the Fourier transforms of the functions. In other words, convolution in one domain corresponds to multiplication in the other domain, such as time-domain convolution corresponding to frequency-domain multiplication. Can the FEC network in the article be considered as performing a simple point-wise multiplication operation? 3. I have questions about the effectiveness of FEC. The article only shows that using FEC is better through ablation experiments, but I have two concerns: 1) The improvement for English is not significant, while it is more noticeable for Chinese. Is the main reason for this that the model was fine-tuned on Chinese?
2) When using methods like depth control, the output might not match the control, but the output could still be reasonable. Using glyph images for control requires a strong match between the output and the input text. From a human observation perspective, glyph control might seem more precise, but for model training with MSE loss, both approaches could be similar. 4. Section 4.3 does not have accompanying text? 5. Why does convolution perform poorly in understanding text? Shouldn't there be some small experiments to demonstrate this? Alternatively, could it be explained through some visualization analysis or derivation process? 6. Figure 4, which serves as the overall pipeline diagram, does not seem to reflect the inference structure discussed in Chapter 3.5. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The article does not specifically discuss the potential negative societal impacts. For example, generating high-quality images guided by text might facilitate the creation and spread of fake news. Additionally, using more complex models could lead to increased resource consumption, among other issues. Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent', 'Ethics review needed: Safety and security'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable comments and the positive remarks on our paper. These questions are insightful and have deepened our thinking. Below are our rebuttal and discussion of these questions 😊. **The writing.** We apologize for the writing errors in our paper. In this paper, we primarily studied the impact of control information. Through experiments, we derived some conclusions regarding control information. Based on these conclusions, we designed a novel and effective model to address the issues. We will clarify the presentation in the final version of the paper, specifically in the first chapter, according to your suggestions. Furthermore, we believe that the placement of Table 1 and Table 2 is appropriate, as the body of the paper first presents ablation experiments followed by comparative experiments. If you find this sequence unreasonable, we will adjust the order in the final version of the paper. **Question 1: Privacy and copyright.** We recently employed more stringent filtering methods to process and clean the data, resulting in higher-quality data. Such data can significantly enhance generation performance. We will **fully** release our entire dataset and code **without reservation** after the paper is accepted. **Question 2: Convolution and Fourier transform.** In mathematical theory, by the dual form of the convolution theorem, convolution in the frequency domain is equivalent to point-wise multiplication in the spatial domain. However, in our model, performing convolution in the frequency domain exhibits some differences. This is discussed in Section 3.3 of our paper, but we offer a more detailed discussion here: 1) In the frequency domain, **additional operations such as activation functions are applied alongside convolutions**, making it not strictly equivalent to point-wise multiplication in the spatial domain. 2) Direct convolution in the spatial domain is limited by the receptive field. However, each point in the frequency domain represents global information of the same frequency.
This allows convolution in the frequency domain to enable **global interactions of similar frequencies**. 3) Performing convolution in the frequency domain can capture features that are difficult to learn in the spatial domain during gradient descent training. **The training effect of using gradient descent is not equivalent to the point-wise product operation in the spatial domain.** **Question 3: Concerns in FEC.** 1. **Concern 1:** The data and data mixture used in the ablation study in Table 1 are entirely consistent with Anytext [1], which randomly selected 200k training samples from the dataset. We did not perform any separate fine-tuning. The noticeable improvement in Chinese data is due to the model's enhanced sensitivity to details under the frequency domain enhancement from FEC. For generating more complex Chinese content, the performance improvement achieved with less data will be even greater. 2. **Concern 2:** Using MSE loss, the model's training objective remains consistent. However, we propose to enhance the model's ability to perceive details and high frequency. This allows the model to leverage more detailed information, thereby improving the quality of detail generation under the same MSE loss. While such an approach is unnecessary for depth-controlled generation, it is crucial for text generation, which demands fine-grained control. **Question 4: Section 4.3.** We apologize for the typographical errors and we will revise the typography in the final version of the paper. **Question 5: Discussion about convolution.** Our argument is not that convolution performs poorly in understanding text, but that standard convolution faces challenges. Due to the restricted receptive field of convolution, conventional operations struggle to capture global information. **Text often appears in elongated forms**, covering a large area in one direction, which hinders the performance of ordinary convolution in extracting text features and semantics. 
**Similar observations have been noted in several scene text recognition studies [2] [3]**. Moreover, in glyph images, **most regions are purely black**, serving as noise relative to the text regions during standard spatial convolution operations. We will reference the aforementioned studies to support our perspective in the final version of the paper. **Question 6: Question about the figure.** Figure 4 presents the inference pipeline we proposed. The lower part of the diagram illustrates the inference strategies. Different inference strategies are used for generation and editing tasks. The color of the UNet corresponds to either the global or detail ControlNets, indicating the use of different control models at various stages. The diagram is consistent with the description in Section 3.5. [1] Tuo Y, Xiang W, He J Y, et al. Anytext: Multilingual visual text generation and editing[C]//Thirty-seventh Conference on Neural Information Processing Systems. 2023. [2] Fang S, Xie H, Wang Y, et al. Read like humans: Autonomous, bidirectional and iterative language modeling for scene text recognition[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 7098-7107. [3] Atienza R. Vision transformer for fast and efficient scene text recognition[C]//International conference on document analysis and recognition. Cham: Springer International Publishing, 2021: 319-334. --- Rebuttal 2: Title: Sincere Invitation to Participate in the Discussion Comment: Dear Reviewer bu9j, We sincerely appreciate your time and feedback. Given the rush in finalizing the writing, some aspects may have caused confusion or misunderstanding. It is our priority to ensure that the rebuttal aligns with your suggestions, and we are open to further discussions to clarify any remaining questions or concerns. We would be grateful if you could improve the evaluation after reviewing our responses. Thank you very much for your consideration. Sincerely, The Authors
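The convolution-theorem identity cited in Question 2 of the rebuttal above (convolution in one domain corresponds to point-wise multiplication in the other) can be checked numerically. The following sketch is illustrative only, our own example rather than part of the original rebuttal, and assumes 1-D signals with circular convolution:

```python
import numpy as np

# Convolution theorem: circular convolution in the spatial domain
# equals point-wise multiplication in the frequency domain.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
k = rng.standard_normal(64)

# Circular convolution computed directly in the spatial domain.
spatial = np.array([sum(x[m] * k[(n - m) % 64] for m in range(64))
                    for n in range(64)])

# The same result via point-wise multiplication of the FFTs.
freq = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

assert np.allclose(spatial, freq)

# Note: inserting a nonlinearity (e.g. a ReLU) between the FFT and the
# inverse FFT breaks this equivalence -- the rebuttal's point 1).
```

The final comment reflects the rebuttal's argument: once activation functions are applied alongside frequency-domain convolutions, the operation is no longer strictly equivalent to any spatial point-wise product.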
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
AsCAN: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation
Accept (poster)
Summary: The paper performs a thorough empirical analysis of design choices made in neural networks designed for computer vision tasks. Based on reasonable assumptions, they first narrow down the design space of the building block of such networks to combinations of Fused MBConv and vanilla transformer layers followed by an empirical analysis of the chosen combinations in terms of classification accuracy, latency (on GPUs) and model size. The top-performing combinations exhibit significantly better latency on GPUs compared to similar accuracy models optimized for efficiency, such as Swin Transformers and MaxViT. Their model demonstrates the effectiveness of a simple convolution + vanilla transformer model as compared to several efficient/hybrid attention models. Insights gained from this design are also used to design a similar model for text-to-image generation (asymmetric mix of conv + transformer), which also gives a better latency-performance (FID, human preference) trade-off than existing models (Pixart alpha, SDXL). Strengths: 1. The paper is well-motivated, analyzing the benefits of a simple neural network architecture across a variety of tasks (recognition, segmentation, generation) on current hardware. The insights derived are of broad interest to a wide range of researchers. 2. The paper provides a thorough empirical analysis of design choices in image recognition experiments. Weaknesses: The contribution of the difference in training pipeline versus that of the architecture for improving image generation performance is unclear. If an empirical ablation study is infeasible for computational reasons, a more detailed discussion in Section 3.2.2 would significantly enhance clarity. Technical Quality: 4 Clarity: 3 Questions for Authors: See weaknesses Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors have addressed limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback on this work. As already pointed out by the reviewer, ablations on large-scale text-to-image generation tasks are computationally very expensive. Hence, we performed small ablations on the ImageNet-1K dataset. We list these ablations below. We will include a detailed discussion of these characteristics in the final version. **Training algorithm ablations.** - We analyzed the performance of the model with and without pre-training on the ImageNet-1K task. Appendix Figure 8 shows the evolution of the FID score with the number of training iterations. It shows that training the model without pre-training on the ImageNet-1K task results in slower convergence, and the final FID achieved by the model without pre-training is worse than the one achieved by pre-training on this task. Thus, we demonstrate that pre-training on this task helps speed up convergence. - We analyzed the importance of noise levels between $256\times256$ and $512\times512$ resolution stages in Appendix Figure 7. It shows that different resolutions prefer different input noise for the diffusion process. It helped us decide on noise=0.01 for $256$ resolution and noise=0.02 for $512$ resolution. We also ablated other hyper-parameters (learning rate, weight decay) while training on $256\times256$ and $512\times512$ resolution stages. **Architecture ablations.** - We train on the class conditional $256\times256$ generation task on ImageNet-1K. As seen in Table 3, our asymmetric architecture achieves a similar FID score at less than half the FLOPs under the same training setup as other baselines. We provide additional results on this task in the rebuttal pdf (see Table 1), including inference latency and generation at the $512\times512$ resolution. It shows that our FLOP gains also translate to inference latency gains.
Summary: The authors propose a principled way to design hybrid architectures for a variety of tasks, including image classification, semantic segmentation, class-conditional generation, and text-to-image generation. The goal is for the resulting models, called Asymmetric Convolution-Attention Networks (AsCAN), to have the following characteristics: (1) to offer favorable performance-throughput trade-offs compared to existing SOTA models, (2) to be efficient across various modern hardware accelerators, and (3) to scale efficiently in terms of both compute and the amount of training data. As a hybrid architecture, AsCAN consists of a sequence of convolutional and transformer blocks. The authors opt for this kind of architecture because they aim to combine the advantages of both convolutional layers, e.g., translation equivariance, and transformer layers, e.g., global dependencies. The authors experiment with different designs on the task of image classification by using ImageNet-1K, and then they generalize their findings to image generation. First, they experiment with different existing convolutional (C) and transformer (T) blocks, settling on FusedMBConv for C and vanilla attention for T, based on their accuracy vs throughput trade-off on different GPUs. Then, they experiment with different distributions of C and T blocks in a classification architecture. They always use a convolutional stem, four processing stages with multiple blocks, and a classification head. They settle on an asymmetric design, meaning that they use more C blocks in the early stages, and more T blocks in the later stages. They follow the same principles to design the U-Net backbone of a latent diffusion model for image generation tasks. All models are scaled by adding more blocks at each stage, maintaining the asymmetric distribution between C and T blocks. 
To efficiently scale training to large datasets, the authors propose a multi-stage training regime, where a model is first trained on a smaller dataset, and then, in subsequent stages, is fine-tuned for fewer iterations on the larger dataset. The authors first experiment on ImageNet-1K, showing that AsCAN models offer a better accuracy-throughput trade-off compared to SOTA baselines. The benefits in throughput are demonstrated with different GPUs and batch sizes. Then, they show that their AsCAN diffusion model performs on par with SOTA models on ImageNet-1K class conditional generation, but with considerably less FLOPs. Similarly, AsCAN demonstrates similar performance to the baselines on semantic segmentation on ADE20K, but with higher FPS. Finally, the authors train a 2.4B params AsCAN latent diffusion model on an internal dataset of 450M images, for the task of text-to-image generation. They show that their model outperforms the baselines on most aspects of the GenEval benchmark, and either outperforms or is on par with the baselines on image-text alignment, based on a human study. Strengths: Quality: 1. The authors develop the proposed design principles in a structured way, ablating different options, and offering adequate metrics, which include accuracy, number of parameters, and throughput on 2 GPUs and different batch sizes. 2. Similarly, the experiments are well-structured, using multiple baselines and metrics. For example, on image classification the authors compare against SOTA CNNs, Transformers and hybrid architectures of different sizes, and they provide accuracy, actual throughput and number of parameters. Importantly, the experiments support the main claims of the work. 3. The setup of all experiments is described in detail in the Appendix to aid reproduction. Clarity: 1. The manuscript is very well written, and easy to follow. The authors explain their method, and provide clear Figures and Tables, with appropriate captions. Significance: 1. 
The proposed models show clear benefits in terms of their performance-throughput trade-offs in different tasks, thus, they contribute to the community. 2. The authors provide specific design principles that they used to build AsCAN, and they could be useful to other domains as well. Weaknesses: Originality: 1. Hybrid architectures exist already in the literature, and AsCAN are built out of pre-existing components, like FusedMBConv, with known benefits, so, this limits the novelty of the approach. Quality: 1. The authors constrain their design to always have convolutional (C) blocks before Transformer (T) blocks within a stage. However, in Table 2, which includes the comparisons for the macro design of the architecture, MaxViT, which achieves the best accuracy, alternates C and T blocks. In addition, the “C before T” constraint breaks between stages, when a stage ends with T and the subsequent stage starts with C. So, it is not clear to me why this constraint is set. 2. I think the authors should provide actual throughput for the class conditional generation in Table 3. If I am not mistaken, this is the only experiment that doesn’t include this metric. The authors provide FLOPs in Table 3, however, as can be seen in other results, FLOPs don’t always translate to actual timings. Clarity: 1. Ln 23-24, “CNNs encode many desirable properties like translation invariance”, I think it should be “translation equivariance”. 2. Ln 160, the authors mention “C1 vs C10”, but I think a more direct comparison is C2 vs C10. 3. Ln 161, “While increasing the number of transformer blocks in the network improves the throughput”, I think Table 2 shows that increasing transformer blocks hurts the throughput, e.g., C6 has higher throughput on A100s compared to C9. 4. In Appendix A.3, I think the “Experimental Setup” part in Ln 639-644 repeats for the most part information already provided in the “Training Procedure & Hyper-parameters” part (Ln 628-638), so, the two parts can be merged. 
5. Some minor typos, Ln 658, “most contains”, I think should be “mostly”; Ln 607 “the full results corresponding to the Fig. 3 in the Tab. 7”, I think the “the” before “Fig.” and “Tab.” are not necessary; Ln 681, “larger much”, I think “much” is not needed; Ln 733, “for this task similar”, I think there should be “is” after “task”; Ln 770, “achieving” is repeated twice. Significance: 1. One of the main contributions of this work is the favorable performance-throughput trade-offs offered by the proposed models. Specifically, in many experiments, AsCAN achieves similar performance compared to baselines, but manage to do it with higher throughput, so, I think the efficiency of the models makes them standout. However, I think it is not clear enough from the discussion of the method or the experiments, what are the causes of this efficiency. For example, in Section 3.1 (Ln 159-160), the authors point out that increasing T blocks in early stages decreases throughput (C8 vs C9 in Table 2), however, comparing C6, C7 and C8, which have an increasing number of T blocks earlier on, we see that C7 has slightly lower throughput compared to C6 on A100s and a bit higher on V100s, while C8 has considerably higher throughput compared to both C6 and C7 on V100, and on A100 for batch size 64, and lower for batch size 16 on A100. Also, C9, which has the most T blocks, compared to C6, which has T blocks only in the final stage, has slightly higher throughput on V100 and considerably lower on A100. So, it is not clear what exactly causes the variation in throughput, and to what extent is a matter of the hardware, e.g., would the same results persist on H100s? In addition, the baselines are very diverse, not only hybrid architectures, so, it becomes even harder to interpret the differences in behavior. For example, in Table 7, AsCAN-L has similar accuracy with EfficientNetV2-M, while AsCAN-L has more than $\times 3$ parameters, and about $\times 1.3$ MACs. 
At the same time, EfficientNetV2-M is optimized to take advantage of FusedMBConv, but still, AsCAN-L has almost $\times 1.5$ throughput on A100 with batch size 64, and about $\times 1.3$ on V100 with batch size 16. Similarly, in Table 11, AsCAN with 2.4B parameters has higher throughput compared to PixArt-$\alpha$, which has 0.6B params. I think the significance of the paper would be higher if the reported efficiency benefits were analyzed in more detail. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In NeurIPS checklist 15, the authors answer NA, however, in Section 4.3 (Ln 276-284) and Fig. 5, the authors report a user study with human subjects, doesn’t this study require approval for research with human subjects? 2. In Ln 763, Section A.8, it is mentioned “we are much better at avoiding NSFW generations compared to baselines (due to careful curation of the training data)”, how is this measured in the actual output of the models? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss limitations and societal impact in Sections A.8 and A.9 respectively. One thing that in my opinion could be added in the limitations, is that AsCAN sometimes require considerably more parameters to achieve favorable performance-throughput trade-offs, leading to higher memory footprint. For example, I think this is the case in Table 7 between AsCAN-L and EfficientNetV2-M, as I mentioned in a previous section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for reading the paper thoroughly and providing invaluable feedback. Below, we have tried to answer their questions and concerns. While we answered some questions in the main rebuttal, we reiterate their highlights for completeness. **Originality: Limited Novelty.** - Our main contribution is the asymmetric distribution of convolution and transformer blocks in various stages in a hybrid architecture. - We show that simple design choices (Sec. 3.1) yield architectures with existing blocks that achieve state-of-the-art performance and latency trade-offs. We demonstrate that this design is easily applicable to various applications. - Many works design architectures based on parameter count and FLOPs, but this typically does not translate into inference throughput gains. Some of these issues come from using operators that do not contribute to parameter count and FLOPs but require non-trivial runtime, such as reshape, permute, etc. Others originate from the lack of efficient CUDA operators for these specialized attention and convolutional operators. - In contrast, our proposal directly measures the inference throughput on different accelerators and incorporates building blocks that yield higher throughput. **Quality: Justification of C before T constraint.** Our intuition behind this constraint comes from the following observations: - Interleaving C and T in a stage performs many tensor reshape operations that do not add any FLOPs. However, these operations count towards runtime, and many such operations lower the model throughput. - Given a feature map, C blocks can capture local and scale-aware features, while T blocks try to work out the dependency between all feature values. Thus, it would be more beneficial to perform a convolutional operator first to capture these local and scale features, followed by pairwise dependencies between all tokens. 
- It helps to narrow the search process since interleaving these blocks would result in many possibilities and be hard to evaluate computationally. - Further, to validate our assumptions, we have included configurations where T appears before C in Table 2 in the rebuttal pdf. In the models where the first stage consists of T, the throughput is significantly lower than in instances where C is the first stage. Similarly, models with T before C do not achieve an accuracy-vs-throughput trade-off similar to the configurations where C appears before T. **Quality: Missing throughput in the class conditional generation.** Thank you for pointing this out. We benchmark the throughput (images generated per second) on an A100 GPU for all the baselines. In the attached rebuttal pdf, Table 1 shows the throughput for one forward pass of batch size $64$ for all the models. To achieve an FID score of $2.23$, our $52$G FLOPs model achieves a $556$ samples/sec throughput while the state-of-the-art DiT-XL/2-G with $118$G FLOPs achieves $293$ samples/sec. It shows that our asymmetric model still has nearly double the throughput compared to other models while achieving a similar FID. **Clarity.** We have included your comments in the manuscript. These will be reflected in the final version. **Significance.** - *EfficientNetV2-M vs AsCAN-L.* There seems to be a bit of confusion in reading the EfficientNetV2-M numbers. This model has been trained with an input resolution of $384\times384$ and evaluated at an input resolution of $480\times480$. Thus, even with a smaller parameter count, it has much higher FLOPs than other models in the nearly $50$M parameter range. Further, all the hybrid models have been trained and evaluated at $224\times224$ resolution. For instance, with train/test input resolution as $224$, to achieve close to $85.1\%$ top-1 accuracy, MaxViT-L requires 212M parameters, FasterViT-4 requires 425M parameters, MOAT-3 requires 190M parameters. 
When we follow the EfficientNetV2-M training strategy, AsCAN-L achieves $86.2\%$ top-1 accuracy, while the larger EfficientNetV2-L variant with $120$M parameters achieves $85.7\%$ top-1 accuracy. - *How much do the results depend on hardware?* The impact of C and T blocks in a hybrid architecture results in non-linear behavior across accelerators and batch sizes. This is precisely the reason we do not rely too much on the number of floating point operations to estimate inference latency. We choose 16GB V100 and 80GB A100 GPUs as representatives of two popular RAM and accelerator designs. We expect the trend on H100 to be similar to A100 and other lower-end GPUs (such as A10G, L4, etc.) to be similar to V100. While currently we do not have access to these accelerators, we will try to include benchmarks on these accelerators in the final version. - *T2I Speed Up.* PixArt-$\alpha$ is a purely transformer-based architecture while AsCAN is a hybrid architecture involving convolutional blocks in early stages. As we have observed in our ablative experiments over C and T distribution in Table 2 as well as configurations where T appears before C in the rebuttal pdf, transformer layers in the early part of the network significantly reduce the throughput. We will add further investigations into PixArt-$\alpha$ in the final version. **Questions: User study review process.** Thank you for noticing this. We did get approval for the user study. We did not disclose these details to preserve anonymity. We will disclose the review process in the final version. **Questions: NSFW evaluation.** We used a set of internal prompts that are suggestive in nature, i.e., they do not explicitly ask for generating unsafe images, but rather hide these details in words. We generated images with SDXL and our model. We compared the amount of NSFW images in these two models. Almost all of the SDXL generations are NSFW while our generations are safe and do not include any nudity. 
**Limitations.** We have included the inference memory consumption in Table 3. We will include the discussion on parameter aspects in the final version. --- Rebuttal 2: Comment: Dear Reviewer R6Bu, Thank you very much for your valuable feedback and the positive evaluation of our work. We have included detailed explanations in response to your questions. As the deadline for the discussion period approaches, we would appreciate your review of these explanations to confirm that they resolve any remaining concerns. Let us know if you have any other questions. Thank you once again for your insightful review. Best regards, Authors --- Rebuttal Comment 2.1: Title: Thank you for your reply Comment: I would like to thank the authors for their detailed reply. I would also like to acknowledge their effort to address comments with additional experiments in the rebuttal pdf. I find the arguments with respect to novelty and quality convincing, however, my main concern is about the significance, which relates to getting a deeper understanding about the causes of the better performance-throughput trade-off AsCAN provide across tasks. For example, about EfficientNetV2-M, the impact of higher input resolution in computation is captured by MACs, which are still less than those of AsCAN, so, it seems to me that it would be important to have a clear discussion about the reasons that MACs/FLOPs don't translate to throughput. The authors already provide some justification in their rebuttal, by mentioning the impact of operations like reshape and permute, or the lack of efficient CUDA operators for specialized operations. If the authors expand such observations into a clear discussion where they pinpoint the causes of inefficiencies of current SOTA designs, I think the impact of the work will significantly increase, because it would allow members of the community to make more informed decisions about their designs, without the need to make numerous ablations. 
About hardware, I agree that focusing solely on FLOPs is not sufficient, and throughput should be a major consideration as well. Given that, in my review, I gave a number of examples where the "C before T" design gives conflicting throughput outcomes in Table 2. So, similar to my previous point, I think AsCAN provide a valuable contribution through their favorable performance-throughput trade-off, but the contribution would be higher if the causes of the observed behavior were discussed more thoroughly. In light of such a discussion/analysis, I would be happy to increase my rating. --- Reply to Comment 2.1.1: Comment: Dear Reviewer R6Bu, Thank you for your quick response and positive rating! In the main text, we will include a detailed discussion as to why FLOPs do not translate to throughput (latency) gains and pinpoint the causes of inefficiencies of current SOTA designs. We will also incorporate further analysis of various configurations in Table 2 to break down the impact of C and T block arrangement. Best Regards, Authors --- Rebuttal 3: Title: Official Comment by Reviewer R6Bu Comment: Thank you very much for your willingness to address my comments. I will read the changes in the main text as soon as they become available, and update my review accordingly. --- Rebuttal Comment 3.1: Comment: Dear Reviewer R6Bu, Thanks for your feedback and your willingness to further read our improved main text. We will include the following discussion in the revised paper. --- Reply to Comment 3.1.1: Comment: Below are the primary reasons MACs do not translate to throughput gains.  - **Excessive use of operations that do not contribute to MACs**. Tensor operators such as reshape, permute, concatenate, stack, etc., are examples of such operations. While these operations do not increase MACs, they burden the accelerator with tensor rearrangement. The cost of such rearrangement grows with the size of the feature maps. 
Thus, whenever these operations occur frequently, the throughput gains drop significantly. For instance,  - MaxViT uses axial attention that includes many permute operations for window/grid partitioning of the spatial features. See Table 7 for a throughput comparison between MaxViT and AsCAN. Also, Table 1 shows that Multi-Axial attention yields significantly lower throughput when compared to vanilla transformer block.   - Similarly, the “Scale-Aware Modulation Meets Transformer“ (SMT-S, SMT-B) architecture includes many concatenation and reshape operations in the SMT-Block. It reduces the throughput significantly even though their MACs are lower than AsCAN (see Table 3 in the rebuttal pdf). - **MACs do not account for non-linear accelerator behavior in batched inference.**  Another issue is that MACs do not account for the non-linear behavior of the GPU accelerators in the presence of larger batch sizes. For instance, with small batch sizes (B=1), the GPU accelerator is not fully utilized. Thus, the benchmark at this batch size is not enough. Instead, one should benchmark at larger batch sizes to see consistency between architectures.  - **Lack of efficient CUDA operators for specialized building blocks.** Many new architectures propose specialized and complex attention or convolution building blocks. While these blocks offer new perspectives and better MACs-vs-performance trade-offs, it is likely that their implementation relies on naive CUDA constructs and does not result in significant throughput gains. For instance,  - Bi-Former architecture introduces Bi-Level Routing Attention (BRA), which computes regional queries and keys and constructs a directed dependency graph. It computes attention between top-k close regions. Their implementation (see Algorithm 1) uses a top-k sorting operation and performs many gather operations on the queries and keys. We believe such an implementation would benefit from writing custom efficient CUDA kernels.  
- RMT (Retentive Networks Meet Vision Transformers) architecture extends the notion of temporal decay in the spatial domain. It computes the Manhattan distance between the tokens in the image. It includes two separate attention operations along the height and width of the image. This process invokes many small kernels along with reshape and permute operations. - **Use accelerator-friendly operators.** Depending on the hardware, some operators are better than others. Depth-wise separable convolutions reduce the MACs, but they may not necessarily be efficient for particular hardware. Excessive use of depth-wise separable convolutions should be avoided in favor of full convolutions wherever possible. For instance,  - MogaNet extensively uses depth-wise convolutions with large kernel sizes along with concatenation operations. These operators reduce the multiply-addition counts, but these are not necessarily efficient on high-end GPU accelerators. Similarly, MaxViT uses MBConv as the convolutional block.  - Even on mobile devices, recently proposed MobileNetV4 architectures include full convolutions in the early layers to fully utilize the mobile accelerators. We design AsCAN keeping the above points in mind. Namely, - We use highly efficient building blocks. FusedMBConv provides much higher throughput than MBConv blocks and invokes a full convolution followed by a point-wise projection. Both of these operations are highly efficient on a high-end GPU accelerator. Similarly, we rely on the vanilla transformer block with efficient native CUDA implementation (PyTorch has FlashAttention operators). Our ablations in Table 1 in the main text demonstrate the effectiveness of these blocks compared to other complex building blocks. - We do not excessively use operations that do not contribute to MACs, such as window partitioning (permute, reshape) in MaxViT or concatenation operations in SMT. 
We further do not alternate C and T blocks within a stage to avoid multiple reshape operations between these blocks.   - Further, compared to pure convolutional blocks such as EfficientNetV2, we heavily use transformer blocks in the later stages. The transformer blocks capture global dependencies between the features computed by convolutional layers in the early stages.  It yields better performance. We believe the above discussion should help clarify why MACs do not translate directly to throughput/latency gains on the high-end GPU accelerators and why our design choices in AsCAN help us achieve better throughput gains than existing architectures. --- Rebuttal 4: Title: Official Comment by Reviewer R6Bu Comment: Thank you very much for the provided discussion, I updated my rating assuming that this analysis will be part of the final manuscript. --- Rebuttal Comment 4.1: Comment: Dear Reviewer R6Bu, Thank you for your feedback and positive rating! We are glad to know that our responses address all of your concerns. We will include the discussed analysis in the final manuscript. Thanks & Regards, Authors
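The MAC arithmetic behind the depth-wise separable convolution point in the discussion above can be made concrete. The following sketch is illustrative only, with example layer sizes of our own choosing rather than figures from the paper:

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for a standard k x k convolution on an h x w map."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """Depth-wise k x k convolution followed by a 1x1 point-wise projection."""
    return h * w * c_in * k * k + h * w * c_in * c_out

# Example: 56x56 feature map, 64 -> 128 channels, 3x3 kernel.
full = conv_macs(56, 56, 64, 128, 3)
sep = depthwise_separable_macs(56, 56, 64, 128, 3)
print(round(full / sep, 1))  # prints 8.4
```

The separable variant needs roughly 8x fewer MACs here, yet, as the discussion stresses, such a reduction need not translate into GPU throughput gains, since it replaces one large matrix-multiply-friendly kernel with several smaller, less accelerator-friendly ones.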
Summary: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation In this paper, AsCAN combines both convolutional and transformer blocks. The authors revisit the key design principles of hybrid architectures and propose a simple and effective asymmetric architecture, where the distribution of convolutional and transformer blocks is asymmetric, containing more convolutional blocks in the earlier stages, followed by more transformer blocks in later stages. This paper contains extensive experimental results demonstrating its efficiency. However, it has several weak points. Strengths: This paper contains extensive experimental results demonstrating its efficiency. Weaknesses: 1. The main contribution is to combine convolutional and transformer blocks to obtain better performance with reduced latency. However, the theoretical analysis is insufficient. The asymmetric architecture is also used to generate images. Although the extensive experiments make the content flow well, in-depth theoretical analysis is lacking. 2. The performance enhancements reported in this paper are marginal. For example, Figure 3 shows the proposed architecture achieved the best trade-off. However, I think the enhancements are very small. More in-depth analysis and discussion of expected gains are required. 3. In the image generation part, the proposed model is used as a backbone. Is there any internal analysis such as weight distribution and training characteristics? Technical Quality: 2 Clarity: 2 Questions for Authors: Please, see the section of Weakness. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We provide additional clarifications in this response to the best of our understanding of the reviewer's concerns. **(Q.1) Lack of theoretical analysis.** We propose a new design for hybrid convolutional-transformer architectures and apply this new architecture to different applications (image recognition, class conditional generation, text-to-image generation, etc.). We show that our proposal achieves significantly better performance vs latency trade-offs. We analyze the throughput on various accelerators and show the theoretical FLOPs required by all the models. Our asymmetric architectures consist of transformer blocks with a quadratic complexity in the input sequence length. It would be good if the reviewer could elaborate on the theoretical analysis required in this neural network architecture design. **(Q.2) Marginal performance enhancements.** It is unclear why the reviewer thinks performance enhancements are marginal in this work. Other reviewers have already acknowledged that performance improvements are non-trivial (see strengths mentioned by reviewers 3CqY, R6Bu, hpXB, and fxge). To be concrete, - For image recognition tasks (see Figure 3, Table 7), AsCAN architectures achieve similar accuracy with up to $2\times$ throughput increase compared to other architectures such as MaxViT, FasterViT, ConvNeXt, CoAtNet, etc. We have incorporated other related works in the rebuttal pdf (see Table 3 and Figure 1) that show similar gains in performance. - On class conditional generation (see Table 3), our architecture achieves similar FID scores with less than half the FLOPs. We have included latency and $512\times512$ generation tasks in the rebuttal pdf (see Table 1). - Similarly, on the Text-to-Image generation task (see Table 4, Table 5, Table 11), our models achieve much better resource efficiency and image generation quality when compared to existing baselines. 
**(Q.3) Analysis on training characteristics.** We have performed experiments to ablate and improve the training quality. - We analyzed the model performance with and without pre-training on the ImageNet-1K task. Appendix Figure 8 shows the evolution of the FID score with the number of training iterations. It shows that training the model without pre-training on the ImageNet-1K task results in slower convergence, and the final FID achieved by the model without pre-training is worse than the one achieved with pre-training on this task. - We analyzed the importance of noise levels between $256\times256$ and $512\times512$ resolution stages in Appendix Figure 7. It shows that different resolutions prefer different input noise for the diffusion process. This helped us decide on noise=0.01 for $256$ resolution and noise=0.02 for $512$ resolution. --- Rebuttal 2: Comment: Thanks for your rebuttal. In Tables 1 and 2, your architectures significantly reduced the inference time on GPUs while providing only marginal performance enhancements. I understand the contributions in terms of reduced computation on GPUs. The throughput gain with batch=1 seems to be small compared to the cases with batch=16 and 64 in Table 7. I think that the reason for the throughput enhancements on GPUs could be explained from a structural point of view. --- Rebuttal 3: Title: Response to Reviewer Comments Comment: We thank the reviewer for reading the rebuttal and responding promptly. We would like to clarify some remarks below. - We appreciate that the reviewer acknowledges we achieved significantly reduced inference time. In all our experiments, we analyze the trade-off between performance (top-1 accuracy / FID score) and latency (throughput measured in images processed per second) achieved by the architectures. We would like to humbly clarify that we do not claim that we significantly improve performance.
Instead, we claim to achieve significantly better performance *vs.* latency trade-offs. This can be observed by our empirical evaluations (see Table 7 in the main text, Figure 1, Table 1, and 3 in the rebuttal pdf), where we obtain significantly better throughput to achieve similar performance as other models. Vice-versa, we achieve better performance at the same throughput (see Figure 1 in rebuttal pdf). Thanks for the comments from the reviewer. We will further improve the writing of this paper to make it more clear about the claims of this paper. - Since we focus on designing architectures suitable for GPUs, we provide throughput numbers for different batch sizes (B=1, 16, 64). At this large scale, batched inference (with B>1) makes a lot more sense since lower batch sizes (*e.g.*, B=1) do not utilize the GPU memory fully and end up returning a very non-linear behavior. Even at B=1, we still achieve much better performance *vs.* throughput trade-off than many baselines. We hope the above response could help address the concern of the reviewer. Please let us know if the reviewer has other questions and we would be very happy to help answer. --- Rebuttal Comment 3.1: Comment: Dear Reviewer jBHn, We would like to thank you again for your valuable feedback on our paper. As the period for the Author-Reviewer discussion is closing very soon, we would like to use this opportunity to kindly ask if our responses sufficiently clarify your concerns. We sincerely appreciate your time and consideration. Best Regards, Authors --- Reply to Comment 3.1.1: Comment: Dear Reviewer jBHn, We would like to thank you again for your valuable feedback on our paper. As the period for the Author-Reviewer discussion is closing today, we would like to use this opportunity to kindly ask if our responses sufficiently clarify your concerns. We sincerely appreciate your time and consideration. Best Regards, Authors
Summary: The authors present AsCAN, a hybrid architecture that combines convolutional and transformer blocks and can be applied to both visual recognition and generation. This architecture features an asymmetric distribution, with more convolutional blocks in early stages and more transformer blocks in later stages. It demonstrates favorable results in large-scale text-to-image tasks, outperforming recent models in both public and commercial domains, and provides better throughput. Strengths: [Intuitive Design] The authors leverage existing vanilla attention along with the FusedMBConv block to design the new architecture, called AsCAN. Their main philosophy revolves around the asymmetric distribution of the convolutional and transformer blocks across the stages of the network: more convolutional blocks in the early stages mixed with a few transformer blocks, while this trend reverses in the later stages, favoring more transformer blocks with fewer convolutional blocks. [Extensive Experiments] The proposal is thoroughly evaluated in both classification and generation tasks, achieving state-of-the-art results in both. [Scalability] The authors show that the proposal scales well in the regime of Tiny-Large recognition models. Weaknesses: [Unclear explanation] - Please provide clearer and more intuitive explanations of how the proposal can achieve higher throughput while maintaining or improving accuracy. [Model/Data/Resolution Scalability] - Please demonstrate how the proposal can scale up to larger recognition models (XL/XXL/huge regime), as it currently only shows tiny to large models in the appendix. - Please illustrate how the proposal can scale up to larger generation models. Specifically, when FLOPs are matched similarly to the FLOPs shown in Table 3 (100-150G), can we achieve better FID scores? - Please illustrate how the proposal is scalable w.r.t. the data size. The appendix Table 11 provides only a snapshot on 450M images.
- Please illustrate how the proposal is scalable for higher resolution image processing. For example, does the current proposal perform better in both recognition and generation compared to other architectural designs when the input resolution increases (224 -> 512 -> 1k, etc.)? Especially focus on generation task. [Minor] - The terminology (e.g., equal blocks in remaining stages, asymmetric vs. symmetric) can be improved for better clarity. - The table formats (e.g., Table 1 and Table 2 heights are different) can be improved. Technical Quality: 3 Clarity: 3 Questions for Authors: Several design choices are not thoroughly evaluated. - Why is C placed before T? Do you have quantitative results on placing T before C? - Why is the first stage fixed? We can introduce (adaptive) pooling to incorporate T in the early stage and compare it with pure convolutions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Macro-level architectural ablation experiments are mostly performed on the classification task due to resource limitations. This fundamentally limits the generalizability of the conclusions to other tasks. We cannot draw any strong conclusions derived from classification experiments for the generation task. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for reading the paper thoroughly and providing invaluable feedback. Below, we have tried to answer your questions and concerns. While we answered some questions in the main rebuttal, we reiterate their highlights for completeness. **Explanation of higher throughput while maintaining or improving accuracy.** - Many existing works design architectures based on parameter count and FLOPs, but this typically does not translate into inference throughput gains. Some of these issues come from using operators that do not contribute to parameter count and FLOPs but require non-trivial runtime, such as reshape, permute, etc. Others originate from the lack of efficient CUDA operators for these specialized attention and convolutional blocks. - For instance, MaxViT consists of MBConv and Axial-attention transformer blocks. MBConv block is more friendly for mobile devices due to separable convolutions but hurts throughput on high-end GPU accelerators. Similarly, Axial-attention heavily invokes the permute operations. It results in additional overhead and hurts throughput. - In contrast, we utilize building blocks that are efficient for high-end GPU accelerators, and our experimental design directly measures the inference throughput on different accelerators to search for optimal performance vs throughput trade-offs. This search favors asymmetric design with more convolution blocks in the early stages and more transformer blocks in the later stages. This helps reduce the occurrence of reshape and permute operations appearing in instances where C and T blocks are interleaved repeatedly. **Scaling to larger architectures.** There are many strategies to scale the asymmetric design. In one direction, we can train our existing models with larger input resolution to achieve better top-1 accuracy. 
For instance, training AsCAN-L with $384$ resolution results in $86.2$\% top-1 accuracy, compared to training with $224$ resolution, which yields $85.2$\%. In the other direction, we can scale these base architectures similar to previous works (CoAtNet, MaxViT, EfficientNetV2, etc.) by proportionally scaling the width and block repetitions in the stages (S0, S1, S2, S3). For instance, we can scale to an AsCAN-XL variant by increasing the number of C and T blocks as (CC, C$^{4}$T$^{2}$, C$^{8}$T$^{8}$, C$^{4}$T$^{8}$), resulting in a $340$M parameter model that achieves $86.7\%$ top-1 accuracy. Even larger models can be obtained by similarly scaling the stages S1, S2, and S3. **Scaling Text-to-Image Generation w.r.t. data size.** While it would be good to understand how model performance scales w.r.t. data size, unfortunately, we do not have the compute resources to train multiple such text-to-image generative models at any data scale higher than $450$M image-text pairs. **Class conditional generation with $512\times512$ resolution.** We trained our small UNet variant on the $512\times512$ generation task. We report its FLOPs, throughput, and FID scores in the rebuttal pdf (see Table 1). On this task, DiT-XL/2 with $525$G FLOPs and $51$ throughput achieves an FID score of $3.04$. Similarly, U-ViT-H/2 with $546$G FLOPs and $45$ throughput achieves an FID score of $4.05$. In contrast, our smaller model with $224$G FLOPs and $130$ throughput achieves an FID score of $3.15$. Thus, it has more than twice the throughput with a similar FID score. **Class conditional $256\times256$ generation with 100G FLOPs.** We created an additional class conditional generation model at $256\times256$ resolution with an asymmetric model using 103G FLOPs. It achieves a throughput of $360$ samples/sec with an FID score of $2.08$, while the DiT-XL/2-G model achieves a throughput of $293$ samples/sec with an FID score of $2.27$.
We will include higher FLOPs models in the final version. **Design choice ablations.** We have included ablative results for configurations with T before C, as well as configurations where the first stage consists of T blocks (see Table 2 in the rebuttal pdf). In the instances where the first stage consists of T, the throughput is significantly lower than in the instances where C is the first stage. Similarly, the configurations with T before C do not achieve a similar accuracy vs. throughput trade-off to the configurations where C appears before T. **Generalizability of architecture search on ImageNet-1k.** We use the ImageNet-1K task for the architecture search problem since it is computationally cheaper for ablations when compared to other tasks and is still representative enough for the vision domain. To show its benefits, we use this simple design in other tasks without any task-specific optimizations. In the class-conditional generation, our asymmetric architecture achieves similar FID as various state-of-the-art models with less than half the FLOPs (translating into half latency). We see similar improvements in other tasks like Text-to-Image generation. Thus, we can conclude that the search on the ImageNet-1k task generalizes to other tasks. Indeed, we would expect to achieve even better performance once we optimize these modules per task, but this process is computationally expensive. **Clarity in terminology.** Thanks for pointing out the formatting and terminology issues. We will address these in the final version. --- Rebuttal 2: Comment: Dear Reviewer 3CqY, Thank you very much for your valuable and constructive feedback. We have included detailed explanations and additional experiments in response to your questions. As the deadline for the discussion period approaches, we would appreciate your review of these explanations to confirm that they fully meet your expectations and resolve any remaining concerns. Thank you once again for your insightful review.
Best regards, Authors --- Rebuttal Comment 2.1: Comment: Dear Reviewer 3CqY, We would like to thank you again for your valuable feedback on our paper. As the period for the Author-Reviewer discussion is closing very soon, we would like to use this opportunity to kindly ask if our responses sufficiently clarify your concerns. We sincerely appreciate your time and consideration. Best Regards, Authors --- Reply to Comment 2.1.1: Comment: Dear Reviewer 3CqY, We would like to thank you again for your valuable feedback on our paper. As the period for the Author-Reviewer discussion is closing today, we would like to use this opportunity to kindly ask if our responses sufficiently clarify your concerns. We sincerely appreciate your time and consideration. Best Regards, Authors
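The scaling discussion in the rebuttal above quotes stage configurations in a compact notation, e.g. the AsCAN-XL variant (CC, C⁴T², C⁸T⁸, C⁴T⁸). A small sketch can make the asymmetric distribution behind that notation concrete; the `expand` helper below is hypothetical, introduced only to illustrate how the notation maps to per-stage block counts:

```python
import re

def expand(config: str) -> list:
    """Expand compact stage notation, e.g. 'C4T2' -> ['C','C','C','C','T','T']."""
    blocks = []
    for kind, count in re.findall(r"([CT])(\d*)", config):
        blocks += [kind] * (int(count) if count else 1)
    return blocks

# AsCAN-XL stages as quoted in the rebuttal: (CC, C^4 T^2, C^8 T^8, C^4 T^8).
stages = ["CC", "C4T2", "C8T8", "C4T8"]
per_stage = [expand(s) for s in stages]

c_counts = [s.count("C") for s in per_stage]  # [2, 4, 8, 4]
t_counts = [s.count("T") for s in per_stage]  # [0, 2, 8, 8]

# The transformer share per stage (0, 1/3, 1/2, 2/3) grows monotonically:
# convolution-heavy early stages give way to transformer-heavy later stages.
print(c_counts, t_counts)
```

Scaling to larger variants then amounts to growing the repetition counts in this notation (stages S1, S2, S3) while preserving the increasing transformer share.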
Rebuttal 1: Rebuttal: We are grateful to all the reviewers for their constructive and detailed feedback. We have included the following additional experiments in the attached rebuttal pdf: 1. Figure 1: top-1 accuracy vs. throughput trade-off figures for both A100 and V100 GPUs 2. Table 1: additional details on class conditional generation (throughput benchmark and experiments for $512$ resolution) 3. Table 2: analysis of the impact of placing $C$ blocks before $T$ in our hybrid architecture. 4. Table 3: comparison of other related works pointed out by reviewer hpXB with AsCAN architectures on the ImageNet-1k classification task. We also include the memory consumption. Below, we address common questions and concerns raised by the reviewers. **(Q.1) Novelty and Intuition Behind Higher Throughput.** - At the heart of our proposal lies the asymmetric distribution of convolution and transformer blocks across the stages of a hybrid architecture. - Asymmetric architectures outperform many existing models that utilize specialized attention and convolutional operators. These works design models based on parameter count and FLOPs, but this typically does not translate into inference throughput gains. Some of these issues come from operators that do not contribute to parameter count and FLOPs but require non-trivial runtime, such as reshape, permute, etc. Others originate from the lack of efficient CUDA operators for these specialized attention and convolutional operators. - For instance, MaxViT consists of MBConv and Axial-attention blocks. MBConv is mobile-device friendly due to separable convolutions but hurts throughput on GPUs. Similarly, Axial-attention heavily invokes permute operations, lowering throughput. - In contrast, our proposal directly measures the throughput on different accelerators and incorporates blocks that yield higher throughput. - We show the benefits of our design on multiple tasks such as recognition and generation. We show that simple design choices (Sec.
3.1) yield architectures with existing blocks (FusedMBConv and Transformer) that achieve state-of-the-art trade-offs between performance and latency. **(Q.2) Base architecture for search.** There are many options for a base architecture. We chose a popular design used in MaxViT and CoAtNet papers. A more thorough approach would involve a neural architecture search for the base architecture and the search for optimal components (C and T blocks). However, this search process is computationally expensive due to the exponentially large search space. Thus, for resource efficiency, it is reasonable to use an existing base architecture. We provide simple scaling strategies to obtain larger models from the base architecture. Besides, our experiments on the class conditional and text-to-image generation tasks show that asymmetric architectures achieve better performance-latency trade-offs. **(Q.3) Generalizability of architecture search on ImageNet-1k.** We use the ImageNet-1K task for the architecture search problem since it is computationally cheaper for ablations when compared to other tasks and is still representative enough for the vision domain. To show its benefits, we use this simple design in other tasks without any task-specific optimizations. In the class-conditional generation, our asymmetric architecture achieves similar FID as various state-of-the-art models with less than half the FLOPs (translating into half latency). We see similar improvements in other tasks like Text-to-Image generation. Thus, we can conclude that the search on the ImageNet-1k task generalizes to other tasks. Indeed, we would expect to achieve even better performance once we optimize these modules per task, but this process is computationally expensive. **(Q.4) Class Conditional Generation.** We have included some requested experiments on this task in Table 1 in the attached rebuttal pdf. 
- *Throughput numbers.* We benchmark the throughput (images generated per second) on an A100 GPU for all the baselines. Table~1 shows the compute time for one forward pass of batch size $64$ for each of these baselines. To achieve an FID score of $2.23$, our $52$G FLOPs model achieves a $556$ samples/sec throughput while state-of-the-art DiT-XL/2-G with $118$G FLOPs achieves $293$ samples/sec. - *Scaling to higher resolution.* To study the scaling of the model for higher resolution generation, we train the same small asymmetric UNet architecture for class-conditional $512\times512$ ImageNet-1k generation. Similar to the $256\times256$ task, we observe more than twice the throughput while achieving similar FID scores as the baseline architectures. Note that our text-to-image model can generate images in higher resolutions such as $1024\times1024$, $1536\times768$, etc. **(Q.5) C before T constraint.** Our intuition behind this constraint comes from the following observations: - Interleaving C and T in a stage performs many tensor reshape operations that do not add any cost to FLOP. However, these operations count towards runtime, and having many such operations ends up lowering the model throughput. - Given a feature map, convolutional operators can capture local and scale-aware features, while transformer blocks try to work out the dependency between all feature values. Thus, it would be more beneficial to perform a convolutional operator first to capture these local and scale features, followed by pairwise dependencies between all tokens. - It helps to narrow the search process since interleaving these blocks would result in many possibilities and be hard to evaluate computationally. - Further, to validate our assumptions, we have included configurations where T appears before C in Table~2 in the rebuttal pdf. In the models where the first stage consists of T, the throughput is significantly lower than in instances where C is the first stage. 
Similarly, models with T before C do not achieve a similar accuracy vs. throughput trade-off to the models where C appears before T. Pdf: /pdf/e1068c98109e9467f2e2734f7bca0774e14d1fa0.pdf
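The preference for placing T blocks in later stages (see Q.1 and Q.5 above) also has a simple cost intuition: self-attention is quadratic in the number of tokens, so it becomes far cheaper after spatial downsampling. The following is an illustrative back-of-the-envelope sketch; the stage resolutions and widths are hypothetical, not the paper's exact configuration:

```python
def attention_cost(height: int, width: int, dim: int) -> int:
    """Rough cost of one self-attention layer: O(N^2 * d) with N = H*W tokens.
    Counts only the two N x N matrix products (QK^T and attention @ V)."""
    n = height * width
    return 2 * n * n * dim

# Hypothetical stage resolutions for a 224x224 input: stride-4 stem, then
# /2 spatial downsampling and channel doubling per stage.
stages = {"S1": (56, 56, 64), "S2": (28, 28, 128),
          "S3": (14, 14, 256), "S4": (7, 7, 512)}
costs = {name: attention_cost(h, w, d) for name, (h, w, d) in stages.items()}

# With these numbers, an attention layer in S1 is 512x more expensive than in
# S4, which is why the asymmetric design keeps T blocks out of early stages.
print(costs["S1"] // costs["S4"])  # -> 512
```

Convolutions, by contrast, scale linearly in the number of spatial positions, so they remain affordable in the high-resolution early stages where attention is not.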
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper presents a hybrid neural network architecture that incorporates both convolution-based and vision transformer (ViT)-based building blocks for discriminative and generative modeling. The proposed convolutional block, labeled (C), is identical to EfficientNetV2's FusedMBConv: it contains a standard 3x3 convolution with a 4x dimension expansion and an extra squeeze-excitation (SE) module, followed by a 1x1 convolution, with GELU replacing SiLU. The transformer-like block, termed vanilla Transformer (T), processes the vanilla self-attention and MLP blocks in parallel, in contrast to the original sequential Transformer block. For ImageNet experiments, the authors follow the standard protocol to evaluate the proposed models with ImageNet-1k top-1 accuracies, where the ordering of C and T blocks in the proposed model is empirically adjusted to optimize the trade-off between ImageNet-1K top-1 accuracy and computational budgets, including throughput and the number of parameters. The resulting optimal model is an asymmetric architecture, wherein the numbers of convolution and transformer blocks differ across stages. Additionally, the architecture is applied to UNet-based diffusion models (i.e., DDPM), structured similarly to the ImageNet architecture, to assess the effectiveness of the proposed design. The generative model is evaluated by the FID score on generating 256x256 ImageNet-1K images using class conditional generation. The optimal architecture for this turned out to be asymmetric across the UNet stages as well. Strengths: + This paper is easy to follow and well-written. + Extensive experimental results are provided, supporting the claim in both discriminative and generative modeling. + The performance on ImageNet-1K classification looks impressive compared with some recent models. Weaknesses: - The paper lacks intuition and clear reasoning for the architectural design choices.
No explanation or key insights are provided on why the proposed design or employed modules perform better than other alternatives. - It is unclear why the S1 stage should consist solely of two convolution layers. It seems that the traditional stem (having the stride 4) is separated into two stages (i.e., S0, S1). Are there any other reasons for this design choice? - The authors claimed that the searched architecture is asymmetric, but it is unclear why an asymmetric combination of convolution and transformer blocks would perform better. The rationale behind this architectural choice needs further clarification. - While the chosen FusedMBConv and Vanilla Transformer in Table 1 appear promising, it is unclear why these options outperform others. - The rationale for why the generative model achieves a better FID score using modules optimized for the recognition task is not clearly explained. - The positioning of self-attention modules after convolution-based layers (in a stage or across the entire network), as seen in the architectural choices, has already been explored in previous studies [1, 2]. Therefore, the architectural results presented in Table 2 can be considered expected outcomes based on the known knowledge. The authors are encouraged to provide new insights or takeaways. - There is no held-out set for searching architectural choices. It appears that the proposed architecture was optimized over the ImageNet validation set for the best performance, but a more appropriate setup would involve using a separate held-out set. - The results from pre-training on ImageNet-21K are not as impressive as those from ImageNet-1K. This reviewer suspects that the reason may be that the architecture is highly fine-tuned to the ImageNet-1K dataset. 
- The ADE20K Semantic Segmentation results are not fairly compared and are missing numbers for recent architectures; the competing methods (e.g., Swin) enjoy smaller computational costs than the proposed models, and only the Swin and FasterViT architectures are included for comparison. - The proposed Transformer architecture (a parallel processing architecture of self-attention and MLP) is referred to as the "vanilla" block, but it is known that a typical "vanilla" Transformer processes self-attention and MLP sequentially. This terminology used by the authors could lead to confusion. Furthermore, the parallel architecture was proposed in [a] and employed in many works like [b]: - [a] MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning, arXiv 2020 - [b] Simplifying Transformer Blocks, ICLR 2024 - It would be beneficial to provide memory consumption during training and inference for the proposed architectures. - Comparing the proposed method with some recent works would reveal its effectiveness more clearly. The authors are encouraged to compare performance trade-offs with the following recent works [3, 4, 5, 6]. [1] MobileNetV4 - Universal Models for the Mobile Ecosystem, arXiv 2024 [2] Scale-Aware Modulation Meet Transformer, ICCV 2023 [3] BiFormer: Vision Transformer with Bi-Level Routing Attention, CVPR 2023 [4] MogaNet: Multi-order Gated Aggregation Network, ICLR 2024 [5] RMT: Retentive Networks Meet Vision Transformers, CVPR 2024 [6] DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs, ECCV 2024 Technical Quality: 2 Clarity: 3 Questions for Authors: - See the weaknesses - Can the authors provide the performance trade-off, as in Fig. 3, on an A100 GPU? Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Limitations are provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for reading the paper thoroughly and providing invaluable feedback. Below, we have tried to answer your questions and concerns. While we answered some questions in the main rebuttal, we reiterate their highlights for completeness. **Stem Design Choice.** This is a popular stem design and has been utilized in related works such as MaxViT, CoAtNet, etc. We have also included ablations with some configurations that replace convolution blocks with transformers. **Rationale behind the asymmetric design.** Existing hybrid architectures such as MaxViT, FasterViT, CoAtNet, etc., follow a symmetric design in which C and T blocks are uniformly distributed across stages or within a stage. In contrast, AsCAN architectures recommend an asymmetric distribution with a preference for more convolutional blocks in the early stages and more transformer blocks in the later stages. Further, since even this search space is large for a reasonable computational budget search, we restrict ourselves to some design choices to find reasonably performing architectures in this search space. In Table 2, we perform ablations on various combinations of these C and T blocks and show that asymmetric configurations yield better performance-latency trade-offs. It is further evaluated in Figure 3 and Table 7, which compare AsCAN with existing hybrid architectures on various accelerators. **Why modules optimized for recognition tasks generalize on generative tasks.** We use the ImageNet-1K task for the search problem since it is computationally cheaper for ablations when compared to other tasks and is still representative enough for the vision domain. To show its benefits, we use this simple design in other tasks without any task-specific optimizations. In the class-conditional generation, our asymmetric architecture achieves similar FID as various state-of-the-art models with less than half the FLOPs (translating into half latency). 
We see similar improvements in other tasks like Text-to-Image generation. Thus, we can conclude that the search on the ImageNet-1k task generalizes to other tasks. Indeed, we would expect to achieve even better performance once we optimize these modules per task, but this process is computationally expensive. **Architectural results presented in Table 2 can be considered expected outcomes based on the known knowledge.** - We already compare against SMT models ([2] Scale-Aware Modulation Meet Transformer, SMT-B, and SMT-S variants), see Figure 3 and Table 7 for comparison. For instance, AsCAN-B architecture achieves $84.73\%$ top-1 accuracy with a throughput of $590$ samples per second on a V100 GPU, while SMT-B architecture achieves $84.3\%$ top-1 accuracy with a throughput of $243$ samples per second. Notice that many operations in SMT architectures are not high-end accelerator friendly and thus their lower FLOPs do not translate to higher throughput. Further, SMT architecture has a symmetric distribution of convolution and transformer blocks (initial stages have only C and later stages only have T). - Similarly, MobileNetV4 architectures are targeted toward mobile device applications and hence focus on layers that are mobile-friendly. For instance, having depth-wise separable convolutions does not utilize the full accelerator capacity on high-end GPUs. Further, this architecture only leverages transformer blocks sparingly in a few configurations. - Besides, we already compared against many hybrid architectures that have such a design (either C followed by T in different stages) or uniform mixing of C and T blocks within a stage. Instead, we are proposing an asymmetric distribution of these blocks in different stages. **Provide memory consumption for the proposed architectures.** We have included the inference memory consumption along with the numbers for the remaining new architectures requested by the reviewer. 
As illustrated in Table 3 (see attached rebuttal pdf), AsCAN architectures have lower memory consumption compared to other architectures. For instance, for batch-size $64$, the MogaNet-XL model consumes $74.9$GB memory while AsCAN-L consumes $21.2$GB memory. This is still lower than the memory consumed by RDNet-L model which is a purely convolutional architecture and requires $26.2$GB memory. **Compare with recently proposed works.** Thank you for listing the missing related works. We have included a performance vs throughput comparison with these works in Table~3 in the attached rebuttal pdf. We have also included these architectures in Figure 1 for the performance-latency trade-off on V100 and A100 GPUs. In this list, there are architectures that have significantly lower FLOPs but this does not result in significant gains in the throughput when measured on high-end accelerators. This is due to a large presence of operators such as permute which do not add floating point operations but require additional runtime. In contrast, AsCAN architectures still outperform these works on inference latency vs top-1 accuracy trade-offs. **Performance tradeoff like Fig. 3 on an A100 GPU.** Thank you for raising this point. We already report throughput and performance in the Appendix (Table 7). We have included the requested performance trade-off figure for an A100 GPU in the rebuttal pdf. This follows a similar trend as the one for V100 GPU. **Confusion in Transformer block naming.** Thank you for pointing this out, we will update the terminology in the paper. **Lack of held-out set in ImageNet experiments.** We follow previous works in the experimental setup. **Semantic Segmentation results missing recent architectures.** Thank you for the feedback. We will include other results for semantic segmentation in the final version. --- Rebuttal 2: Comment: Dear Reviewer hpXB, Thank you very much for your valuable and constructive feedback. 
We have included detailed explanations and additional experiments in response to your questions. As the deadline for the discussion period approaches, we would appreciate your review of these explanations to confirm that they fully meet your expectations and resolve any remaining concerns. Thank you once again for your insightful review. Best regards, Authors --- Rebuttal 3: Comment: Dear Reviewer hpXB, We would like to thank you again for your valuable feedback on our paper. As the period for the Author-Reviewer discussion is closing very soon, we would like to use this opportunity to kindly ask if our responses sufficiently clarify your concerns. We sincerely appreciate your time and consideration. Best Regards, Authors --- Rebuttal Comment 3.1: Comment: Dear Reviewer hpXB, We would like to thank you again for your valuable feedback on our paper. As the period for the Author-Reviewer discussion is closing today, we would like to use this opportunity to kindly ask if our responses sufficiently clarify your concerns. We sincerely appreciate your time and consideration. Best Regards, Authors --- Rebuttal 4: Comment: Sorry for the very late reply to the responses. I greatly appreciate the detailed responses to my concerns, but my concerns still remain: >Existing hybrid architectures such as MaxViT, FasterViT, CoAtNet, etc., follow a symmetric design in which C and T blocks are uniformly distributed across stages or within a stage. In contrast, AsCAN architectures recommend an asymmetric distribution with a preference for more convolutional blocks in the early stages and more transformer blocks in the later stages. >Asymmetric architectures outperform many existing models that utilize specialized attention and convolutional operators. These works design models based on parameter count and FLOPs, but it typically does not translate into inference throughput gains. 
Some of these issues come from operators that do not contribute to parameter count and FLOPs but require non-trivial runtime, such as reshape, permute, etc. Others originate from the lack of efficient CUDA operators for these specialized attention and convolutional operators. The above authors' responses still do not give me a rationale for the architectural choice: I am still curious about why the asymmetric distribution of convolutions and transformers works well. Even though performance improvements have been observed, I feel that the design concept might not be a universal option beyond ImageNet. Additionally, the CTTTTC block in the right middle of the generative model design (in Fig. 2) suggests that the authors' claim may vary depending on the specific scenario. I believe a NeurIPS paper should provide rationale/insights into the suggested method, enabling readers to apply it to other tasks or domains. More evidence of the transferability or universality of the discriminative architecture is certainly needed. I understand that time constraints may hinder additional experiments in this discussion round, but these could be considered in future revisions. Another concern is that the authors claim higher acc vs. throughput trade-offs of a proposed architecture as a key contribution. However, from my perspective, the higher throughput largely depends on the use of FusedMBConv and the parallel Transformer architecture, as demonstrated in Table 1. Therefore, this may not be a unique contribution of this work, as it relies on existing GPU-friendly operations. Furthermore, Table 2 does not appear to be a controlled experiment in terms of the computational budget, which is the number of parameters. I believe that constraining the number of parameters would better highlight the trade-offs for "some" convolution-dominated counterparts, as these networks in the table seem to "typically" have more parameters to compute, which could put them at a disadvantage. 
I believe that more controlled experiments might lead to different conclusions, so the contribution of finding an asymmetric architecture in Table 2 is somewhat diluted. I also noticed that Stage 1 (S1) could be merged into the stem (S0), which would reduce the final number of stages to three and deviate from the standard four-stage design. Are there any particular intuitions behind this architectural choice? (I recall seeing similar designs elsewhere, but I can't remember which work it was, and there were no clear explanations there either.) Finally, the authors made great efforts to address my concerns, and the contribution to generative modeling is noteworthy, so I am increasing my rating to 5. The authors are encouraged to reflect all the reviewer's concerns and the responses into the final revision. --- Rebuttal Comment 4.1: Comment: Dear Reviewer hpXB, Thank you for reading our rebuttal and providing a positive rating. We sincerely appreciate your time and efforts. We would like to use this opportunity to clarify some remarks below. **Stem Design and merging stage S1 and S0.** The stem design in our base architecture is similar to many widely used architectures such as MaxViT, CoAtNet, EfficientFormer, etc. Further, we would like to humbly mention that the argument about merging stage S1 and S0 is invalid: S0 is the stem and consists only of convolution layers, while stage S1 consists of C blocks, which in our case are FusedMBConv blocks. **Rationale for the architectural choice.** The discriminative backbone produces class labels (e.g., the ImageNet class) at the end of the last layer, while the generative backbone (UNet) produces a synthesized latent (which is later decoded into an image) with the same size as the input sampled noise. Thus, the two are fundamentally different. The UNet architecture consists of three stages: down, middle, and up. 
Since our search experiments are based on the discriminative backbone, it is easier to mimic the down and up stages with the asymmetric distribution. The middle stage design is borrowed from standard UNet architectures. Besides, we have used our model for the semantic segmentation task as well, showing that the asymmetric design is applicable to a wider range of tasks. **Performance-throughput trade-offs are simply due to GPU-friendly operations.** We respectfully beg to differ with the reviewer. We are not simply using GPU-friendly operations; we also improve the latency-performance trade-off by designing a new architecture. Please kindly note that, given the set of GPU-friendly operations, there exist many configurations in which they can be combined to yield a new architecture. Our asymmetric design choices yield architectures with much better performance-vs-throughput trade-offs (as seen in Table 2 in the main text and Table 2 in the rebuttal pdf). In these ablations, while we can reduce the model size of the convolution-dominated architectures, this would also result in lower performance compared to their current results. We hope the above response helps address the concern of the reviewer. Please let us know if the reviewer has other questions and we would be very happy to answer them. Best Regards, Authors
Bridging OOD Detection and Generalization: A Graph-Theoretic View
Accept (poster)
Summary: This work proposes a framework to address OOD detection and generalization jointly for image data, using a graph representation, where edges are constructed by both self-supervised data transformation probability and supervised labels. An example is provided with theoretical analysis, to show the advantages and disadvantages of the proposed method. Strengths: 1. The paper is well-written with a clear motivation. 2. The paper proposes an algorithm with a graph-based formulation to jointly address OOD detection and generalization, achieving state-of-the-art performance. 3. The paper presents an example with theoretical analysis to study the characteristics of the algorithm and better understand it. 4. The paper proposes a surrogate loss to enhance computational efficiency, which is well-supported. Weaknesses: 1. From the theoretical analysis (lines 200-216), it appears that the OOD generalization ability of this algorithm depends on the relationship between $\alpha$ and $\beta$. This may lead to failures in OOD generalization, while for OOD detection, the method is more effective. 2. As the data transformation is not learnable, $\alpha$ and $\beta$ seem to be fully determined by the data distribution. Hence, there is no guarantee for OOD generalization in some scenarios. 3. The experimental evaluation is consistent with the theoretical characterization, indicating that the model is less effective at OOD and ID classification compared with its OOD detection ability. 4. Although the authors present a theoretical analysis that reflects the advantages and disadvantages of this approach, they do not propose further refinement to handle the OOD generalization failure and limited ID classification performance. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Would more advanced data transformation in SSL enhance the OOD generalization ability? 2. What does it mean by heterogeneous distribution in line 266? 
It seems the major difference to previous methods is the usage of both unsupervised and supervised learning when building the graph. 3. For the FPR metric, why can the proposed algorithm perform so much better than the state-of-the-art OOD detection method? For example, 0.13 vs. 40.76 in ASH in Table 1. Is there any explanation for this significant uplift? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for the positive and constructive feedback on our work! Below we address your questions and comments in detail. > Further refinement for the OOD generalization performance We appreciate your insightful analysis and totally agree with your perspective. Guarantees for OOD generalization may not always hold in some specific scenarios. This is not unique to our approach, but a common challenge underlying OOD generalization algorithms (e.g., when the domain gap is significant) [1]. Inspired by your suggestion, a potential future direction could involve incorporating learnable augmentations, which may further enhance OOD generalization performance. > Would more advanced data transformation in SSL enhance the OOD generalization ability? This is an excellent question! In this paper, we chose to use the same data augmentation transformations as [2] to keep the method simple and user-friendly. We agree that exploring more advanced data transformations could be an interesting direction. > What does it mean by heterogeneous distribution in line 266? By "heterogeneous distribution," we refer to unlabeled data that includes various types of distributional shifts, such as covariate shifts and semantic shifts. This is formally defined in **Definition 2.1** in Section 2. In contrast, the baseline methods assume the unlabeled data exhibits a homogeneous shift, either entirely due to covariate shifts (as in unsupervised domain adaptation) or semantic shifts (as in novel class discovery). For greater clarity, we have revised the wording to "heterogeneous shift." Thanks for calling that out! > For the FPR metric, why can the proposed algorithm perform so much better than the state-of-the-art OOD detection method? For example, 0.13 vs. 40.76 in ASH in Table 1. Is there any explanation for this significant uplift? Great question. To clarify, ASH is a post-hoc OOD detection method, which operates on a model trained solely with ID data. 
In contrast, our method is trained on a combination of ID data and unlabeled data from $\mathbb{P}_\text{wild}$. OOD detection methods that use auxiliary data typically achieve significantly lower FPR compared to post-hoc methods. Therefore, a fairer comparison would be with methods trained on unlabeled data, such as Outlier Exposure, WOODS, and SCONE. Our method favorably outperforms these competitive baselines, including the state-of-the-art method SCONE [3]. ----- References [1] Ye, Haotian, et al. Towards a theoretical framework of out-of-distribution generalization. Advances in Neural Information Processing Systems 34 (2021): 23519-23531. [2] Chen, Xinlei, and Kaiming He. Exploring simple siamese representation learning. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021. [3] Bai, Haoyue, et al. Feed two birds with one scone: Exploiting wild data for both out-of-distribution generalization and detection. International Conference on Machine Learning. PMLR, 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' reply. I will keep positive score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our response! We are glad to hear that our rebuttal addressed your concerns.
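For context, the FPR numbers compared in this exchange are conventionally FPR at 95% TPR: the false positive rate on OOD data at the threshold that keeps 95% of ID samples correctly retained. A minimal sketch of the metric (the convention that a higher score means "more ID-like" is our assumption here):

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR on OOD data at the threshold retaining 95% of ID samples.
    Assumes higher score = more ID-like."""
    # Threshold chosen so that 95% of ID samples score above it.
    thresh = np.percentile(id_scores, 5)
    # Fraction of OOD samples wrongly scored as ID at that threshold.
    return float(np.mean(np.asarray(ood_scores) >= thresh))
```

With well-separated score distributions this metric approaches zero, which is why methods that see auxiliary wild data during training can reach far lower FPR than post-hoc scorers.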
Summary: The paper proposes a graph-theoretic framework to address out-of-distribution (OOD) generalization and detection. The framework models the data using a graph, where vertices represent data points and edges indicate similarities based on supervised and self-supervised signals. By leveraging spectral decomposition of the graph's adjacency matrix, the authors derive provable errors for OOD generalization and detection performance. Empirical results demonstrate the effectiveness of the proposed approach. Strengths: - the connection between the OOD problem and partition/clustering analysis from graph theory is interesting - the paper is well-written, with examples for illustration Weaknesses: - despite the case studies for the proposed framework/analysis, the proposed method in its current form seems to be impractical for large-scale problems - the effectiveness of the analysis and proposed method relies heavily on how the edges are/should be constructed in the graph, which itself is a challenging and open question - there are some connections between the analysis and claims that are not very strong (see questions below for more details) - there are some minor formatting/presentation issues in lines 192-195 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. what are the advantages of doing OOD generalization and detection at the same time compared to having a different method for each? 2. what are the novel insights for OOD from this graph-theoretic perspective? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N.A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for the positive and constructive feedback on our work! Below we address your questions and comments in detail. > Practicality of the algorithm/framework You raise an excellent point. Our graph-theoretic framework can be used practically, as detailed in Section 3.2. In particular, the spectral decomposition can be equivalently achieved by minimizing a surrogate objective in Equation (6), which can be efficiently optimized end-to-end using modern neural networks. **Empirically, we have demonstrated this on large-scale dataset including ImageNet (see Section E.2)**. Thus, our approach enjoys theoretical guarantees while being applicable to real-world data. > proposed method heavily rely on how the edges are/should be constructed in the graph. Thank you for raising this important point. We acknowledge that the construction of edges in a graph is indeed a challenging and open question, and this is a crucial aspect of our work. However, **our approach specifically addresses this challenge by proposing a surrogate objective that effectively circumvents the need for explicitly defining and constructing graph edges in a traditional sense**. Instead, we reformulate the problem into a contrastive learning framework, where the relationships between data points are implicitly captured through the use of augmentation transformations. This allows us to leverage the power of graph-based modeling while avoiding the complexities associated with direct edge construction. Moreover, our method is designed to be flexible and adaptable to different scenarios by allowing the augmentation transformation probabilities to guide the implicit graph structure. This design choice not only makes the approach more practical but also robust to variations in the underlying data distribution. 
By providing theoretical guarantees as a function of the parameters that define these probabilities, we ensure that our method is both effective and grounded in solid theoretical foundations. We believe our work represents a significant advancement in how graph-based methods can be applied to OOD detection and generalization. > What are the advantages of doing OOD generalization and detection at the same time compared to have different method for each? There are several benefits of devising a method that can jointly handle both OOD generalization and detection problems: - **Computational efficiency**: Investing in developing and maintaining a single method is typically more cost-effective than supporting two separate methods in inference time. This can be particularly important for organizations with limited resources, or dealing with large volume of data traffic. - **Deployment simplicity**: Deploying and maintaining a single model is generally simpler and less error-prone than managing multiple models. This includes considerations like updates, scaling, and monitoring. - **Improved performance**: The learning tasks of OOD generalization and detection can benefit from each other, as we have demonstrated in this paper. As shown in Table 1, our method excels in both OOD detection and generalization performance, surpassing state-of-the-art methods by a large margin. > Novel insights There are several theoretic insights derived from the graph-theoretic perspective: + **Effectiveness of OOD detection**: As presented in Section 4.3, we primarily analyze the OOD performance in our proposed framework. Theorem 4.2 exhibits that the separability between semantic OOD data and ID data displays a large value, which facilitates OOD detection. The empirical results in Table 1 also validate the effectiveness of OOD detection performance. 
+ **Novel analysis of semantic OOD data**: As presented in Section B, we introduce a novel analysis of the impact of semantic OOD data, thoroughly examining cases where semantic OOD data originates from the same or different domain as covariate OOD data. Theorem B.1. demonstrates that semantic OOD and covariate OOD sharing the same domain could benefit OOD generalization, which can be empirically validated in Section E.4. + **Impact of ID labels on OOD performance**: As presented in Section C, we investigate the effects of ID labels on OOD generalization and detection, providing new insights into how the linear probing accuracy of covariate OOD and separability between ID and semantic OOD data improve with the incorporation of ID label information. Theorem C.1. and C.2. show that incorporating ID labels during pre-training can facilitate both OOD generalization and detection performance, which can also be validated empirically in Tables 2 and 6. > Formatting issue in L192-L195 Great catch! We will fix that in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I do not have further questions. --- Reply to Comment 1.1.1: Title: Reply Comment: Thank you for taking the time to read our response! We are glad to hear that our rebuttal addressed your concerns.
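For illustration, the surrogate objective alluded to in this rebuttal is in the spirit of the spectral contrastive loss (HaoChen et al., NeurIPS 2021), whose minimization corresponds to a low-rank factorization of the augmentation graph's adjacency matrix. A minimal NumPy sketch of that loss, not the paper's exact Equation (6):

```python
import numpy as np

def spectral_contrastive_loss(z1, z2):
    """Spectral contrastive loss over a batch of embedding pairs.
    z1, z2: (n, d) embeddings of two augmented views of the same n images.
    Minimizing it implicitly factorizes the augmentation graph's
    adjacency matrix, so no explicit edge construction is needed."""
    n = z1.shape[0]
    # Attract positive pairs (two views of the same image).
    pos = -2.0 * np.mean(np.sum(z1 * z2, axis=1))
    # Repel all cross-image (negative) pairs via squared similarities.
    gram = z1 @ z2.T
    off_diag = gram[~np.eye(n, dtype=bool)]
    neg = np.mean(off_diag ** 2)
    return pos + neg
```

The key point mirrored in the rebuttal: the loss touches only pairs of augmented samples, never an $n \times n$ adjacency matrix, which is what makes the graph view scalable in practice.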
Summary: The paper introduces a novel graph-theoretic framework to tackle both out-of-distribution (OOD) generalization and detection. By representing data points as vertices in a graph and using the adjacency matrix decomposition, the authors derive data representations that allow for quantifiable error rates in OOD tasks. Theoretical insights are provided through formal error bounds, and empirical results demonstrate the framework's effectiveness and robustness, showcasing significant improvements over existing methods. This framework is practical and scalable, leveraging modern neural networks for efficient optimization on real-world data. Strengths: 1. The idea is novel and the model is both reasonable and sound. 2. The intuition behind the method is clearly described, and the theoretical analysis justifies the model well. 3. Comprehensive experiments demonstrate the effectiveness and robustness of the proposed method, with significant improvements over state-of-the-art techniques. 4. The framework is scalable and practical, utilizing modern neural networks for efficient optimization, making it applicable to real-world data. 5. The use of spectral decomposition of the graph’s adjacency matrix to derive data representations is a novel and effective approach. Weaknesses: 1. The paper builds on the baseline Scone for its experiments. Regarding Definition 2.2, it is unclear why four metrics are used in the experiments while only three are introduced here. Is it because Scone only introduced three? Please provide a reasonable explanation. 2. The field of Graph Neural Networks (GNNs) has mature tasks for OOD generalization and OOD detection. Why did the authors choose to use graphs to address tasks in computer vision instead of leveraging these existing GNN-specific tasks? 3. While the authors provide extensive theoretical insights and proofs, the experimental section seems insufficient. 
The main experiments are largely based on Scone, and the additional experiments only compare performance. If this work is pioneering, this might be acceptable, but given related prior work, the authors should not limit themselves to performance comparisons. They should demonstrate the advantages of their approach in other dimensions, such as time and space efficiency, to prove its superiority over Scone. 4. Figure 4 only visualizes the method proposed in this paper, lacking comparison with other methods. 5. The theoretical analysis in this paper is based on certain assumptions that may not hold in all practical situations, potentially limiting the applicability of the results. These assumptions include the accuracy and representativeness of the graph representations and spectral decompositions used to quantify OOD generalization and detection errors, as well as the ideal conditions for spectral decomposition and the relationships between ID and OOD data. Additionally, the analysis assumes specific distributions of wild data, linear separability of OOD data in the learned representation space, and the particular structure of the graph used in the analysis. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The theoretical analysis relies on several assumptions, such as the accuracy of graph representations and spectral decompositions, and the linear separability of OOD data. Can the authors provide more details on these assumptions and discuss the potential impact if these assumptions do not hold in practical scenarios? 2. The paper uses graphs to handle tasks in computer vision, despite the existence of well-established OOD generalization and detection tasks within the domain of Graph Neural Networks. What motivated the authors to choose this approach? Are there specific advantages that this graph-theoretic framework provides for computer vision tasks? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for the positive and constructive feedback! Below we address your questions and comments in detail. > Questions about the metrics We are happy to clarify this. SCONE adopted four metrics in experiments, **which are identical to ours**. Following SCONE, we introduce the FPR as the primary metric for evaluating OOD detection in problem setup. Since both AUROC and FPR are commonly used, we also report AUROC in our experiments for comprehensive evaluation. For better clarity, we will include AUROC in Definition 2.1. > GNN-specific tasks Thank you for highlighting the connection to the GNN literature! We chose to focus on computer vision tasks for several key reasons: 1. **Established framework**: Image classification tasks are well-established and have been extensively studied in the OOD detection and generalization literature. This allows us to position our work within a widely recognized and classical setting. 2. **Consistency with prior work**: Our problem setup closely builds on prior work, particularly SCONE, which also focuses on the image domain. To ensure evaluation consistency and fair comparisons, it was important for us to remain within this established framework. 3. **Novel graph-based perspective**: Our work introduces significant insights and techniques by applying a graph-based perspective to computer vision tasks. Unlike traditional GNN-based tasks, constructing edges and graphs for image-based tasks is less straightforward and insufficiently explored. To address this, a key innovation of our paper is the introduction of a surrogate objective, which reformulates the graph factorization problem as a contrastive learning objective. This allows our practical algorithm to be efficiently optimized without explicitly operating on a graph adjacency matrix, while still benefiting from the theoretical guarantees provided by the underlying graph-theoretic formulation. 
> Clarification on experiments We believe our experiments are comprehensive and offer significant advancements over SCONE. 1. We conduct more extensive evaluations than SCONE. In Section E.2, we present results on the ImageNet-100 dataset, and in Section E.3, on the Office-Home dataset—which was not considered by SCONE. These results consistently demonstrate our superior performance compared to SCONE. 2. We introduce a novel analysis and ablation study on the impact of semantic OOD data, thoroughly examining cases where semantic OOD data originates from the same or different domain as covariate OOD data. This analysis is theoretically explored in Section B and empirically validated in Section E.4, a focus that SCONE does not address. 3. We investigate the effects of ID labels on OOD generalization and detection, providing new insights into how the performance improves with the incorporation of ID label information (Section C). The empirical results shown in Tables 2 and 6 also validate our theoretical insights. Lastly, we would like to clarify that the inference time and space efficiency of our method are identical to those of SCONE, as both approaches utilize the same neural network backbone. Overall, our work represents a significant theoretical and empirical advancement over SCONE. > More visualizations As suggested, we present visualizations of SCL in the attached PDF. Compared with the baseline, our learning framework effectively pushes the semantic OOD data to be apart from the ID data and pulls the covariate OOD data close to the ID data in the embedding space. Visualization of SCONE can be found in Figure 3 of their paper. > Theoretical assumptions Thanks for the thoughtful question. We are happy to clarify this further. 1. _Accuracy of graph representations and spectral decompositions_: Our theoretical framework indeed leverages graph representations and spectral decompositions, specifically relying on the structure induced by augmentation transformations. 
However, we want to emphasize that our method does not depend on perfect or exact graph representations. Instead, our approach is designed to be robust to variations in the graph structure by incorporating empirical augmentation probabilities that guide the construction of the graph. The spectral decompositions used in our analysis are derived in exact closed form, following standard singular value decomposition, without requiring any additional assumptions. 2. _Linear separability of OOD data_: Contrary to the concern raised, **our theoretical analysis does not assume the linear separability of OOD data**. In fact, one of the strengths of our approach is its ability to handle cases where linear separability is not guaranteed. Our theorems explicitly account for scenarios where the linear probing error and ID-OOD separability are non-zero. This reflects real-world conditions where OOD data might not be linearly separable, yet our method can still provide meaningful guarantees and effective performance. 3. _Impact of assumptions in practical scenarios_: In practical applications, it is indeed possible that some of the idealized conditions assumed in the theoretical analysis might not fully hold. However, our method is designed with flexibility in mind. For example, the parameters ($\alpha$, $\rho$, $\gamma$ and $\beta$) governing the augmentation transformation probabilities are designed with generality to capture different practical scenarios. **Our guarantees are provided as a function of these parameters, which can be flexibly adjusted to fit the specific characteristics of a given dataset**. Additionally, our empirical results demonstrate that the method performs robustly across a range of datasets and conditions, indicating that the assumptions made do not overly constrain the practical applicability of our approach. We believe this balance between theoretical rigor and practical flexibility is a key strength of our work. 
We will revise our manuscript to make these points clear - thank you again for your valuable comments! --- Rebuttal Comment 1.1: Comment: Thank you for addressing the concerns I raised in my previous review. I have decided to increase my score for this submission. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our response and for increasing the score! We are glad to hear that our rebuttal addressed your concerns.
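To make the spectral-decomposition step discussed in this exchange concrete, here is a minimal sketch of deriving data representations from a graph adjacency matrix via standard SVD (a rank-$k$ factorization; the toy block-structured matrix in the usage example is purely illustrative):

```python
import numpy as np

def spectral_features(adj, k):
    """Rank-k spectral representations from an adjacency matrix:
    rows of U_k * sqrt(S_k), the standard low-rank factorization
    underlying spectral embedding methods."""
    U, S, _ = np.linalg.svd(adj)
    return U[:, :k] * np.sqrt(S[:k])

# Toy usage: two disconnected "clusters" of two vertices each;
# vertices in the same cluster receive identical features.
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
F = spectral_features(A, 2)
```

On this toy graph, rows of `F` belonging to the same block coincide while rows from different blocks differ, which is the clustering behavior the graph-theoretic analysis exploits.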
Summary: This paper proposes a unified framework for OOD detection and generalization, which first constructs a graph that includes both labeled and unlabeled data and derives data representations by factorizing the graph’s adjacency matrix. These representations help quantify OOD generalization and detection performance. The framework's effectiveness is demonstrated through experiments on CIFAR-10, ImageNet, and Office-Home datasets. Strengths: 1. The authors discuss a new method for generalizing to covariate shifts while robustly detecting semantic shifts, providing valuable insights into the OOD problem. 2. In addition to mathematical equations, the paper includes an illustrative example to clarify the method, enhancing understanding. 3. The proposed method achieves better performance in OOD generalization and detection compared to the baseline methods. Weaknesses: 1. The effectiveness of the method is limited by numerous hyperparameter selections, preventing its practical application in real-world scenarios. 2. The process for determining the distribution of the augmentation $\tau$, which is the key point for constructing the graph, is not clear. The appropriateness and effectiveness of the formulation in Equation (9) for various datasets and other cases remain uncertain. 3. The theoretical guarantees discussed in the paper are not clearly defined, making it unclear what the specific goals of these guarantees are. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. We are glad to hear that you found our paper insightful, clearly presented, and performing well. We address each of your concerns below in detail: > Clarification on the number of hyperparameters Thank you for bringing up this point. To clarify, the practical algorithm is presented in Equation (6), which reformulates the graph factorization problem into a contrastive learning objective that can be efficiently trained end-to-end using a neural network. **Importantly, there are only two hyperparameters involved in our learning objective: $\eta_l$ and $\eta_u$**. This is comparable to, or even simpler than, many existing methods in the field, which often involve multiple hyperparameters across different components of their models. To further assist in practical application, we provide guidelines and default values for the key hyperparameters in our paper. These guidelines (Section F) can serve as a starting point, reducing the burden of hyperparameter tuning in new scenarios. Additionally, these two hyperparameters have intuitive interpretations (e.g., balancing the influence of labeled vs. unlabeled data), which can help practitioners make informed adjustments based on the specific characteristics of their data. > Clarification on augmentation transformation in Equation (9) **In practice, there is no need to determine manually the augmentation transformation probability in $\mathcal{T}(x|\bar x)$**. Our practical algorithm, as described in Section 3.2, does not rely on the explicit construction of the graph or the augmentation transformation probability, making it adaptable to different datasets. **Specifically, the graph decomposition can be equivalently achieved by minimizing a surrogate contrastive learning objective**, which operates on pairs of images. 
In objective (6), empirical samples of augmented images (using common augmentations [1] such as Gaussian blur, color distortion, and random cropping) are sufficient for optimization, eliminating the need to know the underlying distribution of $\mathcal{T}(\cdot | \bar x)$. This not only makes the approach more practical but also adaptable to various datasets and use cases. We explicate the augmentation transformation probability primarily to support the theoretical analysis and provide tractable guarantees on how they impact OOD generalization and OOD detection performance. By providing theoretical guarantees as a function of the parameters that define these probabilities, we ensure that our method is grounded in solid theoretical foundations. The validity of the formulation in Equation (9) can also be supported in prior work [2]. **Thus, our approach enjoys theoretical guarantees while being easily applicable to various real-world datasets, as we have shown in our extensive experiments.** [1] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In ICML, volume 119 of Proceedings of Machine Learning Research, pages 1597–1607. PMLR, 2020. [2] Kendrick Shen, Robbie M. Jones, Ananya Kumar, Sang Michael Xie, Jeff Z. HaoChen, Tengyu Ma, and Percy Liang. Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation. In ICML, pages 19847–19878. PMLR, 2022. > Goals of theoretical guarantees **The goals and targets of our theoretical analysis are explicitly defined in Section 4.2**. Specifically, - We use linear probing evaluation (Equation 7) to quantify OOD generalization performance, measuring the misclassification rate on covariate-shifted OOD data. - We use separability evaluation (Equation 8) to quantify OOD detection performance. _Our Theorem 4.1 and Theorem 4.2 provide closed-form guarantees on these two errors respectively_. 
We are happy to revise the manuscript and highlight this connection more clearly. Thanks again for your valuable comments and suggestions! --- Rebuttal Comment 1.1: Comment: Dear reviewer DU8N, We wanted to touch base with you as the deadline for the author-reviewer discussion phase is approaching soon. We trust you've had the opportunity to review our rebuttal, and we would be more than happy to address any further concerns you have. Thank you once again for your time and dedication to this review process. We look forward to your response and to furthering the dialogue on our manuscript. Best, Authors --- Rebuttal Comment 1.2: Comment: Thanks for addressing my concerns. I would like to raise my score. --- Reply to Comment 1.2.1: Comment: Thank you for taking the time to read our response and for increasing the score! We are glad to hear that our rebuttal addressed your concerns.
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and commitment to providing valuable feedback and suggestions on our work. We are encouraged that reviewers find our idea to be **novel**, **interesting**, and **effective** (DU8N, ureZ, 6JZj), that our theoretical insights are **sound and valuable** (DU8N, ureZ), and that our results are **comprehensive and significant** (ureZ, LF4w). We are also encouraged that reviewers recognize our method to be **scalable**, **computationally efficient** and **practical** for real-world data (ureZ, LF4w). Additionally, we appreciate the acknowledgment of our **clear writing and presentation** (DU8N, 6JZj, LF4w). As recognized by multiple reviewers, the significance of our work can be summarized as follows: - Our work offers a new algorithmic framework that leverages graph-theoretic formulation to jointly address OOD detection and generalization problems, which is more challenging than addressing either problem alone. - The framework is grounded in the spectral decomposition of a graph, which can be equivalently realized by minimizing a surrogate contrastive learning objective. This approach enhances the computational efficiency and practicality of our framework. - Our framework provides theoretical guarantees while demonstrating effectiveness across various real-world datasets. We include sufficient ablations and illustrative examples to aid readers in understanding our method. We respond to each reviewer's comments in detail below. The PDF attached includes the visualization requested by Reviewer ureZ. In response to the valuable suggestions provided by the reviewers, we will further refine our manuscript to clarify aspects that could benefit from additional explanation. Pdf: /pdf/e4cbdf8c37ab0a5da3af2f2bdba37d4bcf2a0a03.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Parametric model reduction of mean-field and stochastic systems via higher-order action matching
Accept (poster)
Summary: The authors present a framework for learning a time-dependent gradient vector field (parametric model) that describes how the particle evolves in the state space. The idea is based on the action-matching framework (https://arxiv.org/abs/2210.06662), which learns a time-dependent gradient field that interpolates between marginals at different times in a simulation-free manner. The authors propose using high-order quadrature rules for evaluating nested integrals in the objective. They demonstrate the forecasting ability of their framework on diverse examples in high dimensions and show that it outperforms other diffusion-based and flow-based models in inference runtime and accuracy. Strengths: Developing efficient and high-fidelity surrogate models is of great importance to the computational science and engineering community. The authors take a step forward in this direction by learning the effective population dynamics of the underlying physical process. The paper is well-written, and the related works are adequately referenced. Weaknesses: The objective used by the authors is based on the action-matching paper, now adapted to include the dependence on the parameters (\mu). From a methodological standpoint, the authors' only important contribution is using a higher-order quadrature scheme to discretize the nested integrals. Technical Quality: 3 Clarity: 2 Questions for Authors: General comments: i) Maybe I am not understanding things here... Can the authors elaborate a bit more on the O(K\tau) complexity? If, for instance, I train a velocity field using a continuous normalizing flow (simulation-based) using empirical marginals at different times, I can then integrate the learned velocity in time to get samples, in the same way the authors draw samples from the gradient field through Langevin dynamics. Where is the extra \tau in complexity coming from?
ii) Or, is this framework a simulation-free strategy to train a continuous normalizing flow but adapted for multiple snapshots? This was not so clear in the paper. iii) The authors learn a gradient field as a function of parameters and time. Can the authors comment a bit on the stability of the learned velocity during inference time? Specific comments: i) In Figure 5, a color bar showing the scales would help with the interpretation. ii) In Figure 4, a label to show what the two curves (blue and orange) are is needed. What is the ground truth and what is the prediction? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough and insightful comments: "The objective used by the authors is based on the action-matching paper, now adapted to include the dependence on the parameters (\mu) [...] the authors' only important contribution is using a higher-order quadrature scheme to discretize the nested integrals." - We stress that the higher-order quadrature scheme is not just a nice add-on to improve the accuracy a bit but the **core component to make the approach work in practice at all**. In fact, we show that action matching is unstable even on simple problems (Figure 2, see HOAM vs rand). That AM is unstable on presumably simple problems is also documented in community implementations such as the TorchCFM library. Moreover, we show that AM fails on systems with complex dynamics and high dimensions (Strong Landau and 9D chaos, see Figures 8 and 9 in the appendix). By contrast, HOAM succeeds, allowing us to derive predictive surrogate models with low inference costs for such challenging problems for the first time. - We further uploaded a PDF with more detailed results showing that HOAM is essential for stable training (see global response). - In terms of novelty of the objective function, variants of it have been widely used in other works far earlier than the action-matching paper (this is also stated in the AM paper). For example, there is [Reich 2011, page 240], with the earliest form appearing in [Otto & Villani 2000, Section 3, and Benamou & Brenier 2000, eq. 35]. Thus, rather than inventing a new objective, we made the objective computationally tractable, which we consider an important contribution. \ "Can the authors elaborate a bit more on the O(K\tau) complexity? [...] Where is the extra \tau in complexity coming from?"
- The standard way of handling a time $t$ and parameter $\mu$ dependency with NCSM and CFM is conditioning (see literature review in paper). As we point out, this means that for each of the K time steps, a separate inference problem has to be solved, which is expensive: $\tau$ refers to the number of steps taken in solving the SDE/ODE in one inference step (for one out of the K time steps) in conditioned CFM/NCSM. Thus, with conditioned CFM and NCSM, a separate SDE must be solved at every $t$ and $\mu$ for which one wants samples. - By contrast, $\tau$ does not appear in the complexity for HOAM because $\nabla s$ evolves particles such that they match $\rho$ at each time. Thus, physical time $t$ and the SDE time $\tau$ are aligned. \ "[...] is this framework a simulation-free strategy to train a Continuous Normalizing Flow but adapted for multiple snapshots?" - As we discuss above, simple conditioning is not competitive for the surrogate modeling task that we consider. It is absolutely possible that other methods for training CNFs can be extended and modified to be more efficient for surrogate modeling, but instead of tweaking another method we opted to go with our approach as it is simulation-free and naturally couples the physical time t with the sampling time so that in one inference step a whole sample trajectory is obtained. This is what ultimately provides speedups, which are key for surrogate modeling. \ "The authors learn a gradient field as a function of parameters and time. Can the authors comment a bit on the stability of the learned velocity during inference time?" - It is sufficient that the drift and diffusion are uniformly Lipschitz continuous such that the inference SDE (or ODE when $\epsilon = 0$) is well-posed. See, for example, [Ambrosio, Gigli & Savaré 2005, Lemma 8.1.4].
The diffusion is a constant in this work and because we parametrize with a CoLoRA neural network, the gradient $\nabla s$ is smooth in $t, x$, which is sufficient for the well-posedness. We will add comments about this in the paper if it gets accepted. - Regarding the analytical vector field we hope to converge to, its properties depend on the data. When $t \mapsto \rho_{t, \mu}$ describes a regular curve in Wasserstein space [Gigli 2012, Definition 2.7], then the inference ODE is well-posed [Gigli 2012, Theorem 2.6]. \ Specific comments about improving figures: - We will address these if the paper gets accepted. - The ground truth in Figure 4 is blue. References: - [Ambrosio, Gigli & Savaré 2005] Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient Flows. Lectures in Mathematics ETH Zürich. Birkhäuser-Verlag, Basel, 2005. doi:10.1007/978-3-7643-8722-8 - [Benamou & Brenier 2000] J.-D. Benamou and Y. Brenier. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numerische Mathematik, 84(3):375–393, Jan. 2000. doi:10.1007/s002110050002 - [Gigli 2012] Nicola Gigli. Second Order Analysis on (P2(M),W2). Memoirs of the American Mathematical Society, Volume 216; 2012 - [Otto & Villani 2000] F. Otto and C. Villani. Generalization of an Inequality by Talagrand and Links with the Logarithmic Sobolev Inequality. Journal of Functional Analysis, 173(2):361–400, June 2000. doi:10.1006/jfan.1999.3557 - [Reich 2011] Reich, S. A dynamical systems framework for intermittent data assimilation. Bit Numer Math 51, 235–249 (2011). doi:10.1007/s10543-010-0302-4 --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses and clarification. I will retain my rating. --- Reply to Comment 1.1.1: Comment: Thank you for the comments. If there is any other information we can provide, please let us know.
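The quadrature point debated in this thread can be illustrated with a small self-contained sketch (our own toy integrand, not the paper's loss): with the same number of integrand evaluations, Gauss-Legendre quadrature of a smooth time integral is far more accurate than Monte Carlo, which is the mechanism the authors credit for stable loss estimates.

```python
import numpy as np
from scipy.special import roots_legendre

# Toy comparison (assumed example): estimate the time integral of a
# smooth function on [0, 1] with K evaluations, via Gauss-Legendre
# quadrature versus plain Monte Carlo.
def f(t):
    return np.cos(4.0 * np.pi * t) ** 2 + t   # smooth integrand on [0, 1]

exact = 0.5 + 0.5                              # = 1.0 (cos^2 averages to 1/2)

K = 16
x, w = roots_legendre(K)                       # nodes/weights on [-1, 1]
t_gl = 0.5 * (x + 1.0)                         # map nodes to [0, 1]
gl_estimate = 0.5 * np.sum(w * f(t_gl))        # high-order convergence

rng = np.random.default_rng(0)
mc_estimate = np.mean(f(rng.uniform(0.0, 1.0, size=K)))  # error ~ 1/sqrt(K)

print(abs(gl_estimate - exact))
print(abs(mc_estimate - exact))
```

With only 16 nodes the Gauss-Legendre error is already negligible for this smooth integrand, while the Monte Carlo error decays only at the usual 1/sqrt(K) rate, mirroring the instability argument made in the rebuttal.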
Summary: The authors focus on learning models for population dynamics of parameterized physical systems that exhibit stochastic and mean-field effects in time. To do so, the authors leverage the Benamou-Brenier formula to learn gradient fields that transport the probability density as time evolves, and that enable the generation of sample trajectories reflecting the dynamics of the population. Numerical experiments show compelling results and state-of-the-art performance in high-dimensional particle systems and in chaotic systems. Strengths: The contributions of the paper are novel, and the presentation is very clear. I commend the author's attempts at making the paper reasonably self-contained by adding context in Appendices C through E. Each of the theoretical developments is clearly motivated. Finding a vector field as proposed enables the interpolation of probability measures at different times. This notion has broad impacts beyond those stated in the paper. It is also interesting that the authors illustrate the importance of using a proper quadrature in time as opposed to random sampling. The experiments are also compelling, in particular the ability to resolve the low-probability connection between two high-probability regions as shown in Fig. 5. Weaknesses: To this reviewer the paper does not have any evident weakness. However, when introducing a high-order quadrature in time, which improves the performance, the authors do not elaborate on why one would, or would not, expect further improvements if similar quadrature rules were used in the spatial variable or in the physical parameters. Technical Quality: 4 Clarity: 4 Questions for Authors: I have the following minor questions: - In (6) it seems that the integrand should be evaluated on $s_t$ instead of $s$ for consistency with (4). - In (7) there seems to be a parenthesis missing that factors out the density $\rho_{t,\mu}$.
Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors adequately address the limitations of their work in Section 4. These are mostly technical. For instance, they require a high sampling rate in time, which is not always available in practice. This limits the applicability of the method, but does not detract from their main contribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and thorough review. We will address all the points brought up. "In (6) it seems that the integrand should be evaluated on $s_t$ instead of $s$ for consistency with (4)." - Thank you, yes, this should be $s_{t, \mu}$. "In (7) there seems to be a parenthesis missing that factors out the density $\rho_{t,\mu}$" - Thank you, we will fix this in the final version, if the paper gets accepted. "the authors do not elaborate on why one would, or would not, expect further improvements if similar quadrature rules were used in the spatial variable or in the physical parameters." - Typically the dimension of $x$ is too high for quadrature rules to be effective, which is why we use Monte Carlo for it. We also specifically assume that our data is in the form of samples, so evaluating $\rho$ at quadrature points would require some form of density estimation, a non-trivial task in high dimensions. - For the parameter $\mu$, higher-order quadrature rules can be used, but in surrogate modeling we typically have data points only at very few $\mu$ training parameters, which makes higher-order quadrature rules difficult to apply. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. I have the following additional question. In the pdf file you attached, Fig. 3 shows the relative error in mean and the caption states that "we see that HOAM remains relative stable, while AM increases in error." However, HOAM is the _blue_ line which not only is above the orange line (representing AM) but also seems to increase faster as $T$ increases. Is this a typo? --- Reply to Comment 1.1.1: Comment: Yes, this is a typo. We thank the reviewer for catching this. We apologize for the confusion and will update the pdf accordingly. In all cases HOAM does outperform AM.
Summary: This paper develops models of population dynamics in physical systems that exhibit stochastic and mean-field effects, influenced by physics parameters. The goal is to create models that can efficiently predict system behavior as alternatives to classical numerical methods. By utilizing the Benamou-Brenier formula from optimal transport and action matching, the approach involves solving a variational problem to infer gradient fields that approximate population dynamics. These gradient fields enable the generation of sample trajectories that mimic physical system dynamics under various physics parameters. The study highlights the importance of combining Monte Carlo sampling with higher-order quadrature rules for accurate estimation and stable training. The models are demonstrated to perform well on Vlasov-Poisson instabilities and high-dimensional particle and chaotic systems, outperforming state-of-the-art diffusion-based and flow-based models that rely solely on time and physics parameters. Strengths: 1) Efficient parametric model reduction: The model is demonstrated to reduce inference runtime significantly compared to standard diffusion- and flow-based models by leveraging minimal-energy vector fields. 2) Accurate and stable dynamics learning: It captures the coupling over time steps accurately, using higher-order quadrature schemes for estimating time integrals, which enhances training stability. 3) High accuracy and reduced runtime: Achieves error rates comparable to state-of-the-art methods while reducing inference runtime by 1-2 orders of magnitude. Weaknesses: 1) The model assumes access to a dense set of time points for the Gauss-Legendre quadrature, which may not be applicable when only a few time samples are available. This was already mentioned as a limitation of the current work. As this forms the main part of the approach, the practical benefit would be limited. 
2) Regarding the vector field complexity, the model seeks a vector field that minimizes kinetic energy, but in some cases, this may be more complicated than other vector fields that produce the same population dynamics. Examples include situations where the minimal-energy field varies with time, making it challenging to determine the appropriate energies to use for different problems. Technical Quality: 2 Clarity: 2 Questions for Authors: 1) The authors aim to learn population dynamics $\rho_{t,\mu}$ instead of learning the dynamics of individual trajectories $t\to X_{t,\mu}^i$ for all $i$. What are the assumptions on the model so that it admits a density? If not, would this work apply to the situation under weak convergence where no density is assumed? Moreover, it is not clear why the equations (5) to (7) should admit solutions in the strong forms. 2) Parametrizing the $s_{t,\mu}$ with weight modulation as in CoLoRA [38] only applies to deterministic time-dependent dynamical systems. However, one goal of the work was stated as learning the population dynamics should allow for a seamless treatment of deterministic and stochastic systems. How the latter can be handled with the corresponding parametrization is unclear. The form of the layers assumes some low-rank structure, and only the weight modulations $\phi$ are assumed to depend on time and parameters. As such, are stochastic systems omitted from the problem definition, unlike the initial motivation? 3) Solving eq (7) from data is challenging and prone to potential numerical issues. As a remedy, a combination of higher-order numerical quadrature and an MC sampling strategy is proposed. However, the details of this approach would be better to provide with a stability analysis on the training as mentioned in the text. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes, limitations of their work are adequately addressed. No potential negative societal impact of their work is identified.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and respond to the following points: "The model assumes access to a dense set of time points for the Gauss-Legendre quadrature [...] the practical benefit would be limited" - Having data at many time points is a common situation in the setting of surrogate modeling and model reduction, which we consider here. The data are generated with a high-fidelity numerical model, which has to take small time steps for numerical stability reasons. With HOAM we show not only that we can leverage the dense sampling in time, but that it is absolutely critical to exploit it with appropriate numerical quadrature, as existing methods fail due to inaccurate estimates of the time integral (see Figure 2 in the paper as well as new results providing further evidence in the uploaded PDF). - We stress that the problems we examine here are real problems from the physics literature (see references) that are intractable with current surrogate modeling techniques (see references in paper). With HOAM we construct fast, predictive surrogate models for such problems for the first time. Additionally, we show that other reasonable approaches like NCSM or CFM perform worse while incurring orders of magnitude higher inference costs. Thus, this shows that there is a large practical benefit to HOAM: creating fast surrogate models of stochastic and mean-field systems. - If the paper gets accepted, we will emphasize more clearly in the Introduction that in our problem setting it can be reasonably expected that data are available at many time points. \ "[the learned vector fields] may be more complicated than other vector fields that produce the same population dynamics." - This is an interesting point, which we briefly discuss in the Conclusion section. - The main challenge is mathematically formalizing what “complicated” and “easy” mean in the context of neural network approximations, which will also depend on the problem at hand.
- We build on the kinetic energy because our results show it is a reasonable energy for many different problems and it links nicely with the theory developed in optimal transport (see our appendix). One desirable feature of the kinetic energy is that $\nabla s$ of the minimum kinetic energy vector field is identically zero when the population dynamics are stationary - this means that if the population dynamics are stationary, then sampling with the minimum kinetic energy field is trivial as the samples don’t move. This is not necessarily the case for other compatible vector fields. - If this paper gets accepted, we will expand the comments in the Conclusion section where we briefly discuss this point. \ "Authors aim to learn population dynamics [...] What are the assumptions on the model so that it admits density? If not, would this work apply to the situation under weak convergence where no density is assumed. Moreover, it is not clear why the equations (5) to (7) should admit solutions in the strong forms." - Objective (7) (and (5), the special case where $\varepsilon = 0$) is expressed entirely in the form of expectation values w.r.t. $\rho$ - both are meaningful if no density is admitted and can be evaluated on empirical distributions (sums of Dirac masses). We will add a comment about this if the paper gets accepted. - We use (5) and (7) to formally derive an optimality criterion for the learned gradient field $\nabla s^*(\theta)$. This field is smooth as it is parametrized by a neural network with smooth activation functions. For any empirical distribution (samples from $\rho_{t, \mu}$), we therefore obtain a vector field that is suited for inference. \ "Parametrizing the $s_{t,\mu}$ with weight modulation as in CoLoRA [38] only applies to deterministic time-dependent dynamical systems. [...] As such, are stochastic systems omitted from the problem definition unlike the initial motivation?"
- We do consider stochastic systems in the numerical experiments (e.g., particles in a harmonic trap, high-dimensional chaos). What we parametrize with CoLoRA is the gradient field, which is deterministic even if the system is stochastic (see line 37 in paper, as well as Section 2.1). Thus, the CoLoRA parametrization applies independently of whether the system is deterministic or stochastic. This allows a seamless treatment of deterministic and stochastic systems, which we consider a major advantage of our approach. \ "Solving eq (7) from data is challenging and prone to potential numerical issues. [...] would be better to provide with a stability analysis on the training as mentioned in the text." - An empirical stability analysis is provided in Figure 2, where we show that the higher-order quadrature is essential for stabilizing the training. We stress that our results show that higher-order quadrature is not just a nice add-on to improve the accuracy a bit but that it is a core component to make the approach work in practice. In fact, when just using MC, the error blows up during training (see also the new results in the uploaded PDF). - A formal stability analysis is work in progress. As an outlook, we will include the following if this paper gets accepted: The difficulty is that the introduction of a numerical quadrature breaks the exact correspondence between the objective $$ \int_0^1 \left(\frac{1}{2}\mathbb E_{\rho_{t, \mu}}\left[|\nabla s_{t, \mu}|^2\right] + \mathbb E_{\rho_{t, \mu}} \left[ \partial_t s_{t, \mu} \right] \right) \mathrm{d} t - \mathbb E_{\rho_{t, \mu}} \left[ s_{t, \mu} \right] \big |_0^1 $$ and the continuity equation. Assume $\rho$ admits a density, the derivative $\partial_t$ is approximated exactly, and denote by $(\dots)^n$ the value at time $t_n$ and by $w^n$ the quadrature weights.
The mismatch $$ \left| \sum_n w^n \int s^n \partial_t \rho^n dx - \left( - \sum_n w^n \int \rho^n \partial_t s^n dx + \int \rho s \big |_0^1 dx \right) \right| $$ equals the numerical integration error of $\frac{d}{dt}\int s\rho \, dx$ in time. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and the additional results. After reading your responses, I still have the same concerns regarding the contribution of the paper. My main issue is that while the authors claim that higher-order action matching (HOAM) is better than the baseline action matching (AM) on some parametric dynamical systems, it is yet unclear to me how the proposed model improves the results and helps the stabilization of the training as claimed. A clear framing of the problem setup and all the assumptions on the model would help the reader to better understand the results. I also find the implementation a bit unclear. If I understand correctly, HOAM combines higher-order numerical quadrature with MC sampling to solve (7) in the direction of time by employing a Gauss-Legendre quadrature. How can this be achieved numerically? It would also be useful to motivate the choice of the dynamical systems in the numerical experiments. Why the examples are relevant in this context is unclear. Hence, I believe all of these will require additional rewriting in multiple places. Therefore, I will keep my initial rating for now. --- Rebuttal 2: Comment: > I believe all of these will require additional rewriting in multiple places. We thank the reviewer for raising these concerns. We are confident that all of the concerns are addressed in the paper, which we concisely summarize in this response. The reviewer’s comments will be helpful for guiding the final revision of this paper (if accepted) to make our points even clearer. > it is yet unclear to me how the proposed model improves the results and helps the stabilization of the training.
We summarize here the reasoning for why "the proposed model improves the results and helps the stabilization of the training", which is also provided in the paper but we will make this clearer in a revision, if the paper gets accepted: 1. **Gauss-Legendre quadrature results in a more accurate estimate of the time integral when compared to Monte Carlo.** This is because it is a higher-order numerical scheme (more precisely, it allows integrating higher-degree polynomials exactly) that leads to lower quadrature errors than Monte Carlo. Besides this theoretic argument, the difference in the quadrature error can be seen empirically in our experiments in, e.g., Figure 2. (See Section 2.3.) 2. **A more accurate estimate of the time integral results in a more stable and accurate estimate of the loss.** This is supported numerically by Figure 2 (left), which shows extremely high variability in the estimates of the loss for AM and low-variance estimates of the loss for our proposed HOAM. This also agrees with standard results from statistical learning theory, where the deviation of the empirical risk (the estimate of the loss function) from the true risk (the true loss function) directly enters in bounds of generalization errors. 3. **A more accurate estimate of the loss makes solving the optimization problem tractable.** This is supported by the numerical results in the paper, explicitly in Figure 2 (middle), and shown to be essential in the global response. All this is also described in detail in the main text of our paper in Section 2.3 and numerically supported by Figure 2 and the global response. For example, see line 200: "Plain uniform sampling over the data set for mini-batching can lead to poor estimates of the loss." > If I understand correctly, HOAM combines higher-order numerical quadrature with MC sampling to solve (7) in the direction of time by employing a Gauss-Legendre quadrature. How can this be achieved numerically? 1.
As referenced in the main text of the paper, the empirical loss is given in Appendix A. This is written down as a discrete summation which can be easily implemented numerically. 2. For the Gauss-Legendre quadrature, the key algorithmic step is determining the roots of the Legendre polynomials, which is implemented (for example) in scipy (scipy.special.roots_legendre()). We also provide the code of our implementation, but the link is redacted for now per submission policy. > It would also be useful to motivate the choice of the dynamical systems in the numerical experiments. Why the examples are relevant in this context is unclear. 1. From a surrogate modeling perspective, the systems that we consider are exceedingly challenging because they are high-dimensional, chaotic, and/or stochastic. Surrogate modeling for such systems is in its infancy because the point-wise approximations of traditional surrogate modeling techniques are meaningless. We reference these attempts in detail in the introduction and literature review. 2. We make careful efforts to choose problem setups which are experimentally meaningful to practitioners (see for example Appendix B.2, which describes that the data are obtained from code that is used by plasma physicists). The physical relevance of our problem setups is detailed in the numerous sources we cite, see [7, 21, 34, 42, 87 Sec 2(b)(i), 58]. > A clear framing of the problem setup and all the assumptions on the model would help the reader to better understand the results. 1. The problem setup is described in the Introduction section on page 1: “Given a data set of samples [...] we aim to learn a dynamical-system reduced model to rapidly predict samples that approximately follow the same law [...]” 2. When we make mathematical statements, we provide assumptions. For example, for the statement of ‘uniqueness’ on page 4 we assume that the density is positive and that the source term integrates to zero. 3.
We provide extensive appendices C–E on the connection to optimal transport and additional literature. We hope that we have been able to demonstrate that most of these concerns are addressed in the main text of the paper. If the reviewer does not have any additional concerns, we would greatly appreciate it if they would consider revising their score.
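As an illustrative aside to the quadrature discussion above (a toy sketch with our own test integrand, not the loss from the paper), Gauss-Legendre and plain Monte Carlo time integration can be compared at the same budget of function evaluations using `scipy.special.roots_legendre`:

```python
import numpy as np
from scipy.special import roots_legendre

def gauss_legendre_time_integral(f, T, n):
    """Approximate the time integral of f over [0, T] with an n-point
    Gauss-Legendre rule (exact for polynomials up to degree 2n - 1)."""
    nodes, weights = roots_legendre(n)   # nodes and weights on [-1, 1]
    t = 0.5 * T * (nodes + 1.0)          # affine map from [-1, 1] to [0, T]
    return 0.5 * T * np.sum(weights * f(t))

def monte_carlo_time_integral(f, T, n, rng):
    """Plain uniform sampling in time with the same evaluation budget."""
    t = rng.uniform(0.0, T, size=n)
    return T * np.mean(f(t))

# Smooth test integrand with a known antiderivative:
# d/dt [e^{-t}(-sin 3t - 3 cos 3t)/10] = e^{-t} sin 3t.
f = lambda t: np.exp(-t) * np.sin(3.0 * t)
T = 2.0
exact = (3.0 - np.exp(-T) * (np.sin(3 * T) + 3 * np.cos(3 * T))) / 10.0

gl = gauss_legendre_time_integral(f, T, n=10)
mc = monte_carlo_time_integral(f, T, n=10, rng=np.random.default_rng(0))
```

For this smooth integrand, 10 Gauss-Legendre nodes already give an estimate accurate to near machine precision, while Monte Carlo with the same 10 evaluations carries the usual $O(1/\sqrt{n})$ error, mirroring the variance gap shown in Figure 2.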
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to read the paper and very much appreciate their detailed comments. Below we provide a detailed response. We summarize here some of the main points and how we addressed them: - We now stress that the higher-order quadrature scheme that is introduced by our approach HOAM is not just a nice add-on to improve the accuracy a bit but the core component to make the approach work in practice at all. While variants of the objective that we consider have been around in the literature for a long time, we show that higher-order quadrature in time makes the objective computationally tractable and so avoids the instabilities in training that we observe in our experiments and that agree with the instability results shown in the literature (see detailed comments below). **We further uploaded a PDF that shows more detailed results showing that HOAM is essential for stable training.** - The problems that we examine here are intractable with current surrogate modeling and model reduction techniques (see references in paper). The improvements made with HOAM allow us to construct fast, predictive surrogate models for such problems for the first time. - We address concerns about densely sampling in time: We consider the setting of surrogate modeling and model reduction where data are typically generated with a high-fidelity numerical model, which has to take small time steps due to numerical stability constraints. In fact, we show that HOAM performs well on a range of real problems from the physics literature (see references) for which data are generated with standard tools from the physics community. Thus, the data requirements align with what is typically available for the problems that we aim to address with our HOAM approach. Pdf: /pdf/8454a2b1267da427aaa96d73f4518825852cf1df.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Fast samplers for Inverse Problems in Iterative Refinement models
Accept (poster)
Summary: This paper introduces an inverse problem approach to reconstruct low-resolution, blurred, or obfuscated images from pre-trained generative models. The method leverages conjugate integrators to project the diffusion dynamics into an easier space, solve the inverse problem, and then map back. Strengths: - The integration of Conjugate integrators for inverse problems in the paper is rigorously defined and proved. - The proposed method drastically reduces the number of sampling steps compared to other inverse problem approaches. - The presented approach achieves better reconstructed images compared with state-of-the-art inverse problem methods. Weaknesses: The paper is extremely dense with equations and mathematical formulations. I would suggest moving sections 2.3 and 3.4, as well as the numerical details of the experiments, to the appendix. This would allow space in the main paper for additional qualitative results and a broader discussion on the intuition behind the conjugate integrators for inverse problems. Including a figure illustrating the method would also be appreciated. While the mathematical formulation is the main essence of the paper, a visual representation or descriptive insight would be appreciated for understanding. Technical Quality: 2 Clarity: 2 Questions for Authors: Minor details: - Line 608, link the approximation to Eq. (5) Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: I don't have any concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We provide a detailed response to several concerns below: > I would suggest moving sections 2.3 and 3.4, as well as the numerical details of the experiments, to the appendix..... We thank the reviewer for their suggestion. We stress that the final version allows for an extra page, which we will use to make the paper more easily accessible. Re. Section 2.3, we would like to highlight that this section provides an intuitive understanding of our method's tractability and inner workings. Therefore, we think this section serves as an intuitive premise for the observations presented in our experimental section. Re. Section 3.4, we will consider moving this section to the appendix since our presented results for non-linear and noisy inverse problems serve as a proof of concept. Additionally, we plan to add an overview figure in the revised manuscript. We also provide a sample overview figure as Figure 1 in the rebuttal pdf (also see the second point of our shared response). > Line 608, link the approximation to Eq. (5) Thanks for pointing it out. We will revise our manuscript accordingly.
Summary: Inverse problems like super-resolution and deblurring remain computationally inefficient with current diffusion/flow models. The paper introduces a plug-and-play framework with Conditional Conjugate Integrators that use pre-trained iterative refinement models to simplify sampling. The method generates high-quality samples in as few as 5 steps for tasks like 4× super-resolution on ImageNet, outperforming existing methods. Strengths: 1. The paper is well-written, and the literature review is thorough. 2. The method is well-supported by theoretical foundations. 3. The experiments on both diffusion and flow-matching methods across various tasks demonstrate the robustness of the proposed speed-up algorithm. Weaknesses: 1. The reason why the paper is restricted to the $\Pi$GDM paradigm is unclear. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the method be applied to nonlinear inverse problems? If not, please clarify the reason. 2. Though the Conditional Conjugate Integrators work well and can speed up the sampling process to 5 steps, a more detailed computational comparison is needed as each iteration becomes more complicated. Can you provide more detailed computational results? 3. Can you clarify the reason why you restrict yourself to the posterior approximation in $\Pi$GDM? Though the experimental results are promising, a direct comparison with DPS is preferred. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the paper mentioned its broader impact at the end, which looks good to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\newcommand\dag\dagger$ We thank the reviewer for their feedback. We provide a detailed response to several concerns below: > The reason why the paper is restricted to the $\Pi$GDM paradigm is unclear. We chose to work with the $\Pi$GDM baseline for the following reasons: 1. Firstly, $\Pi$GDM is a very strong and widely-used baseline for tackling inverse problems, providing a more expressive posterior approximation $p(x_0|x_t)$ compared to other methods such as DPS or MCG. This makes it an excellent starting point for low-NFE-budget sampling. As demonstrated in our experiments (see Table 1), DPS typically requires 1000 NFEs to achieve optimal results. In contrast, $\Pi$GDM serves as a competitive baseline and a strong foundation for our method. Additionally, $\Pi$GDM offers greater flexibility since it does not require a differentiable image degradation transform, unlike DPS. Thus, $\Pi$GDM is a natural choice for us to demonstrate the effectiveness of our method. 2. Secondly, in the context of popular diffusions and flows, using the posterior approximation in $\Pi$GDM results in a closed-form derivation of the conjugate operator and its inverse (see Eqns. 11 and 12 in the main text). Note that this is not a limitation of our method since the projection operator can also be derived for DPS, as highlighted in the following theoretical result (for brevity, we omit the full proof and will include it in our revision). **Proposition.** For a time-dependent design matrix $B: [0,1] \rightarrow \mathbb{R}^{d\times d}$ and the posterior approximation proposed in DPS [Chung et al.] 
with guidance step size $\rho$ such that the conditional score $\nabla_{x_t} \log p(y|x_t) = \rho \frac{\partial \hat{x}_0}{\partial x_t}^\top H^\top (y - H\hat{x}_0)$, introducing the transformation $\hat{x}_t=A_tx_t$, where $$A_t = \exp{\Big[\int_0^t B_s - \Big(F_s + \frac{\rho}{2\mu_s^2}G_s G_s^\top H^\top H\Big)ds \Big]}$$ induces the following projected diffusion dynamics $$d\hat{x}_t = A_t B_t A_t^{-1}\hat{x}_t dt + d \Phi_y y + d \Phi_s \epsilon(x_t, t) + d \Phi_j \Big[\partial\epsilon(x_t, t)H^\top (y - H\hat{x}_0)\Big] $$ where $\exp(.)$ denotes the matrix exponential, $\hat{x}_0$ represents Tweedie's moment estimate, and $H$ denotes the degradation operator. The proof roughly follows similar ideas from Appendix A.3 in our paper, and we will include a complete proof in our revision. Despite our method applying to DPS, it is worth noting that in contrast to $\Pi$GDM, the projection operator $A_t$ and its inverse for DPS do not exhibit a closed-form solution and need to be approximated using perturbation analysis (if the step size $\rho$ is small) or computed using standard routines in packages like PyTorch. Therefore, though our method is also applicable to DPS, we choose to stick with the $\Pi$GDM framework in this work. > Can the method be applied to nonlinear inverse problems? If not, please clarify the reason. While our presentation of the proposed method is primarily in the context of linear-noiseless inverse problems, we also present an extension of our method to noisy-linear and non-linear inverse problems in Section 3.4. Furthermore, in Figure 11 of our paper, we illustrate the applicability of our method to non-linear inverse problems in the context of challenging problems like JPEG and lossy neural compression-based restoration. We would also like to point the reviewer to our common response (Point 1) for a more comprehensive justification of the utility of our method for noisy and non-linear inverse problems. 
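Since the operator $A_t$ in the proposition is a matrix exponential of a time integral, one way to compute such an operator numerically is to approximate the integral with a quadrature rule and then call a standard matrix exponential routine. The sketch below is a hypothetical toy (our own naming and a commuting test family, not the paper's $B_s$, $F_s$, $G_s$):

```python
import numpy as np
from scipy.linalg import expm

def integrated_expm(M, t, n=129):
    """Approximate expm( integral_0^t M(s) ds ) with a trapezoidal rule in s.
    Caveat: this equals the true time-ordered propagator only when the
    matrices M(s) commute for all s (e.g., M(s) = g(s) * B)."""
    s = np.linspace(0.0, t, n)
    vals = np.stack([M(si) for si in s])  # shape (n, d, d)
    ds = s[1] - s[0]
    integral = (0.5 * (vals[0] + vals[-1]) + vals[1:-1].sum(axis=0)) * ds
    return expm(integral)

# Sanity check with the commuting family M(s) = s * B,
# for which integral_0^t M(s) ds = (t^2 / 2) * B exactly.
B = np.array([[0.0, 1.0], [-1.0, 0.0]])  # rotation generator
A = integrated_expm(lambda s: s * B, t=1.0)
```

For a non-commuting integrand the exponential of the integral is only an approximation of the propagator, which is consistent with the rebuttal's remark that the DPS-side operator may need perturbation analysis or generic numerical routines.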
> Though the Conditional Conjugate Integrators work well and can speed up the sampling process to 5 steps, a more detailed computational comparison is needed as each iteration becomes more complicated. Can you provide more detailed computational results? We provide a computational comparison in Table 1 using NFE as the metric, which is standard in the existing literature on inverse problems like DPS [Chung et al.] and PiGDM [Song et al.]. It is important to note that the compared models utilize the same pre-trained backbone diffusion/flow models, ensuring no differences in model inference speed. The only additional computation incurred involves a simple linear transform $A_t$ and some **scalar** integral calculations, which are computationally inexpensive. Moreover, these integrals can be precomputed offline (i.e., before sampling starts, as only the noise schedule influences the integral results). Therefore, the computational cost can be amortized between samples, reducing potential overhead. We will make this point more explicit in our revised manuscript. > Can you clarify the reason why you restrict yourself to the posterior approximation in $\Pi$GDM Please see our response to your first comment. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in preparing the rebuttal and acknowledge the theoretical contributions of the paper, which have been well-received by the other reviewers. However, I still have some concerns. The paper does not report PSNR and SSIM results, making it difficult to fully assess the quality of the proposed methods. Given that the paper utilizes the $\Pi$GDM paradigm, it is crucial to provide these metrics for a comprehensive evaluation. DPS, for example, demonstrates robustness in both perceptual and recovery metrics across various tasks, highlighting the importance of such evaluations. Considering above, I keep the original rating. 
--- Rebuttal 2: Title: Author Response Comment: We thank the reviewer for their response. Please find our detailed response below: **Comparison of perceptual vs Recovery metrics:** We agree that highlighting robustness in both perceptual and recovery metrics is important in the context of inverse problems. Following previous works like DPS and $\Pi$-GDM, we include only perceptual metrics in the main text since recovery metrics like PSNR and SSIM usually favor blurrier samples over perceptual quality. So, a tradeoff exists between distortion (a.k.a recovery) and perceptual quality. However, whether perceptual quality/recovery is preferred depends on the application. Therefore, for completeness, we provide a comparison between DPS, $\Pi$GDM, and C-$\Pi$GDM (our method) in terms of PSNR, SSIM, FID, and LPIPS in Tables 1 and 2 on the ImageNet-256 and FFHQ-256 datasets on the 4x super-resolution task. It is worth noting that the PSNR and SSIM scores for all methods correspond with the best FID/LPIPS scores presented in the main text for these methods. Tables 1 and 2 show that our method achieves competitive PSNR and SSIM scores for better perceptual quality than competing methods, even for very small sampling budgets. For instance, on the FFHQ dataset, our method achieves a PSNR of 28.97 compared to 28.49 for DPS while achieving better perceptual sample quality (LPIPS: 0.095 for ours vs 0.107 for DPS) and requiring around 200 times less sampling budget (NFE=5 for our method vs 1000 for DPS). Therefore, we argue that our perceptual quality to recovery trade-off is much better than DPS, given our method is significantly faster than DPS. **Traversing the Recovery vs Perceptual trade-off in C-$\Pi$GDM**: In addition to the guidance weight $w$, our method also allows tuning an additional hyperparameter $\lambda$, which controls the dynamics of the projection operator (See Sections 2.3 and 3.2 for more intuition). 
Therefore, tuning $w$ and $\lambda$ can help traverse the trade-off curve between perceptual quality and distortion for a fixed NFE budget. We illustrate this aspect in Table 3 (fixed $\lambda$ with varying $w$) and Table 4 (fixed $w$ with varying $\lambda$) for the SR(x4) task on the ImageNet-256 dataset using the PSNR, LPIPS, and FID metrics. Therefore, our method offers greater flexibility to tune the sampling process towards either good perceptual quality or good recovery for a given application while maintaining the same number of sampling steps. In contrast, other methods like DPS do not offer such flexibility. Moreover, tuning the guidance weight in DPS is very expensive in the first place due to its high sampling budget requirement (around 1000 NFE). We will extend these comparisons for other tasks and add them in the Appendix section of our revised paper. We would also request the reviewer to reconsider their evaluation. | | PSNR | SSIM | FID | LPIPS | |---------------------|-------|-------|-------|-------| | DPS (NFE=1000) | 23.81 | 0.708 | 38.18 | 0.252 | | $\Pi$GDM (NFE=20) | 21.92 | 0.646 | 37.36 | 0.222 | | C-$\Pi$GDM (NFE=5) | 22.32 | 0.641 | 37.31 | 0.220 | | C-$\Pi$GDM (NFE=10) | 23.00 | 0.651 | 34.22 | 0.206 | | C-$\Pi$GDM (NFE=20) | 23.16 | 0.654 | 34.28 | 0.207 | **Table 1**: Comparison between different methods on ImageNet-256 for the SR(x4) task | | PSNR | SSIM | FID | LPIPS | |---------------------|-------|-------|-------|-------| | DPS (NFE=1000) | 28.49 | 0.834 | 30.86 | 0.107 | | $\Pi$GDM (NFE=20) | 28.26 | 0.818 | 26.17 | 0.087 | | C-$\Pi$GDM (NFE=5) | 28.97 | 0.832 | 32.01 | 0.095 | | C-$\Pi$GDM (NFE=10) | 29.03 | 0.821 | 29.07 | 0.086 | | C-$\Pi$GDM (NFE=20) | 28.79 | 0.809 | 26.37 | 0.083 | **Table 2**: Comparison between different methods on FFHQ-256 for the SR(x4) task. 
| w | PSNR | LPIPS | FID | |----|-------|-------|-------| | 2 | 22.91 | 0.339 | 48.48 | | 4 | 23.37 | 0.306 | 45.03 | | 6 | 23.49 | 0.274 | 42.68 | | 8 | 23.44 | 0.266 | 40.96 | | 10 | 23.28 | 0.254 | 40.27 | | 12 | 22.89 | 0.246 | 40.13 | | 14 | 22.74 | 0.239 | 40.16 | **Table 3**: Illustration of the impact of $w$ for a fixed $\lambda=0.0$ on the sample recovery (PSNR) vs sample perceptual quality (LPIPS, FID) at NFE=5 for our method. The task is SR(x4) on the ImageNet-256 dataset. | $\lambda$ | PSNR | LPIPS | FID | |-----------|-------|-------|-------| | -1.0 | 20.96 | 0.291 | 42.56 | | -0.8 | 21.33 | 0.265 | 40.97 | | -0.6 | 21.69 | 0.240 | 39.38 | | -0.4 | 22.04 | 0.223 | 37.83 | | -0.2 | 22.32 | 0.220 | 37.31 | | 0.2 | 22.73 | 0.257 | 45.27 | | 0.4 | 22.90 | 0.275 | 48.98 | | 0.6 | 23.03 | 0.283 | 47.47 | | 0.8 | 23.11 | 0.285 | 46.2 | | 1.0 | 23.15 | 0.285 | 46.41 | **Table 4**: Illustration of the impact of $\lambda$ for a fixed $w=15$ on the sample recovery (PSNR) vs sample perceptual quality (LPIPS, FID) at NFE=5 for our method. The task is SR(x4) on the ImageNet-256 dataset.
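For reference, the PSNR values in Tables 1–4 above can be reproduced with the standard peak-signal-to-noise-ratio definition; the following is a minimal sketch assuming that standard definition and a given `data_range`, not necessarily the authors' exact evaluation code:

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(data_range^2 / MSE)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# A uniform error of 0.1 on a [0, 1]-valued image gives MSE = 0.01,
# i.e., PSNR = 20 dB.
x = np.zeros((8, 8))
y = np.full((8, 8), 0.1)
value = psnr(x, y)
```

Higher PSNR indicates better recovery (lower distortion), which is why it trades off against the perceptual metrics (FID, LPIPS) as the tables illustrate.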
Summary: This work proposes a plug-and-play based method that leverages pretrained diffusion and flow models to solve inverse problems. The proposed method, called Conditional Conjugate Integrators, adapts the previously proposed Conjugate Integrators framework for fast sampling of diffusion and flow models to solve linear inverse problems. The key idea is to project the diffusion (flow) dynamics into another latent space where sampling can be more efficient. Upon completion, the dynamics is projected back to the original pixel space. The paper provides the mathematical forms of the projection operator for conditional diffusion (flow) dynamics, then adapts it to get the projection operator for linear inverse problems. The paper also provides tractable forms of the projection matrix and its inverse. This derivation can be seen as a more general form of the previously proposed method $\Pi$GDM ($\Pi$GFM). The method can be extended to nonlinear and noisy settings. The paper provides promising results on datasets like LSUN Bedrooms, AFHQ Cats, FFHQ, ImageNet, etc., on tasks like inpainting, super-resolution, Gaussian deblurring, etc. Strengths: 1. Methodology: The proposed method seems efficient — it can solve linear inverse problems in 5 steps. In comparison, previous methods need 20-100 or even 1000 steps. 2. Experiments: The paper includes ablation studies on the choice of hyper-parameters and provides comparisons against former state-of-the-art methods like $\Pi$GDM, DPS, DDRM, $\Pi$GFM, etc. The proposed method outperforms previous methods by a significant margin while using 5-10 steps and is on par with or better than previous methods for >=20 steps. 3. Writing: Paper is well written. The core ideas and the methodology have been presented well, and derivations are easy to follow. Weaknesses: The benefits of the proposed method for the settings of noisy linear inverse problems as well as non-linear inverse problems remain unclear. 
The paper does not include any quantitative results for these two problem settings. It only provides some limited qualitative results for super-resolution for the noisy setting with $\sigma_y$=0.05 and compression inverse problem and JPEG restoration problem for non-linear settings. Technical Quality: 3 Clarity: 3 Questions for Authors: Questions: 1. Why does Eq. 117 hold true for VPSDE? I’m missing the simplification step here. 2. Why does Table 1 not include results of inpainting for diffusion models on ImageNet? Similarly, Table 4 in the appendix skips results for inpainting on FFHQ. Suggestions: 1. All the tables and figures must explicitly state all the relevant settings of the inverse problems. It is not immediately apparent that some of these quantitative results are only for noiseless linear inverse problems. 2. Consider including an algorithmic box that summarizes C-$\Pi$GFM and C-$\Pi$GDM. This would provide a concise overview of the method to the readers. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper discusses relevant limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\newcommand\dag\dagger$ We thank the reviewer for their insightful comments and feedback. We provide a detailed response to several concerns below: >The benefits of the proposed method for the settings of noisy linear inverse problem setting as well as non-linear inverse problems remain unclear. The paper does not include any quantitative results for these two problem settings. It only provides some limited qualitative results for super-resolution for the noisy setting with $\sigma_y=0.05$ and compression inverse problem and JPEG restoration problem for non-linear settings. As illustrated in Figures 7 and 11 in the main text, we would like to highlight that our proposed method can be applied to challenging noisy and non-linear inverse problems in as few as 5-10 sampling steps. Therefore, the proposed method can also be beneficial for these settings. However, we acknowledge that our qualitative results for noisy and nonlinear problems serve as a proof of concept and that further evaluation and refinements are necessary. We intend to pursue more detailed investigations and improvements in future work and will highlight the same in the Conclusion section of a subsequent revision. We would also like to point the reviewer to our common response (Point 1) for a more comprehensive justification of the utility of our method for noisy and non-linear inverse problems. >Why does Eq. 117 hold true for VPSDE? I’m missing the simplification step here. For a given degradation operator $H$, the core idea in this step is the property of the projection matrix $P=H^\top(HH^\top)^{-1}H$, which shows: $$PH^\dag = \big[H^\top(HH^\top)^{-1}H\big]\big[H^\top(HH^\top)^{-1}\big] = H^\top(HH^\top)^{-1} = H^\dag$$ Now, Eqn. 
116 reads as: $$\Phi_y = -\int_0^t \frac{w\beta_t\mu_s}{2}\exp(\kappa_1(s))\Big[H^\dag + (\exp(\kappa_2(s)) - 1)PH^\dag\Big] ds$$ Replacing $PH^\dag = H^\dag$ in the above equation, we get: \begin{align} \Phi_y &= -\int_0^t \frac{w\beta_t\mu_s}{2}\exp(\kappa_1(s))\Big[H^\dag + (\exp(\kappa_2(s)) - 1)H^\dag\Big] ds\\\\ &= -\int_0^t \frac{w\beta_t\mu_s}{2}\exp(\kappa_1(s))\exp(\kappa_2(s))H^\dag\, ds \\\\ &= -\big[\int_0^t \frac{w\beta_t\mu_s}{2}\exp(\kappa_1(s) + \kappa_2(s))ds\big]H^\dag \end{align} Therefore, we arrive at Eqn. 117. We hope this resolves any confusion. We will also add these simplifying steps in our subsequent revision. > Why does Table 1 not include results of inpainting for diffusion models on ImageNet? Similarly, Table 4 in the appendix skips results for inpainting on FFHQ. We thank the reviewer for pointing this out. We couldn't include these comparisons due to time constraints but will include them in a subsequent revision. > All the tables and figures must explicitly state all the relevant settings of the inverse problems. It is not immediately apparent that some of these quantitative results are only for noiseless linear inverse problems. We thank the reviewer for pointing this out. We are planning to update the paper with more detailed captions and agree that we should mention the settings of the problems more explicitly in the figures and tables. > Consider including an algorithmic box that summarizes C-$\Pi$GFM and C-$\Pi$GDM. This would provide a concise overview of the method to the readers. We thank the reviewer for this suggestion. We plan to add a pseudocode/algorithm box summarizing the proposed samplers in the main text. We would also like to point the reviewer to our common response (Point 2) for more details regarding improved illustrations of our method. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I would like to retain my score.
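The key identity $PH^\dag = H^\dag$ used in the simplification above can also be sanity-checked numerically for a random full-row-rank degradation operator (a standalone sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 10))       # "fat" degradation operator, full row rank a.s.

H_pinv = H.T @ np.linalg.inv(H @ H.T)  # right pseudoinverse H^dag = H^T (H H^T)^{-1}
P = H_pinv @ H                         # projection matrix P = H^T (H H^T)^{-1} H

# P H^dag = H^T (H H^T)^{-1} H H^T (H H^T)^{-1} = H^T (H H^T)^{-1} = H^dag,
# since H H^dag = I for a full-row-rank H.
```

The check relies only on full row rank of $H$; $P$ is then the orthogonal projection onto the row space of $H$, which contains the columns of $H^\dag$.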
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments and are glad that the reviewers found our paper well-written (Reviewers hLds, PpUR), well-supported by theoretical arguments (Reviewer PpUR), our proposed method efficient (Reviewer hLds), and rigorously defined (Reviewer X2Z6). Below we highlight some shared concerns that the reviewers have and address reviewer-specific concerns as individual responses. We also provide a rebuttal pdf with our response which illustrates the main figure we plan on including in our paper revision for better understanding of the proposed method. **Shared comments**: **Re. applicability to Noisy and Nonlinear Inverse Problems**: Reviewers hLds and PpUR wished to see more details and results of our approach applied to noisy and nonlinear inverse problems. While we appreciate the suggestions and are willing to expand on them (see below), we stress that our primary focus in this work is on the *linear and noiseless case*, which captures a set of diverse and important inverse problems encountered in image restoration applications (e.g., super-resolution, deblurring, and inpainting). Similar to other works (e.g., on $\Pi$GDM), we presented the selected results on nonlinear and noisy setups as proof of concept. In our paper, we discuss our proposed approach for addressing noisy and nonlinear inverse problems in Section 3.4, with some qualitative results presented in Figures 7 and 11. More specifically, for noisy linear inverse problems, our approximations for the noisy projection operator $A_t^{\sigma_y}$ are accurate to an order of $O(\sigma_y^4)$ [See Eq. 15 in the main text], where $\sigma_y$ is the noise level added to the output of the degradation operator. Therefore, unless the noise levels are very high, this implies that our method is applicable in most practical scenarios. Empirically, Figure 7 illustrates the validity of our theoretical arguments for the task of noisy super-resolution. 
Therefore, while we do not include quantitative comparisons on noisy inverse problems, our qualitative results serve as a good proof of concept of the generality of our proposed conditional sampling framework. We will add further results for noisy-linear inverse problems in our revised manuscript to highlight this aspect further. Next, for nonlinear problems, our approximations follow a heuristic approach that is similar to $\Pi$GDM (Song et al.) but can be empirically effective, as demonstrated in the context of challenging non-linear inverse problems like JPEG and lossy neural compression-based restoration (see Figure 11) under a limited compute budget. Thus, our method offers promising avenues for speeding up nonlinear inverse problems. However, we acknowledge that our qualitative results for noisy and nonlinear problems serve as a proof of concept, and that further evaluation and refinements are necessary. We intend to pursue more detailed investigations and improvements in future work. **Re. Illustration of the proposed method** Reviewers X2Z6 and hLds made nice suggestions about improving the illustration of our proposed method. As a follow-up, we present an intuitive visualization of our proposed sampling framework in Figure 1 in our rebuttal pdf. To summarize, as illustrated in the figure, given a starting time $t_s$, our method tries to map the conditional diffusion dynamics into a more well-conditioned space where sampling is more efficient. The hyperparameter $\lambda$ controls the amenability of the projected space for faster sampling. Moreover, the projection operator itself is a function of the degradation operator and the guidance scale, resulting in more robust sampling even at high guidance scales. We revert to the original space after sampling concludes in the projected space. 
While we will continue to improve upon these visualizations, we hope that the accompanying figure can help resolve any confusion that the readers or the reviewers may have regarding our proposed approach. In a subsequent revision, we will also include the algorithmic pseudocode of our method, as suggested by Reviewer hLds, for more clarity. Pdf: /pdf/3240fbe734e4d44713ab4c7e38dbc4498b1cf192.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Integrating GNN and Neural ODEs for Estimating Non-Reciprocal Two-Body Interactions in Mixed-Species Collective Motion
Accept (poster)
Summary: This work proposes a graph neural network-based deep learning model for two-body interactions, which has better efficiency based on two numerical experiments. Strengths: Two-body interactions are not my research field. Thus, I don't have enough expertise to evaluate this work. I sent an email to AC to re-assign this paper when I received the assignment. Unfortunately, I didn't receive any feedback. My justification is just based on the writing without any methodology consideration. Weaknesses: see strengths Technical Quality: 3 Clarity: 3 Questions for Authors: see strengths Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see strengths Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
null
Summary: This paper presents a model combining GNN and Neural ODEs for the task of two-body interaction prediction. Two numerical experiments were conducted to evaluate the model's performance. Strengths: - Using a GNN and Neural ODE to describe the dynamics of two-body interaction state changes is well-motivated and reasonable. - Experiments from interaction models are implemented from simple to complex setups, showing the efficiency and capacity of the proposed method. - The paper structure and writing are clear and easy to follow. Weaknesses: One of the main contributions claimed in this paper (as well as in its title) is the combination of GNN and Neural ODE, but this approach has been studied in multiple works since 2020; the authors seem to overlook an important and related body of work [1,2,3,4]. [2] is a direct competing method (GDE) where the authors proposed a model integrating graph NN and Neural ODEs. GDE-related applications exist as well, for example, social network embedding [5] and action recognition [6]. Given the missing discussion of related work, and my concern about the major technical contribution claim in this paper, I do not see much merit in this study to be novel and interesting to the community. [1] L. Xhonneux et al.: Continuous Graph Neural Networks. PMLR 2020. [2] M. Poli et al.: Graph Neural Ordinary Differential Equations. AAAI 2020. [3] L. Chen et al.: Signed Graph Neural Ordinary Differential Equation for Modeling Continuous-Time Dynamics. AAAI 2024. [4] A. Han et al.: From Continuous Dynamics to Graph Neural Networks: Neural Diffusion and Beyond. TMLR 2024. [5] Y. Zhang et al.: Improving Social Network Embedding via New Second-Order Continuous Graph Neural Networks. KDD 2022. [6] L. Pan et al.: Spatial-temporal graph neural ODE networks for skeleton-based action recognition. Scientific Reports 2024. 
Technical Quality: 3 Clarity: 2 Questions for Authors: The author should discuss in their response regarding the missing related works that involve integrating GNN+Neural ODEs, and highlight their differences/contribution from previous work. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Limitations were discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for bringing these papers to our attention. We will cite them in the final version. Re – Question and Weakness (relationship to Graph ODEs): The pre-existing methods [1-5] solve and train ODEs on a graph with fixed edges and only consider changes in the edge weights. For an application to collective motion where the adjacency changes drastically in time, such as collective cell migration, flocks, and pedestrian dynamics, these approaches require a fully or almost fully connected graph when solving the ODE. In such systems, addressing the adjacency at each time is critical because the behavior of individuals is determined by the information they can sense, that is, the state of neighboring others. A fully connected graph is computationally expensive and makes it difficult to extrapolate to situations with unknown graph structures. In contrast, since edge structures are continuously updated in our approach, the pruning effect substantially reduces the memory requirement: in our 400-body simulations with an edge density of 2%, the 200 GB required for a fully connected graph is reduced to 30 GB, making training feasible on an off-the-shelf GPU board. As for reference [6], although the method introduces a dynamic edge structure, the time series of the graph (edge) structure needs to be estimated first from an entire trajectory, so the trained model cannot be further extrapolated to situations outside of that particular graph structure. Our method resolves this issue by setting a rule that defines edges at each time point instead of explicitly providing a graph-structure time series. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for clarifying my concerns on the differences of the method from previous work and highlighting their contributions on computational costs; I have increased my score to 5.
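The rule-based dynamic edge construction described in the rebuttal above can be sketched as follows (a toy illustration with a simple sensing-radius rule, not the authors' implementation):

```python
import numpy as np

def radius_edges(pos, r):
    """Rebuild the directed edge list at the current time step:
    connect bodies whose pairwise distance is below the sensing
    radius r, excluding self-edges. Returns a (2, num_edges) array."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    src, dst = np.nonzero((d < r) & (d > 0.0))
    return np.stack([src, dst])

# Three bodies in 2D: two close together, one far away.
pos = np.array([[0.0, 0.0],
                [0.1, 0.0],
                [5.0, 5.0]])
edges = radius_edges(pos, r=1.0)
```

Because the edge list is rebuilt from the current positions at every step, the graph stays sparse whenever interactions are local, which is the pruning effect that cuts the memory footprint relative to a fixed fully connected adjacency.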
Summary: The paper introduces a novel deep learning framework for estimating two-body interactions in mixed-species collective motion. The authors combine graph neural networks (GNNs) with neural ordinary differential equations (Neural ODEs) to predict interactions between pairs of entities based on their states. This approach represents the system as a dynamic graph, using GNNs to efficiently calculate interactions and Neural ODEs to learn system dynamics. The authors demonstrate their method via two experiments: a simple harmonic interaction model and a more complex model simulating cellular slime molds. The framework successfully estimates interaction functions and replicates both individual and collective behaviors in these systems. The authors provide detailed quantitative analyses of estimation accuracy and demonstrate the method’s ability to generalize to single-species scenarios after training on mixed-species data. Strengths: - Novel integration of GNNs and Neural ODEs for collective motion analysis - Ability to handle mixed-species systems and estimate species-dependent interactions (demonstrating its effectiveness across simple and complex models) - Successful generalization to single-species scenarios after training on mixed-species data - Potential applicability to a wide range of biological systems exhibiting collective motion Weaknesses: - High computational cost and long training times. - Current limitation to deterministic motion equations and pairwise interactions. - Lack of ablation studies on hyperparameter optimization and the method’s sensitivity to noise. - Lack of comparative analysis with existing methods for estimating collective motion dynamics. Technical Quality: 3 Clarity: 2 Questions for Authors: Questions for the authors: 1. How does the performance of your method compare to existing approaches for estimating collective motion dynamics, such as SINDy or Bayesian optimization methods? 2. 
Have you explored the method’s sensitivity to noise in the input data? How robust is the estimation process to measurement errors or stochastic fluctuations in the trajectories? 3. I’m intrigued by the use of LAMB optimizer. What specific characteristics of your problem or empirical results led you to choose LAMB over AdamW? 4. What are the primary factors limiting the scalability of your approach to systems with stochastic motion equations and higher-order interactions? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback and comments. We are encouraged by your overall evaluation and by the many technical suggestions, which will be very helpful in future extensions of the work. Re – Question 1 and Weakness 4 (Comparative analysis with SINDy and Bayesian optimization methods): Thank you for pointing out the lack of comparative analysis. Although both SINDy and Bayesian optimization are in principle applicable to our target system, they would require drastic changes to our code, so we have not tested them. Re – Question 2 and Weakness 3 (The method’s sensitivity to noise in the input data): No, we have not systematically studied the dependency on the noise parameter, nor introduced additional noise; this is ongoing and part of our future plans. Because stochastic differential equations are employed both to generate the dataset and to train the ODEs, we expect the method itself to be robust against stochasticity. In the mixed-species case, the linear relationship between true and estimated values was robust (Figs. 2F and 2J). We do, however, see consistent (albeit slight) under-estimation of the interaction term of \phi (F^{(2)}_\phi), which is presumably caused by the stochasticity added to d\phi/dt. Re – Question 3 (Advantage of the LAMB optimizer in comparison with AdamW): Our module has a layer where the output of the fully connected network is multiplied by a trainable scalar. When using Adam, the scalar behaved erratically compared with the weights and biases in the fully connected layers. The ability of LAMB to automatically adjust the learning rate between layers helped us train the module stably. We are aware that there is also the option of setting a different learning rate for the scalar using AdamW, which we did not test. 
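To make the LAMB-vs-Adam point above concrete, here is a minimal sketch of LAMB's per-layer trust ratio, which is the mechanism that rescales the step for each layer. Bias correction, weight decay, and the clipping function of the full algorithm are omitted for brevity, and the tensors and hyperparameters are illustrative:

```python
import numpy as np

def lamb_layer_update(w, grad, m, v, lr=1e-3, b1=0.9, b2=0.999, eps=1e-6):
    """One simplified LAMB step for a single parameter tensor.

    The trust ratio ||w|| / ||update|| rescales the Adam-style step per
    layer, so a lone trainable scalar and a large weight matrix receive
    comparable *relative* step sizes.
    """
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    update = m / (np.sqrt(v) + eps)            # Adam-style direction
    w_norm = np.linalg.norm(w)
    u_norm = np.linalg.norm(update)
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    return w - lr * trust * update, m, v

# the trainable scalar mentioned in the rebuttal vs. a weight matrix
scalar = np.array([5.0])
matrix = np.ones((64, 64)) * 0.01
s2, *_ = lamb_layer_update(scalar, np.array([1.0]), 0.0, 0.0)
m2, *_ = lamb_layer_update(matrix, np.ones((64, 64)), 0.0, 0.0)
```

For these uniform toy tensors, both "layers" move by the same relative amount (roughly the learning rate), illustrating why the scalar no longer behaves erratically relative to the matrix.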
Re – Question 4 and Weakness 2 (Primary factors that limit the scalability of our approach to systems with stochastic motion equations and higher-order interactions): Computational load is the primary factor when extending the present approach to stochastic systems. With a current stand-alone off-the-shelf GPU on the market (NVIDIA A6000), it takes weeks to train the ODEs. Since the bottleneck seems to be the neural ODE calculation, performing Bayesian inference to fit SDEs would take an unrealistically long time. Some form of parallelization and the use of supercomputers may help resolve this issue in the future. As for higher-order interactions, the limitation is the lack of a good library for dealing with hypergraphs, at least to the best of our knowledge. The other option is to use message passing twice, which we have not tried because of the caveat of making the training and interpretation of the neural networks more difficult. Re – Weakness 1 (High computational cost and long training times): Since our target of estimation is the pairwise force that would act as a total force driving the entities, we are essentially dealing with an inverse problem. The costs are necessary for stable and accurate estimation of each pairwise force against noise. Given the nature of the problem, we believe the memory and time costs of estimation are a difficult issue to resolve. Still, as we mentioned in the last section, we agree that it is an important avenue to explore in future research. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I increase my score to 7.
Summary: This paper seeks to model dynamic interactions of biological agents (primarily cellular slime molds) by adapting the (continuous-time) Langevin model, which models the collective motion of the system by considering individual force terms and the summation of pairwise interactions. Such pairwise interactions can be modelled as a graph; thus the authors model the interaction terms with message passing on a graph, and the solution to the differential equations is modelled with a neural ODE. The main contribution seems to be the learning of the force terms with fully connected neural networks. The authors claim that the main problem with previous works is that no models have been proposed that can model systems that interact in unknown ways; however, the system is trained in a supervised way based on known simulations. The results suggest the learning model can replicate the simulation; I don’t believe there was any attempt to extrapolate beyond the training regime. Strengths: 1. The paper describes a very interesting problem which (to my knowledge) hasn’t been extensively explored through learning-based approaches 2. It uses state-of-the-art message-passing GNNs and Neural ODEs as part of the approach 3. The model accurately replicates simulation behaviour 4. Code is available Weaknesses: 1. The paper is really hard to read. It took me a very long time to understand how it was implemented, and although the code was generously submitted, it isn’t commented, making it difficult to follow in the time available. GNNs and ODEs are in the title of the paper but are barely mentioned in the text. 2. The methodological contribution is limited 3. The model is trained in a supervised way, meaning that it cannot learn new behaviours or extrapolate beyond the training regime. As this was stated as the main limitation of previous works, it suggests this isn’t adding anything beyond these? 4. 
Results are based on validation against simple simulations, and run time is apparently slow anyway, so what is the value of the proposed approach? Technical Quality: 3 Clarity: 2 Questions for Authors: 1. The main contribution seems to be to use a fully connected network to learn the force terms through a data term that forces the model to match the simulation results. Why not use a physics-informed neural network to embed the physics into the model, and thus hopefully extrapolate beyond the training regime? 2. Wouldn’t it make more sense to model this as a graph ODE? http://arxiv.org/pdf/1911.07532 3. It would help future readers if the authors explained in more detail how the whole network is trained, including where the GNN and neural ODE are called (it took me a long time to work this out as they are only briefly mentioned in the text) 4. This paper seems like it might be relevant and seems to go beyond what is proposed here by learning models of interactions with PINNs: https://arxiv.org/abs/2303.09906 - can the authors explain the advantages/disadvantages of their work relative to an approach such as this? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Limitations are not really discussed beyond runtimes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback and comments. Your careful reading was very helpful in identifying places we failed to explain in the draft. Re – Question 1 and Weakness 2 (Why not use a physics-informed neural network): This was briefly described in the Background section; however, its relation to Physics-Informed Neural Networks (PINNs) was not fully conveyed. While there is no strict definition of a PINN, we believe our model absolutely falls into that category, since we build our architecture on equations of motion for active matter (a class of coupled equations of motion important in non-equilibrium and biological systems) with the forces represented by neural networks. Most work on PINNs centers on the estimation of relatively simple, close-to-equilibrium systems such as diffusion and fluid dynamics, where semi- or almost un-supervised learning is partially possible based on first principles. In contrast, actively propelled particles such as those studied here are far from equilibrium, and no Hamiltonian is available. It is precisely because we are exploring this difficult class of systems, which is nevertheless fundamental in biological physics, that the study is novel and important. Re – Question 2 & Weakness 2 (Relationship with graph ODEs): Our model is in fact a graph ODE, except that the graph evolves in time due to changes in particle positions. Valid edges are detected at every time point based on particle-to-particle distance. Existing graph ODEs instead have explicitly defined (i.e., fixed or semi-fixed) graphs. We realize that this wasn’t fully described and would be happy to clarify it in the final edition. Re – Question 3 & Weakness 1 (Clarify the details of training): We recognize now that the section that describes this (p.4, L. 130-136) should have been clearer. Apart from adding more comments to the code, we propose to reorganize and extend the Method section, and to add a flow chart in the final version. 
The reorganized section would make the following clear: - Both for generating the training data with predetermined forces and the learning approach with neural networks, we solve a differential equation, eq. (1) in the original draft, over time. Depending on whether there is noise in the system, we employed either neural ODE or neural SDE. By neural ODE we simply mean that we solve an ODE in which the forces are given by neural networks. In either case, the r.h.s. of eq. (1) has to be evaluated at least once per time step. - To generate the training data, predetermined force functions were simply evaluated. During the learning process, the forces are represented by neural networks. Graph neural networks (GNN) techniques were employed to make this evaluation efficient. We note that the state of the system at any given time can be expressed by a graph, with each entity representing a vertex, and interactions between pairs of entities representing edges. As the positions evolve over time, the graph is time-dependent, too. - Whenever the r.h.s. of eq. (1) needs to be evaluated by the solver, we pass the system state to the GNN wrapper. The GNN wrapper defines the graph structure and performs a message passing process, which evaluates the neural network (or predetermined function to generate training data) of force F^{(1)} for each vertex, evaluates the neural network for the force F^{(2)} for each edge, and finally adds the forces on the edges to each connecting vertex. - Loss function: In order to infer the forces F^{(1)} and F^{(2)}, we split training data into pairs of states which are a time tau apart. Then, for each pair of states, we integrate the equation of motion from the first state for a time tau using the neural network forces and then compare the resulting state to the second state. This exploits that the noise in the training data has the tendency to partially cancel out over time. 
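The evaluation loop in the bullets above (build the graph from the current state, evaluate F^(1) per vertex and F^(2) per edge, sum the edge forces onto each vertex, and integrate the state forward by a time tau) can be sketched as follows. This is a minimal NumPy illustration: the toy force functions, distance cutoff, and explicit Euler stepping are simplifying assumptions standing in for the neural networks and the neural ODE/SDE solver used in the work:

```python
import numpy as np

def rhs(pos, f1, f2, cutoff):
    """Evaluate the right-hand side of eq. (1): the individual force F^(1)
    per vertex plus the pairwise force F^(2) summed over each vertex's
    current neighbors. `f1`/`f2` stand in for the neural networks (or the
    predetermined functions used to generate training data)."""
    n = len(pos)
    force = f1(pos)                                   # F^(1) on every vertex
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    for i, j in np.argwhere((dist < cutoff) & ~np.eye(n, dtype=bool)):
        force[i] += f2(pos[j] - pos[i])               # message passing: edge -> vertex
    return force

def integrate(pos, tau, dt, f1, f2, cutoff):
    """Integrate the state forward by tau with explicit Euler steps; the
    graph is rebuilt inside rhs() at every step, which is what makes the
    edge structure time-dependent."""
    for _ in range(int(tau / dt)):
        pos = pos + dt * rhs(pos, f1, f2, cutoff)
    return pos

# toy forces: weak drift toward the origin plus soft pairwise repulsion
f1 = lambda p: -0.1 * p
f2 = lambda r: -0.05 * r / (np.linalg.norm(r) + 1e-9)

rng = np.random.default_rng(1)
start = rng.normal(size=(20, 2))
end = integrate(start, tau=0.5, dt=0.05, f1=f1, f2=f2, cutoff=1.0)
```

The training loss described above would then compare `end` (integrated from the first state of a pair using the neural-network forces) against the observed second state a time tau later, e.g. via a mean squared error.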
Re – Question 4 (advantages and disadvantages of our method in comparison with arXiv:2303.09906): The preprint addresses learning the dynamics of an order parameter of agent alignment instead of the underlying full trajectories of the interacting agents. The order parameter encodes some of the information about the collective dynamics of the agents, since it is an average over the agents’ states at a given time, but it does not come close to capturing their full dynamics. Their approach infers the dynamics of a one-dimensional quantity, whereas we infer the full dynamics of all the agents in the system. Their approach is useful in situations where one is only interested in understanding the dynamics of a mesoscopic or macroscopic quantity such as an order parameter, and where having to keep track of the complete dynamics of the system is a hindrance, e.g. in terms of data storage or computational cost. Their approach, however, cannot be applied to inferring the microscopic interactions between the agents (because they are intentionally averaged out), which is the aim of this study. Re – Weaknesses 3 & 4 (Unable to extrapolate as in other previous works due to the supervised approach. The value of the proposed approach): This is related to Question 1. We should note that in complex systems where first principles are not easily available, an unsupervised approach is not feasible. To the best of our knowledge, the only studies of learning the interactions of actively propelled particles are on the Vicsek model (cited refs [4, 24]), which still has an order parameter that can be exploited. The aim of our work instead is to go beyond such systems, where no clear order parameter is given. From an application point of view, too, the interaction term in the Vicsek model leaves out crucial aspects of the interaction between biological cells, and thus we turned to the current model equation. 
Whether learning such models is even possible was not at all trivial, and this work demonstrates its first realization; it is thus novel. We realized that this aspect was not sufficiently explained and will amend it in the corrected edition. --- Rebuttal Comment 1.1: Title: Less confident but still have questions Comment: I've read your response and thank you for your attempt to clarify, but this has raised more questions. For sure, although I have some basic understanding of PINNs, it's not my specialism and I do not know well the systems you are modelling - so I have reduced my confidence in my review. What I was wondering was whether the equations you use to define your simulations could not be used to generate a physics-based loss for your ODE - where my understanding is taken from this paper (which does not model interaction terms): Linka, K. et al. Bayesian Physics-Informed Neural Networks for real-world nonlinear dynamical systems. arXiv [cs.LG] (2022). I ask this because, from what I understand from your paper (Section 5.2, last paragraph), the model is overfitting on the interaction forces, and my limited understanding is that adding some form of physics loss can help regularise and allow models to extrapolate beyond the training regime? As it stands, the current results perform well for simulations - but for these the model is known, and a neural network is a universal approximator, so I assume it is not entirely surprising that it can learn the function given enough data? In general, I'm not clear what the long-term projection of this work is. Can the authors comment on how this would be translated to real-world data for which a model doesn't exist, and what the potential advantage of that would be? How would they extrapolate beyond the training regime? How would they validate that the model works? 
I'm still not clear why the graph neural ODE paper (Poli et al 2019) is not relevant here - I admit to not having studied this paper in depth, but they show results on multi-agent trajectory extrapolation and discuss a spatio-temporally evolving graph. --- Reply to Comment 1.1.1: Title: Answer to the questions Comment: We appreciate the reviewer's comments and the thoughtful questions raised. Below, we provide a detailed response to address the concerns highlighted. >whether the equations in your simulations could not be used to generate a physics based loss for your ode The paper by Linka et al aimed at learning daily new COVID-19 cases (\hat{x}). Since it was evident that the data oscillates in time, the authors stabilized the prediction of \hat{x} by constraining a neural network (NN) model (which takes time t as input and outputs x) to exhibit behavior similar to a harmonic oscillator model (where f(x) := \ddot{x} + (c/m)\dot{x} + (k/m)x = 0 for x). Specifically, the constraint was applied by adding the term f(x)^2 to the loss function. Using this approach, the authors successfully predicted future time series not included in the training data. In contrast, our work aims at studying a system where we cannot use a term like f(x)^2. The very purpose of the learning process is to discover the model we used to create the training data. We used one of the models that has been proposed to explain the collective behavior of cells. However, this model does not fully account for the characteristics observed in the experimental data we aim to analyze, such as phase-separation-like behavior between different cell types. Additionally, when considering other systems, both the collective behavior as a whole and the individual cell behavior differ significantly between systems, lacking any unifying pattern. So far, no universal model nor principle governing these systems has been found. 
Given this background, we avoided assuming a specific model in the training process. > what I understand from your paper section 5.2 (last paragraph) the model is overfitting on the interaction forces, … adding some form of physics loss can help regularise and allow models to extrapolate … The last sentence of Section 5.2 may have caused confusion. What we described there was not overfitting but rather under-estimation. We will clarify this point further in the final version. We should note that the extrapolation performed in the Linka et al paper is indeed achieved in our study. In Section 5, this is demonstrated by our model’s ability to reproduce the dynamics of collective motion from initial states not included in the training data. >a neural network is a universal approximator, so … it is not entirely surprising While the universal approximation theorem guarantees the existence of a solution in which a NN can approximate any function, this does not imply that such a solution can be found, nor that the solution will generalize well to similar data. Our case is an inverse problem where each individual in a group is influenced by forces from numerous neighbors. The challenge is to estimate the individual forces from the observed integrated trajectories, which are the result of summing and integrating those forces. It is not guaranteed that such forces can always be computed, and even if the computation converges to a solution that explains the trajectory, due to the nature of inverse problems, this does not ensure that the individual forces before summation are correct. However, we have confirmed that our computations do converge, and that the individual forces are accurately estimated. >what the longterm projection of this work is? In biological physics, many researchers, including us, are attempting to address the various types of collective motion exhibited by groups of cells. Understanding such dynamics is fundamental to knowing how tissues form. 
Our present approach should be applicable to extracting individual cell behavioral rules from real data. >How would they validate that the model works? This is related to the above comments. Our plan is to test this approach by applying it to real experimental data and, from there, to make corrections and improve the assumed model. >why is the graph neural ODE paper (Poli et al 2019) not relevant? The method by Poli et al is relevant to the present work. We will clarify the difference between our method and theirs in the camera-ready version. Poli et al address systems where the graph structure changes much more slowly than in ours. Their extrapolation is limited to a few steps, during which the edge structure remains almost unchanged. If the prediction extends beyond a few steps, the predicted node states deviate from the correct values, leading to inconsistencies with the edge structure. To perform long-term predictions with their method, one would need to compute the neural ODE over short intervals, update the edge structure based on the resulting node states, and then repeat the neural ODE computation for another short interval. This approach is inconvenient and assumes that the edge structure remains unchanged during the neural ODE computation. In contrast, our method does not require pre-defined edges, allowing one to make long-term predictions based solely on the initial state of the nodes.
Rebuttal 1: Rebuttal: We thank the reviewers for their careful consideration of our work and are grateful for their astute criticism, which was very helpful in vastly improving the quality of the manuscript. The manuscript was reviewed by 4 reviewers, with two of them (Shpk and Sm49) recommending acceptance. The other two, KAZi and T7k9, expressed reservations about the novelty of the work, which we believe are mainly due to the relatively short description of the research background and the methodology provided in our original draft. We have provided a point-by-point response to the reviewers’ comments, which we incorporate in an updated draft. We feel strongly that we have been able to address the concerns of the referees. In particular, we would like to briefly point out the following modifications: - Referees KAZi and T7k9 both asked about the relationship to graph ODEs. We clarify the relationship of our method with pre-existing approaches that combine GNNs and neural ODEs. Our new approach does not require any given graph structure, but defines it using a given rule at every time step of the neural ODE calculation. This enables us to extrapolate the trained model to unknown initial conditions, even in cases where the adjacency changes drastically in time. - We describe the Methods more clearly, in particular how Graph Neural Networks (GNNs) are used to evaluate the equations of motion effectively. We added a flow chart that describes how our model evaluates the equations of motion (attached PDF file). - We describe Physics-Informed Neural Networks and where the present work sits relative to the related literature. - Referee KAZi expressed difficulty with readability, while referee T7k9 found the writing to be rather strong. Grammar and expression will be checked and corrected. Pdf: /pdf/c46c8c35109d4039e2217563e9e41a93a6fb4aee.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
Accept (poster)
Summary: This paper attempts to address a forward-looking and challenging question: "Can we limit human supervision to easier tasks, yet enable the model to excel in harder tasks?" (referred to as Easy-to-Hard Generalization). Based on the observation that "evaluation is easier than generation", the authors propose to train a verifier on easy tasks, then make use of its generalization ability to supervise the generator on hard tasks. To harness the complementary strengths of outcome reward models (ORMs) and process reward models (PRMs), the authors introduce the Outcome & Process Reward Model (OPRM), so as to better utilize Easy-to-Hard evaluators. Through extensive experiments, the authors verify that easier-level evaluators maintain their effectiveness on harder tasks. Further experiments explore the use of the easy-to-hard evaluator as a reward model in reinforcement learning and underscore the potential of using easy-to-hard evaluation to improve easy-to-hard generators. Strengths: * The problem this paper aims to tackle is promising and challenging. * The proposed approach is intuitive and has strong motivation. * This paper is well-written and presents clear ideas. * Through extensive experiment, the authors validate that the proposed method of "training a verifier on simple tasks, then leveraging its generalization capability to guide the generator on complex tasks" is workable. Weaknesses: * **The definition of difficult problems could be further refined.** The scenario considered in this paper is how to enhance the model's ability to perform difficult reasoning tasks when humans cannot provide effective supervisory signals. Given that the model's capabilities vary across the seven subsets in the MATH dataset on which the experiments are based, the definition of difficult problems may be biased. For instance, the LLM performs significantly better on the Algebra subset than on Geometry and Number Theory. 
Therefore, the improvement at levels 4-5 may primarily result from the performance enhancement in Algebra (as the model inherently has some evaluation ability for Algebra level 4-5 problems). Thus, it would be better to display the performance of the proposed method on the different subsets of MATH, as well as the performance at levels 4 and 5 within each subset, to prove that it can help the model solve "truly difficult" problems (such as Number Theory level 5). Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. I will consider raising my score if the authors can address my concerns. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think the authors have addressed their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review and positive feedback. We're glad to hear that you appreciate the easy-to-hard generalization problem we're working on. Your recognition of our proposal and motivations is encouraging. We address your questions below. **Weaknesses** **W1 (a). The definition of difficult problems could be further refined** The goal of this paper is not to provide a specific way to split data into easy and hard portions for any arbitrary domain **but to show how we can enable generalization on hard tasks by only supervising the model on easy tasks**. Specifically, in the settings for scalable oversight (aligning superhuman AI), we can treat **all the tasks that humans can annotate as easy, and all the tasks that humans cannot supervise as hard**. This gives a clear definition of “easy” vs “hard” for real-world tasks. It is only for research purposes that the experimental datasets used in the paper, as the reviewer noticed, have a clear division of the difficulty levels. This division helps verify the idea of easy-to-hard generalization (Lines 30-34), where the model is trained only on **easy data (simulating the tasks that humans can label)**, and then generalizes to **hard data (simulating the tasks that humans cannot handle)**. **W1 (b). For instance, the LLM performs significantly better on the Algebra subset than on Geometry and Number Theory. Therefore, the improvement at levels 4-5 may primarily result from the performance enhancement in Algebra (as it inherently has some evaluation ability for Algebra level 4-5 problems). Thus, it would be better to display the performance of the proposed method in different subsets of MATH, as well as the performance at levels 4 and 5 under different subsets, to prove that it can help the model solve "truly difficult" problems (such as Number Theory level 5).** Thanks for the suggestions. 
We have indeed conducted a fine-grained analysis of OPRMs’ re-ranking improvements divided by level and math category in Figures 19 and 20 in Appendix N, where Number Theory and Geometry are somewhat more difficult than Algebra. However, we can see in Figure 20 that **for almost all categories, OPRMs’ re-ranking brings more than a 4% improvement**. Besides, we have added Figure 2 (Right) in the uploaded PDF to verify the effect on the level 4-5 subset of Number Theory and Geometry. We can see that on this hard subset, **OPRM brings nearly a 10% improvement**, further demonstrating the feasibility of the easy-to-hard approach and the superiority of OPRM. We will include more experiments on the hard parts of the different category subsets in the revised paper. Many thanks for your suggestions. --- Rebuttal Comment 1.1: Comment: Thanks for your response, my major concerns have been addressed and I have adjusted my score accordingly. --- Rebuttal 2: Comment: Dear reviewer bhN7, we are glad our response has addressed most of your concerns. Thank you for increasing your score!
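For readers unfamiliar with the re-ranking schemes discussed in this thread (Best-of-N and weighted voting with an external evaluator), here is a minimal sketch. The candidate answers and scores below are made up for illustration; in the paper the scores come from the trained evaluator (OPRM):

```python
from collections import defaultdict

def best_of_n(candidates):
    """Pick the single highest-scoring solution (BoN re-ranking).
    `candidates` is a list of (final_answer, reward_score) pairs, where
    each score would come from an evaluator such as the paper's OPRM."""
    return max(candidates, key=lambda c: c[1])[0]

def weighted_voting(candidates):
    """Sum evaluator scores over candidates sharing the same final answer
    and return the answer with the largest total."""
    totals = defaultdict(float)
    for answer, score in candidates:
        totals[answer] += score
    return max(totals, key=totals.get)

# four sampled solutions to one problem (scores are fabricated)
cands = [("42", 0.9), ("41", 0.8), ("41", 0.7), ("7", 0.2)]
print(best_of_n(cands))        # "42": single best score
print(weighted_voting(cands))  # "41": 0.8 + 0.7 = 1.5 beats 0.9
```

The toy example shows why the two schemes can disagree: BoN trusts one high-scoring sample, while weighted voting rewards answers that appear repeatedly with decent scores.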
Summary: This paper addresses the issue that humans cannot always provide helpful demonstrations or supervision on tasks beyond their expertise. Based on the observation that evaluation is easier than generation, the authors propose "easy-to-hard generalization," training a verifier on easy tasks and leveraging its generalization ability to supervise the generator on hard tasks. Experimenting mainly in math reasoning tasks, they demonstrate that easy-to-hard generalization from evaluators can enable easy-to-hard generalization from generators. Strengths: The paper introduces the Outcome & Process Reward Model (OPRM), which harnesses the complementary strengths of ORMs and PRMs. Experiments show that OPRM is more efficient. It also conducts a systematic experimental setup, testing various generators and evaluators, as well as optimization algorithms (BoN, Weighted Voting, and RL). It provides numerous experimental analyses in scalable alignment. Weaknesses: While the paper is well-written and presents a solid analysis, several weaknesses need addressing: 1. What is the difference between "easy-to-hard" and "weak-to-strong"? You state that human supervision is available but not reliable in the weak-to-strong setting, but in the [OpenAI paper](https://cdn.openai.com/papers/weak-to-strong-generalization.pdf) it says, "We find that when we naively finetune strong pretrained models on labels generated by a weak model, they consistently perform better than their weak supervisors, a phenomenon we call weak-to-strong generalization." In that study, they finetune GPT-4 with a GPT-2-level supervisor. Can a GPT-2-level supervisor be seen as a verifier on easy tasks? The novelty should be considered. 2. In Table 1, the Full ICL setting performs worse than the Easy-To-Hard ICL setting. How do you explain this? The intuition is that sampling from both easy and hard exemplars may help solve hard problems more effectively than just demonstrating easy exemplars. 
Although the quality of PRM800K is lower than MetaMATH, your explanation for why the Full ICL setting is worse than the Easy-To-Hard ICL setting is insufficient (line 198). 3. Can you report the average accuracy of verifying each step in a solution when you evaluate the evaluators? Section 3.5 does not explain why this is not reported. 4. When citing images and sections, consider adding automatic jumps for easier navigation. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What is the difference between "easy-to-hard" and "weak-to-strong"? 2. In Table 1, the Full ICL setting performs worse than the Easy-To-Hard ICL setting. How do you explain this? 3. In Table 1, the Easy-To-Hard SFT setting performs slightly worse than the Full SFT setting, which is expected. Do you think the evaluation of the Easy-To-Hard setting is influenced more by the format of problems or by the true grasp of the principles of solving hard tasks? Are there any better evaluation methods? This is an open question, and I would appreciate your thoughts on it. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The paper may lack innovation regarding the concept of "easy-to-hard generalization" and the method to achieve it. Many experimental conclusions appear to merely reproduce or support existing works and lack depth. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review and constructive feedback. We appreciate your recognition of the paper’s clarity, the proposed OPRM methods, and the solid experiments analysis. We address your concerns and questions below. **Weaknesses & Questions** **W1&Q1. What is the difference between "easy-to-hard" and "weak-to-strong"? & Can a GPT-2-level supervisor be seen as a verifier on easy tasks?** “Easy-to-Hard” (E2H) studies how the model generalizes when trained by **clean** labels on **easy** tasks, while “Weak-to-Strong” (W2S) studies how the model generalizes when trained by **noisy** labels on **all available** tasks. We listed a few differences between E2H and W2S as clarification, which we’ll add to the revised version of the paper: - W2S uses weak teacher’s prediction as the supervision, where no human annotation is used for the strong model. E2H uses human annotations, but they are limited to easy problems. - W2S studies classification or short-answer prediction problems, while E2H studies generative (or long-CoT reasoning) problems. - The two models used in W2S are the weak teacher and the strong student, which are models of different sizes but trained on the same task. The two models used in E2H are the generator and the evaluator, which can be of the same size, but trained on different tasks (as policy model or as reward model). - The research question in the W2S analogy is: Can we have the student model outperforming the teacher model? The research question in the E2H analogy is the following: **Can we produce a system (LLM+evaluator) trained on human annotations on easier tasks only but can perform well on harder tasks for which we do not have any human annotations?** Finally, both E2H and W2S are analogies of scalable oversight, which studies how we can align superhuman AI models. > “Can a GPT-2-level supervisor be seen as a verifier on easy tasks?” We believe this is not a well-defined question. 
We would like to clarify that: 1) W2S only studies classification problems, 2) E2H studies generalization of verifiers on hard tasks, not easy tasks. > "You state that human supervision is available but not reliable in the weak-to-strong setting, but in the OpenAI paper it says, ..., they finetune GPT-4 with a GPT-2-level supervisor." The GPT-2-level supervisor in the W2S paper is used to simulate the noisy human supervision on tasks that are too difficult for humans to reliably evaluate. **W2&Q2. Explanation for why the Full ICL setting is worse than the Easy-To-Hard ICL setting is insufficient (line 198).** One of our hypotheses is that ICL is mainly performing format learning, so **exemplars of easy problems might be simpler for the model to understand and follow**, whereas the format of difficult problems may be more challenging for the model to grasp. Another hypothesis is that **the level of noise in hard data** is likely higher than in easy data. This is like how humans are more prone to making mistakes when annotating difficult questions (inconsistencies in reasoning solutions can also be considered a form of noise), making it difficult for the model to effectively extract knowledge from hard ICL data. There is also research [1] suggesting that knowledge is stored in data in a hardness-invariant way. Therefore, selecting hard data for ICL does not necessarily lead to performance improvement. **W3. Can you report the average accuracy of verifying each step in a solution when you evaluate the evaluators?** We conducted additional experiments using the PRM800K-test data, which includes correctness annotations for each step, to test our model's ability to distinguish correct reasoning steps. We randomly selected a portion of PRM800K-test data to balance positive and negative samples. 
The accuracy of the reasoning steps for the three models is as follows:

| Reward Model | Step ACC (%) | Outcome ACC (%) |
| --- | --- | --- |
| ORM-PRM800K-7B | 64.3 | 71.7 |
| PRM-PRM800K-7B | **80.4** | 63.5 |
| OPRM-PRM800K-7B | 79.8 | **74.4** |

This table demonstrates the effectiveness of our trained PRM, showing that PRM has a significantly greater ability to distinguish steps compared to ORM. Additionally, in Figure 2 (Left) of the uploaded PDF, we present the Step ROC curves of three models, where PRM and OPRM exhibit better step discrimination abilities compared to ORM. However, it is important to note that a stronger ability to distinguish steps does not necessarily indicate that the evaluator is more helpful for generation. We then also present the Outcome ROC curves of three models on discriminating the final outcome. We collect data generated on the MATH500 test set from our 7B policy model. According to the final outcome and ground truth, we label each sample and select a positive-negative balanced set to plot the Outcome ROC curves, where OPRM exhibits better outcome discrimination abilities compared to ORM and PRM. The above table also shows the effectiveness of OPRM on outcome discrimination ability. **W4. Citing images and sections** Thank you very much for your recommendation. We will correct this issue in the revised version. **Additional Questions** **Q3. Do you think the evaluation of the Easy-To-Hard setting is influenced more by the format? Are there any better evaluation methods?** In this paper, we have **controlled all data to have a consistent format**. Therefore, the format will not influence the evaluation results and conclusions presented in the article. Our format is the same for all levels. We released all the data, which are ready for the reviewer to check. 
We also believe that accuracy under greedy decoding, majority voting, best-of-N, and weighted voting is comprehensive enough to provide an evaluation of the mathematical reasoning tasks. [1] The Unreasonable Effectiveness of Easy Training Data for Hard Tasks, arXiv:2401.06751. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I have read the author's response, and I raise my rating. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score! Please let us know if there are any remaining questions or concerns that we can address!
Summary: In this paper, the authors propose *easy-to-hard* generalization, which is to train a reward model on simpler tasks and then use it to evaluate the solutions for more difficult tasks. They have conducted in-depth studies on MATH, and also demonstrated effectiveness on the coding benchmark APPS. This work serves as a nice proof-of-concept study of "evaluation is easier than generation", and suggests a new way toward scalable alignment without human supervision. Strengths: - The idea of *easy-to-hard* generalization is inspiring, and it's nice to see that the idea works out on the challenging MATH dataset. - The authors have conducted sufficient experiments and detailed analysis, which makes the paper well worth reading and referring to. Weaknesses: - The definition of "easy" vs. "hard" is not very clear, and it seems that the proof-of-concept experiments rely on the structure of the benchmarks, as the MATH dataset has 5 divisions of difficulty and APPS has 3. However, when there are no explicit difficulty tags in a benchmark, what is the authors' definition of "easy" and "hard" in terms of each data sample? - Following the above weakness, a natural question for the authors is to demonstrate the practical value of this work when compared with recent work that scales synthetic data for MATH (e.g., [1]). Can the massive synthetic data be viewed as a mixture of easy and hard problems + the corresponding solutions? If we treat such mixed data as the easy part and train RMs on it, can we expect similar easy-to-hard generalization? Which kinds of ability would be "unlocked"? [1] Improve Mathematical Reasoning in Language Models by Automated Process Supervision. https://arxiv.org/abs/2406.06592 - According to the figures shown in the paper, the "Weighted Voting w/ RM" method always yields better performance as N increases. By comparison, "Best-of-N w/ RM" and "Majority Voting" can plateau or even become worse when N increases from 512 to 1024. 
Does the weighted voting with RM guarantee the increasing performance, or is it just by accident? It would be great if there were formal explanations for this. - While scaling the sampling times N has seen improvements, are there certain problems whose correct solutions are never sampled by the LLM when letting N be very large? - More case studies would be beneficial to provide a more intuitive understanding of how the RM on easy problems generalizes to hard ones. - The authors have conducted in-depth analysis about the comparisons between PRM, ORM, and OPRM on MATH. While I appreciate the experiments, I wonder what the effect of PRM/ORM w.r.t. easy-to-hard generalization is. For example, is it true that we should always adopt PRM when it is possible to get process-based supervision (Lines 219~221)? If this is true, would the results on APPS be better when we could access process-based supervision for code (for example, treat the interleaved comments in a code snippet as that)? Technical Quality: 3 Clarity: 3 Questions for Authors: The idea of "evaluation is easier than generation" is appealing, and it seems that the idea draws inspiration from the assumption that P < NP. However, for some tasks it seems that evaluation shares a similar level of difficulty with generation. For example, let there be N integers: A_1, A_2, ..., A_N, and the task is to multiply them altogether: A_1 * A_2 * ... * A_N = ? . While doing multiplication is no easy task for LLMs, it seems that evaluating whether some potential answer is the result of A_1 * A_2 * ... * A_N is as difficult as generating the answer of the multiplication, since in either way, one should do it in a serial manner (for i in 1,2..N). Any thoughts/evidence on this? And broadly speaking, are there cases when evaluation is as difficult as generation, or evaluation is even harder than generation? It would be great if the authors could shed light on this. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback on our paper. We are glad that you appreciate the inspiring easy-to-hard generalization problem we’re working on, and thank you for acknowledging the thoroughness of our experiments. We address your questions below. **Weaknesses** **W1. The definition of "easy" vs "hard".** The goal of this paper is not to provide a specific way to split data into easy and hard portions for any arbitrary domain but to show **how we can enable generalization on hard tasks by only supervising the model on easy tasks**. Specifically, in the settings for scalable oversight (aligning superhuman AI), we can treat all the tasks that humans can annotate as easy, and **all the tasks that humans cannot supervise as hard**. This makes a clear definition of “easy” vs “hard” for real-world tasks. It is only for research purposes that the experimental datasets used in the paper, as the reviewer noticed, have a clear division of the difficulty levels. This division helps verify the idea of easy-to-hard generalization (Lines 30-34), where the model is trained only on **easy data (simulating the tasks that humans can label)**, and then generalizing to **hard data (simulating the tasks that humans cannot handle)**. **W2. Can the massive synthetic data[1] be viewed as a mixture of easy and hard problems?** The easy-to-hard generalization framework is also applicable to data generation methods such as [1]: we can treat all generated question-solution pairs that have been verified by ground-truth as easy problems, while those not verified by ground-truth (or open questions) are considered as hard problems. This is because the easy-to-hard generalization framework does not need to know the ground-truth solutions for the hard problems. We leave combining our framework and other methods as future work. **W3. 
Does the weighted voting with RM guarantee the increasing performance?** In our experiments with 7b-34b models, we found that weighted voting with RM is always better than BoN or majority voting. Here are our insights: - Why is weighted voting better than majority voting? Theorems 1 & 2 in [2] show the convergence of the accuracy with an increasing number of samples. Specifically, the limit is determined by the likelihood of generating the correct answers through all possible reasoning paths (and the likelihood should be viewed as a weighted sum for Weighted Majority Voting). As long as the reward model is “better than random (informally)”, i.e., assigning higher rewards to correct solutions on average, the accuracy limit of Weighted Majority Voting is higher than that of Majority Voting. - Why is weighted voting better than best-of-N? [3] shows that the scaling curve of BoN is $R_{bon}(d) = d(\alpha - \beta \cdot d)$, where $d$ is the square root of the KL divergence between the best-of-N policy and the base policy, with $d = \sqrt{\log N - \frac{N - 1}{N}}$. This means the performance of BoN will ultimately become worse when reward over-optimization happens (i.e., $d > \frac{\alpha}{2\beta}$). **W4. Are there certain problems whose correct solutions are never sampled?** We conducted additional experiments on Pass@N and reported the results in the uploaded PDF. We found there are still some problems for which a correct answer is never sampled. More specifically, **Pass@N is highly correlated with difficulty**. As illustrated in Figure 1 in the uploaded PDF, with a larger number of samples, the Pass@N for Level 1 problems is nearly saturated. However, for Level 5 problems, there are still many instances where a correct solution is not sampled. **W5. More case studies.** We have included more case studies in Figures 3 and 4 of the uploaded PDF. 
The evaluator can help generalize to harder problems in the following ways: - The evaluator can **help identify and reduce the confidence of hallucinations caused by misleading information in problems**. As demonstrated in Case Study 1, the solution selected by majority voting with an answer of 36 is misled by the different units of measurement in the problem (2.5 hours and 90 seconds), resulting in an incorrect solution. Then, the OPRM model successfully gives this solution a low score. - The evaluator can **assist in reducing the confidence of solutions that misuse mathematical theorems**. In Case Study 2, the majority solution incorrectly applies the theorem "the sum of the exterior angles of a polygon is 360°", leading to erroneous reasoning; the OPRM model assigns it low confidence. **W6. What is the effect of PRM/ORM w.r.t. easy-to-hard generalization, and would the results on APPS be better with access to process supervision?** We compared PRM, ORM, and OPRM in Appendix G, where we found PRMs and ORMs perform similarly, with PRMs slightly outperforming ORMs on hard tasks. However, the OPRMs that are trained on the mixed data of PRMs and ORMs significantly outperformed both of them. Hence, we believe PRM and ORM data should be adopted together to train an OPRM, which complements the strengths of both. We believe the results on APPS would be better if we could obtain a human-annotated (or synthetic) PRM dataset for code. However, that is out of the scope of our paper. **Questions** **Q1. Are there cases when evaluation is even harder than generation?** Not all evaluations are easier than generation. As shown in [4], LLMs might generate content that exceeds their own understanding based on the given context. An LLM might create a **highly coherent and contextually linked story**, but when questioned about the **logical connections** within the story, it may fail to make accurate judgments. 
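As a concrete illustration of the three answer-aggregation strategies compared in W3 above (majority voting, weighted voting with a reward model, and best-of-N), here is a minimal self-contained sketch with hypothetical inputs; it is not the paper's actual implementation:

```python
from collections import defaultdict

def aggregate(answers, rewards, method="weighted"):
    # answers: final answers parsed from the N sampled solutions
    # rewards: the evaluator's scores for those solutions (assumed non-negative)
    if method == "best_of_n":
        # return the final answer of the single highest-reward solution
        best = max(range(len(answers)), key=lambda i: rewards[i])
        return answers[best]
    votes = defaultdict(float)
    for ans, r in zip(answers, rewards):
        # majority voting gives each solution one vote; weighted voting
        # weights each vote by the reward model's score
        votes[ans] += r if method == "weighted" else 1.0
    return max(votes, key=votes.get)
```

For example, with answers `["4", "4", "5"]` and rewards `[0.1, 0.2, 0.9]`, majority voting returns `"4"` while both weighted voting and best-of-N return `"5"`, illustrating how a reward model can overturn an incorrect majority.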
[1] Improve Mathematical Reasoning in Language Models by Automated Process Supervision, arXiv:2406.06592. [2] An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models, arXiv:2408.00724. [3] Scaling laws for reward model overoptimization, ICML 2023. [4] The Generative AI Paradox: “What It Can Create, It May Not Understand”, ICLR 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the response! Since the added analysis and explanations have resolved most of my concerns, I have raised the score. --- Reply to Comment 1.1.1: Comment: Dear reviewer, we are glad our response has resolved most of your concerns. Thank you for raising your score!
Summary: - This paper studies the question of how a system can be improved when the performance of a system has surpassed human performance on a task. - As a testbed, the paper uses problems from the MATH dataset which have been sorted into 5 levels by difficulty. - The authors first train process-supervised reward models on level 1-3 problems on the MATH dataset. - They then use the reward models learned from easy problems to supervise policies on hard (levels 4-5) problems on MATH. - They find that the reward models substantially improve the performance of the policy on hard tasks when used as either reward models in RL or as re-ranking models during inference, despite being only trained on easy problems. Strengths: - The extensive comparison of different training methods (ReST, DPO, PPO) is useful even outside of the context of the research question. - The methodological decision to compare both re-ranking and RL on hard problems is very sound and makes me more confident in the conclusion of the paper. - Although there have been several recent papers on easy-to-hard generalization that establish that easy-to-hard generalization is possible, I think the experimental setup here takes a different angle by showing that the *reward models* learned on easy tasks transfer to harder tasks *in multiple ways* and specifically that evaluators generalize better than generators to hard tasks. Weaknesses: - The differences between the comparison categories in many cases are small, on the order of 1-2 percentage points. I also did not see any error bars or variance estimates (except maybe in Figure 4, though this is unclear). This makes assessment of the scientific validity of the results a bit more challenging. - The conclusions were demonstrated on only two tasks and both tasks were formal reasoning tasks. Would the conclusions transfer to natural language reasoning tasks? 
Technical Quality: 3 Clarity: 2 Questions for Authors: - I found nearly all the tables in the paper hard to read / extract information from. - It isn't clearly indicated (or at least I couldn't tell) how many times each model was trained, were there multiple runs, etc. I see what looks like error bars on some of the plots, but no explanation of these is given. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review and the positive feedback on our paper. We are pleased that you found our proposals interesting and our experiments thorough. It's particularly encouraging to hear that you recognize the novelty of our easy-to-hard generalization approach and find our results useful even outside the context of the research question. We address your questions below. **Weaknesses** **W1. The differences between the comparison categories in many cases are small, on the order of 1-2 percentage points. I also did not see any error bars or variance estimates (except maybe in Figure 4, though this is unclear).** We are only reporting baseline results in Table 1. The main claim of the paper, that the easy-to-hard generalization of evaluators helps generators, is supported by results in Figure 3, Figure 4, Table 2, and Table 3. The accuracy improvements from the reward model are often significant (e.g., comparing weighted voting to majority voting or comparing RL models to SFT models). We agree that observing the variance of the error is important. Figures 3 and 4 represent the curves for different combinations of random sampling trials, where the solid curves show the performance average and the shaded areas show the error ranges (performance variance). **W2. The conclusions were demonstrated on only two tasks and both tasks were formal reasoning tasks. Would the conclusions transfer to natural language reasoning tasks?** In our easy-to-hard framework, we haven't made any assumptions specific to MATH or code for the problem (easy-to-hard generalization) we're studying, so our method should be transferable to other tasks in principle. We leave the verification of our method in other domains as future work. **Questions** **Q1. 
I found nearly all the tables in the paper hard to read / extract information from.** We have added an explanation of the performance variance for each curve and will include it in the revised version of our paper. **Q2. It isn't clearly indicated (or at least I couldn't tell) how many times each model was trained, were there multiple runs, etc. I see what looks like error bars on some of the plots, but no explanation of these is given.** For the training times, most of the training runs in the paper were only conducted once due to resource constraints, and also because we've observed that the performance was quite stable in our preliminary studies. For the plots, the error-bar analysis is indeed included in each curve plot, such as Figures 3 and 4. We will add more descriptions to the plots in the paper. Specifically, for all the problems, we sampled 2048 solutions. Taking N=32 on the x-axis as an example, we randomly select 32 solutions from the 2048 rollout samples, record the consensus score (majority voting, weighted voting, and BoN), and repeat this process 400 times. The solid curve represents the mean accuracy of these 400 sampled combinations of solutions, and the shaded margin of each curve represents the performance variance. --- Rebuttal 2: Comment: Authors, thank you for the response. I have no further questions. This paper should be a clear accept. --- Rebuttal Comment 2.1: Comment: Dear reviewer, we greatly appreciate your support for our work. Thank you for maintaining your score!
Rebuttal 1: Rebuttal: Dear Reviewers and AC, Thank you all for your time and effort in reviewing our paper. We are grateful to 3nAE, dzZx, and bhN7 for recognizing **the adequacy and novelty of our experiments and motivations** and acknowledging **the importance of the problem we are exploring, easy-to-hard generalization**. We also thank VNqz and bhN7 for recognizing the intuition behind our proposed OPRM method. Our contributions are well-recognized and can be summarized as: - We show the potential of easy-to-hard generalization, where models can be guided to **solve complex problems without direct human supervision on these harder tasks**. - We demonstrate that the easy-to-hard generalization in **evaluator models can effectively guide the generalization of the policy model on challenging tasks**. This underscores the effectiveness of re-ranking strategies and reinforcement learning in leveraging evaluators to achieve performance gains on challenging tasks. We have added several figures in the uploaded PDF to aid readers in understanding our paper. These figures will also be included in our revised paper: - Figure 1: The Pass@N curve shows its high correlation with difficulty. - Figure 2 (Left): The Step ROC Curve and Outcome ROC Curve. - Figure 2 (Right): The performance of OPRM on Geometry Level 4-5 Problems and Number Theory Level 4-5 Problems. - Figures 3 & 4: Case studies demonstrating how the evaluator can assist in solving hard mathematical questions. We sincerely appreciate all the effort the reviewers and ACs put into improving our paper. We have responded to every raised concern and hope our response can address them. Thanks again for all the effort and time. Best, Authors Pdf: /pdf/e2936e2d38ff0b728dfb9ebe2106366885ea9530.pdf
NeurIPS_2024_submissions_huggingface
2024
CLIPLoss and Norm-Based Data Selection Methods for Multimodal Contrastive Learning
Accept (spotlight)
Summary: This paper introduces a novel metric, negCLIPLoss, for selecting high-quality data. Additionally, the paper proposes a norm-based metric, NormSim, which offers an improved measure of data quality and is compatible with existing methods. Both negCLIPLoss and NormSim demonstrate significant performance improvements, outperforming state-of-the-art methods, while maintaining low preprocessing time. Theoretical interpretations are provided for NormSim within the framework of a linear model. Strengths: 1. The paper is intuitive, well-motivated and well-written. 2. The proposed methods are simple and effective. 3. The experiments are sufficient. Weaknesses: 1. In Line 86, the authors assert that NormSim does not explicitly consider diversity but provide no further explanation. Since diversity is often linked to the generalization performance of models, it is unclear how the proposed methods implicitly connect with diversity. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Algorithm 1 requires knowledge of the batch size B and the parameter $\tau$ from the teacher model. If the teacher model is private and both B and $\tau$ are not accessible (for example, only an API is provided), are the proposed methods still workable? How critical is the batch size B to model performance? Can the parameter $\tau$ be estimated? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition of our paper and your constructive feedback. We have responded to your concerns and will revise our paper based on the discussions. We would also appreciate it if you could let us know if our response addresses your concerns. > **Q1**: In Line 86, the authors assert that NormSim does not explicitly consider diversity but provide no further explanation. Since diversity is often linked to the generalization performance of models, it is unclear how the proposed methods implicitly connect with diversity. **A1**: We reply to this concern with the following points: 1) Many top baselines, such as DFN and T-MARS, also don't explicitly consider diversity, yet they still provide good performance. Devil [1] even shows that valuable data is worth sampling multiple times, which they call 'quality duplication'. Therefore, one important reason why NormSim works well without explicitly considering diversity may be that when the computing budget is limited, as in the DataComp benchmark, the model first needs to learn the most useful and representative data, which should be similar to some target data. 2) Moreover, we chose validation data from 24 downstream tasks ranging from ImageNet to EuroSAT, which may have covered a sufficiently diverse range of target examples for NormSim to calculate similarity. The diversity of the target data will consequently result in the diversity of the selected subset. 3) An additional reason may be that our proposed negCLIPLoss already implicitly selects more diverse data, as shown in Figure 1 of the main paper. If some training data are diverse, they will match less with other data and thus have a lower normalization term $R$. This results in a larger negCLIPLoss and a higher probability of being sampled. Thank you for raising this concern; we will add these discussions to the NormSim section in the revised paper. [1] Yu, Haichao, et al. 
"The devil is in the details: A deep dive into the rabbit hole of data filtering." arXiv preprint arXiv:2309.15954 (2023). > **Q2**: Algorithm 1 requires knowledge of the batch size B and the parameter $\tau$ from the teacher model. If the teacher model is private and both B and $\tau$ are not accessible (for example, only an API is provided), are the proposed methods still workable? How critical is the batch size B to model performance? Can the parameter $\tau$ be estimated? **A2**: This is a good concern about the limitation of our method. First, we note that most CLIP models are either closed-source (no API, like the SOTA filtering model of DFN) or fully open-source (providing model weights, like OAI CLIP, OpenCLIP, LAION, etc.), so our method should be workable for most of the current CLIP models. Besides, when only the API is provided, the recommended values for $B$ and $\tau$ are 32768 and 0.01, respectively. The reasons are: 1) In general, similar to the training stage, a larger batch size can result in better performance in negCLIPLoss filtering since it contains more contrastive data pairs in a batch. 32768 is the training batch size of the OAI CLIP model, and this batch fits on a single 24 GB GPU for the CLIP forward pass. In A1 in the ‘reply to all reviewers’ part, we also theoretically show that with a larger batch size, negCLIPLoss has a smaller approximation error. 2) For $\tau$, when the model is accessible, we can read it directly from the model parameters since it is learnable (and note that their temperature is the reciprocal of our definition). However, the learned values of $\tau$ almost always end up at 0.01. The reason is that there is a manually set lower bound in the CLIP training setup (for the original definition it is an upper bound of 100) for the trainable $\tau$, and after training it always reaches this bound. 
Therefore, when the model parameters are unavailable, we recommend first trying 0.01 for $\tau$, and then sampling a small subset and tuning $\tau$ around it. For tuning the parameters, besides training a small-scale model, we also recommend sampling a small subset, calculating the negCLIPLoss on it with different hyper-parameter settings, and then visualizing them (like Figures 6-11 in the main paper) to guide the choice. Details are shown in Appendix C.5. Moreover, to show how the batch size $B$ influences model performance, we conduct an ablation study on $B$ and $\tau$. Due to the limited time and resources, we mainly focus on the OAI CLIP-B/32 model. The results are shown in A2 in the ‘reply to all reviewers’ part (Table R1). In Table R1 we can see that, in general, $|B| = 32768$ is better than $|B| = 16384$, and $\tau=0.01$ performs the best for both batch sizes. These results support our claims above. We will add these ablation studies and the discussion in the revised paper.
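To make the batch-normalized scoring idea discussed in this rebuttal concrete, here is a minimal numerical sketch. The function name, the exact symmetric log-sum-exp formulation, and the toy defaults are hypothetical illustrations of the idea (a CLIP similarity score normalized by a contrastive term computed over random batch divisions), not the paper's actual implementation:

```python
import numpy as np

def _logsumexp(a, axis):
    # numerically stable log-sum-exp along one axis
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def neg_cliploss(img_emb, txt_emb, tau=0.01, batch_size=32, n_divisions=2, seed=0):
    """Sketch: CLIP similarity minus a batch-based contrastive normalization
    term, averaged over several random batch divisions. Illustrative only;
    the paper's exact definition (and the recommended B=32768) may differ."""
    rng = np.random.default_rng(seed)
    n = img_emb.shape[0]
    # normalize embeddings so inner products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    scores = np.zeros(n)
    for _ in range(n_divisions):          # average over random batch divisions
        perm = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            logits = img[idx] @ txt[idx].T / tau        # (b, b) similarity logits
            # pairwise score minus the symmetric contrastive normalization term
            norm = 0.5 * (_logsumexp(logits, axis=1) + _logsumexp(logits, axis=0))
            scores[idx] += np.diag(logits) - norm
    return scores / n_divisions
```

In this toy setup, a perfectly matched image-text pair receives a higher score than a mismatched one, mirroring how the normalization term down-weights pairs that match many other captions in the batch.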
Summary: This work proposes two new approaches for data selection for vision-language pre-training. The first approach, negCLIPLoss, adds the contrastive loss as a normalization term on top of the existing CLIPScore metric. The second approach, NormSim, further improves the performance if examples from target task distribution are available. Together, the methods achieve state-of-the-art results on ImageNet-1K and 38 downstream tasks with DataComp-medium without any external data or model. Several important theoretical justifications and interpretations are provided for the methods. Strengths: Quality: the quality is overall high. Without external resources (on which previous methods rely), the proposed approaches improve evaluation performances by 5.3% on ImageNet and 2.8% on average of 38 downstream tasks. Further, there are many theoretical results that focus on the guarantees of NormSim (though with strong assumptions). Clarity: this paper is very well written. It is well-motivated, the distinctions of previous approaches are succinctly laid out, the methods are well presented, and the results have a clear structure. Significance: this paper will bring significant impacts. The data selection problem has been increasingly vital for training higher-quality vision-language models. The proposed approaches, which focus on metrics instead of models or data, are compatible with different techniques that can be combined with advanced models in the future. The approaches also provide significant efficiency improvements (e.g., from 70 L40 hours to 5 L40 hours). The theoretical analyses can provide useful tools for future research as well. Weaknesses: Quality: this is a minor complaint, but in Lines 229 - 230 the authors state that "the results of baselines on the leaderboard do not apply to our datasets, and we reproduce all the top baselines on the leaderboard with their public UIDs of the selected data" because some URLs of images become invalid. 
The leaderboard scores of baselines seem higher than the reproduced results in the submission. Could the authors also include the DataComp leaderboard results in the Appendix for fair comparison? There are also some minor questions below. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. In Lines 135 - 136, the inaccessible batch division $B^*$ from teacher CLIP models is different from the actual batch $B_k$ in this work, in terms of both the actual image-text pairs and the batch size. Are there any potential theoretical guarantees or approximations to show that such a difference is reasonably negligible? 2. Could the authors further show the derivations of the discussions on the two important NormSim instances? 1) Lines 179 - 180 ($p=2$, equivalent to selecting a subset that aligns with the principal components), and 2) Lines 181-182 ($p=\infty$, a sample will be selected if it has high similarity to any target)? These may help other readers to understand NormSim better. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors discussed the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition of our paper together with your valuable comments and suggestions. We will revise our paper according to your comments. We respond to your questions below and would appreciate it if you could let us know if our response addresses your concerns. > **Q1**: this is a minor complaint, but in Lines 229 - 230 the authors state that "the results of baselines on the leaderboard do not apply to our datasets, and we reproduce all the top baselines on the leaderboard with their public UIDs of the selected data" because some URLs of images become invalid. The leaderboard scores of baselines seem higher than the reproduced results in the submission. Could the authors also include the DataComp leaderboard results in the Appendix for fair comparison? **A1**: Thanks for your advice! We would include the DataComp leaderboard results in appendix in the revised version. > **Q2**: In Lines 135 - 136, the inaccessible batch division B∗ from teacher CLIP models is different from the actual batch Bk in this work, in terms of both the actual image-text pairs and the batch size. Are there any potential theoretical guarantees or approximations to show that such a difference is reasonably negligible? **A2**: Thanks for mentioning this. We construct a theorem using the concentration inequality to show that when the batch size is sufficiently large, the normalization term $R^{B_k}$ obtained from actual batch $B_k$ can approximate $R^{B^*}$ calculated using ground truth batch $B^*$ quite well, i.e., $R^{B_k} = (1+o(1))R^{B^*}$. The details have been shown in A1 in the ‘reply to all reviewers for the major concern’ parts. Here we assume that $B^*$ and $B_k$ are i.i.d. for simplicity since the claim cannot hold if the teacher batch is very different from the actual batch. We also assume that $|B|=|B^*|$. 
In practice, we claim that a larger batch size is better since it can contain more contrastive pairs in a batch, and we conduct ablation studies as shown in A2 in the ‘reply to all reviewers for the major concern’ parts (Table R1) to support our claim. > **Q3**: Could the authors further show the derivations of the discussions on the two important NormSim instances? 1) Lines 179 - 180 (p=2, equivalent to selecting a subset that aligns with the principal components), and 2) Lines 181-182 (p=∞, a sample will be selected if it has high similarity to any target)? These may help other readers to understand NormSim better. **A3**: Thanks for your advice, we show the derivations as follows and we add them in the revised paper. For convenience, we let $f(x_t)$ denote the image embedding of the target data $x_t \in X_T$, and $f(x_s)$ denote the image embedding of training data $x_s \in X_S$. Then the definition of NormSim on a data point $x_s$ is $$ NormSim_p(X_{T}, x_s) = \left(\sum_{x_t \in X_T} [f(x_t)^\top f(x_s)]^p\right)^{1/p} \qquad (R1) $$ Then when $p=2$, we have $$ NormSim_2(X_{T}, x_s) = \left(\sum_{x_t \in X_T} [f(x_s)^\top f(x_t)]\cdot [f(x_t)^\top f(x_s)] \right)^{1/2} = \left(f(x_s)^\top \cdot\sum_{x_t \in X_T} [f(x_t) f(x_t)^\top ]\cdot f(x_s) \right)^{1/2} $$ Note that $\Lambda=\frac{1}{|X_T|}\sum_{x_t \in X_T} [f(x_t) f(x_t)^\top]$ is the variance matrix of the target image embeddings. Then using $NormSim_2$ for filtering, we have $$ S = \arg \max_{|S|=N}\sum_{x_s \in X_S} NormSim_2(X_{T}, x_s) = \arg \max_{|S|=N}\sum_{x_s \in X_S} f(x_s)^\top \cdot \Lambda \cdot f(x_s) \qquad (R2) $$ (here the factor $1/|X_T|$ and the square root are per-sample monotone transforms, so they do not change which samples rank in the top $N$). Take $\Lambda=USU^\top$ as the eigendecomposition of $\Lambda$, where $S = \text{diag}(s_1,\ldots,s_r)$ is the matrix of eigenvalues with $s_1 > \ldots > s_r$, and $U=[u_1,\ldots,u_r] \in R^{d\times r}$ are the corresponding eigenvectors, i.e., the principal component directions. Note that the column vectors of $U$ and $f(x_s)$ are all unit vectors, so we get that Eqn.
R2 means that $\text{NormSim}_2$ selects the data that best match the principal components of the target variance. Besides, when $p=\infty$, from Eqn. R1 and the definition of the infinity norm, we know that $NormSim_{\infty}(X_{T}, x_s) = \max_{x_t \in X_T} f(x_t)^\top f(x_s)$, so it measures the maximum similarity between the data $x_s$ and any target data $x_t \in X_T$. Therefore, a sample will be selected if it has high similarity to any target data. We will add these discussions in the revised paper. --- Rebuttal 2: Title: Response Comment: The reviewer thanks the authors for the global and the specific responses. The reviewer is satisfied with the response and will maintain the score. --- Rebuttal Comment 2.1: Title: Thank you for reviewing Comment: We sincerely thank you for your time and constructive advice on improving our work!
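The two NormSim instances derived in A3 above are straightforward to compute from embeddings. Below is a minimal NumPy sketch (the function name `normsim` is our own, and embeddings are assumed to be pre-normalized to unit norm, consistent with the derivation):

```python
import numpy as np

def normsim(target_emb, train_emb, p=2):
    """NormSim_p(X_T, x_s) = (sum_{x_t in X_T} [f(x_t)^T f(x_s)]^p)^(1/p).

    target_emb: (T, d) unit-norm target image embeddings f(x_t)
    train_emb:  (N, d) unit-norm training image embeddings f(x_s)
    Returns one score per training sample; filtering keeps the top scores.
    """
    sims = train_emb @ target_emb.T          # (N, T) inner products
    if p == np.inf:
        # NormSim_inf: max similarity to any single target sample
        return sims.max(axis=1)
    # For p=2 this equals sqrt(f(x_s)^T [sum_t f(x_t) f(x_t)^T] f(x_s))
    return (sims ** p).sum(axis=1) ** (1.0 / p)
```

For `p=2`, squaring the returned score recovers the quadratic form $f(x_s)^\top \left(\sum_t f(x_t) f(x_t)^\top\right) f(x_s)$ used in the derivation.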
Summary: Data selection is crucial in the pretraining stage to clean the web-crawled, large, and noisy pretraining dataset. Typically, existing methods use embeddings to compute CLIPscore in order to assess the data sample alignment quality. This paper introduces two new methods to enhance this measurement: 1. negCLIPLoss: a better adjustment to reduce bias within a given batch. 2. NormSim: provides additional information when downstream tasks are known, allowing the selection of samples that are close to the target downstream tasks. Empirical results demonstrate that these proposed methods can be easily combined with existing filtering approaches. The authors also illustrate that their approach yields state-of-the-art results on the DataComp leaderboard. Strengths: * Originality: Most of the work in data curation relies heavily on the original CLIP score. It's a new idea to adapt the CLIP score and elevate this measurement for better use. * Quality: The resulting performance is solid and achieves the top position on the leaderboard (medium-scale). * Clarity: The motivation behind the two approaches is clear, but some areas need further clarification. Questions are listed below. * Significance: Data selection in the pretraining dataset is important to the field, and they have demonstrated that their approaches are effective in achieving state-of-the-art results. Weaknesses: 1. I think we need more clarification on how to interpret the Top X% in three different metrics in Figure 1. Can the authors provide a more detailed description? Also, how is the R score derived from the batched data? How to find the proper batched data to use? 2. It seems that the negCLIPLoss is not incorporated into the training loss. We use it as a measurement when CLIP embeddings are provided. In this scenario, how do we determine the batch data, B, for subtracting the regularization term? Would the size of the batched data affect the measurement? 
The sampling method to find batched data is unclear to me. 3. I am unclear about the process for greedily selecting samples using NormSim, especially when the raw data pool is massive, and how to define the size of S. 4. I would suggest moving algorithm steps from the Appendix into the main body, or showing some steps in the main body. They are good at understanding filtering steps. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In Figure 1, R scores on the left side are in the top 100%, while on the right side they are in the top 10%. How should these be interpreted and categorized as underestimates or overestimates of quality? 2. When the downstream targets are not accessible, we may use the current filtered dataset as a reference, but how do we find the first-round reference dataset as a proxy to compute NormSim? 3. I would like to list several papers that I found and read for data selection. https://arxiv.org/abs/2405.15613, https://arxiv.org/abs/2401.12225, https://arxiv.org/abs/2302.03169, https://arxiv.org/abs/2401.04578, https://arxiv.org/abs/2404.07177 Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: I didn't see any potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback to help us improve our paper. We will revise our paper based on your feedback. We detail our response below and please kindly let us know if our response addresses your concerns. > **Q1**: I think we need more clarification on how to interpret the Top X% in three different metrics in Figure 1. Can the authors provide a more detailed description? (How should these be interpreted and categorized as underestimates or overestimates of quality?) **A1**: Thanks for mentioning this. We show the modified Figure 1 in the one-page supplementary based on your advice, and we illustrate it in detail as follows. We use the ‘Top X%’ of a metric to denote a score that ranks in the top X% among all scores of this metric in the data pool. For example, in Figure 1, R scores on the left side are top 100%, indicating that these examples have the smallest R in the dataset. Besides, in this case, we note them as ‘CLIPScore can underestimate the quality’, mainly because their CLIPScore is relatively small (like Top 78%) while their negCLIPLoss is high (like Top 34%). As we can see from both the visualization and the experimental results, those data have high quality that is underestimated by CLIPScore. Similar claims hold for the overestimation cases. In Lines 154-165 in the main paper, we further show the intuition behind the normalization term $R$. > **Q2**: How is the R score derived from the batched data? How to find the proper batched data to use? **A2**: We summarize how we choose random batches and obtain the R scores and negCLIPLoss from the batched data as follows: (1) We split the whole data into batches randomly, from which we obtain batches $\{B_1,\ldots, B_k\}$. (2) For each batch $B_s$, we calculate the cross-image-text similarity between the data in the batch, i.e., $f_l(x^l_i)^\top f_v(x^v_j)$ for any $i, j \in B_s$.
(3) Using these scores, we can calculate the metrics of all the data in this batch from Eqns. 1-2, and we record them for each data point. (4) We repeat (1)-(3) K times (note that each data point will have K different $R$ values, each calculated from a different batch containing it), then calculate the mean of these K R scores and negCLIPLoss values, and use the means to approximate the ground-truth values. Details can be found in Algorithm 1 in Appendix C.1. We mention that this process isn’t the only way to form the random batches. We choose this method mainly to avoid computing the cross-image-text similarities twice. > **Q3**: negCLIPLoss is not incorporated into the training loss. We use it as a measurement when CLIP embeddings are provided...Would the size of the batched data affect the measurement? **A3**: Yes, we use negCLIPLoss only for data filtering rather than training. We want to emphasize that the main focus of our paper is on data selection with fixed training pipelines. In A2 in the 'reply to all reviewers' parts, we show how the batch size affects the measurement. In A1 in the ‘reply to all reviewers’ parts, we also theoretically show that with a larger batch size, negCLIPLoss has a smaller approximation error. > **Q4**: the process for greedily selecting samples using NormSim, especially when the raw data pool is massive **A4**: We note that NormSim is determined by each data point itself, like CLIPScore, so ‘greedily selecting samples using NormSim’ simply means selecting the data with the top NormSim scores. We use the word ‘greedily’ because, for this particular NormSim-D algorithm (details in Algorithm 2 in Appendix C.3), we should theoretically solve a harder optimization problem, but here we use a greedy approximation (selecting the top scores). In the revised paper we will change the wording to prevent confusion.
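The batch-averaging procedure in steps (1)-(4) above can be sketched as follows (a minimal NumPy illustration, not the authors' released code; the function name and the log-sum-exp stabilization are our own choices, and negCLIPLoss is written per sample as $s_{ii} - \mathcal{R}_i$):

```python
import numpy as np

def neg_clip_loss(img_emb, txt_emb, batch_size, tau=0.01, K=5, seed=0):
    """Estimate negCLIPLoss_i = CLIPScore_i - R_i per sample, averaging the
    normalization term R_i over K random batch partitions.
    img_emb, txt_emb: (n, d) unit-norm image/text embeddings."""
    n = len(img_emb)
    clip_score = np.sum(img_emb * txt_emb, axis=1)       # s_ii
    R = np.zeros(n)
    rng = np.random.default_rng(seed)
    for _ in range(K):
        perm = rng.permutation(n)                        # step (1): random batches
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            s = (img_emb[idx] @ txt_emb[idx].T) / tau    # step (2): cross similarities
            # step (3): R_i = (tau/2)[logsumexp over row i + logsumexp over col i],
            # with a max-shift so exp() does not overflow at small tau
            lse_row = s.max(1) + np.log(np.exp(s - s.max(1, keepdims=True)).sum(1))
            lse_col = s.max(0) + np.log(np.exp(s - s.max(0, keepdims=True)).sum(0))
            R[idx] += tau / 2 * (lse_row + lse_col)
    R /= K                                               # step (4): average over K runs
    return clip_score - R, R
```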
> **Q5**: how to define the size of S **A5**: In general, for all the top filtering methods, like CLIPScore, HYPE, and T-MARS, we always need to set the target size of the filtered dataset manually. In DataComp, for example, all these top baselines keep downsampling ratios in the 15%-30% range. Our method with OAI CLIP first selects the data with the top 30% negCLIPLoss and then selects the top 66.7% NormSim scores to keep 20% of the original pool. We don’t tune the target size carefully here for fair comparison. In practice, this remains an open problem for all leading baselines when dealing with a large raw data pool. Here we found that a simple but very useful way to define $S$ is to randomly sample a small subset (e.g., 1000 data points) from the large pool and visualize these data based on their scores, as in Figures 6-11 in the main paper. From this we can determine the filtering threshold of the metric scores and thus the target size (e.g., we find 0.7-0.75 to be a good threshold for NormSim). Details are shown in Appendix C.5. But overall, deciding a proper $S$ is beyond the scope of this paper. We agree that this can be a meaningful direction for future research. We are also aware of some recent works [1] that suggest there are scaling laws for data filtering, indicating that the target size for filtering is strongly dependent on the computing budget. [1] Goyal, Sachin, et al. "Scaling Laws for Data Filtering--Data Curation cannot be Compute Agnostic." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. > **Q6**: I would suggest moving algorithm steps from the Appendix into the main body **A6**: Thanks for your advice! We will add them in the revised version. > **Q7**: how do we find the first-round reference dataset as a proxy to compute NormSim in NormSim-D? **A7**: For the first run, we just use the whole original dataset as the proxy for calculating $\text{NormSim}_2$.
For efficiency, we randomly downsample 10% of the data for calculating $\text{NormSim}_2$, and the results are similar to using all the data. > **Q8**: list several related papers **A8**: Thanks for your advice! We will cite all these papers in the revised version. --- Rebuttal Comment 1.1: Comment: Dear Authors, I have read your general response and individual comments. Thanks for your reply. Thanks for addressing studies on batch size and clarifying some details in the paper. In general, this paper gives a new idea and a good adjustment to replace CLIPScore, but some places lack detailed descriptions. I support this paper and keep my original score here. Thanks. --- Reply to Comment 1.1.1: Comment: Thanks for your reply and support, and we will add the suggested details mentioned in the rebuttal in the next version. Thanks for taking the time to make our paper better!
Rebuttal 1: Rebuttal: # Reply to all reviewers for the major concern We sincerely appreciate all reviewers for their insightful and constructive feedback to make our paper better. We will revise our paper according to these comments. Here we will address the most common concerns of the reviewers and will put other responses in separate rebuttals. Most of the reviewers have some concerns related to whether there is any (theoretical) guarantee that we can use the random batch from the pretraining dataset to approximate the inaccessible ground-truth batch in calculating $\mathcal{R}$ and negCLIPLoss, and how the batch size and temperature will affect our method negCLIPLoss. We answer these questions as follows: > **A1**: Concentration of Normalization Term $\mathcal{R}$ We construct a theorem using the concentration inequality to show that when the batch size is sufficiently large, the normalization term $R^{B_k}$ obtained from the actual batch $B_k$ can approximate $R^{B^*}$ calculated using the ground-truth batch $B^*$ quite well. The details are as follows: We assume that the pretraining dataset $\mathcal{D}$ is *i.i.d.* sampled from distribution $\mathcal{P}$. Besides, to use a pretraining data batch to approximate the ground-truth batch, a necessary condition is that their distributions are similar. Here for simplicity, we assume that they are also *i.i.d.*. **Assumption R1**: We assume that the ground-truth batch of data $B^*$ used by the teacher model is sampled *i.i.d.* from the same distribution as the pretraining dataset $\mathcal{D}$ that is to be filtered. For simplicity, we denote $s_{ij} = \bar f_{v}(x^v_i)^\top \bar f_{l}(x^l_j), i, j \in B$ to be the cross-image-text similarities in the batch $B$. Then the normalization term can be written as $\mathcal{R}^B_i = \frac{\tau}{2}\left[\log(\sum_{j \in B} \exp(s_{ij}/\tau)) + \log(\sum_{j\in B}\exp(s_{ji}/\tau))\right]$. Here note that $s_{ij} \in [-1,1]$.
We show that $\mathcal{R}_i^B = (1+o(1))\mathcal{R}_i^{B^*}$ for all $i$ when $|B|$ is sufficiently large, which means that we can use the random batch to approximate the ground-truth batch. **Theorem R1**: If Assumption R1 holds and the batch size satisfies $|B|=|B^*|$, then we have $\mathcal{R}_{i}^B=\Theta(\log(|B|))$ while $|\mathcal{R}_i^B - \mathcal{R}_i^{B^*}| = O(\frac{1}{\sqrt{|B|}})$ for any $i \in B \cap B^*$. *Proof*: Since $s_{ij} \in [-1,1]$, it is clear that $\mathcal{R}_i^B=\Theta(\log(|B|))$. Let $\alpha_{ij} := e^{(s_{ij}/\tau)} - E_j[e^{(s_{ij}/\tau)}]$, then $\alpha_{ij}$ is zero-mean. Since the data are *i.i.d.*, so are the $\alpha_{ij}$. Therefore, we denote $\gamma := E_{j}[\alpha_{ij}^2]$. Note that $|\alpha_{ij}|\leq e^{1/\tau} =: M$, so from the Bernstein inequality we have $$ \mathbb{P}(|\sum_{j \in B}\alpha_{ij}| \geq t) \leq 2\exp(-\frac{\frac{1}{2}t^2}{|B|\gamma + \frac{1}{3}Mt}) $$ A similar conclusion holds for $B^*$. It follows that, with probability at least $1-\eta$, we have $$ |\sum_{j \in B}\alpha_{ij}| \leq \max \left( 2\sqrt{|B|\gamma\ln(\frac{2}{\eta})}, \frac{4}{3}M\ln(\frac{2}{\eta}) \right) =: t(|B|,\gamma, \eta, M) $$ Thus we have $|\sum_{j\in B}\exp(\frac{s_{ij}}{\tau})-\sum_{j\in B^*}\exp(\frac{s_{ij}}{\tau})| \leq 2\, t(|B|,\gamma, \eta, M)$. Furthermore, for any $x_1, x_2 > 1$, it is easy to prove that $|\log(x_1)-\log(x_2)| \leq \frac{|x_1 - x_2|}{\min(x_1, x_2)}$. Therefore, we have $|\log(\sum_{j\in B}\exp(\frac{s_{ij}}{\tau}))-\log(\sum_{j\in B^*}\exp(\frac{s_{ij}}{\tau}))| \lesssim O(\frac{1}{\sqrt{|B|}})$, and thus the same bound holds for $|\mathcal{R}_i^B - \mathcal{R}_i^{B^*}|$. > **A2**: Ablation study on batch size and the temperature. All the reviewers are concerned about the choice of batch size.
We claim that in general, similar to the training stage, **a larger batch size always results in better performance in negCLIPLoss filtering** since it can contain more contrastive data pairs in a batch, and thus it can check the image-text matching between more different data. Therefore, we consider the largest batch size 32768 which can fit into a single 24G GPU in the CLIP forward pass, and we note that this is also the training batch size that OpenAI used for training CLIP. To support our claim, we do some ablation studies on $B$ and $\tau$. Due to the limited time and resources, we mainly focus on the OAI CLIP-B/32 model. Results are as in Table R1: **Table R1**: Ablation study of $B$ and $\tau$ using OpenAI CLIP-B/32 model on DataComp-medium. | negCLIPLoss | Dataset Size | ImageNet (1) | ImageNet Dist. Shift (6) | VTAB (11) | Retrieval (3) | Avg. (38) | |---------------|---------------|-------|------------|----|----|-----| | $\|B\|=16384, \tau=0.01$| 33M | **28.8**| 25.0 | 32.5 | 26.2 | 33.0 | | $\|B\|=16384, \tau=0.02$| 33M | 28.6 | 24.8 | 33.3 | 25.3 | 33.1 | | $\|B\|=16384, \tau=0.07$| 33M | 28.0 | 24.2 | 33.5 | 25.1 | 32.6 | | $\|B\|=32768, \tau=0.005$| 33M | 28.5 | 25.0 | 33.6 | **27.0** | 33.0| | $\|B\|=32768, \tau=0.01$| 33M | **28.8** | **25.1** | **33.7** | 26.6 | **33.6**| | $\|B\|=32768, \tau=0.02$| 33M | 28.5 | 24.8 | 33.6 | 26.2 | 32.9| | $\|B\|=32768, \tau=0.07$| 33M | 28.2 | 24.5 | 32.8 | 25.2 | 32.7| | **negCLIPLoss $\cap$ NormSim** | | | | | | | | $\|B\|=16384, \tau=0.01$| 22M | **32.4** | **27.4** | 34.5 | 26.1 | 34.7| | $\|B\|=16384, \tau=0.02$| 22M | 31.8 | 26.7 | 35.0 | 24.9 | 34.2| | $\|B\|=16384, \tau=0.07$| 22M | 31.0 | 26.3 | 35.0 | 25.5 | 33.9| | $\|B\|=32768, \tau=0.005$| 22M | 32.2 | 27.2 | 35.3 | **26.5** | 34.8| | $\|B\|=32768, \tau=0.01$| 22M | **32.4** | **27.4** | **35.9** | 26.3 | **35.2**| We can see that in general, negCLIPLoss with a larger batch size ($|B|=32768$) indeed has better or comparable downstream performance. 
Nevertheless, $|B|=16384, \tau=0.01$ still performs well when combined with NormSim ($\tau=0.01$ performs well for both batch sizes). These results match our theoretical finding in A1: with a larger batch size, negCLIPLoss has a smaller approximation error. Pdf: /pdf/f6e3bcd055e2e7ba2f430c19418daa2d6b37a407.pdf
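The concentration behavior stated in Theorem R1 of A1 above can also be checked numerically. The sketch below uses an illustrative i.i.d. similarity model ($s_{ij} \sim \mathrm{Uniform}[-1,1]$, our own assumption rather than real CLIP similarities) and measures the gap between the normalization terms computed from two independent batches:

```python
import numpy as np

def r_term(sims, tau):
    """tau * logsumexp(s/tau) for one batch of similarities, with a max-shift
    for stability (one direction only; the full R averages the image->text
    and text->image directions, which behave identically here)."""
    m = sims.max()
    return tau * (m / tau + np.log(np.exp((sims - m) / tau).sum()))

def mean_gap(batch_size, tau=0.1, trials=200, seed=0):
    """Mean |R^B - R^{B*}| over independent batch pairs, s_ij ~ U[-1, 1]."""
    rng = np.random.default_rng(seed)
    gaps = [
        abs(r_term(rng.uniform(-1, 1, batch_size), tau)
            - r_term(rng.uniform(-1, 1, batch_size), tau))
        for _ in range(trials)
    ]
    return float(np.mean(gaps))
```

Under this toy model, the measured gap shrinks as the batch size grows, consistent with the $O(1/\sqrt{|B|})$ bound of Theorem R1.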
NeurIPS_2024_submissions_huggingface
2024
Towards Editing Time Series
Accept (poster)
Summary: This paper introduces Time Series Editing (TSE), a method for generating time series data with control over their attributes. TSE generates a sample by modifying an existing time series through manipulation of specific attributes while maintaining consistency in others. This approach leverages the hypothesis that patterns in time series synthesis are transferable between attribute values with consistent shared attributes. The authors propose a novel multi-resolution diffusion-based model for sample generation, that is trained using a bootstrapped learning algorithm. They show that this architecture and training procedure choice overcomes challenges related to biased data distribution and varying degrees of influence different attributes have on time series generation. Strengths: $\textbf{Importance of the task}$: I think the proposed method is tackling a very important problem that has potential in many applications. In fact, I would suggest that the authors highlight the potential of their approach even more. Generating such time series samples has many applications in different domains. Right now the paper focuses on the 'editing' aspect of TSE, but the proposed method has potential to generate counterfactual time series with control over attributes. $\textbf{Multi-resolution generative model}$: The proposed multi-resolution diffusion model is an interesting design to overcome the varying scale of impact in time series. Weaknesses: $\textbf{Edited attribute set}$: In the experiment section, the results are for a specific type of change (a specific set of edited attributes). It is not clear why this specific combination was selected. I think the performance should be reported for all possible combinations, or the average of all possible combinations. Another interesting extension to this would be to look into these averaged performances, as a function of the size of the manipulated set of attributes. 
This might not be helpful for the selected real-world datasets because they only have 2 attributes, but at least the authors can assess this in the synthetic setting. $\textbf{Standard errors}$: Some of the reported performances are very close and it is difficult to assess their significance. The authors need to add confidence intervals to the reported numbers in all tables. $\textbf{RaTS}$: It is not clear how $p(a_k^{tgt}|x)$ is estimated in order to calculate the RaTS performance score? $\textbf{MSE for real data}$: In general, it is difficult to know the ground-truth for edited time series in real-world dataset. However, I believe in the existing datasets, this is not impossible. For instance, in the Air dataset, if the changed attribute is the city, we have the corresponding time series in the data. This way the authors can also report MSE/MAE for the real-world datasets as well. This is important because even though the other metrics assess important qualities for the model, they cannot tell how well or realistic the generated signal is. Adding this evaluation can help better assess the performance of TSE on real-data. Technical Quality: 2 Clarity: 3 Questions for Authors: $\textbf{Correlated attributes}$: In many real-world applications, the underlying attributes are not independent. As a result, randomly changing attributes while fixing others can result in out of distribution or unrealistic samples. I'm wondering if this becomes a problem especially in the bootstrap self-scoring step where the model will end up overfitting to out of distribution samples. $\textbf{Distributional coverage}$: I'm not sure the conclusion drawn from Figure 5 is correct. The figure shows T-SNE projection of the data and the bootstrapped data, and the conclusion is that the generated samples fill in the gaps in the original data space. However, with methods like T-SNE, the projection is a function of the data they are trained on. 
Which means comparing the three subfigures that are representing different data cohorts is not possible. Can the authors provide more explanation on this? $\textbf{Unseen attributes}$: Does the approach extend to unseen attribute values? Can you generate a time series with an edited attribute where the value has never been seen in the training data? Note: I would be willing to increase my score if some of the concerns mentioned in the review are addressed. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: $\textbf{Evaluating generated time series samples}$: One of the biggest challenges for generative models in time series is evaluating the generated samples. The proposed metrics and evaluations are useful in assessing different aspects of the performance of TSE, but there is no discussion on how we can assess whether a generated sample is realistic or not. Even a low MSE by itself doesn't guarantee high-quality generated samples. I think this is something worth discussing in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback! >**W1. Edited attribute set** Thanks for your comments. In the tables below, we show the average experimental results of editing different types and numbers of attributes on the Synthetic dataset. It can be seen that our TEdit outperforms the baseline method on all metrics. The subscripts "edit" and "prsv" refer to the edited and preserved attributes. For detailed results of each combination, please refer to the PDF in the General Response. We will add more results in the revised version. - ### Table 1: averaged results of different edit-preserve attribute groups on Synthetic. |Method|MSE$\downarrow$|MAE$\downarrow$|RaTS_edit$\uparrow$|RaTS_abs_prsv$\downarrow$| |-|-|-|-|-| |Time Weaver|0.1502|0.2953|0.8537|0.1206| |TEdit-TW|**0.1184**|**0.2641**|**0.9599**|**0.1069**| >**W2. Standard errors** For each setting, we run the experiment three times and calculate the mean and standard deviation (STD). These results can be seen in the PDF file in the General Response. From the results, the improvement of our method against the baseline is significantly larger than the STD values in almost all the settings, which illustrates that our method's improvements are significant. Moreover, the STD values of our method are much smaller than those of the baseline, showing that our method has better stability. >**W3. RaTS** Briefly speaking, we follow [1] and leverage an external model TAP (Time series - Attribute Pretraining) similar to CLIP [2] to estimate this probability. The details of TAP are presented in Appendix G. $p(a_{k}^{tgt}|x)$ is calculated in the following steps. - Use TAP to encode time series $\mathbf{x}$ and the $N_k$ possible values, e.g., linear and exponential, of the $k$-th attribute $a_k$, e.g., trend type, into embeddings $\mathbf{h}_{x}$ and $\mathbf{h}\_{a_k}^{n}$, where $n\in[1,..., N_k]$.
- Obtain $p(a_k^n|x)$ by applying a softmax over the cosine similarity between $\mathbf{h}_{x}$ and $\mathbf{h}\_{a_k}^{n}$. >**W4. MSE for real data** MSE is indeed a straightforward metric for measuring the quality of generated samples. However, MSE can be used only when source and target samples share **all** the attributes, including hidden attributes such as noise, except for the edited attributes. We can guarantee such a 1-1 pair for synthetic data. However, in real-world datasets, there are many unseen attributes and interferences which make the time series vary a lot even with the same attribute combination. For example, air quality is not only influenced by city and season, but also by other factors such as weather, wind, and the industrialization level. We cannot say that the time series edited by modifying the city attribute from "Boston" to "New York" is exactly the same as a specific data sample with the city New York in the dataset. For the real-world datasets, we follow the practices in the computer vision domain [1], utilizing external models to evaluate the quality of the edited samples. We acknowledge that evaluating generative models is challenging, especially for the newly proposed time series editing task. We will keep investigating better benchmarking solutions in this novel direction. >**Q1. Correlated attributes** We also had similar concerns at the early stage of experiments. However, after statistically visualizing the data generated by bootstrapping, we found that the bootstrapped data form a more complete complement to the original real data distribution, as shown in Fig. 5 in the paper. Training on both the real and the bootstrapped data makes the model fit the complete distribution and derive better performance, as illustrated by our experiments. Therefore, there is almost no overfitting outside the real distribution. >**Q2. Distributional coverage** Thanks for your comments and sorry for the misleading description.
We use all the data, including both real and bootstrapped data, for T-SNE. Thus, the three subfigures of Fig. 5 share the same T-SNE model with the same projection, which means their data cohorts are the same. So it is reasonable to compare the distribution coverage of the original real dataset, the bootstrapped synthetic dataset, and the full dataset combining the two, across the three subfigures. We will further clarify this in the revised version. >**Q3. Unseen attributes** Our understanding of the "unseen attribute values" is "unseen attribute combinations". In other words, the model has seen all $N_k$ values of each individual attribute $a_k$, but has not seen certain attribute combinations. This setting is the same as in our synthetic datasets. There are 4 unseen attribute combinations in the test data. Therefore, our proposed TEdit can deal with unseen attribute combinations. In the future, we plan to further explore this interesting setting for real-world datasets. >**L1. Evaluating generated time series samples** Thanks for your insightful comments! Assessing the realism of the generated data has always been a critical open challenge. This is extremely challenging even for images, where attributes such as color and size can be easily interpreted, not to mention time series, where the data are difficult to interpret. The classic work [3] asked humans to evaluate the realism of the generated images. We follow the practice of recent works [1] and use external models for evaluation. Nevertheless, assessing realism can be an interesting future direction. We will further discuss it in the revised version. [1] Li et al. Blip-diffusion: Pre-trained subject representation for controllable text-to-image generation and editing. NeurIPS'2023. [2] Radford et al. Learning transferable visual models from natural language supervision. ICML'2021. [3] Meng et al. SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations.
ICLR'2022 --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: Thank you for the detailed rebuttal. I believe the added evaluations and results will be a good addition to the paper. The rebuttal has covered most of my concerns and I'm happy to edit my score. --- Reply to Comment 1.1.1: Comment: Thank you so much for your recognition and we are very much encouraged by your positive feedback and increased score. We will further improve our paper according to your suggestions. If there are any other suggestions or questions you'd like to discuss, please don't hesitate to let us know. We are happy to further discuss the opportunities and challenges of this new and exciting Time Series Editing (TSE) task.
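The attribute probability $p(a_k^n|x)$ mentioned at the top of this response (a softmax over cosine similarities between the series embedding $\mathbf{h}_x$ and the candidate attribute-value embeddings $\mathbf{h}_{a_k}^n$) can be sketched as follows; the embeddings, dimensionality, and temperature below are hypothetical placeholders, not the paper's actual values:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def attribute_probs(h_x, h_a_values, tau=1.0):
    """p(a_k^n | x): softmax over the cosine similarities between the series
    embedding h_x and the embedding of each candidate value of attribute a_k."""
    sims = [cosine(h_x, h) / tau for h in h_a_values]
    m = max(sims)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical 3-dim embeddings: one series, two candidate attribute values.
probs = attribute_probs([1.0, 0.0, 0.0], [[0.9, 0.1, 0.0], [0.0, 1.0, 0.0]])
```

The resulting probabilities sum to one, and the candidate whose embedding is most aligned with the series embedding receives the highest mass.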
Summary: The paper discusses the challenges of synthesizing time series data influenced by intrinsic and extrinsic factors. It highlights the limitations of existing benchmarks and methods and suggests a new task, Time Series Editing (TSE), which modifies given time series based on specified properties while preserving other attributes. In addition, the paper suggests a new method based on generative bootstrapping and the incorporation of multi-resolution modeling to enhance data coverage, demonstrating its efficacy in experiments. Strengths: - The task of editing specific attributes is novel and very interesting. - Providing end-to-end benchmarks that include datasets, metrics, and baselines - Suggesting a new baseline method based on bootstrapping and multiresolution Weaknesses: - The work "On the Constrained Time-Series Generation Problem" [1] introduces time series generation and editing. Although there are certain differences between the current work and the above work, they can be seen as similar. However, the paper does not reference or discuss the similarities and differences. I think that incorporating such a comparison is crucial. In addition, this somewhat weakens the first novelty that the authors suggested. Nevertheless, I still think the task is novel enough. - The main experiment results in Tables 1 and 2 are on specific attributes; to show robustness, it is necessary to show or average all the possible preserved-changed attribute groups. - Multi-resolution models for time series tasks are already in use [2, 3], which limits the novelty. I want to caveat this weakness by stating that [2] is not in the context of generation, and [3] is almost concurrent with this work. [1] - On the constrained time-series generation problem. Coletta, Andrea and Gopalakrishnan, Sriram and Borrajo, Daniel and Vyetrenko, Svitlana. [2] - Multi-resolution Time-Series Transformer for Long-term Forecasting. 
Yitian Zhang, Liheng Ma, Soumyasundar Pal, Yingxue Zhang, Mark Coates [3] - Multi-Resolution Diffusion Models for Time Series Forecasting. Lifeng Shen, Weiyu Chen, James Kwok Technical Quality: 3 Clarity: 3 Questions for Authors: - Can the authors present all the possible preserved-edited groups in the main results? Currently, in my opinion, the main results are not extensive enough. - Why did the authors not mention "On the Constrained Time-Series Generation Problem"? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: detailed in the weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments! >**W1. Comparison with [1]** Thanks for pointing it out, and we will include a discussion and comparison of [1] in the revised version. Time Series Editing (TSE) proposed in our work is essentially different from [1], though these two works share some similarities in conditional generation. Specifically, the term "constraint" mentioned in [1] is actually similar to the term "condition" or "attribute" used in our work and [2]. The task of [1] can be formulated as $\mathbf{x}^{tgt}=\Phi(\mathbf{a}^{tgt})$, where $\mathbf{a}^{tgt}$ is the target constraint, $\mathbf{x}^{tgt}$ is the desired output and $\Phi$ denotes the function of the generative model to be learned. However, our proposed TSE is formulated as $\mathbf{x}^{tgt}=\Phi(\mathbf{x}^{src}, \mathbf{a}^{src}|\mathbf{a}^{tgt})$, which generates $\mathbf{x}^{tgt}$ based on the input source sample $(\mathbf{x}^{src}, \mathbf{a}^{src})$ and the target condition $\mathbf{a}^{tgt}$. Since [1] and [2] are both "conditional generation" studies, we compare with the more recent work [2] in our current version, and will discuss and compare with [1] in the revised version. >**W2. Averaged results** Thanks for your feedback. Below we provide the experimental results of the average performance of editing different attributes for reference, hoping to help you better understand the performance of different methods. We also provide the results for all 6 edit-preserve attribute groups in the PDF of the general response. In the following tables we present the averaged results of editing different attribute combinations on the Synthetic and Air datasets. The Synthetic dataset has 3 attributes: trend type, trend direction and season cycle. The Air dataset has 2 attributes: city and season. The results show that our proposed method TEdit outperforms the baseline on all the evaluation metrics. - ### Table 1. Average performance of editing different attributes on Synthetic. 
|Method|MSE$\downarrow$|MAE$\downarrow$|RaTS_edited$\uparrow$|RaTS_abs_preserved$\downarrow$| |:--------:|:---------:|:--------:|:---------:|:--------:| |Time Weaver|0.1502|0.2953|0.8537|0.1206| |TEdit-TW|**0.1184**|**0.2641**|**0.9599**|**0.1069**| - ### Table 2. Average performance of editing different attributes on Air. |Method|RaTS_edited$\uparrow$|RaTS_preserved$\downarrow$| |:--------:|:--------:|:--------:| |Time Weaver|0.8799|0.2017| |TEdit-TW|**0.9970**|**0.1753**| >**W3. Multi-resolution** Thank you for sharing these works; they will enhance the discussion of related works in our paper. As noted in your comments, one work is almost concurrent with ours, and these works primarily focus on the forecasting task, aiming to make more accurate predictions. Our work, however, focuses on the editing task, which requires generating the corresponding time series conditioned on the given editing attributes. The reason we propose the multi-resolution approach is that different attributes impact the time series at different scales. For example, the trend type has a global impact, while the season number affects more local details. In our work, we consider the multi-resolution aspects of both the time series and the attributes, and leverage the multi-resolution mechanism in both modeling and generation, which is more reflective of reality. Thank you again for mentioning these related works. We will discuss them in the revised version of our paper. [1] Coletta et al. On the constrained time-series generation problem. NeurIPS'2023 [2] Narasimhan, Sai Shankar, et al. Time weaver: A conditional time series generation model. ICML'2024 --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your response. Regarding the first concern, I agree that clarifying the differences in task definitions is important. In my opinion, contrasting these definitions clearly is crucial. Additionally, a universal formulation for both tasks should be considered. 
For instance, if x_source and a_source are left empty, both tasks could be viewed as the same. Therefore, I believe that these problems should be unified under a single benchmark. I encourage the authors to explore how this framework can be unified in the final revision of the paper. As for my second concern, I am unclear whether the reported results are averaged across all editing scenarios. The title mentions "different attributes," which suggests that it may only cover a subset of the possible editing scenarios. I believe an extensive evaluation across all possible editing scenarios should be conducted on both datasets to thoroughly assess the method's robustness. Could you clarify this? Lastly, thank you for providing clarification on the third concern. I believe the approach you've provided is solid enough to serve as a reliable baseline for benchmarking, which is significant. As for the claimed novelty, I think it should be specifically attributed to adapting the multiresolution approach for time series editing and generation. --- Reply to Comment 1.1.1: Comment: Thank you so much for your insightful comments! For the first concern, indeed, when $\mathbf{x}^{src}$ and $\mathbf{a}^{src}$ are left empty, the editing task degenerates into the conditional generation task, as you mention. Therefore, *conditional generation could be regarded as a special case of editing*. Actually, we can directly evaluate the performance of conditional generation on our editing datasets by masking out the information of the source $\mathbf{x}^{src}$ and $\mathbf{a}^{src}$. Specifically, for the editing task, each sample has three inputs $\mathbf{x}^{src}$, $\mathbf{a}^{src}$ and $\mathbf{a}^{tgt}$; for the conditional generation task, only $\mathbf{a}^{tgt}$ is available to models. 
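To make the input contracts of the two tasks concrete, the unified interface discussed above can be sketched as follows; `DummyModel` and all names here are hypothetical stand-ins for the learned model $\Phi$, not the authors' actual code:

```python
from typing import Optional, Sequence

def run_task(model, a_tgt: Sequence[int],
             x_src: Optional[Sequence[float]] = None,
             a_src: Optional[Sequence[int]] = None) -> Sequence[float]:
    """Unified interface: editing consumes (x_src, a_src, a_tgt); conditional
    generation is the special case where the source pair is masked out."""
    if x_src is None or a_src is None:
        return model.generate(a_tgt)        # conditional generation path
    return model.edit(x_src, a_src, a_tgt)  # time series editing path

class DummyModel:
    # Toy stand-in for the diffusion model Phi (hypothetical behavior).
    def generate(self, a_tgt):
        return [float(a) for a in a_tgt]
    def edit(self, x_src, a_src, a_tgt):
        # Shift each value by the attribute change, purely for illustration.
        return [x + (t - s) for x, s, t in zip(x_src, a_src, a_tgt)]

m = DummyModel()
gen_out = run_task(m, a_tgt=[1, 2])                                    # generation
edit_out = run_task(m, a_tgt=[1, 2], x_src=[0.5, 0.5], a_src=[0, 0])   # editing
```

The same entry point serves both settings, which mirrors how a single editing benchmark can also evaluate conditional generation by withholding the source pair.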
To further enhance our experiments in response to your suggestions, we conducted additional experiments on the conditional generation task, which will be added to the revised paper as a supplementary experimental setting of our work. In the table below, we report the averaged performance of conditional generation of Time Weaver and our TEdit-TW on different Synthetic datasets: all 6 possible edit-preserve attribute groups. Please note that the RaTS score is not suitable for conditional generation and we have not reported it in this new experiment, since RaTS scores are calculated based on both $\mathbf{a}^{src}$ and $\mathbf{a}^{tgt}$, and $\mathbf{a}^{src}$ is unavailable for the conditional generation task. It can be seen that our method TEdit-TW significantly outperforms Time Weaver on the conditional generation task. Following your suggestions, we will add the conditional generation task to our benchmark in the revised version. Thank you for your valuable suggestion. #### Table 1. Averaged performance of conditional generation on 6 different Synthetic datasets: all 6 possible edit-preserve attribute groups. |Method|MSE$\downarrow$|MAE$\downarrow$|TAP_trend_type$\uparrow$|TAP_trend_direction$\uparrow$|TAP_season_cycle$\uparrow$| |:--------:|:--------:|:--------:|:--------:|:---------:|:--------:| |Time Weaver|0.3509|0.4691|0.7685|0.9880|0.2035| |TEdit-TW|0.2414|0.3958|0.8353|0.9899|0.2568| Regarding the second concern, we are sorry that the "different attributes" in the title caused the confusion. The "different attributes" actually means "all possible combinations of edit-preserve groups". For the synthetic dataset, it includes the following (edit | preserve) sets: - (trend type | trend direction, season cycle) - (trend direction | trend type, season cycle) - (season cycle | trend type, trend direction) - (trend type, trend direction | season cycle) - (trend type, season cycle | trend direction) - (trend direction, season cycle | trend type). 
For the Air dataset, it includes the following (edit | preserve) sets: - (city | season) - (season | city) The scores in Tables 1 and 2 in the previous response are averaged over all these combinations, and the scores for each edit-preserve group are presented in the PDF files of the general response. As for the third concern, as you mention, our novelty lies in the multi-resolution modeling for the time series editing and generation tasks, which distinguishes our work from the concurrent related works noted in your previous review comments. We are encouraged by your positive comments, and we believe that a more precise description would help readers better understand these works in the revised paper. Thank you once again for acknowledging our work. We will carefully revise our paper in line with your valuable suggestions, including: * Unifying the definition framework of editing with conditional generation and providing a clearer discussion of their differences and distinctions. * Incorporating additional experiments on our method within the conditional generation setting, to enhance the comprehensiveness of our experiments. * Refining our claims to more properly describe our contributions, particularly in relation to the multi-resolution mechanism in generation and editing tasks. We are happy to have more discussions if you have any other concerns or suggestions.
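The "all possible combinations of edit-preserve groups" enumeration above can be generated mechanically: every non-empty proper subset of the attributes is edited and its complement is preserved. A small sketch (attribute names taken from the datasets; the function name is hypothetical):

```python
from itertools import combinations

def edit_preserve_groups(attributes):
    """All (edit | preserve) splits: each non-empty proper subset of the
    attributes is edited, and the remaining attributes are preserved."""
    groups = []
    for r in range(1, len(attributes)):  # subset sizes 1 .. n-1
        for edited in combinations(attributes, r):
            preserved = tuple(a for a in attributes if a not in edited)
            groups.append((edited, preserved))
    return groups

synthetic = ["trend_type", "trend_direction", "season_cycle"]
air = ["city", "season"]
# C(3,1) + C(3,2) = 6 groups for Synthetic, and 2 groups for Air,
# matching the enumerations above.
```

For three attributes this yields exactly the six Synthetic groups listed above, and for two attributes the two Air groups.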
Summary: This paper introduces time series editing (TSE), a novel approach for modifying existing time series to match specified attributes while preserving other attributes. Unlike traditional methods that generate time series from scratch, TSE employs a multi-resolution modeling and bootstrap tuning framework, enhancing data coverage and generation precision. The paper presents a multi-resolution diffusion model and a comprehensive benchmark dataset, demonstrating the method’s effectiveness on both synthetic and real-world data. Strengths: - The paper introduces the novel concept of Time Series Editing (TSE), a significant innovation in the field of time series data synthesis. - This work offers a practical solution for generating high-quality time series data with specified attributes, addressing data sparsity and enhancing the utility of time series synthesis. The novel approach and its demonstrated effectiveness on diverse datasets highlight its broad applicability and potential to drive future research and applications in the field. - The overall quality of the paper is exemplary, demonstrated by thorough experimentation and rigorous evaluation. The authors present a well-designed DDIM model with the proposed multi-resolution noise estimator and bootstrapped training algorithm. - The paper is clearly written and well-structured, making it accessible to readers with varying levels of expertise in time series analysis. Weaknesses: - The paper focuses on editing specific attributes of time series while preserving others but does not extensively discuss the handling of diverse attribute types and their interactions. - The reproducibility is uncertain without code and detailed implementation information in Appendix I. - There are minor writing issues, such as “levela” in line 291, page 8, that need correction. Technical Quality: 3 Clarity: 4 Questions for Authors: I have no specific questions. 
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your appreciation and feedback! Below are the responses. >**W1. Handling of diverse attribute types and their interactions.** Thanks for your comments. The following Tables 1~6 show the complete experimental results of editing different combinations of attributes (trend type, trend direction and the number of season cycles), hoping to provide a better understanding of our method. Due to time limitations, in the following tables, we present the results of TEdit-TW and Time Weaver [1] on editing different sets of attributes on the Synthetic dataset. From the tables, our TEdit-TW outperforms Time Weaver [1] on all attribute combinations. We also found that the difficulty of editing different attributes varies greatly. Comparing MSE scores, (1) editing season_cycle is the most difficult and editing trend_type the easiest, and (2) editing multiple attributes is usually more difficult than editing a single one. For more detailed experimental results, please refer to the PDF file in the General Response. - ### Table 1. Synthetic dataset: edit "trend type". |Method|MSE$\downarrow$|MAE$\downarrow$|RaTS:trend_type$\uparrow$|TAP:trend_type$\uparrow$|RaTS_abs:trend_direction$\downarrow$|TAP:trend_direction$\uparrow$|RaTS_abs:season_cycle$\downarrow$|TAP:season_cycle$\uparrow$| |-|-|-|-|-|-|-|-|-| |Time Weaver|0.0565/0.0115|0.1875/0.0229|0.7532/0.0656|0.5940/0.0686|0.0017/0.0002|0.9866/0.0012|0.0913/0.0245|0.8424/0.0132| |TEdit-TW|0.0431/0.0008|0.1629/0.0015|0.8139/0.0093|0.6621/0.0052|0.0016/0.0001|0.9879/0.0003|0.0818/0.0010|0.8582/0.0010| - ### Table 2. Synthetic dataset: edit "trend direction". 
|Method|MSE$\downarrow$|MAE$\downarrow$|RaTS_abs_trend_type$\downarrow$|TAP_trend_type$\uparrow$|RaTS_trend_direction$\uparrow$|TAP_trend_direction$\uparrow$|RaTS_abs_season_cycle$\downarrow$|TAP_season_cycle$\uparrow$| |-|-|-|-|-|-|-|-|-| |Time Weaver|0.1178/0.0181|0.2844/0.0214|0.4211/0.07058|0.5835/0.0739|1.9720/0.0010|0.9831/0.0044|0.0924/0.0249|0.8083/0.0394| |TEdit-TW|0.0965/0.0010|0.2636/0.0013|0.3426/0.0174|0.6656/0.0101|1.9716/0.0005|0.9826/0.0003|0.0747/0.0098|0.8282/0.0056| - ### Table 3. Synthetic dataset: edit "season cycle". |Method|MSE$\downarrow$|MAE$\downarrow$|RaTS_abs_trend_type$\downarrow$|TAP_trend_type$\uparrow$|RaTS_abs_trend_direction$\downarrow$|TAP_trend_direction$\uparrow$|RaTS_season_cycle$\uparrow$|TAP_season_cycle$\uparrow$| |-|-|-|-|-|-|-|-|-| |Time Weaver|0.2294/0.0130|0.3699/0.0090|0.1467/0.0299|0.6988/0.0092|0.0014/0.0001|0.9868/0.0007|0.4991/0.1420|0.2321/0.1344| |TEdit-TW|0.2004/0.0025|0.3608/0.0025|0.1471/0.0027|0.7768/0.0023|0.0012/0.0001|0.9887/0.0002|0.8086/0.0088|0.5230/0.01258| - ### Table 4. Synthetic dataset: edit "trend type" and "trend direction". |Method|MSE$\downarrow$|MAE$\downarrow$|RaTS_trend_type$\uparrow$|TAP_trend_type$\uparrow$|RaTS_trend_direction$\uparrow$|TAP_trend_direction$\uparrow$|RaTS_abs_season_cycle$\downarrow$|TAP_season_cycle$\uparrow$| |-|-|-|-|-|-|-|-|-| |Time Weaver|0.0793/0.0091|0.2216/0.0122|0.6438/0.0539|0.6351/0.0567|1.1216/0.0008|0.9865/0.0011|0.0731/0.0114|0.8401/0.0197| |TEdit-TW|0.0504/0.0007|0.1743/0.0009|0.7194/0.0033|0.7070/0.0019|1.1217/0.0008|0.9871/0.0001|0.0575/0.0020|0.8641/0.0014| - ### Table 5. Synthetic dataset: edit "trend type" and "season cycle". 
|Method|MSE$\downarrow$|MAE$\downarrow$|RaTS_trend_type$\uparrow$|TAP_trend_type$\uparrow$|RaTS_abs_trend_direction$\downarrow$|TAP_trend_direction$\uparrow$|RaTS_season_cycle$\uparrow$|TAP_season_cycle$\uparrow$| |-|-|-|-|-|-|-|-|-| |Time Weaver|0.1873/0.0031|0.3310/0.0018|0.6564/0.0381|0.6721/0.0421|0.0022/0.0002|0.9844/0.0004|0.4523/0.1320|0.3880/0.1244| |TEdit-TW|0.1453/0.0001|0.2939/0.0005|0.6959/0.0078|0.7283/0.0045|0.0018/0.0001|0.9883/0.0001|0.6924/0.0105|0.6233/0.0080| - ### Table 6. Synthetic dataset: edit "trend direction" and "season cycle". |Method|MSE$\downarrow$|MAE$\downarrow$|RaTS_abs_trend_type$\downarrow$|TAP_trend_type$\uparrow$|RaTS_trend_direction$\uparrow$|TAP_trend_direction$\uparrow$|RaTS_season_cycle$\uparrow$|TAP_season_cycle$\uparrow$| |-|-|-|-|-|-|-|-|-| |Time Weaver|0.2307/0.0122|0.3776/0.0108|0.2556/0.0409|0.6810/0.0536|1.1252/0.0005|0.9862/0.0009|0.4601/0.1571|0.3448/0.1526| |TEdit-TW|0.1747/0.0011|0.3289/0.0013|0.2534/0.0080|0.7352/0.0075|1.1253/0.0003|0.9872/0.0002|0.6906/0.0059|0.5762/0.0044| >**W2. Reproducibility.** Thanks for your comments. We will release our code upon the acceptance of this paper, and we will figure out a better way to present the implementation. In the current version of our paper, we have listed the implementation and training details in the appendix. Specifically, Appendix I provides the training configurations, Appendix H shows the architectures of our model, Appendices C and D illustrate the optimization details and training algorithm, Appendix E discusses the data processing details, and Appendix F explains the evaluation model and metrics. We will refine the discussion of the above implementation details for better reproducibility. >**W3. Writing suggestions.** Sorry for the inconvenience. We will make more rounds of proofreading and revise the details. [1] Narasimhan et al. Time weaver: A conditional time series generation model. 
ICML'2024 --- Rebuttal Comment 1.1: Title: Response from Reviewer ZaGr Comment: Thank you for your detailed response, which addresses some of my concerns. I appreciate the effort you put into it. I will maintain my current rating unless more direct replication evidence is provided. --- Reply to Comment 1.1.1: Comment: Thanks for your positive feedback! We are happy to hear that most of your concerns have been addressed. As for the replication, we have sent the AC an anonymous version of the code and data for a replication test, which is allowed under the NeurIPS rebuttal regulations. Due to the NeurIPS rebuttal policy, we are not allowed to add external links in the comments. We promise that we will publish the whole framework, including the data preparation, training, and evaluation pipeline, for reproducibility upon the acceptance of our paper. Here, we provide the details of our implementation below, including pseudo code for training and evaluation, model architectures, and hyper-parameters, hoping to better improve the reproducibility of our work. ## 1. Pseudo code >**Pre-training** > >Inputs: \#epochs $N_{epoch}$, \#batches for each epoch $N_{batch}$, batch size $B$, noise estimator $\epsilon_\theta$, pretraining dataset $\mathcal{D}$, total diffusion step $T$, variance schedule $\\{\beta_t\\}_{t=1}^T$ > >For epoch $n_{epoch}<N_{epoch}$: > >$\qquad$For batch $n_{batch}<N_{batch}$: > >$\qquad\qquad$ \# 1. Load data > >$\qquad\qquad$ Load a batch of $\mathbf{X}, \mathbf{A}$ > >$\qquad\qquad\qquad$ \# 2. 
Train $\Phi$ via Eq.1 > >$\qquad\qquad\qquad$ For each sample $\mathbf{x}\in\mathbf{X},\mathbf{a}\in\mathbf{A}$: > >$\qquad\qquad\qquad$ $t\sim Uniform(1,T)$ \# Sample a diffusion step $t$ > >$\qquad\qquad\qquad$ $\epsilon\sim\mathcal{N}(0,\mathbf{I})$ \# Sample Gaussian noise $\epsilon$ > >$\qquad\qquad\qquad$ $\mathbf{x}\_t=\sqrt{\alpha\_t}\mathbf{x}\_0+\sqrt{1-\alpha\_t}\,\epsilon,~\alpha\_t:=\Pi\_{s=1}^t(1-\beta\_s)$ \# Get noisy time series $\mathbf{x}\_t$ > >$\qquad\qquad\qquad$ $\hat{\epsilon}=\epsilon_\theta(\mathbf{x}\_t,t,\mathbf{a})$ \# Predict the noise > >$\qquad\qquad\qquad$ $l=||\hat{\epsilon}-\epsilon||^2$ \# Noise estimation loss > >$\qquad\qquad\qquad$ Update the noise estimator $\epsilon_\theta$ by minimizing $l$ >**Finetuning** > >Inputs: \#epochs $N_{epoch}$, \#batches for each epoch $N_{batch}$, batch size $B$, diffusion model $\Phi$ (the core component is the noise estimator $\epsilon_\theta$), bootstrap ratio $\psi$, finetuning dataset $\mathcal{D}$ > >For epoch $n_{epoch}<N_{epoch}$: > >$\qquad$For batch $n_{batch}<N_{batch}$: > >$\qquad\qquad$ \# 1. Load data > >$\qquad\qquad$ Load a batch of $\mathbf{X}^{src}, \mathbf{A}^{src}, \mathbf{A}^{tgt}$ > >$\qquad\qquad$ \# 2. 
Bootstrapping > >$\qquad\qquad$ $\hat{\mathbf{X}}^{tgt}\leftarrow\Phi(\mathbf{X}^{src},\mathbf{A}^{src}|\mathbf{A}^{tgt})$ \# Generate the target $\hat{\mathbf{X}}^{tgt}$ by editing $\mathbf{X}^{src}$ as Eqs.4-5 > >$\qquad\qquad$ $\hat{\mathbf{X}}^{src}\leftarrow\Phi(\hat{\mathbf{X}}^{tgt},\mathbf{A}^{tgt}|\mathbf{A}^{src})$ \# Back-translation, Eqs.4-5 > >$\qquad\qquad$ $\mathbf{s}\leftarrow MSE(\hat{\mathbf{X}}^{src}, \mathbf{X}^{src})$ \# Get self-score via MSE > >$\qquad\qquad$ $\hat{\mathbf{X}}^{tgt}\leftarrow Sort(\hat{\mathbf{X}}^{tgt}|\mathbf{s})$ \# Sort $\hat{\mathbf{X}}^{tgt}$ based on $\mathbf{s}$ > >$\qquad\qquad$ $\hat{\mathbf{X}}^{tgt}\leftarrow\hat{\mathbf{X}}^{tgt}[:B\cdot\psi]$ \# Keep the top $B\cdot\psi$ samples $\hat{\mathbf{x}}^{tgt}\in\hat{\mathbf{X}}^{tgt}$ with the lowest MSE scores > >$\qquad\qquad$ \# 3. Calculate loss > >$\qquad\qquad$ Calculate the noise estimation loss based on the bootstrapped samples $\hat{\mathbf{X}}^{tgt}$. \# Similar to Eq.1 for pretraining > >$\qquad\qquad$ Update the model $\Phi$ via backpropagation. >**Evaluation** > >Inputs: model $\Phi$, dataset $\mathcal{D}$, \#batches $N_{batch}$, Time series Attribute Pretraining model (TAP) > >For batch $n_{batch}<N_{batch}$: > >$\qquad$ \# 1. Load data > >$\qquad$ Load a batch of $\mathbf{X}^{src}, \mathbf{A}^{src}, \mathbf{A}^{tgt}$, $\mathbf{X}^{tgt}$ \# $\mathbf{X}^{tgt}$ is only available for Synthetic data > >$\qquad$ \# 2. Generate $\hat{\mathbf{X}}^{tgt}$ by editing $\mathbf{X}^{src}$ > >$\qquad$ $\hat{\mathbf{X}}^{tgt}\leftarrow\Phi(\mathbf{X}^{src},\mathbf{A}^{src}|\mathbf{A}^{tgt})$ > >$\qquad$ \# 3. Evaluate $\hat{\mathbf{X}}^{tgt}$ > >$\qquad$ Calculate RaTS scores (Appendix F.3) and TAP scores (Appendix F.2) based on the TAP model (Appendix G). > >$\qquad$ Calculate $MSE(\hat{\mathbf{X}}^{tgt}, \mathbf{X}^{tgt})$ if $\mathbf{X}^{tgt}$ is available
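The bootstrap selection step of the Finetuning pseudo code above (edit, back-translate, self-score by MSE, keep the top-$\psi$ fraction) can be sketched in plain Python; `toy_edit` is a hypothetical stand-in for the diffusion model $\Phi$, used only to make the round-trip scoring concrete:

```python
def bootstrap_select(X_src, edit_fn, psi=0.5):
    """One bootstrapping round (sketch): edit each source series toward the
    target attributes, back-translate it to the source attributes, score the
    round trip by MSE against the original, and keep the top-psi fraction
    with the lowest score. edit_fn stands in for the learned model Phi."""
    def mse(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

    X_tgt = [edit_fn(x, "src->tgt") for x in X_src]    # edit to target
    X_back = [edit_fn(x, "tgt->src") for x in X_tgt]   # back-translate
    scores = [mse(xb, xs) for xb, xs in zip(X_back, X_src)]
    order = sorted(range(len(X_src)), key=lambda i: scores[i])
    keep = max(1, int(len(X_src) * psi))
    return [X_tgt[i] for i in order[:keep]]

# Toy edit function: shift up toward the target and down on the way back,
# with an extra per-sample error so round-trip fidelity differs (hypothetical).
def toy_edit(x, direction):
    delta = 1.0 if direction == "src->tgt" else -1.0
    extra = 0.1 if direction == "src->tgt" else 0.0
    return [v + delta + (extra if v > 10 else 0.0) for v in x]

batch = [[0.0, 1.0], [20.0, 21.0]]
kept = bootstrap_select(batch, toy_edit, psi=0.5)  # keeps the faithful sample
```

Only the edited samples whose round trips best reconstruct their sources are kept for the finetuning loss, which is the filtering role $\psi$ plays above.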
Summary: This work proposes a new task, time series editing (TSE), which edits existing time series samples based on given sets of attributes. The goal is to perform edits that are only relevant to the changed attributes while preserving other existing attributes. The authors propose a multi-resolution modeling and generation model, and perform bootstrap tuning on self-generated instances. The authors show the proposed method can be used for modifying specified time series on synthetic and real-world datasets. Strengths: - The idea is interesting. TS editing could be an important task for time series. - The proposed framework makes sense. The multi-resolution backbone clearly addresses issues of existing backbones; and the bootstrapped training method seems legit given the challenge of existing data/information. - Experiments on synthetic datasets showed a clear advantage of the proposed method. Weaknesses: While I personally like the idea a lot, there are a few non-negligible issues (rooms for improvement): 1. The work could use better applications. There are a couple of tasks/applications that could clearly benefit from TS editing. For example, see the works below: [1] Cheng, J. Y., Goh, H., Dogrusoz, K., Tuzel, O., & Azemi, E. (2020). Subject-aware contrastive learning for biosignals. arXiv preprint arXiv:2007.04871. [2] Liu, R., Azabou, M., Dabagia, M., Lin, C. H., Gheshlaghi Azar, M., Hengen, K., ... & Dyer, E. (2021). Drop, swap, and generate: A self-supervised approach for generating neural activity. Advances in neural information processing systems, 34, 10587-10599. [3] Yi, K., Wang, Y., Ren, K., & Li, D. (2024). Learning topology-agnostic eeg representations with geometry-aware modeling. Advances in Neural Information Processing Systems, 36. The above works include datasets with very specific attributes, e.g. subject information; task information; geo/channel information. 
If the proposed method could be applied to the above tasks and demonstrate robustness, the paper could be much stronger. 2. Experimental results are overall not convincing enough. The datasets are too small, and there is not enough information about the synthetic dataset. There are not a lot of baselines. 3. Some parts of the method section could use better presentation: - Section 3.1 How does the Time Series Editing task differ from the normal editing task in vision? Why do the authors want to define a new definition for it? - Section 3.2 The authors should argue why diffusion is necessary for performing the task. What advantages would it bring? - Section 3.4 The authors should compare the proposed method with traditional SSL methods like BYOL & SimCLR and discuss their differences. Technical Quality: 4 Clarity: 3 Questions for Authors: NA Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: The authors discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your appreciation and comments! >**W1. Suggested new datasets.** Thanks for your suggestion. The datasets you mentioned are indeed related and useful, revealing a wider application space for our work. We have looked into these datasets, many of which are EEG time series with various attributes, such as subject and task information. For example, the SEED dataset used in [1] contains attributes such as EEG channels and subjects, which is similar to the Motor dataset (Sec. 4.1 and Appendix E) used in our paper. Potential applications of these datasets include completing time series for missing channels by editing the time series of other channels. Recall that time series editing is a relatively new task, and there is still much to explore. We plan to further investigate the evaluation benchmark of the Time Series Editing (TSE) task and will use a broader range of datasets in future research. >**W2. Dataset size, synthetic data details and baselines.** Thanks for your comments. 1. For the dataset size, we believe the size should not be considered "too small", since the datasets we used are of approximately the same scale as many datasets used in recent studies, e.g., time series generation [2] and LLM-based time series models [3]. In total, our Synthetic-1/Synthetic-2/Motor/Air have 10,660/13,680/12,800/6,850 samples. Many datasets used in [2] and [3] range between 10,000~30,000 samples. Collecting larger datasets will be an important future work for our paper and the proposed TSE task, and we will keep gathering more datasets at larger scales. 2. For the synthetic dataset, due to space limitations in the main paper, its details are presented in Appendix E.1. Briefly speaking, the synthetic samples we constructed consist of four parts: *trend*, *season*, *noise* and *bias*. - The *trend* has two attributes: trend types (linear, quadratic, exponential, logistic) and trend directions (up, down). 
- The attribute of *season* is the number of cycles, with possible values of 0, 1, 2, 4. - The *noise* is a combination of Gaussian noise and high-frequency noise. - The *bias* of each sample is randomly sampled from a uniform distribution. 3. Regarding the baselines, we are exploring a brand new field, and to the best of our knowledge, this is the first work focusing on developing methods and evaluation benchmarks for TSE. There are only a few related works, and adapting models from other tasks to this one involves significant effort. The baselines we selected are Time Weaver [2], the latest work on conditional time series generation, and CSDI [4], a classic time series imputation framework. Please note that the latest work [2] compared against only two traditional GAN-based methods, due to the lack of proper baselines in conditional time series generation. We believe [2][4] effectively reflect the current state of the art in related fields. However, we will consider adding more comparative analyses in the future. Thank you for the suggestion. > **W3. Presentation suggestions.** Thank you for your suggestions; we will update the revised version accordingly for better readability. 1. About time series editing (TSE) and image editing. 1.1. Comparison of TSE and image editing. Broadly speaking, TSE shares the same target as image editing: edit the given data according to the specified conditions. However, time series data is quite different from image data, and there are unique challenges for TSE. - Time series have attributes such as the *recording location (city)*, which are more difficult for models to interpret than attributes such as *color* in images. - Time series data has its own unique properties, e.g., variant seasonality and a low signal-to-noise ratio. - Moreover, the time series data corresponding to some specific conditions might be rare, making generation and editing quite challenging. 
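The four-part construction described in point 2 above (trend + season + noise + bias) can be sketched as below; the functional forms, amplitudes, and series length are illustrative assumptions for this sketch, and the exact settings used in the paper are those of Appendix E.1:

```python
import math
import random

def synth_sample(trend_type, direction, cycles, length=96, seed=0):
    """Illustrative synthetic sample: trend + season + noise + bias.
    Amplitudes and functional forms here are assumptions, not the paper's."""
    rng = random.Random(seed)
    t = [i / (length - 1) for i in range(length)]  # normalized time in [0, 1]
    trends = {
        "linear": lambda u: u,
        "quadratic": lambda u: u * u,
        "exponential": lambda u: (math.exp(u) - 1) / (math.e - 1),
        "logistic": lambda u: 1 / (1 + math.exp(-10 * (u - 0.5))),
    }
    sign = 1.0 if direction == "up" else -1.0
    trend = [sign * trends[trend_type](u) for u in t]
    season = [0.3 * math.sin(2 * math.pi * cycles * u) for u in t]
    # Gaussian noise plus a small high-frequency component.
    noise = [0.02 * rng.gauss(0, 1) + 0.01 * math.sin(2 * math.pi * 40 * u)
             for u in t]
    bias = rng.uniform(-0.5, 0.5)
    return [tr + se + no + bias for tr, se, no in zip(trend, season, noise)]

x = synth_sample("linear", "up", cycles=2)
```

Varying `trend_type`, `direction`, and `cycles` over their value sets yields the attribute combinations the benchmark edits and preserves.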
Note that we propose multi-resolution modeling and generation with a bootstrapping mechanism to address the aforementioned problems. 1.2. Definition presented in Section 3.1. Though it shares a similar definition with the image editing task, in Section 3.1 we formally formulate the TSE task, mainly because editing is a relatively new concept for the time series research community. A formal problem formulation helps readers easily understand the task and follow the paper. 2. The necessity of diffusion. TSE is a kind of generation task, which requires powerful generative models. There are two considerations behind using diffusion models. (1) Diffusion is the latest generation technique, and some recent works [2] have demonstrated that diffusion models can significantly outperform traditional methods such as Generative Adversarial Networks (GANs). (2) To keep a fair comparison with the latest conditional time series generation work [2], we use the same backbone model and base generation algorithm as the baseline works [2][4]. 3. BYOL and SimCLR. The work BYOL you mentioned bootstraps the output of a neural network to serve as targets for a learning network to approximate. It only bootstraps the learning targets, which differs from our method that utilizes the learned generative model to generate both the input and target samples for bootstrapping the learning of the target generative model. SimCLR adopts different data augmentations to build the target representation for contrastive learning, which does not leverage bootstrapping. We will add more discussion of other bootstrapping works to enhance the presentation of our work. [1] Yi et al. Learning topology-agnostic eeg representations with geometry-aware modeling. NeurIPS'2023 [2] Narasimhan et al. Time weaver: A conditional time series generation model. ICML'2024 [3] Jin et al. Time-LLM: Time Series Forecasting by Reprogramming Large Language Models. ICLR'2024 [4] Tashiro et al. 
CSDI: Conditional score-based diffusion models for probabilistic time series imputation. NeurIPS'2021 --- Rebuttal Comment 1.1: Title: Look forward to hearing from you Comment: Dear Reviewer tfFr, Thanks for your valuable comments and feedback! We hope our response could help address your concerns. Today is the last day of discussion. We look forward to hearing from you and hope to discuss any concerns or suggestions to further improve our work. Thank you! Best, Authors of the 10940 submission --- Rebuttal 2: Title: Thanks Comment: Overall I like the work, especially the possibilities it brings to the field. I'd suggest the authors include more discussions during revision, especially regarding the applications that I suggested to encourage future follow-ups. The rebuttals also provided some other evidence for me to further advocate for the work and thus I will raise my score. --- Rebuttal Comment 2.1: Comment: Thank you for your feedback! We will revise our paper according to your suggestions.
Rebuttal 1: Rebuttal: # General Response We thank all the reviewers for their insightful and valuable feedback! We are encouraged by the positive comments from the reviewers such as: - The proposed Time Series Editing (TSE) is an interesting and important task with a wide range of applications (Reviewer tfFr, ZaGr, EFAk and FqWw). - The proposed framework TEdit, including the bootstrapping and multi-resolution mechanisms, is technically sound (Reviewer tfFr, ZaGr, EFAk, FqWw). - Experimental results demonstrate the advantages of the proposed TEdit method as a new baseline (Reviewer tfFr, ZaGr, EFAk). - The paper is clearly written and well-structured (Reviewer ZaGr). Here, we also try to address some common concerns raised by the reviewers. **Q1. Averaged results of different modifying attributes.** As raised by reviewers ZaGr, EFAk, FqWw, the average value over modifying different attributes should be used as a more robust evaluation metric. ***We have provided the corresponding results in Tables 1 and 2, and also put the detailed experimental results of editing different attributes in the attached PDF file below in this General Response.*** Due to the time limit of the rebuttal, we compared the performance of Time Weaver and our method on the Synthetic dataset and the real-world Air dataset. We will conduct full experiments in the revised version of our paper. |Method|MSE$\downarrow$|MAE$\downarrow$|TAP_trend_type$\uparrow$|TAP_trend_direction$\uparrow$|TAP_season_cycle$\uparrow$|RaTS_edited$\uparrow$|RaTS_abs_preserved$\downarrow$| |:--------:|:--------:|:--------:|:--------:|:---------:|:--------:|:---------:|:--------:| |Time Weaver|0.1502|0.2953|0.6441|0.9856|0.5760|0.8537|0.1206| |TEdit-TW (ours)|**0.1184**|**0.2641**|**0.7125**|**0.9870**|**0.7122**|**0.9599**|**0.1069**| #### Table 1. Average performance of editing different attributes on the Synthetic dataset. 
|Method|TAP_city$\uparrow$|TAP_season$\uparrow$|RaTS_edited$\uparrow$|RaTS_preserved$\downarrow$| |:--------:|:--------:|:--------:|:--------:|:--------:| |Time Weaver|0.6386|0.2955|0.8799|0.2017| |TEdit-TW (ours)|**0.7530**|**0.3303**|**0.9970**|**0.1753**| #### Table 2. Average performance of editing different attributes on the Air dataset. From the above tables of average performance, our proposed method (TEdit) outperformed the baseline in all aspects on the Synthetic dataset and the real-world Air dataset. However, averaging the performance over editing different attributes neglects important insights into the time series editing task. From the detailed experimental results (as listed in the PDF file below), we found that the difficulty of editing different attributes is significantly different, and the average result may discard this difference. Specifically, we summarize the observations on the detailed performance of editing different attributes below. * On both the Synthetic dataset and the real-world Air dataset, our method performs better than the baseline in almost all the settings. * On the Synthetic dataset, we notice that editing the attribute "trend type" is the easiest (better performance) while editing "season_cycle" is the most difficult. * The difficulty of modifying multiple attributes is higher than that of modifying an individual attribute. * On the Air dataset, our method far exceeds the baseline on the attribute-editing metrics. We will add these discussions in the revised paper accordingly. **Q2. About the baselines and related works.** Some reviewers have recommended other related works for discussion, including wider applications on real-world datasets (Reviewer tfFr) and more discussion on differentiation from other works (Reviewer EFAk). These suggestions indeed help improve our work. We briefly explain the baseline selection of our paper. 
As has been recognized by all the reviewers, the proposed task of time series editing is novel, and there are very few related works in this new direction. Our work compares against two baselines, namely Time Weaver [1], which focuses on conditional generation of time series, and CSDI [2], which focuses on time series imputation. Note that these works are either the most cutting-edge research [1] or the classic yet pioneering work [2] in their respective fields. Both are representative of the latest research frontier in time series generation. The works mentioned by Reviewer tfFr cover more datasets in addition to those in our paper. These new datasets extend the application of our work, and they also share a processing pipeline similar to that used in our paper. We plan to leverage those new datasets to further enhance the benchmark of the proposed time series editing task, as detailed in the response to W1 of Reviewer tfFr. Reviewer EFAk has provided some related works about time series generation and the model architecture. We have detailed the discussion of differentiation from these works in the responses to W1 and W3 of Reviewer EFAk. We will add the discussion of these works in our revised paper. [1] Narasimhan, Sai Shankar, et al. "Time weaver: A conditional time series generation model." ICML'2024. [2] Tashiro, Yusuke, et al. "Csdi: Conditional score-based diffusion models for probabilistic time series imputation." NeurIPS'2021. Pdf: /pdf/bd80ca4f6b929a623f9b39477f1d7b0868b596cc.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
PeRFlow: Piecewise Rectified Flow as Universal Plug-and-Play Accelerator
Accept (poster)
Summary: This paper proposes piecewise rectified flow (PeRFlow) for accelerating pre-trained diffusion models. To overcome the requirement of synthetic data generation in rectified flow, the authors propose to prepare the training data by dividing the entire ODE trajectory into multiple time windows. The sampling trajectories within each time window are then straightened by the reflow operation. The proposed method is adapted to multiple diffusion models with different parameterizations. Experiments on text-to-image (SD-v1.5, SD-v2.1, SDXL) and text-to-video (AnimateDiff) models demonstrate the effectiveness of the proposed method. Strengths: * This paper addresses a major performance bottleneck in rectified flow, namely the synthetic data generation stage, which requires costly simulation with higher numerical errors. The proposed solution allows online simulation of ODE trajectories and thus more efficient training. * The proposed PeRFlow is extensively tested on multiple diffusion models for text-to-image and text-to-video generation, and the comparative results with the previous state-of-the-art few-step diffusion baselines are impressive. * Code is provided for both training and inference. Weaknesses: * The contribution of this work is weakened by its similarity to Sequential Reflow [1], and the additional design to be compatible with different parameterization strategies is somewhat incremental. * The main motivation to improve training efficiency (line 47) is not reflected in the experiments. The authors should provide a more comprehensive comparison with rectified flow in terms of performance/training computation tradeoff. * The statement in line 55 that PeRFlow "has a lower numerical error than integrating the entire trajectories" should be more carefully validated, e.g. by quantitatively comparing straightness [2] or curvature [3] within each time window. 
The authors could also apply their method to the commonly used 2D checkerboard data to provide a more intuitive visualization of the learned probability path. * There is a lack of ablation studies for several design choices. The authors should analyze the sensitivity of PeRFlow's performance to the number of time windows and sampling steps (more densely). It is also unclear how adding "one extra step in $[t\_K,t\_{K-1}]$​" (line 223) contributes to the final results. --- 1. Yoon, et al. Sequential Flow Straightening for Generative Modeling. arXiv 2024. 2. Liu, et al. Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow. ICLR 2023. 3. Lee, et al. Minimizing Trajectory Curvature of ODE-based Generative Models. ICML 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: * The proposed method seems to be applicable to model training in addition to acceleration. Have the authors considered training their models from scratch on CIFAR-10 or ImageNet? This would allow a direct comparison of the performance/efficiency tradeoff with a broader family of flow matching algorithms. * What does "one-step" mean in Figure 8 when the sampling step should be lower bounded by the number of time windows of 4? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. The authors have discussed their limitations in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the comments and advice. 1. *The contribution of this work is weakened by its similarity to Sequential Reflow [1], and the additional design to be compatible with different parameterization strategies is somewhat incremental.* - Technically, PeRFlow shares a similar idea of dividing a whole time range into multiple sequences. Moreover, PeRFlow designs dedicated parameterizations for each type of pretrained diffusion model, which facilitate fast convergence of the acceleration. PeRFlow discusses the effect of the CFG of teacher models during distillation. PeRFlow also demonstrates the plug-and-play properties of flow-based acceleration methods. 2. *The main motivation to improve training efficiency (line 47) is not reflected in the experiments. The authors should provide a more comprehensive comparison with rectified flow in terms of performance/training computation tradeoff.* - Yes, we agree that a more comprehensive comparison of the performance/computation tradeoff should be included. - In each training iteration, the computational cost of PeRFlow for synthesizing the training target is $1/K$ of that of InstaFlow, where $K$ is the number of time windows. That's why we make the claim. - For example, in each iteration, InstaFlow samples a noise and uses 32-step DDIM to solve the target (solving from t=1 to t=0). If PeRFlow divides the whole time window into 4 segments, it only requires 8-step DDIM operations to solve a sub-time window. 3. *The statement in line 55 that PeRFlow "has a lower numerical error than integrating the entire trajectories" should be more carefully validated, e.g. by quantitatively comparing straightness [2] or curvature [3] within each time window. The authors could also apply their method to the commonly used 2D checkerboard data to provide a more intuitive visualization of the learned probability path.* - Thanks for the advice; we will add some visualizations. 
The claim holds in most cases, because the accumulated error of numerical integration increases with the length of the integration interval. 4. *There is a lack of ablation studies for several design choices. The authors should analyze the sensitivity of PeRFlow's performance to the number of time windows and sampling steps (more densely). It is also unclear how adding "one extra step in $[t_K,t_{K-1}]$" (line 223) contributes to the final results.* - Thanks for the advice. We will add more analysis in the final version. - Number of time windows and sampling steps: - The number of training segments depends on the minimum number of steps we expect at the inference stage. Suppose the minimum number of inference steps is N; then the number of training segments K should be less than or equal to N. In our experiments, we evaluate 4-step, 6-step, and 8-step generation results, so we set the number of training segments to four. The reason is that we cannot approximate the velocity of a time window by the velocity of its previous time window. So, for each window, we should allocate at least a one-step computation budget. - In some special cases (e.g., Wonder3D in Figure 8, Appendix), after 4-piece PeRFlow acceleration, the trajectory across the whole time window is almost linear. We can generate multi-view results with one step. But in most cases, we should use a number of inference steps larger than or equal to the number of training segments. - We explain the inference budget allocation strategy in lines 215-225. - We agree it should be clarified with a more formal statement. Thanks for this suggestion. Suppose we have $K$ time windows and $N \geq K$ inference steps. From the noisy to the clean state, the time windows are indexed by $K, K-1, \dots, 1$. - If $N$ is divisible by $K$, each time window is solved with $N//K$ steps. Otherwise, we allocate $N//K+1$ steps to windows whose index $i$ satisfies $K-i < N \bmod K$. The remaining windows are allocated $N//K$ steps. In other words, we try to allocate the budget equally. 
If the budget is not divisible, the extra steps are given to time windows in noisy regions, because the important layout synthesis is carried out in these regions. 5. *The proposed method seems to be applicable to model training in addition to acceleration. Have the authors considered training their models from scratch on CIFAR-10 or ImageNet? This would allow a direct comparison of the performance/efficiency tradeoff with a broader family of flow matching algorithms.* - In this work, we focus on the acceleration of pretrained diffusion models. We leave the study of pretraining (training flow models from scratch via PeRFlow) to future work. - Although Stable Diffusion 3 demonstrates that training a rectified flow (single-piece linear flow) can generate high-quality images, there are still several open questions to study. - Piecewise linear flow generalizes the rectified flow; does there exist a proper window-dividing plan to train a better generation model? - If so, what kind of properties will it have in comparison to the vanilla rectified flow? Fast convergence? Fewer steps required for inference sampling? 6. *What does "one-step" mean in Figure 8 when the sampling step should be lower bounded by the number of time windows of 4?* - Please refer to the answer to weakness 4. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses to my review, which addressed many concerns. I understand that some of the requested experiments cannot be performed due to limited time for rebuttal, and I hope that these experiments will be included in the final version.
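The budget-allocation rule described in this rebuttal (equal split of the inference budget, with the remainder going to the noisier windows) can be sketched in a few lines of Python; the function name `allocate_steps` and the dict output format are illustrative choices, not the authors' released code:

```python
def allocate_steps(n_steps, n_windows):
    """Split an inference budget of n_steps over n_windows time windows.

    Windows are indexed n_windows, ..., 1 from noisy to clean. Each window
    gets n_steps // n_windows steps; when the budget is not divisible, the
    leftover steps go to the noisier windows (index i with
    n_windows - i < n_steps % n_windows), where layout synthesis happens.
    """
    assert n_steps >= n_windows, "need at least one step per window"
    base, extra = divmod(n_steps, n_windows)
    return {i: base + (1 if n_windows - i < extra else 0)
            for i in range(n_windows, 0, -1)}
```

For example, 6 steps over 4 windows gives 2 steps to each of the two noisiest windows and 1 step to each of the rest, while 8 steps over 4 windows splits evenly into 2 steps per window.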
Summary: This paper presents a novel approach to accelerating diffusion models by introducing the Piecewise Rectified Flow (PeRFlow). This method significantly enhances the efficiency of generating high-quality generative samples by dividing the flow trajectories of diffusion models into several time windows and straightening them using a reflow operation. Key contributions of the paper include: - Superior Performance in Few-Step Generation: PeRFlow reduces the number of inference steps required while maintaining or improving the quality of generative samples. - Fast Training and Transfer Ability: The models adapt quickly due to inherited parameters from pre-trained diffusion models, demonstrating good transferability across different models. - Universal Plug-and-Play Capability: PeRFlow models serve as accelerators compatible with various pre-trained diffusion models, facilitating seamless integration into existing workflows. Strengths: Overall I find that the writing is clear, concise, and well-structured, making it easy for readers to follow the arguments and understand the key points. I like the idea of multi-step or piecewise generative models since it is natural to extend InstaFlow into a multi-step fashion, which offers flexibility between speed and quality. Weaknesses: - I think the multi-step consistency model [1] should be discussed since it has a strong correlation with this paper. In the experiments section, you only compare PeRFlow with LCM and InstaFlow, both of which are relatively early works. There are plenty of distillation methods in this field that are worth mentioning and comparing, including HyperSD [2], CTM [3], and DMD [4]. - The most important hyper-parameter N, i.e., the number of segments, lacks analysis. How do you choose its value? What’s the relationship between the number of segments used in training and the number of sampling steps used in inference? 
- I like the idea of the “plug-and-play” accelerator by extracting the delta weight to speed up other diffusion models. However, the implementation details and analysis in the paper are really limited with just a few demos. Besides, I think this is a general method that can be applied to any accelerated diffusion model, such as LCM? - The paper claims that “the computational cost is significantly reduced for each training iteration compared to InstaFlow”. However, do you have any quantitative evaluation, including the comparison with other methods? [1] Heek, Jonathan, Emiel Hoogeboom, and Tim Salimans. "Multistep consistency models." ICML 2024. [2] Ren, Yuxi, et al. "Hyper-sd: Trajectory segmented consistency model for efficient image synthesis." *arXiv preprint arXiv:2404.13686* (2024). [3] Kim, Dongjun, et al. "Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion.” NeurIPS 2023. [4] Yin, Tianwei, et al. "One-step diffusion with distribution matching distillation.” CVPR 2024. Technical Quality: 3 Clarity: 4 Questions for Authors: See the weakness above. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the comments and advice. 1. *I think the multi-step consistency model [1] should be discussed since it has a strong correlation with this paper. In the experiments section, you only compare PeRFlow with LCM and InstaFlow, both of which are relatively early works. There are plenty of distillation methods in this field that are worth mentioning and comparing, including HyperSD [2], CTM [3], and DMD [4].* - Thanks for providing these related works. Yes, we will add a discussion of [1], which shares a similar idea of dividing the whole time window into multiple segments. The difference is that [1] trains a consistency model for each segment while PeRFlow trains a linear flow. - In Table 1, we compared PeRFlow with LCM, InstaFlow, and SDXL-lightning, which is a state-of-the-art few-step text-to-image generator. Yes, we agree that other distillation methods should also be discussed. 2. *The most important hyper-parameter N, i.e., the number of segments, lacks analysis. How do you choose its value? What’s the relationship between the number of segments used in training and the number of sampling steps used in inference?* - The number of training segments depends on the minimum number of steps we expect at the inference stage. - Suppose the minimum number of inference steps is N; then the number of training segments K should be less than or equal to N. In our experiments, we evaluate 4-step, 6-step, and 8-step generation results, so we set the number of training segments to four. - The reason is that we cannot approximate the velocity of a time window by the velocity of its previous time window. So, for each window, we should allocate at least a one-step computation budget. - In some special cases (e.g., Wonder3D in Figure 8, Appendix), after 4-piece PeRFlow acceleration, the trajectory across the whole time window is almost linear. We can generate multi-view results with one step. 
But in most cases, we should use a number of inference steps larger than or equal to the number of training segments. 3. *I like the idea of the “plug-and-play” accelerator by extracting the delta weight to speed up other diffusion models. However, the implementation details and analysis in the paper are really limited with just a few demos. Besides, I think this is a general method that can be applied to any accelerated diffusion model, such as LCM?* - Yes, plug-and-play is a general property, as with LCM: we can add the delta weights to any other diffusion pipeline for inference acceleration, such as controlnet, image-to-image, and ip-conditioned image generation. - We will add more implementation details. The delta weights are equal to the weights after PeRFlow acceleration minus the initial pretrained weights. - We provide a short analysis in lines 196-204, where we observe that the delta weights of PeRFlow can better preserve the properties of the original diffusion models in comparison to LCM, including a minor domain shift. 4. *The paper claims that “the computational cost is significantly reduced for each training iteration compared to InstaFlow”. However, do you have any quantitative evaluation, including the comparison with other methods?* - In each training iteration, the computational cost of PeRFlow for synthesizing the training target is $1/K$ of that of InstaFlow, where $K$ is the number of time windows. That's why we make the claim. - For example, in each iteration, InstaFlow samples a noise and uses 32-step DDIM to solve the target (solving from t=1 to t=0). If PeRFlow divides the whole time window into 4 segments, it only requires 8-step DDIM operations to solve a sub-time window. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I would like to maintain my score.
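The $1/K$ per-iteration cost claim in point 4 of this rebuttal amounts to simple arithmetic; as a sketch (the helper name is hypothetical, not from the paper's code):

```python
def target_synthesis_steps(total_ddim_steps, n_windows):
    """DDIM solver steps needed per training iteration to synthesize the
    reflow target. Full-trajectory reflow (InstaFlow-style, n_windows=1)
    integrates the whole trajectory; piecewise reflow integrates only one
    of n_windows sub-windows, cutting the cost by a factor of n_windows.
    """
    return total_ddim_steps // n_windows
```

With a 32-step DDIM teacher, full-trajectory reflow costs 32 solver steps per iteration, while a 4-window split costs only 8.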
Summary: In this paper, the authors propose a new paradigm for the sampling process (Piecewise Rectified Flow, PeRFlow) with the reflow operation in the diffusion model, straightening the trajectories of the original PF-ODEs and achieving better performance in few-step generation. Specifically, PeRFlow divides the sampling process (ODE trajectories) into multiple time windows, then performs the reflow operation within each time window to straighten its trajectories. Compared to the original diffusion model with the reflow operation, it significantly reduces the synthesis time of training data for reflow and narrows the numerical errors of solving ODEs when generating the training data, yielding a higher-quality training dataset. Also, it only requires several inference steps to solve the ending point in each time window, achieving a diffusion model acceleration method with faster training convergence, more linear trajectories, and better performance. Strengths: * The paper is well organized and clearly structured; it is very easy to follow. * Many fully detailed mathematical formulas are derived, making it easier to understand the details of the proposed method. * The figure of the proposed method is well designed; the effect and rough structure of the proposed method can be understood at a glance without reading the text description. * The paper uses a sufficiently large dataset containing rich images and texts, and enough SOTA acceleration methods, to evaluate the proposed method. Weaknesses: * The evaluation metrics for most generative models include FID and IS, and this paper only adopts FID as the evaluation metric. Although this paper aims to accelerate the diffusion model with better performance, it would be better if the authors could evaluate the diversity of the generated images using IS. In this case, people can know whether the method affects the diversity of generated images. 
* It would be better if the authors could directly indicate in the table that lower FID values are better. People who are not in the generative model field may not be familiar with FID. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weakness part. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the comments and advice. 1. *The evaluation metrics for most generative models include FID and IS, and this paper only adopts FID as the evaluation metric. Although this paper aims to accelerate the diffusion model with better performance, it would be better if the authors could evaluate the diversity of the generated images using IS. In this case, people can know whether the method affects the diversity of generated images.* - Thanks for your suggestion. We will add the IS values in the final version. Due to the limited rebuttal time and GPU resources, we cannot generate enough images to compute the IS value in this round of discussion. 2. *It would be better if the authors could directly indicate in the table that lower FID values are better. People who are not in the generative model field may not be familiar with FID.* - Thanks for your suggestion. We will highlight this in the final version.
Summary: The paper introduces a new flow-based method designed to accelerate diffusion models by dividing the sampling process into several time windows. The sampling path within each time window is straightened by the reflow operator. This approach allows for fast training convergence, transferability, and compatibility with various pretrained diffusion model workflows. Strengths: - The approach’s motivation is clear. Theoretical arguments support the proposal well. - The empirical results are promising and better than those of existing baselines. Weaknesses: - When dividing the sampling process into several time windows, the error of the previous windows immensely affects the later ones, potentially increasing the cumulative error of the whole sampling process. Technical Quality: 3 Clarity: 3 Questions for Authors: - Can PeRFlow's approach be generalized to other types of generative models beyond diffusion models such as GAN-based or VAE-based? - Can you provide more detailed insights into the parameterization techniques used and their impact on the training convergence and final model performance? - What are the observed benefits of using synchronized versus fixed CFG modes? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the comments and advice. 1. *When dividing the sampling process into several time windows, the error of the previous windows immensely affects the later ones, potentially increasing the cumulative error of the whole sampling process.* - In practice, we observe that increasing the number of time windows reduces the difficulty of training, because a shorter time window is easier to straighten and the error of this short window is better controlled. - We agree with this concern. This potential issue may happen when the error in each window does not change much with respect to the time length. Then, more windows may lead to a larger error. But, in our experiments, we find that the error of each window decreases noticeably when we shorten the time window. 2. *Can PeRFlow's approach be generalized to other types of generative models beyond diffusion models such as GAN-based or VAE-based?* - GAN-based and VAE-based methods are naturally one-step generators. PeRFlow works well for iterative generators like diffusion/flow methods. 3. *Can you provide more detailed insights into the parameterization techniques used and their impact on the training convergence and final model performance?* - Pretrained diffusion methods learn a great deal of useful information from large-scale training data. Parameterizing the target few-step model in the same way as the pretrained diffusion model helps inherit useful information, such as interacting with the conditioning texts. Then, the acceleration algorithm can train on a relatively small dataset and converge fast. 4. *What are the observed benefits of using synchronized versus fixed CFG modes?* - As discussed in Section 2 (lines 152-158), the CFG-sync mode better preserves the sampling diversity and the compatibility of the original diffusion models, with occasional failures in generating complex structures, while CFG-fixed trades off these properties in exchange for fewer failure cases. 
--- Rebuttal Comment 1.1: Comment: Thank you to the authors for their responses. I would like to keep my original score.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
FactorizePhys: Matrix Factorization for Multidimensional Attention in Remote Physiological Sensing
Accept (poster)
Summary: The paper introduces FactorizePhys, which utilizes Non-negative Matrix Factorization (NMF) to decompose voxel embeddings. By integrating FSAM into both 3D-CNN and 2D-CNN architectures, FactorizePhys can estimate blood volume pulse signals from video frames. Through evaluations and comparisons with state-of-the-art rPPG methods, the effectiveness of FSAM and FactorizePhys is demonstrated. Strengths: 1. Introducing the Factorized Self-Attention Module (FSAM) for computing multi-dimensional attention from voxel embeddings. 2. Evaluation of FSAM and FactorizePhys against state-of-the-art rPPG methods. 3. Integration of FSAM into existing 2D-CNN-based and 3D-CNN-based rPPG architectures to demonstrate its versatility. Weaknesses: 1. The motivation to use non-negative matrix factorization is unclear. Why can the factorized matrix be used as the attention? The authors should give more insight into non-negative matrix factorization and show the attention maps to demonstrate the effectiveness. 2. There are other matrix factorization methods such as SVD and QR. Why is non-negative matrix factorization chosen? 3. The factorization needs to solve an optimization problem, which requires gradient descent steps. In line 216, the authors only use a one-step gradient for matrix factorization. Is the one-step gradient enough to achieve satisfactory factorization? 4. The method part is not clearly illustrated; e.g., Equation 5 is confusing, and the symbols in the equation are not well explained. The symbols mentioned in the main text are not clearly shown in the main figure. Technical Quality: 2 Clarity: 2 Questions for Authors: Please check the weakness part. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors mentioned the limitations of the work and proposed future directions such as time series estimation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for the valuable comments and insightful questions. We will revise the manuscript to reflect our responses. For your comments on visualization, we refer you to our global response. We address the rest of the comments here: * W1: The motivation to use non-negative matrix factorization (NMF) as the attention: Factorization or matrix decomposition results in a low-rank approximation, and among such methods, NMF is a highly researched area [1-4]. Among prior works, the Hamburger module [7], which implemented NMF, demonstrated its effectiveness in capturing global context, specifically for semantic segmentation and conditional image generation tasks. The Hamburger module [7] outperformed transformer-based networks, though it was not implemented as an attention mechanism, and the embeddings were limited to spatial and channel dimensions. Further, there have been efforts to reduce the redundancy in feature embeddings [6]. These prior works inspire us to investigate whether a low-rank approximation of embeddings (intuitively, a type of squeeze operation [5] that squeezes information without reducing the dimension of the embeddings), when multiplied with the original embeddings (similar to excitation), forms an effective attention mechanism. The proposed FSAM module builds upon the Hamburger module [7] and extends NMF-based factorization to multi-dimensional embeddings having spatial, temporal and channel dimensions. Optimizing a network having a factorization module implemented as an attention mechanism can influence the network to increase the saliency of the relevant features, such that the factorized approximation of the embeddings retains these features while discarding the less salient ones. As factorization can handle multiple dimensions simultaneously with an appropriate transformation or mapping, it has unique potential as a multi-dimensional attention mechanism, unlike existing candidates that reduce one or more dimensions of the embeddings to compute attention. 
* W2: NMF vs. SVD, QR: The rationale for choosing NMF over other decomposition techniques is as follows: i) The only constraint NMF poses on the matrix, its vectors and the features is non-negativity, whereas SVD, QR and Vector Quantization (VQ) assume statistical independence or orthogonality between vectors of the approximated matrix. For deep-layer embeddings, constraints of orthogonality or statistical independence may not be relevant, and therefore such decomposition methods are not well suited. ii) Owing to the non-negativity or purely additive constraints, NMF effectively learns a parts-based representation, which further enhances the interpretability of the learned features [3]. * W3: Gradient steps: We fully agree with the reviewer that factorization requires solving an optimization problem. However, we would like to clarify in the manuscript that "one-step gradient" refers to a single multiplicative-update step per iteration of the factorization, as proposed in the Hamburger module [7]. Empirically we find 4 to 6 iterations to be sufficient to obtain the desired level of approximation. It is to be noted that our objective is not to achieve a perfect fit with near-zero approximation error. On the contrary, factorization serves as a better attention mechanism when it optimally approximates only the salient features and discards the less relevant ones. * W4: Methods section, main figure and Equation 5: We will revise the methods section to sufficiently clarify the use of the symbols in the equations, along with a clearer depiction of the symbols in the main figure. Regarding Equation 5, it specifies the negative Pearson correlation as the objective function for the downstream task of rPPG signal estimation. We noticed the typo in the description, and further understand that the use of "i" and "T" in Equation 5 can be clarified for better readability.
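The point under W3 above, that each iteration takes exactly one multiplicative-update step and that only a handful of iterations are needed for an intentionally imperfect fit, can be illustrated with a toy numpy sketch (matrix size, rank, and data are illustrative assumptions, not the authors' settings):

```python
import numpy as np

rng = np.random.default_rng(1)
V = np.abs(rng.standard_normal((160, 392)))  # toy non-negative embedding matrix
rank, eps = 1, 1e-8
W = rng.random((160, rank)) + eps
H = rng.random((rank, 392)) + eps

errors = []
for it in range(6):
    # one multiplicative update of H and W per iteration (the "one-step gradient")
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    errors.append(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
    print(f"iteration {it + 1}: relative error {errors[-1]:.4f}")
```

The relative error drops quickly and then plateaus at a non-zero residual, which matches the rebuttal's argument that a perfect fit is not the goal: the residual is exactly what the factorized attention discards.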
To clarify here, "T" refers to the total number of samples of the estimated and ground-truth signals, which is also equal to the total number of input video frames. 1. Y.-X. Wang and Y.-J. Zhang, "Nonnegative Matrix Factorization: A Comprehensive Review," doi: 10.1109/TKDE.2012.51. 2. Lee, D., & Seung, H. S. (2000). Algorithms for non-negative matrix factorization. Advances in Neural Information Processing Systems. 3. Lee, Daniel D., and H. Sebastian Seung. (1999). "Learning the parts of objects by non-negative matrix factorization." Nature 401.6755. 4. Gan, Jiangzhang, et al. "Non-negative matrix factorization: a survey." The Computer Journal 64.7 (2021): 1080-1092. 5. Hu, Jie, Li Shen, and Gang Sun. "Squeeze-and-excitation networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. 6. Han, Zongyan, Zhenyong Fu, and Jian Yang. "Learning the redundancy-free features for generalized zero-shot object recognition." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020. 7. Z. Geng, M.-H. Guo, H. Chen, X. Li, K. Wei, and Z. Lin, "Is Attention Better Than Matrix Decomposition?", in International Conference on Learning Representations, 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. My main concern still exists, which is how the NMF is related to the rPPG task. I think NMF is useful for general tasks, but the relation to rPPG is not that strong. Therefore, I will keep the score. --- Reply to Comment 1.1.1: Comment: We thank you for your comment and your time in helping improve our manuscript. While preparing our previous response, we did not clearly identify your main concern, "how the NMF is related to the rPPG task", which we address below. To begin with, we fully agree that NMF is useful for general tasks or, to put it in other words, it is agnostic to the downstream task.
Here, we would like to highlight that the rPPG estimation task is closely related to a family of spatial-temporal tasks, including video object tracking, video segmentation and video action recognition, from the perspective of learning spatial-temporal features. Therefore, the rPPG estimation task benefits from an attention mechanism like any other spatial-temporal downstream task. As described in our response to Reviewer a4X8, the proposed approach jointly computes multi-dimensional attention using NMF to gain a modeling advantage, unlike most current methods that compute attention disjointly across different dimensions; the reviewer highlighted this as a strength. Further insights into the relevance of the proposed method for the rPPG estimation task can be drawn from the visualization of the learned attention maps that we provided in Figure 4 of the PDF submitted along with the global response. In these learned attention maps, a higher cosine-similarity score can be observed for the model trained with the proposed FSAM module. A higher cosine-similarity score between the temporal dimension of the embeddings and the ground-truth PPG signal indicates higher saliency of temporal features. The spatial spread of high cosine-similarity scores highlights that the learned attention is selective to the regions of the face that have exposed skin surface (where the rPPG signal can be found). This provides clearer evidence that the model trained with the FSAM module can appropriately pick the spatial features that are the sources of the desired temporal signal. Thus, the presented visual comparison offers greater insight into the effectiveness of the joint computation of multi-dimensional attention for the rPPG estimation task.
On top of the joint computation of multi-dimensional attention using the proposed FSAM module, the transformation of embeddings into the factorization matrix, as formalized in Equation 6 of the submitted manuscript, plays a key role in the significant performance gains observed in our main, cross-dataset evaluation, which highlights superior generalization ability in the given task. Specifically, as per Equation 6, the temporal dimension of the embeddings is mapped to the vectors of the factorization matrix, and the remaining dimensions form the features of the factorization matrix. This mapping enables an explainable selection of the rank of factorization in the rPPG extraction case. Across the entire facial region, we expect only a single rPPG source signal, and similarly we expect this to be represented within the embeddings. Performing factorization of the embeddings with an optimally chosen rank (=1) is therefore highly suited to the given rPPG extraction task. Through the overall optimization of the network in the presence of a rank-1 approximation of the embeddings, the model learns to increase the saliency of the most relevant spatial-temporal features, which explains the high effectiveness of the proposed method in rPPG estimation. We hope that the above explanation addresses the concern on "how the NMF is related to the rPPG task."
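The mapping described above, temporal dimension to the vectors of the factorization matrix and the remaining dimensions to its features, followed by a rank-1 factorization, can be sketched as follows. This is a hypothetical numpy illustration of the Equation-6 idea, with made-up shapes; the actual FSAM implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy voxel embedding: (channels, frames, height, width); shapes are illustrative.
C, T, H, W = 8, 160, 7, 7
emb = np.abs(rng.standard_normal((C, T, H, W)))

# Map the temporal dimension to the rows ("vectors") of the factorization
# matrix; channel and spatial dims become its columns ("features").
M = emb.transpose(1, 0, 2, 3).reshape(T, C * H * W)

# Rank-1 NMF via multiplicative updates: a single temporal basis vector,
# matching the expectation of one underlying rPPG source across the face.
w = rng.random((T, 1)) + 1e-4
h = rng.random((1, C * H * W)) + 1e-4
for _ in range(6):
    h *= (w.T @ M) / (w.T @ w @ h + 1e-8)
    w *= (M @ h.T) / (w @ h @ h.T + 1e-8)

print(w.shape, h.shape)  # (160, 1) (1, 392)
```

Here `w` plays the role of the single temporal source vector and `h` weights how strongly each spatial-channel feature expresses it, which is why a rank of 1 is an explainable choice for this task.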
Summary: The paper presents a novel attention block, FSAM, devised for handling spatio-temporal data. It is benchmarked against a range of SOTA architectures on a suitable selection of different datasets, and found to perform strongly. Strengths: The paper is well presented and clearly structured. There is a good degree of novelty in the proposed architecture, and the application is of high interest. The experiments are conducted over a well-selected range of real-world datasets. A good level of detail is provided on the methodology, and the performance of the proposed approach appears strong. Weaknesses: While the paper's experimental results are promising, they do not appear to be accompanied by any uncertainty estimates. Including these is crucial to allow the reader to draw meaningful conclusions from the results. I would refer the authors to question 6 in the checklist at the end of the paper. The answer given to the question 'does the paper report [...] statistical significance of the experiments' is given as "N/A", while the guidelines advise that this answer indicates that the paper does not include any experiments. Also, while it is acceptable to focus on the intra-dataset performance within the main text, it is crucial to at least include the regular results in the Appendix. A couple of minor typos: On line 196: "device a" -> "devise a". On line 236: "It's" -> "Its". Technical Quality: 3 Clarity: 3 Questions for Authors: Is FSAM anticipated to have significantly wider impact on spatio-temporal applications, beyond rPPG? How do the different candidate methods compare in their scalability to higher temporal and spatial resolutions? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There is a discussion on potential societal impact in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for acknowledging the novelty of our contributions and the significance of the reported results. We highly value your suggestions, based upon which we will further revise the manuscript. For your comments on statistical significance, uncertainty estimates and scalability, we request you to refer to our global response. We address the rest of the comments here: * Regular results in the Appendix: We would like to clarify that our main results are for cross-dataset evaluation, which offers better insights into real-world performance on unseen data distributions. If we understand your suggestion correctly, it relates to reporting within-dataset performance. Based on this interpretation, we present intra-dataset/within-dataset results in Table 7 in the PDF attached to the global response. * A couple of minor typos: On line 196: "device a" -> "devise a"; On 236: "It's" -> "Its": Thank you for spotting the typos. We will address these and will thoroughly proof-read the manuscript for its revision. * Is FSAM anticipated to have significantly wider impact on spatio-temporal applications, beyond rPPG? Among the existing works, NMF as implemented in the Hamburger module [1] demonstrated its effectiveness in capturing global context, specifically for the semantic segmentation and conditional image generation tasks. While the Hamburger module [1] outperformed transformer-based networks, its authors did not implement their method as an attention mechanism, and the embeddings were limited to spatial and channel dimensions. FSAM builds upon it and investigates NMF-based factorization of embeddings as multi-dimensional attention with an additional temporal dimension. In principle, the factorization of embeddings results in a low-rank approximation of the embeddings.
Optimizing a network with a factorization module implemented as an attention mechanism influences the network to increase the saliency of the most relevant features, such that the factorized approximation of the embeddings retains these features while discarding the less salient ones. As factorization can handle multiple dimensions simultaneously with an appropriate transformation or mapping, this underlines its unique potential as a multi-dimensional attention mechanism, unlike existing candidates. End-to-end estimation of the rPPG signal from facial video frames represents one of the more challenging spatial-temporal tasks, as it requires the network to learn to pick the spatial features having the desired temporal signature, while discarding the variance related to head motion, illumination and skin tones. While this work is limited to the evaluation of FSAM for the spatial-temporal task of estimating the rPPG signal, based on the consistent performance gains across datasets, and the versatility of the module in 3D-CNN and 2D-CNN models, we envisage FSAM to have a wider impact on spatio-temporal applications such as video segmentation, video object tracking, and action recognition, which we consider as future extensions of this work. 1. Z. Geng, M.-H. Guo, H. Chen, X. Li, K. Wei, and Z. Lin, "Is Attention Better Than Matrix Decomposition?", in International Conference on Learning Representations, 2021. --- Rebuttal 2: Comment: Thank you for responding thoughtfully to my concerns. Overall my main concerns have been addressed so I shall update my score accordingly. > If we understand your suggestion correctly, it relates to reporting within-dataset performance. Yes that is correct, thank you for including these. Regarding the new Table 7: - Some of the STD values seem very large; for example, the RMSE STD is around ten times larger than the mean. Is this a typo or just a feature of the dataset?
- For the MAE values in "Performance evaluation on PURE", it seems as though PhysNet should be bolded since it has 2.78, vs 2.83 which is currently in bold. --- Rebuttal Comment 2.1: Comment: Thank you for finding our responses satisfactory and for increasing the score. Below we respond to your further queries: * Very large RMSE STD: Thanks for spotting this error. We investigated it thoroughly and discovered an edge-case scenario of one participant in the PURE dataset, with ground-truth HR=46. The data of only this participant was driving the RMSE STD very high. We first inspected the estimated rPPG signals for all participants, which were found to be well aligned with the ground-truth BVP signal. The root cause was the low cut-off frequency (0.75 Hz) of the band-pass filter as implemented in rPPG-Toolbox [1], upon which our code is built. This filter is applied both to the ground-truth BVP signal and the estimated rPPG signals before computing HR. This low-cut value impacts the FFT-peak-based computation of HR for HR=46, as the main peak is suppressed and the results are driven by the harmonics, leading to the observed large errors. We changed this to 0.5 Hz to accommodate low-HR cases and re-evaluated ours as well as SOTA methods on the PURE dataset, first for the within-dataset case. We thoroughly verified that this change in the low cut-off frequency did not alter any outcome for HR higher than 46, which is the case with the other datasets. Below, we present the revised within-dataset results for the PURE dataset, where we observe the expected RMSE STD range for all models.
| Model | Attention Module | MAE (HR) ↓ | | RMSE (HR) ↓ | | MAPE (HR) ↓ | | Corr (HR) ↑ | | SNR (dB, BVP) ↑ | | MACC (BVP) ↑ | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | Mean | STD | Mean | STD | Mean | STD | Mean | STD | Mean | STD | Mean | STD |
| PhysNet | - | 0.88 | 0.50 | 2.29 | 4.17 | 1.25 | 0.63 | 0.99 | 0.03 | 22.25 | 1.99 | 0.92 | 0.01 |
| PhysFormer | TD-MHSA* | 0.98 | 0.49 | 2.31 | 4.17 | 1.41 | 0.62 | 0.99 | 0.03 | 20.85 | 1.87 | 0.89 | 0.01 |
| EfficientPhys | SASN | 0.68 | 0.48 | 2.13 | 4.16 | 0.86 | 0.53 | 0.99 | 0.03 | 17.72 | 1.81 | 0.84 | 0.02 |
| EfficientPhys | FSAM (Ours) | 0.88 | 0.50 | 2.29 | 4.17 | 1.25 | 0.63 | 0.99 | 0.03 | 17.42 | 1.87 | 0.83 | 0.02 |
| FactorizePhys (Ours) | FSAM (Ours) | 0.78 | 0.50 | 2.25 | 4.18 | 1.06 | 0.62 | 0.99 | 0.03 | 21.13 | 2.05 | 0.89 | 0.01 |

To ensure the validity of our main results on PURE, we compared ours and the best-performing SOTA, trained using UBFC-rPPG.

| Model | Attention Module | MAE (HR) ↓ | | RMSE (HR) ↓ | | MAPE (HR) ↓ | | Corr (HR) ↑ | | SNR (dB, BVP) ↑ | | MACC (BVP) ↑ | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | Mean | STD | Mean | STD | Mean | STD | Mean | STD | Mean | STD | Mean | STD |
| EfficientPhys | SASN | 3.39 | 1.58 | 12.59 | 103.66 | 3.65 | 1.35 | 0.84 | 0.07 | 10.27 | 1.17 | 0.69 | 0.02 |
| FactorizePhys (Ours) | FSAM (Ours) | 0.54 | 0.21 | 1.70 | 1.56 | 0.77 | 0.31 | 1.00 | 0.01 | 15.18 | 0.96 | 0.80 | 0.02 |

The large RMSE STD error of the SOTA method here is due to the low SNR of the estimated rPPG signal, affecting waveform morphology, and thereby the computed HR for the edge cases.
As the SNR of rPPG signals estimated by FactorizePhys is higher, its results are further improved. This further underlines the superior cross-dataset generalization that our proposed model with the FSAM module is able to achieve. Given these observations, we will further revise all cross-dataset results for evaluation performed on the PURE dataset. * Bold for 2.78 vs 2.83: Apologies for this typo. We will present the revised results with correctly bolded entries in the revised manuscript. [1] Liu, Xin, et al. "rPPG-Toolbox: Deep remote PPG toolbox." NeurIPS 2024. --- Reply to Comment 2.1.1: Comment: In continuation of the previous comment, below we share the revised main results (after changing the low cut-off for band-pass filtering) for all the models trained using the UBFC-rPPG dataset and evaluated on the PURE dataset.

| Model | Attention Module | MAE (HR) ↓ | | RMSE (HR) ↓ | | MAPE (HR) ↓ | | Corr (HR) ↑ | | SNR (dB, BVP) ↑ | | MACC (BVP) ↑ | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | Mean | STD | Mean | STD | Mean | STD | Mean | STD | Mean | STD | Mean | STD |
| PhysNet | - | 7.70 | 2.16 | 18.30 | 113.63 | 13.15 | 3.87 | 0.66 | 0.10 | 11.48 | 1.12 | 0.74 | 0.02 |
| PhysFormer | TD-MHSA* | 7.47 | 2.18 | 18.33 | 131.60 | 11.72 | 3.47 | 0.65 | 0.10 | 9.22 | 1.12 | 0.68 | 0.02 |
| EfficientPhys | SASN | 3.39 | 1.58 | 12.59 | 103.66 | 3.65 | 1.35 | 0.84 | 0.07 | 10.27 | 1.17 | 0.69 | 0.02 |
| EfficientPhys | FSAM (Ours) | 2.10 | 1.19 | 9.37 | 76.23 | 2.60 | 1.18 | 0.92 | 0.05 | 11.25 | 1.07 | 0.71 | 0.02 |
| FactorizePhys (Ours) | FSAM (Ours) | 0.54 | 0.21 | 1.70 | 1.56 | 0.77 | 0.31 | 1.00 | 0.01 | 15.18 | 0.96 | 0.80 | 0.02 |

\* TD-MHSA: Temporal Difference Multi-Head Self-Attention \cite{yu2022PhysFormer}; SASN: Self-Attention Shifted Network \cite{liu2023EfficientPhys}; FSAM: Proposed Factorized Self-Attention Module

We further notice the large RMSE STD errors for
SOTA methods, attributed to the low SNR of the estimated rPPG signal, affecting waveform morphology and thereby the computed HR for the edge cases. As the SNR of rPPG signals estimated by FactorizePhys is higher, it significantly outperforms on edge cases and unseen datasets. The superior cross-dataset generalization of our proposed FSAM module is further evidenced by these revised results.
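The FFT-peak HR computation and the low cut-off issue discussed in this thread can be illustrated with a toy numpy sketch. This is not the rPPG-Toolbox pipeline (which applies an actual Butterworth band-pass filter); the sampling rate, signal, and `fft_peak_hr` helper are illustrative assumptions. Restricting the peak search to a frequency band stands in for filtering, which is enough to show why a low cut-off near the fundamental of HR=46 pushes the estimate onto the harmonic:

```python
import numpy as np

fs = 30.0                     # sampling rate (fps), illustrative
t = np.arange(0, 60, 1 / fs)  # 60 s window
hr_true = 46                  # edge-case heart rate from the PURE dataset
f0 = hr_true / 60.0           # ~0.767 Hz fundamental
# Synthetic pulse: fundamental plus a strong first harmonic
sig = np.sin(2 * np.pi * f0 * t) + 0.6 * np.sin(2 * np.pi * 2 * f0 * t)

def fft_peak_hr(x, fs, low_cut, high_cut=3.0):
    """HR (bpm) from the largest FFT peak inside [low_cut, high_cut] Hz."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spec = np.abs(np.fft.rfft(x))
    band = (freqs >= low_cut) & (freqs <= high_cut)
    return 60.0 * freqs[band][np.argmax(spec[band])]

# A 0.5 Hz low cut keeps the 0.767 Hz fundamental comfortably in band.
print(f"{fft_peak_hr(sig, fs, low_cut=0.5):.1f} bpm")
# Once the fundamental is excluded (as a real filter near its 0.75 Hz
# cut-off would effectively do), the harmonic wins and HR doubles.
print(f"{fft_peak_hr(sig, fs, low_cut=0.8):.1f} bpm")
```

With the 0.5 Hz cut the peak lands on the fundamental (~46 bpm); with the fundamental suppressed the estimate jumps to the harmonic (~92 bpm), reproducing the failure mode the rebuttal describes.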
Summary: The paper proposed the Factorized Self-Attention Module (FSAM), which jointly computes multi-dimensional attention across spatial, temporal, and channel dimensions using non-negative matrix factorization (NMF). FSAM is integrated into a new end-to-end 3D-CNN architecture called FactorizePhys, designed to estimate blood volume pulse signals from video frames. Extensive experiments on multiple datasets demonstrate FSAM's effectiveness. Strengths: The overall writing of the paper is clear. The proposed method leverages the strengths of matrix factorization to capture global spatial-temporal context effectively. Specifically: - Unlike most current methods that compute attention disjointly across different dimensions, the proposed method jointly computes multi-dimensional attention using non-negative matrix factorization to gain a modeling advantage. - Ablation studies are conducted to assess the different mappings of voxel embeddings in the factorization matrix. - Model complexity is analyzed experimentally. Weaknesses: - Lack of visualization to show how the results of the proposed method are superior (say, an image of the output to show the learned attention). - Lack of variance of the results in the experiment tables. - It is still unclear why and in what situations the proposed method is better than the SOTA methods. Say, pick some cases from the model output that show that FactorizePhys is better than the others. Technical Quality: 3 Clarity: 3 Questions for Authors: - Are the methods sensitive to the hyperparameters? Lack of sensitivity analysis. - The authors could consider visualizing more about the outputs of the models. - Lack of scalability analysis of the methods, especially when NMF is adopted. For large datasets, it is valuable to know whether the method is scalable compared to other methods.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are mentioned in the paper but could be highlighted more clearly in the discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for acknowledging the strengths of our contributions and providing constructive feedback and suggestions. We will revise the manuscript to reflect the responses to your comments that we describe here. For your comments on visualization, variance and scalability, we request you to refer to our global response. We address the rest of the comments here: * Describing the cases when the proposed method is better than the SOTA methods: We will revise the discussion section to describe the key findings as follows: For all reported evaluation metrics, the proposed method outperforms SOTA methods in its cross-dataset evaluation (revised Table 1 in the PDF) on the PURE and iBVP datasets, for all respective training datasets. This suggests consistent and superior generalization achieved by the proposed method. Cross-dataset evaluation on the UBFC-rPPG dataset further highlights the performance gains of the proposed FactorizePhys model when it is trained using the iBVP and SCAMPS datasets, and at-par performance with SOTA models when trained using the PURE dataset. FactorizePhys uniquely outperforms the SOTA methods on all testing datasets when the SCAMPS dataset is used for training, further stressing the superior generalization achieved from a synthesized dataset with the proposed method. Lastly, the proposed method consistently offers superior SNR and MACC for the estimated rPPG signals, which highlights enhanced reliability. We further report intra-dataset evaluation (Table 7 in the PDF), where the performance of FactorizePhys is at par with the SOTA methods. * Are the methods sensitive to the hyperparameters? Lack of sensitivity analysis: While we follow a fair training-testing strategy as described below and report the statistical significance of the results in the supplementary data, we specifically evaluated sensitivity to the factorization rank, which is the most important hyper-parameter of the proposed FSAM module.
The results shown in appendix Table 4 of the submitted manuscript indicate low sensitivity across a set of low ranks, while the performance drops at higher ranks, which is expected behaviour. High-rank factorization results in a low matrix approximation error, which does not remove redundancy in the embeddings and thereby does not serve as an effective attention mechanism. The factorization rank is therefore an important hyperparameter that needs to be adjusted based on the architecture, the placement of the module within the architecture, and the downstream task. For the rPPG signal extraction task, we expect only one underlying signal source, and therefore rank-1 factorization of the temporal vectors offers optimal performance. Fair training-testing strategy: All model-specific hyper-parameters were maintained as provided by the respective SOTA methods, while the training-pipeline-related hyper-parameters were kept consistent for training all the models. The training-pipeline-related hyper-parameters that we kept consistent include the pre-processing steps for images and labels, batch size, number of epochs, learning rate, scheduler and optimizer. However, we noticed that the number of epochs, which we initially kept at 30, resulted in extremely low training loss for all SOTA methods, affecting their cross-dataset generalization. The proposed method did not show such an overfitting problem and was not found sensitive to the number of training epochs after convergence. This offered an advantage to the proposed method. For fair evaluation against SOTA methods, we revised the training, validation and testing strategy, in which, instead of using the validation-set-based best epoch from 30-epoch training, we re-trained all models for 10 epochs, similar to a recent work [1], and used the last epoch for cross-dataset as well as within-dataset evaluation. Table 6 in the PDF highlights that the updated strategy offers fair evaluation against the SOTA methods.
Revised Table 1 and the new appendix Table 7 provide detailed results for cross-dataset and regular (within-dataset) evaluation respectively, with statistical variance reported for all the evaluation metrics. * The authors could consider visualizing more about the outputs of the models: We appreciate this feedback, and we will add a figure in the revised manuscript to show the learned attention map as well as the output of the models. Here, in Figure 5 of the PDF, we have added a sample plot that compares the output rPPG signals of the proposed FactorizePhys (in orange) and EfficientPhys (the best-performing SOTA model, in blue) with the ground-truth BVP signal (GT, in black). We have provided a large sample set of such comparisons in the supplementary data shared with the ACs. To fit more plots in the supplementary data, we used higher JPEG compression for the figures, due to which images may appear noisy, for which we apologize. * The limitations are mentioned in the paper but could be highlighted more clearly in the discussion section: We appreciate the reviewer's comment. The following are a few limitations which we would like to clearly highlight in the discussion section: (a) While the proposed FSAM module has been shown to be an effective spatial-temporal attention for the rPPG signal extraction task, its efficacy as an attention mechanism for other spatial-temporal tasks, such as video understanding, video object tracking, and video segmentation, needs to be further investigated. (b) For signal extraction tasks, different forms of constrained NMF were not investigated. Specifically, the variants of NMF that put temporal or frequency constraints on the time-series vectors may offer more effective attention, since such constraints can be chosen based on the characteristics of the ground-truth signal to be estimated. 1. C. Zhao, H. Wang, H. Chen, W. Shi and Y.
Feng, "JAMSNet: A Remote Pulse Extraction Network Based on Joint Attention and Multi-Scale Fusion," 2023, doi: 10.1109/TCSVT.2022.3227348. --- Rebuttal Comment 1.1: Comment: The authors answer most of my questions. I especially appreciate the effort of putting in more visualization. But the concern "It is still unclear why and in what situation the proposed method is better than the SOTA methods" is still not answered well. I understand the experimental results are better than the SOTA but it is still not clear how the proposed method could gain significant improvement. If it is due to the use of joint computation of multi-dimensional attention, the case study should compare the difference (either visualization or some deep explanation of results) of different approaches that use attention as well. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for appreciating our efforts in preparing the visualization of the learned attention maps. We also apologize for not making it sufficiently clear why the proposed FSAM module achieves significant performance gains. Here we would like to provide further insights. Firstly, aligned with your suggestion, through our visualization presented in Figure 4 of the PDF submitted with the global response, we compare embeddings of the identical base 3D-CNN model, trained without and with the FSAM module (which performs joint computation of multi-dimensional attention). In these visualized learned attention maps, a higher cosine-similarity score can be observed for the model trained with the proposed FSAM module. A higher cosine-similarity score between the temporal dimension of the embeddings and the ground-truth PPG signal indicates higher saliency of temporal features. A further look at the spatial spread of high cosine-similarity scores highlights that the learned attention is selective to the regions of the face with exposed skin surface (where the rPPG signal can be found).
This provides clearer evidence that the model trained with the FSAM module can appropriately pick the spatial features that are the sources of the desired temporal signal. Thus, the presented visual comparison offers greater insight into the effectiveness of the joint computation of multi-dimensional attention. To compare the performance against existing attention modules, we picked the best-performing SOTA (EfficientPhys, which implements SASN attention) based on our main results and adapted our proposed FSAM for the 2D-CNN-based EfficientPhys architecture. We then replaced the SASN module with the proposed FSAM module. Our main (cross-dataset) results in Table 1 of the PDF and the within-dataset results (Table 7) of the PDF in the global response provide a direct comparison of the performance of FSAM and SASN in the EfficientPhys architecture. While EfficientPhys with the FSAM module performs at par with EfficientPhys having the SASN attention module, it can be inferred that a 2D-CNN-based architecture is not able to leverage the full potential of the joint spatial-temporal attention, which is achieved in the proposed 3D-CNN-based architecture. We understand that, in addition to the joint computation of multi-dimensional attention using the proposed FSAM module, the transformation of embeddings into the factorization matrix, as formalized in Equation 6 of the submitted manuscript, plays a key role in the significant performance gains observed in our main, cross-dataset evaluation, which highlights superior generalization ability. Specifically, as per Equation 6, the temporal dimension of the embeddings is mapped to the vectors of the factorization matrix, and the remaining dimensions form the features of the factorization matrix. This mapping enables an explainable selection of the rank of factorization in the rPPG extraction case. Across the entire facial region, we expect only a single rPPG source signal, and similarly we expect this to be represented within the embeddings.
Performing factorization of the embeddings with an optimally chosen rank (=1) is therefore highly suited to the given rPPG extraction task. Through the overall optimization of the network in the presence of a rank-1 approximation of the embeddings, the model learns to increase the saliency of the most relevant features, which explains the high effectiveness of the proposed method. We will revise the results and discussion sections to offer these deeper insights. We hope the above response addresses your main concern.
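The cosine-similarity score map used for the attention-map visualization discussed in this thread can be sketched as follows. The shapes, the synthetic reference trace, and the `cosine_map` helper are illustrative assumptions, not the authors' exact visualization code:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy spatial-temporal embedding for one channel: (frames, height, width),
# and a ground-truth PPG reference; data and shapes are illustrative only.
T, H, W = 160, 7, 7
emb = rng.standard_normal((T, H, W))
gt_ppg = np.sin(2 * np.pi * 1.2 * np.arange(T) / 30.0)  # ~72 bpm reference at 30 fps

def cosine_map(emb, ref, eps=1e-8):
    """Cosine similarity between each voxel's temporal vector and the
    reference signal, yielding one score per spatial location."""
    ref = ref - ref.mean()
    ref /= np.linalg.norm(ref) + eps
    v = emb - emb.mean(axis=0)
    v /= np.linalg.norm(v, axis=0) + eps
    return np.einsum("thw,t->hw", v, ref)

score_map = cosine_map(emb, gt_ppg)
print(score_map.shape)  # (7, 7), values in [-1, 1]
```

Color-coding such a map per channel reproduces the kind of tile visualization described above: skin regions carrying the pulse should score high, background voxels near zero.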
Rebuttal 1: Rebuttal: We would like to sincerely thank all reviewers for their valuable feedback that helps strengthen our contributions. We respond to the common comments here, while responding individually to the rest. * a4X8, ECso: Visualization of the attention maps to demonstrate the effectiveness of the proposed FSAM module: We fully agree that visualization of the learned attention can provide insight into the efficacy of the proposed method. In Figure 4 of the PDF, we have added the same for the network trained without and with the proposed FSAM module. Each tile represents a channel of the 4D embedding (spatial, temporal and channel), and the figure shows all the channels of an embedding layer. For each channel, we compute the cosine similarity between the temporal dimension of a spatial-temporal voxel and the ground-truth signal. The resultant color-coded cosine-similarity score map offers a more intuitive visualization of the learned spatial-temporal attention, compared to that used in existing works [1] that disjointly present spatial and temporal attention. Given this comment, we will introduce a new figure in the results section of the revised manuscript, along with a brief description of our approach to generating the visualization of the learned attention map. Further, our supplementary data with more samples of learned embeddings, model outputs of estimated signals, detailed results and code has been shared with the ACs. * a4X8, c6jF: Reporting of statistical significance of the experiments, variance and uncertainty estimates: To report statistical significance, we picked the best-performing SOTA method (EfficientPhys [2]) from the main cross-dataset evaluation. For the SOTA and the proposed method, we performed 10 rounds of training and testing with random seed values. Paired t-tests show that the proposed method outperforms the SOTA with statistical significance for all reported evaluation metrics.
Results are added to an Excel sheet that is part of the supplementary data, which we have recently shared with the ACs. For the rPPG signal estimation task, the total uncertainty measure has been shown to correlate highly with the absolute error between the heart rate computed from the estimated rPPG signal and the ground-truth heart rate [3]. While the authors used the CHROM [5] rPPG method to show this correlation, they clarified that their approach is agnostic to the rPPG method [3] and that their findings generalize to time-series estimation tasks [3, 4]. Based on this understanding, we report the mean absolute error as well as mean values of other relevant rPPG-specific evaluation metrics in the submitted manuscript. We have further revised Table-1 (in the attached PDF) to provide the corresponding variance measures, allowing meaningful conclusions to be drawn from the experiments. We believe that the reported evaluation metrics, along with the variance measures, thoroughly compare the uncertainty estimates of the proposed method with those of the SOTA methods. * a4X8, c6jF: Scalability Analysis: i) Performance of FSAM for higher temporal and spatial resolution: In the new appendix Table-8 (in the PDF), we compare the performance of the proposed module while varying the spatial resolution and temporal dimension. With higher spatial resolution and more temporal frames, we observe a small performance gain. We further conducted repeated tests (10 paired tests with different seed values) that showed a non-significant performance difference between models trained with 160x72x72x3 (low-res) and 240x128x128x3 (high-res) input video frames. Results of the repeated tests for scalability are also added to the supplementary data. 
ii) Overview of computational complexity when FSAM is adapted for higher temporal and spatial resolution: The trainable parameters of the FSAM module depend only on the pre- and post-convolution layers, while the factorization operation is implemented as no-grad, thereby adding no trainable parameters and not directly affecting the gradient flow during optimization of the main network. For both low-res and high-res inputs, the trainable parameters of FSAM are just 328 within our implemented 3D-CNN architecture of 56,200 parameters. For low-res and high-res inputs, the dimensions of the embeddings approximated by FSAM are 160x392 and 240x5000 respectively, both approximated in 4 iterations executed only during the forward pass. For very high-resolution embeddings, one of the high-resolution dimensions can be appropriately split to execute batched optimization for the factorization. FSAM thus adds negligible overhead, making it highly suitable for scaling. Our additional ablation study (in the supplementary data) highlights that models trained with FSAM retain the same performance when deployed without the FSAM module for evaluation. This eliminates the module's inference-time latency and computational overhead. The results reported in the revised Table-1 (in PDF) are obtained from the proposed FactorizePhys model deployed without the FSAM module, which further improves the scalability of the proposed method. 1. C. Zhao, H. Wang, H. Chen, W. Shi and Y. Feng, "JAMSNet: A Remote Pulse Extraction Network Based on Joint Attention and Multi-Scale Fusion," June 2023, doi: 10.1109/TCSVT.2022.3227348. 2. X. Liu et al., "EfficientPhys: Enabling Simple, Fast and Accurate Camera-Based Cardiac Measurement," Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023. 3. R. Song, H. Wang, H. Xia, J. Cheng, C. Li and X. Chen, "Uncertainty Quantification for Deep Learning-Based Remote Photoplethysmography," 2023, doi: 10.1109/TIM.2023.3317379. 4. W. Qian, D. Zhang, Y. Zhao, K. Zheng and J. J. Q. Yu, "Uncertainty Quantification for Traffic Forecasting: A Unified Approach," 2023, doi: 10.1109/ICDE55515.2023.00081. 5. G. de Haan and V. Jeanne, "Robust Pulse Rate from Chrominance-Based rPPG," IEEE Transactions on Biomedical Engineering 60.10 (2013): 2878-2886. Pdf: /pdf/62fea13e76b3db94eed2285220cddf0ca1915470.pdf
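The gradient-free, fixed-iteration rank-1 factorization described in this rebuttal could be sketched as follows. This is a minimal illustration using multiplicative NMF updates; the function name, initialization, and update rule are our assumptions, not the authors' implementation:

```python
import numpy as np

def rank1_factorize(E, n_iter=4, eps=1e-8):
    """Approximate a non-negative matrix E (m x n) by a rank-1 product w @ h,
    running a fixed, small number of multiplicative NMF updates.
    No gradients are involved, mirroring a no-grad forward-pass operation."""
    m, n = E.shape
    rng = np.random.default_rng(0)
    w = rng.random((m, 1)) + eps   # non-negative initialization
    h = rng.random((1, n)) + eps
    for _ in range(n_iter):
        h *= (w.T @ E) / (w.T @ w @ h + eps)   # update row factor
        w *= (E @ h.T) / (w @ h @ h.T + eps)   # update column factor
    return w, h
```

For an exactly rank-1 non-negative matrix, these updates recover the factors in a single pass, which is consistent with a small fixed iteration budget (4 here) sufficing in practice.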
NeurIPS_2024_submissions_huggingface
2024
Conjugated Semantic Pool Improves OOD Detection with Pre-trained Vision-Language Models
Accept (poster)
Summary: This paper proposes a novel method utilizing pre-trained vision-language models (VLMs) to enhance out-of-distribution (OOD) detection. Their core idea lies in constructing a Conjugated Semantic Pool (CSP) to enrich the pool from which OOD labels are drawn. Unlike simply expanding lexicons with synonyms and uncommon words, the CSP leverages modified superclass names that serve as cluster centers for samples with similar properties across categories. This, the authors theorize, not only increases pool size but also amplifies the activation probability of OOD labels while maintaining low label dependence. Extensive experiments demonstrate significant performance improvements over existing approaches. Strengths: Originality: The paper introduces a unique concept, conjugated semantic labels, for enhancing the semantic pool, offering a valuable contribution to OOD detection. Theoretical Underpinnings: The authors provide a robust theoretical framework, including mathematical models and justifications demonstrating how the CSP improves OOD detection. Empirical Validation: Extensive experiments showcase the effectiveness of the proposed method, achieving significant advancements over current state-of-the-art techniques. Generalizability: The method's applicability across diverse VLM architectures suggests broad utility and adaptability. Clear Problem Definition: The paper clearly identifies limitations in existing methods, particularly regarding simple lexicon expansion, and proposes a well-reasoned solution. Weaknesses: 1. Ablation Study & OOD Score Function: The ablation study in Table 11 showcases that competitive performance on iNaturalist and SUN datasets is achievable without the CSP. However, it remains unclear how the proposed OOD score function (designed by the authors) contributes to performance when CSP is not employed. 
Further clarification is needed on: (1) Performance Source: If strong results are achievable without CSP on these datasets, what factors besides the CSP are driving the overall performance improvement observed in the main experiments? (2) OOD Score Function Usage: How can the authors' OOD score function be effectively utilized in scenarios where CSP is not implemented? 2. Synonym Handling and CSP Overlap: The paper acknowledges the limitations of including synonyms in larger lexicons. However, it lacks a deeper exploration of how the CSP specifically addresses this issue. Additionally, the potential for semantic overlap within the CSP itself needs to be addressed. It would be beneficial to see a discussion on: (1) CSP vs. Synonyms: How does the design of the CSP inherently mitigate the problems introduced by synonyms in traditional lexicon expansion? (2) Mitigating Overlap: How do the authors handle potential semantic overlap between elements within the CSP? Are there strategies to ensure distinctiveness between labels? 3. Systematic Analysis on Place and Texture Datasets: A more systematic analysis of the CSP's impact on the Place and Texture datasets, particularly concerning the design principles of semantic pooling, would be valuable. The authors' own experiments suggest that designing effective semantic pooling requires considering multiple factors. Therefore, a more in-depth exploration of the following would be valuable: (1) Place and Texture Performance: How does the CSP specifically improve performance on Place and Texture datasets? Can these improvements be linked to specific design choices within the CSP? (2) Semantic Pooling Design Factors: Building on the authors' findings, what factors are crucial for designing effective semantic pooling mechanisms, especially in the context of Place and Texture datasets? 
Overall, this paper proposes a promising approach to improve OOD detection by innovatively using conjugate semantic labels to expand the semantic pool, which is supported by theoretical analysis and empirical results. However, I still have some questions about the method that the authors need to address. Technical Quality: 4 Clarity: 4 Questions for Authors: See Weakness Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: See Weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments and the positive rating! In the following, your comments are concisely mentioned (due to the length limitation) and then followed by our point-by-point responses. > 1. Ablation Study & OOD Score Function: ... Thank you for the insightful comment! The OOD score function we adopted is proposed by NegLabel (ICLR2024) without modifications. The performance improvement on certain datasets when not using CSP stems from minor adjustments we made to the lexicon used to construct the original semantic pool. Specifically: 1. **Performance Source:** In the baseline presented in Table 11, we removed all adjectives and proper nouns, including personal names, organization names, and quantifiers, from the semantic pool constructed by NegLabel. This led to a higher expected activation probability of negative labels for OOD images in the iNaturalist and SUN datasets. However, this did not result in an overall performance improvement. Compared to the NegLabel method, the performance gains on the iNaturalist and SUN datasets were offset by performance drops on the Places and Texture datasets, resulting in a decrease in overall model performance (25.40% -> 27.39% on FPR95). 2. **OOD Score Function Usage:** Our Conjugated Semantic Pool (CSP) is a method for expanding the semantic pool. Without using CSP, we still have the original semantic pool composed of simple noun labels, which can be used with the OOD score function. We will include additional details in the implementation part of the revised version to ensure clearer explanations. > 2. Synonym Handling and CSP Overlap: ... Thank you for the valuable comment! We provided related discussion in Appendix D.1 (L1143-1152). Specifically: 1. **CSP vs. Synonyms:** When the size of a traditional lexicon reaches a certain point, further expansion inevitably introduces a large number of synonyms, which are unlikely to contribute to performance improvement. 
However, using CSP to expand the semantic pool effectively mitigates this issue without introducing numerous synonyms, since the labels in CSP are centers of property clusters, while the labels in the original semantic pool are centers of category clusters. As discussed in Appendix D.1, our statistical analysis supports this claim: we calculate the average maximum similarity between each label and other labels within the semantic pool—a metric that reflects the proportion of synonymous pairs within the pool and tends to increase monotonically as the semantic pool expands. Our findings indicate that only 3.94% of the original labels find more similar counterparts in the expanded CSP, resulting in a negligible increase in the aforementioned metric from 0.8721 to 0.8726. Consequently, the mutual dependence between the new and original labels is relatively low. 2. **Mitigating Overlap in CSP:** When constructing the conjugated semantic pool, each label has a different adjective and a randomly selected superclass from a set of 14 superclasses. For two labels in the CSP, significant semantic overlap would only occur if both their adjectives and superclass words are synonymous, so the likelihood of such overlap within the CSP is very low. Of course, we acknowledge that CSP, as a method for expanding the semantic pool, cannot eliminate synonyms that already exist within the original pool. However, the main limitation of synonyms is that their activations are highly dependent on each other, making it difficult for synonym-based expansion to improve performance in line with the theoretical guidance of mathematical models. The presence of a small number of synonyms should not significantly harm model performance. > 3. Systematic Analysis on Place and Texture Datasets: ... Thanks for the valuable comment! We provide a more in-depth exploration of these issues and will add them to the revised version: 1. 
**Place and Texture Performance:** We provide related discussion in Appendix D.1 (L1132-1142) and provide supporting data in Table 5, which may be helpful to clarify this issue. Specifically, by linking the performance improvements on the Places and Textures datasets (as well as SUN) to the design choices of CSP, we conclude that because the CSP was constructed using a diverse range of adjectives and superclass words with broad semantic meanings, labels in the CSP can be considered as centers of property clusters. Therefore, the effectiveness of the CSP is based on the implicit assumption that OOD samples exhibit various visual properties. The images in SUN, Places, and Texture primarily depict natural environments and textures, which have strong visual property diversity, leading to relatively larger performance improvements. In contrast, iNaturalist, where images are mostly focused on various plants with limited visual property diversity, does not benefit from the inclusion of the CSP. 2. **Semantic Pooling Design Factors:** Focusing on the following factors of CSP can contribute to performance improvement: (1) When constructing the CSP, avoid reusing adjectives and instead pair each adjective with a randomly selected superclass to minimize semantic overlap between labels. (2) When setting up the superclass set, aim to include a broad semantic range in the set. Generally, more superclasses tend to bring better performance due to increased diversity. (3) As mentioned in our response to Reviewer gvS3’s comment 3, the ratio of negative label selection impacts performance, and selecting negative labels from the CSP achieves the best performance at 40%. **In the end, thanks again for all your time and consideration in reviewing our paper!** --- Rebuttal Comment 1.1: Title: Comment Comment: Thanks for your response. It has solved my concerns. And I have raised my rating.
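The OOD score function discussed in this thread (adopted unchanged from NegLabel, per the rebuttal) can be illustrated with a simplified sketch: the score is the share of temperature-scaled softmax mass assigned to ID labels versus negative labels. The function name, temperature value, and exact form here are our simplification, not NegLabel's full implementation (which additionally uses a grouping strategy):

```python
import numpy as np

def negative_label_score(sim_id, sim_neg, tau=0.01):
    """Simplified negative-label OOD score: the share of temperature-scaled
    softmax mass that falls on ID labels rather than negative labels.
    Higher values indicate the image is more likely in-distribution."""
    logits = np.concatenate([np.asarray(sim_id), np.asarray(sim_neg)]) / tau
    logits -= logits.max()              # numerical stability
    p = np.exp(logits)
    return float(p[: len(sim_id)].sum() / p.sum())
```

Under this formulation, an OOD image that activates many negative labels receives a low score even when its true label is absent from the negative set, which matches the discussion of why CSP labels act as property-cluster centers.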
Summary: This paper presents a novel zero-shot out-of-distribution (OOD) detection pipeline that enhances performance by expanding the semantic pool with a conjugated semantic pool (CSP), which consists of modified superclass names that cluster similar properties from different categories. The approach aims to increase the activation probability of selected OOD labels by OOD samples while ensuring low mutual dependence among these labels. By moving beyond traditional lexicon-based expansion and using the CSP, the paper's method outperforms existing works by 7.89% in FPR95, demonstrating a significant improvement in OOD detection without the need for additional training data. Strengths: 1. The paper is well-written and easy to understand. 2. The supplement to the shortcomings of the NegLabel theory is very enlightening. 3. Extensive experiments demonstrate the superiority of the method. Weaknesses: 1. The primary contribution of this work is its enhancement of the semantic pool expansion issue within NegLabel, representing an incremental step within the NegLabel framework. 2. There is ambiguity regarding the distinction between citations and contributions. For instance, the content related to Lemma 1 in both the preliminary section and the appendix A.1, A.2 seems to originate directly from the original NegLabel paper, yet this source is not explicitly referenced in the main text and appendix. 3. In Section 3.3, the authors regard it as an issue that OOD images may not have their true label (e.g., "white peacock") included among the negative labels. However, this argument appears debatable. As evidenced in NegLabel's visualizations and Figures 6 and 7 of this paper, OOD images may not consistently exhibit the highest similarity to their ground truth (GT) label and could show significant similarity with multiple negative labels. Thus, even if the true GT label of an OOD image is absent from the negative labels, the NegLabel method can still effectively detect OOD data. 4. 
From an implementation standpoint, how does including terms like "smartphone" and "cellphone" as negative labels impact the overall OOD score? The claim in Section 3.2 about the impact of synonym disruption on Lemma 1's independence seems somewhat overstated. 5. I am curious whether, aside from the semantic pool, all other aspects of the technical implementation directly derive from NegLabel. Specifically, does the NegMining Algorithm, the OOD score calculation, and the Grouping Strategy align exactly with those in NegLabel? This aspect requires further clarification. 6. Additionally, is Figure 3 a representation of actual distribution distance data or a schematic created by the authors? Utilizing real experimental data to support the findings presented in Figure 3 would strengthen its validity. If it is a schematic, it may introduce subjective biases from the authors. 7. The paper extensively discusses the limitations of previous methods and the intended outcomes in Section 3, yet provides relatively minimal detail on the actual implementation of the proposed method. It would be beneficial for the authors to expand on the specifics of their proposed approach in Section 3.3. For instance, further elaboration on how superclass selection is conducted and integrated with adjectives would be valuable. 8. According to the description in the paper, the method used to select cluster centers appears crucial. Have the authors considered employing clustering techniques for superclass selection rather than relying solely on manual selection? 9. Another point of confusion arises regarding how variations in corpus size impact the acquisition of semantic pools. Does this necessitate selecting additional superclasses? Technical Quality: 3 Clarity: 3 Questions for Authors: see Cons Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments and positive rating! Below, we briefly restate your comments (due to length limitations) and then provide our point-by-point responses. > 1. Enhancement of semantic pool expansion within NegLabel Thanks for the valuable comment. Our method builds on the NegLabel framework, introducing further innovations. **NegLabel is the most effective paradigm for this task.** In addressing its limitations, we proposed a performance modeling approach that better aligns with experimental results and identified the conditions for effective semantic pool expansion. This allowed us to optimize this step, resulting in satisfactory performance improvements. Therefore, we believe our work **provides meaningful theoretical and performance advancements, showcasing unique innovation**. > 2. Ambiguity between citations and contributions. We originally structured the paper to clearly differentiate contributions, with NegLabel's contributions discussed in the Preliminary section and our own in the Methodology. We apologize for any ambiguity and will clarify this in the revised version. In the Preliminary section, we have cited NegLabel at **L99 and L104** to introduce NegLabel's methodology (L97-103) and its theoretical performance modeling (L104-124). We **will make these references more explicit** and add the statement in Appendix A.1 and A.2: “**The proof in this part is adapted from the appendix of [24]**”. Additionally, we have cited NegLabel [24] a total of **16** times, with "NegLabel" appearing **20** times, throughout the paper. This should make it clear that our method is based on NegLabel, upon which we have introduced further innovations. > 3. Debatable argument about not including true labels. Thank you for the insightful comment! We agree that NegLabel can effectively detect OOD data when the true label (GT) of the OOD image is excluded from the negative labels. However, this isn’t always guaranteed. 
If an OOD image’s most similar classes are not OOD classes, the model may fail to detect it as OOD. For instance, in a scenario where the ID classes are "*hognose snake*", "**basset hound**", and "**Afghan hound**", and the OOD classes are "**toy terrier**", "*green snake*", and "*king snake*" (all are ImageNet-1k categories), if "**toy terrier**" is not included in negative labels, OOD detection for "toy terrier" images might fail, even with a strong zero-shot classifier, because **their most similar classes are within the ID set**. Moreover, even if NegLabel works when the GT is excluded from negative labels, **it doesn't mean including the GT in negative labels is pointless**. Ideally, with a sufficiently powerful VLM, OOD images should exhibit the highest similarity to their GT labels. Including the GT in negative labels **can naturally result in higher OOD scores for OOD images**, improving detection in more challenging scenarios. > 4. Impact of synonym disruption on Lemma 1's independence. Thanks for the valuable comment! Regarding this issue, we suggest considering **an extreme case**: if we duplicate each negative label 10 times, will the model's performance improve simply because the semantic pool is larger? Clearly not, and this contradicts the derivation in NegLabel and our work, which advocates for larger semantic pools. The reason is that this duplication violates the independence assumption in Lemma 1. When there is significant interdependence among Bernoulli variables, actual performance can deviate substantially from the mathematical model. **Adding synonyms has a similar effect to duplicating labels.** However, we do **NOT** consider that using Lemma 1 in NegLabel and our work for performance modeling is flawed, as any mathematical modeling is an approximation of real-world conditions under ideal assumptions. > 5. Question about technical implementations from NegLabel. 
Yes, we adopted the exact same NegMining algorithm, OOD score calculation, and Grouping Strategy as in NegLabel. In the revised version, we will clarify these technical details more explicitly. > 6. Question about Figure 3. Fig 3 is indeed a schematic diagram created by us to illustrate our motivations. Following your suggestion, we plan to generate visualizations using real experimental data to support the findings presented in Fig 3 in the revised version. > 7. More implementation details. We will incorporate more specific implementation details in Sec 3.3 and 5.1 of the paper. Specifically, we manually chose terms with broad semantic meanings, such as "item", "place", "creature", as superclasses. Experiments showed that various superclass sets can all achieve satisfactory performance improvements. The combination of adjectives with superclasses is entirely random: for each different adjective, we randomly select a superclass from the set and pair it with the adjective to form a phrase. > 8. Employing clustering techniques for superclass selection. Thanks for the constructive comment. The clustering technique is not employed for superclass selection due to its **higher complexity and weaker interpretability**. Clustering results vary with factors like the algorithm, sample size, and number of clusters, adding complexity. In contrast, our method introduces no additional hyperparameters, requires no extra data or training, and enhances interpretability by using explicit superclass terms instead of abstract cluster centers. > 9. How variations in corpus size impact semantic pools. Regardless of the corpus size, our method for constructing CSP is consistent, as mentioned in response to comment 7, and the pseudocode is provided in the rebuttal PDF file. Therefore, the size of the CSP corresponds to the number of adjectives in the corpus, but a larger corpus does not necessarily require more superclasses. 
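The random adjective–superclass pairing described in our response to comment 7 can be sketched as follows. The superclass list is abbreviated to three of the 14 words given in the paper, and the function name and seeding are illustrative, not the exact pseudocode from the rebuttal PDF:

```python
import random

# Illustrative subset of the superclass set; the paper uses 14 such words.
SUPERCLASSES = ["item", "place", "creature"]

def build_conjugated_pool(adjectives, superclasses=SUPERCLASSES, seed=0):
    """Form one conjugated label per adjective by pairing it with a
    randomly chosen superclass, e.g. 'fluffy creature'."""
    rng = random.Random(seed)
    return [f"{adj} {rng.choice(superclasses)}" for adj in adjectives]
```

Because no adjective is reused, two CSP labels overlap semantically only if both their adjectives and their superclass words happen to be synonymous, which is the low-overlap property argued for above.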
**In the end, we express our sincerest gratitude for your time and thoughtful consideration!** --- Rebuttal Comment 1.1: Comment: Thanks for your responses and most of my concerns are addressed. I have raised my score.
Summary: This paper explores how to set potential OOD labels to facilitate OOD detection with vision-language models. The paper first conducts a theoretical analysis, revealing that in addition to increasing the negative label space, it is also important to increase the probability of negative labels being activated and to reduce mutual dependence between negative labels. Therefore, the paper proposes a new strategy for constructing negative labels by introducing modified superclass names to construct a conjugated semantic pool (CSP). Experiments are conducted on standard benchmarks. Strengths: 1. The paper theoretically analyzes the criteria for selecting negative labels and proposes a new method for designing negative labels. 2. The motivation is clear, and the method design is straightforward and effective. 3. The proposed method achieves state-of-the-art results on standard benchmarks. Weaknesses: While the overall method design is agreeable, there are some details that need clarification: 1. In Line 276, the authors claim that CSPs overlapping with ID classes will not be selected as negative labels. How is this implemented, and does it significantly impact the results? 2. In Line 321, the authors introduce the superclasses used and mention combining these superclasses with adjectives to form the CSP. Could you provide more details on the adjectives used and the selection process? Is the pairing with superclasses done randomly? 3. How many negative labels are ultimately used? Beyond the 10,000 classes in the negative label pool, how many new negative labels are introduced? Does the number of newly introduced negative labels significantly affect OOD detection performance? 4. In my experiments, on ImageNet, performance initially improves and then declines as the number of negative labels increases; however, in experiments with CIFAR as the ID dataset, performance continuously improves with more negative labels. 
Have the authors observed similar phenomena, and can they provide an explanation? 5. It is suggested to follow Neglabel and refer to the introduced classes as "negative labels" instead of "OOD labels" to avoid confusion with the labels of practical OOD datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our sincere gratitude for your valuable comments. Below, we first reiterate your comments, subsequently providing our detailed responses to each point. > 1. In Line 276, the authors claim that CSPs overlapping with ID classes will not be selected as negative labels. How is this implemented, and does it significantly impact the results? Thank you for the insightful comment! In the revised version, we will clarify the statement in L276 to avoid any ambiguity. Apart from the negative label selection strategy based on inverse similarity provided by NegLabel, **we do NOT have additional implementation to ensure that CSP labels with semantic overlap with ID classes are not selected**. What we intended to convey in L276 is that the labels in the CSP are also selected as negative labels **based on inverse similarity**, thereby reducing the likelihood of CSPs with semantic overlap with ID classes being chosen as negative labels. There are no specific strategies in our method that can completely prevent semantic overlap between CSP and ID classes. > 2. In Line 321, the authors introduce the superclasses used and mention combining these superclasses with adjectives to form the CSP. Could you provide more details on the adjectives used and the selection process? Is the pairing with superclasses done randomly? Thank you for your suggestion! When constructing the conjugated semantic pool, the adjectives we used were unfiltered adjectives from the lexicon adopted ("adj.all.txt" in WordNet). For selecting negative labels, we employed the NegMining algorithm provided by NegLabel. The pseudocode is provided in the rebuttal PDF file. The pairing of superclasses and adjectives was entirely random; that is, for each adjective, we randomly selected a superclass from the set introduced in L321 to form a phrase. > 3. How many negative labels are ultimately used? 
Beyond the 10,000 classes in the negative label pool, how many new negative labels are introduced? Does the number of newly introduced negative labels significantly affect OOD detection performance? Thank you for your question! **We ultimately used 8,492 negative labels, of which 7,005 were simple noun labels from NegLabel, and 1,487 were from our constructed CSP.** The proportion of negative labels selected from the semantic pool was 15%, consistent with NegLabel. However, the total number of noun labels we obtained was smaller than in NegLabel because we removed word categories such as personal names, organization names, and quantifiers, which generally have little utility in activating OOD images. All the ablation experiments were conducted after removing these categories. The number of newly introduced negative labels has a noticeable impact on OOD detection performance. Keeping the selection proportion of noun labels at 15% and gradually adjusting the proportion of negative labels selected from CSP from 2% (198 labels) to 100% (9,916 labels), the model's performance first improved, with FPR95 decreasing from **23.22%** to a best of **16.66%** at a 40% proportion (3,966 labels), and then declined, with FPR95 rising to **18.64%** at a 100% proportion. **The specific experimental results are presented in Table 1 of the Rebuttal PDF file.** In other words, if a separate selection-ratio parameter for the CSP were set, higher performance than reported could be achieved at the 40% setting. However, to avoid parameter tuning, we directly adopted the 15% ratio used by NegLabel. > 4. In my experiments with CIFAR as the ID dataset, performance continuously improves with more negative labels. Have the authors observed similar phenomena, and can they provide an explanation? Thank you for your comment! We had not previously conducted experiments using CIFAR-100 as the ID dataset. 
To explore this issue, we conducted experiments with CIFAR-100 as the ID dataset and iNaturalist, Places, SUN, and Textures as the OOD datasets. Due to the presence of overlapping categories between CIFAR-100 and the OOD datasets, we manually removed the following categories from CIFAR-100: flowers (orchids, poppies, roses, sunflowers, tulips), large man-made outdoor things (bridge, castle, house, road, skyscraper), large natural outdoor scenes (cloud, forest, mountain, plain, sea), and trees (maple, oak, palm, pine, willow). This means that the ID dataset we used contains only 80 distinct categories. **The experimental results are presented in Table 2 of the Rebuttal PDF file.** In brief, we did **NOT** observe a continuous improvement in performance as the number of **total** negative labels increased. The performance trend remains an inverse-V curve, similar to what we observed with ImageNet-1k. However, the peak point of the average performance across the four OOD datasets is indeed **reached at a larger selection ratio** compared to experiments on ImageNet-1k, shifting from about 15% to approximately 50%. Theoretically, continuous performance improvement seems somewhat abnormal. When the selection ratio increases to 100%, the negative labels actually **cover all words in the lexicon without utilizing any information from the ID categories**, making it improbable to achieve optimal results. If this analysis does not fully address your concerns, we welcome you to provide more details and results of your experiments for further discussion. > 5. It is suggested to follow Neglabel and refer to the introduced classes as "negative labels" instead of "OOD labels" to avoid confusion with the labels of practical OOD datasets. Thank you for your valuable suggestion! We agree that referring to the newly introduced classes as "negative labels" is indeed a more appropriate approach. We will incorporate this correction in the revised version. 
**In the end, we express our sincerest gratitude for your valuable suggestions and positive rating!** --- Rebuttal Comment 1.1: Comment: I'm grateful for your response. My concerns have been mostly resolved, and I now have a deeper understanding of your method. I find it simple yet effective. However, it largely builds on NegLabel (by adding adjective+noun combinations beyond separate nouns and adjectives). Therefore, I will keep my current score.
Summary: This paper presents improvements to zero-shot OOD detection methods based on pre-trained vision-language models. The study first models factors that influence the performance of existing pipelines and theoretically derives two necessary conditions for enhancing performance: expanding the OOD label candidate pool and maintaining low interdependence. Subsequently, the author analyzes why simple expansion methods do not meet these conditions and proposes constructing a conjugated semantic pool for expansion. This method meets the theoretical conditions and achieves SOTA performance on several public benchmarks. Strengths: 1. The article presents reliable mathematical modeling and theoretical derivations, and the proposed method is simple and efficient. 2. The proposed method achieves considerable performance improvements over the latest methods. 3. The article features a clear structure and logical coherence, demonstrating a high standard of presentation and writing quality. Weaknesses: 1. The description in the caption of Table 1 is unclear. In the experimental section, the author mentioned that the upper part of Table 1 represents traditional OOD detection methods, while the lower part pertains to detection methods based on pre-trained models like CLIP. However, this distinction is not explicitly marked in the caption, which might lead readers to misunderstand that all methods are based on the CLIP framework. 2. There is a possibility that in some cases, the chosen OOD labels from CSP may also have high similarity to ID images. The authors should provide more explanation on this point. 3. I do not quite understand the significance of the experiments in Table 5. I suggest the authors provide a more detailed discussion. 4. In Table 9, the meaning of "Size" of the corpus is not clear. Does it refer to the total number of words in the dictionary or the number of selected OOD labels? 
Technical Quality: 3 Clarity: 4 Questions for Authors: NA Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Please see the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our sincere gratitude for your valuable suggestions and the positive rating. Below, we first reiterate your comments, subsequently providing our detailed responses to each point. > 1. The description in the caption of Table 1 is unclear. In the experimental section, the author mentioned that the upper part of Table 1 represents traditional OOD detection methods, while the lower part pertains to detection methods based on pre-trained models like CLIP. However, this distinction is not explicitly marked in the caption, which might lead readers to misunderstand that all methods are based on the CLIP framework. Thank you for the comment! We will clarify the distinction between the methods in the upper and lower parts of Table 1 in the caption to avoid any potential misunderstandings. > 2. There is a possibility that in some cases, the chosen OOD labels from CSP may also have high similarity to ID images. The authors should provide more explanation on this point. Thank you for the insightful comment! As with the NegLabel method we followed, we used a negative label mining algorithm to minimize the similarity between selected negative labels and ID images. Specifically, we calculated the similarity between each OOD label in the CSP and the ID label space, selecting the least similar ones as the negative labels for actual use. We have provided the pseudocode for this process in the rebuttal PDF file. Although we cannot completely avoid instances where certain OOD labels may exhibit high similarity with ID images, the strategy we employed effectively decreases the likelihood of such occurrences. > 3. I do not quite understand the significance of the experiments in Table 5. I suggest the authors provide a more detailed discussion. Thank you for your suggestion! 
The experiments in Table 5 primarily aim to demonstrate that, consistent with our established theory, expanding label candidates with the CSP satisfies the requirement derived in Section 3.1: concurrently enlarging the semantic pool size M and the expected activation probability q_2 of OOD labels. Specifically: Since the superclasses used in constructing the CSP typically include broad semantic objects, the property clusters encompass samples from numerous potential OOD categories. Therefore, their centers have much higher expected probabilities of being activated by OOD samples, which brings an increase in q_2. In Table 5, we present the expected softmax scores for a single OOD label from both the original semantic pool and the CSP. These scores, averaged across OOD samples, serve as an approximation of q_2, which is defined as the expected probability of OOD labels being activated by OOD samples. Table 5 reveals that the average score of our CSP across four OOD datasets is distinctly higher than that of the original pool, indicating that this expansion leads to an increase in q_2. > 4. In Table 9, the meaning of "Size" of the corpus is not clear. Does it refer to the total number of words in the dictionary or the number of selected OOD labels? Thank you for your question! Since we used nouns from the lexicon to construct the original semantic pool and adjectives to build the conjugated semantic pool, the "Size" in Table 9 refers to the total number of nouns and adjectives in the lexicon, not the number of selected OOD labels. **In the end, we express our sincerest gratitude for your time and consideration!** --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer QWh5 Comment: Thanks to the authors for the response. I am happy that most of my concerns have been addressed and I have decided to keep the score.
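The negative label mining step described in this exchange (rank candidate labels by their similarity to the ID label space and keep the least similar) can be sketched with placeholder embeddings. This is a minimal NumPy illustration, not the authors' released code; aggregating with the maximum cosine similarity is our assumption, and the actual mining algorithm may use a percentile statistic instead:

```python
import numpy as np

def mine_negative_labels(cand_emb, id_emb, num_select):
    """Keep the candidate labels least similar to the ID label space.

    cand_emb: (N, d) L2-normalized text embeddings of candidate labels.
    id_emb:   (C, d) L2-normalized text embeddings of ID class names.
    Returns indices of the num_select candidates whose maximum cosine
    similarity to any ID label is lowest (reverse-order selection).
    """
    sim = cand_emb @ id_emb.T          # (N, C) cosine similarities
    closeness = sim.max(axis=1)        # closeness to the ID label space
    return np.argsort(closeness)[:num_select]

# Toy 2-D embeddings (hypothetical, for illustration only).
rng = np.random.default_rng(0)
id_emb = np.eye(2)                     # two orthogonal ID "labels"
cand = rng.normal(size=(100, 2))
cand /= np.linalg.norm(cand, axis=1, keepdims=True)
chosen = mine_negative_labels(cand, id_emb, num_select=10)
```

By construction, every selected candidate is no closer to the ID label space than any rejected one, which is the property the rebuttal relies on to keep CSP labels dissimilar from ID images.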
Rebuttal 1: Rebuttal: We thank all the reviewers for their thorough reading of our work and the high-quality feedback they provided. Their comments have been immensely beneficial in enhancing the quality of our manuscript and deepening our own understanding of this field. We have uploaded detailed, point-by-point responses for each reviewer, which we believe address all the concerns raised. Additionally, we have provided the requested algorithm pseudocode and extra experimental results in the uploaded rebuttal PDF file, as referenced in our responses to the corresponding reviewers. We look forward to engaging in further discussion and exchange with the reviewers during the next stage of the review process. Finally, we extend our sincerest thanks to all the reviewers for their time and thoughtful consideration! Pdf: /pdf/385b4886d155e38ad3d2d75d62cc9bc7673b6042.pdf
Dataset source: NeurIPS 2024 submissions (Hugging Face)
Summary: The paper proposes a method for zero-shot out-of-distribution (OOD) detection using an expanded semantic pool of modified superclass names. By leveraging a pre-trained vision-language model, the approach aims to improve OOD classification performance by ensuring low mutual dependence among selected OOD labels. This method outperforms existing techniques by 7.89% in FPR95, highlighting its effectiveness in handling OOD detection tasks. Strengths: 1. The theoretical analysis of expanding NegLabel using the u-function is very clear and intuitive. 2. The experiments are conducted very comprehensively. Weaknesses: 1. The method proposed in this paper lacks innovation; its main framework and basic performance are entirely derived from NegLabel. 2. The preliminary section lacks clear references to the original theory of NegLabel and the proofs in the appendix. 3. The description of the proposed method is very concise, which makes it unclear for readers to understand the details of the method design. It is recommended to provide pseudocode for the algorithm to clarify the method's design. Technical Quality: 2 Clarity: 3 Questions for Authors: see weakness Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: YES Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our sincere gratitude for your constructive comments. Below, we first reiterate your comments, subsequently providing our detailed responses to each point. > 1. The method proposed in this paper lacks innovation; its main framework and basic performance are entirely derived from NegLabel. Thank you for your comments! While we utilized the existing SOTA method, NegLabel, as our primary framework, we believe that our work provides meaningful theoretical and performance advancements, showcasing unique innovation. We sincerely hope the following explanation addresses your concerns, and we welcome any further discussion on this matter. Regarding the methodological framework, we have discussed the theoretical shortcomings in NegLabel's derivation (L1030-1042): NegLabel undertakes a rudimentary theoretical analysis of the correlation between OOD detection performance and the quantity of adopted potential labels, concluding that an increase in selected labels correlates with enhanced performance. However, this conclusion contradicts the actually observed trend. The contradiction arises because NegLabel simply assumes a consistently higher similarity between OOD labels and OOD images compared to ID images, **neglecting that this similarity discrepancy originates from the strategy of reverse-order selection of OOD labels based on their similarity to the ID label space**. As the set of selected OOD labels transitions from "*a small subset of labels with the lowest similarity to the entire ID label space*" to "*the whole semantic pool, which is unrelated to the setting of ID and OOD labels*", the discrepancy in similarity of ID images to OOD labels versus OOD images to OOD labels will progressively diminish until it disappears. **Reviewer p3ao also comments that** "the supplement to the shortcomings of the NegLabel theory is very enlightening". 
Building on this insight, we incorporated this dynamic process into our analysis and optimized the performance modeling of NegLabel, leading to the derivation of the unique mathematical model presented in Section 3.1. Through a series of derivations, this model establishes the conditions for performance improvement through semantic pool expansion: an unequivocal strategy for performance enhancement requires **concurrently increasing the semantic pool size and the expected activation probability of OOD labels** and **ensuring low mutual dependence among the activations of selected OOD labels**. In Section 3.2, based on the optimized mathematical model, we analyzed why simple lexicon expansion fails to yield further performance improvements. In Section 3.3, we proposed a novel method for constructing a conjugated semantic pool, expanding the semantic pool in a manner that satisfies the theoretical conditions mentioned above, and achieved satisfactory performance gains. Therefore, **all the derivations and design choices detailed in Methodology are contributions we have made to this framework**. In terms of basic performance, while NegLabel, as the current SOTA method, indeed demonstrates strong performance, our approach has achieved considerable improvements over NegLabel on the standard ImageNet-1k OOD detection benchmark, with a **1.55% increase in AUROC and a 7.89% reduction in FPR95**. Furthermore, our method significantly outperforms NegLabel in various scenarios, including hard OOD detection tasks (*Table 2*), different CLIP models (*Table 3*), different ID datasets (*Table 7*), different corpus sources (*Table 9*), and different VLM architectures (*Table 10*). The effectiveness of our approach has been validated through extensive experiments. **The strong performances we achieved stem from our theoretical analysis and the proposed conjugated semantic pool**, rather than simply from the use of NegLabel. > 2. 
The preliminary section lacks clear references to the original theory of NegLabel and the proofs in the appendix. Thank you for the valuable comment! We originally structured the paper to clearly differentiate contributions, with NegLabel's contributions discussed in the **Preliminary** section and our own in **Methodology**. We apologize for any ambiguity and will clarify this in the revised version. In Preliminary, we have cited NegLabel at **L99 and L104** to introduce **NegLabel's methodology** (L97-103) and its **theoretical performance modeling** (L104-124), respectively, summarizing the contributions made by NegLabel. We will make the references more explicit in the revised version. In Appendix A.1 and A.2, to ensure clarity and proper acknowledgment, we will add the following statement: “**The proof in this part is adapted from the appendix of [24]**”. Additionally, we have cited NegLabel [24] a total of **15** times, with its name, "NegLabel," appearing **20** times, throughout the paper. This should make it clear that our method is closely related to the NegLabel framework, upon which we have introduced further innovations. If there are any aspects where the contributions of NegLabel have not been clearly articulated, we would greatly appreciate the opportunity for further discussion and are fully supportive of providing any necessary clarifications. > 3. The description of the proposed method is very concise, which makes it unclear for readers to understand the details of the method design. It is recommended to provide pseudocode for the algorithm to clarify the method's design. Thank you for the constructive comment! Following the suggestion, we have **provided the pseudocode of our algorithm in the rebuttal PDF file** as Algorithm 1, and we will incorporate it, along with a more detailed version of the method design, into the appendix in the revised version. 
If there are any aspects of the algorithmic process that remain unclear, we would be more than happy to engage in further discussion. **Finally, we extend our sincerest thanks for all your time and consideration!**
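For context on the scoring framework that NegLabel and this work share, a common formulation of negative-label scoring (the fraction of softmax mass that falls on ID labels) can be sketched as follows. This is a hedged illustration with hypothetical embeddings; the paper's exact score may differ in details such as temperature and label grouping:

```python
import numpy as np

def negative_label_score(img_emb, id_emb, neg_emb, tau=0.01):
    """Fraction of softmax mass on ID labels when the label set is
    augmented with negative labels. Higher -> more ID-like. A larger,
    well-chosen negative pool (e.g. the CSP expansion) pulls OOD
    images' mass away from the ID labels. Sketch only.
    """
    sims = np.concatenate([img_emb @ id_emb.T, img_emb @ neg_emb.T])
    logits = sims / tau
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e[: id_emb.shape[0]].sum() / e.sum()

# Toy check: an image aligned with an ID label scores higher than one
# aligned with a negative label (all embeddings are hypothetical).
id_emb = np.array([[1.0, 0.0]])
neg_emb = np.array([[0.0, 1.0]])
s_id = negative_label_score(np.array([0.99, 0.1]), id_emb, neg_emb)
s_ood = negative_label_score(np.array([0.1, 0.99]), id_emb, neg_emb)
```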
Variational Multi-scale Representation for Estimating Uncertainty in 3D Gaussian Splatting
Accept (poster)
Summary: This work suggests a method to estimate uncertainty for the output of a 3D Gaussian Splatting (3DGS) model, by means of variational inference. Specifically, the authors split the 3DGS model into two hierarchical levels (or two scales). The coarser "Base" level is very similar to vanilla 3DGS, but now each Gaussian contains a list of parameters $\{ \mu_i, \sigma_i\}$ that are used to spawn the "Finer" level Gaussians. The latter "Finer" Gaussians are then fed into the standard 3DGS inference pipeline, only now these Finer Gaussians are fitted using the re-parametrization trick and the ELBO. The experimental section compares to other uncertainty estimation methods and shows favorable results. Strengths: The method builds on the simplicity of 3DGS, and seems rather straightforward to implement. Given additional prior knowledge on ELBO-based methods, the proposed method is rather easy to understand from the text (with the small exception mentioned below). The results show the power of this method compared to other methods (with the one exception mentioned below). Weaknesses: The method seems attractive, but needs to be better positioned w.r.t. previous works: * FisherRF (citation [42] in the paper) is mentioned in line 116 but is not compared against. The justification for the lack of numerical comparison is that "the posterior needs extensive computations", but the authors of [42] state they run on a modest card (RTX3090) at 70 FPS. At the very least, results from [42] can be easily added to Table 1 based on Table 4 in [42]. Additionally, the [code](https://github.com/JiangWenPL/FisherRF) of [42] was made public. * latentSplat [arxiv 2403.16292](https://arxiv.org/abs/2403.16292) - while it operates under a **simpler setting than this work**, and while it is not yet accepted at a peer-reviewed venue (AFAIK), I would recommend mentioning this method to prevent confusion. 
Technical Quality: 3 Clarity: 3 Questions for Authors: High level questions: * Can the proposed method be applied directly on the **Base** level Gaussians? In other words, why does one need more than a single Gaussian in the Finer level? * I would appreciate a deeper discussion on the similarities to [42]. --- While the method is rather straightforward, some design choices are not clear to me: * why is $\mu_n$ in eq. (6) limited to this range? * why is the spawning method of the Base level different from vanilla 3DGS? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do not discuss limitations of their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and the positive feedback! We address your comments and suggestions below. ## W1: Comparison with FisherRF We present the results of evaluating the quality of depth uncertainty maps using FisherRF. Note that our training view selection and implementation of AUSE with the MAE metric follow the code of Bayes’ Ray [7]. The performance of other methods is provided in Table 1 in the main paper, and the average AUSE of our method is superior to FisherRF's.

| | africa | basket | statue | torch | average |
|--------|--------|--------|--------|-------|---------|
| FisherRF | 0.29 | 0.27 | 0.31 | 0.42 | 0.32 |
| Ours | 0.18 | 0.17 | 0.13 | 0.29 | 0.19 |

In our original submission, the choice of not presenting FisherRF results was mainly because all other methods we listed can compute not only depth uncertainty maps but also RGB uncertainty maps, which forms our complete evaluation. However, FisherRF only reports the depth uncertainty results in the paper. The provided code only contains depth uncertainty map rendering, where they compute the per-Gaussian uncertainty and perform alpha blending to obtain the depth uncertainty map. Regarding computational efficiency, we would like to further clarify that FisherRF actually requires a pre-computation step to compute the Hessian for all the Gaussians. For the basket scene in the LF dataset, it takes 29.7 s on a single V100 card to compute these Hessians. Then, the uncertainty map can be rendered at around 70 FPS. Instead, our method does not require any pre-computation step for rendering RGB uncertainty. ## W2: Discussion about LatentSplat LatentSplat builds a variational distribution over Gaussian features and achieves generalizable reconstruction of radiance fields. Thank you for pointing this out, and we will discuss and cite this work in the related work section. ## Q1: Why do we need more than one single Gaussian in the Finer level? Please refer to the results in Q2 in the global response. 
Actually, we found that the number of finer-level Gaussians is rather important for the quality of uncertainty estimation. The purpose of multiple finer Gaussians is to encourage a variety of scale distributions among the different finer-level Gaussians attached to the same base Gaussian. This leads to more diversified samples in inferring the uncertainty using the learned posterior, which helps improve the accuracy of the uncertainty. ## Q2: Deeper discussion on the similarity to FisherRF The similarity between our method and FisherRF is that both can estimate the uncertainty explicitly for each point (Gaussian) in 3DGS. However, the parameter uncertainty in FisherRF, $\mathbf{H}^{\prime\prime}\left[\mathbf{w} \mid D^{\text{train}}\right]$, is approximated with the diagonal elements of the Hessian matrix of the parameters $\mathbf{w}$ given the training data $D^{\text{train}}$. This approximation is made by employing assumptions such as that the predictive distribution is a normal distribution with mean equal to the maximum a posteriori solution and precision equal to the Fisher information (Equation 18 in [6]). Instead, we follow a Variational Bayesian approach, in which we set priors for the parameters, then minimize the KL divergence between the prior and the variational distribution to infer the variance of the model parameters as the uncertainty. ## Q3: Why $\mu_{n}$ is limited to this range The prior distribution of the offset is set as this uniform distribution so that the scale of each of the $K$ finer-level Gaussians after the offset becomes $S_{offset} = S_{base}+\mu_n \sim U(S_{base}/K, S_{base})$, which encourages the finer Gaussians' scales neither to exceed $S_{base}$, the scale of the base Gaussian they are attached to, nor to become too small and lose functionality. This setting enables the multi-scale representation by learning the distribution of the finer-level Gaussians' scales. 
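The soft scale constraint described in the Q3 answer can be illustrated directly: if $S_{base} + \mu_n \sim U(S_{base}/K, S_{base})$, then the offset $\mu_n$ is drawn from $U(S_{base}(1/K - 1), 0)$, and every sampled finer scale stays between $S_{base}/K$ and $S_{base}$. A minimal NumPy sketch of the prior (the learned posterior is what the method actually uses at inference):

```python
import numpy as np

def sample_finer_scales(s_base, K, n, rng):
    """Sample finer-Gaussian scales under the uniform prior from the
    rebuttal: S_offset = S_base + mu_n ~ U(S_base/K, S_base), i.e. the
    offset mu_n is drawn from U(S_base*(1/K - 1), 0). Sketch only.
    """
    mu_n = rng.uniform(s_base * (1.0 / K - 1.0), 0.0, size=n)
    return s_base + mu_n

rng = np.random.default_rng(0)
scales = sample_finer_scales(s_base=1.0, K=5, n=10_000, rng=rng)
# Finer scales never exceed the base scale and never shrink below
# S_base/K, so they stay useful without overgrowing the base Gaussian.
```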
## Q4: Why is the spawning method of the base level different from vanilla 3DGS We would like to clarify that densification and spawning are different operations. Spawning refers to the operation that selects and offsets base-level Gaussians to construct finer-level Gaussians. Densification clones all the attributes of a Gaussian and translates the new copy. For example, in our experiment, we train the LLFF scenes for 12K steps in total, during which densification is performed every 500 steps and spawning every 4K steps. Note that both base and non-base Gaussians can undergo densification; the difference is that densifying a base Gaussian also clones the attached finer Gaussians. Apart from that, the implementation of densification is identical to that of the original 3DGS. Thank you for pointing this out, and we will add the above details in the revision. If you have further questions please feel free to raise them. --- Rebuttal Comment 1.1: Title: Thanks for the detailed reply Comment: The authors have thoroughly addressed all of my questions and concerns, and I stand by my positive score. --- Indeed, I mistakenly asked about spawning rather than densification. If I understand correctly - you create/duplicate new base-Gaussians every 500 iterations, and re-spawn new finer-Gaussians every 4K iterations. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the feedback from the reviewer! The difference between densification and spawning operations is exactly what you describe above. Thank you for your positive score on our work!
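The schedule clarified in this thread (densify every 500 steps, spawn finer Gaussians every 4K steps, 12K steps total on LLFF) can be written out as a simple loop; the loop structure itself is our assumption, the step counts are the ones quoted in the rebuttal:

```python
DENSIFY_EVERY, SPAWN_EVERY, TOTAL_STEPS = 500, 4_000, 12_000

densify_steps, spawn_steps = [], []
for step in range(1, TOTAL_STEPS + 1):
    if step % DENSIFY_EVERY == 0:
        # clone/translate Gaussians; densifying a base Gaussian also
        # clones its attached finer Gaussians
        densify_steps.append(step)
    if step % SPAWN_EVERY == 0:
        # select/offset base Gaussians to (re-)construct finer Gaussians
        spawn_steps.append(step)

print(len(densify_steps), len(spawn_steps))  # 24 3
```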
Summary: This paper proposes an uncertainty estimation method for the 3D Gaussian Splatting (3DGS) radiance field reconstruction algorithm. The proposed method leverages the multi-scale properties that lie inherently in the vast number of Gaussian ellipsoids to improve the performance of variational inference. The paper validates quantitatively that the proposed method aligns with the novel view synthesis error better than previous methods for NeRF uncertainty estimation and naïve methods for 3DGS uncertainty estimation. The experiments also validate an interesting application of the proposed uncertainty estimation method: identifying noisy Gaussians and removing them, thus reducing floater artifacts. Strengths: 1. The motivation of this paper is clear. The problem of increasing diversity in the sampling of variational inference is challenging, and the idea of developing a multi-scale representation in 3DGS to solve it is intriguing. 2. The method is sound and easy to follow. Representing the same scene with multiple scale levels is feasible in previous CG techniques. The proposed technique of learning the scene with different scales by assuming different variational distributions, and estimating the predictive uncertainty by sampling from these “multi-scale posteriors” is clear. Designing an offset table to increase inference efficiency is a plus. 3. The experiments demonstrate the effectiveness of the method in uncertainty estimation by comparing AUSE and NLL. The synthesized image quality in Table 6 is better than NeRF-based methods, and the noisy Gaussians removal experiments are interesting. Weaknesses: 1. Ambiguous details: i): In line 187, there are K alternative values in the offset table. If so, how many values does the offset table contain for one spawned Gaussian? ii): In Figure 3 (c), how are sampling and inference done from multiple spawned Gaussians? The Figure should contain more details to clarify the difference from other methods. 2. 
More ablation studies. i): Replacing the multi-scale prior in equation 6 with the same prior distribution for all layers can be compared to study the performance gain of the multi-scale method. ii): The impact of the quantity of spawned Gaussians in the offset table (line 174) on the reconstruction performance can also be studied. Technical Quality: 4 Clarity: 3 Questions for Authors: Please see weakness. Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: More comparing results with simpler methods such as naïve variational inference can be provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your acknowledgment of our work! We provide illustrations for your concerns below. ## W1: Number of values in the offset table for each spawned Gaussian Since we only offset the position and scale in the offset table, and each offset table covers $K$ spawned Gaussians, the offset vector for each spawned Gaussian has length 12 (6 for the position offset distributions and 6 for the scale), so the number of alternative values in the offset table is $12 \cdot K$. ## W2: Ambiguous Figure Generally, the inference pipeline of our method is to first sample from the spawned finer Gaussians and then sample from their learned posterior distribution. We will update the details in the figure. ## W3: How about setting the same prior for all finer levels Thank you for the question. Actually, if the same prior were set for all finer levels, the learned posterior would converge to the same distribution after training. Therefore, this method could be regarded as equivalent to using a single finer-level Gaussian. We performed additional experiments to explore the effectiveness of this method, and provide the results on the LLFF dataset in Q2 in the global response. We found that using only one finer-level Gaussian largely decreases the uncertainty estimation performance. ## W4: The impact of the number of spawned Gaussians Generally, the performance increases as the number of finer-level Gaussians grows. This is because more spawned Gaussians provide more capacity to fit the posterior distribution. We provide the comparison results and analysis of using 1, 5 and 10 spawned Gaussians in Q2 of the global response to validate this. ## L1: Comparison with naive variational inference Please also see the answer in W3. We'll be happy to address any further questions; please feel free to raise them. --- Rebuttal Comment 1.1: Comment: The rebuttal from the author has solved my concerns. I will keep my rating as accept. 
The author should address the problems raised in the comments in the revised version. --- Reply to Comment 1.1.1: Comment: Thank you for the feedback! We will improve the presentation quality, and add the above ablations in the revision. --- Rebuttal 2: Comment: Dear Reviewer, We appreciate your precious time and efforts in reviewing our work! We look forward to addressing any concerns you may have during the remainder of the discussion period. Best Regards, Authors
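The offset table described in the thread above (each of the K spawned Gaussians stores 12 values: 6 for the position-offset distribution, 6 for the scale-offset distribution, giving 12·K values per base Gaussian) can be sketched as a simple array. Interpreting the 6 values as a mean/variance pair per xyz axis is our assumption about the exact layout:

```python
import numpy as np

def make_offset_table(K):
    """Per-base-Gaussian offset table: K spawned Gaussians, each with a
    12-value offset vector (6 position-offset distribution parameters
    and 6 scale-offset distribution parameters). Zero-initialized
    sketch; the real table holds learned distribution parameters.
    """
    return np.zeros((K, 12))

table = make_offset_table(K=5)
# 12 * K alternative values per base Gaussian, as stated in the rebuttal.
print(table.size)  # 60
```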
Summary: The paper aims to quantify uncertainty in the learning pipeline in 3DGS. To this end, the author(s) proposed to leverage explicit scale information to build variational multiscale 3D gaussians leading to the construction of diversified parameter space samples. This results in the proposition of a multiscale variational inference framework for uncertainty estimation in the 3D Gaussian model. Experimental results on popular benchmark datasets are shown to demonstrate the efficacy of the proposed method. Strengths: 1. The paper solves an extremely useful problem for robot vision and control applications: quantifying uncertainty in 3D Gaussian splatting. Uncertainty modeling is essential for developing robot vision-based automation systems that are robust, safe, efficient, and capable of operating in complex and dynamic environments. By explicitly accounting for uncertainties, this approach can help design more effective and reliable robotic vision systems. Therefore, this paper clearly attempts to address a significant problem. 2. Uncertainty quantification on explicit 3DGS by exploiting both the model space diversity and efficiency. Such a modeling strategy ensures efficient and compact model output. Weaknesses: 1. The results are not as impressive as mentioned in the abstract and in the introduction of the paper, i.e., state-of-the-art. 2. The approach of using only scale can cause problems with points that are far from the camera. This is particularly critical for the formulation that takes both scale and rotations to model the covariance —refer to the original 3DGS work. Kindly comment. 3. Writing of the paper can be improved. a. can not -> cannot b. robotics navigation -> robot navigation c. less samples of parameters as possible -> parameters samples as few (less) as possible. d. spawn strategy -> not clearly explained while introducing this term for the first time in the paper, i.e., what author(s) mean by this term—referring to contribution. 
Explaining at least briefly here will improve the draft. e. Evaluation is generally not considered a contribution (contribution 3) f. Section 2.2 line 97 “Rearly…this task”. Kindly rephrase this line, it is confusing in the current form. g. Line 228: “the original is”...? h. Line 234: Then-> then There are more typos… kindly correct them. i. Missing references in implicit uncertainty modeling Lee et al. RAL 2022 “uncertainty guided policy for active robotic 3d reconstruction using neural radiance fields” Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Kindly clarify how removing uncertain 3D Gaussians still leads to complete scene visualization on screen (rasterization). 2. Kindly provide details on how the distance of the points from the camera affects the uncertainty quantification. 3. It is well-known that COLMAP 3D points and camera poses are not perfect. How does structure-from-motion uncertainty contribute to the current approach? Kindly comment. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Although a few limitations are obvious from the experiments section, they are not explicitly mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the kind and helpful comments! We will address your concerns below. ## W1: The results are not as impressive as claimed Firstly, we'd like to clarify that the ensemble method is a naive baseline that trains 10 vanilla 3DGS models with different random seeds, which incurs an extreme computational burden compared to other methods. Therefore, the ensemble method serves as a performance upper bound, following the same practice as [7]. Apart from the ensemble method, our method is optimal in most cases. The only exceptions are the AUSE and NLL metrics on the LF dataset in Table 2, where our method is inferior to CF-NeRF. We discuss this result on lines 250-252. On the other hand, in this setting, our view synthesis quality is better than other methods, which is another primary goal of building uncertainty-aware 3DGS models. ## W2: The approach of using only scale can cause problems with points that are far from the camera; scale and rotation together form the covariance We provide qualitative results regarding the points far from the training camera in Figure 2 in the attachment. In unbounded scenes from the MipNeRF 360 dataset, using all training images, our method successfully renders both uncertainty maps and novel views of the distant background. Technically, although we refer to our method as multi-scale representations, as shown in Equation 6, we offset not only scale but also position to spawn the finer Gaussians. Additionally, the scales of the finer Gaussians are restricted by our setting of the prior distributions. The learned posterior of the $K$ finer Gaussians' scales would approach $U(S_{base}/K, S_{base})$, which means that the scales of the finer Gaussians are softly constrained to avoid unlimited variation relative to the base Gaussian they are attached to. Therefore, for base Gaussians far from the camera, the scales of their spawned finer Gaussians are still adaptive, without harming the rendering quality. 
In practice, we found that the distributions of rotation over the scene are rather random and irregular compared to the scale, while the scale varies with object size. Therefore, we chose to decompose the covariance and only offset the scale to form our multi-scale representation. ## W3: Writing and reference problems Thanks for your careful inspection! We will fix the raised problems and improve the writing thoroughly in the revision. The RAL paper proposes to use the entropy along the ray weights to evaluate pixel-wise uncertainty, which is used to guide active reconstruction. We will discuss and cite the RAL paper in the related work section. ## Q1: How does removing uncertain 3D Gaussians still lead to complete scene renderings Please refer to Q1 in the global response, where we visualize that when removing 10% of the Gaussians, the complete scene is preserved and small floaters are cleaned. We also illustrate why we remove at least 50% of the noisy Gaussians in Figures 1 and 4 of the original paper. ## Q2: Details on how the distance of the points from the camera affects the uncertainty When estimating the per-point (Gaussian) uncertainty in the MipNeRF 360 dataset, the model is usually more uncertain about the points (Gaussians) distant from the training camera trajectory. Generally, the uncertainty value of points depends on how well they are covered by multiple views (cameras) without occlusion. The distant background points have less content provided from multiple views in a casually captured dataset, which makes background modeling difficult. When rendering the uncertainty map, the points (Gaussians) far from the testing cameras are projected to a smaller pixel region in image space, due to the inherent properties of perspective projection. However, the intensity of the uncertainty pixels is consistent within the projected area in image space.
## Q3: How does COLMAP uncertainty contribute to the current approach In 3DGS training, COLMAP generates sparse point clouds and camera poses as the input to 3DGS. Thus, from the 3DGS point of view, the uncertainty contained in camera poses generated by structure from motion can be categorized as aleatoric uncertainty, which inherently exists in the input data to 3DGS [5]. The aleatoric uncertainty, together with the epistemic uncertainty that exists in the model parameters of 3DGS, results in the predictive uncertainty estimated by our method. ## L1: More discussion on limitations For quantitative results, we discuss the limited results on lines 250-252 of the original paper. We will discuss the qualitative performance of rendering uncertainty maps in MipNeRF 360 further in the revision. Thank you again for your effort in reviewing our work! We’ll be glad to discuss if there are any further concerns. --- Rebuttal 2: Comment: Dear Reviewer, We appreciate your precious time and efforts in reviewing our work! We look forward to addressing any concerns you may have during the remainder of the discussion period. Best Regards, Authors
Summary: This paper proposed a novel multi-scale variational representation for 3D Gaussian splatting to estimate its uncertainty. Specifically, this paper introduced a spawn strategy to split a base Gaussian into multiple sub-Gaussians and apply variational inference to estimate the uncertainty. Strengths: (1) Estimating uncertainty in 3D reconstruction / novel view synthesis systems is a critical task and has great application potential. (2) The paper proposed a novel representation and solid mathematical derivation to solve this problem. (3) The experiments show a strong correlation between the estimated uncertainty and the rendering error. Weaknesses: (1) The authors claimed one of the potential applications is guided interactive active data acquisition. It would be great to have a more detailed illustration or even experimental exploration. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) For the floater removal experiment, is removing the background actually the desired effect for real-world use cases? In my understanding, a good uncertainty-based Gaussian pruning approach should be able to remove the blurry Gaussians to improve the rendering clarity while preserving most of the objects, regardless of foreground or background. (2) For the related works section, there are some additional related uncertainty quantification works that could be discussed: [1] Naruto: Neural active reconstruction from uncertain target observations. CVPR 2024 [2] Active neural mapping. ICCV 2023 Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The uncertainty quantification requires (additional) sub-Gaussian construction, which might potentially increase the computational burden. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comment and positive feedback! We address your concerns as follows. ## W1: Discuss more about active data acquisition. In active data acquisition for 3DGS, image collection and 3DGS model training are performed alternately. At each image collection step, the most informative image is selected via an acquisition function to maximize the model quality with the same number of images used. Our uncertainty estimation method can contribute to this acquisition function, indicating where the model is uncertain and acquiring more data there. We perform a simple experiment on active data acquisition for 3DGS on the LLFF dataset. Specifically, the original training dataset serves as the candidate image pool, and 10% of the images are randomly chosen for training initially. Then, one image is chosen every 500 steps until 30% of the images are used. We render our uncertainty map and aggregate the pixel values to choose the most uncertain image from the pool as the next image added to the training set. After all images are chosen, the 3DGS model is further trained for 3K steps. The densification interval is 100 steps, the spawning interval is 500 steps, and both operations are performed until training ends. As shown in the table below, we found that the view synthesis quality of active 3DGS with our uncertainty estimation is better than choosing images randomly. Due to the various detailed settings in the active data acquisition task, we prefer to fully evaluate the performance of our uncertainty estimation in active learning in the revision of this paper. | | PSNR | SSIM | LPIPS | |--------|-------|------|-------| | Random | 20.97 | 0.65 | 0.234 | | Ours | 21.35 | 0.69 | 0.212 | ## Q1: Floater removal should clear blurry Gaussians regardless of foreground or background. Please refer to Q1 in the global response, where we show that our method can clean foreground noisy Gaussians. ## Q2: Discussion on related work.
Thank you for pointing these out! Different from our uncertainty estimation method, Naruto [8] learns the uncertainty of depth via the negative log-likelihood loss function. Active neural mapping [9] leverages artificial neural variability [1] to indicate the predictive uncertainty. We will compare and cite them in the related work section. ## L1: Constructing sub-Gaussians might increase the computational burden We analyze the extra computational cost from the following perspectives: **Rendering RGB images**: The image rendering speed is basically the same as vanilla 3DGS rendering, since we do not require multiple samples from the variational distribution. **Calculating per-Gaussian uncertainty for pruning**: Calculating per-Gaussian uncertainty for pruning simply aggregates the variance of the learned posteriors for each Gaussian, which takes less than 1s for the garden scene in MipNeRF 360 on a V100 GPU. **Rendering the uncertainty map**: Rendering the uncertainty map requires sampling from the variational distribution. If taking $N$ samples, the rendering cost would be $N$ times the vanilla rendering cost. In our experiments, we set $N$ equal to 10, the same as the number of finer Gaussians. This cost growth follows the common practice in variational Bayes approaches such as CF-NeRF [4] and others [2], [3]. **Memory**: The extra memory cost for the offset table of a base Gaussian is $12 \cdot (K-1)$ floating-point numbers, where $K$ is the number of finer Gaussians. We only build offset tables for base Gaussians, which make up around 50% of all Gaussians in the original scene. If rendering uncertainty maps is not required, we can retain only one entry in the offset table to render RGB images after pruning the noisy Gaussians. We’ll be glad to discuss if you have any further concerns. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I appreciate the authors' detailed response.
This rebuttal resolves most of my concerns; I will keep my rating as weak accept. The authors are encouraged to add the active data acquisition part to the main paper or at least the supplementary materials. --- Reply to Comment 1.1.1: Comment: Thank you for affirming our response! We will add more detailed content on active data acquisition to the revision. We welcome discussion of any further questions.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time and efforts in reviewing our paper and providing insightful comments. Here we address two common questions from the reviewers in this global response; the individual questions from each reviewer are answered separately. ## Q1: To Reviewer [UQFr, 4RNR]: Why does the floater removal experiment remove the background? Can we retain the background? Firstly, we observe that when training 3DGS on MipNeRF 360 unbounded scenes, the majority of the floaters lie in the background region. This is because the training camera trajectory is placed around the central object while providing insufficient multi-view information to reconstruct the background. Thus, we visualize the floater cleaning performance by removing at least 50% of the Gaussians to validate that our method can remove most of these noisy floaters. Moreover, our visualization results are consistent with Figure 1 in Bayes' Rays [7], which also removes the noisy background of an unbounded scene represented by NeRF. We further show that our method can also perform floater removal for noisy Gaussians in the foreground. As shown in Figure 1 in the attachment, when removing only 10% of the Gaussians and visualizing from a close view, our method can also remove smaller floaters in the foreground to improve the clarity of the synthesized view, while keeping the background complete. ## Q2: To Reviewer [XD3Y, ZTX9]: Comparison with simpler methods, such as using fewer or even one finer-level Gaussian. We compare the view synthesis and uncertainty estimation performance using $K \in \{1, 5, 10\}$ finer-level Gaussians spawned in the offset table. As in Section 4.2 of the original paper, we train on all 8 scenes in the LLFF dataset and report the average results. We found that increasing the number of finer-level Gaussians $K$ shows a notable increase in the quality of uncertainty estimation.
More finer-level Gaussians improve the sample-space diversity, therefore providing a more precise estimation of model parameter uncertainty. Nevertheless, the quality of novel views fluctuates when $K$ changes. We think this is because 5 spawned Gaussians are enough to represent the scene, while the quality of uncertainty can be further improved by more spawned Gaussians. | | PSNR | SSIM | LPIPS | AUSE | NLL | |------------------------------------|-------|-------|-------|------|------| | 1 Finer Gaussian | 23.67 | 0.791 | 0.194 | 0.52 | 0.47 | | 5 Finer Gaussians | 23.92 | 0.797 | 0.179 | 0.43 | 0.34 | | 10 Finer Gaussians (Default Setting) | 23.84 | 0.805 | 0.186 | 0.38 | 0.32 | We thank the reviewers again for their valuable feedback, and sincerely look forward to discussing any further questions! ## Reference [1] Xie, Zeke, et al. "Artificial neural variability for deep learning: On overfitting, noise memorization, and catastrophic forgetting." Neural Computation (2021). [2] Kingma, Durk P., Tim Salimans, and Max Welling. "Variational dropout and the local reparameterization trick." NeurIPS (2015). [3] Blundell, Charles, et al. "Weight uncertainty in neural networks." ICML (2015). [4] Shen, Jianxiong, et al. "Conditional-flow NeRF: Accurate 3D modelling with reliable uncertainty quantification." ECCV (2022). [5] Kendall, Alex, and Yarin Gal. "What uncertainties do we need in Bayesian deep learning for computer vision?" NeurIPS (2017). [6] Kirsch, Andreas, et al. “Unifying approaches in active learning and active sampling via Fisher information and information-theoretic quantities.” TMLR (2022). [7] Goli, Lily, et al. "Bayes' Rays: Uncertainty quantification for neural radiance fields." CVPR (2024). [8] Feng, Ziyue, et al. "Naruto: Neural active reconstruction from uncertain target observations." CVPR (2024). [9] Yan, Zike, et al. "Active neural mapping." ICCV (2023).
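As a minimal illustration of the uncertainty-based pruning discussed in Q1 (hypothetical names; per-Gaussian uncertainties are assumed to be precomputed, e.g. by aggregating posterior variances as described in the individual responses):

```python
def prune_most_uncertain(gaussians, uncertainties, fraction=0.1):
    """Drop the `fraction` most-uncertain Gaussians, keeping the rest.

    `gaussians` is any list of Gaussian records; `uncertainties` holds one
    scalar per Gaussian (e.g. aggregated posterior variance).
    """
    n_remove = int(len(gaussians) * fraction)
    # Indices sorted from most to least uncertain.
    order = sorted(range(len(gaussians)), key=lambda i: uncertainties[i], reverse=True)
    removed = set(order[:n_remove])
    return [g for i, g in enumerate(gaussians) if i not in removed]
```

With `fraction=0.1`, the most uncertain 10% (typically floaters) are removed while the well-constrained Gaussians, foreground or background, remain.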
Pdf: /pdf/4dac57810f949786ef88590ef0cf7bcb5706c02f.pdf
NeurIPS_2024_submissions_huggingface
2024
On Softmax Direct Preference Optimization for Recommendation
Accept (poster)
Summary: This paper extends DPO from pairwise (Bradley-Terry) to multi-way comparison (Plackett-Luce), where one positive example is considered better than multiple negative examples. The paper applies this approach to recommendation systems, showing promising results on several standard benchmarks. From Rafailov et al.: “although more general Plackett-Luce ranking models [30, 21] are also compatible with the framework if we have access to several ranked answers”. This paper shows the derivation of the Plackett-Luce extension. In that sense the technical contribution is somewhat incremental. I would therefore view this more as an application paper in recommendation systems. However, this is not how it is written; most of the focus is on the theoretical analysis extending DPO from Bradley-Terry to Plackett-Luce. Strengths: * The paper is technically sound and mostly clearly written. * The experimental results look very promising. Weaknesses: * The technical contribution seems somewhat incremental. * Some points need clarification (see questions below). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. It is assumed that there is a single positive example which is better than all the negative ones. In reality, multiple items may satisfy users’ interests, so the negative examples should be chosen with care. In the experiments, how are negative samples chosen? This was not clear from the paper. Actually, in recommendations it is natural to show a slate of results and the user would click on one (or more), so the unclicked results can serve as negatives. Though this is not the type of data available in benchmarks used here such as MovieLens. 2. In Fig 2a, it seems like most of the gains in performance already exist in SFT. How come this is much better than other fine-tuned models like LLaRA? 3. Fig 2b: the performance of S-DPO seems to still be improving after 1200 iterations, why not run it longer until it stops improving?
Also, since the S-DPO loss and DPO loss are not directly comparable, you should include a figure showing HR@1 as a function of training steps for S-DPO and DPO. 4. Fig 3a: the performance seems to still be increasing, is there a number of samples for which it saturates? 5. Missing baseline: compare the proposed approach to DPO with pairs consisting of the positive example and each negative example separately (use the same negative example choice as S-DPO). This is more computationally expensive but interesting to see how it compares in terms of performance (and runtime). Related to that, it would also be good to include a comparison of runtimes between S-DPO and DPO (perhaps show the above variant of Fig 2b in terms of runtime, not just training steps). Minor: * L79: empty section * L132: deviate -> deviating * L175: the exp should be with the sum * L180: “faster<space>.” Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No societal impact concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **For Reviewer 6tmb** We appreciate your comments, some of which have inspired us to greatly improve our paper. Below we provide point-by-point responses to address your concerns and clarify misunderstandings of our proposed method. If you have additional questions, we would be pleased to discuss them with you. >**Comment 1: Uncertain contribution** We acknowledge your statement that the original DPO paper mentioned the utilization of preference models such as Bradley-Terry and Plackett-Luce to model strict ranking relationships among ranked data samples. We claim that S-DPO is an alternative version of DPO that derives a new preference model from the PL model, which is different from the original PL model and has a softmax structure. The new preference model focuses on partial ranking relationships between preferred items and dispreferred items, which widely exist in recommendation data. Such inherent differences in data characteristics make it nontrivial to properly adapt DPO to recommendation while introducing multiple negatives, which are important for recommenders. Besides adapting softmax DPO to recommendation data, we also provide theoretical analysis to connect pairwise DPO with the BPR loss and connect S-DPO with softmax losses in recommendation such as InfoNCE, ensuring its superior performance. Also, we further analyze the gradient of the S-DPO loss and theoretically show that mining hard negatives is the reason S-DPO provides effective gradients. LM-based recommenders aim to directly generate the preferred items by first encoding a textual prompt and then autoregressively decoding, which is different from traditional recommenders that calculate similarity between user and item numerical representations. Given such differences, the training losses of LM-based recommenders and conventional recommenders have connections but are strictly under different settings.
So traditional loss functions like BPR and InfoNCE are unsuitable for LM-based recommenders. For LM-based recommenders, introducing ranking information and multiple negatives has been largely omitted. Besides, there is a lack of effective methods to introduce multiple negatives into the training pipeline of LM-based recommenders. To our knowledge, we are among the first to point out that multiple negatives are important in LM-based recommenders and propose to effectively instill partial ranking information into LM-based recommenders through a softmax version of DPO. >**Comment 2: Negative sampling.** You are right that negative sampling is a critical area of exploration in recommender systems. In this work, negative samples were randomly selected. We acknowledge the importance of more sophisticated negative sampling strategies, such as popularity-based and similarity-based negative sampling, which we leave as future work. The type of data you mentioned, where unclicked results serve as negatives, is indeed common in real-world industrial datasets but is challenging to obtain in benchmark datasets like MovieLens. Within our experimental setting, random sampling serves as a simple yet effective strategy. >**Comment 3: Gain of performance.** As lines 215-217 mention, we optimize the loss only on the item title and find it effective on recommendation data. For other LM-based baselines, we adopt their official implementations for a fair comparison. >**Comment 4: Saturation of negative samples.** We believe that the performance will further improve after more training iterations, but as mentioned in the limitations section, our computational resources are limited, so we run all experiments for 3 epochs for a fair comparison. We also believe that introducing more negatives may bring more gains, but we can only explore part of this due to the same limitation in our computational resources. >**Comment 5: Efficiency.** We appreciate your point.
To address this, we have now included effectiveness and efficiency comparisons between DPO and S-DPO. Let $K$ denote the number of negative items and $C_\mathcal{M}$ denote the complexity of the base model $\mathcal{M}$. **Time Complexity** | Methods | S-DPO | DPO | |:------:|:------:|:------:| | Complexity | $\Theta((K+1)(C_\mathcal{M}+1))$ | $\Theta(2KC\_\mathcal{M}S\_t)$ | The complexity of S-DPO scales with the factor $\frac{1}{2}+\frac{1}{2K}+\frac{1}{2C_\mathcal{M}}+\frac{1}{2KC_\mathcal{M}}$ compared to DPO. Since $C\_\mathcal{M}$ is usually large for LLMs, the bigger $K$ is, the smaller the factor is, which means the more efficient S-DPO will be compared with DPO. Empirically, when both consider 3 negative examples and are trained on the Goodreads dataset with 4 A100s, it takes 25h for S-DPO and 53h for DPO, indicating the efficiency of our method. Additionally, we conducted experiments on three datasets comparing DPO with multiple negatives and S-DPO on LLaMA2-7B. **LastFM** | Methods | HR@1 | ValidRatio | |:------:|:-------:|:------:| | DPO | 0.6342 | 0.9972 | | DPO-3neg | 0.6413 | 0.9964 | | S-DPO-3neg | **0.6477** | **0.9980** | **MovieLens** | Methods | HR@1 | ValidRatio | |:------:|:-------:|:------:| | DPO | 0.4947 | 0.9684 | | DPO-3neg | 0.4947 | 0.9474 | | S-DPO-3neg | **0.5263** | **0.9895** | **Goodreads** | Methods | HR@1 | ValidRatio | |:------:|:-------:|:------:| | DPO | 0.6381 | 0.9900 | | DPO-3neg | **0.6661** | 0.9900 | | S-DPO-3neg | 0.6628 | **0.9950** | The results show that S-DPO still achieves leading or comparable performance with less complexity. This can be attributed to S-DPO considering more negative examples during a single gradient update, resulting in more effective gradients compared to DPO. --- Rebuttal Comment 1.1: Title: Sorry for causing confusion Comment: We apologize for our mistake. The revised complexity analysis is as follows: $K$ denotes the number of negative items. $C_\mathcal{M}$ denotes the complexity of the base model $\mathcal{M}$.
$S_t$ denotes the size of the S-DPO training set. **Time Complexity** | Methods | S-DPO | DPO | |:------:|:------:|:------:| | Complexity | $\Theta((K+1)(C_\mathcal{M}+1)S_t)$ | $\Theta(2KC\_\mathcal{M}S\_t)$ |
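As a quick sanity check (not part of the rebuttal itself), the ratio of the revised S-DPO cost $(K+1)(C_\mathcal{M}+1)S_t$ to the DPO cost $2KC_\mathcal{M}S_t$ does reduce to the factor $\frac{1}{2}+\frac{1}{2K}+\frac{1}{2C_\mathcal{M}}+\frac{1}{2KC_\mathcal{M}}$ quoted earlier, since $S_t$ cancels:

```python
def cost_ratio(K, C):
    """(K+1)(C+1) / (2*K*C): S-DPO cost over DPO cost, with S_t cancelled."""
    return (K + 1) * (C + 1) / (2 * K * C)

def closed_form_factor(K, C):
    """The factor 1/2 + 1/(2K) + 1/(2C) + 1/(2KC) quoted in the rebuttal."""
    return 0.5 + 1 / (2 * K) + 1 / (2 * C) + 1 / (2 * K * C)
```

For $K=3$ and a large $C_\mathcal{M}$, the factor approaches $\frac{1}{2}+\frac{1}{2K}=\frac{2}{3}$, i.e. S-DPO costs roughly two thirds of DPO per equivalent pass over the negatives, consistent with the direction of the reported wall-clock numbers.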
Summary: Proposed a softmax DPO in the recommendation domain. Strengths: 1. The paper mainly focuses on softmax DPO for recommendation. 2. The paper is well written with good experimental validation. 3. The source code is available. Weaknesses: 1. The novelty of this work is not high; it seems that they mainly adapt DPO to the recommendation area, and most of the concepts are from DPO. 2. What is the complexity of S-DPO, and how can it scale up to large-scale datasets? Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The novelty of this work is not high; it seems that they mainly adapt DPO to the recommendation area, and most of the concepts are from DPO. 2. What is the complexity of S-DPO, and how can it scale up to large-scale datasets? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: It is unclear how this framework can scale up to large-scale datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **For Reviewer AMu8** Thanks for your time and feedback. To address your concerns, we present point-by-point responses as follows. Looking forward to more discussions with you. >**Comment 1: Uncertain Novelty.** Generative LM-based recommenders have recently been explored; they aim to directly generate the preferred items by first encoding a textual prompt and then autoregressively decoding, which is different from traditional recommenders that calculate similarity between user and item numerical representations. Given such differences, the training losses of LM-based recommenders and conventional recommenders have connections but are strictly under different settings. For LM-based recommenders, introducing ranking information and multiple negatives has been largely omitted. Besides, there is a lack of effective methods to introduce multiple negatives into the training pipeline of LM-based recommenders. To our knowledge, we are among the first to point out that multiple negatives are important in LM-based recommenders and propose to effectively instill partial ranking information into LM-based recommenders through a softmax version of DPO. Adapting DPO to recommendation is a non-trivial task. Conventional DPO utilizes the Bradley-Terry (BT) or the Plackett-Luce (PL) preference model and focuses on absolute ranking relationships among data samples, while we also utilize the PL preference model but focus on partial ranking relationships specifically for recommendation data and derive a new preference distribution from the PL model. Moreover, we also provide theoretical analysis to connect pairwise DPO with the BPR loss and connect S-DPO with softmax losses in recommendation such as InfoNCE. Also, we further analyze the gradient of the S-DPO loss and theoretically show that mining hard negatives is one of the reasons S-DPO provides effective gradients.
>**Comment 2: Uncertain Complexity.** To address your concern, we have now included an efficiency comparison between traditional DPO and our S-DPO, covering both theoretical analysis and empirical results. This comparison highlights the efficiency gains achieved by S-DPO, supporting its practical value. Let $K$ denote the number of negative items taken into consideration by S-DPO and $C_\mathcal{M}$ denote the time complexity of the chosen base model $\mathcal{M}$. The time complexity of computing the S-DPO loss is $\Theta((K+1)C_\mathcal{M}+K+1)=\Theta((K+1)(C_\mathcal{M}+1))$. For the DPO loss, the complexity of computing a single positive-negative pair is $\Theta(2C_\mathcal{M})$. Besides, when an equal number of negative items are added to the DPO training set, the total size of the DPO training set is $K$ times that of the S-DPO training set. Therefore, let $S_t$ denote the size of the S-DPO training set. The time complexity of training with the DPO loss scales to $\Theta(2C_\mathcal{M}\times KS_t)=\Theta(2KC_\mathcal{M}S_t)$. Although this complexity is of the same magnitude as the time complexity of training with the S-DPO loss, the latter equals the complexity of training with the DPO loss scaled by the factor $\frac{1}{2}+\frac{1}{2K}+\frac{1}{2C_\mathcal{M}}+\frac{1}{2KC_\mathcal{M}}$. Since $C\_\mathcal{M}$ is usually large for LLMs, the bigger $K$ is, the smaller the factor is, which means the more efficient S-DPO will be compared with DPO. **Time Complexity** | Methods | DPO-1neg | S-DPO | DPO | |:------:|:-------:|:------:|:------:| | Complexity | $\Theta(2C_\mathcal{M}S_t)$ | $\Theta((K+1)(C_\mathcal{M}+1))$ | $\Theta(2KC\_\mathcal{M}S\_t)$ | > **Comment 3: Large-scale Datasets** We have used large-scale benchmark datasets such as Goodreads. For extremely large datasets, implementation difficulties due to limited computational resources are a common issue for researchers.
--- Rebuttal Comment 1.1: Title: Sorry for causing confusion about the time complexity Comment: We apologize for our mistake. The revised complexity analysis is as follows: $K$ denotes the number of negative items. $C_\mathcal{M}$ denotes the complexity of the base model $\mathcal{M}$. $S_t$ denotes the size of the S-DPO training set. **Time Complexity** | Methods | S-DPO | DPO | |:------:|:------:|:------:| | Complexity | $\Theta((K+1)(C_\mathcal{M}+1)S_t)$ | $\Theta(2KC\_\mathcal{M}S\_t)$ | --- Rebuttal Comment 1.2: Title: Thank you for your time Comment: Thank you very much for your valuable feedback. Your suggestions on the complexity comparison have helped us strengthen our work. We hope our replies have resolved most of your concerns. If so, we kindly ask if you might consider revising your score. As the rebuttal period is nearing its end, we would like to discuss any remaining issues you may have. Is there anything else you'd like to discuss?
Summary: The authors propose a modification to the Direct Preference Optimization (DPO) by incorporating a softmax loss to enhance the training of language model (LM)-based recommender systems. Strengths: - The paper is well-written. - The mathematical formulations are clear and appreciated. - The proposed method is fairly simple yet effective. - The results are convincing. Weaknesses: 1) Novelty - The use of multiple negatives in recommendation systems is not new and should be cited. Thus, Research Question 2 (RQ2) lacks novelty as it has already been explored. The novelty of using multiple negatives in LM-based recommender systems is uncertain. 2) Statements - Equation 8 represents the softmax loss, which has already been applied in other recommender systems like BERT4Rec, whereas Equation 12 does not represent the softmax loss, leading to confusion. - The claim in lines 225-226 "indicating the significant roles of knowledge and reasoning ability in language models for recommendation tasks." is not provable due to differences in training datasets. - The statement "However, indicated by the low valid ratio and suboptimal performance, untuned LM-based recommenders are limited by inadequate instruction-following capabilities or a lack of domain-specific knowledge" in lines 228-230 is incorrect as evidenced by high ValidRatio scores of ChatRec. 3) Replicability - The prompts used in the experiments are not provided, making replication difficult. 4) Evaluation - NDCG is usually preferred, and metrics like @10 or @20 are more commonly used than @1 in recommendation evaluations. For example, LM-based RecSys works like LLama4Rec use @5. - In Figure 2a, the statistical significance of the results is not specified. It is also unclear why the results are based on LLama and not LLama2. 5) Comparison - The comparison of training epochs between LM-based recommenders (5 epochs) and S-DPO (an additional 3 epochs) is unfair. 
- In Table 1, the best HR@1 is highlighted in bold, but the same should be done for the ValidRatio column even if your model does not perform the best in that aspect. - The statistical test used should be specified. - Optimizing hyperparameters for the proposed model without doing the same for competitor models is unfair. A fair comparison would involve equal time allocation for hyperparameter searches. - Figure 2b shows that S-DPO decreases faster but remains higher than DPO, making the comparison questionable. Similarly, Figure 2c lacks clarity on whether the values are comparable. - The statement "On the other hand, an excessively large β prevents the model from effectively learning ranking relationships, leading to suboptimal performance" is not supported by Figures 3b and 3c, which show consistent performance improvement. 6) Related work - The related work section is more effective when placed after the introduction to better contextualize the state-of-the-art before presenting the new method. - The discussion on LM for recommendation is not thorough, with many works cited together without detailed descriptions. Technical Quality: 2 Clarity: 2 Questions for Authors: - Consider replacing "side effect" with a more positive synonym. - The methodology section (2) appears empty; clarify if the true (new) methodology starts at section 3.1. - Equation (1) needs clarification as i_p and i_d represent items (identifiers) and not scores nor ranks. - Space before subsection name in line 148 - Line 159: "mutli" --> write "multi" - Clarify lines 174-180 regarding "hard negative items" and the theoretical basis for S-DPO's ability to mine hard negatives. - Section 3.2 is unclear, especially the statement "The structural correlation indicates that DPO and S-DPO are more suitable for recommendation than the language-modeling loss." - The definition of Rel.Ipv should be moved to the caption of Table 1.
Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: - The computational effort required for using more negative samples is not considered. It would be beneficial to compute an approximate empirical complexity of the method. - The use of the Movielens 100K dataset is a limitation due to its small size. A larger dataset like ML-1M would provide better validation while remaining computationally feasible compared to ML-20M. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Comment: **For Reviewer iyxD** I would like to express my gratitude for your detailed review and the valuable feedback provided. >**Comment 1: Uncertain Novelty.** We agree with you that introducing multiple negatives is important and has been widely explored in conventional recommenders, as we briefly discussed in Section 3.2. However, as you observe, LM-based recommenders, which follow a different training paradigm from traditional recommenders, have only recently been explored; in this setting, multiple negatives have been largely omitted, and an effective method for introducing them into the training pipeline has been lacking. To our knowledge, we are among the first to point out that multiple negatives are important in LM-based recommenders and to propose effectively instilling partial ranking information into LM-based recommenders through an alternated softmax version of DPO. >**Comment 2: Confusing Statement** - Clarification of Equation 12: Equation 12 is also an equivalent variant of the softmax loss, which can be rewritten as: $\mathbb{E}_{(u,i_p,\mathcal{I}_d)}\left[\log\frac{\exp(f(u,i_p))}{\exp(f(u,i_p)) + \sum_{i_d \in \mathcal{I}_d}\exp(f(u,i_d))}\right]$ - We will modify line 225 to "indicating the significant roles of knowledge and reasoning ability in language models for recommendation tasks in semantically informative datasets." - Untuned LM-based recommenders achieve suboptimal performance because of inadequate instruction-following capabilities (reflected by a low valid ratio and a low hit ratio) or a lack of domain knowledge (reflected by a high valid ratio but a low hit ratio). ChatRec, with GPT-4 as its backend, falls into the second case. We will modify line 228 to avoid confusion. > **Comment 3: Replicability.** In fact, we have uploaded all the code and prompts used at the anonymous link "https://anonymous.4open.science/r/S-DPO-C8E0", which is attached in the abstract of the submitted manuscript. We will further include all of our prompts in the appendix. 
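The softmax form of Equation 12 quoted under Comment 2 can be sketched in a few lines of Python; the function name, the scores, and the loss sign convention here are our own illustrative choices, not the paper's code:

```python
import math

def softmax_preference_loss(f_pos, f_negs):
    """Negative of the log term quoted above: the preferred item's score
    f(u, i_p) is contrasted against the scores f(u, i_d) of all
    dispreferred items at once, so lower values are better."""
    denom = math.exp(f_pos) + sum(math.exp(f) for f in f_negs)
    return -math.log(math.exp(f_pos) / denom)

# One preferred score contrasted against three negatives at once.
loss = softmax_preference_loss(2.0, [0.5, 0.0, -1.0])
```

Adding another negative (or raising any negative's score) grows the denominator and therefore the loss, which is the intuition behind folding multiple negatives into a single objective.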
> **Comment 4: Evaluation.** - We additionally compute HitRatio and NDCG; please refer to the supplementary material (Table 2) for the results. - Sorry for causing confusion: the results in Figure 2a are based on Llama2 instead of Llama, which will be corrected in the manuscript. > **Comment 5: Statistical Test.** We use a one-sided t-test for all of our statistical tests and find p-values less than 0.05 across all reported experiments. A clarification will be included in the implementation details. > **Comment 6: Comparison.** - It has been shown that 5 epochs are enough for model convergence. Following your suggestion, we conducted SFT for a further 3 epochs, and the results show that it brings no gain to the final performance. - We are adding a dedicated hyperparameter table to provide transparency and demonstrate that each model was given an equal opportunity for optimization. Please refer to the supplementary material (Table 1). - **Regarding Figures 2b and 2c**: We further ensure that the validation settings for DPO and S-DPO are consistent, allowing for a direct and fair comparison. The revised results are now included in the supplementary material (Figure 1). We compare the loss, relative likelihood, and absolute likelihood to validate the effectiveness of S-DPO. - **Regarding Figures 3b and 3c**: A very low β overly prioritizes ranking information, compromising the model's ability to follow instructions, as seen in the decreased valid and hit ratios. Conversely, an excessively high β constrains the model too much by the reference model's standards, leading to lower performance (hit ratio@1). A slight increase in valid ratio with β values greater than 1 suggests better adherence to the reference model's constraints, supporting our interpretation. >**Comment 7: Related work.** We agree with your suggestion and will reorganize the revised version. The lack of in-depth discussion is due to space constraints; we will include a more detailed discussion in the appendix. 
--- Rebuttal 2: Title: We edit the complexity Comment: >**Comment 8: Unclear section** - **Synonym.** We will replace it with a more positive synonym such as "ancillary benefit" in our manuscript. - **Empty section.** The methodology starts at Section 3.1. - **Equation (1).** We will modify the notation "$>_u$" to "$\succ_u$" to better distinguish the comparisons between items according to the preferences of user $u$ from the comparisons between scores or ranks. - **Lines 174-180.** S-DPO treats the gradients of different negative (dispreferred) items differently by assigning the gradient of each negative item an extra term $\frac{1}{\sum_{e^\prime_d \in E_d}\exp(g(e^\prime_d,e_d,x_u))}=\frac{\exp(r_\theta(x_u,e_d))}{\sum_{e^\prime_d\in E_d}\exp(r_\theta(x_u,e^\prime_d))}$. This term reflects the relative reward of each negative item compared with the other negative items. We can categorize negative items into two groups: (1) hard negative items, whose reward $r_\theta(x_u,e_d)$ is relatively high, making them more likely to be chosen by LM-based recommenders; (2) easy negative items, whose reward $r_\theta(x_u,e_d)$ is relatively low, making them less likely to be output. For hard negative items, the extra weight term tends to be larger, leading to a greater decline in likelihood. - **Section 3.2.** "Given the effectiveness of the BPR and InfoNCE losses in recommendation, we argue that sample-based losses that explicitly compare preferred and dispreferred items, such as DPO and S-DPO, are more suitable for training LM-based recommenders than the language modeling loss alone." - The remaining mistakes will be corrected in the revised version; thanks for pointing them out. > **Comment 9: Complexity.** $K$ denotes the number of negative items. $C_\mathcal{M}$ denotes the complexity of the base model $\mathcal{M}$. $S_t$ denotes the size of the S-DPO training set. 
**Time Complexity**

| Methods | S-DPO | DPO |
|:------:|:------:|:------:|
| Complexity | $\Theta((K+1)(C_\mathcal{M}+1)S_t)$ | $\Theta(2KC_\mathcal{M}S_t)$ |

--- Rebuttal 3: Title: Thank you for your time Comment: We sincerely appreciate your valuable comments. Your feedback on direct comparisons and complexity analysis is important for improving our method. We hope our responses have satisfactorily addressed most of your concerns. If this is the case, could we kindly ask you to consider adjusting your score? With the rebuttal period coming to a close, we are eager to discuss any further concerns you might have.
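As a quick sanity check of the complexity comparison above, one can compute the ratio of the two expressions directly ($S_t$ cancels; the function name and example values below are ours, for illustration only):

```python
def sdpo_over_dpo_factor(K, C_M):
    """Ratio of the two complexity expressions quoted above:
    (K+1)(C_M+1)S_t / (2*K*C_M*S_t); the training-set size S_t cancels.
    For large C_M this tends to (K+1)/(2K), i.e. just over one half."""
    return (K + 1) * (C_M + 1) / (2 * K * C_M)

# With 8 negatives and a large (illustrative) base-model complexity:
factor = sdpo_over_dpo_factor(8, 10**9)
```

So for a large base model, S-DPO with $K$ negatives costs roughly half as much as running $K$ separate pairwise DPO comparisons.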
Summary: Inspired by advancements in Direct Preference Optimization (DPO) and the success of the softmax loss in recommendation, the paper proposes Softmax-DPO (S-DPO). S-DPO enhances LM-based sequential recommenders by incorporating multiple negative samples in preference data. The paper demonstrates the superiority of S-DPO in modeling user preferences and boosting recommendation effectiveness. Strengths: 1. DPO, a method that considers preferences in NLP, has been extended to LM-based recommender systems, expanding NLP techniques to the recommender system domain. 2. Building upon related previous work such as reinforcement learning from human feedback (RLHF) and DPO, the motivation and approach of this paper are well-founded. Weaknesses: In the recommender system domain, considering preferences in the loss function is not particularly new. However, experiments and explanations for this are lacking. 1. There are no experiments comparing S-DPO with other preference-based losses in recommender systems (BPR, InfoNCE, as the paper mentions). If such comparisons are not feasible, the reasons for this are insufficiently explained. 2. The experiments lacked the application of the S-DPO loss to various base models. Consequently, the general applicability of S-DPO was not demonstrated. 3. Running DPO multiple times and S-DPO appear conceptually similar. However, there are no experiments comparing their efficiency. A comparison of time or memory usage seems necessary. Technical Quality: 3 Clarity: 2 Questions for Authors: Same as weaknesses Is there a justification for not conducting the experiments described above? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The LM methodology was brought to the recommender system domain; however, there seems to be a lack of discussion from the recommender system perspective. 
BPR loss is typically not used in sequential scenarios, so it would have been beneficial to emphasize that S-DPO specifically addresses preferences in sequential contexts. Additionally, conducting more experiments to validate this aspect would have been advantageous. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **For Reviewer gGah** Your main suggestions about considering additional base models help us substantiate the wide applicability of S-DPO. >**Comment 1: Lack of comparison with traditional preference-based losses.** Thank you for your insightful question, which raises a profound and previously unexplored issue: whether traditional preference-based losses can be directly applied to LM-based recommenders. Following your suggestion, we attempted to train LM-based recommenders using the BPR loss and the InfoNCE loss. However, due to the significant differences between losses designed for language models (e.g., SFT, DPO) and discriminative preference-based losses, and given the time constraints of the rebuttal period, we have not yet obtained results within a reasonable range. We will continue to adjust parameters and training methods over the next discussion week to provide a more reliable conclusion. We greatly appreciate your question and look forward to further discussion with you. Additionally, we want to emphasize that we have compared preference-based losses in traditional recommenders, such as GRU4Rec-BPR. > **Comment 2: General applicability of S-DPO** Great point! Following your suggestion, we have **extended our experiments** to include more base models. We selected language models with different architectures and sizes (**LLAMA1-7b, Pythia-2.8b, Mistral-7b**) and performed experiments on varying datasets (**LastFM, MovieLens**). We compared untuned LMs, LMs with only SFT as the training loss, and LMs with DPO and S-DPO for further training. Due to computational resource and time limitations, we experimented with S-DPO using **3 and 8** negative samples on two datasets. 
**Results on LastFM**

**LLAMA1-7b**

| Methods | HR@1 | ValidRatio |
|:------:|:-------:|:------:|
| LLAMA1 | 0.0465 | 0.5872 |
| SFT | 0.5980 | <u>0.9980</u> |
| DPO | 0.6084 | 0.9976 |
| S-DPO-3neg | <u>0.6285</u> | 0.9976 |
| S-DPO-8neg | **0.6365** | **0.9988** |

**Pythia-2.8b**

| Methods | HR@1 | ValidRatio |
|:------:|:-------:|:------:|
| Pythia | 0.0265 | 0.3648 |
| SFT | 0.1611 | 0.4281 |
| DPO | 0.1896 | 0.4220 |
| S-DPO-3neg | <u>0.1948</u> | **0.4689** |
| S-DPO-8neg | **0.2200** | <u>0.4685</u> |

**Mistral-7b**

| Methods | HR@1 | ValidRatio |
|:------:|:-------:|:------:|
| Mistral | 0.0633 | 0.7475 |
| SFT | **0.7828** | **0.9992** |
| DPO | 0.7415 | 0.9964 |
| S-DPO-3neg | 0.7679 | <u>0.9972</u> |
| S-DPO-8neg | <u>0.7820</u> | <u>0.9972</u> |

**Results on MovieLens**

**LLAMA1-7b**

| Methods | HR@1 | ValidRatio |
|:------:|:-------:|:------:|
| LLAMA1 | 0.0316 | 0.5158 |
| SFT | 0.3895 | **0.9684** |
| DPO | 0.3789 | **0.9684** |
| S-DPO-3neg | **0.4526** | 0.9474 |
| S-DPO-8neg | **0.4526** | 0.9579 |

**Pythia-2.8b**

| Methods | HR@1 | ValidRatio |
|:------:|:-------:|:------:|
| Pythia | 0.0421 | 0.5895 |
| SFT | 0.1053 | 0.5684 |
| DPO | <u>0.1271</u> | <u>0.8449</u> |
| S-DPO-8neg | **0.1474** | **0.8737** |

**Mistral-7b**

| Methods | HR@1 | ValidRatio |
|:------:|:-------:|:------:|
| Mistral | 0.0842 | 0.6737 |
| SFT | 0.4211 | 0.9894 |
| DPO | <u>0.4421</u> | 0.9684 |
| S-DPO-3neg | <u>0.4421</u> | **0.9895** |
| S-DPO-8neg | **0.4947** | **0.9895** |

The experimental results indicate that DPO generally enhances the performance of SFT across different language models, demonstrating the importance of preference information for recommendation tasks. By incorporating multiple negative examples, S-DPO can achieve further performance improvements on top of DPO. Moreover, as the number of negative examples increases, the model's performance also improves. >**Comment 3: Comparison between DPO and S-DPO.** We appreciate your valuable comment. 
To address this, we have **added effectiveness and efficiency comparisons between DPO and S-DPO**. $K$: the number of negative items. $C_\mathcal{M}$: the complexity of the base model $\mathcal{M}$.

**Time Complexity**

| Methods | S-DPO | DPO |
|:------:|:------:|:------:|
| Complexity | $O((K+1)(C_\mathcal{M}+1))$ | $O(2KC_\mathcal{M}S_t)$ |

The complexity of S-DPO scales with the factor $\frac{1}{2}+\frac{1}{2K}+\frac{1}{2C_\mathcal{M}}+\frac{1}{2KC_\mathcal{M}}$ compared to DPO. Since $C_\mathcal{M}$ is usually large for LLMs, the bigger $K$ is, the smaller the factor, meaning the more efficient S-DPO is compared with DPO. Empirically, when both methods consider 3 negative examples and are trained on the Goodreads dataset with 4 A100s, S-DPO takes 25h while DPO takes 53h, indicating the efficiency of our method. Additionally, we conducted experiments on three datasets comparing DPO with multiple negatives and S-DPO on LLAMA2-7b.

**LastFM**

| Methods | HR@1 | ValidRatio |
|:------:|:-------:|:------:|
| DPO | 0.6342 | 0.9972 |
| DPO-3neg | 0.6413 | 0.9964 |
| S-DPO-3neg | **0.6477** | **0.9980** |

**MovieLens**

| Methods | HR@1 | ValidRatio |
|:------:|:-------:|:------:|
| DPO | 0.4947 | 0.9684 |
| DPO-3neg | 0.4947 | 0.9474 |
| S-DPO-3neg | **0.5263** | **0.9895** |

**Goodreads**

| Methods | HR@1 | ValidRatio |
|:------:|:-------:|:------:|
| DPO | 0.6381 | 0.9900 |
| DPO-3neg | **0.6661** | 0.9900 |
| S-DPO-3neg | 0.6628 | **0.9950** |

The results show that S-DPO still achieves leading or comparable performance with less complexity. This can be attributed to S-DPO considering more negative examples during a single gradient update, resulting in more effective gradients compared to DPO. > **Limitation 1: Lack of discussion in the recommender system.** We are not sure we fully understand your point. We discussed LM-based recommenders and traditional recommenders in the related work section. 
> **Limitation 2: Conducting more experiments and emphasizing that S-DPO specifically addresses preferences in sequential recommendation** Thanks for your valuable comments! Following your suggestion, we added more experiments and will emphasize the important role of S-DPO in sequential recommendation. --- Rebuttal Comment 1.1: Title: Sorry for causing confusion Comment: We apologize for our mistakes. The revised complexity analysis is as follows: $K$ denotes the number of negative items. $C_\mathcal{M}$ denotes the complexity of the base model $\mathcal{M}$. $S_t$ denotes the size of the S-DPO training set.

**Time Complexity**

| Methods | S-DPO | DPO |
|:------:|:------:|:------:|
| Complexity | $\Theta((K+1)(C_\mathcal{M}+1)S_t)$ | $\Theta(2KC_\mathcal{M}S_t)$ |

--- Rebuttal Comment 1.2: Title: Experimental results on comparing S-DPO with other preference-based losses (BPR and InfoNCE) Comment: It is noted that traditional similarity-based training losses are not well-suited for generative language-model-based recommenders, and thus cannot be directly applied in our context. To address this, we adapted the BPR and InfoNCE losses within the language model training framework, employing random negative sampling on the LastFM dataset using Llama2-7b. The experimental results are summarized below:

| Method | HitRatio@1 | ValidRatio |
| ---------- | ---------- | ---------- |
| LLAMA2 | 0.0233 | 0.3854 |
| LLAMA2-BPR | 0.0008 | 0.0152 |
| LLAMA2-InfoNCE | 0.0029 | 0.0246 |
| LLAMA2-SDPO | **0.6609** | **0.9900** |

From these results, we observe that adapting traditional losses for LM-based recommenders significantly diminishes the inherent capabilities of language models, while also failing to effectively capture domain-specific knowledge. This leads to a marked decrease in both the hit ratio and the valid ratio. From this experiment, we can see that traditional recommendation losses cannot be directly adapted to LM-based recommenders. 
Therefore, incorporating explicit ranking information and multiple negative examples into LM-based recommenders is a nontrivial task. S-DPO addresses this challenge by introducing a partial-order relationship into PL preference modeling and further deriving an effective loss with a softmax structure. We look forward to further discussing these findings with you. --- Rebuttal Comment 1.3: Title: Thank you for your time Comment: We greatly appreciate your valuable comments. Your insights on various base models and comparisons between DPO and S-DPO have significantly helped us improve our paper. We hope our responses have addressed most of your concerns. If they have, would you consider raising your score? If you have any remaining concerns, we would be eager to discuss them further, especially as the rebuttal period is about to end. Do you have any additional questions?
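For readers comparing the losses discussed in this thread, the textbook pairwise BPR objective (the baseline the authors adapted) can be sketched as follows; this is the generic formulation, not the authors' exact LM adaptation:

```python
import math

def bpr_loss(score_pos, score_neg):
    """Generic Bayesian Personalized Ranking objective:
    -log(sigmoid(s_pos - s_neg)). It pushes the preferred item's score
    above a single sampled negative per update (textbook form)."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_pos - score_neg))))

loss_easy = bpr_loss(2.0, -1.0)  # negative already scored far below
loss_hard = bpr_loss(2.0, 1.9)   # hard negative: scores nearly tied
```

Because each update contrasts only one negative, incorporating many negatives requires many separate comparisons, whereas the softmax structure of S-DPO handles them jointly in one term.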
Rebuttal 1: Rebuttal: We are delighted to see the contributions of our paper acknowledged by the majority of the Reviewers. Specifically, we appreciate the Reviewers' recognition of our clarity of presentation (iyxD, AMu8, 6tmb), well-founded theoretical analysis (gGah, iyxD, 6tmb), and effectiveness (iyxD, AMu8, 6tmb). We appreciate all the reviewers for their valuable comments and suggestions, which helped us improve our submission and strengthen our claims. Taking the Reviewers' suggestions into account, we summarize the updates to the paper as follows: - **Experiments on three language model backbones comparing S-DPO with DPO and SFT.** Following the suggestions of Reviewer gGah, we have incorporated three additional language model backbones to validate the generalization ability of S-DPO. - **Effectiveness and efficiency comparisons between DPO and S-DPO under the same number of negative samples.** Addressing the concerns raised by Reviewers gGah, iyxD, AMu8, and 6tmb, we have conducted experiments validating the effectiveness of S-DPO with better efficiency compared with DPO. - **More detailed explanations.** In response to Reviewers gGah, iyxD, AMu8, and 6tmb, we provide detailed explanations to clarify some of our statements, address questions, and provide a better understanding of our method. - **Details about S-DPO.** We have incorporated a detailed discussion of S-DPO, including hyperparameter selection, the data likelihood decline issue, and gradient analysis, to address the concerns of Reviewers iyxD and 6tmb. We have tried our best to address the main concerns raised by the reviewers in the limited time available, and we hope that these improvements will be taken into consideration. We also present the point-by-point responses for each reviewer below. Pdf: /pdf/615e261a02ece176ea4d531da5e8691511674640.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Symmetry Discovery Beyond Affine Transformations
Accept (poster)
Summary: This paper proposes a method for finding a transformation that is invariant to a given function. The transformation is restricted to be governed by a single parameter, so it is written as a flow described by a specific vector field. The method estimates the vector field by solving a polynomial regression. The authors evaluated the performance on three toy datasets and a classification dataset. Strengths: 1. Flow-based symmetry detection is novel and an interesting research direction. Weaknesses: 1. There is considerable room to improve the presentation. First of all, the terminology seems not standard in ML (e.g. I've never heard the term "machine learning function"). Second, the contents are not self-contained --- some unfamiliar notions appear without explanation (e.g. elbow curve). Third, it is not clear what the proposed method actually does. For example, it is said that the method solves Eq.(10), but it is not likely that we can always solve it. I mean, for some data, the equation may not have a solution. Such a case is not mentioned in the paper. 2. The capabilities and limitations of the proposed method are not fully mentioned. 3. The experiments are mainly on toy datasets and are not strongly convincing that the method is applicable to real problems. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. What is the transformation class that is (theoretically) handled by the method? 1. How did you solve Eq. (10), (13), (14) in the experiments? 1. What would happen if x contains some noise? 1. When does the proposed method fail? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations are not provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We have responded below. **Terminology**: With no intent to confuse the reader, we used the term "machine learning function" to cover very general types of functions which may appear within the context of machine learning. Such a function could be a regression function, a classification function, or a function offering a manifold description of given data, as in level set estimation or in the presence of a metric tensor. The elbow curve notion refers to a relatively sudden increase in function values, perceived as an elbow shape, since the function values immediately preceding the elbow point are usually close together. This is sometimes referred to as finding a "kink," as in [R1] within the context of selecting the "best" number of clusters $k$ for $k$-means clustering. In general, Eq. (10) may not have a solution, as is the case with OLS regression. Eq. (10) is estimated/fitted by constrained optimization of a selected loss function. We will clarify this in the revision. **Capabilities and Limitations**: We have clarified many points regarding the capabilities and limitations of our method in our response to the reviewers. We will include a discussion of these points in the revised version. **Only using toy datasets**: Current SOTA papers in symmetry detection commonly use simulated data. An experiment with simulated data allows for the identification of ground truth symmetry, allowing one to quantify the accuracy of a given method. Symmetry detection on real-world data can be applied, but it is harder to evaluate the ability of new methods to properly detect symmetry if ground truth symmetry is not known, as is generally the case with real data. Thus our experiments are consistent with other SOTA papers in this area. Nevertheless, we present an additional experiment on real data that we will include in the revision. 
This comes from a publicly-available dataset that deals with weather for four decades in the vicinity of Bear Lake, Utah. The dataset, along with a report describing how the data was sourced, is publicly available [R2]. The dataset gives daily weather attributes. It contains $14,610$ entries with 81 numeric attributes including the daily elevation level of the lake. The dataset contains precisely 40 years' worth of data from October of 1981 through September of 2021. We believe an understanding of the behavior of the weather in the past is relevant to this problem. Therefore, we first construct $13,149$ time series of length $1461$ in $81$ dimensions by means of a sliding window of length $1461$ days: the first time series is constructed using the first $1461$ days (the number of days in four years). The next time series is constructed using the second day through day 1462, and so forth. After converting the raw data to time series data, we apply a transformation on the data meant to extract time-relevant features of the data known as the Multirocket transform [R3]. We select $168$ kernels in our transform. The Multirocket transform transforms the data from $13,149$ time series of length $1461$ in $81$ variables to tabular data: $13,149$ entries in $1344$ variables. For such high-dimensional data, we turn to PHATE, a SOTA data visualization method [R4]. Using PHATE, we reduce the dimension to $2$, so that our new dataset has $13,149$ entries in $2$ variables. The resulting data appears to approximately lie on a circular shape and is shown in Figure 1 of the rebuttal attachment. Figure 1 in the rebuttal attachment suggests that the data is largely periodic. In fact, further experimentation reveals that the points for a given calendar year correspond to a traversal around the circular-like shape. 
Thus, for the analysis of non-seasonal weather patterns, it may be of use to examine features of this embedded dataset which are invariant under the periodic transformation, the approximate symmetry. Indeed, our method reveals an approximate invariant function given by \begin{equation*} f(x,y) = 0.33592 x^2 + 0.94189 y^2 - 2.9743 \cdot 10^{-4} = 0. \end{equation*} In Figure 2 of the rebuttal attachment, we replot the embedded data, colored this time by the value of the approximate invariant function. This experiment shows that symmetry can occur in real data, and that our method can detect symmetry and estimate invariant functions for real data. **Transformation class**: The transformations our method can handle are 1-parameter subgroups of transformation groups (Lie groups). We will clarify this in the revision. **Solving method**: Solutions to Eqs. (10), (13), and (14) are estimated using constrained regression. The specific loss function and hyperparameters vary by experiment, but we will clarify these details in the revision. **Noise**: Noise is common in regression problems, and it affects the quality of least squares estimates. We have noise present in the experiment that compares our method with LieGAN, since the dataset is not technically rotationally symmetric, but rather approximately rotationally symmetric. **When our method fails**: It may fail if the number of independent parameters exceeds the number of datapoints. The success of the method depends on optimization and loss function choices, as well as hyperparameters. We will clarify these points in the revision. [R1] T. Hastie, R. Tibshirani, and J. Friedman, “The Elements of Statistical Learning: Data Mining, Inference, and Prediction,” 2nd ed. Springer, 2009. [R2] B. D. Shaw et al., “Supplementary files for: ‘interactive modeling of bear lake elevations in a future climate’,” 2024. [R3] C. W. Tan, A. Dempster, C. Bergmeir, and G. I. 
Webb, “Multirocket: Effective summary statistics for convolutional outputs in time series classification,” CoRR, abs/2102.00457, 2021. [R4] K. Moon et al., “Visualizing structure and transitions in high-dimensional biological data,” Nature Biotechnology, vol. 37, pp. 1482 – 1492, 2019. --- Rebuttal Comment 1.1: Title: To authors Comment: Thank you for responding to my review. I also appreciate that the authors conducted additional experiments. My concerns are partially resolved, and I will increase my score.
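The approximate invariant reported for the Bear Lake embedding can be sanity-checked numerically: points on the ellipse it defines should make $f$ vanish. A minimal sketch using the rebuttal's fitted coefficients (the parametrization below is our own, not part of the rebuttal):

```python
import math

A, B, C = 0.33592, 0.94189, 2.9743e-4  # fitted coefficients from the rebuttal

def f(x, y):
    """Approximate invariant function: f(x, y) ~ 0 along the level set
    the embedded weather data clusters around."""
    return A * x**2 + B * y**2 - C

# Parametrize the ellipse A x^2 + B y^2 = C and evaluate f along it.
ax, by = math.sqrt(C / A), math.sqrt(C / B)
values = [f(ax * math.cos(t), by * math.sin(t))
          for t in (k * 2 * math.pi / 8 for k in range(8))]
```

On the level set itself $f$ vanishes up to floating-point error; for the real embedded data, small $|f(x, y)|$ plays the same role, which is what the coloring in Figure 2 of the rebuttal attachment visualizes.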
Summary: The paper presents a method for continuous symmetry detection under the manifold assumption. Crucially, the symmetries that are discovered by this method can extend beyond the affine ones. The method is tested and compared against the state of the art (LieGAN), and is found to outperform it in the scarce-data regime and to be competitive in the large-data regime. Strengths: * The paper connects symmetry to the invariance of particular functions under dynamical systems. Instead of explicit identification of the symmetry, the focus is on identifying the infinitesimal generators of the flows which correspond to the symmetries themselves. Once found, the vector field can be used to create symmetry-invariant features. * The exposition is overall very clear. Many examples are provided throughout. * The paper outlines a (sometimes) computationally cheap alternative to methods such as LieGAN. Weaknesses: * Depending on the machine learning function of interest, the method requires level set estimation, which is done by assuming that the level sets themselves can be found as a linear combination of polynomials. This seems very optimistic. * The elbow method used to identify the right number of polynomial terms to keep seems a bit brittle. * The scaling of a polynomial-based method is very unfavourable beyond very toy-like examples. * More generally, who chooses the feature functions that form the basis for the linear combinations? If these weaknesses are addressed, the rating from this reviewer may increase. Technical Quality: 3 Clarity: 4 Questions for Authors: Can we get a discussion of the computational complexity/scaling of this method? It seems like there are combinatorial explosions hiding just beneath the surface of the method. A proper discussion of this may result in an increase in the rating from this reviewer. 
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The authors have properly discussed the limitations of their model in terms of the experiments they have run. Not enough has been said in terms of the various null space estimations, which require knowledge of adequate “basis functions”, which cannot just be polynomials in most cases. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We have responded below to the concerns raised. **Use of Polynomials**: In general, level set estimation can be applied using linear combinations of any set of smooth functions, though our experiments do specialize to polynomial functions. However, smooth functions can be approximated by polynomials of sufficiently high degree and are thus universal approximators [R1]. **The elbow curve**: Instability can arise, and the results of the elbow curve can be misleading. This seems to primarily occur when the problem of degenerate expressions, discussed first in lines 182 - 204, is not properly controlled. For level set estimation, if both $f=0$ and $hf=0$ are discoverable expressions based on the pre-defined search space (for example, $x-y=0$ and $x^2-xy=0$), the loss function values may not generally follow an elbow curve, and any perceived elbow may not occur at the correct number of components for level set estimation. We discuss some workarounds for this problem, both for level set estimation and for vector field estimation, in lines 182-204, as well as 226-231. **Scaling beyond toy examples**: Vector fields with polynomial coefficients can characterize highly complex and non-trivial symmetries. Especially due to their universal approximation properties [R1], we expect that many symmetries that occur in the real world are expressible or approximately expressible in this form. Additionally, our method advances the current SOTA methods, as current SOTA methods consider a more restricted class of symmetries. Thus, we believe our contribution is an important advance. See our overall rebuttal for more details on our contributions. See also our response to reviewer U1Sk for an experiment on real data. **Feature function choice**: The features are chosen, in general, based on the types of symmetries sought. 
For example, if only affine symmetries are being sought, the feature functions for the vector fields are chosen to be affine functions. If no assumption on the form of the symmetries can be made, one can choose the features to be arbitrary polynomials, due to their universal approximation property [R1]. **Computational complexity**: The computational complexity of polynomial regression is comparatively low, especially in a single dimension [R2]. Moreover, manifold optimization algorithms on the matrix manifolds used herein are also of low computational complexity, with a recent manifold optimization algorithm obtaining computational complexity of $\mathcal{O}(\log(T)/\sqrt{T})$, where $T$ is the number of iterations [R3]. In contrast, current SOTA methods employ large neural networks, and such networks are known to require more computational resources to train. There is no combinatorial explosion in our method. In fact, we can obtain a precise count of the number of parameters our method requires. Each vector field in $m$ dimensions using polynomials of degree at most $n$ has the following number of coefficients: \begin{equation*} \dfrac{(m+n)!}{n!(m-1)!}. \end{equation*} The number of coefficients can increase quickly, particularly in higher dimensions, but for every fixed value of $m$, the number of coefficients increases as a polynomial function of $n$ rather than combinatorially. However, each coefficient in our model increases the power of our method to detect symmetries. In contrast, while SOTA methods employing large neural networks must have parameters like ours which characterize the symmetries, they also have additional parameters. The number of parameters in our method increases according to the number of vector fields in a basis of our search space, whereas in methods such as LieGAN, these parameters are required in addition to other parameters present in the neural network architecture. 
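The coefficient count above is easy to compute directly; a minimal sketch (the dimension and degree values chosen below are illustrative):

```python
from math import comb, factorial

def vector_field_coefficients(m, n):
    """Number of coefficients of a polynomial vector field in m dimensions
    whose components have degree at most n: (m+n)! / (n! (m-1)!),
    i.e. m components times C(m+n, n) monomials per component."""
    return factorial(m + n) // (factorial(n) * factorial(m - 1))

# Degree-2 vector fields in the plane: 2 * C(4, 2) = 12 coefficients.
count = vector_field_coefficients(2, 2)
```

For fixed $m$ the count grows like $n^m$, polynomially in the degree, matching the claim that there is no combinatorial explosion for fixed ambient dimension.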
To address the potential issue of a “large” number of parameters (a number still dwarfed by the parameter counts of neural networks taking $m$ inputs), we suggest limiting the search space to infinitesimal isometries, i.e., Killing vectors. We discuss this restriction further in Appendix A. [R1] A. Pinkus, “Weierstrass and approximation theory,” Journal of Approximation Theory, vol. 107, no. 1, pp. 1–66, 2000. [R2] L. Li, “A new complexity bound for the least-squares problem,” Computers \& Mathematics with Applications, vol. 31, no. 12, pp. 15–16, 1996. [R3] H. Kasai, P. Jawanpuria, and B. Mishra, “Riemannian adaptive stochastic gradient algorithms on matrix manifolds,” 2019. --- Rebuttal Comment 1.1: Comment: I wish to thank the authors for their careful and thorough addressing of this reviewer’s questions. The discussion of the scaling in terms of ambient dimension and polynomial degree is well received; while there may not be a combinatorial explosion for fixed ambient dimension as a function of the chosen degree, high ambient dimension still seems to be a non-ideal setting. Relatedly, the example provided by the authors on a real-world dataset seems to have been conducted on a projection of the original, high-dimensional dataset (via application of PHATE). While such a procedure is certainly appropriate and convincing in this case, it is not clear whether it is also necessary for the success of the proposed method of symmetry discovery. After considering the answers currently provided in this rebuttal, this reviewer confirms their rating. --- Reply to Comment 1.1.1: Title: Extension of experiment to high dimensional data Comment: We thank the reviewer for the feedback. In light of the question of conducting the method in a high-dimensional setting, we have applied our method directly to the rocket-transformed Bear lake data of dimension 1344. We provide a summary of the experiment herein.
We begin level set estimation by restricting our search space to polynomials of degree 1 (or less). We do this to avoid the discovery of degenerate components of the level set function. The first elbow curve is obtained using increments of integer multiples of 84, so that each subsequent iteration corresponds to an integer multiple of 84 components of the level set model. We identify $15 \cdot 84$ as the elbow point. With $15 \cdot 84$ degree-1 components, fitting completes in approximately $555$ seconds. We have used the MSE loss and Riemannian SGD with a learning rate of $0.01$, training for $1000$ epochs. (For comparison with SOTA methods, LieGAN required approximately 175 seconds to detect symmetry in dimension 2, training for 100 epochs.) Having obtained a degree-1 polynomial estimation of the data, we project the data of dimension 1344 onto the space of dimension $1344-15 \cdot 84 = 84$, similar to our experiment in 10 dimensions in Appendix C5 of the paper. Realizing there may still be degree-1 polynomial terms, we generate another elbow curve, this time in incremental steps of 12. We find an elbow point at $6 \cdot 12$, training with the same loss function, number of epochs, and optimization algorithm and parameters as before. These iterations take significantly less time, being in a lower number of dimensions, with the iteration at $6 \cdot 12$ components taking approximately 3 seconds. Again, we project the data onto the lower-dimensional space implied by our level set description of the data. (Incidentally, this projection is, in effect, using constant-polynomial vector fields to obtain manifold coordinates as the flow of these vector fields. This is possible since the vector fields are so simple. We do not explicitly identify the vector fields: see the 10-dimensional experiment in Appendix C5 for an analogous experiment.)
We continue the search for degree-1 polynomial components of the level set function, now in a mere 12 dimensions. Another elbow curve reveals an elbow point at 8, and we again project the data so that the final dataset lies in 4 dimensions. Now that we are confident that no additional degree-1 polynomial terms exist, we expand our search space to polynomials of degree 2. Finding no convincing elbow curve, we conclude the level set estimation step, having found no components of complexity greater than degree-1 polynomials in this case. The symmetry of the level set function was exploited to explicitly reduce the dimension of the dataset, owing to the simple nature of a level set function with strictly degree-1 polynomial components. Therefore, no additional symmetry detection efforts are necessary, and our experiment is concluded. This experiment shows that our method can be applied to datasets of higher dimension. Future work includes the study of symmetry for high-dimensional datasets. To date, our method appears to be the only one capable of conducting continuous symmetry detection in high dimensions.
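To make the projection step concrete, here is a small numerical sketch (a toy stand-in of our own, not the 1344-dimensional Bear lake data): when every discovered level-set component is a degree-1 polynomial, the components are linear relations satisfied by the centered data, so they span a near-null space of the data matrix, and projecting onto its complement reduces the ambient dimension exactly as described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points in 6-D satisfying two unknown degree-1 relations,
# i.e. effectively lying in a 4-D subspace. (Dimensions are illustrative
# only; the experiment above used 1344-D data.)
latent = rng.normal(size=(500, 4))
X = latent @ rng.normal(size=(4, 6))

# Degree-1 level set components w . x = const correspond to near-null
# directions of the centered data matrix.
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
null_mask = s < 1e-8 * s[0]
k = int(null_mask.sum())          # number of degree-1 components found

# Project onto the complement of the discovered components, reducing
# the ambient dimension from 6 to 6 - k.
X_reduced = Xc @ Vt[~null_mask].T
```

The same idea iterates: after each projection, a fresh elbow curve on the reduced data can reveal further low-degree components.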
Summary: The paper uses standard ideas from differential geometry to find symmetries in datasets. The procedure is the following: 1. Estimate a parameterization of the dataset (what the authors call machine learning functions). This step looks like manifold learning. 2. Find a vector field under which the machine learning functions are invariant. 3. Find a coordinate system for the invariant space of the vector field found in 2. Though the employed techniques are not exactly the same, the ideas remind me of this paper https://arxiv.org/abs/2008.04278 Strengths: - The problem the paper addresses is very interesting and can have many applications. - Several numerical examples are presented. Weaknesses: - The paper should state the assumptions under which the algorithms work. - In particular, it seems that one necessary assumption for the first step is that the data lies in a manifold that can be parameterized by a single chart (the f). Is that correct, or can this assumption be bypassed somehow? - Another seemingly needed assumption is that the group of symmetries one can learn is a 1-parameter group. If one has a 2-parameter group then it would be given by another vector field, and it is not obvious how to make sure that the 2 vector fields are compatible. Is this a necessary assumption too? - The paper mentions that the techniques they develop also work for discrete groups, but gives only a brief discussion in Appendix B. I don't see how that's the case. Say that the data has a symmetry with respect to an unknown action of an unknown permutation group. How can this method find it? It is not obvious how the algorithm would work in this case. Technical Quality: 3 Clarity: 2 Questions for Authors: In addition to the questions above: - What is the rationale behind assuming that the h functions are polynomials? Could this be done by implementing the h functions with MLPs?
Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The paper doesn't explicitly state the technical limitations of their approach (see weaknesses). The paper could be improved significantly by stating the mathematical assumptions under which the algorithms work. Additional discussions on the dependence on the dimensionality of the data, dimensionality of the symmetry group, number of samples needed, etc, would improve the paper as well. Even if just empirical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We have responded below to the concerns raised. **Assumptions**: The single chart assumption is not necessary. Consider $f(x,y,z) = -x^2-y^2+z^2-1$. The surface $f=0$ is a hyperboloid of two sheets. We can estimate a continuous symmetry of this hyperboloid, namely a rotation in the $(x,y)$ plane, though a single coordinate chart cannot be given. Level set estimation does not deal with coordinate charts, which is a benefit of using a level set to characterize an embedded manifold. We also do not assume that the symmetry group has dimension 1; we assume only that the group has 1-parameter subgroups. There may be several 1-parameter subgroups, which our method can handle. The discovered vector fields are compatible in the sense that they admit a common set of invariant functions. This is described in lines 112-120, where we seek vector fields (each representing a 1-parameter subgroup) which annihilate the machine learning functions. **Discrete transformations**: If one can express the discrete transformation parametrically, this method can be applied to discrete transformations. Many examples stem from continuous symmetries with a fixed continuous parameter, such as a rotation in the plane by a fixed but arbitrary angle $\theta$, where $\theta$ is the parameter to be optimized. Another example is a 2-d reflection about a straight line passing through the origin. Such a line can be characterized by the equation $ax+by=0$, and a formula for the reflection can be written as \begin{equation*} S(x,y;a,b) = \dfrac{1}{a^2+b^2} \begin{bmatrix} b^2-a^2 & -2ab\\\\ -2ab & a^2-b^2 \end{bmatrix} \begin{bmatrix} x\\\\ y \end{bmatrix}. \end{equation*} Optimizing the parameters $a$ and $b$ in $f(S(x)) = f(x)$ for some function $f$ would yield the "best fit" line of reflection under which $f$ is symmetric.
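As a small numerical check of this idea (our own sketch, not the authors' implementation), one can build the reflection about the line $ax+by=0$ from the standard formula $S = I - 2nn^\top$, with unit normal $n=(a,b)/\sqrt{a^2+b^2}$, and verify that a function symmetric about that line satisfies $f(S(x))=f(x)$:

```python
import numpy as np

def reflection(a: float, b: float) -> np.ndarray:
    # Reflection across the line a*x + b*y = 0, via the standard
    # formula S = I - 2 n n^T with unit normal n = (a, b)/|(a, b)|.
    n = np.array([a, b], dtype=float)
    n /= np.linalg.norm(n)
    return np.eye(2) - 2.0 * np.outer(n, n)

# f(x, y) = x^2 + y is symmetric about the y-axis (the line x = 0,
# i.e. a = 1, b = 0), so f(S p) should equal f(p) for any point p.
f = lambda p: p[0] ** 2 + p[1]
S = reflection(1.0, 0.0)
p = np.array([0.7, -1.3])
f_before, f_after = f(p), f(S @ p)
```

An optimizer over $(a,b)$ would then minimize a discrepancy such as $\sum_i |f(S p_i) - f(p_i)|$ over the data to find the best-fit line of reflection.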
Permutations may not be expressible parametrically, and we will state this explicitly as a limitation of the method, since previous work with permutation groups has been done and is of interest. **Polynomials vs. MLPs**: Since MLPs are universal approximators (as are polynomial functions), our method can be applied to MLPs as well. The main reason MLPs are not used here is the difficulty of using them for level set estimation: the MLP would need to output 0 for every training input and yet not be the zero function. Meanwhile, we assume that the invariant functions are polynomials primarily for two reasons. First, in the case of affine symmetry, the invariant functions are typically expressed in terms of polynomials. Second, polynomials can approximate smooth functions with an arbitrary degree of accuracy [R1] and are easy and transparent to manipulate. **Additional Discussion**: We will include a discussion of the number of parameters needed to estimate the symmetries in dimension $m$ using degree-$n$ polynomials. This point is discussed in our response to reviewer uBWo. The number of parameters relates to the dimension of the symmetry group and can also provide insight into the number of samples needed. [R1] A. Pinkus, “Weierstrass and approximation theory,” Journal of Approximation Theory, vol. 107, no. 1, pp. 1–66, 2000. --- Rebuttal Comment 1.1: Comment: I appreciate the response by the authors. The assumption on the 1-parameter subgroups is clear, and I suggest it be stated more explicitly as an assumption under which the algorithm works. The clarification of when one can use this method with discrete groups is useful too. I understand this is not the main point of the paper, but since it is mentioned in the abstract I think adding the assumption is necessary. The assumption on the manifold is a bit less clear. Does it work for any manifold, or does it need to be the level set of a polynomial or system of polynomials?
I'll increase my score, but for the final version I'd like to see a remark stating the precise mathematical assumptions on the data and symmetry group assumed by the algorithms. It doesn't have to be a theorem. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response. Level set estimation works under the assumption that the data lies, or at least approximately lies, on an embedded submanifold of the original feature space. Every embedded submanifold can be represented as a level set of a smooth function, at least locally [R1]. When a smooth machine learning function $f$ is given, we can estimate vector fields using the equation $X(f)=0$. The function $f$ may be related to level set estimation, or it could be another type of machine learning function, such as a regression function. We assume that the dataset lies on a differentiable manifold $M$, that the function $f$ is a smooth function on $M$, and that $X$ is a tangent vector field on $M$. Though our examples commonly examine symmetries of level sets with polynomial components, it is not necessary to assume that the components of any level set are polynomials. We will clarify these assumptions in the revision. [R1] J. M. Lee, Introduction to Smooth Manifolds. Springer New York, NY, 2012.
Summary: The authors address a challenging problem: discovering symmetry in given data, where such symmetry may include non-affine transformations. They observe that a one-parameter family of symmetric transformations can be represented as a vector field. In the proposed method, they first find machine learning functions (level set estimation) for the given data, and then find vector fields which annihilate the machine learning functions. Strengths: The authors address an interesting problem: symmetry detection beyond affine transformations. Their observation connecting transformations and vector fields is technically sound. Weaknesses: While the title of the manuscript is quite general, their method seems to rely heavily on the choice of pre-determined models for the machine learning functions (Sec. 3.1) and vector fields (Sec. 3.2). However, no experiment addresses the impact of the choice of those pre-determined models. In addition, all of the experiments assume we already know a proper pre-determined model of vector fields. The case where the pre-determined parametric model of vector fields does not cover the GT vector field used for data generation is not discussed. To address the challenging problem of non-affine symmetry, this should be handled more carefully. Please see the question about the cosine similarity in "Questions" below. Results of affine symmetry detection (Sec. 4.1) seem worse than LieGAN's. While LieGAN produces results which clearly converge to the GT in terms of both bias and variance, the proposed method does not. The proposed method has benefits in the low-sample regime and in speed, but this contribution seems too marginal. For the non-affine symmetry experiment (Sec. 4.2), experimental details are missing. I cannot consider it a good result unless I can see more case studies with different choices of pre-determined models. These weaknesses make me doubtful about the results and benefit of the proposed method.
Technical Quality: 3 Clarity: 2 Questions for Authors: Discuss the following relevant reference: * Desai, Krish, Benjamin Nachman, and Jesse Thaler. "Symmetry discovery with deep learning." *Physical Review D* 105.9 (2022): 096031. The cosine similarity metric used in Section 4 is computed from the coefficients of the pre-determined model, which determine the overall vector field, rather than as an average of cosine similarities for vectors at each point, right? If so, how about the following case: in Sec. 4.2, even if we got a vector field $2y f \partial_x + 3x^2 f \partial_y$, it is also a correct vector field describing the symmetry of the given data. If you are trying to propose a metric to evaluate non-affine symmetry, then your metric should also produce zero value for this case. Please discuss this circumstance. Which pre-determined model for the vector field are you using in Sec. 4.2? It must be clarified. The result in Equation (16) seems able to be produced by LieGAN as well, since the equation indicates an affine symmetry. Why are you not comparing with LieGAN? Issues on clarity of exposition. * Attach color bars for Figures 1-2 * It would be better to clarify which values $B$ and $w$ in Eq. (1) and $M$ and $W$ in Eq. (13) correspond to for each experiment in Section 4. Minor typos * Line 61: "exampe" -> "example" * Equation (2): $\partial_{x^i}$ -> $\partial x^i$ at the denominator Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The following limitation should be emphasized, if my understanding is not wrong. * Based on a pre-determined model of one-parameter symmetries, this work finds the most suitable one-parameter symmetry within the pre-determined model Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We have responded below to the concerns raised. **Pre-determined models**: We believe that we are the first to detect continuous symmetries in the context of machine learning beyond the affine group, despite work on the subject spanning decades. Our method of estimating vector fields is regression with general additive models, where the coefficients of the model features are constrained: we specialize to polynomials for the purpose of illustration and better comparison with existing methods for symmetry detection. We have showcased examples in which polynomial-based models are most appropriate, and although experiments which make use of other types of smooth functions can be devised, the only aspect of our method which changes in these cases is the components of the feature matrix used to estimate the coefficients. Therefore, our method can be trivially adapted to non-polynomial models. Later in this response, we will present the results of an experiment which uses non-polynomial functions. A suitable pre-determined model for the detection of manifold symmetries would rely on the nature of the manifold itself. However, it appears that although the manifold assumption is widely used in machine learning, very little work has been done to characterize the manifolds on which data is assumed to lie. Therefore, our choice to specialize to polynomial-based models is no less valid than other specializations. Additionally, polynomials can be used to approximate any continuous function [R1] and thus, in theory, can characterize any continuous symmetry, at least locally. **Covering the GT vector field**: If the degrees of the polynomial functions of a vector field are sufficiently high, the vector field can approximate a suitable ground truth vector field. 
We also reiterate that current SOTA methods, if recharacterized in terms of vector fields, would assume that the coefficients of the vector fields are degree-1 polynomials. Our presentation of symmetry detection in terms of vector fields opens up a great many possibilities for the types of functions used, although we use polynomials for better comparison with existing methods and due to their universal approximation property [R1]. **LieGAN comparison**: For every value of $N$, our method outperformed LieGAN in most trials. However, the average score of our method was brought down by outlier trial runs, likely due to poor initialization of the model parameters. Such a shortcoming could easily be overcome by fine-tuning. In the spirit of a fair comparison, we did not perform any hyperparameter tuning (however simple) for our method, since we did not perform any hyperparameter tuning for the LieGAN method. It is also evident that the error bars for the scores of both methods overlap, except for $N=200$, where our method vastly outperforms LieGAN. Thus, the experimental results suggest that our method is comparable to LieGAN in terms of accuracy, except when $N$ is low. Ultimately, the purpose of this experiment was to show that our method competes with SOTA in terms of accuracy when detecting affine symmetries, while offering a computational advantage. However, using the median as the estimate and the IQR for the error bars gives the results in Table 1 of the rebuttal attachment. We now present an additional experiment that uses both linear terms and a few sinusoidal terms as a pre-determined model. First, we generate 2048 numbers $x_i$ and 2048 numbers $y_i$, each from $U(0,2\pi)$.
Next, for each pair $(x_i,y_i)$, we obtain $z_i$ by means of $z_i = \sin(x_i)-\cos(y_i)$, so that a ground-truth level set description of the data is given as $z-\sin(x)+\cos(y) = 0.$ We first apply our level set estimation method to estimate this level set. We optimize the coefficients of the model \begin{equation*} a_0 + a_1 x + a_2 y + a_3 z + a_4 \cos(x) + a_5 \cos(y) + a_6 \cos(z) + a_7 \sin(x) + a_8 \sin(y) + a_9 \sin(z)=0 \end{equation*} subject to $\sum_{i=0}^9 a_i^2 = 1$. In light of Eq. (10) in the paper, the matrix $B$ has a row for each of the 2048 tuples $(x_i,y_i,z_i)$, and 10 columns, which columns correspond to the 10 different feature functions in our pre-determined model. The vector $w$ contains all 10 parameters $\\{a_i\\}_{i=0}^{9}$. Using the $L_1$ loss function and the (Riemannian) Adagrad optimization algorithm with learning rate $0.01$, our estimated level set description is \begin{equation*} -0.57737 z - 0.57713 \cos(y) + 0.57756\sin(x) = 0, \end{equation*} which is approximately equivalent to the ground truth answer up to a scaling factor. **Relevant reference:** We will include a brief discussion of the paper by Desai et al. We note that LieGAN outperformed this work in their experiments, and thus a comparison is unnecessary. **Details about Section 4.2 experiment**: For this experiment, the search space for level set estimation was limited to cubic polynomials, while the search space for vector fields was limited to quadratic coefficients. In this setting, the only valid symmetries are constant multiples of the given ground truth symmetry. It is a good point that $fX$ may not have a favorable cosine similarity. Thus, it seems that the cosine similarity metric can only be applied where the issue of uniqueness can be controlled, as in this particular example. We will discuss this limitation in the revision. The purpose of this experiment is to show that our method can detect non-affine symmetries, where current SOTA cannot. 
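The sphere-constrained level set fit described earlier in this response is linear in its parameters, so besides Riemannian optimization it admits a direct solution: minimizing $\|Bw\|$ subject to $\|w\|=1$ is solved by the right singular vector of $B$ with the smallest singular value. The sketch below (our own illustration, not the authors' code) recovers the $z-\sin(x)+\cos(y)=0$ relation this way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data from the ground-truth level set z - sin(x) + cos(y) = 0.
x = rng.uniform(0.0, 2.0 * np.pi, 2048)
y = rng.uniform(0.0, 2.0 * np.pi, 2048)
z = np.sin(x) - np.cos(y)

# Feature matrix B: one row per sample, one column per feature in the
# model a0 + a1*x + a2*y + a3*z + ... + a9*sin(z) = 0.
B = np.column_stack([
    np.ones_like(x), x, y, z,
    np.cos(x), np.cos(y), np.cos(z),
    np.sin(x), np.sin(y), np.sin(z),
])

# argmin ||B w|| subject to ||w|| = 1: smallest right singular vector.
w = np.linalg.svd(B, full_matrices=False)[2][-1]

# Compare (up to sign and scale) with the GT coefficient direction for
# z - sin(x) + cos(y) = 0, i.e. nonzero weights on z, cos(y), -sin(x).
gt = np.zeros(10)
gt[3], gt[5], gt[7] = 1.0, 1.0, -1.0
alignment = abs(w @ gt) / np.linalg.norm(gt)
```

This closed-form route is specific to the linear-in-parameters sphere constraint; the Riemannian optimizers used in the rebuttal apply more generally.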
See also our overall rebuttal for further discussion. [R1] A. Pinkus, “Weierstrass and approximation theory,” Journal of Approximation Theory, vol. 107, no. 1, pp. 1–66, 2000. --- Rebuttal 2: Comment: I appreciate the response and the additional discussion by the authors. Some of my concerns have been resolved, but others remain. Q1. LieGAN comparison for affine symmetry detection. With the additional experiment in the rebuttal attachment, the benefit of the proposed method against LieGAN has now become clear. While the weakness regarding initialization must be discussed in the paper, the result is sufficient to support the benefit. The other concerns are related to the "predetermined models". Here I would like to reorganize them as follows, narrowing the scope from general non-affine symmetries to coefficients of predetermined models of vector fields: Q2. Is finding vector fields enough to say "we solve the non-affine symmetry detection problem"? Q3. Is finding coefficients of predetermined regression models of vector fields enough to say "we solve the non-affine symmetry detection problem"? Q4. Are the coefficients of predetermined models of vector fields a plausible evaluation metric? The authors address Q2 and Q3 well. While the authors' discussion of these questions, including the new experiment with the sin/cos model, should be added to the manuscript, Q2 and Q3 cannot be a reason to reject this paper. I now agree that this paper has its own merits supporting acceptance, including SOTA performance for affine symmetry detection and an approach that reduces non-affine symmetry detection to finding a vector field. However, Q4 remains a concern from an academic perspective: this paper could lead future researchers to misunderstand what non-affine symmetry is and how it should be evaluated.
One reason I asked "The case that pre-determined parametric model of vector fields does not cover the GT vector field used for data generation is not discussed." is that I wanted the authors to realize the cosine similarity metric for coefficients does not make sense in that case and to provide another plausible metric. The universal approximation property does not address this issue. Let me explain why the evaluation metric of cosine similarity over coefficients is strange, by analogy to another problem formulation. Suppose there is a computer vision method where a neural network produces output images. Then the method must be evaluated using an image distance metric between the output images from the proposed network and GT images. It would be strange and misleading, on the other hand, to assume there are GT network parameters which exactly produce the GT images and to evaluate the method using the distance between network parameters. It seems illogical to claim the benefit of the non-affine symmetry detection results when there hasn't been sufficient discussion on how non-affine symmetry should be evaluated. This could establish an incorrect evaluation method for future researchers, which might undermine the benefits of the proposed method. For this reason, my current score is "borderline reject". The minimum requirement for acceptance is as follows: * Clarify that the evaluation metric is incomplete, since it is not defined on vector fields themselves but depends on the choice of parametric representations of vector fields. Large changes in vector fields might then induce relatively small changes in coefficients, and vice versa. * At least one additional experiment for the case where the pre-determined parametric model of vector fields does not cover the GT vector field used for data generation.
The authors should then clarify that they currently do not know a plausible evaluation metric for this case and that finding such a metric is important future work, and qualitative results showing the GT and estimated vector fields must be included. --- Rebuttal Comment 2.1: Comment: We thank the reviewer for their response. First, we acknowledge the reviewer's point that the cosine similarity evaluation metric used in the non-affine symmetry detection experiment is incomplete. When a GT vector field is not strictly expressible in terms of a pre-determined model, the evaluation metric used is not applicable. Moreover, the uniqueness issue with vector fields is not addressed by this metric, since vector fields $X$ and $fX$ are not generally evaluated equally. Second, we provide an experiment in which the GT vector field is not recovered by our method. For this, we use the dataset provided in a previous response, where 2048 numbers $x_i$ and 2048 numbers $y_i$ are generated, each separately from $U(0,2\pi)$. Numbers $f_i$ are calculated by means of $f_i=\sin(x_i)-\cos(y_i)$, and we seek to identify a symmetry of $f(x,y) = \sin(x)-\cos(y)$. A GT vector field which characterizes the symmetry of $f$ is given as \begin{equation*} X = \sin(y)\partial_x - \cos(x)\partial_y. \end{equation*} Applying our method with a pre-determined model of degree 2 polynomials gives an estimated vector field $\hat{X}$ of \begin{equation*} \hat{X} = \left( 0.7024-0.1874x-0.2203y+0.0121x^2+0.0242xy+0.0133y^2 \right) \partial_x \end{equation*} \begin{equation*} + \left( -0.5783+0.2665x+0.1236y-0.0311x^2-0.0150xy-0.0097y^2 \right) \partial_y. \end{equation*} This result was obtained using the L1 loss function and the Riemannian Adagrad optimizer with learning rate $0.1$, training for $5000$ epochs. It is clear that the estimated vector field does not cover the GT vector field.
Moreover, the limitations of the cosine similarity as used in experiment 4.2 are evident, since this evaluation metric cannot be applied in this case. As the reviewer has said, this leaves the problem of finding suitable evaluation metrics as an open problem in symmetry detection, since no suitable evaluation metrics have been applied in a general setting. We will clarify this point in the revision. Third, we offer the following improved evaluation metric as a plausible alternative. A cosine similarity can be defined between two smooth functions directly, without using parameters in a pre-determined model. The set of $\mathcal{C}^{\infty}$ functions on a closed, bounded subset $\Omega$ of $\mathbb{R}^n$ forms a vector space which can be equipped with the inner product defined by the definite integral of the product of two functions: \begin{equation*} \langle f,g \rangle = \int_{\Omega} fg dx. \end{equation*} Thus, with a norm induced by this inner product, a cosine similarity between functions $f$ and $g$ can be obtained by means of \begin{equation*} \cos(\theta) = \dfrac{\langle f,g \rangle}{||f|| \cdot ||g||}. \end{equation*} An improved evaluation metric for vector fields is thus an aggregation (such as the mean) of cosine similarity scores for each component pair of the vector fields. Concretely, given vector fields $X=f_i\partial_{x^i}$ and $Y=g_i\partial_{x^i}$, their similarity can be estimated by \begin{equation*} \text{sim}\left( X,Y \right) = \dfrac{1}{N}\sum_{i=1}^{N} \dfrac{| \langle f_i,g_i \rangle |}{||f_i|| \cdot ||g_i||}, \end{equation*} which will take values in $[0,1]$, with $0$ signifying minimal similarity and $1$ signifying maximal similarity. In our example, $x_i,y_i \in [0,2\pi]$, so that a suitable domain for integration is $[0,2\pi] \times [0,2\pi]$. (The distribution of the data defines this domain of integration in general.) 
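A Monte Carlo sketch of this proposed metric (our own illustration; the helper names are hypothetical) estimates each inner product $\langle f,g\rangle=\int_\Omega fg\,dx$ by sampling the domain; the constant volume factor cancels in the cosine ratio. Since the cosine is invariant under positive rescaling, a constant multiple of a vector field scores 1 against it, as desired:

```python
import numpy as np

rng = np.random.default_rng(0)

def fn_cosine(f, g, pts):
    # Monte Carlo estimate of <f, g> / (||f|| ||g||), where <f, g> is the
    # integral of f*g over the data domain (the volume factor cancels).
    fv, gv = f(pts), g(pts)
    return float(fv @ gv / (np.linalg.norm(fv) * np.linalg.norm(gv)))

def vf_similarity(X_comps, Y_comps, pts):
    # Mean of per-component |cosine| scores, taking values in [0, 1].
    return float(np.mean([abs(fn_cosine(f, g, pts))
                          for f, g in zip(X_comps, Y_comps)]))

# Domain [0, 2*pi]^2, matching the sin/cos experiment above.
pts = rng.uniform(0.0, 2.0 * np.pi, size=(20000, 2))

# GT symmetry of f = sin(x) - cos(y):  X = sin(y) d_x - cos(x) d_y.
X = [lambda p: np.sin(p[:, 1]), lambda p: -np.cos(p[:, 0])]
# A constant rescaling of X is an equally valid symmetry and scores 1.
X3 = [lambda p: 3.0 * np.sin(p[:, 1]), lambda p: -3.0 * np.cos(p[:, 0])]

score_scaled = vf_similarity(X, X3, pts)
```

Note this sketch averages per-component cosines exactly as the formula above does; multiplication of $X$ by a non-constant function would still change the score, which is the residual imperfection acknowledged below.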
We apply the above formula using the GT vector field and our estimated vector field and obtain a similarity score of approximately $0.62$. For the results of the final trial in experiment 4.2, the similarity scores would be adjusted from $0.9919$ to $0.9976$ (ours) and from $0.4930$ to $0.4583$ (LieGAN's). Our new similarity score is imperfect, since the multiplication of $X$ by any smooth function is a valid GT vector field. However, this new similarity score is an improvement over the original, since it does not rely on pre-determined models. We offer this improved similarity score as a plausible alternative to the parameter-based cosine similarity score used in experiment 4.2. Another possible evaluation metric would make use of the point-wise inner product of two vector fields, as suggested in the reviewer's previous comment about an average of cosine similarities for vectors at each point. This may be suitable, though the inner product of two tangent vectors can only be computed if a metric tensor is given; thus, this method would need to make an assumption about the metric tensor for the dataset. As we discuss in Appendix A, it is common in machine learning to assume that the metric tensor is the Euclidean metric. However, as we mention in the appendix, we believe this commonly accepted assumption may eventually be challenged, and so we prefer the first alternative evaluation metric. --- Rebuttal 3: Comment: I appreciate the authors' enthusiastic discussion and additional experiment. To state my conclusion first, I have raised my score to weak accept. The first version of the manuscript evaluated symmetry detection in the coefficient space of a predetermined model of vector fields, but the authors finally proposed a way to evaluate in the space of vector fields themselves. The reason I leaned toward rejection was the coefficient-space evaluation. The latter evaluation is, to me, much more technically sound than the former.
Of course, evaluating on the space of vector fields still has limitations, since handling symmetries in the sense of {$S$ | $f\circ S = f$} and in the sense of vector fields are not equivalent, e.g., $X$ and $fX$. However, overcoming these remaining challenges is good future work, and the authors are not required to provide a method and results to overcome them. I think attaching the relevant discussions from this rebuttal & discussion period to the revision will be fine. There are also remaining concerns which I hope the authors will address in the revision (no need to respond during this discussion period, which has only one hour left). * The vector field similarity depends on the choice of the bounding volume. This should be clarified. * For $\mathrm{sim}\left(X,Y\right)$, the current formula seems to depend on the choice of coordinates and thus lacks geometric meaning. Please check. I think the summation symbol should appear three times, once in the numerator and twice in the denominator.
Rebuttal 1: Rebuttal: We thank all of the reviewers for their very helpful comments. We first wish to emphasize here our novel contribution to symmetry detection, particularly in light of current SOTA methods. To date, the most successful method of detecting continuous symmetry is LieGAN, which is a large neural network that can successfully detect affine symmetries in low dimensions. In contrast, our method uses polynomial regression, with constrained coefficients, to detect not only affine symmetries, but symmetries of much higher complexity. The computational advantage alone, owing to our novel use of vector fields, offers a highly non-trivial advantage over current SOTA methods. Moreover, we appear to be the first to detect any continuous symmetries beyond affine transformations using only training data. The current SOTA method (LieGAN), after decades of work on the subject, uses a GAN to detect rotational symmetry in two dimensions, where such a transformation is likely the most rudimentary of all symmetries from a mathematical perspective. Our experiments use polynomial regression, though we note that our method can be trivially adapted to accommodate different functions. We use polynomials not only for better comparison with the current SOTA (which, if presented in terms of vector fields, would assume linear polynomial vector-field components), but also because of the universal approximation property of polynomials: they can approximate any smooth function to an arbitrary degree of accuracy [R1]. We have also included an experiment which uses a non-polynomial basis in response to a specific reviewer's concern about this. We also wish to address a technical question relating to the potential need for careful handling of non-affine symmetry detection. Given a vector field $X$ and a vector field $fX$, a function $h$ is $X$-invariant if and only if it is $fX$-invariant. 
If $h$ is $X$-invariant, the flow of $X$ is a symmetry of $h$, so that the flow of $fX$ is also a symmetry of $h$: this follows from our discussion of vector fields. For example, a circle centered at the origin, characterized by $F=0$ where $F(x,y) = x^2+y^2-r^2$ for some real number $r$, exhibits rotational symmetry described by $X=-y\partial_x + x\partial_y$. However, $\frac{1}{y} X(F) = 0$ as well, so that the flow of the vector field $-\partial_x + \frac{x}{y} \partial_y$, where defined, is also a symmetry of the circle. This may seem to introduce a theoretical problem requiring greater care when detecting non-affine symmetry. However, our proposed method of constructing models which are invariant with respect to the symmetry group requires only the identification of functions which are invariant with respect to the vector fields. Thus, both $X$ and $fX$ are valid answers for "ground truth" symmetries, since a function $h$ is $X$-invariant if and only if it is $fX$-invariant. This lack of uniqueness does present a challenge when estimating suitable vector fields. When estimating the symmetries via constrained regression, $X$ and $fX$ may or may not both be present in the search space. As we allude to on page 5, symmetry estimation can be done symbolically, and this would eliminate the possibility of both $X$ and $fX$ appearing in the set of discovered vector fields. A non-symbolic technique is also discussed, with the challenge being partially addressed in lines 226-231, though this issue may also warrant a discussion in a new ``limitations'' subsection of the methods section. We also briefly mention that this issue can be addressed by reducing the search space to linear combinations of "special" vector fields known as Killing vectors. This would eliminate the uniqueness problem and is experimented with in Appendix A. This approach is a special case of symmetry detection where only isometries can be detected. 
Another point which may help to address concerns about our handling of the difficult nature of non-affine symmetry detection is in the relationship between the flows of $X$ and $fX$ generally. The trace of the flow of $X$ through the point $p$ is characterized by the level set $h_i=c_i$, where $\{h_i\}$ is a complete set of $X$-invariant functions. Since a function is $X$-invariant if and only if it is $fX$-invariant, a complete set of invariant functions for $fX$ can be taken to be $\{h_i\}$ without loss of generality. Thus, the level set $h_i=c_i$ also characterizes the trace of the flow of $fX$ through the point $p$. This argument assumes that the flows of $X$ and $fX$, as well as the vector fields themselves, are well-defined in an open neighborhood about the point $p$. In fact, in our example, the flow of $-\partial_x + \frac{x}{y} \partial_y$, assuming $y>0$, is given as $\Phi(t,(x,y)) = \left(x-t, \sqrt{y^2+2tx-t^2} \right)$, whose trace is (part of) a circle (we note that the flow parameter is $-x$). The trace of this flow through a point is equivalent to the trace of $X$ through the same point. [R1] A. Pinkus, “Weierstrass and approximation theory,” Journal of Approximation Theory, vol. 107, no. 1, pp. 1–66, 2000. Pdf: /pdf/ab58f5db9dd3a8de67f097415e3be085ce9a72c0.pdf
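The circle example and the flow above can be checked numerically; a minimal sketch, assuming nothing beyond the formulas quoted in the rebuttal (the sampling ranges are illustrative and chosen so that the square root stays well-defined):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(0.5, 2.0, size=(2, 1000))   # y > 0, as assumed above
t = rng.uniform(-0.05, 0.05, size=1000)        # small flow parameter

# F(x, y) = x^2 + y^2 - r^2 has gradient (2x, 2y).
Fx, Fy = 2 * x, 2 * y

# X = -y d/dx + x d/dy annihilates F ...
XF = -y * Fx + x * Fy
# ... and so does fX = (1/y) X = -d/dx + (x/y) d/dy.
fXF = -Fx + (x / y) * Fy

# The flow Phi(t, (x, y)) = (x - t, sqrt(y^2 + 2tx - t^2)) of fX stays on
# the same circle: (x - t)^2 + (y^2 + 2tx - t^2) = x^2 + y^2.
fx_, fy_ = x - t, np.sqrt(y ** 2 + 2 * t * x - t ** 2)

print(np.max(np.abs(XF)), np.max(np.abs(fXF)))          # both ~0
print(np.max(np.abs(fx_ ** 2 + fy_ ** 2 - (x ** 2 + y ** 2))))  # ~0
```

All three residuals vanish up to floating-point rounding, consistent with the claim that the traces of the flows of $X$ and $fX$ coincide.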
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Identifiable Object-Centric Representation Learning via Probabilistic Slot Attention
Accept (poster)
Summary: This paper studies the theoretical identifiability of object-centric representations (slots) in object-centric learning (OCL). Prior works study the same problem, but are limited to OCL models with an additive decoder. This work relaxes the constraint to non-additive decoders, which have proven important for scaling up to more complex data in recent OCL works (e.g., TransformerDecoder, Diffusion Model). To tackle this generalized setting, the authors propose Probabilistic Slot Attention (PSA), which applies a Gaussian Mixture Model (GMM) to produce slots from each data sample (e.g. an image). The authors prove the effectiveness of PSA both theoretically and empirically, using results on both low-dimensional data and 2D images. Strengths: 1. This paper tackles an important problem -- what assumptions are required to learn identifiable object slots? 2. The paper is generally clearly written and well presented. 3. I appreciate the efforts in experimenting with common OCL image datasets such as CLEVR and ObjectsRoom. They are missed in previous identifiable OCL works. 4. The experimental results with PSA and PSA-Proj (both using additive decoders) are solid. Weaknesses: I have only one big concern. However, this concern is closely related to the main contribution of this paper (correct me if I am wrong), and I cannot accept the paper if it is not well addressed. - This paper claims to be the first work learning identifiable slots with a non-additive decoder (NoA). This is great, as recent OCL works show impressive results using a Transformer-based or a diffusion-based decoder. - However, the paper does not really have experimental results supporting this claim. PSA-NoA seems to underperform PSA-Proj consistently on all datasets & metrics. - What's worse, even compared to vanilla SA, PSA-NoA still underperforms in FG-ARI (Table 3), and even on some identifiability-related metrics (Table 2). 
These results are concerning to me, as we are not sure if the proposed algorithm can really scale to recent OCL models with better decoders. Minor: - Please unify the citation format -- for papers that are published at conferences, please use their conference version, e.g., [5] (ICML), [45] (NeurIPS). - In Sec.4, the paper claims that PSA can dynamically determine the number of object slots. While Fig.10 in the Appendix shows a few results, I don't think that's enough to claim PSA "offers an elegant solution to this problem". [1] is a recent work that studies this problem and provides in-depth analysis. The authors can conduct experiments following [1]'s setting. But I understand that the dynamic slot number is not the main contribution of this work. - In line 51, the authors claim that the computation cost of non-additive decoders is invariant to the number of slots K. Is this true? In recent Transformer-based and diffusion-based decoders, they use cross-attention to condition the reconstruction on slots, and the computation cost of attention is quadratic in the token size, i.e., K in this case. [1] Fan, Ke, et al. "Adaptive Slot Attention: Object Discovery with Dynamic Slot Number." CVPR. 2024. Technical Quality: 2 Clarity: 3 Questions for Authors: - Can the authors apply PSA to a better non-additive decoder? For example, the Transformer decoder proposed in [61]. Otherwise, it is hard to assess the contribution of this work. - What is the implementation detail of the non-additive decoder "standard convolutional decoder"? How to decode an image from K 1D slots using a CNN decoder? Minor: - How can we apply the theory proposed in this paper to more complex datasets, e.g., real-world COCO images? This is not required for a theory paper, but I'm curious about the authors' thoughts. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitations in Sec.7. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments and constructive feedback. We appreciate the fact that our paper was considered to be clearly written and well presented, and we are glad that our results are perceived to be solid. > **"However, the paper does not really have experimental results supporting this claim..."** We respectfully disagree that the results do not support this claim; PSA-NoA and PSA-Proj both include the probabilistic latent setting as proposed in our work. Notably, there is a known trade-off between identifiability and expressivity induced by the choice of decoder structure [45]. Depending on the use case, it may be beneficial to combine both latent and additive decoder structures in practice, particularly if the latter introduces useful inductive biases and/or simplifies the optimization problem. In our experiments, we observed that when using PSA in tandem with an additive decoder it is possible to outperform all other baselines. As for the non-additive decoder-based experiments, PSA-NoA must be compared directly against SA-NoA for it to be a fair comparison, and we observed it to perform better than SA-NoA while remaining competitive with the remaining baselines. > **"Can the authors apply PSA to a better non-additive decoder? For example, the Transformer decoder proposed in [61]. Otherwise, it is hard to assess the contribution of this work."** We agree that the applicability of our framework on large-scale datasets is a crucial evaluation of our theoretical results, but that does not reduce the contribution of our work, which is primarily theoretical. The main focus of this work is to investigate the theoretical identifiability of slot representations and the conditions that ensure this property, rather than provide state-of-the-art results on large-scale datasets. 
To verify our theoretical claims, we first conduct detailed experiments on controlled datasets, and then extend our demonstrations to unstructured image data. We stress that the synthetic datasets we used are necessary for properly testing our identifiability hypotheses. One of the main assumptions necessary to prove slot identifiability in our setting is weak injectivity, which is achieved when we use piece-wise linear decoders. In the case of transformer decoders, this assumption is not guaranteed to hold because of the complexity of the attention mechanism (further theory is required here). With that said, we have conducted additional empirical evaluations on the identifiability of slot representations obtained with more complex transformer decoders, which result in an SMCC of $\mathbf{0.73 \pm 0.04}$ and an R2 of $\mathbf{0.55 \pm 0.06}$ on the CLEVR dataset, which is significantly better than SA and all other baselines. ***For more details and additional experiments please see the general comment at the top.*** > **"What is the implementation detail of the non-additive decoder "standard convolutional decoder"? How to decode an image from K 1D slots using a CNN decoder?"** Thank you for pointing out this omission; it escaped our attention. We simply concatenate all K slots together and upscale the resolution using four transposed convolutions until we reach the image resolution; we also use Leaky-ReLU activations. The architecture is very similar to the one used in the original SA work; we will be sure to add the remaining details to the paper, thanks again. > **"How can we apply the theory proposed in this paper to more complex datasets, e.g., real-world COCO images? 
This is not required for a theory paper, but I'm curious about the authors' thoughts."** We have included experiments on PascalVOC2012 datasets and also tested our model with more complex decoders, ***please refer to the general comment above for details and additional results.*** --- Rebuttal 2: Title: Re: rebuttal Comment: I thank the author for the rebuttal. The experiments on the Transformer-based slot decoder and the large-scale experiments on real-world datasets are extensive and strong. My main concerns are addressed. Now I recommend acceptance of the paper and have adjusted my score to Weak Accept.
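The non-additive convolutional decoder described in the rebuttal above ("concatenate all K slots together and upscale with four transposed convolutions, Leaky-ReLU activations") might look like the following PyTorch sketch. The channel counts, the 4x4 starting feature map, and the 64x64 output resolution are our assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class NonAdditiveDecoder(nn.Module):
    """Decode an image from K 1-D slots: concatenate the slots into one
    vector, project and reshape it to a 4x4 feature map, then upscale with
    four stride-2 transposed convolutions (4 -> 8 -> 16 -> 32 -> 64)."""

    def __init__(self, num_slots: int, slot_dim: int, hidden: int = 64):
        super().__init__()
        self.hidden = hidden
        self.proj = nn.Linear(num_slots * slot_dim, hidden * 4 * 4)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1),
            nn.LeakyReLU(),
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1),
            nn.LeakyReLU(),
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1),
            nn.LeakyReLU(),
            nn.ConvTranspose2d(hidden, 3, 4, stride=2, padding=1),
        )

    def forward(self, slots: torch.Tensor) -> torch.Tensor:
        b, k, d = slots.shape
        h = self.proj(slots.reshape(b, k * d))  # all slots mixed jointly
        h = h.reshape(b, self.hidden, 4, 4)
        return self.net(h)                      # (b, 3, 64, 64)

dec = NonAdditiveDecoder(num_slots=4, slot_dim=32)
img = dec(torch.randn(2, 4, 32))
print(img.shape)
```

Because every output pixel depends on the joint concatenation of all slots rather than on a sum of per-slot renderings, this decoder is non-additive in the sense discussed in the review.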
Summary: This paper addresses the problem of identifiability of object-centric representations. In contrast to prior works which achieve identifiability via assumptions on the generator, this paper explores identifiability via assumptions on the latent distribution. To do this, the authors introduce a probabilistic variant of the popular Slot Attention algorithm and prove theoretically that this method identifies the ground-truth object representations. The authors verify their theory on toy data as well as test their method on high dimensional image data, showing improved performance over baseline methods. Strengths:
* The authors address an important problem in representation learning. Namely, understanding when learning object-centric representations is theoretically possible.
* The paper is well written, well structured, and, in general, easy to understand.
* The authors position their contribution well within the broader representation learning literature and provide a good review of prior work.
* Section 4 is well written. I particularly appreciated being able to look at Algorithm 1 while reading the section to guide my intuition.
* Exploring probabilistic constraints for identifiability in object-centric learning is an important problem to understand, thus I think the authors' contribution is of interest for the representation learning community.
* The authors achieve identifiability by proposing an adaptation to a widely used method, making their method potentially easy to adopt in practice.
* The authors conduct a solid empirical study and achieve promising results, in particular when coupling their method with structured decoders.
* The figures in the manuscript are conceptually helpful and aesthetically well done.

Weaknesses:

### __Paper Positioning/Storyline__

One of the main issues I have with this work is that I do not think that the paper’s storyline, i.e. 
how the authors motivate and position their contribution, accurately reflects the actual contributions of this work. As I understand, the current pitch of the paper is the following:
*Previous works [1, 2] prove identifiability of object-centric representations by making assumptions on the decoder. Enforcing the assumptions in [1] is not tractable in practice, however, due to scalability issues with the compositional contrast in [1]. In this work, we remedy this by exploring probabilistic constraints for identifiability which yield identifiability theoretically and empirically but do not suffer from the same empirical scalability issues as prior works.*

I think this pitch is problematic for the following reasons:

__Firstly__, it is important to note that the compositional decoders explored in [1] were proven to be a subset of the additive decoders explored in [2]. Consequently, if the ground-truth decoder is compositional and one uses an additive decoder for the inference model, then, assuming the assumptions from [2] are met, the inference model is slot identifiable, i.e. it will implicitly minimize compositional contrast. This does not completely dismiss the scalability issues noted by the authors, since, as mentioned in Lines 51-52, additive decoders also suffer from some scalability issues. I think, however, that the authors' current discussion of scalability, in particular wrt [1], misses the key nuance discussed above. I would suggest the authors incorporate this discussion into the paper by altering their writing and positioning of their contribution accordingly.
__Secondly__, the storyline and writing in the manuscript give the impression that one could dispense with decoder structure altogether in favor of probabilistic structure on the latent space. As the authors show empirically in Section 6, however, this is not exactly the case. While probabilistic structure gives identifiability gains relative to baselines, without incorporating decoder structure, identifiability drops non-trivially across all metrics.

Moreover, one of the core motivations for object-centric learning is learning representations which generalize compositionally [2, 3]. Such compositional generalization is only possible through decoder structure, of which additivity is one example (see [3], Section 2). If one only focuses on enforcing structure on the latent distribution, it is not clear to me how such compositional generalization can be achieved.

With all of this being said, I think a more accurate and superior pitch for the contributions in this work should focus on the advantage of using probabilistic and decoder structure in tandem, as opposed to suggesting that probabilistic structure is somehow superior. Something like:

*In this work, we show how identifiability can be achieved via probabilistic constraints on the latent space. We show how such constraints can be naturally and tractably incorporated into Slot Attention. We verify our theory and method on toy data. We then show on image data that our probabilistic structure leads to improved identifiability over unstructured baselines. Furthermore, when coupled with structured decoders, our method yields performance which outperforms both probabilistic structure and decoder structure in isolation.*

For the reasons stated above, I would encourage the authors to alter their writing in the manuscript to adhere closer to a storyline along the lines of the one presented above. As things currently stand, the messaging in the paper feels a bit misleading and over-claiming.
### __Theory Explanation__

I would have appreciated a short paragraph giving some intuition on how exactly the probabilistic constraints imposed by the authors sufficiently restrict the problem such that slot identifiability is possible. Specifically, unsupervised identifiability via probabilistic structure is challenging and, in most cases, impossible [4, 5]. Thus, I think it would be helpful if the authors could explain which structure in their method is key towards overcoming these well known unidentifiability issues. For example, is it the permutation invariance, the complex aggregate posterior, etc.?

### __Metrics__

The authors use two main metrics to validate identifiability: the slot identifiability score (SIS) from [1] and a new metric, slot MCC (SMCC). I found the authors' explanations of the differences in these metrics unclear, which made it difficult to interpret some of the empirical results. My understanding is that SMCC is just a linear/affine version of SIS in terms of the predictor fit between latents. Thus, it is a bit unclear to me why the values for SIS should be so much lower than SMCC in the experiments on toy data in Section 6.

### __Experiments__

The authors' experiments on image data focus primarily on simple decoders as opposed to e.g. Transformers and visually complex datasets. To assess the scalability of the authors' method it would be important to test the method on more complex models/datasets. I view this as a more minor weakness of this work, however, given that the contribution is largely theoretical.

### __Appendix__

In Section A of the Appendix, the authors review some definitions from prior works, e.g. in [1]. These definitions, however, are presented in an informal, imprecise way. I would encourage the authors to be precise in this section if they are going to include definitions from prior works.

## __Bibliography__

1. Provably Learning Object-Centric Representations (https://arxiv.org/abs/2305.14229)
2. Additive Decoders for Latent Variables Identification and Cartesian-Product Extrapolation (https://arxiv.org/abs/2307.02598)
3. Provable Compositional Generalization for Object-Centric Learning (https://arxiv.org/abs/2310.05327)
4. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations (https://arxiv.org/abs/1811.12359)
5. Variational Autoencoders and Nonlinear ICA: A Unifying Framework (https://arxiv.org/abs/1907.04809)

Technical Quality: 3 Clarity: 3 Questions for Authors:
* Do the authors view their method as being a replacement for structured decoders in object-centric learning or do they view the method as being better used in tandem with structured decoders?
* What is the key theoretical assumption/constraint which allows for identifiability to be possible?
* What are the key differences between SMCC and SIS?
* Do the authors have an explanation for the low SIS score on the toy data experiments?
* Does “R2” refer to SIS when used or is this a different score?
* Do the authors have any intuition about how well their method would perform for more complex models/datasets?

Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors include a limitation section which discusses some limitations of their theory. I would encourage the authors to discuss some limitations in their experiments, as discussed above in the “Weaknesses” section. I would also include some discussion of the limitations of purely probabilistic structure for object-centric learning as it pertains to compositional generalization, as discussed above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their very detailed comments and constructive feedback which helped improve our paper significantly! We also appreciate the fact that our paper was perceived to be well-written, easy to understand, and of interest to the community. > **"One of the main issues I have with this work is that I do not think that the paper’s storyline ..."** We thank the reviewer for their very insightful and constructive suggestion regarding the messaging of the paper. We largely agree with the overall sentiment and will incorporate the suggested changes which will significantly improve the paper’s pitch. To clarify, it is not our intention to compete with or replace additive decoders as a model choice since they undoubtedly provide useful inductive biases for object-centric learning as ours and many other previous works show. We further remark that before our work there was a lack of explanatory theory for why state-of-the-art results were able to be obtained using non-additive autoregressive Transformers (DINOSAUR [60]) and/or diffusion-based decoders (Slot-Diffusion [r2]). We showed that by viewing slot attention through a probabilistic graphical modelling perspective it is possible to prove slot identifiability for non-additive decoders using proof techniques from identifiable generative modelling. Given that vanilla slot attention can be seen as a simplified version of probabilistic slot attention, akin to the relationship between soft k-means and GMMs, our theoretical results suggest why non-additive decoder structures can work well given the appropriate latent structure and inference procedure are in place. With that said, there is an identifiability and expressivity trade-off [45] induced by the choice of decoder structure, so depending on the use case it may indeed be advantageous to combine both latent and additive decoder structure! 
> **"I would have appreciated a short paragraph giving some intuition on how exactly the probabilistic ..."** We would like to point out that we do discuss the intuition behind our identifiability results in depth in the appendix, just before the proofs. As opposed to [4, 5], we use a more expressive GMM latent distribution followed by a weakly injective piece-wise linear decoder, which in combination ensures that the slot representations are identifiable. > **"The authors use two main metrics to validate identifiability: the slot identifiability score (SIS) from ..."** SIS is the relative R2 (coefficient of determination) score-based measure, capturing the relative ratios of variance for a dependent variable explained by an independent variable with baseline model scores that are trained on the slot representations of a dynamically updating model. As for SMCC, it measures the linear/affine relationship between permuted slots and features. We provide more detailed explanations in Appendix F. > **"The authors experiments on image data focus primarily on simple decoders opposed ..."** Although we agree that assessing the performance of probabilistic slot attention on more complex datasets would be useful, the controlled scenarios and datasets we used are necessary for properly testing our theoretical identifiability hypotheses, which is the objective of this work. Regarding scalability concerns, we stress that probabilistic slot attention (PSA) retains the $\mathcal{O}(TNKD)$ computational complexity of vanilla slot attention and as such enjoys the same scalability properties. We did evaluate our model on large-scale datasets and with more complex decoders; we discuss these in the general comment. 
> **"Do the authors view their method as being a replacement for structured decoders in object-centric learning or do they view the method as being better used in tandem with structured decoders?"** From a theoretical identifiability viewpoint, latent distributional structure is a sufficient requirement as long as the decoder is piece-wise linear. However, additive decoders provide strong, useful inductive biases and may be easier to optimize relative to a probabilistic model with an arbitrary decoder. In our experiments, we observed that using our approach in tandem with an additive decoder tends to outperform other models. > **"Do the authors have an explanation for the low SIS score on the toy data experiments?"** SIS was proposed by [1] and it uses the relative R2 scores, where baseline models are trained on the slot representations of a dynamically updating model. Due to this, it tends to exhibit high variability as illustrated in Appendix F (we use the official implementation on GitHub for all our analysis). We believe this instability in the baseline model, which is also pointed out by others on their GitHub repository, could be due to the validation dataset size resulting in lower scores. > **"Does “R2” refer to SIS when used or is this a different score?"** R2 is the coefficient of determination, while SIS is a relative R2 score. R2 is proportional to SIS, but due to the instability of SIS we use R2 for all imaging experiments. > **"Do the authors have any intuition about how well their method would perform for more complex models/datasets?"** The method can be scaled to large-scale datasets using the approaches outlined in [60], with a convolutional or transformer-based decoder, since the computational complexity is the same as vanilla slot attention ($\mathcal{O}(TNKD)$). **Please refer to the general comment for results on complex datasets using more powerful decoders.** [r2] Wu, Z., Hu, J., Lu, W., Gilitschenski, I. and Garg, A., 2023. 
Slotdiffusion: Object-centric generative modeling with diffusion models. Advances in Neural Information Processing Systems, 36, pp.50932-50958. --- Rebuttal 2: Title: Re: Rebuttal Comment: I thank the authors for their reply.

**"One of the main issues I have with this work is that I do not think that the paper’s storyline ..."** I appreciate the authors' acknowledgement of my concerns and look forward to reading an updated version incorporating this feedback. Regarding the connection between slot attention and Transformers' success in object-centric learning: If this is the messaging of the paper that the authors wish to convey, then I would encourage them to rewrite the introduction accordingly. This messaging was not clear to me from reading the paper. Moreover, I am not completely convinced by this explanation given that decoder structure does indeed play an important role empirically. Thus, I find it more likely that the success of Transformers is due to a combination of probabilistic and decoder structure (i.e. the inductive biases of the Transformer). In other words, I do not think the authors' results on their own provide a complete explanation of Transformers' success in object-centric learning but may provide some evidence to this end.

**"I would have appreciated a short paragraph giving some intuition on how exactly the probabilistic …"** Thank you for the paragraph reference. Based on this, I would like to make a related point on the clarity of the theory. Namely, I do not think that the authors sufficiently highlight the piece-wise linear structure needed on the decoder for their theoretical result. 

Up until page 7, the paper gives the idea that no decoder structure is needed for the theory. For example, the authors state in Section 2 that their theoretical contribution falls into the category of “(iii) imposing structure in the latent space via distributional assumptions.” This is not exactly correct, as the piece-wise linear structure is indeed decoder structure. I understand that the authors may view this structure as more easily implemented than e.g. additivity, and thus possibly less noteworthy. However, this piecewise linear structure is a key aspect of the theoretical contribution of this paper, and moreover, is important for contextualizing the authors' theoretical contribution relative to prior identifiability results. Thus, I think this structure should be mentioned a bit more transparently in the introduction, and discussed with more precision in the related work. Otherwise, the messaging of the paper once again feels misleading.


**"The authors use two main metrics to validate identifiability: the slot identifiability score (SIS) from ..."** 
I appreciate the authors' reply on this point. I am curious, however, what the procedure was to resolve the matching problem when computing SMCC. Specifically, slots are locally permutation invariant as opposed to globally, due to the local permutation invariance of the decoder. Thus, how did the authors resolve this local permutation invariance when computing SMCC? --- Rebuttal 3: Comment: We thank the reviewer for engaging with us and for providing feedback. > **“If this is the messaging of the paper that the authors wish to convey, then I would encourage them to rewrite the introduction accordingly…”** We are glad to hear that and look forward to incorporating the suggested changes, thanks again. To clarify, our discussion of slot attention and Transformers is in response to the reviewers' requests and does not represent a change in the core message of the paper. We also acknowledge that further theory is required when using non-additive Transformer-based decoders, as technically the weak injectivity property our proofs rely upon is not known/guaranteed to hold for Transformers due to the complexity of the attention mechanism (c.f. response to Reviewer HxYt). We believe that extending our theoretical results by relaxing the weak injectivity decoder assumption offers a promising direction for future research. > **“I find it more likely that the success of Transformers is due to a combination of probabilistic and decoder structure (i.e. the inductive biases of the Transformer).”** We generally agree, as there is a known trade-off between identifiability and expressivity induced by the choice of decoder structure [45]. As such, it may be beneficial to combine both latent and decoder structures, particularly if the latter introduces useful inductive biases and/or simplifies the optimization problem. In our experiments, we observe that the combination of both typically yields better results. 
> **“I do not think that the authors sufficiently highlight the piece-wise linear structure needed on the decoder for their theoretical result”** We understand the reviewer's concern but respectfully disagree with their conclusion. Our theoretical results show that the decoder additivity constraint is not required if the decoder is piecewise linear and the latent space is GMM distributed. Although any decoder possesses some structure in terms of its architecture, an MLP decoder with LeakyReLU activations (satisfying weak injectivity) does not impose structure in the same sense as an additive MLP decoder, as the latter is a stronger restriction on the functional class and departs from the standard MLPs commonly used outside of object-centric learning. We will emphasize the weak injectivity assumption and the implications of piecewise decoders earlier in the introduction. > **“How did the authors resolve this local permutation invariance when computing SMCC?”** As detailed in Appendix F, we used Hungarian matching to resolve this when computing SMCC. Additionally, we believe our new experiments on PascalVOC address the reviewer’s concerns regarding the scalability of probabilistic slot attention. --- Rebuttal Comment 3.1: Title: Re: Rebuttal Comment: Thank you for the reply! **“I do not think that the authors sufficiently highlight the piece-wise linear structure needed on the decoder for their theoretical result”** Which conclusion is being disagreed with here? Decoder structure is assumed in the theory. I presume we agree on this? I agree, as stated, that this is weaker structure than additivity; however, it is stronger than the decoder being a diffeomorphism, which is generally all that is assumed in identifiability results which “(iii) impose structure in the latent space via distributional assumptions.”. Therefore, I do not think it makes sense for the authors to group their theoretical contribution in this category. 
I hope this is clearer now. **“How did the authors resolve this local permutation invariance when computing SMCC?”** Apologies if my question was unclear. My concern is not about scalability. I will rephrase the question the following way: Do the authors agree that the permutation ambiguity between ground-truth slots and inferred slots is "local", i.e. can change for each data point? If so, then do they agree that a different Hungarian matching problem needs to be solved for every datapoint, as opposed to a "global" matching as is typically done in disentanglement? If so, then how was this "local" matching problem solved? --- Reply to Comment 3.1.1: Comment: We apologise for the confusion. > **“Which conclusion is being disagreed with here? Decoder structure is assumed in the theory. I presume we agree on this? I agree, as stated, that this is weaker structure than additivity, however, it is stronger than the decoder being a diffeomorphism...”** We agree that piecewise decoder structure is assumed, but we stress that it is a weaker assumption than both additive and diffeomorphic decoders and materializes as e.g. standard MLPs with LeakyReLU activations. Diffeomorphic decoders assume bijectivity of the mixing function, whereas the piecewise decoders we use need only be weakly injective for our proofs. To improve clarity, we will adjust the relevant sentence in the paper (Line 76) to read: “In this work, we prove an identifiability result via strategy (iii) but within an object-centric learning context, where the latent variables are a set of object slots [50], and piecewise linear mixing functions are employed.” We will also make it clearer earlier on in the introduction that piecewise decoders are necessary for our theoretical results. > **“how was this "local" matching problem solved?”** Yes, as detailed in Appendix F, we apply Hungarian matching for every data point across the estimated slots. 
To clarify, our previous statement about scalability was not related to this question but was a general reminder. --- Rebuttal 4: Title: Re: Rebuttal Comment: Thank you for the reply! I would encourage the authors to include these details in the appendix as opposed to just a code implementation. As far as I know, matching based on Euclidean distance is non-standard. In future iterations of this work, the authors can consider including experiments to test the effectiveness of this matching protocol compared to other methods such as matching based on slot-wise masks, or matching based on R2 score (determined in an online fashion). --- Rebuttal Comment 4.1: Comment: Thanks for the great suggestion; an in-depth study of the effects of the distance function in the matching algorithm is definitely valuable future work. We will expand Appendix F with more details about the metric and its implementation.
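As a concrete illustration of the per-datapoint ("local") matching discussed in this thread, the sketch below aligns inferred slots to ground-truth slots by solving a separate Hungarian assignment for every example, with Euclidean distance as the cost (the metric under discussion; mask- or R2-based costs would slot in the same way). The helper names `match_slots` and `align_batch` are illustrative, not from the paper's code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_slots(true_slots, pred_slots):
    """Hungarian matching for ONE datapoint.

    true_slots, pred_slots: arrays of shape (K, D).
    Returns a permutation `cols` such that predicted slot cols[i]
    is matched to ground-truth slot i, minimizing total Euclidean cost.
    """
    # cost[i, j] = Euclidean distance between true slot i and predicted slot j
    cost = np.linalg.norm(true_slots[:, None, :] - pred_slots[None, :, :], axis=-1)
    _, cols = linear_sum_assignment(cost)
    return cols

def align_batch(true_batch, pred_batch):
    """Resolve the LOCAL permutation ambiguity: a separate assignment
    problem is solved per example, unlike the single global permutation
    typically used in disentanglement metrics."""
    return np.stack([
        pred[match_slots(t, pred)]
        for t, pred in zip(true_batch, pred_batch)
    ])
```

After alignment, a correlation metric such as SMCC can be computed slot-wise as if the ordering were fixed.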
Summary: Solving the problem of identifiability is necessary to find consistent and interpretable factors of variation. There are two approaches to do so: a) place restrictions on the decoders, and b) impose distributional constraints on the latent space. This work takes the second approach and aims to impose a GMM on the latent space. The paper does this by proposing a modification to the vanilla slot attention framework that they call probabilistic slot attention. Under this framework the paper shows theoretical and empirical identifiability of slots. Strengths: + The proposed probabilistic slot attention framework is an intuitive extension of the vanilla slot attention with the updates in each iteration resembling the familiar EM algorithm. + The proposed framework offers a possible solution to the problem of dynamically determining the required number of slots. + This is the first work to experiment with imposing a distribution on the latent space where prior works either focus on the generator or the decoder. + Recovering the latent space up to an affine transformation is shown in the synthetic modelling scenario with theoretical guarantees provided. + The paper studies an important premise – the theoretical understanding of object-centric representations. The paper is written well, with clear motivations for the proposed contributions. Weaknesses: * The paper states that the framework allows for tractably sampling from the aggregate posterior distribution and using it for scene composition tasks; however, this is not empirically qualified anywhere. * Experiments only validate the theory on simple synthetic datasets. Testing on more diverse and realistic data would better demonstrate applicability, though the evaluation would also be more challenging. Generally speaking, I would be concerned about the scalability of such an approach leveraging GMMs. 
I understand however that the objective of this work is to theoretically study the identifiability of object-centric representations under less-constrained settings as compared to previous work. Maybe having a short discussion on how such an approach can be scaled would be nice to see. * Table 1 lists $\beta$-disentanglement and weak injectivity as core assumptions. While the former is common to all other related methods, weak injectivity is newly introduced. The implications of this assumption are hence important, but are missing in the paper. * In fig. 3, it is not clear what experiment the latents (x-axis) correspond to. * It might be helpful to visit slot-identifiability in the related work section, considering its literature is the most closely related to this work. * Adequate details about the encoder and decoder in the synthetic modelling scenario have not been provided. Minor/editorial * Typo in 769, “Obsereved” -> observed; Typo in 234, “emphirically” -> empirically * Some references are repeated. Eg. 36-37. Please check carefully across the full list. Technical Quality: 3 Clarity: 2 Questions for Authors: * Could you elaborate on assumption 6 and why it is not necessary in the current work? * In Algorithm 1, line 6, where attention $A_{nk}$ is calculated, the mean is calculated as $W_q \mu(t)_k$ but the variance is not calculated as $W_q \sigma_k^2$? * Should we not return $\pi (t)$ at the end of the algorithm? * The work explicitly mentions that having additive decoders is not a necessity of the current work, but any experimental results are shown only with additive decoders. Perhaps it may be enlightening to see experiments akin to [45] for non-additive decoders (transformer based auto-regressive decoders). There are no results to go along with the choice of a convolutional decoder as the non-additive variant which is mentioned in L277. Did I miss something here? * R2 score is reported in Sec 6 without definition. I assume this is the correlation? 
This is defined as MCC in the paper. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The statistical distributional assumption on the latent space precludes the identification of any causal dependencies between objects, which could be made explicit in the “Limitations and Future work” section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful, detailed comments and constructive feedback. We greatly appreciate the positive outlook and the fact that our work was found to be well-written, well-motivated and novel. > **"The paper states that the framework allows for tractably sampling from the aggregate posterior ..."** We indeed show that it is possible (Lemma 1) to use the aggregate posterior for compositional tasks, but we felt that it does not meaningfully add to the paper's main theoretical identifiability contributions. We have now included some preliminary results for demonstration purposes (see pdf). Performing a more comprehensive compositional analysis of the aggregate posterior is certainly valuable and warrants dedicated investigation in future work. > **"Experiments only validate the theory on simple synthetic datasets. Testing on more diverse and realistic ..."** As rightly pointed out by the reviewer, this work aims to study theoretical identifiability of slot representations and the conditions that ensure this property, rather than provide state-of-the-art results on large-scale datasets. To verify our theoretical claims, we first conduct detailed experiments on controlled datasets and then extend our demonstrations to unstructured image data. We stress that the synthetic datasets we used are necessary for properly testing our identifiability hypotheses. Regarding scalability concerns of probabilistic slot attention (PSA), we emphasise that PSA retains the $\mathcal{O}(TNKD)$ computational complexity of vanilla slot attention, where $T$ denotes the number of attention iterations, $N$ the number of input vectors, $K$ the number of slots and $D$ the slot/input dimension. The additional operations we introduce for calculating slot mixing coefficients and slot variances (under diagonal slot covariance structure) have complexities of $\mathcal{O}(NK)$ and $\mathcal{O}(NKD)$ respectively, which do not alter the dominant term. 
Furthermore, when used in conjunction with additive decoder-based models, PSA can reduce computational complexity by pruning inactive slots via automatic relevance determination (ARD) as outlined in Section 4. Finally, **we have now demonstrated the applicability of PSA to transformer-based decoders** - please refer to the general comment above for details. > **"Table 1 lists $\beta$-disentanglement and weak injectivity as core assumptions. While the former ..."** We have included a discussion on the weak injectivity assumption in the remark just below it - we’ll also include a similar discussion in the main text. In summary, weak injectivity ensures that a mixing function $f_d$: (i) in a small neighbourhood around a specific point $x_0 \in \mathcal{X}$ is injective – meaning each point in this neighbourhood maps to exactly one point in the latent space $\mathcal{Z}$; and (ii) while $f_d$ may not be globally injective, the set of points in $\mathcal{X}$ that map back to an infinite number of points in $\mathcal{Z}$ (non-injective points) is almost non-existent in terms of the Lebesgue measure on the image of $\mathcal{Z}$ under $f_d$. This assumption is generally satisfied when using Leaky-ReLU networks with randomly initialized weights (Appendix C). > **"In fig. 3, it is not clear what experiment the latents (x-axis) correspond to."** Our apologies for the confusion. Figure 3 is a simple illustrative example of an aggregate Gaussian mixture density; it is there to provide the reader with a conceptual intuition and does not correspond to an experimental setting. > **"Adequate details about the encoder and decoder in the synthetic modelling scenario have not been provided."** We thank the reviewer for pointing this out as it escaped our attention. We have now added the architectural details for all our models in the appendix - all the code will be made available also. 
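To build intuition for the weak injectivity point above, here is a toy sketch (our own illustrative example, not the paper's decoder or proof): a two-layer LeakyReLU network with square, randomly drawn weight matrices is almost surely invertible, hence injective, and its inverse can be computed layer by layer because LeakyReLU itself is invertible.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, s=0.2):
    return np.where(x > 0, x, s * x)

def leaky_relu_inv(y, s=0.2):
    # LeakyReLU is strictly monotone, so it has an exact inverse.
    return np.where(y > 0, y, y / s)

# Random square weight matrices are invertible with probability 1,
# so f below is injective (a toy stand-in for a weakly injective decoder).
D = 4
W1 = rng.normal(size=(D, D))
W2 = rng.normal(size=(D, D))

def f(z):
    """Two-layer LeakyReLU 'decoder' sketch: z -> W2 * leaky(W1 z)."""
    return W2 @ leaky_relu(W1 @ z)

def f_inv(x):
    """Exact inverse, undoing each layer in reverse order."""
    return np.linalg.solve(W1, leaky_relu_inv(np.linalg.solve(W2, x)))
```

The round trip `f_inv(f(z)) == z` holds up to floating-point precision, illustrating why injectivity is a mild requirement for such piecewise linear networks.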
> **"In Algorithm 1, line 6, where attention A is calculated, mean is calculated as $W_q\mu$ but variance is not calculated as $W_q \sigma^2$?"** This suggestion is a valid design choice if either the weights $W_q$ are constrained or we use an activation function to ensure all entries in $W_q \sigma^2$ remain positive. However, since $\sigma(t)^2$, at attention iteration $t$, already has an indirect dependency on $W_q$ through $\mu(t-1)$ we omit this style of projection of the variances for simplicity. > **"The work explicitly mentions that having additive decoders is not a necessity of the current work, but any ..."** This is unfortunately incorrect as our "NoA" model variants do in fact correspond to a non-additive convolutional decoder as stated in Section 6. Switching these with more powerful autoregressive transformer-based decoders would possibly improve the results but would constitute an unfair comparison with our baselines. For new experiments please refer to the general comment. > **"R2 score is reported in Sec 6 without definition. I assume this is the correlation? This is defined as MCC in the paper."** R2 score is a coefficient of determination, it is proportional to correlation, which can be empirically observed even in our experiments. SIS score as introduced in [5] is a relative measure of R2. > **"Should we not return \pi at the end of the algorithm?"** Yes thanks for pointing this out, we have now corrected it. > **"Could you elaborate on assumption 6 and why it is not necessary in the current work?"** Thanks, we will add a remark explaining this in the paper. Object sufficiency is crucial when learning *grounded* object representations [43]. Here we do not focus on grounding so strict object sufficiency is technically not required. [r1] Yao, W., Sun, Y., Ho, A., Sun, C. and Zhang, K., 2021. Learning temporally causal latent processes from general temporal data. arXiv preprint arXiv:2110.05428. 
--- Rebuttal Comment 1.1: Comment: Dear Reviewer 27BN, As the author-reviewer discussion period is soon coming to a close, we kindly ask the reviewer to take the opportunity to engage with us. We sincerely appreciate the time and effort the reviewer has already contributed to the review of our work and hope our thoughtful rebuttal addresses your concerns. Best wishes, The Authors --- Rebuttal 2: Title: Response to authors' rebuttal Comment: I thank the authors for the responses, and the new results in the common response. Please find my responses below: * In the new results in the common response, it appears that the proposed method uses the training strategies of SPOT. The baselines could also benefit from this, after all. Shouldn't this be the fair comparison? * Since PSA Transformer is included, a natural comparison is that of SA Transformer. Is there a reason why this was not included? * I appreciate the qualitative results on PASCAL VOC. * (Minor) In the attached pdf, compositional generation is shown using PSA, it would have been nice to see qualitative comparison with other SA-based efforts that allow compositional generation. I understand this is nitpicky, considering the limited space. But this would have been useful for completeness. * While I understand the computational complexity discussion, I would have liked to see wall-clock times of training with PSA as opposed to vanilla SA, at least in approx values. Having said the above, I do see the strengths of the paper mentioned in my original review, and stay with WA as my decision at this time. --- Rebuttal 3: Comment: We thank the reviewer for engaging with us and for the feedback. > **"In the new results in the common response, it appears that the proposed method uses the training strategies of SPOT. The baselines could also benefit from this, after all. 
Shouldn't this be the fair comparison?"** We do have a fair baseline comparison as all the methods we trained used the same strategies; please refer to rows SA MLP (w/DINO) and SA MLP (w/DINO)$^\ddagger$. > **"Since PSA Transformer is included, a natural comparison is that of SA Transformer. Is there a reason why this was not included?"** As explained in the general comment, we were previously not able to complete the PSA Transformer training run ($\approx$ 15K steps) due to time constraints, so it would not have been fair to compare directly with a fully trained SA Transformer (250K steps [60]). The main point was to show that PSA is scalable and that using a more powerful Transformer decoder outperforms the MLP variants. Please find the updated results for SA and PSA Transformers on the PascalVOC dataset (SA Transformer results are based on our reimplementation of the DINOSAUR strategy).

| Model | $\text{mBO}_i$ | $\text{mBO}_c$ |
|-------------------------|---------|---------|
| DINOSAUR Transformer [60] | 0.44 | 0.512 |
| **Ours:** | | |
| SA Transformer (w/ DINO) | 0.427 | 0.503 |
| SA Transformer (w/ DINO)$^\ddagger$ | 0.440 | 0.512 |
| PSA Transformer (w/ DINO) | 0.436 | 0.515 |
| PSA Transformer (w/ DINO)$^\ddagger$ | 0.447 | 0.521 |

> **"I appreciate the qualitative results on PASCAL VOC."** We are glad to hear that! > **"(Minor) In the attached pdf, compositional generation is shown using PSA, it would have been nice to see qualitative comparison with other SA-based efforts that allow compositional generation. I understand this is nitpicky, considering the limited space. But this would have been useful for completeness."** We emphasize that compositional generation is not the focus of the paper but is a byproduct of our theoretical framework which would be interesting to explore further in future work. 
> **"While I understand the computational complexity discussion, I would have liked to see wall-clock times of training with PSA as opposed to vanilla SA, at least in approx values."** No problem, please find below the training iteration speeds for both models on PascalVOC with a single RTX 3090: - SA runs at 2.31 iterations per second - PSA runs at 2.23 iterations per second This results in PSA being approximately 3 seconds slower per training epoch, which is quite negligible.
Summary: This paper proposes a probabilistic slot attention method that can learn identifiable object-centric representations. Compared with prior work on identifiable object-centric representation methods, the proposed model can scale slot-based methods to high-dimensional images. Both theoretical analysis and experiments verify the effectiveness of the proposed model. Strengths: 1. This paper proposes a novel probabilistic slot attention model, which can learn identifiable object-centric representations. 2. The proposed model can scale to high-dimensional image datasets. 3. The proposed model appears to be solid. 4. The paper is well written and reads well. Weaknesses: 1. The experiments are only conducted on two toy datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How much computational complexity has increased? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and overall positive outlook on our paper. We are encouraged to read that our work is perceived as solid, novel, and well-written. Please see our responses to the questions raised below. > **"The experiments are only conducted on two toy datasets."** We kindly remind the reviewer that the objective of our work is to study the theoretical identifiability of slot representations and the conditions that ensure this property, rather than to pursue state-of-the-art empirical results. Understanding when object-centric representations can theoretically be identified is crucial for scaling slot-based methods to high-dimensional images with correctness guarantees. To verify our theoretical results, we first conduct detailed experiments on controlled datasets and then extend our demonstrations to unstructured image data. We stress that the synthetic datasets we used are necessary for properly testing our identifiability hypotheses. Before our work, there was a lack of explanatory theory for why state-of-the-art results were able to be obtained using non-additive autoregressive Transformers (DINOSAUR [60]) and/or diffusion-based decoders (Slot-Diffusion [r2]). We showed that by viewing slot attention through a probabilistic graphical modelling perspective it is possible to prove slot identifiability for non-additive decoders using proof techniques from the identifiable generative modelling literature. Given that vanilla slot attention can be seen as a simplified version of probabilistic slot attention, akin to the relationship between soft k-means and GMMs, our theoretical results suggest why non-additive decoder structures can work well given the appropriate latent structure and inference procedure are in place. 
Nonetheless, **we have now evaluated our method on real-world large-scale datasets** and using more powerful decoders to demonstrate that our method also scales well - please find the details of our experiments in the general comment at the top. > **"How much computational complexity has increased?"** We emphasise that probabilistic slot attention (PSA) retains the $\mathcal{O}(TNKD)$ computational complexity of vanilla slot attention, where $T$ denotes the number of attention iterations, $N$ the number of input vectors, $K$ the number of slots and $D$ the slot/input dimension. The additional operations we introduce for calculating slot mixing coefficients and slot variances (under diagonal slot covariance structure) have complexities of $\mathcal{O}(NK)$ and $\mathcal{O}(NKD)$ respectively, which do not alter the dominant term. Furthermore, when used in conjunction with additive decoder-based models, PSA can reduce computational complexity by pruning inactive slots via automatic relevance determination (ARD) as outlined in Section 4. --- Rebuttal Comment 1.1: Comment: Dear Reviewer 11Sk, As the author-reviewer discussion period is soon coming to a close, we kindly ask the reviewer to take the opportunity to engage with us. We sincerely appreciate the time and effort the reviewer has already contributed to the review of our work and hope our thoughtful rebuttal addresses your concerns. Best wishes, The Authors
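To make the soft k-means/GMM analogy invoked in the rebuttal concrete, below is a didactic sketch of one EM step for a diagonal-covariance Gaussian mixture over $N$ inputs and $K$ components. Its per-step cost is $\mathcal{O}(NKD)$, matching the dominant complexity term discussed above. This is a generic illustration of the underlying EM structure, not the paper's probabilistic slot attention module.

```python
import numpy as np

def gmm_em_step(x, mu, var, pi):
    """One EM step for a diagonal-covariance GMM.

    Shapes: x (N, D) inputs, mu (K, D) means, var (K, D) variances,
    pi (K,) mixing weights. Cost is O(N*K*D) per step.
    """
    # E-step: responsibilities r[n, k] proportional to pi_k * N(x_n | mu_k, var_k),
    # computed in log space for numerical stability.
    diff = x[:, None, :] - mu[None, :, :]                          # (N, K, D)
    log_p = -0.5 * np.sum(diff**2 / var + np.log(2 * np.pi * var), axis=-1)
    log_r = np.log(pi) + log_p
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)                              # (N, K)

    # M-step: closed-form updates of weights, means, and variances.
    nk = r.sum(axis=0)                                             # (K,)
    pi_new = nk / len(x)
    mu_new = (r.T @ x) / nk[:, None]
    var_new = (r.T @ x**2) / nk[:, None] - mu_new**2 + 1e-6        # small floor
    return mu_new, var_new, pi_new
```

Fixing all variances to be equal and skipping the variance/weight updates reduces this iteration to soft k-means, which mirrors the claimed relationship between vanilla and probabilistic slot attention.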
Rebuttal 1: Rebuttal: We extend our thanks to all the reviewers for their time and constructive feedback which has undoubtedly helped improve the paper. We are pleased that the work was perceived to be well-written, well-presented, and novel, with solid results and of interest to the community. In the following, we highlight the main clarifications of our work raised by multiple reviewers and ***provide additional large-scale experimental results*** addressing all requests (see Table below and the attached pdf). **General Clarifications:** As correctly noted by all reviewers, the primary focus of our work is theoretical. To the best of our knowledge, before our work, there was a lack of explanatory theory for why state-of-the-art results were able to be obtained using non-additive autoregressive Transformers (DINOSAUR [60]) and/or diffusion-based decoders (Slot-Diffusion [r2]). We showed that by viewing slot attention through a probabilistic graphical modelling perspective it is possible to prove slot identifiability for non-additive decoders using proof techniques from identifiable generative modelling. Given that vanilla slot attention can be seen as a simplified version of probabilistic slot attention, akin to the relationship between soft k-means and GMMs, our theoretical results suggest why non-additive decoder structures can work well given the appropriate latent structure and inference procedure are in place. However, there is a trade-off between identifiability and expressivity induced by the choice of decoder structure [45]. Depending on the use case, it may be beneficial to combine both latent and additive decoder structures in practice, particularly if the latter introduces useful inductive biases and/or simplifies the optimization problem. We stress that probabilistic slot attention (PSA) retains the $\mathcal{O}(TNKD)$ computational complexity of vanilla slot attention (SA). 
The additional operations we introduce for calculating slot mixing coefficients and slot variances (under diagonal slot covariance structure) have complexities of $\mathcal{O}(NK)$ and $\mathcal{O}(NKD)$ respectively, which do not alter the dominant term. Furthermore, when used in conjunction with additive decoder-based models, PSA can reduce computational complexity by pruning inactive slots via automatic relevance determination (ARD) as outlined in Section 4. **Large-scale Experiments:** We empirically tested slot identifiability using more complex non-additive transformer decoders, following the SLATE [61] implementation, and simply replaced the slot attention (SA) module with probabilistic slot attention (PSA). On the CLEVR dataset, we observed an SMCC of $\mathbf{0.73 \pm 0.04}$ and an R2 of $\mathbf{0.55 \pm 0.06}$, which are significantly better than all other models listed in Table 2 in the paper. To demonstrate that PSA can scale to large-scale real-world data, we ran additional experiments on the Pascal VOC2012 dataset, following the exact "DINOSAUR" strategies and setups described in [60, r4] for fairness, and then simply swapped out SA with PSA. Note that SA MLP (w/ DINO) denotes our replication of DINOSAUR MLP from [60] as a baseline. 
The table below shows the obtained results (all baselines are standard results taken from [60, r3]):

| Models | $\text{mBO}_i$ | $\text{mBO}_c$ |
|---------------|----------------|---------------|
| Block Masks | $0.247 \ \small{\pm \ 0.000}$ | $0.259 \ \small{\pm \ 0.000}$ |
| SA | $0.222 \ \small{\pm \ 0.008}$ | $0.237 \ \small{\pm \ 0.008}$ |
| SLATE | $0.310 \ \small{\pm \ 0.004}$ | $0.324 \ \small{\pm \ 0.004}$ |
| Rotating Features | $0.282 \ \small{\pm \ 0.006}$ | $0.320 \ \small{\pm \ 0.006}$ |
| DINO k-means | $0.363 \ \small{\pm \ 0.000}$ | $0.405 \ \small{\pm \ 0.000}$ |
| DINO CAE | $0.329 \ \small{\pm \ 0.009}$ | $0.374 \ \small{\pm \ 0.010}$ |
| DINOSAUR MLP | $0.395 \ \small{\pm \ 0.000}$ | $0.409 \ \small{\pm \ 0.000}$ |
| **Ours:** | | |
| SA MLP (w/ DINO) | $0.384 \ \small{\pm \ 0.000}$ | $0.397 \ \small{\pm \ 0.000}$ |
| SA MLP (w/ DINO)$^{\ddagger}$ | **$0.400 \ \small{\pm \ 0.000}$** | **$0.415 \ \small{\pm \ 0.000}$** |
| PSA MLP (w/ DINO) | **$0.389 \ \small{\pm \ 0.009}$** | **$0.422 \ \small{\pm \ 0.009}$** |
| PSA MLP (w/ DINO)$^{\ddagger}$ | **$0.405 \ \small{\pm \ 0.010}$** | **$0.436 \ \small{\pm \ 0.011}$** |
| PSA Transformer (w/ DINO)$^{\star}$ | **$\mathbf{0.435} \ \small{\pm \ 0.01}$** | **$\mathbf{0.499} \ \small{\pm \ 0.01}$** |

$^{\ddagger}$ Using slot attention masks rather than decoder alpha masks for evaluation. $^{\star}$ Trained for $\approx$15K steps only due to time constraints (250K are needed). The results show that PSA is competitive with SA at scale. Finally, we have also included basic illustrations of compositional samples from the aggregate posterior on both CLEVR and Objects-Room datasets to verify our theory. Please note that these models are quite small and were not optimized for sample quality since they were used primarily to measure slot identifiability across runs in our main experiments. [r2] Wu, Z., Hu, J., Lu, W., Gilitschenski, I. and Garg, A., 2023. 
Slotdiffusion: Object-centric generative modeling with diffusion models. Advances in Neural Information Processing Systems, 36, pp.50932-50958. [r3] Löwe, S., Lippe, P., Locatello, F. and Welling, M., 2024. Rotating features for object discovery. Advances in Neural Information Processing Systems, 36. [r4] Kakogeorgiou, I., Gidaris, S., Karantzalos, K. and Komodakis, N., 2024. SPOT: Self-Training with Patch-Order Permutation for Object-Centric Learning with Autoregressive Transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 22776-22786). Pdf: /pdf/efda9aa5dd25e96aa9b0de8c8a18e8b846981aa4.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Adaptive Proximal Gradient Method for Convex Optimization
Accept (poster)
Summary: The paper refines the analysis of AdGD to accommodate larger stepsizes, an approach that exploits local curvature instead of global smoothness. The technique is later extended to ProxAdGD. Experiments demonstrate the superiority of ProxAdGD over Armijo’s linesearch. Strengths: 1. The paper unfolds by providing intuition and examples, helping the readers to deepen their understanding beyond the technical results. 2. Better guarantees that exploit local properties of the objective are desirable, both in deterministic and stochastic settings. Weaknesses: 3. A more detailed comparison to the results of [MM20] is required. Does the improvement merely tighten the constants or enable a drastically different behaviour than [MM20]? 4. Experiments comparing the stepsize choices of the paper and [MM20] are sorely missing. Both the difference between the stepsizes along the optimization and the optimization error of each method. Technical Quality: 2 Clarity: 2 Questions for Authors: 5. See weaknesses. 6. Did the authors consider the stochastic case? Either by a theoretical analysis or by experiments. [MM20] provided both a theoretical guarantee (without adaptivity to local curvature) and experiments with neural networks. (The reviewer does not ask the authors to conduct new experiments with neural networks but merely asks if such were previously performed.) Overall, the paper presents a clear picture of AdGD which helps deepen our understanding, includes an improved stepsize selection and the new AdProxGD method. That being said, the quantification of the improvement is unclear due to limited comparison with [MM20]. Such a comparison will move a long way toward establishing the value of this work. Typos: line 54 - some some, line 272 - experimentswe. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Discussed in weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. The improvements over [MM20] are multiple: (i) We allow for larger steps, leading to improved final complexity. While a direct comparison is missing due to the less explicit final bound in [MM20], we have strived to make our bounds as explicit as possible in this work. (ii) We have gained a better understanding of the proposed algorithm and the reasons behind the complex structure of the stepsize (Theorem 1). (iii) We have extended this improved analysis to the proximal case, which is a non-trivial task compared to the setting with a fixed stepsize. 2. In our work, we haven't conducted experiments in the stochastic case. In [MM20], there wasn't a strong theory for the stochastic case, so a variation of the algorithm proposed in [MM20] was tested with additional hyperparameters. These hyperparameters cover a family of algorithms, including the newly proposed one (though [MM20]'s theory didn't support it). So the method we proposed here was already tested in unconstrained stochastic case by [MM20], and by repeating these experiments we wouldn't gain new insights. Additionally, the constrained case for neural networks is very uncommon and not particularly interesting. We hope that our response addresses your concern and we would greatly appreciate it if you revisited the score given to our work. As it currently stands, it is unclear to us why you voted for the rejection of our work. --- Rebuttal 2: Comment: I thank the authors for their response. From what I gather, the step-size update of Algorithm 1 is larger than that of [MM20] only in cases where $\sqrt{1+\theta_{k-1}} \alpha_{k-1} > \frac{1}{\sqrt{2} L_k}$ (due to the recursive step-size update the actual comparison is more nuanced), so the step-sizes of [MM20] are not strictly smaller but may be equal. The case of Algorithm 2 is even more complicated as $\sqrt{1+\theta_{k-1}} \alpha_{k-1}$ is replaced with $\sqrt{\frac{2}{3}+\theta_{k-1}} \alpha_{k-1}$. 
Is it possible to show that the new step-sizes are strictly greater ($>$ and not $\geq$) than those of [MM20]? If not, is it possible to try and demonstrate it by experiments (even plotting the step-sizes on the same figure)? --- Rebuttal Comment 2.1: Title: Answer to the question Comment: While a direct comparison of the stepsizes is impossible due to different trajectories, our theory does give an improvement for the **sum** of stepsizes. The latter determines the convergence rate as it appears in the denominator of the upper bound. In this respect, we can rigorously prove that **both** of our proposed algorithms (Algorithms 1 and 2) have a better upper bound on the functional gap than [MM20]. As for the experiments, we didn't include a direct comparison because our experiments focused on constrained (or composite) problems, whereas [MM20] only dealt with unconstrained problems. The methods perform similarly in practice, which is in line with the theory that only guarantees a constant-factor improvement, with the new method converging slightly faster due to the larger sum of stepsizes.
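The stepsize rule debated in this thread can be made concrete with a short numerical sketch. This is not the authors' exact Algorithm 1; it implements the published [MM20] rule $\alpha_k = \min\{\sqrt{1+\theta_{k-1}}\,\alpha_{k-1},\ 1/(2L_k)\}$ with the local curvature estimate $L_k = \|\nabla f(x^k)-\nabla f(x^{k-1})\|/\|x^k-x^{k-1}\|$, and a comment marks where the paper's larger cap $1/(\sqrt{2}L_k)$ would differ. Function names and the test problem are illustrative.

```python
import numpy as np

def adgd(grad, x0, iters=500, a0=1e-6):
    # AdGD sketch following the [MM20] rule; the paper under review uses a
    # larger cap 1/(sqrt(2)*L_k) in place of 1/(2*L_k) below.
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    a_prev, theta = a0, np.inf
    x = x_prev - a_prev * g_prev          # one tiny warm-up step
    for _ in range(iters):
        g = grad(x)
        step_norm = np.linalg.norm(x - x_prev)
        # Local curvature estimate L_k = ||g_k - g_{k-1}|| / ||x_k - x_{k-1}||
        Lk = np.linalg.norm(g - g_prev) / step_norm if step_norm > 0 else 0.0
        growth = np.sqrt(1.0 + theta) * a_prev
        a = min(growth, 1.0 / (2.0 * Lk)) if Lk > 0 else growth
        theta = a / a_prev                 # stepsize ratio for the next round
        x_prev, g_prev, a_prev = x, g, a
        x = x - a * g
    return x

# Illustrative test problem: f(x) = 0.5 * x^T A x, minimized at the origin.
A = np.diag([1.0, 10.0])
x_star = adgd(lambda v: A @ v, np.array([1.0, 1.0]), iters=1000)
```

Plotting the sequence of `a` values produced by this loop against those of the paper's variant would be one way to realize the experiment the reviewer asks for.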
Summary: The paper introduces new algorithms for solving convex optimization problems, where a step size parameter adapts to the underlying objective function. Adaptation is achieved by making good use of the already available gradient information, and does not have a further computational cost. The authors propose multiple variants. One of those relies on a generalized derivation of an algorithm from a recent paper to achieve improved step sizes. Another provides yet larger stepsizes, and a third one allows non-differentiable (but still convex) objective functions, provided that they can be split into a smooth and a non-smooth function where the proximity operator of the non-smooth function is feasible to realize. They compare their algorithm against a simple alternative on a specific problem to demonstrate how it can reduce computational cost. Strengths: The first algorithm alone, which relies on a generalized derivation of a past algorithm, is of interest. The two additional algorithms further expand the usefulness of the paper. In particular, the third algorithm, which allows handling of non-smooth functions, would be useful in practice. In particular, it extends the applicability of the algorithm to constrained problems (provided projections onto the constraint set are feasible in practice), or potentially, problems with sparsity constraints, which can be enforced by convex functions like some variant of an $\ell_1$ norm. I think such problems are pretty relevant for ML, and I'd expect the paper to be of interest to a wide community. Weaknesses: The attention is restricted to convex problems. To be fair, this is perhaps welcome, as it allows the authors to provide a clean analysis of their algorithms. In practice, many convex algorithms found their way to non-convex problems with success, sometimes without justification. At times, I wished the authors would complete their arguments fully. I have a few comments below.
The numerical problem didn't feel very fair for the competitor algorithm (I probably would not have used such an algorithm due to how expensive its steps are -- see below). Technical Quality: 3 Clarity: 3 Questions for Authors: - eqn 13: I don't see how a simple substitution of the previous inequality into (10) gets you this. If you're using additional steps, please either note this or include the details -- it's perfectly understandable to push some content to the appendix, but the manuscript builds the development perfectly to this stage, only to suddenly switch gears. Since this subsection is meant to be simple, I'd suggest including the details. - line 191, "note that the second bound...": please note this is the "second bound in step 5 of Alg. 2". - eqn 15: In this inequality, should $\alpha_k$ and $\alpha_{k-1}$ be interchanged? In that case, the following interpretation is also not correct. - eqn 16: For a wildly changing function, $L_1$ won't capture the behavior globally anyway. Is this meant for a relatively "well-behaved" function? - line 221: I'd suggest writing out the definition of the proximity operator, $\arg\min_t \frac{1}{2}\|x - t\|_2^2 + g(t)$, instead of relying on shorthand. The text is perfectly readable without knowing what a proximity operator is. In fact, you could also include $\alpha_k$ in there, since that's used in the algorithm description. - line 222, "Algorithm 3, presented in Appendix C": Please include the algorithm as part of the main text. - Numerical experiment on the constrained problem: I understand the motivation behind comparing the objective wrt the projections. However, this is particularly a problem where a line search would be infeasible due to the high computational cost. It would be interesting to compare against a basic forward-backward splitting method with a fairly reasonable estimate of the local $L$. If that's not possible, and the baseline algorithm is the most suitable, it'd be good to include an argument.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive evaluation of our work! 1. Indeed, we tried to explain our derivations as thoroughly as possible. As we mentioned, equation (13) is just a substitution of the previous equation into (10). We may have forgotten to clarify that $\alpha^{2}_{k} \|\nabla f(x^{k})\|^{2} = \|x^{k+1} - x^{k}\|^{2}$. We will make this clearer in the revision. 2. True, we will add this in the revision. 3. We believe eq. (15) is correct as it stands. It is derived directly by squaring the previous inequality and dividing both sides by $\alpha^{2}_{k-1}$. 4. We are not sure we fully understood the question. No single number can capture the global behavior of a function. In our case, $L_{1}$ is simply the first approximation of a local Lipschitz constant around $x^{1}$. Note that it serves as a lower bound for the global Lipschitz constant $L$, and this may be the only aspect it captures. 5. We beg to disagree here about the use of the short notation. We believe $\mathrm{prox}$ enhances readability in the same way that the metric projection $P_{C}$ is much more convenient than writing $\arg\min_{x \in C} \| x-a \|^{2}$ each time. However, we agree that it's better to include the definition you mentioned when we define the proximal operator in line 221. 6. Thanks for this comment! You cannot imagine how much time we spent trying to fit it into the main text, but it always pushed out something more important. If the paper is accepted, we will have an additional page, so it won't be a problem anymore. 7. We think it is actually important to compare the algorithm with the most widely used and robust approach — linesearch. Choosing a reasonably accurate estimate of the local Lipschitz constant is easier said than done. For instance, it is far from trivial to determine what to choose in problem (24). One of the main goals of this paper was to study composite optimization, where prox or projections matter. 
However, we conducted an extensive study for the unconstrained case already in [MM20]. Also, note that problems (50) and (52) in the Appendix do not consider projections and only compare gradients. --- Rebuttal Comment 1.1: Title: response to rebuttal Comment: Thank you for the responses. My comments were mainly suggestions -- I appreciate the additional clarification that you'll be adding. For the suggestions you choose not to follow, I won't press further. I was going to insist on (15), but I realized both versions are correct. Starting from (assuming the term inside $\sqrt{\cdot}$ is positive) $$ \alpha_k \leq \dfrac{\alpha_{k-1}}{\sqrt{2 \alpha_{k-1}^2 L_k^2 - 1}} $$ we have $$ 2 \alpha_{k-1}^2 L_k^2 - 1 \leq \dfrac{\alpha^2_{k-1}}{\alpha^2_k}. $$ Rearranging and dividing by 2, I get $$ \alpha_{k-1}^2 L_k^2 - \dfrac{\alpha^2_{k-1}}{2\alpha^2_k} \leq \frac{1}{2}. $$ I thought this contradicts (15) but multiplying both sides by $\dfrac{\alpha^2_{k}}{\alpha^2_{k-1}}$ and rearranging, we get (15). --- Reply to Comment 1.1.1: Title: Thank you for getting back to us Comment: We thank you for giving the suggestions, they helped us identify places that required clarification. Please let us know if you have unresolved concerns. If however our feedback addressed your concerns, we'd also appreciate it if you updated the paper score.
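The proximal operator shorthand discussed in this exchange can be illustrated with a minimal sketch: the definition $\mathrm{prox}_{\alpha g}(x) = \arg\min_t \frac{1}{2}\|x-t\|^2 + \alpha g(t)$ together with a fixed-step forward-backward splitting loop. This is not the paper's adaptive proximal method (whose stepsize comes from observed gradient differences); the $\ell_1$ example and function names are illustrative only.

```python
import numpy as np

def prox_l1(x, t):
    # prox_{t*||.||_1}(x) = argmin_u 0.5*||x - u||^2 + t*||u||_1,
    # which has the closed-form soft-thresholding solution.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad(grad_f, prox_g, x0, alpha, iters=100):
    # Fixed-step forward-backward splitting:
    #   x_{k+1} = prox_{alpha*g}(x_k - alpha * grad_f(x_k)), alpha <= 1/L.
    # The paper's adaptive variant replaces the fixed alpha with a stepsize
    # estimated from local curvature.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_g(x - alpha * grad_f(x), alpha)
    return x

# Example: min_x 0.5*||x - b||^2 + ||x||_1, whose minimizer is prox_l1(b, 1).
b = np.array([3.0, -0.5, 1.0])
x_hat = prox_grad(lambda v: v - b, prox_l1, np.zeros(3), alpha=1.0)
```

Projection onto a constraint set fits the same template: replacing `prox_l1` with a projection recovers projected gradient descent, the constrained setting the rebuttal emphasizes.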
Summary: This paper considers optimization problems where the function is convex and possibly composite. Gradient Descent and the Proximal Gradient method are studied, and one of the main motivations of this work is the paper [MM20], which developed a locally adaptive step size and an associated algorithm 'adaptive gradient descent without descent AdGD'. In the current work, the authors consider whether the bounds on the step for AdGD are essential, and they find that they are, but can be improved upon. They present Algorithm 2, which employs their refined step size, and Theorem 2 details the convergence properties. As well as improving the step size criteria, this work also extends that of [MM20] by presenting an adaptive proximal variant. The algorithm is given in Algorithm 3, and convergence properties are presented in Theorem 3. The authors compare their algorithm with others in the literature in Section 5, including detailing possible extensions, and why they might prove tricky. Finally, numerical experiments are presented in Section 6 (with proofs and further experiments in the appendix). Strengths: Writing. The paper was well written and I enjoyed reading it. I liked the writing style because the authors made efforts to explain *why* they were doing what they were (not just what they were doing). I liked that the authors opted to keep the exposition simple and easy to follow (e.g. see Remark 1 regarding the potential use of cocoercivity). Contribution. Gradient descent is one of the most important algorithms in this field, and the step size has a major impact on the practical performance of the algorithm. This work presents a step size selection condition that is conceptually simple, and is cheap to evaluate, which is a nice contribution. The extension to the proximal setting is also welcome. Weaknesses: It would be good if the authors tightened up the wording of, for example, their theorem statements.
For example, for Theorem 2, I think it would be better if they mentioned "given an initial point $x_0$, and an initial step size $\alpha_0>0$". Also, the phrase "converges to a solution" is used, but it would be better if they also referred to the problem they were trying to solve and said what they mean by a solution (and they should also specify that $F=f$, define $f_*$, etc.). They should then check the other theorem statements and update them appropriately. I feel it is really important for them to be precise. In Lemma 1 I think it should be mentioned that $\alpha_k>0$ (i.e., the step size can be 'arbitrary but positive'). Numerical experiments. The authors presented several numerical experiments, which is good. However, it would have been good to have had more discussion explaining to the reader what the plots were showing, i.e., more emphasis describing the observed practical behaviour of the algorithms, and specifically commenting on the performance for the parameter choices. Technical Quality: 3 Clarity: 4 Questions for Authors: N/A Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive evaluation of our work and the high praise! We are especially pleased that you liked the way the paper is written; we put a lot of effort into making it readable and enjoyable. We agree with tightening the statements and will make the necessary changes in the revision. The same goes for Lemma 1. Regarding the descriptions of the experiments, we were limited by space. If the paper is accepted, we will make sure to add more details.
Summary: In the paper, the authors explore two adaptive first-order algorithms in convex optimization, namely gradient descent and proximal gradient methods, which are based on observed gradient differences. With a novel Lyapunov energy, the authors prove its convergence assuming only a local Lipschitz condition on the gradient and extend it to the proximal case. In addition, the methods allow larger initial stepsizes than those in previous work. Strengths: For many other methods, such as Gradient Descent, we should assume the problems are globally L-smooth and calculate the value L. However, we just need to ensure the objective functions are locally L-smooth when using the methods in this paper. Like the Barzilai-Borwein stepsize, the methods do not include a linesearch procedure. This means they are just based on observed gradient differences and do not incur a high computational cost in each iteration. However, in contrast to the Barzilai-Borwein stepsize, the paper guarantees convergence theoretically in the convex and proximal cases. Weaknesses: 1. We may not guarantee convergence in the nonconvex case. 2. In the second experiment, the first term in the problem should probably be $(1+x_1^2)^{1/2}$ rather than $(1+x_1)^{1/2}$. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What about the result when using stochastic gradients? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive evaluation of our work! 1. Indeed, the theory won't be applicable in the nonconvex case. Convexity was instrumental in deriving all the proofs, and in the weakly convex case it is not clear how to proceed. However, we don't agree that this is a weakness. First of all, convex problems arise as subproblems in many applications such as bilevel optimization and optimal control, and black-box methods are of particular interest in such cases. Secondly, many optimization methods such as momentum and Nesterov's acceleration have been designed using the convex framework and have been used across problem classes. Moreover, we do not believe it is possible to achieve the same adaptivity as in our method in the general nonconvex case. While we cannot prove this definitively, there are no works, at least to our knowledge, that demonstrate adaptivity (as defined in the paper) in the general nonconvex case with a sound theory and convergence rate. Thus, we believe it is unjust to criticize the paper for not achieving something that is currently beyond reach for all. 2. Thank you for noticing the typo! 3. Unfortunately, it is again unclear how to develop a sound theory in the stochastic case; the theory is lacking even for linesearch methods. The difficulty comes from the fact that convergence guarantees for SGD usually require the stepsize to either use the Lipschitz constant of the full gradient or the maximum over all Lipschitz constants across all stochastic samples. Empirical stochastic gradients would only give a rough estimate of these quantities, so it is not immediate how we could get solid convergence guarantees for such methods. However, we believe it is a very intriguing and interesting direction for future research and we hope that adaptive methods will be extended to this setting.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
SocraticLM: Exploring Socratic Personalized Teaching with Large Language Models
Accept (spotlight)
Summary: This paper aims to introduce a “Thought-provoking” paradigm into LLM-based personal teaching. The authors propose an innovative “Dean-Teacher-Student” pipeline with three LLM-based agents to collect Socratic teaching data. During this process, the authors also contribute a student cognitive system to simulate six types of authentic students with the “Student” agent. Then, the authors adopt data augmentation for four crucial teaching abilities and collect more single-round dialogue data. Finally, the authors investigate three training strategies to balance the problem-solving ability and Socratic teaching ability of LLMs. The fine-tuned SocraticLM shows great teaching performance compared with several LLMs including GPT4 and EduChat. The authors also provide sufficient experiments to validate the importance of single-round data and training strategies, which makes the method self-contained and instructive. Strengths: - Motivation: I agree that the idea of LLM-based teaching is essential for current intelligent education systems, and it is hard for existing methods to achieve satisfactory teaching effects through simply answering students’ questions. Therefore, this paper is beneficial for practical applications. - Dataset Contribution: I think one of the most important contributions of this paper is the proposed SocraTeach dataset that consists of high-quality teaching dialogues. It is the first public large-scale Socratic teaching dataset. The authors claim they will release it, which can benefit the community in conducting more research on teaching LLMs. - Methodology: The proposed “Dean-Teacher-Student” pipeline is reasonable and easy to reproduce. The introduction of a student cognitive system that simulates six kinds of students guarantees the diversity of the teaching dialogue data. It has the potential to be generalized to other works.
- Comprehensive Assessment: Sufficient experiments are conducted to demonstrate the performance of SocraticLM over several LLMs, which makes the improvement convincing. Moreover, I think the analyses with different data scales are beneficial for reproduction and assist in building other educational LLMs. Weaknesses: Some issues that I hope can be further addressed by the authors: - It is rational and acceptable to use GPT4 to construct data, but I am interested in whether the performance of SocraticLM will be limited by GPT4 (please refer to question 1 below). - I recommend the authors provide more explanations of some experimental results, which can better reflect the effectiveness of the proposed SocraticLM (please refer to question 2 below). Technical Quality: 3 Clarity: 3 Questions for Authors: - In my opinion, the quality of the collected SocraTeach dataset highly depends on the Teacher agent, which is implemented with GPT4. Therefore, if GPT-4 itself cannot correctly solve a problem, can it still serve as a teacher in an educational role? - I notice that in Table 2, SocraticLM performs even better in problem-solving accuracy on the MAWPS dataset. Could the authors provide some explanations for this? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Please see questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your affirmation of our motivation, the contribution of our dataset, the novelty and generalization ability of our pipeline, and our sufficient experiments and comprehensive assessment. $\bf{Q1}$:If GPT-4 itself cannot correctly solve a problem, can it still serve as a teacher in an educational role? $\bf{A1}$:Thank you very much for your insightful question. We believe it still can serve as a teacher for two main reasons. Firstly, the Dean agent can judge and correct each round of the Teacher agent's instruction. Within a single round, the instruction needed to be judged usually involves just one reasoning step, making this process easier than having GPT-4 solve a problem directly. Therefore, even if GPT-4's problem-solving ability is limited, it still has the potential to judge and revise the instruction accurately. Secondly, our pipeline does not rely on GPT-4's inherent problem-solving ability because we input the correct solutions into the prompts for both the Dean and Teacher agents. In other words, our pipeline is indeed having SocraticLM teach the correct solution to students rather than solving the problem itself. Therefore, the problem-solving ability of GPT-4 does not limit the effectiveness of our pipeline. Your question is very thought-provoking, and we will supplement these discussions in the revised version of our paper. $\bf{Q2}$:Why SocraticLM performs even better in problem-solving accuracy on MAWPS dataset? $\bf{A2}$:Thanks for your valuable question. We think the reason is that from fine-tuning on our SocraTeach dataset, SocraticLM indeed learns to answer multiple questions from students about various aspects of a single problem (e.g., asking about each reasoning step and the involved knowledge). This process may allow SocraticLM to develop a deeper understanding of the problem-solving process, which in turn can improve its problem-solving accuracy. 
--- Rebuttal Comment 1.1: Comment: Thanks for your answer. These responses fulfill my doubts and reinforce my score. --- Reply to Comment 1.1.1: Comment: Many thanks for your quick response! We are very happy to resolve your concerns and will add all suggested modifications into the revised version based on your comments. Thank you again for the time you took to review our paper and your affirmation of our work. If there are any further questions, please feel free to raise them, and we can discuss any questions at any time. We will try our best and respond as soon as possible.
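The judge-and-revise loop the authors describe (the Dean checking each single-round instruction from the Teacher) can be sketched schematically. Everything below is hypothetical scaffolding: the three `call_*` functions stand in for the GPT-4 prompts of the actual Dean, Teacher, and Student agents, and are replaced here by deterministic stubs so that only the control flow is concrete.

```python
# Hypothetical sketch of a "Dean-Teacher-Student" dialogue-collection loop.
# The real pipeline prompts GPT-4 for each role; these stubs are placeholders.

def call_teacher(problem, solution, dialogue):
    # Stub: the real Teacher agent crafts a Socratic hint from the solution.
    return f"Hint {len(dialogue) // 2 + 1}: think about the next step."

def call_dean(problem, solution, instruction):
    # Stub: the real Dean judges one instruction at a time (easier than
    # solving the whole problem) and may return a corrected version.
    ok = "step" in instruction
    return instruction if ok else instruction + " (revised by Dean)"

def call_student(problem, dialogue, profile):
    # Stub: the real Student agent is conditioned on a cognitive profile.
    return "I tried, here is my attempt." if profile != "gives-up" else "I don't know."

def collect_dialogue(problem, solution, profile, rounds=3):
    dialogue = []
    for _ in range(rounds):
        draft = call_teacher(problem, solution, dialogue)
        instruction = call_dean(problem, solution, draft)  # judge + revise
        dialogue.append(("teacher", instruction))
        dialogue.append(("student", call_student(problem, dialogue, profile)))
    return dialogue
```

The key design point the rebuttal argues is visible in `collect_dialogue`: the ground-truth `solution` is passed to both the Teacher and the Dean, so the pipeline never depends on GPT-4 solving the problem on its own.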
Summary: The authors propose a novel method based on Socratic teaching for improving LLM teaching abilities. Strengths: - novel, interesting, and well-described method for improving LLM teaching ability - creation and release of a useful, novel teaching dialogue dataset - propose and validate novel ways of testing LLM teaching ability - extensive experiments; I especially appreciate the ablation study - code released - I really appreciate the figures which do a great job of laying out the proposed methods and providing examples Weaknesses: - results could benefit from having standard errors or confidence intervals (authors say 'Yes' to this in NeurIPS checklist but only list having the kappa score in justification) - there seem to be a large number of related methods (e.g., search for "socratic LLM teaching" on Google Scholar); the authors should contextualize their work among this literature in the related works section, and clarify how their method differs from and outperforms existing methods from that literature (e.g., a good example is Socratic Playground - https://arxiv.org/abs/2406.13919) Minor (did not affect score): - Line 122: "While some student" --> "While some students" - Line 173: "such not providing" --> typo here - Line 312: "refuse" --> "refuses" or "refused" - Table 1 caption: "denote the" --> "denotes" - Line 595: "Border" -> "Broader" Ethics flag: - Research involved human subjects but unclear if authors had IRB approval. Not sure whether IRB approval is needed or not in this case so flagging for review by relevant experts just in case (note: the authors say the paper involves no crowdsourcing or human subjects in question 15 of the NeurIPS checklist, right after saying it does in question 14). Technical Quality: 3 Clarity: 3 Questions for Authors: - why does performance fall on some metrics at 125% data scale? - will the full SocraTeach dataset be publicly released?
- I would be open to increasing my score if concerns from the previous section are addressed Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: - authors discuss limitations but would be worth it to add that current experiments are only in English, and that current human evaluations come from a very small (and likely not broadly representative) sample Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Comment: Hello. This submission has been labeled by the reviewer for additional ethics review. I recommend the authors prioritize their response to the ethics concern so the ethics reviewers can get a clearer picture of the situation as early as possible. Please find above the ethics concerns raised by Reviewer sW5a. The purpose of the additional ethics review is to help the authors mitigate potential ethical, legal or societal harm at an early stage (if there's any). Thank you for your understanding. --- Rebuttal 2: Rebuttal: We sincerely appreciate your affirmation of the novelty of our SocraticLM, the value of our constructed teaching dialogue dataset, the innovation of our teaching ability evaluation system, and the clarity of our writing. Regarding your concerns about the ethics flag, we want to express our sincere gratitude for your attention and apologize for the misunderstanding. In this paper, we aim to construct a SocraticLM for personalized teaching. To achieve this, we first propose a “Dean-Teacher-Student” (DTS) pipeline to collect large-scale Socratic teaching dialogue data, where we implement all agents with GPT-4 and do not incorporate human subjects. As for our evaluation system, similar to other works, we invited 10 annotators to assess and compare the teaching instructions of different LLMs to test their performances. As shown in our annotation template in Appendix F, this process also does not involve research on humans or privacy/security risks. Following your suggestion, we will provide the necessary clarification and corrections to the conference and give a clearer introduction of our annotation process in the revised version. The following are the responses to your other questions: $\bf{Q1}$:Standard errors or confidence intervals. $\bf{A1}$:Thanks for your constructive comments.
We add the following results:

Model|Overall|IARA|CARA|SER|SRR
-|-|-|-|-|-
ChatGPT|0.29±0.024|0.42±0.034|0.93±0.001|0.62±0.052|0.19±0.020
GPT4|0.50±0.000|0.76±0.015|0.91±0.017|0.65±0.060|0.55±0.050
ChatGLM3|0.11±0.006|0.18±0.014|0.87±0.022|0.46±0.015|0.07±0.006
SocraticLM|0.62±0.029|0.83±0.028|0.98±0.015|0.74±0.039|0.78±0.026

$\bf{Q2}$:How this paper differs from and outperforms existing methods. $\bf{A2}$:Thanks for your thought-provoking question. As summarized in our related work section, existing methods on Socratic teaching with LLMs can be divided into two categories. The first uses a general LLM (e.g., ChatGPT, GPT-4) to assist in conversation design in courses[1], content authoring[2], explaining learning paths[3], and providing feedback[4]. Notably, the work you mentioned[5] uses GPT-4 to create an innovative Socratic Playground for Learning, constructing diverse learning scenarios where GPT-4 interacts with students in a Socratic manner. The second category collects data to train a specialized teaching LLM, with EduChat[6] being the most representative (also used as our baseline). Compared to them, our SocraticLM improves in two main aspects. 1) When constructing the SocraTeach dataset, we build a student cognitive system to simulate six kinds of authentic students and enhance four key teaching abilities. This enables SocraticLM to handle more complex teaching scenarios and have more comprehensive and systematic teaching abilities. 2) Our DTS pipeline contains a novel Dean agent to judge and revise the GPT-4-based Teacher agent, addressing the limitation that existing general LLMs may still make for a bad teacher[7] and improving teaching quality. Experiments show that SocraticLM outperforms GPT-4 and EduChat across various dimensions, verifying the effectiveness of our pipeline and model. Thank you for pointing out these excellent works and we will cite them in our revised paper. [1] ChatGPT in the generalized intelligent framework for tutoring.
[2] Ruffle&Riley: Towards the automated induction of conversational tutoring systems. [3] Supporting student decisions on learning recommendations: An LLM-based chatbot with knowledge graph contextualization for conversational explainability and mentoring. [4] How can I get it right? Using GPT to rephrase incorrect trainee responses. [5] SPL: A Socratic Playground for Learning powered by large language model. [6] EduChat: A large-scale language model-based chatbot system for intelligent education. [7] The AI teacher test: Measuring the pedagogical ability of Blender and GPT-3 in educational dialogues. $\bf{Q3}$:Minor typos. $\bf{A3}$:Thank you very much for thoroughly reading our paper and pointing them out. We will revise our paper carefully. $\bf{Q4}$:Why does performance fall on some metrics at the 125% data scale? $\bf{A4}$:Many thanks for your insightful question. In Figure 4, at the 125% data scale, the metrics that decline are IARA and overall quality, indicating that the root cause is a decrease in the model's ability to identify incorrect answers (the decline in overall quality is a subsequent result). This may be because, with the increase in multi-round dialogue data, the proportion of single-round dialogue data for “Incorrect reply” decreases. When the multi-round data scale exceeds 125%, the proportion of this single-round dialogue data may fall below a certain threshold, which results in diminishing effectiveness. Many thanks for pointing out this phenomenon. We will supplement these discussions and explanations in the revised paper. $\bf{Q5}$:Will the dataset be released? $\bf{A5}$:Yes! We have already made the test set of our SocraTeach dataset and the model training code available through an anonymous repository (https://anonymous.4open.science/r/NeurIPS-4310). If our paper is accepted, we will release the full dataset as soon as possible. $\bf{Q6}$:Current experiments are only in English and human evaluations come from a small sample.
$\bf{A6}$:Thanks for your constructive comments. Although our current dataset is in English, as mentioned in Section 3.2, our "Dean-Teacher-Student" pipeline is general and can easily be applied to other datasets, such as the Chinese Math23K dataset. For human evaluations, we sampled over 1,000 dialogues from the SocraTeach dataset to calculate metrics and calculated the Kappa score among the annotators. The results demonstrate the consistency of the manual evaluations and the effectiveness of our SocraticLM. Inspired by your comments, we will consider constructing multilingual teaching datasets and training judging networks based on human-annotated results for larger-scale automatic testing in the future. --- Rebuttal Comment 2.1: Comment: We wish to once again express our great appreciation for the time you have taken to review our paper. We would appreciate your feedback on whether your main concerns have been adequately addressed. We truly value your understanding and support, and will carefully revise the paper according to your suggestions. Thank you very much! --- Rebuttal Comment 2.2: Comment: Thank you for the rebuttal! My concerns have been addressed and I have increased my score accordingly. --- Rebuttal 3: Comment: Many thanks for your response and for increasing your score! We are very happy to address your concerns and will add all suggested clarifications and modifications into the revised version following your comments. Thank you again for the time you took to review our paper and your affirmation of our work. If there are any further questions, please feel free to raise them, and we can discuss any questions at any time. We will try our best and respond as soon as possible.
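The inter-annotator agreement cited in this answer (the Kappa score) can be computed as follows. This is a generic Cohen's kappa for two annotators, not the authors' exact evaluation script; the function name and example labels are illustrative.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    # and p_e is the agreement expected by chance from the label marginals.
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For instance, annotators who agree on 3 of 4 binary labels with marginals (2, 2) and (1, 3) get kappa = (0.75 - 0.5) / 0.5 = 0.5, noticeably lower than their raw 75% agreement.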
Summary: In this paper, the authors propose a novel SocraticLM to address the limitations of existing personal teaching methods that follow a “Question-answering” paradigm. To do this, the authors first propose a novel “Dean-Teacher-Student” pipeline to collect multi-round Socratic teaching dialogues, where the authors also design a student cognitive system to guide the agent behaviors. Then, the authors elaborate on four teaching abilities and propose corresponding data augmentation. Next, the authors fine-tune ChatGLM3 on the collected dataset with three strategies to balance teaching and reasoning abilities. Finally, the authors design the first systematic evaluation system for the teaching quality of LLMs. Experimental results clearly show the superiority of SocraticLM. Overall, this paper is easy to follow and presents a nice structure. Strengths: This paper has the following strengths: 1. The motivation to introduce a Socratic-teaching LLM that follows a “Thought-Provoking” paradigm is reasonable and necessary for AI and intelligent education. 2. The “Dean-Teacher-Student” multi-agent pipeline can sufficiently collect high-quality teaching dialogues. Besides, this pipeline is general enough to expand to other domains. Additionally, the data augmentation for four teaching abilities makes sense and is clearly organized. 3. This paper discusses and designs a comprehensive systematic evaluation system for the teaching quality of LLMs, which provides an effective tool for related research. The experimental results compared with 8 LLMs are sufficient, and the significant improvement over GPT4 clearly verifies the effectiveness of SocraticLM. Besides, the experiments evaluating the necessity of SocraticLM's components, including single-round dialogues and training strategies, make the paper instructive. Weaknesses: I have some concerns: 1. One of the most important roles in the proposed pipeline is the Dean agent, which plays a supervisory role compared to previous work.
Thus, I recommend adding more analyses of its effectiveness in experiments (please refer to question 1 below). 2.The single-round teaching dialogues are also important in this paper because they correspond to four crucial teaching abilities. Thus, I recommend adding more explanations of the experimental results in Section 6.2 (please refer to question 2 below). 3.I found some typos, for example, --Line 193, “a student need” should be “a student needs” --Line 275, “dialogue” should be “dialogues” Technical Quality: 3 Clarity: 3 Questions for Authors: 1.I think the Dean agent is important in the proposed pipeline. Therefore, I have the question: How can the importance of the Dean agent be assessed? Can it be reflected by comparing the results of SocraticLM and GPT4? 2.In Table 1, comparing the results of “w/o DiaS” and “w/o Correct”, I have a minor question: why is the CARA metric of “w/o Correct” lower than that of “w/o DiaS”? In my opinion, “w/o Correct” incorporates more single-round teaching dialogues, and thus its performance should be better. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your affirmation of the significance of our motivation, the innovation and generalizability of our pipeline, the contributions of our evaluation system, and the effectiveness of our SocraticLM. $\bf{Q1}$: How to assess the importance of the Dean agent? $\bf{A1}$: Many thanks for your careful comment. Yes, as you have noted, the effectiveness of the Dean agent can be reflected by comparing the performance of SocraticLM and GPT-4. For a given preceding dialogue, the Dean agent judges and corrects the responses generated by the GPT-4-simulated Teacher agent. This is the fundamental source of the differences between GPT-4 and SocraticLM after fine-tuning. Therefore, the superior performance of SocraticLM compared to GPT-4 shown in Table 1 directly demonstrates that the Dean agent is effective and necessary. $\bf{Q2}$: Why is the CARA metric of “w/o Correct” lower than that of “w/o DiaS”? $\bf{A2}$: Thank you very much for the question. We think the reason is that the "w/o Correct" model was still fine-tuned on 10K single-round dialogues corresponding to students’ "Incorrect-reply" as explained in Section 3.4. This led the model to develop a stronger tendency to perceive a student's reply as incorrect, causing it to incorrectly classify some correct student replies as wrong. As a result, the model's performance on the CARA metric was worse compared to the model that did not include single-round data training (i.e., “w/o DiaS”). This phenomenon indicates that it is necessary to construct data for the "Correct-reply" category and balance it with the "Incorrect-reply" category. $\bf{Q3}$: There are some typos. $\bf{A3}$: Thank you for pointing this out. We will carefully review and polish the writing of our paper. --- Rebuttal Comment 1.1: Comment: We wish to once again express our great appreciation for the time you have taken to review our paper. We would appreciate your feedback on whether your main concerns have been adequately addressed.
We truly value your understanding and support, and will carefully revise the paper according to your suggestions. Thank you very much! --- Rebuttal Comment 1.2: Title: comment from reviewer Comment: Thank you for clarifying the doubts. All my concerns have been addressed. --- Reply to Comment 1.2.1: Comment: Many thanks for your response! We are very happy to address your concerns and will add all your suggested modifications into the revised version. Thank you again for the time you took to review our paper and your affirmation of our work. If there are any further questions, please feel free to raise them, and we can discuss any questions at any time. We will try our best and respond as soon as possible.
Summary: In this paper, the authors fine-tuned a language model on synthetic data for Socratic Personalized Teaching and evaluated the performance of the proposed model in comparison with a number of baseline options. The authors proposed a multi-agent data synthesis pipeline. Using GPT-4, the authors simulated responses from both the AI teacher (the language model) and the student (the human user). The authors also included a "dean" agent role for evaluating proposed responses against a set of principles for Socratic Personalized Teaching. The authors fine-tuned a language model on the resultant synthetic data. The authors recruited human annotators to evaluate the overall quality of the fine-tuned model. Based on their synthetic dataset, the authors proposed a set of additional metrics for evaluating the performance of language models in the context of Socratic Personalized Teaching. The authors also shared insights on ways to prevent forgetting during the fine-tuning process. Strengths: The multi-agent pipeline for synthesizing data in the field of Socratic Personalized Teaching is a rather novel approach. The proposed method of leveraging synthetic data for evaluation resolves the limitation where human-written data in this field is difficult to obtain. Results on the impact of the "Scale of Problem-solving Data" (Section 6.3) provide useful insights into the trade-off between fine-tuning data (teaching style) and data from the original domain (ability to solve problems). Examples listed in the appendix suggest that the proposed approach indeed performs better than general-purpose baseline models such as GPT-4 or GPT-3.5. Weaknesses: Given that the evaluation method is a key contribution of this paper, the authors might also want to describe how their approach compares with related works in evaluating LLMs for education.
Instead of specifying one of the six "Student Cognitive States" when generating a response as a student, the authors allowed the LLM to pick one of these options on its own (see Section B.1), resulting in an imbalanced data distribution (see Figure 6(b) in the appendix). It might be helpful if the authors could elaborate more on this design choice, as personally I find it unclear how this imbalanced distribution might affect the model performance in the real world. It is nice to see that fine-tuning ChatGLM3-6B on the training split of the synthetic dataset improved performance on the test split, exceeding that of GPT-4, which generated the synthetic responses. Nevertheless, it is unclear from the paper how robust the model would be when the data distribution changes. One possible approach might be to collect additional MOOC question-answer data (as the authors already did) and test the model on these examples. Technical Quality: 3 Clarity: 2 Questions for Authors: It is unclear whether the seed questions from GSM8K and MAWPS behind the synthetic data for evaluation are entirely separate from the ones for generating the train split of the synthetic dataset. For example, would the same question from GSM8K appear in both the synthetic evaluation set and the synthetic fine-tuning set? Line 610 in the limitation section seems to allude to that, but wasn't entirely clear in my opinion. While "Overall Quality" is evaluated through human annotation (see Appendix F), what about the metrics IARA, CARA, SER, and SRR? Are these human-in-the-loop as well? It will be helpful if you can elaborate on these details. As a side note, the repository (https://anonymous.4open.science/r/NeurIPS-4310) does not seem to contain code for data synthesis.
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The use of ChatGLM3-6B as the base model is a possible limitation: given that the base model correctly solves only around 65% of the GSM8K questions (vs. ~90% for GPT-4), it might be unclear whether the model would be able to consistently give a correct answer. The authors acknowledged this limitation in Appendix J by stating that the same synthetic dataset can be used to fine-tune other LLMs. This limitation (related to forgetting during fine-tuning) was also touched upon in Section 6.3. Overall, I find the handling of this limitation quite reasonable, given that this paper is about Socratic Personalized Education, not about improving scores on GSM8K. Nevertheless, the authors might want to adjust the evaluation setup to account for this limitation, e.g., excluding from the evaluation GSM8K questions that cannot be solved using the original ChatGLM3-6B. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your affirmation of the novelty of our pipeline, the contribution of our evaluation system, and the significance of our experiments. $\bf{Q1}$: How does our approach compare with related works in evaluating LLMs for education? $\bf{A1}$: Thanks for your valuable question. The related works can be divided into two categories. The first involves objective similarity assessments with human-annotated instructions, using metrics like BLEU and BERTScore[1]. The second involves subjective human evaluations, such as calculating the percentage of cases where the student reaches the correct answer given the instructions[1], analyzing the correlation between student grade changes and the use of LLMs[2,3], and issuing questionnaires[4]. Compared to them, our evaluation system offers several advantages: 1. More comprehensive and adequate assessment. It includes evaluations of overall teaching quality and four key teaching abilities, a systematic organization that current works lack. 2. Better comparability. Since a real student can only be taught by one LLM at a time, it is hard to compare the teaching effects of different LLMs with the aforementioned subjective human evaluations. In comparison, our evaluation is based on shared teaching dialogues, which can support the fair evaluation of multiple LLMs simultaneously. 3. More extensive. The latest dataset[1] contains only 600 test dialogues, which is far less than our data volume. Following your suggestions, we will supplement these discussions in our paper to highlight our contributions. [1] A dialogue tutoring dataset with rich pedagogical properties grounded in math reasoning problems. [2] Gptutor: a chatgpt-powered programming tool for code explanation. [3] GPT-Empowered Personalized eLearning System for Programming Languages. [4] Recipe: How to integrate chatgpt into efl writing education. $\bf{Q2}$: Why allow the LLM to pick instead of specifying student cognitive states?
The imbalanced distribution affects the performance? $\bf{A2}$: Thanks for your insightful question. We allow the LLM to pick a student cognitive state for two reasons. First, real teaching processes are inherently complex and may cover multiple cognitive dimensions (e.g., calculation, knowledge mastery) within a single teaching dialogue. Thus, manually specifying the state may limit the diversity and hinder the simulation of the real teaching process. Second, it may disrupt the dialogue fluency. For example, if the Teacher agent asks "Do you know the concept of trigonometry?", it would be unnatural to ask the Student agent to respond according to a weak "Problem Understanding" state, which may lead to an almost irrelevant reply, making it less like a real teacher-student conversation. The imbalanced distribution will affect the model's performance. For instance, for a real student with a weak "Instruction Understanding" state (i.e., type 2 in Figure 6), a model trained on an imbalanced dataset with far less data on type 2 may struggle to provide appropriate instructions. To address this, in Section 3.4, we employ data augmentation for four key teaching abilities, and the ablation study in Section 6.2 clearly verifies the necessity of this enhancement. $\bf{Q3}$: Robustness when the data distribution changes (e.g., test on more MOOC question-answer pairs), and what about the dataset partition? $\bf{A3}$: Thanks for your meticulous attention to our data collection. Firstly, sorry for our unclear statements. In Section 3.4, we collected 200 genuine student inquiries from MOOCs that are unrelated to teaching to conduct data augmentation for students’ “Irrelevant” responses, rather than collecting question-answer pairs. Secondly, we greatly appreciate your concern regarding the model's robustness. As stated in lines 316-317 of Section 6, we remove the dialogues for questions in the evaluation set during training.
Therefore, the issue of "the same questions appearing simultaneously in both the evaluation set and the fine-tuning set" does not arise. This also addresses your point about the distribution shift or the need to test on new MOOC data, as our test set has not appeared in training, which validates the robustness of our model. $\bf{Q4}$: Elaborate on the metrics IARA, CARA, SER, and SRR. $\bf{A4}$: Yes! These metrics are also evaluated manually. Since they are considered objective binary classification tasks as explained in Section 5, when annotating, we simply provide the preceding dialogue and ask annotators to evaluate whether the LLM's response "identifies the incorrect/correct student reply" (for IARA/CARA), "addresses the student's question" (for SER), or "refuses to answer the student's irrelevant question" (for SRR). $\bf{Q5}$: Code for data synthesis. $\bf{A5}$: Thanks for your comments. We have supplemented all the agent code needed for data synthesis in the folder “dataset_synthesis” in our repository. If the paper is accepted, we will also release all the data and code for public use as soon as possible. $\bf{Q6}$: Evaluation that excludes the GSM8K questions that ChatGLM3 cannot solve. $\bf{A6}$: Thanks for your constructive suggestion. From the table below, we observe that the performance of the LLMs improves, but the increase is not particularly significant. We think this is because, in constructing our SocraTeach dataset, the prompts for the Dean and Teacher agents include the correct solutions for the problems. In other words, we do not require ChatGLM3 to solve the problems directly but rather to learn how to teach the correct solution to a student. Thus, the problem-solving ability of ChatGLM3 does not set the upper limit for the teaching ability of our SocraticLM.
| Model | Overall | IARA | CARA | SER | SRR |
| - | - | - | - | - | - |
| ChatGPT | 0.34±0.079 | 0.50±0.010 | 0.95±0.004 | 0.68±0.079 | 0.23±0.001 |
| GPT4 | 0.50±0.000 | 0.78±0.059 | 0.91±0.014 | 0.68±0.002 | 0.53±0.005 |
| ChatGLM3 | 0.14±0.039 | 0.21±0.052 | 0.88±0.010 | 0.49±0.079 | 0.07±0.009 |
| SocraticLM | 0.66±0.045 | 0.84±0.091 | 1.00±0.000 | 0.78±0.076 | 0.77±0.006 |

--- Rebuttal Comment 1.1: Comment: We wish to once again express our great appreciation for the time you have taken to review our paper. We would appreciate your feedback on whether your main concerns have been adequately addressed. We truly value your understanding and support, and will carefully revise the paper according to your suggestions. Thank you very much!
Rebuttal 1: Rebuttal: We sincerely thank all reviewers’ efforts in reviewing our paper. We would like to thank all of them for providing constructive and valuable feedback, which we will leverage to improve this work. We are encouraged by the positive comments from reviewers, including: - **Motivation**: “The motivation to introduce a Socratic-teaching LLM that follows “Thought-Provoking” paradigm is reasonable and necessary for AI and intelligent education” (Reviewer 3yrk), “LLMs-based teaching is essential for current intelligent education systems” (Reviewer 3PwF). - **Method**: “novel multi-agent pipeline” (Reviewer twUm, Reviewer 3yrk), “shared insights on ways to prevent forgetting” (Reviewer twUm), “novel SocraticLM” (Reviewer 3yrk), “interesting, and well-described method” (Reviewer sW5a), “useful, novel teaching dialogue dataset” (Reviewer sW5a), “novel ways of testing LLM teaching ability” (Reviewer sW5a), “innovative, reasonable and easy to reproduce” (Reviewer 3PwF), “self-contained and instructive” (Reviewer 3PwF). - **Experimental Results**: “provides useful insights” (Reviewer twUm), “performs better than baseline models such as GPT-4, GPT-3.5, and EduChat” (Reviewer twUm, Reviewer 3PwF), “sufficient experimental results” (Reviewer 3yrk, Reviewer 3PwF), “significant improvement over GPT4” (Reviewer 3yrk), “especially appreciate the ablation study” (Reviewer sW5a). - **Significance**: “resolves the limitation where human-written data in this field is difficult to obtain” (Reviewer twUm), “evaluation system provides an effective way in the related research.” (Reviewer 3yrk), “contribute a student cognitive system” (Reviewer 3PwF), “the first public large-scale Socratic teaching dataset” (Reviewer 3PwF), “beneficial for practical applications” (Reviewer 3PwF), “beneficial for reproducing and assist for building other educational LLMs.” (Reviewer 3PwF), “code and dataset released” (Reviewer sW5a, Reviewer 3PwF). 
**[Response to Ethics Reviewers]** We deeply appreciate your thoughtful consideration of the ethics flag of our paper. Firstly, our paper focuses on introducing a Socratic-style teaching LLM, SocraticLM. To achieve this, we propose a “Dean-Teacher-Student” pipeline to first collect a teaching dialogue dataset, SocraTeach. As we introduce in Section 3, the entire pipeline is implemented without human involvement, as all three agents in the pipeline are simulated using GPT-4. Therefore, from a technical perspective, our work does not raise ethical issues. Secondly, regarding our evaluation system, similar to other works that involve human evaluation of LLM outputs, we invited 10 annotators to anonymously assess and compare the outputs (i.e., teaching instructions) of different LLMs for performance testing. As shown in the annotation template in Appendix F, this process does not involve testing of annotators or research with human subjects. While our paper aimed to present and evaluate a novel personalized teaching LLM, your feedback highlights the necessity of addressing the potential ethical pitfalls. Many thanks for your concerns, and we will revise the paper to incorporate a dedicated subsection that discusses and clarifies our annotation process.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Entity Alignment with Noisy Annotations from Large Language Models
Accept (poster)
Summary: The paper proposes LLM4EA for annotating entity alignment pairs using an LLM. It introduces an active learning policy and an unsupervised label refiner to efficiently collect pseudo-labels. Experiments on OpenEA benchmarks demonstrate the strong performance of LLM4EA. Strengths: The paper is well-structured and straightforward, making it easy to understand the design and motivation behind LLM4EA. Applying an LLM for label annotation is both simple and effective. The results on OpenEA-V2 significantly outperform the baseline methods. Weaknesses: The algorithm in this paper somewhat echoes that of BootEA, which limits its novelty. While earlier iterative or bootstrapping algorithms generated pseudo-labels using cosine similarity or other metrics, this paper introduces the use of an LLM to select pseudo-labels. Beyond this aspect, the procedure closely resembles existing methods. The declaration in Line 237 is incorrect; "V1" is the dataset that resembles real-world KGs, and "V2" is the easier version. Table 1 is not clear. Is LLM4EA the only method evaluated that considers pseudo-labels generated by an LLM? If so, why are the performances of the other methods significantly lower than those reported in the OpenEA paper? Utilizing LLMs to annotate labels carries a risk of data leakage, as many entity name pairs in the OpenEA datasets are identical. This flaw was also noted by the creators of OpenEA. If the authors account for the entity name as a feature, they should also benchmark LLM4EA against approaches that incorporate textual information, for example, MultiKE. Technical Quality: 2 Clarity: 3 Questions for Authors: Please see Weaknesses. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback, which has greatly helped improve the draft. Below, we address your concerns and weaknesses (w1-w4). # [w1] Technical novelty (distinctions from BootEA) We respectfully point out that LLM4EA is significantly distinct from BootEA; we outline three major differences between them: 1. **Pseudo-label generation.** Compared with BootEA, our work generates pseudo-labels by directly processing semantic information, while BootEA generates pseudo-labels based **on similarity scores grounded on embeddings trained on pre-aligned labels**. LLM4EA exploits the potential of automating the EA task in a **label-free** fashion. 2. **Unsupervised label refinement.** Our label refiner is unsupervised, while BootEA relies on the embeddings **learned on training labels** to compute the labels' confidence. As a result, **directly adopting BootEA for this task cannot mitigate the initial noisy annotated pseudo-labels**. As we have shown in Table 1 and discussed in lines 265-268 of the paper draft, BootEA still suffers from false labels if directly trained on the noisy labels from LLMs. 3. **Optimized resource allocation.** We introduce an active search policy to improve the utility of the query budget. # [w2] Results on OpenEA V1 Thank you for the detailed review and helpful feedback; we will correct the false statement about the dataset version in the revision. To validate the performance on real-world KGs, we attach the experimental results on the OpenEA V1 dataset below. Due to the character limit, we only attach results for strong baselines.
| | D_W (w/o ent name) | | | D_Y | | | EN_DE | | | EN_FR | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | hit@1 | hit@10 | mrr | hit@1 | hit@10 | mrr | hit@1 | hit@10 | mrr | hit@1 | hit@10 | mrr |
| BootEA | 0.302 | 0.584 | 0.413 | 0.664 | 0.811 | 0.718 | 0.655 | 0.844 | 0.720 | 0.429 | 0.685 | 0.515 |
| Dual-Amn | 0.386 | 0.639 | 0.479 | 0.748 | 0.858 | 0.790 | 0.728 | 0.910 | 0.793 | 0.537 | 0.800 | 0.628 |
| RDGCN | 0.337 | 0.436 | 0.372 | 0.817 | 0.923 | 0.858 | 0.646 | 0.775 | 0.692 | 0.597 | 0.742 | 0.649 |
| LLM4EA | **0.472** | **0.654** | **0.536** | 0.755 | 0.859 | 0.795 | **0.736** | **0.918** | **0.800** | 0.581 | **0.819** | 0.662 |
| LLM4EA (RDGCN) | 0.421 | 0.532 | 0.476 | **0.842** | **0.931** | **0.883** | 0.672 | 0.787 | 0.736 | **0.642** | 0.761 | **0.683** |

As the results show, our model consistently outperforms the baselines. Note that LLM4EA is a general framework and can employ any base EA model, such as RDGCN (last row), to perform label-free entity alignment. **We kindly argue that our conclusions and claims hold on this more realistic dataset.** We will include the updated results in the draft revision. # [w3] Clarification of the baseline setting Allow us to clarify the confusion about the experimental setting in weakness 3. 1. **The input to the baselines is pseudo-labels.** As stated in the experimental setting (lines 244-245 in the draft), baseline models are also trained on the pseudo-labels generated by the LLMs. This setting ensures a **fair comparison** because the inputs are the same (same query budget to the LLMs). 2. **Baselines perform lower than in OpenEA's paper for two reasons:** **1) the annotated pseudo-labels are noisy** and existing methods cannot handle this without a pre-refinement of noisy labels before training; and **2) the training label size is smaller** ($0.1|\mathcal{E}|$, but in OpenEA's paper it is $0.2|\mathcal{E}|$).
# [w4] Name bias concern The use of name information is often referred to as name bias. We answer this concern from two aspects: 1. **When name information is used**: LLM4EA can generate pseudo-labels and enable robust learning. While most existing methods like MultiKE and RDGCN exploit entity names as features, they <u>rely on pre-aligned pairs</u> for training. LLMs can serve as pseudo-label generators. 2. **When names are not available**: LLM4EA still works. By processing the semantics within attributes, LLM4EA can generate effective labels. We empirically evaluate this on D_W_15K_V1 (**1st column** in the following table), where we masked target entity names with IDs to avoid name bias, and compare with MultiKE and RDGCN, which both leverage semantic features.

| | D_W (w/o ent name) | | | D_Y | | | EN_DE | | | EN_FR | | |
| - | - | - | - | - | - | - | - | - | - | - | - | - |
| | hit@1 | hit@10 | MRR | hit@1 | hit@10 | MRR | hit@1 | hit@10 | MRR | hit@1 | hit@10 | MRR |
| MultiKE | 0.021 | 0.064 | 0.036 | 0.503 | 0.767 | 0.598 | 0.357 | 0.666 | 0.461 | 0.274 | 0.590 | 0.380 |
| RDGCN | 0.337 | 0.436 | 0.372 | 0.817 | 0.923 | 0.858 | 0.646 | 0.775 | 0.692 | 0.597 | 0.742 | 0.649 |
| LLM4EA | **0.472** | **0.654** | **0.536** | 0.755 | 0.859 | 0.795 | **0.736** | **0.918** | **0.800** | 0.581 | **0.819** | 0.662 |
| LLM4EA (RDGCN) | 0.421 | 0.532 | 0.476 | **0.842** | **0.931** | **0.883** | 0.672 | 0.787 | 0.736 | **0.642** | 0.761 | **0.683** |

As shown in the above table:
- LLMs serve as an effective annotator for all methods to perform label-free EA, which justifies the 1st argument above.
- LLMs can generate effective labels without name information, which justifies the 2nd claim.

We hope our responses can satisfactorily address your concerns, and we would appreciate it if you could consider raising the score. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response.
However, I am still confused and curious about why you chose a smaller training label set and why your new experiments still do not follow the standard setting (i.e., 0.2|E|). The OpenEA datasets have a default setting of 0.2 because it is convenient for 5-fold cross-validation. Were your results based on this setting? The pseudo-label setting for baselines is also unfair. Why do the baselines have to use the pseudo-labels generated by the LLM for training? Methods like BootEA have their own iterative process to refine their pseudo-labels. This is also why I think the proposed method lacks novelty. The authors just replaced their decision function with an LLM. The usage of the name feature is still unclear. How can "LLMs generate effective labels without name information, which justifies the 2nd claim"? In Lines 139-142, the authors claim that the annotation filtering is based on string matching score or word embeddings (this also justifies that your method also generates pseudo-labels based on similarity scores grounded on embeddings). If the string or word does not exist, how do you filter out the less likely counterparts? Then, in Lines 147-148, the authors state that the prompts include the name and relational triplets. If the name feature is unavailable, how can you represent these triplets? --- Reply to Comment 1.1.1: Title: Response to reviewer SZjY (1) Comment: We sincerely thank you for engaging in this discussion and allowing us to address any concerns. Below we first restate our setting and workflow in detail to clarify any confusion about our setting, then answer each of your questions. # 1. Settings **1.1 All experiments are under the label-free setting without ground truth labels for training** Our work is motivated to explore label-free entity alignment, aiming to train EA models using **only LLM-annotated pseudo-labels**. 
All experiments, including evaluations of both LLM4EA and baseline methods, are conducted **without ground truth labels for training.** This setting is specifically designed to test whether all methods effectively achieve label-free EA. The baselines assess the effectiveness of directly training existing EA models on these noisy pseudo-labels, whereas LLM4EA is a framework designed to mitigate noise and optimize budget allocation for effective learning of the integrated base EA model. **1.2 LLM4EA works in five steps** LLM4EA operates in a five-step process at each iteration:
Step 1. Select important source entities.
Step 2. Recall top-k candidate counterparts for each source entity by string matching or word-embedding matching.
Step 3. Use an LLM to identify the target entity from the top-k candidates for each selected source entity.
Step 4. Refine the pseudo-labels by probabilistic reasoning.
Step 5. Train the EA model.
We respectfully point out that, although LLM4EA is iterative, this process is not bootstrapping like BootEA. Instead, this iterative manner is introduced to dynamically adjust the source entity selection (Step 1) policy to maximize the utility of a fixed annotation budget. If the budget is $\mathcal{B}$ and the iteration number is 5, then LLM4EA selects and annotates $\mathcal{B}/5$ source entities at each iteration. **If we remove the active selection module in LLM4EA, the LLM generates the seed alignments at once as training data and has no iterations, as in the baselines**. # 2. Answers to the questions We will first list the questions for clarity and then provide corresponding answers, based on the detailed settings outlined above. **2.1 Label ratio and the experimental setting** **Q1.** Why did you choose a smaller training label set, and why do your new experiments still not follow the standard setting (i.e., 0.2|E|)? **Q2.** Were your results based on this setting (5-fold cross-validation)? **Q3.** The pseudo-label setting for baselines is also unfair.
**Q4.** Why do the baselines have to use the pseudo-labels generated by the LLM for training? **Answers:** - **We did not follow the 5-fold setting of OpenEA because we are under a label-free setting.** We understand that the OpenEA dataset by default contains $0.2|\mathcal{E}|$ **ground truth** labels for training, which is designed for the **supervised/semi-supervised setting**. As stated in setting 1.1, our experiments evaluate **label-free** EA with LLMs; the only input to the baselines and LLM4EA is the LLM-annotated pseudo-labels (denoted as $\mathcal{L}_a$). The training labels we referred to in the previous response are actually $\mathcal{L}_a$, rather than the ground truth labels. This clarification of the experimental setting also answers **Q4**. - **The label ratio is determined by the annotation budget.** The input to both the baselines and LLM4EA is denoted as $\mathcal{L}_a$, which is annotated with a budget of $\mathcal{B}=0.1|\mathcal{E}|$. In practice, the size of the pseudo-labels $|\mathcal{L}_a|$ is less than or equal to $0.1|\mathcal{E}|$ because some queries result in negative outputs (i.e., no matched target is found within the top-k), and thus no label is generated. The label size $|\mathcal{L}_a|$ scales linearly with the annotation budget $\mathcal{B}$. More pseudo-labels can be obtained by increasing this budget. We intentionally did not use a larger default budget, to demonstrate how LLM4EA and the baselines perform under challenging cost-constrained settings. If more labels are required, the budget can simply be increased to improve performance, as shown in Figure 2 of our draft. - **Fair comparison via the same input.** As discussed above, both the baselines and LLM4EA use the same input (the same annotation budget to the LLMs to get $\mathcal{L}_a$, and no ground truth labels).
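The five-step iteration described in 1.2 can be sketched as a plain loop. This is a hypothetical illustration, not the authors' code: every helper passed in (entity selection, candidate recall, LLM annotation, label refinement, EA training) is a placeholder for the corresponding LLM4EA component.

```python
def llm4ea_loop(budget, iterations, select_entities, recall_candidates,
                llm_annotate, refine_labels, train_ea_model):
    """Hypothetical sketch of the five-step LLM4EA iteration; all helper
    arguments are placeholders, not the authors' implementation."""
    per_iter = budget // iterations  # spend B/iterations queries per round
    pseudo_labels, model = [], None
    for _ in range(iterations):
        # Step 1: actively select important source entities (the policy may
        # use the current model state to adjust selection between rounds).
        batch = select_entities(per_iter, model)
        for entity in batch:
            # Step 2: recall top-k counterparts via string/embedding matching.
            candidates = recall_candidates(entity)
            # Step 3: the LLM picks the target among the top-k; a negative
            # output (no match found) yields no label for this query.
            target = llm_annotate(entity, candidates)
            if target is not None:
                pseudo_labels.append((entity, target))
        # Step 4: probabilistically refine the accumulated noisy labels.
        refined = refine_labels(pseudo_labels)
        # Step 5: train the base EA model on the refined pseudo-labels.
        model = train_ea_model(refined)
    return model, pseudo_labels
```

Removing the active selection (passing `iterations=1`) reduces this to the one-shot annotation setting used for the baselines.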
Summary: This paper proposes a new setting that uses an LLM to generate entity alignment training pairs, then uses the generated pairs to train a matching model for entity alignment between two KGs. The proposed method is evaluated on the OpenEA dataset and achieves better performance than baseline methods in this setting. Strengths: A new setting for entity alignment. The proposed method achieves better performance than baseline methods on the OpenEA dataset in this setting. The paper is well-written and easy to follow. Weaknesses: 1. The setting is impractical. Is an LLM really necessary for generating training pairs for entity alignment? If you already know the names of entities in two KGs, you can just use string matching / semantic similarity to generate training pairs. This produces much more accurate training pairs than an LLM with "a fixed budget". I don't see the point of using an LLM for generating training pairs for entity alignment. For the mono-lingual datasets, simple string matching can already achieve 100% accuracy, and for the cross-lingual datasets, lots of papers using semantic similarity have proven its effectiveness (https://arxiv.org/abs/2210.10436, https://aclanthology.org/2022.acl-long.405/, https://www.ijcai.org/proceedings/2020/439, https://dl.acm.org/doi/abs/10.1145/3404835.3462870, https://arxiv.org/abs/2203.01044, https://aclanthology.org/2021.emnlp-main.226/). 2. Line 237 "We have chosen "V2" because it more closely resembles existing KGs". This statement is not true. In V2's generation process, they first randomly delete entities with low degrees, which results in a denser graph. This is not the case for existing real-world KGs, which are usually sparse. In their original paper, they said that the V2 benchmark is "more similar to existing datasets" like DBP15k; these datasets are not real-world KGs but synthetic datasets. Thus, the statement is misleading and should be corrected. 3.
Following the above point, the proposed method is evaluated on the OpenEA V2 dataset, which is twice as dense as real-world KGs. The performance on the OpenEA V2 dataset may not be representative of the performance on real-world KGs. This raises the question of the generalization of the proposed method to real-world KGs. 4. Baselines are old, see point 1 for more details. 5. The setting can actually be seen as a special case of the setting in https://dl.acm.org/doi/10.1145/3394486.3403268. The difference is the noise's source: in this paper, the noise comes from the LLM; in the other paper, the noise is manually added. Mitigating noise from the training set/textual labels is not new; examples can be found in point 1's references. 6. As far as I know, the DBpedia, Wikidata, and YAGO datasets are all from the same domain, which gives them identical entity names. Take D_Y_15K_V2 for example: if you calculate the edit distance between the entity names in these datasets, you will find that the same entity in different datasets has exactly 0 edit distance. How exactly is Table 7 obtained? Table 7 shows the performance of semantic similarity/string matching is a lot worse than the proposed method and the F1 can be as low as 2% in some cases. This is not true; the F1 of semantic similarity/string matching should be 100% in the monolingual case. The results in Table 7 are not reliable and should be corrected. I suggest the authors check the correctness of the dataset and the code implementation. Overall, this paper contains interesting but impractical new settings, false statements, and unreliable results. I recommend rejection. Technical Quality: 2 Clarity: 2 Questions for Authors: See above Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: A limitations section is provided in the paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
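The string-identity check described in weakness 6 is easy to reproduce: if two mono-lingual KGs use the identical surface form for the same entity, the Levenshtein (edit) distance between the names is 0, so exact string matching already links them. A minimal sketch of that check (the entity names below are hypothetical examples, not drawn from the dataset):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Identical names across two mono-lingual KGs -> distance 0,
# so exact string matching suffices for those pairs.
print(levenshtein("Albert_Einstein", "Albert_Einstein"))  # 0
```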
Rebuttal 1: Rebuttal: We respectfully thank you for the helpful feedback that helps improve the draft. We answer the concerns and address the weaknesses (w1-w6) below: # [w1, w4] Reasonability of using LLMs in EA and comparison with recent methods We sincerely thank you for pointing out this subtle point, and we answer this question from the following two aspects: 1. **Potential in more challenging scenarios.** Although we perform evaluation in a simple scenario (entity names are available) in the experiments, the potential of using LLMs in EA tasks goes beyond that. Specifically, **language models can annotate challenging datasets where no entity names are available**, by exploiting the semantic information within textual attributes, which **existing name-based methods (e.g., LightEA, SEU, BERT-INT) cannot handle**. To empirically evaluate this, we perform entity alignment on D_W_15K_V1 with LLM4EA, where the target entities are IDs to avoid name bias. Results are as below:

| Alignment performance | Annotation precision |
| -- | - |
| 0.472 (hit@1) | 301 (true positive) |
| 0.654 (hit@10) | 49 (false positive) |
| 0.536 (MRR) | 1150 (abandoned) |

As shown by the results, LLMs can generate effective pseudo-labels without the need for identical names and achieve promising results. 2. **Scalability to larger KGs.** The goal of entity alignment is building a larger unified KG. Although existing finetuning-based models (BERT-INT, SDEA, CEA) can also leverage semantic information, our in-context-learning-based framework 1) does not require pre-aligned labels to finetune; and 2) is more scalable as it does not require local hardware configuration for LLMs. We appreciate the listed methods for boosting the comprehensiveness of the discussion of related work, and we will add these results in the draft revision. # [w2 & w3] Results on OpenEA V1 Thank you for the detailed review; we will correct the false statement about the dataset version in the revision. 
To validate the performance on real-world KGs, we attach the experimental results on the OpenEA V1 dataset below. Due to the character limit, we only show the results of strong & new baselines.

| | D_W (w/o ent name) | | | D_Y | | | EN_DE | | | EN_FR | | |
| -- | - | -- | - | - | - | - | - | -- | -- | --- | - | - |
| | hit@1 | hit@10 | mrr | hit@1 | hit@10 | mrr | hit@1 | hit@10 | mrr | hit@1 | hit@10 | mrr |
| BootEA | 0.302 | 0.584 | 0.413 | 0.664 | 0.811 | 0.718 | 0.655 | 0.844 | 0.720 | 0.429 | 0.685 | 0.515 |
| Dual-Amn | 0.386 | 0.639 | 0.479 | 0.748 | 0.858 | 0.790 | 0.728 | 0.910 | 0.793 | 0.537 | 0.800 | 0.628 |
| RDGCN | 0.337 | 0.436 | 0.372 | 0.817 | 0.923 | 0.858 | 0.646 | 0.775 | 0.692 | 0.597 | 0.742 | 0.649 |
| LLM4EA | **0.472** | **0.654** | **0.536** | 0.755 | 0.859 | 0.795 | **0.736** | **0.918** | **0.800** | 0.581 | **0.819** | 0.662 |
| LLM4EA (RDGCN) | 0.421 | 0.532 | 0.476 | **0.842** | **0.931** | **0.883** | 0.672 | 0.787 | 0.736 | **0.642** | 0.761 | **0.683** |

As shown by the results, LLM4EA consistently outperforms strong baselines. It is noteworthy that LLM4EA is a general framework and can employ any base EA model, such as RDGCN (last row), to perform robust label-free entity alignment with LLMs. We argue that **the conclusions and claims hold on this sparse dataset**, and we will include the updated results in the draft revision. # [w5] Comparison with existing work that addresses noisy labels (REA) The paper referenced in weakness 5 (REA) also addresses noise in the training set. However, our work is significantly distinct from theirs in three key aspects: **1) Technical novelty**. The noise detection model in REA is trained as a binary classifier using an adversarial training paradigm. In contrast, our label refiner employs probabilistic reasoning for robust label refinement. **2) Flexibility.** LLM4EA can leverage any off-the-shelf EA model as a base model without altering the model architecture or training objectives. 
**3) Superior performance**. We empirically tested the performance of REA on annotated pseudo-labels generated by LLMs; the results below show the superiority of LLM4EA.

| | D_W (w/o ent name) | | | D_Y | | | EN_DE | | | EN_FR | | |
| ------ | ---- | ----- | --------- | --------- | ------ | ------ | --------- | --------- | --------- | --------- | --------- | --------- |
| | hit@1 | hit@10 | mrr | hit@1 | hit@10 | mrr | hit@1 | hit@10 | mrr | hit@1 | hit@10 | mrr |
| REA | 0.213 | 0.467 | 0.298 | 0.447 | 0.798 | 0.563 | 0.363 | 0.742 | 0.475 | 0.226 | 0.536 | 0.352 |
| LLM4EA | **0.472** | **0.654** | **0.536** | **0.755** | **0.859** | **0.795** | **0.736** | **0.918** | **0.800** | **0.581** | **0.819** | **0.662** |

# [w6] Low recall/F1 of semantic matching Thank you for the detailed review and constructive suggestions. We identified that the low recall was due to the word embedding model not generating embeddings for out-of-vocabulary names. We addressed this by replacing it with a pretrained language model (bert-base-uncased) and implementing a filtering process that only considers 1-1 matching. The updated results are as follows:

| | Precision | Recall | F1 |
| ----- | --------- | ------ | ----- |
| D-W | 0.918 | 0.624 | 0.743 |
| D-Y | 1.0 | 1.0 | 1.0 |
| EN-DE | 0.897 | 0.695 | 0.783 |
| EN-FR | 0.889 | 0.736 | 0.805 |

We hope our response satisfactorily addresses your concerns, and we would appreciate it if you could consider raising the score. --- Rebuttal 2: Comment: Thank you for your response and your effort to address my concerns. I appreciate your time and effort. I have raised my score to 4. This is because some parts of my concerns aren't fully addressed. Regarding the motivation, I hope you can address these in further responses: 1. The ID experiment is interesting. Please provide more detailed settings for this experiment. For example, how do you ensure that the IDs are not memorized by the LLMs? 
Does the accuracy come from the IDs already being memorized by the LLMs as part of the knowledge base? If so, how can we ensure that the LLMs can identify IDs that are not memorized? Wouldn't this memorization lead to more bias than name-bias issues? If we want to use such systems in applications where the resources are so limited that even names are not available, how can we ensure that the LLMs can still identify the IDs? Could you provide a more detailed experiment on this? For example, reassigning the IDs to the entities in the knowledge base and seeing if the LLMs can still identify the IDs. I assume you may need to design specific prompts, for example, showing the neighborhood IDs, to explore the reasoning ability beyond memorization. 2. Scalability to larger KGs. Do you have experimental results on larger KGs such as DBP1M? Using an API to query may eliminate the need for local machines, but the API is still costly and there are rate limits. Do you have a cost analysis of using the API compared to deploying a BERT model locally? I would assume that the BERT solution is far more cost-effective and scalable, while still maintaining decent accuracy. I would advise the authors to compare with recent SOTAs such as LightEA (https://arxiv.org/pdf/2210.10436), both in terms of accuracy and scalability. Some other works like EASY (https://dl.acm.org/doi/abs/10.1145/3404835.3462870) also have some kind of error correction mechanism. It would be interesting to discuss how your work differs from these works. --- Rebuttal Comment 2.1: Title: Response to answer remaining concerns Comment: Thank you for your prompt reply and support; we answer your remaining concerns one by one below. # 1. Concerns about ID leakage Below we attach the detailed experimental setting and results to answer the concern. **1.1 Experimental setting** - **ID random reassignment for eliminating name bias**. 
When masking entity names with IDs, we randomly reassign new IDs (rather than the original IDs extracted from the Wikidata dump file) to the entities of D-W-15K-V1. - **Attribute-based prompting**. Since the entity names are not available, we describe each entity by its associated attributes in the prompt. Note that most attributes don't contain semantic information; we use a Python regular expression to filter out meaningless attributes. **1.2 Experimental results** Entities are annotated in two steps. - Counterpart selection. Using the attributes, we first employ BERT to generate an embedding for each entity, then get the top-20 counterparts by semantic matching. The recall (hit@k) of this process is:

| Hit@1 | Hit@20 | MRR |
| - | - | - |
| 0.125 | 0.237 | 0.158 |

- Target identification. Given a source entity and its top counterparts, an LLM determines if an aligned target entity exists in these candidates. If the LLM finds no match, the query is discarded. Otherwise, it predicts a positive label (true or false positive). The results below are based on 1500 queries.

| Abandoned | True positive | False positive |
| - | - | - |
| 1150 | 301 | 49 |

# 2. Comparison with BERT models in terms of scalability Below we compare LLM4EA and BERT models in terms of cost scalability and performance scalability. **2.1 Cost scalability** - **LLM4EA's costs scale linearly with KG size.** The costs of LLM4EA mainly come from its queries to LLMs. This cost scales linearly with the KG size. In our experiments, the average token cost for each entity is around 1100. Thus, each experiment on the 15K dataset costs 1100 × (0.1 × 15000) × 0.5 / 10^6 = 0.825 dollars, and each experiment on 100K costs 5.5 dollars, under the pricing scheme of gpt-3.5-turbo-0125. - **Querying LLMs can be accelerated by parallel or batch queries.** Acceleration can be achieved by parallel queries, or by using OpenAI's batch API. Note that LLM response speeds are improving rapidly and we can benefit from these advances. 
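The cost figures above follow directly from the quoted per-entity token count and per-token price; a small sketch of the linear scaling (all constants are the figures stated in this rebuttal, under gpt-3.5-turbo-0125 pricing):

```python
def annotation_cost_usd(num_entities: int, budget_ratio: float = 0.1,
                        tokens_per_entity: int = 1100,
                        usd_per_million_tokens: float = 0.5) -> float:
    """Estimated LLM annotation cost; tokens scale linearly with the
    number of queried entities (budget_ratio * num_entities)."""
    queried_entities = budget_ratio * num_entities
    total_tokens = tokens_per_entity * queried_entities
    return total_tokens * usd_per_million_tokens / 1e6

print(annotation_cost_usd(15_000))   # ~0.825 USD for a 15K dataset
print(annotation_cost_usd(100_000))  # ~5.5 USD for a 100K dataset
```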
**2.2 Performance scalability** Given the hardware demands and the need for pre-aligned pairs in finetuning, an alternative is to use BERT for embedding matching without finetuning. In the previous emb-match experiments, BERT performs well on small datasets with names, but **the following experiments show its precision decreases as KG size increases**.

| DBP1M | Precision | Recall | F1 |
| - | - | - | - |
| $DBP_{EN-DE}$ | 0.492 | 0.679 | 0.571 |
| $DBP_{EN-FR}$ | 0.490 | 0.654 | 0.560 |

| OpenEA V1 (w/o name) | Hit@1 | Hit@20 | MRR |
| - | - | - | - |
| D-W-15K | 0.125 | 0.237 | 0.158 |
| D-W-100K | 0.047 | 0.107 | 0.068 |

- **BERT is less accurate on large KGs**. The DBP1M dataset contains name information, yet the BERT model shows decreased precision. We investigated and identified that **this precision decline is mainly due to the increased number of similar names as the KG size grows**. And as expected, when name information is not available in the D-W dataset (second table above), the precision also decreases. - **LLMs generate more precise labels**. The BERT model can help recall the possible counterparts but fails to generate effective labels; employing an LLM can annotate more precise labels from these recalled counterparts, as discussed and shown by the experiments in **1.2**. # 3. Comparison with related work **3.1 Comparison with LightEA**.

| Results on $DBP_{EN-DE}$ | hit@1 | Hit@10 | mrr |
| - | - | - | - |
| LightEA | 0.055 | 0.089 | 0.067 |
| LLM4EA (LightEA) | 0.099 | 0.152 | 0.117 |

| Results on $DBP_{EN-FR}$ | hit@1 | Hit@10 | mrr |
| - | - | - | - |
| LightEA | 0.034 | 0.066 | 0.045 |
| LLM4EA (LightEA) | 0.044 | 0.086 | 0.059 |

- **LightEA is efficient.** LightEA performs EA with notable efficiency, scaling to DBP1M. - **LightEA can be enhanced by LLM4EA and in return improve the scalability of LLM4EA.** We respectfully point out that LLM4EA is a general framework and can incorporate any EA model as its base model to perform effective learning. In return, LLM4EA stands to benefit from advancements in efficient base EA models. **3.2 Comparison with EASY**. 
- **EASY only considers one-hop structure while LLM4EA leverages higher-order structures.** EASY generates confident pseudo-labels by selecting entity pairs with high (one-hop) neighborhood similarities. In contrast, LLM4EA incorporates iterative reasoning that **implicitly performs multi-hop reasoning** through logical deduction. Real-world KGs can be sparse and contain many long-tail entities that have no aligned neighbors. - **EASY neglects relational properties, while LLM4EA leverages them**. Statistical properties of relations, such as functionalities, are crucial in quantifying the contributions of neighbors. EASY simply relies on neighbor counts and overlooks these properties.
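The hit@k and MRR numbers reported in the tables of this thread can be computed from the (1-indexed) rank at which each source entity's true counterpart appears in the ranked candidate list; a minimal sketch of the standard metric definitions:

```python
def ranking_metrics(ranks, ks=(1, 10)):
    """Hit@k = fraction of queries whose true counterpart is ranked <= k;
    MRR = mean reciprocal rank. `ranks` are 1-indexed ranks of the true
    counterpart, one per source entity."""
    n = len(ranks)
    hits = {k: sum(r <= k for r in ranks) / n for k in ks}
    mrr = sum(1.0 / r for r in ranks) / n
    return hits, mrr

# Three queries whose true counterparts are ranked 1st, 3rd, and 12th:
hits, mrr = ranking_metrics([1, 3, 12])
```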
Summary: This paper tackles the challenge of entity alignment (EA) in merging knowledge graphs (KGs) by leveraging Large Language Models (LLMs) to automate annotations, addressing the costly and impractical reliance on human-generated labels despite difficulties like a large annotation space and noisy labels. They propose LLM4EA, a unified framework designed to harness LLMs effectively for EA tasks. Key contributions include a novel active learning policy that reduces the annotation space by prioritizing the most valuable entities based on the overall inter-KG and intra-KG structure, and an unsupervised label refiner that enhances label accuracy through probabilistic reasoning. The framework iteratively optimizes its policy using feedback from a base EA model. Experimental results on four benchmark datasets highlight the framework's effectiveness, robustness, and efficiency, showcasing its potential as a significant advancement in the field of entity alignment. Strengths: - This paper proposes an iterative annotation framework leveraging the zero-shot in-context learning (ICL) capability of LLMs. The proposed framework, using an iterative refinement strategy, can effectively reduce costs by utilizing cheaper LLMs such as GPT-3.5. - The experimental design of the paper is reasonable and sufficiently demonstrates the effectiveness of the proposed framework. - This paper is well-written and easy to follow. Weaknesses: - The authors do not consider any open-source LLMs such as Llama2/3. For example, Llama3 8b/70b may have better performance in EA tasks compared to GPT-3.5. - The authors did not report the actual API costs for the experiments. For the baselines, what budgets were used to annotate the dataset to train these models? Are they the same as those used in your iterative framework? - Figure 1: Replacing the human icon with a robot for the LLM annotator may be better. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses. 
Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper has one limitation section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful feedback that helps us improve the draft. We answer the concerns and address the three weaknesses (w1, w2, w3) in the following. **[w1] Ablation of the LLM model** The choice of LLM can affect annotation accuracy and cost. We recognized this factor and employed both GPT-3.5 and GPT-4 as annotators, comparing the differences (Table 1 & Figure 2 in the draft) between less powerful and more powerful LLMs. Thank you for the suggestion; we will include experimental results of additional LLMs, especially open-source ones, to complement the empirical analysis. **[w2] Cost measurement** As we have stated in the experimental setting (lines 244-245), the baselines and our framework use the same budget (line 239), which is $0.1\times|\mathcal{E}|$. To ensure a fair comparison, we use the same prompt template for all experiments. This also ensures that the actual API cost for the baselines and our framework is statistically the same (although with a slight variance brought by the different lengths of entity names). Based on our experimental observations, the average API cost for annotating each entity is around 1100 tokens; thus it takes around 1100 × (0.1 × 15000) × 0.5 / 10^6 = 0.825 dollars to run each alignment task using gpt-3.5-turbo-0125. **[w3] Improvement of presentation.** Thank you for the detailed review; we will replace the icon in Figure 1 in the draft revision. --- Rebuttal Comment 1.1: Title: To Authors Comment: Thanks for your responses. Your answers address most of my concerns. I think my rating is reasonable and fair. So I maintain my scores.
Summary: The paper proposes an active learning/weak supervision based approach for knowledge base alignment (aligning entities across KGs). The paper explores an LLM labeler for generating entity alignment labels, together with active selection of source nodes labeled by the LLM, a label refiner which refines the LLM annotations based on the structural consistency of the KGs (nodes connected to aligned nodes should be aligned), and finally trains an entity alignment model on the gathered data. The paper compares to a range of baselines and presents several ablations to indicate the efficacy of the proposed approach. Strengths: - The paper proposes an interesting approach which seems novel and applied to a meaningful problem. - The paper's experiments are thorough. Weaknesses: - The paper's writing is hard to follow and could benefit from a reduction in notation, the addition of intuition for modeling choices, and clearer distinguishing of its own contributions wrt prior work. Technical Quality: 3 Clarity: 2 Questions for Authors: NA Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our work. We appreciate your insights and will address these points. Below is a summary of the content of our work. - **Motivations.** Our work is motivated by the fact that existing methods heavily rely on accurate seed alignment pairs for training. Annotating such pairs requires substantial cross-domain knowledge, making it very costly. Large Language Models (LLMs) offer new opportunities in automating this task by processing semantic information and generating pseudo-labels. - **Challenges.** Employing LLMs for this task is nontrivial because the search space is vast, and LLMs can generate false labels, which can harm the final performance. There is no existing framework to handle these challenges effectively. - **Contributions**. As itemized at the end of the introduction section (lines 69-84), our contributions include 1) A novel LLM-based in-context learning framework for label-free entity alignment; 2) An unsupervised label refiner to enable effective training on noisy pseudo-labels; and 3) An active sampling module to maximize the utility of the annotation budget. We sincerely thank you for your recognition and helpful feedback. We will further polish the paper presentation based on your comments in the revision.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity
Accept (oral)
Summary: This work investigates representation learning for decision-making in episodic MDPs. They consider a problem setting in which the goal is to learn a good policy through interactive decision-making. The task involves learning a good "decoder", i.e., a function that maps the observations to "state" representations. They define a notion of "statistical modularity" in this problem, which means that there exists an algorithm that can learn the optimal policy (with $\epsilon$ error) with high probability with a number of episodes that is polynomial in the base MDP and the capacity of the decoder function class. They then prove an impossibility result regarding statistical modularity in this problem in general and prove statistical modularity in MDPs under some condition. They also define a notion of "algorithmic modularity" by introducing an algorithm in a hindsight observability setting in which one can use any decoder of interest and any standard episodic MDP algorithm for decision-making; they prove a regret bound in terms of the quality of the base MDP algorithm and the quality of the decoder. Strengths: - I thought the problem the authors worked on was interesting and meaningful. Overall the idea of introducing the concepts of statistical and algorithmic modularity was interesting. - The authors provide many theoretical results, and the result regarding algorithmic modularity in 4.1 in particular seems interesting, intuitive, and rather elegant. Weaknesses: - I found the notation and terminology used in this paper to be very dense. (I put specific questions / notes about this in the next section). I think I would prefer that the authors have fewer theoretical results, but more thorough discussion of the results. (E.g., the self-predictive estimation idea seems interesting, but is literally 1 paragraph in the paper.) 
Technical Quality: 4 Clarity: 2 Questions for Authors: - Could you clarify if you are using the term "latent states" in the sense of partially observed MDPs. Or if you really mean the state is "latent" in the sense that the best representation of the state is unknown? If this is the case, it seems like this is more a representation learning problem rather than a "latent state" problem. Could you add further discussion of how your formulation relates to POMDPs? - It would be helpful to provide an intuitive definition of decoder earlier in the paper, as it is used in the intro without much context. Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 4 Limitations: - There is no empirical evaluation of the algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our paper and for their positive review. We address questions/weaknesses below. > I found the notation and terminology used in this paper to be very dense. [...] I think I would prefer that the authors have fewer theoretical results, but more thorough discussion of the results. (E.g., the self-predictive estimation idea seems interesting, but is literally 1 paragraph in the paper.) We apologize that the reviewer found the paper to be dense, and will strive to revise the text to improve readability. Towards this, we will happily accept any specific recommendations that the reviewer has for which content should be emphasized. In addition, we are happy to use the extra page available for the camera-ready version to expand the discussion around topics like self-predictive estimation. > Could you clarify if you are using the term "latent states" in the sense of partially observed MDPs. Or if you really mean the state is "latent" in the sense that the best representation of the state is unknown? If this is the case, it seems like this is more a representation learning problem rather than a "latent state" problem. Could you add further discussion of how your formulation relates to POMDPs? It is best to think of our use of the term "latent state" in the sense of the second definition you mention (finding the best representation, which remains unknown). However, our use of the term "latent state" is consistent with both of the definitions you mention—in fact, they are the same under the decodability assumption we consider. In detail, our problem formulation studies a restricted class of POMDPs where the emission processes are assumed to be decodable (Definitions 2.1 and 2.2). This means that the dynamics are governed by the latent state (which is unobserved, as in POMDPs), but it also means that there exists a representation which can decode the unknown latent state. 
The decodability assumption also removes any partial observability issues. Thus, we are in the representation learning problem, where the aim is to recover the underlying latent state (of course, as we discuss in the paper, representation learning and exploration must be interleaved in our setting). We are happy to add more discussion to emphasize how our formulation relates to POMDPs, and we thank the reviewer for this suggestion. > It would be helpful to provide an intuitive definition of decoder earlier in the paper, as it is used in the intro without much context. We agree that this would be helpful, and thank the reviewer for the suggestion. We will revise the introduction to include a more intuitive explanation. --- Rebuttal 2: Comment: Thank you for your comments. Your proposed writing revisions sound good and I think they would improve the presentation!
Summary: This paper provides a theoretical analysis of statistical and algorithmic modularity for RL with latent dynamics. Specifically, it offers conditions and theoretical analysis under which RL with latents is tractable. For statistical modularity, both lower and upper bounds are presented. For algorithmic modularity, observation-to-latent reductions are analyzed under two conditions: hindsight observability and self-predictive estimation. Overall, the theory and proofs are technically solid, addressing a critical problem in RL, especially in scenarios where only high-dimensional pixels are observed. Although I am not an expert in RL theory (my focus is more on algorithms and applications), I would give an acceptance rating for this initial review and will be engaged in the discussion. Strengths: - [**Motivation and Significance**]: The problem of learning from observation for RL is important, and this paper provides fundamental theory on this topic. The statistical and algorithmic guarantees are critical contributions to the field. The theoretical findings, particularly on algorithmic modularity, have the potential to encourage more empirical work on efficiently identifying self-predictive latent states that facilitate RL policy learning. - [**Technical Soundness**]: Although I am not an expert in RL theory, I reviewed the main paper thoroughly and found the theoretical foundations and proofs to be solid. - [**Presentation**]: The presentation is clear and accessible, even for readers outside the theory domain. Weaknesses: Since I am not an expert on RL theory, I have listed most of my questions in this section. The major question from an empirical point of view is how to leverage some of these theoretical results to enhance RL learning from latent dynamics. Q1: The authors mentioned that block MDP or factored MDP would be a special case of this general framework. 
Suppose we narrow down the problem to block MDP or factored MDP, can the statistical or algorithmic modularity be easier to achieve? Q2: Similar to the previous question, what kind of structure (e.g., symmetry, disentanglement) or distribution assumptions in the latent space could mostly benefit the current theoretical framework? Technical Quality: 3 Clarity: 3 Questions for Authors: I listed my questions in the above section. Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations and discussions are given in the paper. As this is a theoretical work, I do not think it will pose any negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review and their thoughtful questions. We address each of the individual questions below. > The authors mentioned that block MDP or factored MDP would be a special case of this general framework. Suppose we narrow down the problem to block MDP or factored MDP, can the statistical or algorithmic modularity be easier to achieve? Modularity is indeed easier to achieve for Block MDPs (note that Block MDPs correspond to the case where $\mathcal{M}_{\mathrm{lat}}$ is tabular, and we indicate that this setting is modular in Figure 1). In particular, for statistical modularity, there are many prior algorithms which achieve the desired sample complexity of $\mathrm{poly}(S,A,H,\log\Phi)$ [Zhang et al., 2022, Mhammedi et al., 2023], which is statistically modular by our definition. Regarding factored MDPs, statistical modularity can be achieved under additional assumptions on the emission process [Misra et al., 21], but the general case remains an interesting open question. As for algorithmic modularity, no prior works had studied this desiderata. However our reduction based on self-predictive representation learning (Theorem A.1) can be applied in the tabular (Block MDP) setting to achieve algorithmic modularity, as all the assumptions required by the self-predictive representation learning oracle are satisfied when the latent state space is tabular. > Similar to the previous question, what kind of structure (e.g., symmetry, disentanglement) or distribution assumptions in the latent space could mostly benefit the current theoretical framework? We agree that this is an interesting question, and have tackled it in the paper – for example, we have identified that latent pushforward coverability is a general structural condition on the latent space which allows for statistical and algorithmic modularity (this subsumes, for example, the block MDP and latent low-rank MDP results). 
However, we do not yet have a complete picture of which latent structures or additional parameters are necessary and sufficient, and have posed this as an open question in the conclusion of the paper. We view the introduction of this question, along with partial steps towards addressing it, as one of our main contributions. **References** 1. Zhang X, Song Y, Uehara M, Wang M, Agarwal A, Sun W. Efficient reinforcement learning in block mdps: A model-free representation learning approach. InInternational Conference on Machine Learning 2022 Jun 28 (pp. 26517-26547). PMLR. 2. Mhammedi Z, Foster DJ, Rakhlin A. Representation learning with multi-step inverse kinematics: An efficient and optimal approach to rich-observation rl. InInternational Conference on Machine Learning 2023 Jul 3 (pp. 24659-24700). PMLR. 3. Misra D, Liu Q, Jin C, Langford J. Provable rich observation reinforcement learning with combinatorial latent states. InInternational Conference on Learning Representations 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. My concerns have been well addressed and I would keep my rating.
Summary: This paper considers theoretical aspects of reinforcement learning in a certain class of MDPs whose observations are governed by a separate, potentially smaller, MDP. They formalize this class of MDPs and denote them latent MDPs. The authors then consider when such MDPs are statistically learnable, beginning with a negative result: they show that in general, even with known latent dynamics, statistical modularity is impossible. They then highlight that statistical modularity in this setting is in some sense distinct from previous works which assume regularity in the value function, and mention that this is because such structure might be useless without a good learnt representation. The authors then go through a laundry list of MDP formalisms in previous work, and provide for most a result on whether or not they are statistically modular. They finally consider algorithmic results, and introduce a 'meta-algorithm' which balances representation learning and RL (where the underlying RL algorithm is arbitrary). Under some additional assumptions, they prove that the additional representation learning adds sublinear risk.

Strengths:
- I believe that this is an important step in bridging RL theory and the issues of RL in practice.
- Balancing representation learning and standard RL learning is an important issue many RL practitioners need to balance. This work paves the way for theoretically-guided answers to those questions.
- I found it rather interesting how the authors demonstrated that much of the structure used in previous work is not amenable to this setting, and that in those cases statistical modularity is not possible.

Weaknesses:
- Assuming that latent states can be uniquely decoded from the observations is a rather strong assumption.
- There are no experiments in the paper. Of course the contribution of this work is theoretical, but theoretical work can still benefit greatly from some simple experiments which illustrate results in their paper.
In particular, doing this allows some readers to better understand the result, and importantly, shows that the results obtained (which are often under unrealistic assumptions) do not break down in practice. Technical Quality: 4 Clarity: 3 Questions for Authors: - Do you believe there are any toy experiments which can be done to illustrate any of your results? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: They discuss avenues for future work, and are clear on the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review and their helpful comments! Please see responses to individual questions below.

> Assuming that latent states can be uniquely decoded from the observations is a rather strong assumption.

It is true that this imposes stronger assumptions on the observation-space MDP (i.e., there is no partial observability). However, this assumption is well-established in the line of research on Block MDPs and RL with rich observations, and permits the design of computationally/statistically efficient algorithms in various cases (e.g. tabular latent MDPs), whereas the analogous POMDP setting would otherwise be intractable (see, e.g., the lower bound in [Krishnamurthy et al., 2016]). Our work addresses the question of generalizing the aforementioned positive results to general latent dynamics, which had remained largely unaddressed, and one of our main contributions is to show that, despite the seemingly nice structure of decodability, strong *negative results* are present. These also imply negative results for the setting *without* decodability. Thus, for many interesting classes of latent dynamics, one cannot hope to remove even this decodability assumption (without placing alternative assumptions). Nonetheless, we hope that by addressing the decodable setting, our work can serve as a starting point toward building a similar understanding for partially observed settings.

> There are no experiments in the paper. Of course the contribution of this work is theoretical, but theoretical work can still benefit greatly from some simple experiments which illustrate results in their paper. [...] Do you believe there are any toy experiments which can be done to illustrate any of your results?

We acknowledge that experiments are an important next step for our results. However, let us emphasize that we believe our theoretical contributions alone are sufficient for publication, and stand on their own merits.
Indeed, as is typically the case with theoretically motivated algorithms, developing practical implementations will require non-trivial adaptations and significant implementation effort; given the scope of our theoretical results, we believe it is appropriate to leave a full-scale empirical evaluation for future work.

Regarding toy experiments: a classical toy experiment considered in prior works (for the latent tabular setting) is the “diabolical combination lock” [Misra et al. ‘20, Zhang et al. ‘22, Mhammedi et al. ‘23], which consists of a small latent combination lock with very high-dimensional observations and which traditional deep RL algorithms fail to solve. For future experiments, since our representation learning oracle allows for sample-efficiency under much more complicated latent dynamics (beyond tabular), it would be interesting to design and test our algorithms on a more complicated version of this domain, which would be unsolvable by both deep RL algorithms as well as prior theoretical latent-dynamics algorithms.

**References**
1. Krishnamurthy A, Agarwal A, Langford J. PAC reinforcement learning with rich observations. Advances in Neural Information Processing Systems. 2016;29.
2. Misra D, Henaff M, Krishnamurthy A, Langford J. Kinematic state abstraction and provably efficient rich-observation reinforcement learning. In International Conference on Machine Learning 2020 Nov 21 (pp. 6961-6971). PMLR.
3. Zhang X, Song Y, Uehara M, Wang M, Agarwal A, Sun W. Efficient reinforcement learning in block MDPs: A model-free representation learning approach. In International Conference on Machine Learning 2022 Jun 28 (pp. 26517-26547). PMLR.
4. Mhammedi Z, Foster DJ, Rakhlin A. Representation learning with multi-step inverse kinematics: An efficient and optimal approach to rich-observation RL. In International Conference on Machine Learning 2023 Jul 3 (pp. 24659-24700). PMLR.
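The “diabolical combination lock” mentioned above can be sketched as a tiny environment. This is only a minimal illustration: the noise model, observation dimension, and reward placement below are assumptions for exposition, not the exact construction of Misra et al.

```python
import random

class CombinationLock:
    """Toy sketch of a combination-lock MDP: at each of H steps exactly one
    action keeps the agent on the rewarding chain; observations are noisy
    high-dimensional encodings from which the latent state is still decodable."""

    def __init__(self, horizon=10, obs_dim=64, seed=0):
        self._rng = random.Random(seed)
        self.horizon = horizon
        self.obs_dim = obs_dim
        # Hidden "code": the unique good action at each step.
        self.good_action = [self._rng.randrange(2) for _ in range(horizon)]

    def reset(self):
        self.h, self.on_chain = 0, True
        return self._obs()

    def step(self, action):
        if self.on_chain and action != self.good_action[self.h]:
            self.on_chain = False  # fell off: the reward is no longer reachable
        self.h += 1
        done = self.h == self.horizon
        reward = 1.0 if (done and self.on_chain) else 0.0
        return self._obs(), reward, done

    def _obs(self):
        # Signed one-hot of the latent state plus Gaussian noise:
        # high-dimensional, but the latent state remains decodable.
        base = [0.0] * self.obs_dim
        base[self.h % self.obs_dim] = 1.0 if self.on_chain else -1.0
        return [x + self._rng.gauss(0.0, 0.1) for x in base]

env = CombinationLock(horizon=10)
obs = env.reset()
# A uniformly random policy reaches the reward with probability 2**-10,
# which is why naive exploration fails on this family of problems.
```

The exploration difficulty scales exponentially in the horizon, matching the rebuttal's point that traditional deep RL algorithms fail here while representation-learning-based methods can succeed.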
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
einspace: Searching for Neural Architectures from Fundamental Operations
Accept (poster)
Summary: Neural Architecture Search (NAS) often produces incremental improvements due to limited diversity in traditional search spaces. To address this, the paper introduces "einspace," a versatile search space built from a probabilistic context-free grammar (CFG). Einspace supports a wide range of architectures and operations, enabling the modelling of convolutions and attention mechanisms. Experiments on the Unseen NAS datasets show that einspace can discover novel architectures both from scratch and with grammars containing different foundation model architectures.

Strengths:
1. The paper is written very clearly and examples of CFGs aid understanding of the paper.
2. The authors release code which improves the reproducibility of the work and potential future work.
3. While I do think that this paper opens up new opportunities and challenges for scaling NAS from scratch to larger spaces and datasets, I have questions (check questions) for the authors to improve the quality of their evaluations and make a stronger case for the practical applicability of einspace.

Weaknesses:
1. **Contribution**: Currently I fail to see the main contribution of the paper to the area of "searching architectures from scratch" [1]. I find the representation properties of the search space to be the same as [1] (except the use of probabilistic CFGs). Moreover, the proposed search schemes, random and evolutionary search, are not very sample-efficient in such large spaces compared to [1], which uses Bayesian optimization with graph kernels.
2. **Evaluation**: The paper does not compare to natural baselines like [1]. Moreover, I find the evaluation to be quite limited in terms of the datasets evaluated and the base models finetuned/evaluated on.

[1] Schrodi, S., Stoll, D., Ru, B., Sukthanker, R.S., Brox, T. and Hutter, F., 2022. Towards discovering neural architectures from scratch.
Technical Quality: 3 Clarity: 3 Questions for Authors: Questions:
1. The paper claims that it is the first one to represent foundation model architectures with a NAS search space. However, [1] already represented a language model using CFGs. Also, [1] can potentially represent any architecture (ViT, MLP-Mixer). Is my understanding correct that this paper introduces a PCFG on [1] to limit the search space size? I am also currently lacking a CFG which is a union of multiple foundation model architectures in a single grammar. Could the authors provide an example for this?
2. The paper uses random search and evolutionary search, both of which are sample-inefficient and do not scale well with the size of the dataset and search space. How do more sample-efficient Bayesian optimization algorithms perform here?
3. Currently the standard for small unseen datasets is finetuning a pretrained model on the smaller downstream task. Could the authors compare to, e.g., simply finetuning a pretrained vision transformer or EfficientNet on the unseen dataset?
4. Evaluation on larger datasets is missing. Architectures which are able to exploit dataset patterns change depending on the scale of datasets available (e.g., vision transformers [2]). Are the architectures discovered significantly different from the existing ones even when the dataset is scaled up? How efficient is the search on this scale?
5. Could you present parameter and FLOPs counts in Table 1?
6. How is the branching rate hyperparameter set for a newer search space?

[1] Schrodi, S., Stoll, D., Ru, B., Sukthanker, R.S., Brox, T. and Hutter, F., 2022. Towards discovering neural architectures from scratch.
[2] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C. and Dosovitskiy, A., 2021. Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems, 34, pp.12116-12128.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: I do think that the paper opens up newer avenues of research in neural architecture search, however, given the limited evaluation I do have concerns about the applicability of the method. Check weakness and questions for details on the limitations. I am happy to raise my score if each of my concerns are appropriately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback, and respond to each point below.

_Contribution:_ There are three key differences between einspace and the CFG-based spaces of Schrodi et al. [1].
- Our space unifies multiple architectural families (ConvNets, Transformers, MLP-only) into one single expressive space, while [1] present variations of their spaces centred around ConvNets only, with a separate instantiation focusing only on Transformers.
- einspace extends to probabilistic CFGs. This constitutes a significant contribution by enabling a set of benefits that include (i) allowing experts to define priors on the search space via probabilities, and (ii) enabling a broader range of search algorithms that incorporate uncertainty estimates inside the search itself.
- einspace contains recursive production rules (e.g. M -> MM), meaning the same rule can be expanded over and over again, providing very high flexibility in macro structures. [1] instead focuses on fixed hierarchical levels that limit the macro structure to a predefined (though very large) set of choices.

We will ensure that these differences are better highlighted in the manuscript.

_Comparing to baseline search space:_ We appreciate the reviewer's concern and provide a comparison with the baseline of [1]. Table A presents results comparing einspace to the CFG-based hNASBench201 from Schrodi et al. This allows for a fairer and more direct comparison of the search spaces. These results show how einspace compares favourably under the same evolutionary search. Overall, we highlight that our search results on einspace are competitive, even with far weaker priors on the search space.

Table A: Comparing einspace to hNASBench201 (from Schrodi et al.
[1])

| | RE (hNASBench201) | RE (Mix) (einspace) |
|-|-|-|
| AddNIST | 93.82 | 97.72 |
| Language | 92.43 | 97.92 |
| MultNIST | 93.44 | 92.25 |
| CIFARTile | 58.31 | 62.76 |

_Foundation model architectures:_ We apologise for any confusion caused, as we do not wish to claim that we are the first to represent foundation models within a NAS search space. We claim that we are the first to represent multiple architectural families (specifically ConvNets, Transformers, MLP-only) in a unified search space. The spaces presented in [1] focus on either ConvNets or Transformers, but not both in a unified space. If there is a particular wording that caused this confusion, we are happy to rephrase it to be more clear.

_PCFG extension:_ See the previous answer above.

_Unified grammar:_ We apologise for any confusion here too. We only present one grammar in the paper, and it is shown in Section 3.3. This grammar is expressive enough to be the union of multiple foundation model architectures in a single grammar. All experiments and all figures in the paper are from architectures generated by that grammar. Please let us know if any specific wording caused this confusion, and we will be happy to improve it.

_Sample efficiency and BO:_ Due to the large set of experimental requests and queries suggested by our six reviewers, we unfortunately did not have enough time nor spare compute to explore sample-efficient BO during the time-limited rebuttal period. We conjecture that such sample-efficiency strategies may combine well with our large search space, and this provides an exciting future direction.

_Finetuning pretrained models:_ We thank the reviewer for their suggestion, and agree that this is an interesting comparison. Below we present results for the finetuning of the ResNet18 and EfficientNet-B0 architectures on the Unseen NAS datasets.
The results show that we can often get a significant boost from finetuning, but for datasets that differ too much from the pretraining task (ImageNet), such as Language and Gutenberg, there is actually a degradation in performance. These are in fact the datasets where we see some of the biggest improvements from using einspace, and it highlights the expressiveness of our new search space.

Table B: Finetuning pretrained models

| | RN18 | FT(RN18) | FT(EfficientNetB0) |
|-|-|-|-|
| AddNIST | 93.36 | 94.69 | 94.77 |
| Language | 92.16 | 90.31 | 90.62 |
| MultNIST | 91.36 | 91.12 | 91.90 |
| CIFARTile | 47.13 | 52.26 | 79.32 |
| Gutenberg | 43.32 | 42.52 | 42.08 |
| Isabella | 63.65 | 62.35 | 67.46 |
| GeoClassing | 90.08 | 90.70 | 95.81 |
| Chesseract | 59.35 | 61.29 | 62.24 |

_Larger datasets:_ In the rebuttal we present an array of additional experimental results. Our updated evaluation now covers 16 different datasets, with sizes ranging from thousands of data points to over a million ('Satellite', NB360), and spatial resolutions of up to 256x256 ('Cosmic', NB360). Unfortunately we did not have enough time nor spare compute to explore e.g. ImageNet, but we believe our evaluation is now broad enough to highlight the expressiveness and utility of einspace.

_Parameter counts and FLOPs:_ Thank you for this suggestion; parameter counts are now presented in Table C.

Table C: Parameter counts

| | AddNIST | Language | MultNIST | CIFARTile | Gutenberg | Isabella | GeoClassing | Chesseract |
|-|-|-|-|-|-|-|-|-|
| DrNAS | 4M | 4M | 5M | 3M | 3M | 4M | 4M | 4M |
| PC-DARTS | 3M | - | 3M | 3M | 2M | 2M | - | 2M |
| RE(Mix) | 20M | 1M | 25M | 5M | 1M | 5M | 4M | 11M |

_Setting branching rate hyperparameter:_ We assume the reviewer is asking "how is the branching rate hyperparameter set for a new task?". In this case, we can confirm that the branching rate is set only once, based on the theoretical guidance of Section 3.7. We highlight that this hyperparameter remains constant across *all* of our experiments and tasks. We believe this evidences the generalisability of our search space, and its ease of use.
If we have misunderstood the query, we would ask the reviewer to further clarify on this point. Thank you.

We thank the reviewer for detailed comments that help us improve our manuscript. We have provided additional experiments that significantly extend our evaluation. We hope we have addressed all fundamental questions raised and, in light of our clarifications, we ask that the reviewer considers increasing their score.

---

Rebuttal 2: Title: Response to rebuttal Comment: I appreciate the efforts of the authors in addressing my questions. I am increasing my score to 4. That said, I am still concerned about the parameter sizes of architectures discovered by einspace in most cases being much larger than the architectures being compared with.

---

Rebuttal Comment 2.1: Title: Response to reviewer Comment: We thank the reviewer for their response and updated scores. We appreciate that there may be concern that at times our method finds architectures with higher parameter counts. However, we argue that this shows the flexibility of the search space in adapting to the difficulty of the given task. On three datasets (AddNIST, MultNIST and Chesseract) the parameter count is significantly increased, but in other cases it is significantly reduced (Language and Gutenberg). The increases could be easily controlled either by rejecting architecture candidates that are too big, or by including an efficiency component in the optimisation objective.
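The recursive production rules and branching-rate control discussed in this thread can be illustrated with a toy sampler. The grammar, probabilities, and operation names below are illustrative assumptions, not the einspace grammar itself.

```python
import random

# Toy probabilistic CFG with a recursive rule M -> M M, loosely mirroring the
# rebuttal's description. Keeping the recursive rule's probability low enough
# (here, expected branching 2 * 0.4 = 0.8 < 1) makes expansion terminate with
# probability 1 and keeps the expected architecture size finite.
RULES = {
    "M": [
        (("M", "M"), 0.4),   # recursive: sequential composition of two modules
        (("conv",), 0.3),    # terminal operations (illustrative names)
        (("attn",), 0.2),
        (("mlp",), 0.1),
    ],
}

def sample(symbol, rng, depth=0, max_depth=50):
    """Expand `symbol` by sampling productions; returns a list of terminal ops."""
    if symbol not in RULES:
        return [symbol]
    if depth >= max_depth:  # safety cut-off for rare very deep derivations
        return ["conv"]
    bodies, weights = zip(*RULES[symbol])
    body = rng.choices(bodies, weights=weights, k=1)[0]
    ops = []
    for s in body:
        ops.extend(sample(s, rng, depth + 1, max_depth))
    return ops

rng = random.Random(0)
arch = sample("M", rng)  # one sampled "architecture": a list of operation names
```

Raising the probability of the recursive rule raises the expected branching rate and hence the expected architecture size, which is the control knob the rebuttal describes as being set once via the theoretical guidance of Section 3.7.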
Summary: This paper proposes a new type of neural architecture search (NAS) search space that goes beyond standard, small NAS search spaces. Most popular search spaces in NAS are quite small, and include architectures with known motifs because they are built around standard architectures. The authors point out that this is why NAS has failed to produce fundamentally new architectures, and instead, that most advancements in neural network architectures come from expert-driven hand-design. Their search space, einspace, is a probabilistic context-free grammar (CFG) of fundamental operations.

Strengths:
- The paper is well-written, well-motivated, and easy to follow.
- The paper is clear in that its innovation is related to search space design, with many suggestions for possible search algorithms. The search space itself is quite impressive, and includes both convolutions as well as self-attention operations (built up from more fundamental building blocks).
- The results of search seem impressive -- while einspace does not always yield stronger architectures compared to other methods or hand-designed baselines, it does sometimes yield significantly better architectures. Furthermore, the comparison to the random search baseline is appreciated.
- The search space supports initialization using existing architectures -- this is important for setting priors for problems in which good architectures are already known.

Weaknesses:
- The resulting architectures in Table 1 do not always outperform SOTA architectures, but when they do, the improvement can be significant. On the other hand, there does not seem to be a comparison of the computational costs involved -- this would be helpful to include, especially as the search space is quite large.
- The authors include results on NAS-Bench-360, which is great; however, the results are incomplete and it is unclear why the authors only evaluate on 5 out of the 10 tasks.
This seems particularly important as the motivation of einspace is similar to the line of work related to NAS-Bench-360.
- In order to make search tractable, it seems like many assumptions are made on the priors on the search space. This makes sense; however, it would be interesting to explore variations on these choices more in-depth.

If the authors adequately address these concerns, I will gladly raise my score.

Technical Quality: 3 Clarity: 3 Questions for Authors:
- Out of curiosity, what do the authors view as the key reason for einspace not being able to express RNNs and SSMs? This is a valid design decision, but it also seems as though it should be fairly straightforward to design a version of einspace that supports recurrence (I could be mistaken about this).
- Why did the authors choose the 5 NAS-Bench-360 tasks that were used in their evaluation? Does the method work on the other tasks?

Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors clearly state the limitations, and even discuss avenues for future work. This includes extending the search space to handle recurrent architectures and state space models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback, and respond to each point below.

_Computational costs:_ We thank the reviewer for the useful suggestion. In the table below we include search time results for the NAS methods DrNAS, PC-DARTS and RE(Mix) that were originally listed in Tab. 1. Note that two numbers are missing due to missing logs from the authors of [2]. We see that, as expected, the gradient-based DrNAS and PC-DARTS are significantly faster compared to the black-box RE(Mix), which trains 500 networks independently. We update our manuscript to report these results and add some further discussion on the tradeoff between diversity and time complexity in light of these observations, towards providing the reader with additional insight into the related issues.

Table D: Time consumption (in hours)

| | AddNIST | Language | MultNIST | CIFARTile | Gutenberg | Isabella | GeoClassing | Chesseract |
|-|-|-|-|-|-|-|-|-|
| DrNAS | 10 | 9 | 11 | 25 | 13 | 59 | 23 | 10 |
| PC-DARTS | 4 | - | 5 | 12 | 9 | 30 | - | 2 |
| RE(Mix) | 55 | 71 | 32 | 62 | 42 | 80 | 65 | 42 |

_NAS-Bench-360 tasks missing:_ We appreciate the reviewer's concern regarding our original NAS-Bench-360 evaluation. Due to resource and time constraints we did not manage to have all NAS-Bench-360 results ready. The five tasks we reported in our original submission were the most amenable to our search space and required no adjustments. In this rebuttal, we present further results on 1D datasets within the benchmark, which required us to adjust the einspace CFG to make it compatible. Essentially, these adjustments include replacing our decomposed convolutional operators with 1-dimensional versions. We add the results for these 1D tasks, along with details of these minor adjustments, to our updated manuscript. The remaining three datasets within the benchmark are in progress and will also be included for the camera-ready submission.
Table E: Additional datasets from NAS-Bench-360 (one-dimensional)

| | WRN | DARTS (GAEA) | Expert | RE(WRN) einspace |
|-|-|-|-|-|
| Satellite | 15.29 | 12.51 | 19.80 | 12.55 |
| DeepSea | 0.45 | 0.36 | 0.30 | 0.36 |

In the rebuttal in general, we present an array of additional experimental results towards strengthening our submission. Our updated experimental evaluation now covers 16 different datasets, with sizes ranging from thousands of data points to over a million ('Satellite', NB360), and spatial resolutions of up to 256x256 ('Cosmic', NB360). We provide further evidence of the efficacy of our proposed space, including more expensive tasks, and the experimental breadth can be considered highly diverse. We believe this serves to strengthen our motivation for einspace, which is indeed aligned with works related to NAS-Bench-360, and we thank the reviewer for the suggestion.

_Exploring design priors of einspace:_ We agree with the reviewer that this provides an additional interesting direction for exploration. Towards exploring this aspect further, we present results comparing einspace to the previous, CFG-based, hNASBench201 from Schrodi et al. This allows for an initial study of the effects of our search space design choices and, in particular, the increased expressiveness compared to hNASBench201. These results show how einspace compares favourably to a different search space under the same evolutionary search. Overall, we highlight that our search results on einspace are competitive, even with far weaker priors on the search space.

Table A: Comparing einspace to hNASBench201 (from Schrodi et al. [1])

| | RE (hNASBench201) | RE (Mix) (einspace) |
|-|-|-|
| AddNIST | 93.82 | 97.72 |
| Language | 92.43 | 97.92 |
| MultNIST | 93.44 | 92.25 |
| CIFARTile | 58.31 | 62.76 |

We update our manuscript with these findings with the aim to further improve reader understanding of choices relating to search space priors, and we thank the reviewer for the suggestion.
Questions:

_Recurrence missing:_ The reviewer raises an interesting question and we agree that recurrence, and related operations, make for an appealing einspace extension. We believe recurrent operations can likely be integrated via the inclusion of a recurrent module that repeats the computation of the components within it; however, we leave more detailed exploration of this direction to future work.

_Why not full NAS-Bench-360?:_ As already mentioned, the reason for only including 5 out of 10 NAS-Bench-360 tasks was limited time and compute resources, and that some of the tasks require further adjustments to our CFG codebase. We evidence here that we have been able to successfully extend our experimental work on this axis and, further, that the full set of tasks will be included in any camera-ready version.

We thank the reviewer for detailed comments that (1) thoroughly test the experimental and theoretical underpinnings of our ideas and (2) enable us to update our manuscript towards further improving clarity. We hope we have addressed all fundamental questions raised and, in light of our clarifications, we invite the reviewer to consider increasing their score.

[1] Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars, Schrodi et al, NeurIPS 2023.
[2] Insights from the Use of Previously Unseen Neural Architecture Search Datasets, Geada et al, CVPR 2024.

---

Rebuttal Comment 1.1: Title: Message to reviewer ZVSn Comment: Thank you again for taking the time to review our paper. We hope our response has addressed your concerns, and would be very grateful if you would reconsider your scores. If there are further questions we are happy to continue discussing until the deadline tomorrow.
Summary: This paper proposes a new search space named einspace, based on a parameterized probabilistic context-free grammar, for neural architecture search. In contrast to conventional search spaces composed of high-level operations and modules such as recurrent, convolutional, and activation layers, einspace consists of more fundamental operations, such as clone, summation, permutation, and activation. This search space can be defined by a context-free grammar and can represent a neural architecture as a derivation tree. The experiments on Unseen NAS and NAS-Bench-360 demonstrate the advantage of this search space.

Strengths:
1. The paper is well-written and is easy to follow.
2. The idea of fundamental operations and a context-free grammar for the search space is great and interesting.
3. This search space has the advantage of designing new architectures based on existing SOTA architectures such as ResNet, Wide-ResNet, ViT, and MLP-Mixer.

Weaknesses: Despite the strengths of this paper, I still have some concerns that prevent me from confidently recommending acceptance:
1. The first problem is about the novelty. Since einspace is not the first search space based on CFGs, and the fundamental operations of einspace are common in existing search spaces, the novelty of einspace may not reach my expectations. Although the fundamental operations are different from and more expressive than the high-level operations and rigid structures of existing search spaces, these fundamental operations are also commonly used in existing search spaces. Furthermore, it is complicated to represent convolution and skip connections using these operations, and it is impossible to represent recurrent computation, as the authors discussed in the Limitations section.
2. The second problem is about the tradeoff between diversity and time complexity of the search space.
It would be better to analyze the time complexity or compare the time consumption between einspace and others, such as the search space of DARTS.
3. From the experimental results, the architectures searched in einspace do not seem to be that competitive compared to other methods, as shown in Table 1. I think the results are also influenced by the search strategy. Is it possible to apply RL or differentiable search strategies to this search space? If possible, the authors could give a table listing which search strategies are suitable for einspace and which search strategies are better.

Technical Quality: 4 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations in their paper. Besides, there are no potential negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback, and respond to each point below.

_Novelty:_ We acknowledge that previous CFG-based studies exist; however, we would highlight that we are the first work to introduce the concept of pCFGs for search space design, which we show brings advantages in controlling the expected architecture complexities. We are also the first to unify multiple architectural families into one single expressive space. We provide direct experimental evidence, namely that regularised evolution seeded with diverse SOTA architectures performs competitively across a broad range of datasets. We believe that this constitutes a meaningful step forward, beyond previous work on CFG-based search space design. The fundamental operations in einspace are atomic, and rather than focussing on the fact that they are pre-existing, we highlight that the novelty of our work stems from our thinking around the required expressiveness of the search space. The unique manner in which we organise and leverage such atomic operators results in the emergence of new search spaces that contribute a previously unseen expressiveness for NAS-based tasks.

_Complexity of components:_ We agree with the reviewer that existing hand-designed high-level operators (e.g. convolutions, skips) often require a complicated composition of atomic operators and that this typically necessitates specific human expertise. This observation is actually at the very crux of our argument. Rather than assuming that such coarse operators are optimal, and hiding their inner workings from the model, we instead enable the autonomous construction and discovery of such related operators.

_Recurrence missing:_ While we account for a large set of important architectures, we are also clear to communicate that this is a non-exhaustive set. We do not view this as a large limitation; rather, it opens the door for interesting follow-up work, such as support for recurrent operations (e.g.
via inclusion of a recurrent module that repeats the computation of the components within). We leave such directions for future work.

_Tradeoff between diversity and time complexity:_ We thank the reviewer for the useful suggestion. In the table below we include search time results for the NAS methods DrNAS, PC-DARTS and RE(Mix) that were originally listed in Tab. 1. Note that two numbers are missing due to missing logs from the authors of [2]. We see that, as expected, the gradient-based DrNAS and PC-DARTS are significantly faster compared to the black-box RE(Mix), which trains 500 networks independently. We update our manuscript to report these results and add some further discussion on the tradeoff between diversity and time complexity in light of these observations, towards providing the reader with additional insight into the related issues.

Table D: Time consumption (in hours)

| | AddNIST | Language | MultNIST | CIFARTile | Gutenberg | Isabella | GeoClassing | Chesseract |
|-|-|-|-|-|-|-|-|-|
| DrNAS | 10 | 9 | 11 | 25 | 13 | 59 | 23 | 10 |
| PC-DARTS | 4 | - | 5 | 12 | 9 | 30 | - | 2 |
| RE(Mix) | 55 | 71 | 32 | 62 | 42 | 80 | 65 | 42 |

_Competitiveness:_ We highlight that, in Table 1, RE(RN18) and RE(Mix) achieve average ranks that are only beaten by BonsaiNet, which shows the high performance of our approach. Additionally, as part of our new results, we also present a direct comparison to hNASBench201, the hierarchical CFG-based search space from Schrodi et al. [1]. These results show how einspace compares favourably to a different search space under the same evolutionary search. Overall, we highlight that our search results on einspace are competitive, even with far weaker priors on the search space.

Table A: Comparing einspace to hNASBench201 (from Schrodi et al.
[1])

| | RE (hNASBench201) | RE (Mix) (einspace) |
|-|-|-|
| AddNIST | 93.82 | 97.72 |
| Language | 92.43 | 97.92 |
| MultNIST | 93.44 | 92.25 |
| CIFARTile | 58.31 | 62.76 |

_Potential search strategies:_ We thank the reviewer for the idea of a table clarifying the potential search strategies for einspace. Like hNASBench201 from Schrodi et al. [1], einspace is too large for one-shot methods that require all architectures to be instantiated into a single supernet. However, there are other weight-sharing methods relating to MCTS and SPOS that may be applicable.

Table E: Comparison of einspace with existing search spaces. † Gradient-based search is difficult in these spaces due to their size, but other weight-sharing methods may be available. *The paper introducing hNASBench201 [1] also considers versions of the search space for Transformer language models.

| | Type | Size | Focus | RS | RE | RL | BO | Gradient-based |
|-|-|-|-|-|-|-|-|-|
| einspace | pCFG | Huge | ConvNets, Transformers, MLP-only | ✓ | ✓ | ✓ | ✓ | † |
| hNASBench201 | CFG | 10^446 | ConvNets* | ✓ | ✓ | ✓ | ✓ | † |
| NASBench201 | Cell | 10^4 | ConvNets | ✓ | ✓ | ✓ | ✓ | ✓ |
| NASBench101 | Cell | 10^5 | ConvNets | ✓ | ✓ | ✓ | ✓ | ✓ |
| DARTS | Cell | 10^18 | ConvNets | ✓ | ✓ | ✓ | ✓ | ✓ |

In light of our responses and improved evaluation, we invite the reviewer to consider increasing their score.

[1] Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars, Schrodi et al., NeurIPS 2023.
[2] Insights from the Use of Previously Unseen Neural Architecture Search Datasets, Geada et al., CVPR 2024.

---

Rebuttal Comment 1.1: Title: Response to Authors Comment: Thanks to the authors for the detailed rebuttal. Although there are some issues with einspace, such as high time complexity and a complex representation of skip connections, I think it is a valuable exploration of a search space with atomic operations. Therefore, I would like to raise my score to Weak Accept.
--- Reply to Comment 1.1.1: Title: Response to Reviewer wLBZ Comment: We thank the reviewer for their response and updated scores. If there are any outstanding concerns regarding the time complexity or representations of architectural components, we are happy to discuss further during the discussion period.
Summary: The manuscript presents "einspace," a novel neural architecture search (NAS) space based on a parameterized probabilistic context-free grammar (PCFG). The authors aim to address the limitations of current NAS methods by proposing a highly expressive search space that supports diverse network operations and architectures of varying sizes and complexities.

Strengths:
1. The paper introduces a unique NAS search space, "einspace," which is designed to be highly expressive, accommodating a wide range of architectures, including those not traditionally found in NAS literature.
2. The work contributes to the ongoing discussion on the role of search space expressivity and strategic search initialization in NAS, potentially paving the way for future research in this direction.

Weaknesses:
1. While the authors claim to introduce a new search space, the manuscript's Method Section appears to describe a set of rules that break down operators into smaller elements, which could be misinterpreted as a mere decomposition rather than a novel search space construct. This raises questions about the actual size and scope of the proposed search space, which would benefit from further clarification.
2. Section 3.7 is not clearly articulated, and the authors are encouraged to provide a simplified explanation of its main content to aid reader comprehension.
3. The primary goal of the proposed method is to design a highly expressive yet constrained search space. To substantiate this claim, it would be beneficial to conduct searches and validation on larger-scale datasets, such as ImageNet, which could more effectively demonstrate the superiority of the proposed search space. The current experiments on smaller datasets may not fully showcase the advantages of the search space.
4. The experimental settings are somewhat unclear.
It appears that the authors search within the proposed space and then validate the discovered network structures on other datasets, such as those from NAS-Bench-360. However, NAS-Bench-360 imposes constraints on the design space that may not be compatible with the structures proposed by the authors. Further clarification on how the proposed space's network structures are adapted or validated on NAS-Bench-360 is needed to ensure the experiments are methodologically sound. Technical Quality: 2 Clarity: 2 Questions for Authors: NA Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback, and respond to each point below.

_Size and scope of the search space:_ We thank the reviewer for the comment; however, we believe this point to be largely a matter of semantics. By decomposing coarse-grained building blocks into atomic operators we meaningfully increase the size, complexity and flexibility of the search space. We evidence that this allows points in our search space to embody (common) architectures that cannot be represented in previously explored spaces. We note that multiple co-reviewers are impressed by the size and scope of our proposed space (ZVSn, wLBZ) and that our experiments serve to evidence the complexity of the search space (duvy).

_Clarity of Section 3.7:_ We thank the reviewer for the opportunity to provide a simplified explanation. If we consider network architectural design to be a procedural, decision-making process, then the crux of the message in Sec. 3.7 is that: **we can introduce a probabilistic choice at each step in this process, to help us achieve the desired level of architecture complexity**. At each step, there is a chance to continue building the network (adding more components) or to stop and finalise a part of it. The probability P(M → C | M) is particularly important for this. Further, guided by previous work on PCFGs, we can carefully adjust these probabilities to provably control the average complexity of the generated architectures. In essence, the probabilistic approach strikes a balance between creating deep, complex networks and shallow, simpler ones. This helps to explore a wider range of architectural possibilities while maintaining control over the overall complexity of the generated models. We refine our phrasing of Sec. 3.7, towards further aiding reader understanding on this point.
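As a toy illustration of this point (a minimal two-rule grammar of our own choosing, not the actual einspace grammar or the authors' implementation), sampling from a PCFG in which a module M either splits into two sub-modules or terminates in a concrete component shows how the continuation probability controls the expected size of sampled architectures:

```python
import random

# Toy PCFG, not the actual einspace grammar: a module M either splits
# into two sub-modules (M -> M M) with probability p_split, or terminates
# in a concrete component (M -> C) with probability 1 - p_split.
def sample_size(p_split, rng, max_nodes=10_000):
    """Number of terminal components C in one sampled derivation."""
    pending, terminals = 1, 0                # count of unexpanded M symbols
    while pending:
        pending -= 1
        if terminals + pending > max_nodes:  # guard against runaway derivations
            break
        if rng.random() < p_split:
            pending += 2                     # continue building: two sub-modules
        else:
            terminals += 1                   # stop and finalise this part
    return terminals

rng = random.Random(0)
avg = lambda p: sum(sample_size(p, rng) for _ in range(2000)) / 2000
small, large = avg(0.30), avg(0.45)
# For p_split < 0.5 the expected size is (1 - p)/(1 - 2p): 1.75 at p=0.30
# and 5.5 at p=0.45, so lowering the continuation probability provably
# shrinks the average sampled architecture.
print(small, large)
```

The same calculation explains the subcritical/supercritical boundary: once the split probability reaches 0.5, the expected size diverges, which is why tuning these production probabilities matters for keeping generated architectures finite and of a desired average complexity.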
_Larger-scale datasets:_ We appreciate the reviewer's concern regarding limited evaluation and would firstly note that this is a common problem of previous NAS works, which often only consider a small number of datasets, e.g. CIFAR10. To alleviate this concern we present an array of additional experimental results in our rebuttal, towards strengthening our submission. Our updated experimental evaluation now covers 16 different datasets, with sizes ranging from thousands of data points to over a million ('Satellite', NB360). Our updated experimental work provides further evidence of the efficacy of our proposed space, including more expensive tasks, and the experimental breadth can be considered more diverse than most previous NAS work that we are aware of. We address the issue regarding development of more efficient search strategies above (see reply to **duvy**) and accordingly defer evaluation of ImageNet-scale tasks to future work.

_Unclear experimental settings:_ We think there may be a misunderstanding about how we performed our experiments on NAS-Bench-360 for Table 5. These results are run independently of those in Table 1, and there is no transfer or adaptation between tasks. Throughout our evaluation, every search is performed on the same dataset that the evaluation is performed on. Thank you for highlighting this; we will revise the manuscript to make it clearer. Some datasets in NAS-Bench-360 do indeed impose additional constraints on the search. The 5 tasks we considered for the submitted version were the easiest to use and required no adjustments to the search space. The Cosmic and Darcy Flow datasets simply needed a dense prediction output layer instead of a classifier. In our updated results in this rebuttal, we also consider some 1D datasets within the benchmark, and this required us to adjust the einspace CFG to make it compatible. Primarily, these adjustments include replacing our decomposed convolutional operators with 1-dimensional versions.
The details of all adjustments will of course be added to the manuscript along with the results on these 1D tasks.

Table A: Additional datasets from NAS-Bench-360 (one-dimensional)

| | WRN | DARTS (GAEA) | Expert | RE(WRN) einspace |
|-|-|-|-|-|
| Satellite | 15.29 | 12.51 | 19.80 | 12.55 |
| DeepSea | 0.45 | 0.36 | 0.30 | 0.36 |

We thank the reviewer for the detailed comments and suggestions that enable us to update our manuscript towards further improving clarity. We hope we have addressed all fundamental questions raised and, in light of our clarifications, we invite the reviewer to consider increasing their score.

---

Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: After reading the rebuttal, I raise my score to 4 and lower my confidence. There is something I misunderstood, but I have failed to figure out what.

---

Reply to Comment 1.1.1: Title: Thanks for the response Comment: We thank the reviewer for their response and for updating their scores. Let us know if there is anything we can clarify further to clear up the misconceptions, and we'll be happy to do so in the remaining discussion period.
Rebuttal 1: Rebuttal: We thank the six (!) reviewers for their time and valuable comments that improve the quality of our work. We are encouraged by the positive feedback, namely:
- Multiple reviewers appreciate the novelty of our core idea, to move beyond conventional NAS spaces (duvy, wLBZ, ZVSn)
- That our experimental results evidence our method efficacy (duvy, s1qw, ZVSn)
- Our work opens up new opportunities and future NAS research directions (sBZH, c4rx)
- A majority of reviewers note that the paper is well-written, well-motivated, and easy to follow (duvy, wLBZ, ZVSn, c4rx)
- Our use of helpful examples and clarity of explanation (duvy, c4rx)
- Multiple reviewers appreciate that we release our source code preemptively during the review period (duvy, c4rx)

We address individual reviewers' concerns below inline and also offer a communal reply here in order to address several common and important points.

**Novelty**: Our core contribution is a novel search space, which we argue is a meaningful and valid contribution, independently of presenting a completely novel search strategy. This provides an opportunity for the community to develop new search strategies for our space and encourages interesting new lines of work. That said, as a starting point, we have highlighted the effectiveness of seeding search with SOTA architectures, which is straightforward given the flexibility of our space. Our space uses a CFG, as in the excellent work of [1], but is distinct for several key reasons:
- Our space unifies multiple architectural families (ConvNets, Transformers, MLP-only) into **one single expressive space**, while [1] present variations of their spaces centred around ConvNets only, with a separate instantiation focusing only on Transformers.
- einspace extends to probabilistic CFGs.
This constitutes a significant contribution by enabling a set of benefits that include (i) allowing experts to define priors on the search space via probabilities, and (ii) enabling a broader range of search algorithms that incorporate uncertainty estimates inside the search itself.
- einspace contains recursive production rules (e.g. M → MM), meaning the same rule can be expanded over and over again, providing very high flexibility in macro structures. [1] instead focuses on fixed hierarchical levels that limit the macro structure to a predefined (though very large) set of choices.

We will ensure that these differences are better highlighted in the manuscript. Further to this, we present a direct comparison to hNASBench201, the hierarchical CFG-based search space from [1]. These results show how einspace compares favourably to a different search space under the same evolutionary search. Overall, we highlight that our search results on einspace are competitive, even with far weaker priors on the search space.

Table A: Comparing einspace to hNASBench201 (from Schrodi et al. [1])

| | RE (hNASBench201) | RE (Mix) (einspace) |
|-|-|-|
| AddNIST | 93.82 | 97.72 |
| Language | 92.43 | 97.92 |
| MultNIST | 93.44 | 92.25 |
| CIFARTile | 58.31 | 62.76 |

**Experimental results**: We appreciate reviewer concerns regarding experimental evaluation, as this is a common problem in NAS work. To alleviate this concern we present an array of additional experimental results in our rebuttal, towards strengthening our submission. Our updated experimental evaluation now covers 16 different datasets, with sizes ranging from thousands of data points to over a million ('Satellite', NB360), and spatial resolutions of up to 256x256 ('Cosmic', NB360). Our updated experimental work provides further evidence of the efficacy of our proposed space, including more expensive tasks, and the experimental breadth can be considered more diverse than most previous NAS work.
Table B: Additional datasets from NAS-Bench-360 (one-dimensional)

| | WRN | DARTS (GAEA) | Expert | RE(WRN) einspace |
|-|-|-|-|-|
| Satellite | 15.29 | 12.51 | 19.80 | 12.55 |
| DeepSea | 0.45 | 0.36 | 0.30 | 0.36 |

[1] Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars, Schrodi et al., NeurIPS 2023.
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper introduces einspace, a hierarchical space of neural architectures based on a parametric probabilistic context-free grammar, which is expressive enough to accommodate various state-of-the-art architectures including ResNets and Transformers. The Authors further perform Regularized Evolution (RE) search over einspace, either searching from scratch or seeding the initial population with state-of-the-art architectures. In particular, when seeded with ResNet18, RE generates novel architectures that significantly outperform ResNet18 on multiple datasets from Unseen NAS. Furthermore, RE seeded with either ResNet18 or Mix (a mixture of SOTA architectures) significantly outperforms RE from scratch as well as random sampling and random search.

Strengths: This study extends previous research on hierarchical architecture spaces by introducing a more expressive framework capable of accommodating diverse structures including convolutional networks and transformers. Novelties include imposing minimal priors, which facilitates successful search within a highly expressive architecture space. Additionally, grammar rules are equipped with parameters to ensure the generation of valid architectures through the combination of various components. Moreover, the complexity / depth of architectures is regulated by tuning the probabilities assigned to production rules. The authors demonstrate through a number of experiments involving ResNet and WideResNet architectures that network performance can be significantly enhanced by utilizing einspace in conjunction with RE, when seeding the initial population with these SOTA architectures.

Weaknesses: The experiments are currently limited. While the paper does not introduce a search strategy tailored to this search space, it is crucial to emphasize experiments demonstrating the potential for RE, possibly seeded with SOTA networks, to enhance model performance.
The current evaluations focus on ResNet18 on the Unseen NAS datasets as well as WRN on datasets from NASBench360. Extending this analysis to include a broader range of models, e.g. those listed in Table 1 of Unseen NAS, such as AlexNet and DenseNet, would provide useful insights into the effectiveness of einspace. Moreover, it would be valuable to explore the application of RE on einspace to improve performance on widely used datasets like CIFAR10, e.g. with ResNet and in particular ViT, given that the paper highlights the capability of einspace to support transformer architectures as an advantage. Technical Quality: 3 Clarity: 3 Questions for Authors: How does einspace compare with other hierarchical search spaces in the literature [1] and [2] in terms of performance? To support the advantages of einspace, including expressivity at a reasonable search cost, it would be beneficial to conduct comparative evaluations, at least with RE, across a number of tasks. Does it make sense to apply black-box search methods other than RS and RE to einspace, for example the Bayesian optimization-based methods BOHNAS and NASBOWL used in [2]? Approximate search times for RE on the datasets of Table 1 are reported in Appendix B.3. How do these search times compare with those of other NAS methods listed in Table 1? A similar comparison would also be valuable for the experiments in Table 5. [1] Hierarchical Representations For Efficient Architecture Search (Simonyan et al., 2018) [2] Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars (Schrodi et al., 2023) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are adequately discussed in the final section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback, and respond to each point below.

_Limited evaluation:_ We present additional experimental evaluation in this rebuttal on multiple axes. Taking into account all new results, our experimental evaluation now covers 16 different datasets, with sizes ranging from thousands of data points to over a million ('Satellite', NB360), and spatial resolutions of up to 256x256 ('Cosmic', NB360). The new results further evidence the efficacy of einspace and our seeded RE, and we believe our study is now significantly broader and more diverse than most NAS work we are aware of. We will update our manuscript to include the additional experimental work and thank the reviewer for the suggestion.

_Broader range of models:_ Our rebuttal now provides initial results on RE(DenseNet121) using einspace. We see gains here as well, especially on the Language dataset, where it almost matches the performance of RE(RN18)=96.84. We believe the further exploration of additional models constitutes potentially valuable future work.

Table A: DenseNet121 results

| | DenseNet121 | RE(DenseNet121) |
|-|-|-|
| AddNIST | 94.72 | 94.84 |
| Language | 91.26 | 96.42 |

_More datasets, e.g. CIFAR10 with ResNet and ViT:_ Our updated experimental evaluation now covers 16 different datasets, with sizes ranging from thousands of data points to over a million ('Satellite', NB360). Our updated experimental work includes more expensive tasks and provides further evidence of the efficacy of our proposed space. The experimental breadth has been meaningfully increased and can be considered to constitute a diverse range of tasks. We now present results on CIFAR10, using our regularised evolution seeded with ResNet18, and the mix of architectures (including a ViT).
We can see that the improvement in this case is not as significant as for other datasets, an effect we attribute to the broad focus of einspace, which goes beyond the ConvNets that have long been optimised for datasets like CIFAR10.

Table B: CIFAR10 results

| | RN18 | RE(RN18) | RE(Mix) |
|-|-|-|-|
| CIFAR10 | 94.91 | 95.31 | 94.73 |

_einspace vs other hierarchical spaces:_ We agree that a direct comparison between einspace and previous CFG-based spaces would be beneficial. We are now happy to report new results comparing RE on einspace vs. RE on hNASBench201 (hierarchical+non-linear) from Schrodi et al. [1] in the table below. The results show that our searches in einspace tend to outperform those on hNASBench201, and that both improve upon the baseline network. We thank the reviewer for this suggestion. The full set of results will be included in the updated manuscript.

Table C: Comparing einspace to hNASBench201 (from Schrodi et al. [1])

| | RN18 | RE (hNB201) | RE (Mix) (einspace) |
|-|-|-|-|
| AddNIST | 93.36 | 93.82 | 97.72 |
| Language | 92.16 | 92.43 | 97.92 |
| MultNIST | 91.36 | 93.44 | 92.25 |
| CIFARTile | 47.13 | 58.31 | 62.76 |

_BO in einspace:_ Bayesian optimisation methods are certainly applicable to einspace. Similar to how BOHNAS is used in [1], we think a hierarchical kernel can work well with our CFG formulation. Unfortunately, due to resource and time constraints, we were not able to perform this experiment in time for this rebuttal. We conjecture that BO could provide a sample-efficient search and be further improved through seeding with SOTA architectures, as we do with RE. We leave more in-depth exploration of advanced search strategies to future work.

_Search times:_ In the table below we include search time results for the NAS methods DrNAS, PC-DARTS and RE(Mix) that were originally listed in Tab. 1. Note that two numbers are missing due to missing logs from the authors of [2].
We see that, as expected, the gradient-based DrNAS and PC-DARTS are significantly faster than the black-box RE(Mix), which trains 500 networks independently. We update our manuscript to report these results and add some further discussion on the tradeoff between diversity and time complexity in light of these observations, towards providing the reader with additional insight into the related issues. We thank the reviewer for the useful question.

[1] Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars, Schrodi et al., NeurIPS 2023.
[2] Insights from the Use of Previously Unseen Neural Architecture Search Datasets, Geada et al., CVPR 2024.

Table D: Time consumption (in hours)

| | AddNIST | Language | MultNIST | CIFARTile | Gutenberg | Isabella | GeoClassing | Chesseract |
|-|-|-|-|-|-|-|-|-|
| DrNAS | 10 | 9 | 11 | 25 | 13 | 59 | 23 | 10 |
| PC-DARTS | 4 | - | 5 | 12 | 9 | 30 | - | 2 |
| RE(Mix) | 55 | 71 | 32 | 62 | 42 | 80 | 65 | 42 |

---

Rebuttal Comment 1.1: Comment: I thank the Authors for their detailed response. Based on the additional results I would like to raise my score to 5.

---

Reply to Comment 1.1.1: Title: Response to Reviewer s1qw Comment: We thank the reviewer for their response and updated scores. If there are any outstanding concerns, we are happy to clarify further during the discussion period.
Summary: This paper introduces einspace, a search space that is designed to hierarchically encode architectures using probabilistic context-free grammars (PCFGs). It can encode various architectural components, such as convolutions, attention mechanisms, etc. The authors demonstrate the efficacy of simple blackbox optimizers in einspace to discover architectures that perform competitively on various tasks and datasets.

Strengths: In general, the motivation to move beyond the conventional NAS spaces is valid and really important in my opinion. The paper is also very well-written, with simple examples followed by a more generic definition of the search space and interesting application to various tasks and datasets. Discovering novel architectures is a very challenging problem for the NAS community and as far as I know, this has not been achieved yet. Having a complex and versatile search space is the first step towards this goal. Some other positive aspects of this submission:
- Interesting experiments showing the complexity of the search space and the need for guided search methods (e.g. evolutionary strategies, as the authors show), instead of random search, which on previous spaces (e.g. the DARTS one) was shown to perform already well.
- Available code that fosters reproducibility and enables easier future research on this topic.
- Extending the prior work of [1] to probabilistic CFGs. This enables (1) easier expert prior definitions on the search space, and (2) a broader range of algorithms applied on einspace, which can also incorporate uncertainty estimates of their choices inside the search itself.

Weaknesses: Despite the vast number of architectures that einspace includes, the major problem is how to search these spaces in an efficient way. The prior work of [1] used BO with a hierarchical graph kernel (a novel aspect of that paper); however, that was still expensive, especially when moving to image classification tasks.
I think defining very complex search spaces is a very useful task; however, the NAS problem is not solved by defining the search space alone. The search algorithm is a major component of the whole pipeline, and using blackbox methods to search such spaces will be computationally demanding and will still require company-scale computation. Below I list my major concerns about this submission:
- *Novelty*: I think the paper has some novel aspects compared to [1], e.g. the fact that it can encode both attention mechanisms and convolutions, or other operators, or the probabilistic extension of the CFG. However, as the authors mention in the limitation section, it would be great if, accompanying the search space, there were a new proposed search method that can efficiently search this space by exploiting the PCFG.
- *Limitations of the search space*: As the authors mention, there are various important architectures that cannot be encoded in einspace.
- *Claims*: I think the claim (for instance in the abstract) that you find "novel architectures" can mean many things. Correct me if I am wrong, but in my opinion einspace still encodes most of the known architectures, and searching on it will only re-discover them or improve on top of those architectures, but it won't find a completely novel architectural component, as for instance when ResNets introduced the residual connection back then.
- *Limited experimental evaluation and results*: The empirical evaluations are not enough in my opinion. The used tasks are simple and one function evaluation is cheap enough to allow blackbox methods to run there. Evaluating on more expensive tasks would require the development of more efficient search strategies that are tailored to einspace. Moreover, the results shown in Table 1 are not that impressive considering RE is typically a very strong baseline, so I would have expected it to outperform all other methods.
Technical Quality: 2 Clarity: 4 Questions for Authors: - Could you please discuss in more detail the main advantages of using CFGs instead of graph-based or other encodings used in NAS [2]? -- References -- [1] https://proceedings.neurips.cc/paper_files/paper/2023/file/4869f3f967dfe954439408dd92c50ee1-Paper-Conference.pdf [2] https://proceedings.neurips.cc/paper/2020/file/ea4eb49329550caaa1d2044105223721-Paper.pdf Confidence: 5 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: The authors have addressed the limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback, and respond to each point below.

_Novelty:_ We agree that our core contributions relate to the search space. However, we respectfully argue that the introduction of a valuable new space forms a meaningful and valid contribution, independently of presenting a completely novel search strategy. We provide an opportunity for the field to develop new search strategies for our space, and encourage interesting new lines of work. While we acknowledge the impressive work of Schrodi et al. [1], who offer both a new search space framework and a search strategy, there are also several recent papers whose main contributions consist solely of a search strategy [2, 3], and this is naturally complemented by papers whose core contribution is a search space. As reviewer **duvy** also notes, our search space highlights the importance of search strategy choice, in relation to space complexity. We believe this further helps to encourage follow-up work on search strategies and we will explicitly strengthen this idea in our revision. Finally, our secondary contribution in this paper is that seeding the search with existing SOTA architectures is a powerful approach that has been previously overlooked. Our expressive search space makes this straightforward as it contains such a diverse set of existing architectures.

[1] Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars, Schrodi et al., NeurIPS 2023.
[2] ZARTS: On Zero-order Optimization for Neural Architecture Search, Wang et al., NeurIPS 2022.
[3] PASHA: Efficient HPO and NAS with Progressive Resource Allocation, Bohdal et al., ICLR 2023.

_Limitations of the search space:_ To date, einspace constitutes one of the most expressive search spaces in the NAS field. We evidence that it combines multiple powerful architectural families in a unified space, including ConvNets, transformers and MLP-only architectures.
While we account for a large set of important architectures, we are also clear to communicate that this is a non-exhaustive set. We do not view this as a large limitation; rather, it opens the door for interesting follow-up work, such as support for recurrent operations (e.g. via inclusion of a recurrent module that repeats the computation of the components within). We leave such directions for future work.

_Claims on novel architectures:_ We clarify that einspace has an 'increased ability to find novel architectures'. It includes examples of existing SOTA architectures, like ResNets and ViTs, but importantly it also includes a huge number of architectures anywhere between and around these existing models. As an example, many previous search spaces consider the self-attention module to be a fixed component, while we model the intricacies of each matrix multiplication, activation, linear layer and branching/merging structure. By changing the number of branches, or the operations done in each branch, we are able to build and discover uncommon architectural components that are on a similar order of complexity as existing self-attention or indeed residual connections. That is, we provide a significantly more granular space within which novel components can be discovered. It is of course true that we impose some constraints on the space, as discussed in Sec. 3.4, but these are relatively weak constraints that don't significantly reduce expressiveness. We are open to refining the specific phrasing of claims on this point, towards aiding understanding, if the reviewer deems this important.

_Limited evaluation and results:_ We appreciate the reviewer's concern regarding limited evaluation and would firstly note that this is a common problem of previous NAS works, which often only consider a small number of datasets, e.g. CIFAR10. To alleviate this concern we present an array of additional experimental results in our rebuttal, towards strengthening our submission.
Our updated experimental evaluation now covers 16 different datasets, with sizes ranging from thousands of data points to over a million ('Satellite', NB360), and spatial resolutions of up to 256x256 ('Cosmic', NB360). Our updated experimental work provides further evidence of the efficacy of our proposed space, including more expensive tasks, and the experimental breadth can be considered more diverse than most previous NAS work. We address the issue regarding development of more efficient search strategies above (see previous point) and accordingly defer evaluation of ImageNet-scale tasks to future work. As part of our new results, we also present a direct comparison to hNASBench201, the hierarchical CFG-based search space from Schrodi et al. [1]. These results show how einspace compares favourably to a different search space under the same evolutionary search. Overall, we highlight that our search results on einspace are competitive, even with far weaker priors on the search space.

Table A: Comparing einspace to hNASBench201 (from Schrodi et al. [1])

| | RE (hNASBench201) | RE (Mix) (einspace) |
|-|-|-|
| AddNIST | 93.82 | 97.72 |
| Language | 92.43 | 97.92 |
| MultNIST | 93.44 | 92.25 |
| CIFARTile | 58.31 | 62.76 |

_Encodings:_ Our CFG formulation encodes architectures in the form of derivation trees. This explicitly differs from a graph encoding; a derivation tree records the set of design choices that define a flexible macro structure for the architecture, while a graph encoding alternatively provides only a rigid macro-structure blueprint for representable architectures (e.g. via fixed-size adjacency matrices). Through the use of derivation trees, einspace allows for mutations that can effectively alter both the macro structure *and* the individual components of an architecture. Modifications of this class are more difficult if using rigid graph encodings.
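As a minimal, hypothetical sketch of this mutation mechanism (toy grammar and operator names of our own invention, not the actual einspace rules or code), a derivation tree can be mutated by regrowing the subtree at a randomly chosen node, changing macro structure and components in a single edit:

```python
import random

# Toy derivation-tree grammar: a module is either a sequence of two
# sub-modules ("seq") or a terminal operator. Purely illustrative.
RULES = [("seq", ["M", "M"]), ("conv", []), ("attn", []), ("mlp", [])]

def grow(rng, depth=0, max_depth=4):
    """Sample a derivation tree as (name, [children])."""
    options = RULES if depth < max_depth else [r for r in RULES if not r[1]]
    name, symbols = rng.choice(options)
    return (name, [grow(rng, depth + 1, max_depth) for _ in symbols])

def nodes(tree):
    yield tree
    for child in tree[1]:
        yield from nodes(child)

def mutate(tree, rng):
    """Regrow the subtree rooted at a uniformly chosen node, so one edit
    can rewrite both macro structure and individual components."""
    target = rng.choice(list(nodes(tree)))
    def rebuild(t):
        return grow(rng) if t is target else (t[0], [rebuild(c) for c in t[1]])
    return rebuild(tree)

rng = random.Random(1)
parent = grow(rng)
child = mutate(parent, rng)
print(parent, child, sep="\n")
```

With a rigid fixed-size adjacency-matrix encoding, an equivalent edit would have to respect a predefined node budget and wiring template, whereas here the regrown subtree is free to be deeper, shallower, or structurally different from what it replaces.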
We will update our manuscript to include further discussion on this point and thank the reviewer for the helpful question.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their response and the additional experiments comparing einspace to the hierarchical NB201 space from Schrodi et al. I will increase my score; however, I think this paper has the potential to become a really strong publication by incorporating the feedback from the reviewers, and one more iteration might be beneficial in the long run.

---

Reply to Comment 1.1.1: Title: Response to Reviewer duvy Comment: We thank the reviewer for their response and their updated scores, and we are encouraged that they think our work has strong potential. We will integrate all feedback from this review process into our paper, and for a potential camera-ready version we are also working towards evaluating a BO search strategy on our search space. Thank you for a great discussion.
MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution
Accept (poster)
Summary: This paper designs a multi-agent framework, MAGIS. Following the form of team problem solving, it defines four agents (Manager, Repository Custodian, Developer, and QA Engineer), which decompose tasks, locate problematic code, generate code, and review code, respectively. They help the LLM better locate problems and generate correct code in a cooperative way. This framework can solve 13.94% of problems on the SWE-bench benchmark.

Strengths:
1. This paper draws on the cooperative methods of human teams, assigns different roles to agents, and utilizes both the powerful natural-language understanding of the large model and the domain-specific ability elicited by assigning specific roles. Compared with simply calling the large model, performance is improved.
2. This paper applies software engineering ideas very well. For example, the Kick-off Meeting method helps multiple developers clarify independent tasks, ensures that there are no conflicts, and determines the order of modifications.

Weaknesses:
1. The paper uses 11 prompts in total, and the specific prompt content cannot be found in the main text, which raises the concern that the experimental results may strongly depend on the quality of the designed prompts. After substituting other models, the effect may be unpredictable.
2. The framework is relatively complex and requires a large number of interactions with LLMs. Whether these additional computing resources are necessary, compared with other baseline scores, is not further analyzed in the paper.
3. **SWE-bench clearly states that the hints field should not be used in the submission list, but from this paper, it appears that hints are key to MAGIS's improvement.** Is this unfair to other methods?
4. I noticed that there are a lot of new submissions on SWE-bench (Lite) recently. Do you need to do a new comparison?
(Of course, I think it is reasonable not to compare, after all, they are working in the same period.) ### Reference - [1] Opendevin: Code less, make more. https://github.com/OpenDevin/OpenDevin/, 2024. - [2] Dong Chen, Shaoxin Lin, Muhan Zeng, Daoguang Zan, Jian-Gang Wang, Anton Cheshkov, Jun Sun, Hao Yu, Guoliang Dong, Artem Aliev, et al. Coder: Issue resolving with multi-agent and task graphs. arXiv preprint arXiv:2406.01304, 2024. - [3] Yingwei Ma, Qingping Yang, Rongyu Cao, Binhua Li, Fei Huang, and Yongbin Li. How to understand whole software repository? arXiv preprint arXiv:2406.01422, 2024. - [4] Yuntong Zhang, Haifeng Ruan, Zhiyu Fan, and Abhik Roychoudhury. Autocoderover: Autonomous program improvement, 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: See the questions in Weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors explain the limitations of their approach in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the valuable time to review our manuscript, and thanks for your positive comments (i.e., drawing on software engineering ideas well, drawing on the cooperative methods of human teams, and the improved performance). We are sorry for the confusion and unclear expression in the previous version. We have addressed each of the comments and suggestions. Please refer to our responses below for details.

> Q1: The paper uses 11 prompts in total, and the specific prompt content cannot be found in the main text, which makes people worry whether the experimental results will strongly depend on the quality of the designed prompts. After replacing other models, the effect may be uncontrollable.

Thanks for your comments.

- **Prompt content:** We will provide the full set of prompt content in the revision (part of it is shown below due to the length limit).

```python
# Prompt P (Line 5 in Algorithm 2)
system_prompt = (
    "You are a software development manager. "
    "Your responsibility is to provide clear guidance and instructions to a developer "
    "regarding modifications or improvements needed in a specific code file. "
    "This guidance should be based on the details provided in the issue description "
    "and the existing content of the code file."
)
user_prompt = (
    "Review the issue description and the content of the code file, then provide "
    "specific instructions for the developer on the actions they need to take to "
    "address the issue with these files.\n"
    f"# Issue Description:\n{issue_description}\n# Code File:\n{file_content}\n"
    "Respond concisely and clearly, focusing on key actions to resolve the issue. "
    "Limit your answer to no more than 100 tokens."
)
```

- **More base LLMs:** Moreover, we replaced GPT-4 with two other LLMs (i.e., DeepSeek [1*] and Llama-3.1-405B [2*]), and the experimental results on SWE-bench Lite are shown below. Please note that all prompts are identical to those we experimented with on GPT-4.
In the "Directly Use" setting, the prompts are sourced from SWE-bench, while the prompts for the other settings are designed by us.

| Base LLM | Directly Use | MAGIS | MAGIS (w/o hints, w/o QA) |
|---|---|---|---|
| DeepSeek | 0.33% | 12.67% | 11.00% |
| Llama 3.1 | 1.33% | 16.67% | 11.00% |

The table above shows that our method achieves a 38-fold performance improvement over directly using DeepSeek and a 12-fold improvement over directly using Llama 3.1. This improvement validates that our method is general and can unlock the potential of other LLMs for solving GitHub issues.

---

> Q2: The framework is relatively complex and requires a large number of interactions with LLMs. In the end, compared with other baseline scores, whether these additional computing resources are necessary is not further analyzed in the paper.

Thanks for your comments. Compared with direct usage, LLM-based multi-agent systems [3-5*], including ours, are more complex and need many interactions. However, these additional computing resources are necessary and worthwhile because (1) the issue-resolving task does not require immediate resolution (in contrast with coding tasks such as IDE code completion), and (2) the additional computation enables the model to resolve issues that were previously unresolvable (with the resolved rate increasing from 1.74% to 13.94%, as shown in Table 2). As direct usage of an LLM can resolve few GitHub issues, this paper focuses on how to better use LLMs to resolve issues and improve performance.

---

> Q3: SWE-bench clearly states that the hints field should not be used in the submission list, but from this paper, it is found that hints seem to be the key to MAGIS improvement. Is this unfair to other methods?

Thanks for your comments. We reported the score (10.28%) under the setting without hints in Table 2.
This score shows that our framework performs approximately six times better than the base LLM, GPT-4, thus validating the effectiveness of our method even in the absence of hints. The fairness of comparisons with other methods depends on whether those methods can access external information. For instance, the design of OpenDevin [3*] includes a web browser (Browsing Agent), which allows it to obtain content from the web, and these hints are available on GitHub before the issue resolution process begins.

---

> Q4: I noticed that there are a lot of new submissions on SWE-bench (Lite) recently. Do you need to do a new comparison? (Of course, I think it is reasonable not to compare, after all, they are working in the same period.)

Thanks for your comments and understanding! We are aware of the recent submissions on SWE-bench (Lite), and we compared the methods (SWE-Agent [4*], AutoCodeRover [5*]) that submitted their results before the NeurIPS 2024 submission deadline in Appendix D. The experimental results are shown in Table 4 on Page 18. The main difference between our method and others is that MAGIS does not use external tools such as web browsers [3*]. Our paper focuses on how to unlock the potential of LLMs for GitHub issue resolution, and we conduct many empirical studies analyzing the limitations of LLMs. Moreover, thanks for your references; we will incorporate them [3,5-7*] into the discussion.

---

*References* \
[1*] DeepSeek-AI. DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model. 2024.\
[2*] Dubey, et al. The Llama 3 Herd of Models. 2024.\
[3*] Wang, et al. OpenDevin: An Open Platform for AI Software Developers as Generalist Agents. 2024.\
[4*] Yang, et al. SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering. 2024.\
[5*] Zhang, et al. AutoCodeRover: Autonomous Program Improvement. 2024.\
[6*] Chen, et al. CodeR: Issue Resolving with Multi-Agent and Task Graphs. 2024.\
[7*] Ma, et al.
How to Understand Whole Software Repository? 2024.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' response and acknowledge that it clarifies many of my initial concerns. However, I remain apprehensive about the use of the "hints" field in the results, as this could potentially limit the method's applicability. Therefore, I will maintain my current score. In addition, I do not see the authors' update in the revision.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your follow-up feedback and for acknowledging the clarifications we provided in our rebuttal. We regret that our response regarding the use of the "hints" field has not fully alleviated your concerns. To clarify, while the "hints" field was utilized in one specific setting to illustrate potential improvements, our method remains effective without it. As shown in Table 2 of the paper, our approach achieved a score of 10.28% without the use of hints, representing a significant (**~6x**) improvement over the base LLM (i.e., GPT-4) score of 1.74%. This demonstrates the effectiveness of our method, even without hints. Moreover, the experimental results presented in our rebuttal further validate the effectiveness of our method across different LLMs (DeepSeek and Llama 3.1) without relying on hints. Notably, even without QA, our method achieved a score of 11.00%, which is **33 and 8 times** higher than directly using the base models DeepSeek and Llama 3.1, respectively. This demonstrates the applicability of our method, even without hints. In summary, the primary contribution of our work to GitHub issue resolution lies in the framework of our method, derived from our empirical studies. We will emphasize this point more clearly in our revised manuscript to avoid any potential misunderstandings. Regarding the revision, as per the NeurIPS guidelines, we are unable to upload the updated manuscript during the rebuttal/discussion period.
However, we will upload the revision as soon as the system allows. You can refer to the NeurIPS FAQ for more details: https://neurips.cc/Conferences/2024/PaperInformation/NeurIPS-FAQ. We greatly appreciate the time and effort you have taken to review our work. Your feedback has been invaluable in improving our manuscript, and we hope that our clarifications will be taken into consideration. Thank you again for taking the time to review.

---

Rebuttal 2: Comment: Dear Reviewer, Thank you very much for your follow-up feedback. We would like to address your concerns:

- **Contemporaneous Work**: The two papers you mentioned are contemporaneous with our submission, in line with the conference guidelines (details shown below). Both papers were published on arXiv (8 Apr and 6 May) within two months of the NeurIPS submission deadline (22 May). As such, they fall under the category of contemporaneous work, which is why we did not include them as baselines in the main body of our paper. Meanwhile, we have already cited these works (AutoCodeRover [61] and SWE-Agent [59]) and discussed the comparison with them in Appendix D (Lines 718-729). Given your concerns, we will further expand our discussion in the Appendix to provide a more detailed comparison (part of it is shown below).

> For the purpose of the reviewing process, papers that appeared online within two months of a submission will generally be considered "contemporaneous" in the sense that the submission will not be rejected on the basis of the comparison to contemporaneous work. Authors are still expected to cite and discuss contemporaneous work and perform empirical comparisons to the degree feasible. Any paper that influenced the submission is considered prior work and must be cited and discussed as such. Submissions that are very similar to contemporaneous work will undergo additional scrutiny to prevent cases of plagiarism and missing credit to prior work.
> Reference: https://neurips.cc/Conferences/2024/CallForPapers

- **Performance and Cost Comparison**: As of the conference submission deadline, the latest score reported by AutoCodeRover was 16.11%, as noted in the second-to-last paragraph of page 7 of their paper (https://arxiv.org/pdf/2404.05427v2). This score is not higher than the 16.67% we reported. Regarding SWE-Agent, it indeed reported an 18.00% score, which is higher than our ablation version (w/o hints). Thanks for your great suggestions; we reviewed the cost data on SWE-bench Lite and calculated that the average cost per instance for our method is approximately `$0.41`, which is significantly lower than the `$1.67` reported by SWE-Agent. Considering the trade-off between effectiveness (a 1.33% difference in score) and cost (a fourfold reduction), we believe that our approach remains competitive. We acknowledge the contributions of SWE-Agent and AutoCodeRover, as they, along with our work, collectively advance the state of the art in this task. Thank you once again for your valuable time.
Summary: The paper studies the reasons behind LLMs' failures in resolving GitHub issues, identifying key factors such as locating code files and lines, and the complexity of code changes. The authors propose a novel multi-agent framework, MAGIS, comprising four specialized agents: *Manager* and *Repository Custodian* for planning, *Developer* for coding, and *Quality Assurance Engineer* for code review. Experimental results show that MAGIS significantly outperforms GPT-4 and Claude-2, solving 13.94% on the 25% subset of the SWE-bench dataset. Ablation studies confirm the effectiveness of each agent's role in the framework.

Strengths:
- The paper tackles the challenging problem of using LLMs to resolve GitHub issues. The proposed method is intuitive and effective, achieving an eight-fold performance gain compared to the GPT-4 baseline.
- The paper conducts extensive experiments on both the reasons why LLMs fail in the process of resolving GitHub issues and the role of each agent, providing insights for the research community.

Weaknesses:
- The reproducibility of experiments for baseline models like GPT-4 and Claude is unclear. The code and data are temporarily unavailable, hindering follow-up and comparison by other researchers.
- Typos: There are minor typos, such as "multi-agennt" instead of "multi-agent" on page 2, line 63, and some missing "%" signs in the resolved ratio on page 7.

Technical Quality: 3 Clarity: 3

Questions for Authors:
1. In Section 2, I'm a bit confused about the calculation of the coverage ratio. How is $[s_i, e_i]$ defined for pure 'deleting' and 'adding' operations? If many code adjustments occur in a single file, is it ensured that each adjustment contributes independently to the coverage? (I mean, if the model-generated code differs from the reference code, the subsequent positions are not aligned.)
2. In Section 4, can you kindly provide more key implementation details, especially for the model comparison experiment in Table 2?
Some helpful information may be the core instruction/input for each model, the number of interaction rounds, and whether a code interpreter is allowed. These all affect the performance of the model.
3. Do you submit (or plan to submit) the results to the official SWE-bench leaderboard? How does this method differ from other agent-based methods?

Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the valuable time to review our manuscript, and thanks for your positive comments (i.e., intuitive and effective method, extensive experiments, and insights for the research community). We are sorry for the confusion and unclear expression in the previous version. We have addressed each of the comments and suggestions. Please refer to our responses below for details.

> W1: The reproducibility of experiments for baseline models like GPT-4 and Claude is unclear. The code and data are temporarily unavailable, hindering follow-up and comparison by other researchers.

Thanks for your comments. We will make the code and data publicly available.

---

> W2: Typos: There are minor typos, such as "multi-agennt" instead of "multi-agent" on page 2, line 63, and some missing "%" signs in the resolved ratio on page 7.

Thank you for carefully spotting these typos. All of them will be corrected in the revision.

---

> Q1: In Section 2, I'm a bit confused about the calculation of the coverage ratio. How is $[s_i, e_i]$ defined for pure 'deleting' and 'adding' operations? If many code adjustments occur in a single file, does it ensure that each adjustment contributes independently to the coverage? (I mean, if the model-generated code is different from the reference code, the subsequent positions are not aligned.)

We are sorry for the unclear presentation. ${[s_i, e_i]}$ indicates the range of lines in the file that have been modified, and the line numbers within this range are based on the file before the modifications are made. Therefore, ${[s_i, e_i]}$ should represent the whole file (i.e., ${[1, LastLineNumber]}$) for pure 'deleting', while it should be none (i.e., $\emptyset$) for pure 'adding'. For cases where a single file has many modifications, the line numbers are aligned based on `Git`, which uses the Myers diff algorithm [1*] by default.
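To make the range convention concrete, the following sketch computes a line-locating coverage ratio over inclusive pre-modification ranges $(s_i, e_i)$. The function name and the exact formula (lines touched by both patches, divided by reference-modified lines) are illustrative assumptions, not the paper's precise definition; an empty range list models the pure-'adding' case:

```python
def coverage_ratio(reference_hunks, generated_hunks):
    """Hypothetical line-locating coverage: the fraction of reference-modified
    lines (in pre-modification numbering) that the generated patch also touches.
    Each argument is a list of inclusive (s_i, e_i) line ranges; an empty list
    models the pure-'adding' case, where no pre-existing lines are touched."""
    ref_lines = set()
    for s, e in reference_hunks:
        ref_lines.update(range(s, e + 1))
    gen_lines = set()
    for s, e in generated_hunks:
        gen_lines.update(range(s, e + 1))
    if not ref_lines:
        return 0.0  # nothing to cover (pure 'adding' in the reference)
    return len(ref_lines & gen_lines) / len(ref_lines)
```

For example, a generated range (12, 20) covers lines 12-14 of a reference range (10, 14), giving a ratio of 3/5.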
The empirical study [2*] demonstrates that the Myers diff algorithm can ensure that each adjustment contributes independently to the coverage in most cases.

---

> Q2: In Section 4, can you kindly provide more key implementation details, especially for the model comparison experiment in Table 2? Some helpful information may be the core instruction/input for each model, the number of interaction rounds, and whether a code interpreter is allowed. These all affect the performance of the model.

Thanks for your suggestions. We will add the specific prompt content and make the implementation clearer in the revision (part of it is shown below due to the context limit). Specifically, in Table 2, the prompts for directly using the LLMs are sourced from SWE-bench [3*], while the other prompts for MAGIS in different settings are designed by us. For each GitHub issue, our method interacts with the issue only once, which means the framework generates only one final result per issue. Moreover, a code interpreter is not used in our method.

```python
# Prompt P (Line 5 in Algorithm 2)
system_prompt = (
    "You are a software development manager. "
    "Your responsibility is to provide clear guidance and instructions to a developer "
    "regarding modifications or improvements needed in a specific code file. "
    "This guidance should be based on the details provided in the issue description "
    "and the existing content of the code file."
)
user_prompt = (
    "Review the issue description and the content of the code file, then provide "
    "specific instructions for the developer on the actions they need to take to "
    "address the issue with these files.\n"
    f"# Issue Description:\n{issue_description}\n# Code File:\n{file_content}\n"
    "Respond concisely and clearly, focusing on key actions to resolve the issue. "
    "Limit your answer to no more than 100 tokens."
)
```

---

> Q3: Do you submit (or plan to submit) the results to the official SWE-bench leaderboard?
How does this method differ from other agent-based methods?

Thanks for your comments. We plan to submit the results after the anonymous review period ends. The main difference between our method and others is that MAGIS does not use external tools such as web browsers [4*]. Our paper focuses on how to unlock the potential of LLMs for GitHub issue resolution, and we conduct many empirical studies analyzing the limitations of LLMs.

---

*References* \
[1*] Myers, Eugene W. An O(ND) difference algorithm and its variations. *Algorithmica* 1.1 (1986).\
[2*] Nugroho, Yusuf Sulistyo, et al. How different are different diff algorithms in Git? *EMSE* 25 (2020).\
[3*] Yang, John, et al. SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering. *arXiv preprint* (2024).\
[4*] Wang, Xingyao, et al. OpenDevin: An Open Platform for AI Software Developers as Generalist Agents. *arXiv preprint* (2024).

---

Rebuttal Comment 1.1: Comment: Thank you for your thorough responses. I appreciate the valuable insights and the relatively effective method the paper offers. However, I still have some concerns regarding what and how easily future researchers can build upon this work. Additionally, the varying levels of access to external tools (e.g., code interpreters, web resources) make it hard to perform fair comparisons or ablations that would help highlight the unique advantages of the proposed agent-based method. Further exploration of these issues may enhance the potential impact of this article. Given those considerations, I will maintain my original score.

---

Rebuttal 2: Comment: Dear Reviewer, Thank you for your follow-up feedback and for recognizing the value of our work. We appreciate your support and the "weak accept" recommendation. We would like to clarify that our proposed method does not rely on any external tools (such as code interpreters or web browsers), and external tools are not used in the evaluation in our paper.
To address your concerns about the ease of building upon our work, we will release all relevant code and detailed implementation instructions after the paper is accepted. This will ensure that future researchers can easily reproduce and extend our empirical findings, fostering further developments in this area. Once again, thank you for your constructive feedback.

Best regards,
The authors.
Summary: This paper introduces MAGIS, a novel Large Language Model (LLM)-based multi-agent framework designed to address the challenge of resolving GitHub issues in software development. The authors conduct an empirical study to identify key factors affecting LLMs' performance on this task, including file and line localization accuracy and code change complexity. Based on these insights, they propose a collaborative framework consisting of four specialized agents: Manager, Repository Custodian, Developer, and Quality Assurance Engineer. These agents work together through planning and coding phases to generate appropriate code changes. The framework is evaluated on the SWE-bench dataset, demonstrating significant improvements over existing LLMs in resolving GitHub issues. Specifically, MAGIS achieves a resolved ratio of 13.94%, an eight-fold increase compared to the direct application of GPT-4. The paper also provides detailed analyses of the framework's components and discusses its effectiveness in different scenarios.

Strengths:

Originality:
1. The paper presents an in-depth empirical analysis of LLMs' performance in resolving GitHub issues, providing unique insights into the challenges of applying AI to complex software engineering tasks.
2. The study leverages software engineering metrics, such as the line-locating coverage ratio, to quantify LLM performance in code change generation.

Quality:
1. The empirical study demonstrates rigorous methodology, examining multiple factors affecting LLM performance, including file localization, line localization, and code change complexity.
2. The analysis employs statistical techniques to establish correlations between various complexity indices and issue resolution success, adding depth and reliability to the findings.

Clarity:
1. The paper clearly articulates the gap between LLMs' performance on function-level tasks versus repository-level tasks, providing strong motivation for the study.
2.
The authors present their empirical findings with well-designed visualizations and tables, making complex data easily interpretable. Significance: 1. The empirical analysis provides crucial insights that bridge theoretical understanding of LLMs with practical software engineering challenges. 2. The findings directly inform the design of more effective AI-assisted software development tools, as demonstrated by the MAGIS framework. 3. The study lays a foundation for future research in applying AI to software engineering tasks, potentially influencing the direction of both AI and software engineering fields. Weaknesses: 1. Limited dataset diversity: The study relies solely on the SWE-bench dataset, which, while comprehensive, is limited to 12 Python repositories. This may not fully represent the diversity of real-world software projects. 2. Computational resources and efficiency: The paper doesn't discuss the computational requirements or efficiency of MAGIS compared to simpler approaches. 3. Potential overfitting to SWE-bench: The framework might be tailored too specifically to perform well on SWE-bench, potentially limiting its generalizability. 4. Lack of user study: The paper doesn't include feedback from actual software developers on the usefulness and quality of MAGIS's solutions. 5. Limited discussion on prompt engineering: While the paper mentions using prompts, it doesn't delve into the specifics of prompt design or its impact on performance. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Your study focuses on the SWE-bench dataset, which is limited to 12 Python repositories. Have you considered how MAGIS might perform on a more diverse set of programming languages and project types? What steps could be taken to validate the framework's generalizability beyond Python projects? 2. The paper doesn't discuss the computational requirements of MAGIS. 
Can you provide information on the runtime and resource requirements of MAGIS compared to simpler approaches? How does this impact its practical applicability in real-world software development environments?
3. How have you ensured that MAGIS is not overfitting to the specific characteristics of the SWE-bench dataset? Have you tested the framework on any GitHub issues outside of this dataset to validate its real-world applicability?
4. While you describe the roles of different agents, there is limited analysis of how their interactions contribute to overall performance. Have you conducted any ablation studies on different interaction patterns between agents? This could provide insights into which aspects of the multi-agent approach are most crucial for success.
5. While you mention using prompts, there is limited discussion of prompt design. Could you provide more details on your prompt engineering strategies and how they impact the framework's performance?

Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3

Limitations:
1. The authors acknowledge in Appendix K that the SWE-bench dataset, while representative, may not fully reflect the diversity of all code repositories, particularly in specialized fields or different programming paradigms. However, since SWE-bench is known as one of the most challenging benchmarks, this might be acceptable.
2. In Appendix K, the authors mention the potential impact of prompt design on LLM performance and the difficulty of eliminating prompt bias completely.
3. The authors note in Appendix K that applying their findings to other code repositories may require further validation due to the limited sample scope.

Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for reviewing our manuscript, and thanks for your positive comments (i.e., the in-depth empirical analysis, rigorous methodology, and well-designed visualizations). We are sorry for the confusion and unclear expression in the previous version. We have addressed each of the comments and suggestions below.

> Q1: Your study focuses on the SWE-bench dataset, which is limited to 12 Python repositories. Have you considered how MAGIS might perform on a more diverse set of programming languages and project types? What steps could be taken to validate the framework's generalizability beyond Python projects?

Thanks for your comments. We acknowledge that SWE-bench has limited diversity, which we discuss in Appendix K. While we recognize the need to evaluate MAGIS on a broader range of programming languages and project types, we found that there are currently no available datasets in other programming languages. To evaluate our framework's generalizability beyond Python projects, we plan to construct a dataset with various programming languages and projects, then validate MAGIS by collecting pull requests, issues, and test cases from popular repositories, setting up the testing environments, and executing MAGIS to assess issue resolution. This process aims to demonstrate MAGIS's effectiveness across diverse programming languages and project types.

---

> Q2: The paper doesn't discuss the computational requirements of MAGIS. Can you provide information on the runtime and resource requirements of MAGIS compared to simpler approaches? How does this impact its practical applicability in real-world software development environments?

Thanks for your comments. We will include the computational requirements of MAGIS in the revision. Specifically, our framework resolves each issue in approximately 3 minutes, with an average processing time of under 5 minutes per instance (as noted in Appendix E).
Our method utilizes GPT-4, so any machine capable of accessing the OpenAI API can run the experiments. While LLM-based multi-agent systems [1-3*], including ours, require more computational resources and time compared to simpler approaches, this investment is justified. Our method significantly increases the issue resolution rate from 1.74% to 13.94% (as shown in Table 2), allowing us to tackle problems that were previously unresolvable. In summary, this paper emphasizes how to leverage LLMs effectively to enhance issue resolution and improve overall performance.

---

> Q3: How have you ensured that MAGIS is not overfitting to the specific characteristics of the SWE-bench dataset? Have you tested the framework on any GitHub issues outside of this dataset to validate its real-world applicability?

Thanks for your comments. First, our method operates as an LLM-based multi-agent system, which means it does not require training on a specific dataset. This characteristic helps prevent overfitting to SWE-bench. While we have not yet validated our method on other GitHub issues, we acknowledge the importance of this step and plan to address it in future work. We understand that broader testing will enhance the generalizability of our results. Finally, the SWE-bench dataset comprises various repositories and real-world GitHub issues, making it a reasonable basis for evaluating our method.

---

> Q4: While you describe the roles of different agents, there's limited analysis of how their interactions contribute to overall performance. Have you conducted any ablation studies on different interaction patterns between agents? This could provide insights into which aspects of the multi-agent approach are most crucial for success.

Thanks for your comments. We recognize the importance of analyzing the contributions of different agents in our system. In Section 4.2, we conducted an ablation study for the QA agent, and the corresponding results are shown in Table 2.
While we have not yet performed ablation studies specifically on different interaction patterns between agents, we analyze the contributions of each phase—planning and coding—in Sections 4.3 and 4.4. These sections provide insights into how these phases impact overall performance. We appreciate your feedback and will explore different interaction patterns in future work. --- > Q5: While you mention using prompts, there's limited discussion on prompt design. Could you provide more details on your prompt engineering strategies and how they impact the framework's performance? We appreciate your interest in our prompt design. In the revision, we will include a detailed discussion of our prompt engineering strategies and provide specific examples of the prompts used. Our prompt engineering strategies adhere to established best practices [4*]. Specifically, we construct our prompt templates in markdown format, ensuring that each prompt covers all relevant information required for the communication. --- *References* \ [1*] Wang, Xingyao, et al. OpenDevin: An Open Platform for AI Software Developers as Generalist Agents. *arXiv preprint* (2024).\ [2*] Yang, John, et al. Swe-agent: Agent-computer interfaces enable automated software engineering. *arXiv preprint* (2024).\ [3*] Zhang, Yuntong, et al. Autocoderover: Autonomous program improvement. *arXiv preprint* (2024).\ [4*] Jessica Shieh. Best practices for prompt engineering with openai api. *OpenAI, https://help.openai.com/en/articles/6654000* (2023).
Summary: This paper proposes MAGIS, a multi-agent coding framework for solving patch generation tasks. The roles consist of Manager, Repository Custodian, Developer and QA engineer, with the task of planning, file location, file editing, and review, respectively. Experiments on the SWE-bench benchmark show that performance is improved over baselines. Strengths: - The proposed multiagent framework is simple and effective - The proposed model is effective with and without using the hints provided in SWE-bench - Ablations demonstrate the effectiveness of the QA engineer role Weaknesses: My concerns are along the evaluation and comparisons: - The main experiments are limited, as only GPT-4 is measured - Baselines such as SWE-agent are missing Technical Quality: 3 Clarity: 3 Questions for Authors: - SWE-bench Lite results (appendix D) should be in the body of the paper - Is ablation of the kickoff meeting possible? This is an interesting mechanism but it's hard to understand whether it is effective or not - Can you explain/show more about the prompts and exemplars (if exemplars are used) for each role? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our manuscript and for your positive comments (i.e., the simplicity and effectiveness of our method, and the effectiveness in various settings). We are sorry for the confusion and unclear expression in the previous version. We have addressed each of the comments and suggestions. Please refer to our responses below for details. > W1: The main experiments are limited, as only GPT-4 is measured Thanks for your comments. To further validate the effectiveness of our method, we conducted more experiments on other base LLMs. The new experiments use DeepSeek (DeepSeek-V2-0628) [1*] and Llama 3.1 (405B) [2*] in addition to GPT-4 as the base model. The corresponding results are shown below: | Base LLM | Directly Use | MAGIS | MAGIS (w/o hints, w/o QA) | |------------|--------------|--------|---------------------------| | DeepSeek | 0.33% | 12.67% | 11.00% | | Llama 3.1 | 1.33% | 16.67% | 11.00% | Please note that all prompts are identical to those we experimented with on GPT-4. In the "Directly Use" setting, the prompts are sourced from the SWE-bench, while prompts for other settings are designed by us. The above table shows that our method achieves a 38-fold performance improvement (with DeepSeek) and a 12-fold improvement (with Llama 3.1) compared to directly using these base LLMs. This improvement validates that our method is general and can unlock the potential of other LLMs in solving GitHub issues. --- > W2: Baselines such as SWE-agent are missing Thanks for your comments. The comparison with SWE-agent [3*] is discussed in Appendix D, as SWE-agent is a contemporaneous work (the paper became publicly available on arXiv in May 2024). As shown in Table 4, our method achieved a resolved ratio of 25.33% on SWE-bench Lite, which is higher than the 18.00% reported by SWE-agent. 
--- > Q1: SWE-bench Lite results (appendix D) should be in the body of the paper Thanks for your advice. We will move the Lite results to the body in the revision. --- > Q2: Is ablation of the kickoff meeting possible? This is an interesting mechanism but it's hard to understand whether it is effective or not Thanks for your comments. The kickoff meeting serves as a transitional link. The minutes of this meeting are converted into a code representing the order of work (sequential or parallel) for each Developer agent moving forward. Therefore, the ablation of the kickoff meeting is not possible in our method. To make the mechanism clearer, one detailed example is provided in Figure 7 (Appendix B) on Page 17 and Figure 14 (Appendix H) on Page 23. --- > Q3: Can you explain/show more about the prompts and exemplars (if exemplars are used) for each role? Thanks for your advice. We will add the specific prompt content in the revision (one of the prompt templates for the Manager agent as shown below due to the context limit). ```python # Prompt P (Line 5 in Algorithm 2) system_prompt = ("You are a software development manager. " "Your responsibility is to provide clear guidance and instructions to a developer regarding modifications or improvements needed in a specific code file. " "This guidance should be based on the details provided in the issue description and the existing content of the code file.") user_prompt = ("Review the issue description and the content of the code file, then provide specific instructions for the developer on the actions they need to take to address the issue with these files.\n" f"# Issue Description:\n{issue_description}\n# Code File:\n{file_content}\n" "Respond concisely and clearly, focusing on key actions to resolve the issue. Limit your answer to no more than 100 tokens.") ``` --- *References* \ [1*] DeepSeek-AI. DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model. 
*arXiv preprint* (2024).\ [2*] Dubey, Abhimanyu, et al. The Llama 3 Herd of Models. *arXiv preprint* (2024).\ [3*] Yang, John, et al. Swe-agent: Agent-computer interfaces enable automated software engineering. *arXiv preprint* (2024). --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you to the authors for the detailed response. The response addressed my concerns.
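For concreteness, the Manager prompt template quoted in the rebuttal above can be wired into an OpenAI-style chat request roughly as follows (a sketch; the issue text, file content, and model name are illustrative placeholders, not values from the paper):

```python
# Sketch: assembling the Manager prompt template into an OpenAI-style
# chat request. `issue_description` and `file_content` are hypothetical
# placeholders for the real repository data; the API call itself is
# commented out for illustration.
issue_description = "TypeError raised when parsing empty config files."
file_content = "def parse(path):\n    ..."

system_prompt = (
    "You are a software development manager. "
    "Your responsibility is to provide clear guidance and instructions "
    "to a developer regarding modifications or improvements needed in a "
    "specific code file."
)
user_prompt = (
    "Review the issue description and the content of the code file, then "
    "provide specific instructions for the developer on the actions they "
    "need to take to address the issue with these files.\n"
    f"# Issue Description:\n{issue_description}\n# Code File:\n{file_content}\n"
    "Limit your answer to no more than 100 tokens."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
# response = client.chat.completions.create(model="gpt-4", messages=messages)
```

The markdown-style `#` headers inside the user message follow the best-practice formatting the authors cite.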
Rebuttal 1: Rebuttal: Dear reviewers, **Thank you very much for your valuable time in providing these constructive comments and suggestions. We have addressed each of the comments and suggestions by adding more experiments or explanations:** --- - **Experiments with Different LLMs (15zT, 1wgB)**: We added experiments with MAGIS using other LLMs, such as DeepSeek [1*] and Llama 3.1 [2*], in addition to GPT-4 as the base model. The results validate that our method is general and can still unlock the potential of other LLMs in solving GitHub issues. - **Specific Prompt Content (15zT, L511, 1wgB)**: We added the specific prompt content to make the implementation clearer. - **Generalizability (gcvU)**: We added steps to validate the framework's generalizability beyond Python projects in the limitation section in Appendix K. - **Computing Resources (gcvU, 1wgB)**: We added more discussions about the computing resources, clarifying the necessity and worthiness of the additional cost. - **Evaluation and Ablation Analysis (gcvU)**: We included more explanation about the evaluation and the ablation analysis. - **Typo Corrections (L511)**: We corrected all identified typos and checked the paper. - **Coverage Ratio Calculation (L511)**: We added more explanation about the calculation of the coverage ratio and cited the relevant diff algorithm [3-4*] for clarity. - **Comparison with Contemporaneous Works (L511, 1wgB)**: We added the discussion about the difference between our method and other contemporaneous works [5-9*] in Appendix D and E. --- We hope these updates address the reviewers' concerns. We remain open to further discussion and revisions. --- *References* \ [1*] DeepSeek-AI. DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model. *arXiv preprint* (2024). \ [2*] Dubey, Abhimanyu, et al. The Llama 3 Herd of Models. *arXiv preprint* (2024).\ [3*] Myers, Eugene W. An O (ND) difference algorithm and its variations. 
*Algorithmica* 1.1 (1986).\ [4*] Nugroho, Yusuf Sulistyo, et al. How different are different diff algorithms in Git?. *EMSE* 25 (2020).\ [5*] Wang, Xingyao, et al. OpenDevin: An Open Platform for AI Software Developers as Generalist Agents. *arXiv preprint* (2024).\ [6*] Yang, John, et al. Swe-agent: Agent-computer interfaces enable automated software engineering. *arXiv preprint* (2024).\ [7*] Zhang, Yuntong, et al. Autocoderover: Autonomous program improvement. *arXiv preprint* (2024).\ [8*] Chen, Dong, et al. CodeR: Issue Resolving with Multi-Agent and Task Graphs. *arXiv preprint* (2024).\ [9*] Ma, Yingwei, et al. How to Understand Whole Software Repository?. *arXiv preprint* (2024).
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Universal Sample Coding
Accept (poster)
Summary: The authors consider the problem of efficiently encoding a sequence of $n$ iid realisations $X_i \sim P$ using as few bits as possible. This scenario is different from one-shot channel simulation because it involves multiple rounds of communication. Between these rounds, the encoder and decoder adjust their coding distribution, leading to substantial improvements in compression performance. Using techniques and results from universal source coding, the authors characterise the optimal lower bound for this task and propose a scheme that achieves the lower bound within a multiplicative constant. They conduct experiments on a toy federated learning problem, showing that their technique could make federated learning significantly more robust when only a fraction of the clients participate in each communication round while reducing communication costs. Strengths: Overall, I found the paper quite enjoyable to read. The motivation and problem set-up are clear. While at a high level, the authors' solution to the problem is "the obvious one," it has many subtleties, e.g. choosing the "right" probability estimator in Section 5.1. Thus, this solution, combined with the analysis, provides interesting new insights for the efficient communication of samples. I have checked all the proofs in detail, and they are correct. I also liked the federated learning experiments, as they present a scenario in which the authors' method brings concrete benefits. Weaknesses: There are three major points that, if clarified, would make the paper significantly stronger: 1. **What is the sample complexity/run time of the authors' proposed solution?** Since Algorithm 1 calls an arbitrary channel simulation algorithm a subroutine, it should be relatively easy to calculate the total runtime in terms of the subroutine's runtime and state it as an accompanying result to Thm 5.1. 
Such a result would benefit people wishing to build on the authors' work, as the general issue with channel simulation algorithms with target $P$ and proposal $Q$ is that their runtime scales with $\Vert dP/dQ \Vert_\infty \geq \exp(KL[P \Vert Q])$. 2. **Clarifying the relationship between universal source coding, universal sample coding and channel simulation.** The authors' framework has key differences compared to channel simulation as well as universal source coding, which should be illustrated better. First, they switch to a streaming/multi-round communication setting, where the encoder and decoder can update their coding distribution, which differs from one-shot channel simulation as, in the latter case, there is only a single round of communication. Second, they fix the target distribution of the elements of the sequence, which is equivalent to assuming a delta distribution on the source in channel simulation. Similarly, the analogous setting in universal source coding would be to have a delta distribution as the source: for an alphabet of $K$ symbols, we wish to encode $n$ copies of the same fixed symbol. This can be done by encoding which symbol is repeated using $\log K$ bits and then encoding the number of repeats, which can be done using $\log n + O(\log\log n)$ bits with Elias delta coding. Therefore, the authors' setting and solution are closer to a universal code (a prefix code on the integers with a certain universality property) than a universal source code (a code which doesn't need to be prefix, but the ratio of the actual and optimal expected codelengths must go to one as the number of symbols increases). I realise that this terminology is annoyingly similar. Still, pointing this out/fixing this would enhance the clarity of the paper, especially given that in the current version, the authors seem to use the terms universal code and universal source code interchangeably. 
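The repeat-count scheme sketched above can be made concrete; a minimal Elias delta encoder (a sketch of the standard construction, not code from the paper), whose codelength for a repeat count $n$ is $\log n + O(\log\log n)$ bits:

```python
def elias_delta(n: int) -> str:
    # Elias delta code for a positive integer n: encode the bit-length N
    # of n with Elias gamma, then append the N-1 bits of n after its
    # leading 1. Total length is log2(n) + 2*log2(log2(n)) + O(1) bits.
    assert n >= 1
    bits = bin(n)[2:]                 # N bits, leading bit is 1
    length_bits = bin(len(bits))[2:]  # binary of N
    gamma = "0" * (len(length_bits) - 1) + length_bits  # Elias gamma of N
    return gamma + bits[1:]
```

For example, `elias_delta(10)` produces the 8-bit codeword `"00100010"`, and the prefix property makes the code a universal code over the positive integers.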
Furthermore, in the same way that a universal code can be used to create a universal source code, I believe we could use the authors' proposed method to create a closer analogue of universal source coding: Assume we have two correlated variables $X, Y \sim P_{X, Y}$ with unknown marginal $P_Y$, where Alice receives $X \sim P_X$ and wants to communicate $Y \sim P_{Y \mid X}$. We could apply the authors' method of updating the counting distribution such that the approximating distribution converges to the marginal $P_Y$ so that the scheme's expected codelength will asymptotically be $n \cdot I[X; Y]$. 3. **The authors only state results assuming a uniform initial distribution.** There are two reasons why I put this as a major issue: 1) this assumption is not explicitly stated in the paper, but the authors make use of it both in the explanations and in the proofs. 2) At the end of section 7, the authors suggest using a different initial distribution, whose performance is thus not covered by the theoretical statements, even though this setting is of the greatest practical relevance. I would conjecture that the initial choice should only affect the results up to a constant additive factor. I have a few more minor issues as well: 1. It would be good to report the scheme's overhead compared to the theoretical lower bound in the FL scenario, akin to Figure 5. 2. I would need some clarification on the FL experimental setup and results: how did the authors set $\mu$ in the FL scenario? Why did the authors specifically pick seven samples per client? How does the performance change as a function of the number of samples sent per client and the fraction of participating clients? 3. "We consider ordered random coding Theis and Ahmed (2022) as the underlying sample communication method" - The authors should clarify (possibly by giving pseudocode in the appendix) what exact implementation of ORC they used. 
I imagine that the authors did the "right thing" and used the discrete version by drawing samples without replacement, and they either ran ORC until the sample space was exhausted or used a more sophisticated early termination criterion. In any case, it would be crucial to state this, as ORC is usually regarded as an approximate sampling scheme (and is usually used for continuous distributions). Finally, I found some typos/issues with the notation and writing: - line 99: "non-asymptomatic" - "the sample communication bound does not rely on using the exact marginal distribution P, and still holds for other reference distributions Q" - I'm not sure what this sentence means. - Nitpick: the authors state in section 2 that all logs are in base 2, but then they use ln (eg in eq 11) - Theis and Ahmed (2022)—The reference is incorrect; the second author is Noureldin Yosri, so it should be Theis and Yosri (2022). - "In a single learning round, the server broadcasts the global mask θ to each client, which trains an updated version θ′ using their local data and then communicates a sample Bern(θ′) from its updated mask distribution to the central server, where the new global mask probability θ is estimated from all the samples" - It is quite difficult to parse this sentence; please split it in two. - I believe that $\hat{P}$ (in Section 6 and Figures 3 and 4) and $\hat{Q}$ (everywhere else) mean the same thing. Please update them to be consistent if so. Technical Quality: 3 Clarity: 3 Questions for Authors: - Could we perhaps extend the result to continuous spaces by using KDE, or is there something fundamentally different about the continuous case? - "However, the real cost of sending a sample, as shown in (4), includes a constant, which would dominate the total cost as n increases, making it linear instead of logarithmic." - Does it? 
In the authors' case, since the approximating distribution converges to the true target, the ORC index distribution should converge to (approximately) the delta on 1. Hence, I would imagine that the overhead the authors mention would vanish if we used a universal source code to compress the ORC indices. - What is the long-term behaviour of the approximating distribution? My initial estimate would be that we have $O(k^{-1}n^{-1/2}) $ convergence to the target by the CLT; could we say something more precise? Is there a good way to speed up the convergence, maybe? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors should address the three main points I laid out in the Weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
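The ORC questions above can be grounded with a sketch of the Poisson functional representation selection rule that ORC builds on, truncated to a fixed candidate budget (the truncation is exactly what makes the scheme approximate; function and parameter names are illustrative, and this omits ORC's index reordering):

```python
import numpy as np

def pfr_select(p, q, budget, rng):
    # Poisson functional representation selection (the rule ORC refines),
    # truncated to a fixed candidate budget. Encoder and decoder draw the
    # same candidates from q using shared randomness; only the winning
    # index i needs to be communicated.
    p, q = np.asarray(p, float), np.asarray(q, float)
    xs = rng.choice(len(q), size=budget, p=q)    # shared candidates ~ q
    t = np.cumsum(rng.exponential(size=budget))  # Poisson arrival times
    i = int(np.argmin(t * q[xs] / p[xs]))        # argmin of t_i / (dP/dQ)(x_i)
    return i, int(xs[i])
```

With an unbounded budget the selected symbol is an exact sample from $p$; with a finite budget the output is biased, which is what the early-termination question above refers to.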
Rebuttal 1: Rebuttal: **Computational Complexity:** As pointed out by the reviewer, the complexity of all currently known methods for exact channel simulation is proportional to $|\frac{dP}{dQ}|_\infty$. Thus, the proposed method has computational complexity $$\sum_{i=2}^{\lfloor\log_{1+c} (n)\rceil+1} \left(\left\Vert\frac{dP}{dQ(X^{G(i-1)})}\right\Vert_\infty\right)^{g(i)}.$$ To describe the complexity in a more informative way, the behavior of the infinity norm of the ratio between $P$ and its estimate is required. We do not have such results, besides some preliminary calculations suggesting that it might be bounded by $(1+\epsilon)+O(e^{-n}n^{1/\epsilon})$ in expectation if $n$ symbols are communicated. In practice, we used the ordered random coding algorithm with a limited number of samples, at or above $2^{D_{KL}}$. This could have negative implications for the estimation accuracy, as samples do not exactly follow the true distribution $P$. However, from our experiments, we did not observe such negative outcomes. **Naming and generalization:** We agree with the reviewer's perspective about the generalization of channel simulation to multiple samples, where each sample is drawn from a different distribution $P$. Then, the same counting-based estimation could be applied to achieve an optimal asymptotic rate by learning the marginal $Q$. Our formulation generalizes classical channel simulation to generating multiple samples from the same distribution, while the proposed framework further generalizes this to varying the target distribution at each sample. This setting can be called 'Universal Channel Simulation', which we believe is a great avenue for further research. On the other hand, universal source coding aims to answer the question "How to send $n$ samples from an unknown distribution $P$?", while universal sample coding can be understood as "How to remotely generate $n$ samples from an unknown distribution $P$?". 
Moreover, the redundancy of universal source coding and the rate of universal sample coding coincide, in a natural way. We agree with the reviewer that the relationship between universal source coding, universal coding, channel simulation, and universal sample communication can lead to confusion. Thus, we have replaced all references to 'universal coding' with 'universal source coding'. We have added a comment explaining the generalization of channel simulation to the n-letter case proposed by the reviewer. **Prior distribution:** The uniform prior distribution is a consequence of the way the estimator of Lemma 5.2 is defined - assigning a weight to each symbol based on the number of its occurrences so far. Thus, without any observations, it assigns equal weights to all symbols, and consequently a uniform prior. For the federated learning experiment, we have switched this estimator to a Bayesian one since we had access to side information - the Bernoulli parameter value in the previous step. If the distribution of $P$ follows that implied by the prior, this is an optimal estimator, and therefore performs better for any finite $n$ than the agnostic one from Lemma 5.2. The influence of such a prior diminishes with the number of samples at rate $n^{-1}$; additionally, we know that the concentration of the estimator is of order $n^{-1/2}$. Thus, asymptotically, the prior does not change the behavior of the estimator (barring the extreme case of an infinite-weight prior). We conjecture that the influence of such a prior decreases exponentially as more samples are communicated. To clarify, we will provide the exact form of the estimator in Theorem 5.2 in the Appendix, and state in the main text that the default prior for the first sample is uniform. Additionally, we changed the wording in the federated learning section to highlight that a Bayesian estimator is used. 
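The count-plus-bias estimator described above can be sketched as follows (a sketch only: the add-1/2 Krichevsky-Trofimov bias is one standard choice attaining the minimax KL redundancy, and the paper's exact bias constants may differ):

```python
import numpy as np

def count_estimate(counts, bias=0.5):
    # Weight each symbol by its occurrence count plus a small bias, then
    # normalize. With no observations this reduces to the uniform prior;
    # bias=0.5 corresponds to the Krichevsky-Trofimov estimator.
    w = np.asarray(counts, dtype=float) + bias
    return w / w.sum()
```

A Bayesian variant of the same rule simply replaces the constant bias with prior-informed pseudo-counts, as done in the federated learning experiment.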
**Federated learning:** For the federated learning experiment, we have run an experiment with $10, 4, 2$, and $1$ active clients, and tested the effects of sending different numbers of samples (up to around 16). As expected, the performance was robust to the choice of $\mu$. For the experiments we used $\mu=4$, but $\mu\in[3,10]$ resulted in a similar communication cost, while larger values were converging to the cost of independently sending all the samples. Due to time limitations, we were unable to provide the gap to optimality for the federated learning scenario in this rebuttal. **Questions:** 1. Yes, extension to the continuous case using kernel density estimation is possible. We are unaware of the convergence rate of such methods, but it would be an interesting future direction. 2. By describing the cost of sending each sample separately as linear, we were referring to the upper bound on the communication rate implied by the Poisson functional representation. However, we admit that if the samples are sent jointly, this cost could be sublinear. 3. Our understanding is that the estimator does indeed converge at the rate dictated by the central limit theorem. We are unaware of any more precise statements or ways to speed up convergence. We agree with all the other issues pointed out by the reviewer and will revise the manuscript accordingly. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. They addressed my concerns, and I increased my score accordingly.
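The federated learning round discussed in the rebuttal can be sketched as a toy loop (shapes and parameter values are assumed for illustration; each client sends Bernoulli samples of its locally updated mask probabilities, and the server forms the new global estimate from the received samples):

```python
import numpy as np

def fl_round(client_thetas, samples_per_client, rng):
    # Toy sketch of the masked-FL round: each participating client draws
    # Bernoulli samples from its locally updated mask distribution; the
    # server estimates the new global mask probability as the empirical
    # mean of all received samples.
    received = [
        rng.random((samples_per_client, theta.size)) < theta
        for theta in client_thetas
    ]
    return np.concatenate(received).mean(axis=0)

rng = np.random.default_rng(0)
clients = [np.full(100, 0.8), np.full(100, 0.6)]  # two clients' updated masks
new_theta = fl_round(clients, samples_per_client=7, rng=rng)
```

In the paper's scheme the Bernoulli samples are not sent raw but via universal sample coding with a Bayesian prior centered on the previous round's $\theta$, which is where the communication savings come from.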
Summary: This paper introduces Universal Sample Coding. This is a simple but significant extension to channel simulation where the sender and receiver communicate $N$ ($N>>1$) samples. The authors prove that the expected codelength per sample will be negligible when $N\rightarrow \infty$, and verify with toy examples. The authors also demonstrate the application of this coding scheme in federated learning and the communication of generated samples from generative models. Strengths: 1. The author formulated the universal sample coding problem and proposed a practical method to handle it. The idea of extending channel simulation to multiple samples is simple yet can have significant influence, and the methods described in this paper are practical and directly applicable to (perhaps small-scale) discrete distributions. 2. The authors also demonstrate a suitable application of the proposed method: in federated learning and the communication of generated samples from generative models. I suspect that in practice, we will want to communicate so many samples to ensure the per-sample cost vanishes in the latter scenario, but this idea is neat and supportive of the proposed method itself. Weaknesses: My major concern with this method is the runtime. To ensure the per-sample cost is asymptotically 0, the authors propose a solution in which exponentially more samples are sent in each communication round. This means that when the problem scale is large, the $D_{KL}[P||Q]$ in some rounds can be large. For order random coding the author used in this paper, to achieve a low biased sample, the total sample size will be $O(2^{D_{KL}[P||Q]})$. $D_{KL}[P||Q]$ will, in the end, go to 0, but it will happen only after enough samples are sent. Therefore, I kindly ask the author to provide the sample size and the KL divergence for each problem and each round, which can provide more hints on how well this method can scale. 
This is overall a good paper, and I will be happy to further raise my score if the questions and weaknesses are addressed. Technical Quality: 3 Clarity: 3 Questions for Authors: Besides Weaknesses, I have one further question: Lemma 2 says an estimator exists for any distribution but does not provide the form for this estimator. Do you use this estimator for all of the experiments? In Fig 5, I can observe that the empirical results in this toy example obey the bounds very well. But do you think this will always be the case for the FL and LLM settings? Is there any potential caveat? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Complexity:** To answer the reviewer's question about the occurrence of outliers in KL divergence between $P$ and its estimate, we have plotted it for every round of sending $n=2^{14}$ samples from a $k\in\{3,8\}$-dimensional distribution $P$. In every run $P$ is sampled from a Dirichlet distribution parameterized by $\alpha=(1,\dots,1)$. The plot is appended in the global rebuttal. The blue dots show the KL-divergence in every round for every run, and the red line shows the mean KL-divergence. Yellow lines show the proportion of total runs at or below a specified value of KL divergence (every 10th percentile, as well as the 95th and 98th percentiles), while the black line shows the upper bound on expected KL (ignoring some minor terms). We indicate the number of samples sent in each round below the x-axis. We can see that most of the samples concentrate around the mean. KL values up to 20 are reasonable, which happens more than $90$\% of the time, and up to 30 is still computationally feasible (above $98$\%). For the few outliers we can either accept the high-bias samples or incorporate a simple feedback mechanism - if the KL does not decrease enough, keep the number of samples the same as in the previous round. This marginally increases the rate by adding one bit of information for every round, totaling $\log(n)$ bits. **Estimator:** The estimator in Theorem 5.2 assigns a weight to each symbol based on the number of occurrences observed in the samples communicated so far plus some small bias, and the estimated probabilities are obtained by normalizing these weights. The exact form of this estimator will be included in the Appendix. The importance of this estimator is that it achieves a min-max lower bound in KL divergence between any distribution $P$ and its estimate. 
We use this optimal 'P-agnostic' estimator for the numerical experiments, except in the federated learning example, where we use a Bayesian estimator, which works in the same way except for the bias constants. Knowing that the Bernoulli parameters -- by which the neural network is characterized -- should not change significantly between training rounds, we bias the initial estimate / prior ($Q$) to be close to its value before the training epoch. We cannot claim the optimality of this estimator for an arbitrary $P$, but since we know that in reality $P$ will be correlated to its previous value, we see a reduction in the number of communicated bits. In the LLM example, the state space is too large to be estimated directly with count-based methods. In that scenario, we would like to investigate fine-tuning a small language model or potentially conditioning one on all text samples communicated so far. This is left for further study. --- Rebuttal Comment 1.1: Comment: Thank you for your reply and your additional results on the KL. The plot is clear and I recommend including this plot in your paper later. I agree that KL values up to 20 are reasonable. However, I do not necessarily agree that KL up to 30 (ideally, at least 2^30 samples if your KL is in bits) is still computationally feasible, especially considering the setting of FL where each client should not have very strong computational resources. Could you explain how many samples you use for this KL value? > For the few outliers, we can either accept the high-bias samples or incorporate a simple feedback mechanism What did you do in your experiments? If you just accept the high bias, how does this bias influence, e.g., convergence? --- Reply to Comment 1.1.1: Comment: For the experiments, we draw $2^{\min(D_{KL}, 25)}$ samples per round. The high KL is caused by the large number of samples sent, which is not the case in FL. 
We agree that this might be an issue for the generative case; however, in those settings greater computational resources are usually available. We think that using biased samples can delay or prevent convergence. Empirically, in the data provided in the global response (excluding the last round with fewer samples), the probability that KL>25 in round $i+1$, given KL>25 in round $i$, is 37%. Thus, in most cases, the estimate gets back on the right track after diverging.
Summary: This paper studies the problem of communicating multiple samples from an unknown distribution using as few bits as possible. The authors provide upper and lower bounds on its communication cost that are within a multiplicative factor from each other. The upper bound is derived from analysing the communication cost of an universal sample coding algorithm, which the authors design based on the reverse channel coding results in information theory. The lower bound is derived based on analysing the connection between universal source coding and universal sample coding. Experiments show that the universal sample coding algorithm can reduce the communication cost in a Federated Learning scenario (up to $37\%$) or for generating samples from a Large Language Model (LLM) on a server (up to $16$ times). Strengths: + The problem setting looks interesting. + Experiment shows that communicating multiple samples can improve the test accuracy (Table 2). Weaknesses: + The gap between lower and upper bounds (the ratio between $\inf_c V_k(c)$ and $(k-1)/2$) is quite large for not very large $k$ (See Fig. 1 and Fig. 2). + Algorithm 1 requires a shared source of randomness between the encoder and decoder. Technical Quality: 2 Clarity: 3 Questions for Authors: + Please address the weakness comments above. Please explain how to generate the share source of randomness in practice. + What is the distribution of the random string $Z \in \mathcal{Z}=\\{0,1\\}^{\infty}$ in Theorem 5.1? + What are the definitions of $I(P,X^n)$ and $I(P,\hat{P})$ in (16)? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: This is a theoretical research paper, hence the negative society impact of this work is not direct. The authors mention some technical limitations of this work in Section 8. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The gap between the upper and lower bounds quickly diminishes as the dimension increases. In our approach, we focus on minimizing the upper bound by choosing an appropriate constant $c$. However, a different $c$ might work better empirically, although lacking optimality in the derived upper bounds. Federated learning experiments validate the competitiveness of the proposed scheme in practice. From a mathematical perspective, common randomness is an infinite list of fair coin flips, that is, $Z_i\sim$ Bernoulli(0.5). Equivalently, it can be understood as the binary expansion of a single uniform random variable on the interval $[0,1)$. This shared randomness can then be used to draw a common list of random variables characterized by $Q$ using, for instance, inverse transform sampling. In practice, it is sufficient that the encoder and decoder have a common seed combined with pseudo-random number generation, which is then used to draw common samples in tandem. The availability of unlimited common randomness is a standard assumption in the channel simulation literature. The reviewer was understandably confused by our abuse of notation in the mutual information expressions. Here, we write it more explicitly to dispel any confusion. Let $\Omega$ denote the set of all $k$-dimensional discrete distributions (i.e., distributions over a discrete alphabet of size $k$). Let $\Pi$ denote a random variable taking values in the set $\Omega$. Here, $P$ denotes a particular realization of $\Pi$. Theorem 5.3 states that the claim holds for some distribution $P$; thus, it holds for the supremum over all distributions in $\Omega$. Then, $$\sup_{P \in \Omega} L(n) \geq \mathbf{E}_{\Pi}[L(n)],$$ since the maximum is always greater than or equal to the average. Note that $\Pi$ and $X^n$ are correlated random variables, and the right-hand side of Equation (15) corresponds to the expected number of bits required to communicate a sample $X^n$ conditioned on $\Pi = P$. 
This is exactly the reverse channel coding problem, where the lower bound on the communication cost was shown in Harsha et al. (2010) to be the mutual information between the two random variables $$\mathbf{E}_{\Pi} L(n) \geq I(\Pi; X^n).$$ This quantity was bounded in Davisson et al. (1981) as $$I(\Pi; X^n) \geq \frac{k-1}{2}\log(n)+O(1).$$ --- Rebuttal Comment 1.1: Title: Reply to the authors' rebuttal Comment: Thank you very much for your answer to my questions. However, based on your answers, I keep my score unchanged.
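The shared-seed mechanism described in the rebuttal above (a common seed plus pseudo-random number generation, combined with inverse transform sampling on the shared uniform draws) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; the function name and the example distribution are made up.

```python
import random


def common_samples(seed, q_probs, n):
    """Draw n samples from a reference distribution Q using shared randomness.

    Encoder and decoder each call this with the same seed, so they obtain an
    identical candidate list without communicating the samples themselves.
    Each sample is produced by inverse transform sampling on one shared
    uniform draw in [0, 1), which stands in for the random string Z.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u = rng.random()  # one shared uniform variate
        cum = 0.0
        for sym, p in enumerate(q_probs):
            cum += p
            if u < cum:
                samples.append(sym)
                break
        else:  # guard against floating-point accumulation at the boundary
            samples.append(len(q_probs) - 1)
    return samples


# Encoder and decoder agree on the seed ahead of time.
q = [0.5, 0.3, 0.2]
encoder_list = common_samples(seed=1234, q_probs=q, n=10)
decoder_list = common_samples(seed=1234, q_probs=q, n=10)
assert encoder_list == decoder_list  # identical lists, zero bits exchanged
```

In practice a cryptographic or library PRNG plays the role of `random.Random`; the point is only that a shared seed yields an unlimited common list of $Q$-distributed candidates.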
Summary: The paper proposes a new problem called "Universal Sample Coding". This is related to a problem called reverse channel coding (or channel simulation), where the receiver must generate samples from a target distribution (P) that is known to the sender but not the receiver. In addition, the receiver and sender have shared randomness which is used to generate samples from a reference distribution Q. The goal is to characterize the communication complexity, i.e., the number of information bits that must be transmitted from the sender to the receiver to generate the target sample(s). As far as I understand, the authors propose a new problem setting where (at least) the decoder also does not know the reference distribution Q, but estimates it. The paper provides upper and lower bounds on the communication complexity in this variant that have the same scaling in terms of the number of target samples. Strengths: Please see the summary above. Although the setup considered in the paper is potentially new, I did not fully understand it. Weaknesses: The setup and results need clarification. 1. Can you clarify your problem setup? Is it that both the encoder and decoder do not know the samples? Who generates the samples $X_1, \ldots, X_n$ from the reference distribution, and how are they known to the encoder/decoder? In the reverse channel coding/channel simulation settings that I am familiar with, both the sender and receiver know the reference distribution $Q$ and use common randomness to generate identical samples. I wasn't sure how the samples are generated. 2. The statement of Thm 5.1 is not clear to me. Usually the rate bound should be on $H(f(P,Z)|Z)$ as $Z$ is known to both the encoder and decoder. Why do you consider $H(f(P,Z))$ as your rate? 3. Theorem 5.1 is for "some c", but in the numerical bounds you are optimizing over $c$. Should the statement be for every $c$? 
Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weakness section above. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Our problem setting is indeed very similar to conventional channel simulation as described by the reviewer, with the following difference: instead of a single sample, the goal of the decoder is to generate multiple samples from the target distribution $P$, which is known only to the encoder. To be precise, our goal is to identify the average number of bits that need to be communicated to the decoder so that it can generate $n$ samples from $P$. Our solution relies on the idea that as the decoder generates samples from $P$, it can also estimate it, and its estimate can be used as the reference distribution $Q$ for the remaining samples. As the decoder generates more samples, it will have a better and better estimate of $P$, and hence, it will cost fewer and fewer bits to generate new samples from $P$. To the best of our knowledge, this is a new problem formulation that was not studied before and is also relevant for many practical scenarios, as we argue in the paper. We thank the reviewer for pointing out the two typos. Theorem 5.1 should read as $H(f(P,Z)|Z)$, and does indeed hold for `any $c$'. We have corrected both in the manuscript. --- Rebuttal Comment 1.1: Title: Follow up question on problem formulation Comment: Thank you for clarifying the problem formulation. I was wondering why is the problem setup not already a special case of the one-shot setup in channel simulation? Here is one specific example of one-shot setting: The encoder observes $X \sim p_X(\cdot)$ and the decoder wants to sample $Y \sim p_{Y|X}(\cdot)$. We know from prior works that the communication rate required is approximately $I(X;Y)$. In your setting you want to sample $Y_1,\ldots, Y_n$ i.i.d from $p_{Y|X}(\cdot)$ then using their scheme and defining $Z=(Y_1, \ldots, Y_n)$ the rate of $I(X;Y_1, \ldots, Y_n)$ is naturally achievable. Even if you don't assume $X$ to be random, a natural connection to one-shot schemes can be made in a similar fashion. 
Can you provide a comparison between your proposed approach and a natural extension of one-shot schemes? --- Reply to Comment 1.1.1: Comment: The reviewer is right. In fact, this is what we state on page 4 of the paper to derive equation (10). One can directly use any standard channel simulation technique to generate $n$ samples. (In the conditional version of the problem suggested by the reviewer, this would be equivalent to the mutual information expression.) Our work uses this fact for a min-max analysis, i.e., guarantees for any distribution $P$. --- Rebuttal 2: Comment: Thank you for helping me better understand your contributions. I will keep my score as it is.
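The idea at the heart of the rebuttals above, that the decoder's running estimate of $P$ serves as an ever-better reference $Q$ so that later samples cost fewer bits, can be simulated directly. Below is a hedged toy sketch, not the paper's coder: it tracks a Laplace-smoothed empirical estimate of $P$ and the divergence $D(P\|Q)$, which governs the excess per-sample cost in channel simulation. The distribution and the sampling checkpoints are arbitrary illustrative choices.

```python
import math
import random


def kl_bits(p, q):
    """KL divergence D(P || Q) in bits; q is assumed to have full support."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def laplace_estimate(counts):
    """Add-one smoothed empirical distribution (keeps full support)."""
    total = sum(counts) + len(counts)
    return [(c + 1) / total for c in counts]


rng = random.Random(0)
p = [0.6, 0.3, 0.1]   # true distribution, known only to the encoder
counts = [0, 0, 0]    # decoder's running tally of decoded samples
divergences = {}
for i in range(1, 2001):
    x = rng.choices(range(3), weights=p)[0]  # stand-in for one decoded sample
    counts[x] += 1
    if i in (10, 100, 2000):
        divergences[i] = kl_bits(p, laplace_estimate(counts))

# As the tally grows, the reference Q tracks P more closely, so the excess
# per-sample cost (roughly D(P||Q) bits on top of H(P)) keeps shrinking.
```

The same logic underlies the claimed savings: after enough samples, the per-sample cost approaches the entropy of $P$ rather than the cost of coding against a fixed mismatched reference.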
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their feedback, most of our rebuttals are specific to the reviews, and thus we respond to each individually. To answer the question by reviewer NNh9, we plot the empirical KL-divergence. Pdf: /pdf/73082c4c26d4bcb695923c3ceb62b053af11e806.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
GRANOLA: Adaptive Normalization for Graph Neural Networks
Accept (poster)
Summary: The paper introduces a novel graph-adaptive normalization layer named GRANOLA for Graph Neural Networks (GNNs). The authors argue that existing normalization techniques, such as BatchNorm and InstanceNorm, are not well-suited for GNNs due to their design not considering the unique characteristics of graph-structured data. To address this, GRANOLA is proposed to normalize node features by adapting to the specific characteristics of the graph, using Random Node Features (RNF) to generate expressive node representations. The paper provides theoretical results supporting the design choices and demonstrates through empirical evaluation that GRANOLA outperforms existing normalization techniques across various graph benchmarks. Strengths: * The paper presents a novel normalization technique specifically tailored for GNNs, addressing a recognized gap in the field where traditional normalization layers do not capture the unique properties of graph data effectively. * The authors provide a solid theoretical basis for their method, including proofs that GRANOLA can achieve full adaptivity to the input graph, which is a significant contribution to the understanding of normalization in GNNs. * The paper offers extensive empirical results across multiple datasets and tasks, showing consistent performance improvements of GRANOLA over existing methods, which strengthens the credibility of the proposed technique. Weaknesses: * The work focuses on the normalization of GNNs for graph-level tasks. However, there is a type of model that dominates this field: Graph Transformers. I'm curious if this normalization would help with the graph transformers in graph-level tasks. * Implementation details are not so clear. Is the RNF sampled once and trained like other parameters or just randomly sampled at each mini-batch (or each epoch)? 
Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, and we are glad to see they have recognized the significance of our contribution. They have nonetheless asked some questions, which we address below. **Q1:** The work focuses on the normalization of GNNs for graph-level tasks. However, there is a type of model that dominates this field: Graph Transformers. I'm curious if this normalization would help with the graph transformers in graph-level tasks. **A1:** Following the reviewer’s suggestion, we have conducted an additional experiment and coupled GRANOLA with the GPS graph transformer (Rampášek et al., 2022), as reported in the following table. As can be seen from the table, GRANOLA improves the performance of the GPS transformer, further highlighting its versatility and ability to enhance the performance of various and diverse graph models. | Method | ZINC-12k $\downarrow$ | OGBG-MOLHIV $\uparrow$ | |--------------------|-------------|-------------| | GPS | 0.070±0.004 | 78.80±1.01 | | GPS+GRANOLA (Ours) | 0.062±0.006 | 79.21±1.26 | Rampášek et al., 2022. Recipe for a General, Powerful, Scalable Graph Transformer. NeurIPS 2022 **Q2:** Implementation details are not so clear. Is the RNF sampled once and trained like other parameters or just randomly sampled at each mini-batch (or each epoch)? **A2:** We follow the standard practice of RNF methods (Abboud et al., 2021, Sato et al., 2021), and sample RNF for each mini-batch. We thank the reviewer for pointing it out and we will make it clearer in the next paper revision. Abboud et al., 2021. The Surprising Power of Graph Neural Networks with Random Node Initialization. IJCAI 2021 Sato et al., 2021. Random Features Strengthen Graph Neural Networks. SDM 2021 *** We are thankful to the reviewer for their constructive feedback. We made efforts to conduct the additional experiments they suggested. If they find our responses satisfactory, we kindly ask them to reconsider their rating. 
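The per-mini-batch RNF sampling described in A2 can be sketched in a couple of lines. This is an illustrative toy (the function name and dimensions are made up), not the authors' code: the random channels are drawn fresh for every mini-batch rather than learned as parameters.

```python
import random


def augment_with_rnf(node_features, rnf_dim, rng):
    """Append freshly sampled random node features (RNF) to each node.

    Mirrors the standard RNF practice the rebuttal refers to: the random
    channels are resampled per mini-batch, not trained like other parameters.
    """
    return [
        feats + [rng.gauss(0.0, 1.0) for _ in range(rnf_dim)]
        for feats in node_features
    ]


rng = random.Random(0)
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]          # 3 nodes, 2 input channels
batch1 = augment_with_rnf(x, rnf_dim=4, rng=rng)  # mini-batch 1
batch2 = augment_with_rnf(x, rnf_dim=4, rng=rng)  # mini-batch 2: fresh RNF
# The original channels are preserved; the appended random channels differ.
```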
--- Rebuttal Comment 1.1: Comment: Thanks for the authors' responses. I'll raise my score to 6
Summary: This paper introduces GRANOLA (Graph Adaptive Normalization Layer), a novel normalization technique designed specifically for Graph Neural Networks (GNNs). The authors identify a critical gap in existing normalization methods for GNNs, which often fail to capture the unique structural characteristics of graph data or consistently improve performance across various tasks. GRANOLA aims to address these limitations by dynamically adjusting node features based on both the graph structure and Random Node Features (RNF). The key innovation lies in its use of an additional GNN (termed GNN_NORM) to generate normalization parameters that are adaptive to the input graph. The paper provides a comprehensive theoretical analysis of GRANOLA, demonstrating its ability to default to RNF-augmented Message Passing Neural Networks (MPNNs) and proving its increased expressive power compared to standard MPNNs. The authors also show that the use of RNF is necessary for this increased expressiveness. Empirically, the paper presents extensive evaluations across multiple graph benchmarks. Strengths: GRANOLA presents a novel approach to graph normalization by incorporating graph adaptivity and RNF, addressing a significant gap in existing methods. The paper provides solid theoretical analysis, including proofs of GRANOLA's expressive power and its ability to default to RNF-augmented MPNNs. The authors conduct extensive experiments across multiple datasets and tasks, demonstrating GRANOLA's consistent performance improvements over existing normalization techniques. The paper effectively demonstrates how GRANOLA serves as a valuable bridge between the theoretical expressiveness of RNF-augmented MPNNs and practical performance improvements in graph learning tasks. GRANOLA maintains the same asymptotic complexity as standard MPNNs while offering improved performance, making it a practical solution for real-world applications. 
Weaknesses: GRANOLA's parameters (gamma and beta) are unique for each node, layer, and feature attribute. This fine-grained adjustment is significantly more detailed than most normalization methods, making GRANOLA resemble a new model architecture rather than a normalization technique. The motivation as a normalization method may be somewhat inappropriate given this level of detail. The paper could benefit from additional experiments, such as removing the $\gamma_{b,n,c}^{(\ell)}$ term and retaining only the affine intercept term $\beta_{b,n,c}^{(\ell)}$. This would help determine whether the improved performance is due to the detailed internal model adjustments or the normalization concept itself. Technical Quality: 3 Clarity: 3 Questions for Authors: The authors address some limitations of their work in the conclusion section. They could have elaborated more on the practical limitations of GRANOLA, such as potential challenges in implementing it in resource-constrained environments or its scalability to extremely large graphs. While the normalization GNN depth ablation study is available in Appendix H.2, a more detailed discussion on the choice of other hyperparameters for the normalization GNN, such as dimensions, would be beneficial. This is particularly important given that the normalization GNN is at the core of the GRANOLA framework. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are happy to see that the reviewer has appreciated the novelty of our approach, while finding our theoretical analysis solid and our empirical evaluation extensive. We would like to thank them for raising interesting points of discussion, which we address in the following. **Q1:** GRANOLA's parameters (gamma and beta) are unique for each node, layer, and feature attribute. This fine-grained adjustment is significantly more detailed than most normalization methods, making GRANOLA resemble a new model architecture rather than a normalization technique. The motivation as a normalization method may be somewhat inappropriate given this level of detail. **A1:** We agree with the reviewer that the gamma and beta are more refined than other standard normalization layers, but this level of detail is intrinsic to adaptive normalization layers. In an adaptive normalization, instead of using the same affine parameters $\gamma_{c}^{(\ell)}$ and $\beta_{c}^{(\ell)}$ (Equation (3)) for all the nodes in all the graphs, the normalization method utilizes specific parameters conditioned on the input graph. This adaptivity has proven to be a valuable property in other domains (for instance, Huang et al., 2017), and in our case is achieved by generating the affine parameters through the normalization GNN. Huang et al., 2017. Arbitrary style transfer in real-time with adaptive instance normalization. ICCV 2017 **Q2:** The paper could benefit from additional experiments, such as removing the $\gamma_{b,n,c}^{(\ell)}$ term and retaining only the affine intercept term $\beta_{b,n,c}^{(\ell)}$. This would help determine whether the improved performance is due to the detailed internal model adjustments or the normalization concept itself. **A2:** Following your suggestion, we have included an additional experiment obtained by removing the $\gamma_{b,n,c}^{(\ell)}$ term by setting it to zero, while retaining only $\beta_{b,n,c}^{(\ell)}$. 
The following table shows that GRANOLA outperforms this method, highlighting the importance of the normalization concept. | Method | ZINC-12k $\downarrow$ | OGBG-MOLHIV $\uparrow$ | |-------------------------------|--------------|-------------| | GIN+GRANOLA-$\beta_{b,n,c}^{(\ell)}$-only | 0.1928±0.018 | 74.11±1.39 | | GIN+GRANOLA (As in the paper) | 0.1203±0.006 | 78.98±1.17 | **Q3:** The authors address some limitations of their work in the conclusion section. They could have elaborated more on the practical limitations of GRANOLA, such as potential challenges in implementing it in resource-constrained environments or its scalability to extremely large graphs. **A3:** Thank you for the suggestion. We agree that expanding on the practical limitations of GRANOLA is valuable. In practice, adding GRANOLA to standard MPNNs does not significantly impact memory usage (and it maintains the linear space complexity of MPNNs). Therefore, GRANOLA faces the same challenges of existing and widely-used GNN methods when dealing with extremely large graphs. We will elaborate on this point in the next revision. **Q4:** While the normalization GNN depth ablation study is available in Appendix H.2, a more detailed discussion on the choice of other hyperparameters for the normalization GNN, such as dimensions, would be beneficial. This is particularly important given that the normalization GNN is at the core of the GRANOLA framework. **A4:** Following the reviewer's suggestion, we will include a more thorough discussion of the hyperparameters for our $\text{GNN}\_{NORM}$. For completeness, we note that $\text{GNN}\_{NORM}$ maintains the same embedding dimensions as the outer GNN layer, and our hyperparameter search includes different choices for the number of layers within $\text{GNN}\_{NORM}$. While other configurations are possible, we found these choices to be effective in practice and help reduce the number of parameter choices. 
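As a rough picture of the mechanism discussed in A1 above, here is a toy, single-channel sketch of graph-adaptive normalization: node features are first standardized, then scaled and shifted by per-node affine parameters read off a stand-in one-step "normalization GNN" over RNF-augmented inputs. The mean aggregation and the 0.1 readout factors are arbitrary illustrative choices, not GRANOLA's actual parameterization.

```python
import math
import random


def mean(xs):
    return sum(xs) / len(xs)


def std(xs, eps=1e-5):
    m = mean(xs)
    return math.sqrt(mean([(x - m) ** 2 for x in xs]) + eps)


def adaptive_norm_sketch(h, adj, rng):
    """Toy single-channel sketch of graph-adaptive normalization.

    h:   one scalar feature per node.
    adj: adjacency list of the input graph.
    Per-node gamma_n and beta_n come from one mean-aggregation step over
    RNF-augmented features, standing in for the normalization GNN.
    """
    # 1) Standard, non-adaptive normalization of the node features.
    m, s = mean(h), std(h)
    h_norm = [(x - m) / s for x in h]

    # 2) One message-passing step on RNF-augmented inputs.
    z = [x + rng.gauss(0.0, 1.0) for x in h]
    agg = [mean([z[j] for j in nbrs] + [z[i]]) for i, nbrs in enumerate(adj)]

    # 3) Read per-node affine parameters off the aggregated features.
    gamma = [1.0 + 0.1 * a for a in agg]
    beta = [0.1 * a for a in agg]
    return [g * x + b for g, x, b in zip(gamma, h_norm, beta)]


adj = [[1], [0, 2], [1]]  # path graph on 3 nodes
out = adaptive_norm_sketch([1.0, 2.0, 3.0], adj, random.Random(0))
```

Setting `gamma` to 1 and `beta` to 0 recovers plain non-adaptive normalization, which is the sense in which the adaptive layer strictly generalizes the fixed-affine one.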
*** We are grateful to the reviewer for their feedback. We made efforts to address all the concerns raised, and, if they feel the same, we kindly ask them to reconsider their rating. --- Rebuttal Comment 1.1: Title: Official Review by Reviewer tP2u Comment: Thank you for the detailed response. I raise the score to 6.
Summary: The paper pertains to the problem of using normalisation techniques specifically designed for graph-structured data and Graph Neural Networks (GNNs). Using constructed illustrative examples, the authors claim that existing normalisation techniques (including others designed for graphs) may create expressivity issues to the GNNs to which they are applied. Motivated by these examples, they speculate that one possible reason is the lack of adaptivity of the normalisation (affine) parameters to the input graph. To that end, they propose to replace fixed normalisation parameters with adaptive ones, produced at each layer by an auxiliary GNN for each vertex of the input graph. Additionally (and probably crucially), the auxiliary GNNs do not only receive the current vertex features as inputs but also Random Node Features (typically sampled from a Gaussian). The last one is known to render GNNs universal, and so is the case here. Experimentally, the method is tested on a battery of tasks and ablated across several factors (training convergence speed, combined with more expressive architectures than plain MPNNs), showing consistently improved performance compared to other normalisation methods and competitiveness with the state-of-the-art in several cases. Strengths: **Clarity**. The paper is well-written, easy-to-follow and all the concepts are clearly explained. It can therefore be understood by a wider audience. **Contextualisation to related work**. An extensive review and comparisons with related works are provided, allowing the reader to grasp their differences and understand the innovation of the proposed approach. **Importance to the GNN community.** The techniques proposed by the authors are of general interest to the GNN community since they clearly and consistently improve current normalisation techniques and show good potential for adaptation in practice in various problems. 
Additionally, it is one of the few works showcasing the practical benefits of RNF (however see also weaknesses), which is a long-standing puzzle for GNN researchers. **Empirical results**. The proposed method appears to work quite well and consistently in practice, in terms of empirical generalisation. In particular, it provides improvements against all base architectures tested and against all competing normalisation techniques, while both methodological techniques proposed (adaptive normalisation + RNF) are sufficiently ablated and shown to be important to be combined to obtain the desired performance gains. Weaknesses: **Reservations regarding claims and the selected way to present findings**. - One of my main objections to the presented manuscript is the way the findings were selected to be presented. In particular, the authors present their work as a graph-oriented adaptive normalisation technique and motivate their approach by comparing it against other normalisation techniques, both intuitively using constructed counterexamples and empirically, showcasing consistent improvements. Nevertheless, it seems to me that the main reason behind the empirical success is the random features. For example, see Table 2, where the adaptive normalisation per se does not seem to provide any significant practical advantages. - However, as the authors correctly point out, random features are known to behave subpar in practice, despite their theoretical advantages. This leads me to believe that the authors have probably found a way to *improve the performance of random features*. To me, this is an important contribution per se, but the authors have chosen not to present their paper in that way, nor to sufficiently point out this takeaway in their paper. - On the other hand, the authors have devoted a substantial part of the paper to motivating adaptive normalisation and discussing other normalisation techniques. 
In my viewpoint, the intuitive explanations provided (the examples in sec. 2.2. and the last paragraph before section 3.1.) are speculative and do not seem to be the real reason behind the success of the approach. - Note also, that in several parts of the paper, the authors claim that existing normalisation techniques limit MPNNs’ expressivity, but I am not sure if this is the case. I think that if the authors choose to keep these claims, they should probably be provided with a more rigorous theoretical statement. - I think that the most important question that needs to be addressed is why incorporating RNF into a normalisation layer overcomes their current limitations in providing performance improvements. I recommend that the authors discuss this both in their rebuttal and the paper, as it will provide important insights. **Efficiency**. Another reservation I have about this work is that computational efficiency might be an issue, but is not adequately discussed. For example, the runtimes provided by the authors in Table 4 in the appendix, show an almost 3-fold increase in both training and inference time. Although this might not be significant compared to other more expressive GNNs, the comparison between the performance-complexity trade-offs is not clear. Moreover, it is a limitation of this method and should be more clearly discussed in the main paper. **Limited evaluation against baselines**. Although as I mentioned before, the results are indeed convincing, both from a perspective of a normalisation technique and of a random feature technique, the authors have not sufficiently compared against the state-of-the-art. To their credit, they did include other baselines in the supplementary material and mentioned the gap in performance in their limitations section, but I think it would be fairer to discuss this more prominently (especially since this work can be perceived as an expressive GNN and not as a normalisation technique alone). 
**Limited technical novelty**: Finally, a minor weakness is that there is limited novelty from a technical perspective, since if I am not mistaken adaptive normalisation has been proposed before for other domains (e.g. AdaIN, Huang et al., ICCV’17 is a relevant example – discussed by the authors in the appendix) and random features are well-known in the GNN community. Technical Quality: 3 Clarity: 3 Questions for Authors: - Have the authors tried to modify a different normalisation technique, apart from LayerNorm? This might be an interesting ablation study. - Do the authors need a GNN normalisation network for each layer of the GNN processing network? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have mentioned some limitations (mainly the gap in performance compared to state-of-the-art), but I think others should be discussed more extensively (e.g. efficiency - see weaknesses). I do not foresee any negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback, and the recognition of the importance of this work for the GNN community. The reviewer also raised concerns, which we address below. **Q1:** ..it seems to me that the main reason behind the empirical success is the random features. **A1:** We found that the combination of RNF and graph adaptivity drives GRANOLA's success, as shown by the worse performance of RNF-PE and RNF-NORM (lines 263-266), which lack graph adaptivity, and GRANOLA-NO-RNF, which omits RNF, compared to GRANOLA. **Q2:** ..This leads me to believe that the authors have probably found a way to improve the performance of random features. To me, this is an important contribution per se. **A2:** We agree that our method makes RNF practical and thus offers a contribution to RNF research. We discussed this in the paper (lines 235-245), and we will make it more prominent. We remark that other expressive GNNs can be used for the normalization function (lines 245-249). Thus, we have conducted an experiment using DS-GNN [1] as our $\text{GNN}\_\text{NORM}$, instead of an MPNN + RNF. The table shows that GRANOLA-SubgraphGNN behaves similarly to GRANOLA, but with the additional complexity of the Subgraph GNN. This shows that the expressivity of $\text{GNN}_\text{NORM}$ does not necessarily need to be achieved by RNF, and our choice was motivated by the linear complexity of MPNNs + RNF. | | ZINC-12k $\downarrow$ | OGBG-MOLHIV $\uparrow$ | |--|--|--| | GIN+GRANOLA-SubgraphGNN | 0.1186±0.008 | 78.62±1.31 | | GIN+GRANOLA (Using RNF, as in the paper) | 0.1203±0.006 | 78.98±1.17 | [1] Bevilacqua et al., 2022. Equivariant Subgraph Aggregation Networks **Q3:** In several parts of the paper, the authors claim that existing normalisation techniques limit MPNNs' expressivity, but I am not sure if this is the case. **A3:** This result is explained in Cai et al., 2021, and we provide a slightly more formal explanation, which we will expand in the paper. 
Theorem: Let $f$ be a stacking of GIN layers with non-linear activations followed by sum pooling. Let $f^{\text{norm}}$ be the architecture obtained by adding InstanceNorm or BatchNorm without affine parameters. Then $f^{\text{norm}}$ is strictly less expressive than $f$. Proof Sketch: All non-isomorphic graphs that can be distinguished by $f^{\text{norm}}$ can clearly be distinguished by $f$. To show that $f^{\text{norm}}$ is strictly less expressive than $f$, consider two CSL graphs with different numbers of nodes. These are distinguishable by $f$. However, applying InstanceNorm to the output of GIN results in a zero matrix (Proposition 4.1 in Cai et al., 2021). Similarly, if the batch consists of these two graphs, applying BatchNorm results in a zero matrix. Since the output of the normalization is a zero matrix, they are indistinguishable by $f^{\text{norm}}$. Cai et al, 2021. GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training **Q4:** Why incorporating RNF into a normalisation layer overcomes their current limitations in providing performance improvements. **A4:** We agree with the reviewer that this corresponds to our main research question, and we will add a more thorough discussion. Incorporating RNF into our GRANOLA allows it to *fully adapt to the input graph*, providing different affine parameters for non-isomorphic nodes. Full adaptivity is lost when removing RNF and using a standard MPNN as $\text{GNN}\_\text{NORM}$, as in GRANOLA-no-RNF. This is because GRANOLA-no-RNF is not more expressive than an MPNN (Prop. 4.1), and thus, there exist non-isomorphic nodes that will get the same representation (and the same affine parameters). However, any other most expressive architecture used as $\text{GNN}_\text{NORM}$ would achieve the same full adaptivity, and our choice of MPNN + RNF was motivated by its linear complexity. 
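The zero-matrix argument in the proof sketch can be checked numerically. The sketch below is illustrative (it uses a GIN-style sum aggregation with an identity update in place of a learned MLP): on two circulant CSL-style graphs of different sizes with uniform input features, the node features stay constant across nodes, so the centering step of InstanceNorm maps them to the all-zero matrix and sum pooling can no longer separate the graphs.

```python
def gin_like_layer(h, adj, eps=0.0):
    """GIN-style sum aggregation (identity update for simplicity)."""
    return [(1 + eps) * h[i] + sum(h[j] for j in adj[i]) for i in range(len(h))]


def instance_center(h):
    """Centering step of InstanceNorm (per graph, single channel)."""
    m = sum(h) / len(h)
    return [x - m for x in h]


def csl_adj(n, skip):
    """Circulant (CSL-style) graph: cycle edges plus skip-length chords."""
    return [
        sorted({(i - 1) % n, (i + 1) % n, (i - skip) % n, (i + skip) % n})
        for i in range(n)
    ]


for n, skip in [(10, 2), (12, 3)]:     # two CSL graphs of different sizes
    adj = csl_adj(n, skip)
    h = [1.0] * n                      # uniform input features
    h = gin_like_layer(h, adj)         # regular graph -> still constant
    assert len(set(h)) == 1
    h = instance_center(h)             # constant column -> all zeros
    assert all(x == 0.0 for x in h)
    assert sum(h) == 0.0               # sum pooling cannot tell them apart
```

Without the normalization, sum pooling yields `5.0 * n`, which differs between the two graph sizes, matching the claim that $f$ distinguishes them while $f^{\text{norm}}$ does not.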
**Q5:** Another reservation I have about this work is that computational efficiency might be an issue, but is not adequately discussed. **A5:** We agree that the performance improvement of GRANOLA requires additional computations (Appendix G). However, the overall runtime remains a fraction of more expressive methods (5.2x faster than efficient expressive models like SubgraphGNNs). We remark that practitioners often have different needs from their models. At times, they might prefer accuracy over efficiency (e.g., drug discovery). In such scenarios, GRANOLA offers a strong tradeoff between performance and cost. We will clarify this in the revision. **Q6:** … adaptive normalisation has been proposed before for other domains (e.g. AdaIN, Huang et al., ICCV’17 is a relevant example – discussed by the authors in the appendix) and random features are well-known in the GNN community. **A6:** Despite some similarities, GRANOLA differs significantly from AdaIN, which adjusts the content's mean and variance to match the *style input*. Additionally, GRANOLA employs RNF for full graph adaptivity, but other expressive methods could also be used as shown in A2. **Q7:** Have the authors tried to modify a different normalisation technique, apart from LayerNorm? **A7:** Thank you for the suggestion. We modified BatchNorm to be graph adaptive using our GRANOLA design. Table 1 in the additional PDF shows that GRANOLA-BatchNorm outperforms BatchNorm, highlighting the role of graph adaptivity in normalizations. **Q8:** Do the authors need a GNN normalisation network for each layer of the GNN processing network? **A8:** We use a shallow GNN normalization network for each layer. We also tested a variant with a shared normalization GNN across all layers. Table 2 in the additional PDF shows that while per-layer normalization GNNs yield better results, sharing one still significantly improves over the baselines *** We sincerely appreciate the reviewer’s feedback. 
We have made efforts to address their questions thoroughly, and if they feel the same, we kindly ask them to reconsider their rating. --- Rebuttal 2: Title: Post-rebuttal Comment: I thank the authors for their response and the additional experiments provided. As I mentioned in my initial review, combining adaptive normalisation with random features works well in practice, but I still find the underlying reasons behind this success unclear. The authors provided in their rebuttal an additional experiment, replacing random features with Subgraph-GNNs (as an alternative to boost the expressivity of the function computing the normalisation parameters) obtaining similar results. Although this hints that adaptive normalisation can be successful with any technique that improves expressivity, it triggers an additional question: should we credit success to extra expressivity or adaptive normalisation? At this point, it should be noted that, if I am not mistaken, using Subgraph-GNNs alone performs better than GIN+GRANOLA-SubgraphGNN. In any case, the overall picture remains a bit blurry: the authors put significant focus during their presentation on the adaptive normalisation perspective of their approach, yet this works well only when combined with more expressive functions computing the normalisation parameters. However, this, in turn, might be outperformed by more expressive architectures alone. I agree with the argument that the latter is not the case for random features, which brings me back to my initial observation that the authors have found a way to exploit the theoretical advantages of random features (along with their linear complexity). However, the rest of the claims need further work to be made more convincing. Overall, my reservations do not concern the method (the results are convincing), but rather the explanations/insights provided w.r.t. its behaviour. 
In other words, I think the paper deserves attention due to its empirical findings, but to some extent lacks maturity. Therefore, I will keep my initial score and recommendation. --- Rebuttal 3: Title: Post-rebuttal follow up Comment: **We thank the Reviewer for the discussion and the added comments. We are also happy to read that the Reviewer finds our results convincing and deserving attention.** **We would like to reply to your important comments one by one. We hope that you find them satisfactory, and we would like to receive your feedback. We incorporated your comments and their follow-up discussions into the revised paper.** *** **Regarding Subgraph-GNNs vs. GRANOLA:** The reviewer rightfully mentions that there are Subgraph-GNNs that can outperform GRANOLA, and this is also reflected in our results in the Appendix, in Tables 7, 8, and 9. **However**, it is important to note that our paper does not make claims about outperforming subgraph GNNs. Rather, it **focuses on GNNs with linear time complexity.** Importantly, this experiment aims to demonstrate that effective graph normalization requires both **adaptivity** and **expressivity**. Our approaches achieve this combination using random features, which allows for implementation with linear time complexity. Furthermore, please note that in the added experiment following your questions, we used a DS-GNN [D1] with Ego-network subgraphs. This approach does not yield higher results on the tested datasets compared with our GRANOLA, as we show in the Table below:

| Method | ZINC-12k $\downarrow$ | OGBG-MOLHIV $\uparrow$ |
|-------------------------|-------------------------|--------------------------|
| DS-GNN (GIN) (Ego) | 0.126±0.006 | 78.00±1.42 |
| GIN+GRANOLA-SubgraphGNN | **0.1186±0.008** | **78.62±1.31** |

Therefore, it is important to note that, despite us not claiming that GRANOLA is meant to compete with subgraph GNNs, it does offer better performance than the core subgraph GNN used in our added experiment. 
We added this important discussion to our revised paper. [D1] Bevilacqua et al., 2022. Equivariant Subgraph Aggregation Networks. *** **Regarding expressivity vs adaptivity:** Thank you for the comment. We kindly note that in our paper, we highlight the discussion of the desire to have **both expressivity and adaptivity** several times, and we verify these claims by extensive experiments. *We now list the discussions on this in the paper:* 1. Lines 35-40: We mention that full adaptivity can be obtained by combining an adaptive normalization method with expressive backbone networks. 2. Figure 3 (caption): we discuss that full adaptivity can be obtained by incorporating RNF. 3. Section 3.1: We dedicate this subsection to discuss the design choices made in GRANOLA, and **emphasize the need for both expressivity and adaptivity**. 4. Proposition 4.1 and Lines 226-234: We discuss the necessity of expressiveness in addition to adaptivity. 5. Lines 929-932: We state that “GRANOLA benefits from (i) enhanced expressiveness, and (ii) graph adaptivity”, and elaborate on this point within these lines. *In terms of experiments, we have extensively shown that:* 1. Adaptivity in itself improves the baseline results, as consistently shown by the variant called GRANOLA-NO-RNF. 2. Including RNF improves baseline results in many cases (in particular, the variants denoted by RNF-PE and RNF-NORM). 3. The combination of RNF and Adaptivity achieves the overall largest improvement compared to the baseline – this is our method GRANOLA. For convenience, we also provide several key results from the paper that support our claims discussed above, and also mention whether each method is adaptive/expressive. As can be seen, all directions (adaptive/expressive) help to improve the baseline, and having both properties achieves the best performance among these variants. 
| Method | ZINC-12k $\downarrow$ | OGBG-MOLHIV $\uparrow$ | Adaptive | Expressive |
|----------------------|-----------------------|------------------------|----------|------------|
| GIN+BatchNorm | 0.1630±0.004 | 75.58±1.40 | No | No |
| GIN+BatchNorm+RNF-PE | 0.1621±0.014 | 75.98±1.63 | No | Yes |
| GIN + RNF-NORM | 0.1562±0.013 | 77.61±1.64 | No | Yes |
| GIN + GRANOLA-NO-RNF | 0.1497±0.008 | 77.09±1.49 | Yes | No |
| GIN + GRANOLA | **0.1203±0.006** | **78.98±1.17** | **Yes** | **Yes** |

We understand that the Reviewer feels that this point might not have been stressed enough in the paper, and in our revised version we will make sure to highlight it further. We thank you for the invaluable guidance. --- Rebuttal Comment 3.1: Title: Post-rebuttal follow-up (part 2) Comment: **Regarding GRANOLA as a practical way to utilize RNF:** We thank the Reviewer for the comment, which we fully agree with and discuss in our original submission (page 6, “Relation to expressive GNNs”). We made this discussion even clearer in the revision. We believe this additional perspective on our contribution (i.e., "improving GNNs with random features" rather than "designing effective GNN normalization schemes") strengthens our work rather than diminishes it. Multiple viewpoints often emerge in scientific research and tend to enrich the overall discussion.
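To make the "adaptivity + expressivity" recipe discussed in this thread concrete, here is a loose sketch of the idea behind GRANOLA-style normalization: a small sum-aggregation MPNN, run on the node features concatenated with random node features (RNF), predicts per-node affine parameters that modulate node-normalized features. All names and weight shapes here are our illustrative assumptions (weights are randomly initialized rather than learned; a real implementation would stack several non-linear layers), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mpnn_layer(adj, z, w_self, w_nbr):
    """One linear sum-aggregation message-passing layer."""
    return adj @ z @ w_nbr + z @ w_self

def granola_like_norm(adj, h, rnf_dim=4, tol=1e-8):
    """Adaptive node-wise normalization: a tiny GNN over [h, RNF]
    predicts a per-node scale (gamma) and shift (beta)."""
    n, c = h.shape
    # concatenate fresh random node features for graph adaptivity
    z = np.concatenate([h, rng.standard_normal((n, rnf_dim))], axis=1)
    w_self = rng.standard_normal((c + rnf_dim, 2 * c)) * 0.1
    w_nbr = rng.standard_normal((c + rnf_dim, 2 * c)) * 0.1
    params = mpnn_layer(adj, z, w_self, w_nbr)
    gamma, beta = params[:, :c], params[:, c:]
    # LayerNorm-style per-node normalization, then graph-adaptive affine
    mu = h.mean(axis=1, keepdims=True)
    sigma = h.std(axis=1, keepdims=True)
    return gamma * (h - mu) / (sigma + tol) + beta

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # toy 3-node graph
h = rng.standard_normal((3, 8))
print(granola_like_norm(adj, h).shape)  # same shape as the input features
```

Because the affine parameters come from a GNN over the graph itself (plus RNF), non-isomorphic nodes can receive different scale and shift, which is the adaptivity the rebuttal argues a fixed normalization cannot provide.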
Summary: This paper proposes an adaptive normalization layer for GNNs. It first points out that the traditional normalization layers (BatchNorm, InstanceNorm) are not specifically designed for graphs and thus may limit the expressive power of GNNs, and a single normalization technique cannot always be the best for all graphs. Therefore, it proposes the GRANOLA method, which is a learnable normalization layer for GNNs, and justifies its design theoretically. Strengths: S1. the paper is well-structured and easy to follow. S2. GRANOLA consistently brings performance improvements on all benchmark datasets. S3. Baselines are carefully selected and organized. Weaknesses: W1. GRANOLA involves an additional GNN module to learn the adaptive normalization, making it more costly and less scalable than its counterparts. It would be better if the authors could provide some experimental results to show how much additional time we will need when applying GRANOLA. W2. The proposed GRANOLA is only tested on GIN and GSN backbones. It would be better to check if it could be applied to more common backbone models such as GCN, GAT, etc. W3. (This is a minor point). Code is not provided. Technical Quality: 3 Clarity: 2 Questions for Authors: Q1. The authors point out that the traditional normalization may not effectively capture the unique characteristics of graph-structured data. I am curious to see whether there are any empirical studies or theoretical results to justify this point? Q2. Figure 2 provides an example to illustrate that BatchNorm may make the GNN less powerful. Is it possible to have something similar to visualize the performance of BatchNorm, InstanceNorm, LayerNorm, and GRANOLA? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Please refer to weakness and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. We are glad to see the reviewer has appreciated the presentation of our work, finding the paper well-structured and easy to follow. We proceed by answering each question in the following. **W1:** GRANOLA involves an additional GNN module to learn the adaptive normalization, making it more costly and less scalable than its counterparts. It would be better if the authors could provide some experimental results to show how much additional time we will need when applying GRANOLA. **A:** We agree with the reviewer that our GRANOLA involves some additional computations due to the GNN layers in the normalization mechanism. However, this addition allows the normalization to adapt to the input graph and the task at hand, yielding significant performance improvements as reflected in our experiments. Furthermore, in our paper, we report and discuss the runtimes in Appendix G, as well as the computational complexity of our method (which is linear in the number of nodes and edges, as standard MPNNs). Our results indicate that while GRANOLA requires additional computations (3x slower than Batchnorm), it is still a fraction of the cost of more complex methods (5.2x faster than a scalable provably-powerful GNN), while yielding favorable downstream performance. We believe that finding good tradeoffs between computational complexity of methods and their accuracy is important. In our revision, we will ensure to highlight this point better in the main text. **W2:** The proposed GRANOLA is only tested on GIN and GSN backbones. It would be better to check if it could be applied to more common backbone models such as GCN, GAT, etc. **A:** As stated in our paper on Lines 267-268, our motivation for experimenting with GIN is that it is maximally expressive among standard MPNNs (i.e., it is as expressive as 1-WL). 
Additionally, we demonstrated that our GRANOLA method can be beneficial for methods that exceed 1-WL expressiveness, such as GSN. This shows that GRANOLA is beneficial for different kinds of GNNs with varying levels of expressiveness. Nevertheless, we agree that experimenting with additional backbones is interesting and important. Therefore, we have now added results using GCN (Kipf et al., 2017), GAT (Veličković et al., 2018), and GPS (Rampášek et al., 2022) as backbones, combined with our GRANOLA method. Our results are provided in the table below and have been added to our paper. As can be seen from the table, GRANOLA **consistently improves** these various backbones. These results further underscore the versatility of GRANOLA, which can be potentially coupled with any GNN layer and improve its performance.

| Method | ZINC-12k $\downarrow$ | OGBG-MOLHIV $\uparrow$ |
|--------------------|-------------|-------------|
| GCN | 0.367±0.011 | 76.06±0.97 |
| GCN+GRANOLA (Ours) | 0.233±0.005 | 77.54±1.10 |
|--------------------|-------------|-------------|
| GAT | 0.384±0.007 | 76.0±0.80 |
| GAT+GRANOLA (Ours) | 0.254±0.009 | 77.39±1.03 |
|--------------------|-------------|-------------|
| GPS | 0.070±0.004 | 78.80±1.01 |
| GPS+GRANOLA (Ours) | 0.062±0.006 | 79.21±1.26 |

Kipf et al., 2017. Semi-Supervised Classification with Graph Convolutional Networks. ICLR 2017 Veličković et al., 2018. Graph Attention Networks. ICLR 2018 Rampášek et al., 2022. Recipe for a General, Powerful, Scalable Graph Transformer. NeurIPS 2022 **W3:** (This is a minor point). Code is not provided. **A:** We agree that making the code public is important. To this end, our submission includes a statement that upon acceptance, we will release our code on GitHub. We confirm that this is indeed the case. **Q1:** The authors point out that the traditional normalization may not effectively capture the unique characteristics of graph-structured data. 
I am curious to see whether there are any empirical studies or theoretical results to justify this point? **A1:** Traditional normalization techniques may result in a loss of expressiveness (the ability to compute certain functions) due to their lack of consideration of the underlying graph. For example, as we show in our paper (Figure 2), normalizing an MPNN with BatchNorm, together with the choice of the ReLU activation function (which is a very common choice used in various papers, as discussed in Lines 112-113), might lead to the loss of the ability to compute node degrees, which is an essential feature in graph learning. **Q2:** Figure 2 provides an example to illustrate that BatchNorm may make the GNN less powerful. Is it possible to have something similar to visualize the performance of BatchNorm, InstanceNorm, LayerNorm, and GRANOLA? **A2:** In Appendix C, we present additional motivating examples representing failure cases for other normalization methods. However, we understand the importance of visualization, and we will include additional figures similar to Figure 2. Thank you for the suggestion. --- We sincerely appreciate the reviewer’s constructive feedback. We have made efforts to address their questions thoroughly, and have conducted the suggested experiments. We kindly ask them to reconsider their rating if they find our responses satisfactory.
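The failure mode paraphrased in A1 (the paper's Figure 2 argument) can be illustrated with a toy computation: sum aggregation over all-ones features computes node degrees, but centering by the batch mean followed by ReLU zeroes out every node whose degree is at or below the mean, so such nodes become indistinguishable. A hedged NumPy illustration on a toy 4-node graph (our construction, not necessarily the paper's exact example):

```python
import numpy as np

def degrees_via_sum_aggregation(adj):
    """Sum aggregation over constant all-ones features computes node degrees."""
    return adj @ np.ones((adj.shape[0], 1))

def batchnorm_relu(h, tol=1e-8):
    """BatchNorm (no affine) over the node batch, followed by ReLU."""
    mu, sigma = h.mean(axis=0), h.std(axis=0)
    return np.maximum((h - mu) / (sigma + tol), 0.0)

# toy graph: node 1 is connected to 0, 2, 3; nodes 2 and 3 are also adjacent
# degrees: [1, 3, 2, 2], batch mean degree = 2
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0]], float)
deg = degrees_via_sum_aggregation(adj)
out = batchnorm_relu(deg)
# the degree-1 node and both degree-2 nodes are all mapped to 0,
# so their (different) degrees can no longer be recovered downstream
print(out.ravel())
```

Three nodes with two distinct degrees (1 and 2) collapse to the same zero output, which is exactly the loss of the ability to compute node degrees described in A1.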
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their valuable comments and for their efforts in providing actionable feedback to enhance the quality of the submission. Overall, reviewers appreciated our novel adaptive normalization scheme for GNNs that maintains linear complexity and provides consistent improvements over all normalization schemes. We are glad to see that reviewers recognized the potential impact of the paper, finding it *“of general interest to the GNN community”* (**Ao3Y**), *“having good potential for adaptation in practice”* (**Ao3Y**), and *“a practical solution for real-world applications”* (**tP2u**). The reviewers further highlighted the contribution of this work, deeming it *“significant”* (**PwFk**), and our approach *“novel”* (**tP2u**, **PwFk**). Additionally, reviewers unanimously appreciated the experimental evaluation, finding it *“extensive”* (**tP2u**, **PwFk**), yielding *“consistent improvements”* (**vndL**, **Ao3Y**, **tP2u**) and recognizing its *“competitiveness with the state-of-the-art”* (**Ao3Y**). Additionally, we are glad to see reviewers **tP2u** and **PwFk** have both appreciated the theoretical analysis, finding it *“solid”* (**tP2u**, **PwFk**). The paper presentation has also been particularly valued, with the paper described as *“well-structured”* (**vndL**), *“well-written”* (**Ao3Y**), and *“easy to follow”* (**vndL**, **Ao3Y**). **New Experiments.** Several additional experiments were conducted following the reviewers’ comments: 1. An additional variant of our approach, where instead of using an MPNN + RNF as $\text{GNN}_\text{NORM}$, we employ a Subgraph GNN, showing that full-graph adaptivity can be obtained with any expressive architecture other than MPNN + RNF, and our choice was dictated by their linear complexity (**Ao3Y**); 2. 
A comparison of GRANOLA and standard normalizations using GAT as the GNN backbone, showing that GRANOLA is beneficial for different kinds of GNNs and consistently improves their performance (**vndL**); 3. A variant of our approach that uses the BatchNorm blueprint instead of the LayerNorm-node one, demonstrating that adaptivity is beneficial for different normalization blueprints (**Ao3Y**); 4. An additional analysis of the impact of the normalization term, achieved by removing the $\gamma_{b,n,c}^{(\ell)}$ term by setting it to zero, further highlighting the importance of normalizing (**tP2u**); 5. An evaluation of the performance of GRANOLA when coupled with graph transformers, demonstrating the versatility of GRANOLA, which can be coupled with any layer and improve its performance (**PwFk**). If the reviewers find that we have adequately addressed their concerns, we would be grateful if they would consider reassessing their rating. We are also open to further discussion and welcome any additional input. Pdf: /pdf/60bbfd0536b89c6b273f8b01c2cd42fc6e37851c.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Tighter Convergence Bounds for Shuffled SGD via Primal-Dual Perspective
Accept (poster)
Summary: This paper aims at improving existing bounds on random reshuffling. While lower bounds are tight in the worst case, refined smoothness definitions allow taking a larger step-size in favorable cases, which in turn allows faster convergence. These smoothness definitions involve a supremum over permutation-dependent partial averaging of individual smoothness constants. The dependence on these refined smoothness constants is obtained through a primal-dual interpretation of SGD, though the convergence analysis does not consist in applying existing primal-dual analyses to this formulation. As a side note, results include mini-batching, which is nice to have. For generalized linear models, individual covariances can be averaged instead of individual smoothness constants, thus allowing even larger gains, especially in high dimensions. A non-smooth extension is given, performing similar averagings over individual Lipschitz constants. Typos: Line 27: why does SGD rely on a standard basis vector?? This looks like a mix between SGD and coordinate descent. L45: we choose take Algo 1: shouldn't this be a $\nabla f_i$ on line 7? Strengths: - Nice use of the finite-sum assumption, which allows to use a primal-dual reformulation. - Improves over existing bounds for shuffled SGD, mimicking cyclic coordinate descent results. Weaknesses: - $\tilde{L}$, the $L_{max}$-like constant, is the one that appears in Theorems 1 and 2. In particular, the more "average smoothness" constant (from which most of the improvement seems to come) only affects higher-order terms by allowing larger step-sizes, but does not change anything for small step-sizes. - While results are strong for generalized linear models (taking into account the actual spectrum of each Hessian), general results are quite a bit looser (though they still improve on existing ones). Technical Quality: 3 Clarity: 3 Questions for Authors: 1) In the end, everything seems to depend on primal quantities. 
What makes an actual difference in the analysis that could not have been done in the primal (or why was it more intuitive to do it in the primal)? Why can't the full analyses be done by replacing $y_k^i$ with $\nabla f_i(x_{k,i})$? In particular, what technical challenges do you expect when moving to the non-convex setting? (In the end, all quantities can be written in the primal, but the primal-dual interpretation of SGD does not hold anymore, since biconjugacy is used at some point, if I'm not mistaken.) 2) What are the connections with existing (primal-dual) CD analyses? I see that the primal-dual reformulation allows to study a setting that is close (although different since CD would not get any residual variance). It is said in Appendix A that results are technically disjoint, so then why is this inspiration helpful still? 3) It seems that the discussion from Section 4 only applies to the generalized linear models case, because it does not seem possible to benefit from structure in the general case. Do you have any intuition for why such assumptions cannot be made using Hessians, and thus benefit from similar improvements? Additionally, how would Table 2 look with these bounds instead of the generalized linear models ones? 4) Do you actually expect to depend on the smoothness for the average permutation (instead of worst-case)? Is this complicated to analyze directly because of the dependence between iterates within a single epoch? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We thank the reviewer for the positive evaluation of our paper. We hope that the clarifications provided below address the reviewer’s questions. We are happy to answer further questions in the discussion phase.** --- ### Summary > Typos Thanks for pointing out. It should be $E_{i_t} [ \nabla f_{i_t} (x_t) / p_{i_t}] = \nabla f (x_t)$ in Line 27. We will correct those typos in our revision. --- ### Questions > Q1 Correct. In our case, the primal-dual view allowed us to separate the linear transformation ($a_i^\top x$) from nonlinearity (loss $\ell_i$), as is done in splitting primal-dual methods like [1]. The separation of linear vs nonlinear portion was useful for handling cancellations coming from the cyclic updates (see e.g., the proof of Lemma 9) and obtaining the data dependent bounds (see e.g., Lines 692–696 in the proof of Lemma 11). That said, given that in the end the algorithm is equivalent to standard shuffled SGD (i.e., the algorithm is primal-only and the primal-dual perspective is only there for the analysis), we expect that there is a way to write the analysis as primal-only. We did not pursue this direction as we did not think it would improve the readability and interpretations of the proofs. > Q2 We first note that in nonconvex settings one would need guarantees in terms of stationarity, which is different from the optimality gap in convex settings. Also, all the existing work on shuffled SGD for nonconvex objectives requires additional assumptions about the problem, such as bounded variance at all points [2] or bounded sequence correlation [3]. However, we believe that our techniques can be generalized to at least some (but still fundamental in the context of ML applications) nonconvex objectives, such as those arising from generalized linear models (GLMs). 
As a specific example, one could consider minimizing the mean squared loss of a GLM (e.g., ReLU as the most basic example): $$\min_x \frac{1}{2 n} \sum_{i=1}^n (\sigma(x^\top a_i) - b_i)^2$$ where $\sigma(t)$ is an activation function; for example $\sigma(t) = \mathrm{ReLU}(t) = \max\\{0, t\\}.$ One can “dualize” the quadratic portion of this objective to write it in a primal-dual form as: $$\min_x \max_y \frac{1}{n} \sum_{i=1}^n (\sigma(x^\top a_i) - b_i)y_i - (1/2)y_i^2.$$ In this case, we get a nonlinear (but structured) coupling of the primal and the dual, which we believe can be handled with a separate analysis. One relevant observation is that in this case $a_i y_i$ (which for convex ERM functions in our paper was the gradient of the component function) now corresponds to the gradient of a *surrogate* function for the nonconvex GLM problem. SGD-style methods applied to the surrogate function with the same gradient field $a_i y_i$ as here have been widely studied in the learning theory literature and used to argue about the learnability of these problems (under distributional assumptions); see e.g., [4] and references therein. We leave the investigation of nonconvex problems based on GLMs as an interesting topic for future study. > Q3 Thanks for raising this point. We draw inspiration from primal-dual methods to separate the linear and nonlinear components of the objectives and follow the methodology of gap construction in primal-dual methods. However, details of the analysis are crucially different, as discussed in Appendix A. First, most CD analyses cannot provide tight bounds for the setting with random permutation of blocks, while the only works handling random permutations are specialized to convex quadratics [5,6]. 
Further, for those works providing fine-grained CD analyses [7–9] (most CD works only provide analyses with the worst-case dependence on the number of blocks), they either rely on a certain block-wise descent lemma [7] (there is no smoothness or descent on the dual side for shuffled SGD), or use extrapolation steps which are incompatible with the shuffled SGD algorithm [8, 9]. > Q4 To use Hessians, we would need to assume the function is twice (continuously) differentiable, which is an assumption we are not currently using. It is however a good insight that possibly similar fine-grained bounds can be obtained by working with the Hessian. An alternative approach would be to assume that functions are smooth with respect to Mahalanobis (semi)norms, similar to what was done in [7–9], and then obtain bounds dependent on the matrices defining those norms. This is something that seems doable and we can consider adding if the reviewer thinks it would be useful to include. > Q5 We first note the dependence on the worst-case permutation mainly appears in our final complexity and comes from the need to select a constant, deterministic step size, as discussed in Lines 716-723 in the proof. In principle, one can replace it with a certain high-probability bound on $\hat L_\pi$ and $\tilde L_\pi$. We would like to further point out that for the 15 datasets we looked at, the smoothness parameter over permutations seems to concentrate around the mean, as illustrated in Appendix E. Thus in practice, the difference between these two quantities appears minor. On the other hand, for the dependence on $\tilde L$ in Lemma 2 and Theorem 2, the reviewer is correct on the technical difficulty. We refer to our proof in Lines 695-705, as an example. From the inequality in Lines 695–696, we need to bound $\tilde L_{\pi^{(k)}}\\|A_k^\top I_{(i - 1)\uparrow}y_{\ast, k}\\|^2$ with permutation randomness coupled together. 
Such a term is nontrivial to bound, so we first relax $\tilde L_{\pi^{(k)}}$ to $\tilde L$ to reduce the randomness on $\\|A_k^\top I_{(i - 1)\uparrow}y_{\ast,k}\\|^2$ only, and then use Lemma 8 to bound that term as in Lines 703–704. However, we conjecture that the dependence on the average permutation could be achieved by a much more complicated analysis combining our fine-grained analysis with the techniques of conditional expectations, which we leave for future research. --- Rebuttal Comment 1.1: Comment: Thank you for carefully answering my questions. I believe the Mahalanobis semi-norm extension (which as you point out allows to bypass the second order differentiability requirement, though might be less tight for non-GLMs) would be nice to have, but I don't consider it compulsory. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our response and for the clarification regarding the Mahalanobis semi-norm extension. --- Rebuttal 2: Title: References for the rebuttal Comment: ### References [1] Chambolle, A. and Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011. [2] Mishchenko, K., Khaled, A., and Richtarik, P. Random reshuffling: Simple analysis with vast improvements. In Proc. NeurIPS’20, 2020. [3] Koloskova, A., Doikov, N., Stich, S. U., & Jaggi, M. On Convergence of Incremental Gradient for Non-Convex Smooth Functions. In Proc. ICML'24, 2024. [4] Wang P, Zarifis N, Diakonikolas I, Diakonikolas J. Robustly learning a single neuron via sharpness. In Proc. ICML’23, 2023. [5] Lee, C.-P. and Wright, S. J. Random permutations fix a worst case for cyclic coordinate descent. IMA Journal of Numerical Analysis, 39(3):1246–1275, 2019. [6] Wright, S. and Lee, C.-P. Analyzing random permutations for cyclic coordinate descent. Mathematics of Computation, 89(325):2217–2248, 2020. 
[7] Xufeng Cai, Chaobing Song, Stephen J Wright, and Jelena Diakonikolas. Cyclic block coordinate descent with variance reduction for composite nonconvex optimization. In Proc. ICML'23, 2023. [8] Chaobing Song and Jelena Diakonikolas. Cyclic Coordinate Dual Averaging with Extrapolation for Generalized Variational Inequalities. SIAM Journal on Optimization, 2023. [9] Cheuk Yin Lin, Chaobing Song, and Jelena Diakonikolas. Accelerated cyclic coordinate dual averaging with extrapolation for composite convex optimization. In Proc. ICML'23, 2023.
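For reference, the shuffled SGD (random reshuffling) scheme analyzed throughout this thread can be sketched on a toy least-squares finite sum: each epoch draws a fresh permutation and performs sequential component-gradient steps. The problem sizes and step size below are our illustrative choices, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# finite-sum least squares: f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2
n, d = 50, 5
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star  # consistent system, so the minimizer is x_star

def shuffled_sgd(A, b, eta=0.01, epochs=200):
    """SGD with per-epoch random reshuffling (sampling without replacement)."""
    x = np.zeros(A.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(b)):      # fresh permutation each epoch
            grad_i = (A[i] @ x - b[i]) * A[i]  # gradient of the i-th component
            x -= eta * grad_i
    return x

x_hat = shuffled_sgd(A, b)
print(np.linalg.norm(x_hat - x_star))  # should be near zero
```

Within one epoch, the iterate after processing component $i$ plays the role of the intermediate iterate $x_{k-1,i}$ in the analysis, and the step size one may safely choose is exactly what the refined, permutation-dependent smoothness constants govern.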
Summary: This paper studied the convergence of random shuffled SGD through the lens of dual coordinate descent. By leveraging the analysis of coordinate descent, the author(s) derived a rate that is $O(\sqrt{n})$ faster than the existing rate. Strengths: Pros: - The paper is well written and easy to follow, contributions are clearly stated (Table 1). - The rate derived by the author(s) is better than the previous state-of-the-art. Weaknesses: - The primal-dual relationship between SGD and coordinate descent has a quite long history, e.g. [1]. The author(s) should mention these works in the related work section or section 2. [1] Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization, Shai Shalev-Shwartz and Tong Zhang, 2013. Technical Quality: 3 Clarity: 3 Questions for Authors: The convergence analysis of coordinate descent usually has to assume the objective is smooth, which means the primal objective is strongly convex. But the strong convexity assumption was not used in this paper, so I wonder how the author(s) circumvent this assumption. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are not well-addressed. I do not see any potential negative impact of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We thank the reviewer for their valuable feedback. We hope that the answers provided below address the reviewer’s concerns and that the reviewer would consider reevaluating our work. We appreciate the opportunity to answer further questions in the discussion phase.** --- ### Weaknesses > The primal-dual relationship between SGD and coordinate descent has a quite long history, e.g. [1]. The author(s) should mention these works in the related work section or section 2. Thank you for pointing out this line of work. We will cite this paper and discuss related work in our revision. In particular, [1] provides theoretical results only for SDCA, which chooses the dual coordinate to optimize *uniformly at random*. The cyclic variant SDCA-Perm (related to shuffled SGD) that samples the dual coordinate without replacement is only presented as an empirical example and studied through numerical experiments. --- ### Questions > The convergence analysis of coordinate descent usually have to assume the objective is smooth, which means the primal objective is strongly convex. But the strong convexity assumption was not used in this paper, I wonder how does the author(s) circumvent this assumption? The reviewer is correct, and that is exactly why the existing analyses for coordinate methods cannot be applied to shuffled SGD methods. Note that there is no smoothness or descent on the dual side in our setting, and we are also taking the "best response" steps on the dual, which is strongly convex (due to primal smoothness). It is also important to note here that cyclic methods would be making specific gradient or proximal steps and use the properties of such steps in the analysis, while here we can only allow the dual update to be “best response” so that we maintain the equivalence with the standard (primal-only) shuffled SGD. To handle these issues, we follow a primal-dual approach, which is different from traditional analysis of coordinate methods. 
Strong convexity is mainly used to bound the gap by introducing negative terms $-\\|y_k - y_*\\|^2_{\Lambda^{-1}}$ on the dual side; see Lines 669-670 in the proof as an example. As a side note, our analysis can also deduce convergence results on the dual variables $\\|y_k - y_*\\|^2_{\Lambda^{-1}}$. The major part in which we build connections with coordinate methods is on how to characterize the difference between the intermediate iterate $x_{k- 1, i}$ and the iterate $x_k$ after one full cycle, as mentioned in Lines 73-79. To improve over previous worst-case analysis and obtain a tighter fine-grained bound, one needs to avoid using global smoothness with a triangle inequality, which is a prevalent approach in existing analyses of shuffled SGD [2–4]. Instead, we derive the fine-grained bounds on the partial sum of intermediate dual variables $y_k^i$ by tracking the progress of the cyclic update on the dual side, in the aggregate; see e.g., the proof of Lemma 11 to bound $\mathcal{T}_2$. Our technique mirrors the very recent advance on the fine-grained bounds for coordinate methods [5–7]. However, our setting and algorithm are technically disjoint with these works, as discussed in Lines 455-462 in Appendix A. --- ### References [1] Shalev-Shwartz, S., & Zhang, T. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14(1), 2013. [2] Jaeyoung Cha, Jaewook Lee, and Chulhee Yun. Tighter lower bounds for shuffling SGD: Random permutations and beyond. In Proc. ICML'23, 2023. [3] Konstantin Mishchenko, Ahmed Khaled, and Peter Richtárik. Random reshuffling: Simple analysis with vast improvements. In Proc. NeurIPS’20, 2020. [4] Lam M Nguyen, Quoc Tran-Dinh, Dzung T Phan, Phuong Ha Nguyen, and Marten Van Dijk. A unified convergence analysis for shuffling-type gradient methods. The Journal of Machine Learning Research, 22(1):9397–9440, 2021. [5] Xufeng Cai, Chaobing Song, Stephen J Wright, and Jelena Diakonikolas. 
Cyclic block coordinate descent with variance reduction for composite nonconvex optimization. In Proc. ICML'23, 2023. [6] Chaobing Song and Jelena Diakonikolas. Cyclic Coordinate Dual Averaging with Extrapolation for Generalized Variational Inequalities. SIAM Journal on Optimization, 2023. [7] Cheuk Yin Lin, Chaobing Song, and Jelena Diakonikolas. Accelerated cyclic coordinate dual averaging with extrapolation for composite convex optimization. In Proc. ICML'23, 2023. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: I would like to thank the author(s) for the detailed response. My questions are addressed, I have raised my score slightly. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for carefully considering our response and acknowledging that your questions have been addressed. We appreciate your increased support of our work.
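The shuffled SGD variants discussed throughout this thread (IG, SO, RR) can be sketched in a few lines. The following is a minimal illustration on a least-squares objective, not the paper's implementation; all names and hyperparameters are chosen for the example only.

```python
import numpy as np

def shuffled_sgd(A, b, eta=0.01, epochs=50, scheme="RR", seed=0):
    """Shuffled SGD on f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2.

    scheme: "IG" (fixed incremental order), "SO" (one permutation
    sampled once and reused every epoch), "RR" (a fresh permutation
    each epoch). Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    perm = rng.permutation(n) if scheme == "SO" else np.arange(n)
    for _ in range(epochs):
        if scheme == "RR":
            perm = rng.permutation(n)
        for i in perm:  # one full cycle over the n components
            x -= eta * (A[i] @ x - b[i]) * A[i]
    return x

# toy realizable problem: the optimum attains zero loss
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 5))
b = A @ rng.normal(size=5)
losses = {
    s: 0.5 * np.mean((A @ shuffled_sgd(A, b, scheme=s) - b) ** 2)
    for s in ("IG", "SO", "RR")
}
```

All three schemes take exactly one component gradient step per component per epoch; they differ only in the order of the cycle, which is what the permutation-dependent bounds in the paper track.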
Summary: This paper considers the primal-dual aspect of SGD using sampling without replacement (shuffled SGD). The authors present results for shuffled SGD in smooth and non-smooth convex settings with tight convergence bounds for several shuffling schemes (IG, SO, and RR). In some specific settings, the convergence rate can be improved by a factor of $\sqrt{n}$. The authors perform experiments to demonstrate that their bound is tight and better than prior work. Strengths: The theoretical results seem to be solid, although I did not check the proofs. The authors choose to investigate a different type of smoothness constant compared to prior work, thus leading to an improvement in the bound. In addition, they study the primal-dual aspect for linear predictors and then extend the results to the non-smooth setting. Weaknesses: The improvement in the convergence is not significant as it only changes the Lipschitz constant. The authors should be more transparent about the setting where there is a $\sqrt{n}$ improvement, as it seems like this only applies to linear predictors (the claim in the abstract, lines 14-17, is not clear). The authors should include that context in their statements. Another weakness is that the bounds in Section 3 depend on the data. Technical Quality: 2 Clarity: 2 Questions for Authors: Why do you call RR and SO uniformly random shuffling? How do the results in Section 3 reduce and compare to general convex finite-sum problems when the $a_i$ are all-ones vectors, i.e., the loss function applies directly to $x$? I have read the author responses. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We thank the reviewer for their feedback and kindly request them to consider our responses below when evaluating our work. We appreciate the opportunity to further engage in a discussion, as needed.** --- ### Weaknesses > The improvement in the convergence is not significant as it is only changing the Lipschitz constant. We respectfully disagree with this point, as obtaining a dependence on a fine-grained smoothness/Lipschitzness constant directly leads to up to $O(\sqrt{n})$ complexity improvement. Dependence on other problem parameters cannot be improved, by the lower bound results in [1]. > The authors should be more transparent about the setting where there is $\sqrt{n}$ improvement as it seems like this only applies to linear predictors (the claim in abstract, line 14-17 is not clear). For general convex smooth problems, one can also expect $O(\sqrt{n})$ improvement, as our results improve the dependence from the max to average smoothness. > Another weakness is that the bounds in Section 3 depend on the data. We note that such data-dependent bounds are generally appreciated in theory, because they automatically predict faster convergence than the worst-case analysis based on component smoothness. --- ### Questions > Why do you call RR and SO uniformly random shuffling? It is because the permutation is chosen uniformly at random over the set of possible permutations. This terminology is standard in the shuffled SGD literature [1–3]. > How the results in section 3 reduces and compares to general convex finite-sum problems when $a_i$ are all-$1$ vectors i.e. loss function applies on $x$? We first note that having all $1$s in $a_i$ does not reduce to the general convex finite sum setting, as in that case each $f_i$ would be univariate, dependent on the sum of the entries of $x$. 
For the comparison between the results in two sections, we note that our results in Theorem 2 provide a tighter bound than directly applying Theorem 1 to the generalized linear models. This is because Eq. (3) is a tighter estimate than Eq. (2), where the data matrix $A$ and the smoothness constants from the nonlinear part $\Lambda$ are separated in Eq. (3). In particular, consider the case where $\ell_i$ are all $1$-smooth for simplicity and $f_i = \ell_i(\langle a_i, x\rangle)$ is $\\|a_i\\|^2$-smooth. For brevity, we omit the permutation notation and let the batch size be $1$, then Eq. (3) and Eq. (2) reduce to the operator norms of two $n \times n$-dimensional matrices $M$ and $N$ normalized by $1/n^2$ respectively, where their entries are given by $M_{ij} = \min\\{i, j\\} a_i^\top a_j$ and $N_{ij} = \min\\{i, j\\} \\|a_i\\| \\|a_j\\|$. Note that the operator norm of $M$ is always smaller than the operator norm of $N$, because the absolute value of each element of $M$ is no larger than the corresponding element of $N$, by the Cauchy-Schwarz inequality. Further, for the case where all the row norms are scaled to $1$ (the average and the maximum component smoothness constants would both be equal to $1$), our results would still be tighter by a factor as large as $\sqrt{n}$. This is implied by our Gaussian example discussed in Lines 251-260 in Section 4, using concentration of measure on the sphere. For general convex cases, we refer to Appendix B for more discussion. --- ### References [1] Jaeyoung Cha, Jaewook Lee, and Chulhee Yun. Tighter lower bounds for shuffling SGD: Random permutations and beyond. In Proc. ICML'23, 2023. [2] Konstantin Mishchenko, Ahmed Khaled, and Peter Richtárik. Random reshuffling: Simple analysis with vast improvements. In Proc. NeurIPS’20, 2020. [3] Lam M Nguyen, Quoc Tran-Dinh, Dzung T Phan, Phuong Ha Nguyen, and Marten Van Dijk. A unified convergence analysis for shuffling-type gradient methods. 
The Journal of Machine Learning Research, 22(1):9397–9440, 2021.
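The matrix comparison in the response above (entrywise Cauchy-Schwarz domination implying an operator-norm ordering between the Eq. (3)- and Eq. (2)-style matrices) can be checked numerically. A hedged NumPy sketch, with dimensions and data chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 60, 40
A = rng.normal(size=(n, d))          # rows a_i, a fixed ordering (batch size 1)

idx = np.arange(1, n + 1)
min_ij = np.minimum.outer(idx, idx)   # the min{i, j} weights

gram = A @ A.T                        # entries a_i^T a_j
row_norms = np.linalg.norm(A, axis=1)

M = min_ij * gram                               # Eq. (3)-style: min{i,j} a_i^T a_j
N = min_ij * np.outer(row_norms, row_norms)     # Eq. (2)-style: min{i,j} ||a_i|| ||a_j||

# Cauchy-Schwarz gives |M_ij| <= N_ij entrywise; since N is entrywise
# nonnegative, the spectral norms inherit the same ordering.
op_M = np.linalg.norm(M, 2)
op_N = np.linalg.norm(N, 2)
```

For i.i.d. Gaussian rows the ratio `op_N / op_M` is typically well above 1, consistent with the tighter Theorem 2 bound.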
Summary: This paper focuses on SGD with shuffling for finite-sum minimization problems. While there exist tight upper and lower convergence bounds for SGD assuming a global smoothness constant $L_{\max}$, the proposed results give a more fine-grained analysis in terms of the component-wise smoothness constants $L_i$ to show possibly faster convergence rates using Fenchel dual arguments. By replacing the $L_{\max}$'s with smaller constants $\tilde{L}, \hat{L}$, the new results can improve the iteration complexity by up to a factor of $L_{\max}/\hat{L} = O(n)$, as demonstrated in the linear predictor example. This work also extends to non-smooth, Lipschitz objectives in the case of linear predictors, and provides empirical evidence supporting the claims. Strengths: - The idea of the ‘primal-dual’ formulation, which essentially allows one to exploit coordinate-descent-like techniques for better dependencies on the $L_i$'s, is new to me and seems to be a potentially useful technique. I agree that using component-wise structures for improved convergence rates could be one of the many meaningful future directions for the shuffling community. - The paper presents results that consider general types of shuffling-based algorithms (RR, SO, IG), which I appreciate. Weaknesses: - Focusing on the results, the only big difference with previous work would be the constant $L_{\max}$ being replaced with $\tilde{L}, \hat{L}$, which means that the improvements are directly linked with how small the ratios $\tilde{L}/L_{\max}, \hat{L}/L_{\max}$ could actually be. While Table 2 suggests that $L_{\max}$ could be quite large (for the linear predictor case), it is quite hard to see at first glance how ‘small’ the values $\tilde{L}, \hat{L}$ are, due to the relatively complicated definitions. The paper explains the gap for Gaussian data, but I think this example is a bit too ‘specific’. 
- Moreover, based on my understanding, the definitions of $\tilde{L}, \hat{L}$ for the general convex case and the linear predictor case seem to be a bit different. It is still unclear whether these fine-grained bounds could imply a significantly *faster* convergence rate, especially for the general convex case. - Minor Stuff: I might have missed this, but it seems that the definition of $\boldsymbol{A}_{\pi}$ (which I think is $\boldsymbol{A}_{\pi} = [a_{\pi_1} \cdots a_{\pi_n}]^{\top}$) is missing in Section 3.1. Also, the proofs are all written for batch size $b$, which might be confusing to readers: maybe it would have been better if the main body also contained statements including $b$. - TYPO: Line 9 of Algorithm 1, $m$ → $n$ - TYPO: Line 473 mini-bath → mini-batch Technical Quality: 3 Clarity: 3 Questions for Authors: - Is there a way to intuitively understand when or why $\tilde{L}/L_{\max}, \hat{L}/L_{\max}$ could be ‘much’ smaller than $1$ (with possible $n^{\alpha}$-ish dependencies), both for the general convex and the linear predictor cases? Or are there at least any examples apart from the linear predictor that demonstrate significantly small $\tilde{L}/L_{\max}, \hat{L}/L_{\max}$? - Based on my knowledge, in coordinate descent algorithms it is common to use *different* step sizes for different coordinates, say $\eta_i = 1/L_i$, to improve dependencies on the $L$’s. Would it be possible to do something similar for this case if we take the primal-dual approach? - Can you elaborate a bit more on why the inequality $(iii)$ of $(4)$ (in Section 4) is loose by a factor of $n$ in most cases? My understanding of the linear predictor case is that the norm-trace gap corresponds to the largest VS sum of singular values, and hence if the singular values are even, then the gap can be as loose as a factor of $n$, while if we have a single large $\lambda_{\max}$ then the inequality can be tight. 
I wonder if (i) this interpretation is correct, and (ii) whether something similar could hold for the $\tilde{L}^g, \hat{L}^g$’s in the general convex case. - Minor question: Is it true Algorithm 1 is completely equivalent to the standard SGD with random reshuffling (without the $\boldsymbol{y}$’s), and the dual variables are just there for illustrative purposes? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations, and there seems to be no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We appreciate the questions the reviewer raised. We hope that the answers provided below address the reviewer’s concerns and that the reviewer would consider reevaluating our work. We will fix the typos, update statements in the main body to apply to batch size $b$, and add more discussions in the revision, as suggested. Please let us know if there is any additional information that would be helpful.** --- ### Weaknesses > W1: While Table 2 suggests that $L_{\mathrm{max}}$ could be quite large (for the linear predictor case), it is quite hard to see how ‘small’ the values $\tilde{L}$, $\hat{L}$ are in first glance due to the relatively complicated definitions. The paper explains the gap for Gaussian data, but I think this example is a bit too ‘specific’. We share the same view that it may be difficult to directly compare $\tilde{L}$, $\hat{L}$ against $L_{\mathrm{max}}$, which is why we conducted numerical computations on 15 popular machine learning datasets to illustrate the gap between $\hat{L}$ and $L_{\mathrm{max}}$. In some cases, we demonstrate that the gap can be as large as $O(n)$ in the linear predictor settings, suggesting the effectiveness of our fine-grained analysis through a primal-dual perspective. > W2: Moreover, based on my understanding, the definitions of $\tilde{L}$, $\hat{L}$ for the general convex case and the linear predictor case seem to be a bit different. It is still unclear whether these fine-grained bounds could imply a significantly faster convergence rate, especially for the general convex case. The reviewer is correct on the difference between the two cases, while such a difference is intuitive and desirable. For the general convex case, one can view our fine-grained bounds as improving from maximum (in previous works [1–3]) to average smoothness, which could also lead to $O(\sqrt{n})$ improvement in the final complexity (when component smoothness parameters are highly nonuniform). 
For linear predictors, the bounds are more informative (and in our opinion, interesting), as they are directly dependent on the data matrix. We will further clarify these points in our revision. --- ### Questions > Q1: Is there a way to intuitively understand when or why $\tilde{L}/L_{\mathrm{max}}$, $\hat{L}/L_{\mathrm{max}}$ could be ‘much’ smaller than $1$ (with possible $n^{\alpha}$-ish dependencies), both for the Or are there at least any examples apart from the linear predictor that demonstrate significantly small $\tilde{L}/L_{\mathrm{max}}$, $\hat{L}/L_{\mathrm{max}}$? For linear predictors, the main source of the improvement can be seen as our bounds being dependent on the operator (as opposed to Frobenius) norm of the data matrices, as discussed in Lines 241-250 in Section 4. The operator-to-Frobenius norm relaxation is almost always loose, and often by a factor of $n$ for $n \times n$ matrices, as discussed in Lines 249-250. At an intuitive level, we expect this relaxation to be loose (and our bound to be tighter) when there is weak correlation between the data points, which we take to an extreme in our Gaussian data example (but similar conclusions could be drawn for e.g., sub-Gaussian data). We expect this to be the case in datasets where data is collected from independent sources, which is a standard assumption for even being able to guarantee good statistical properties pertaining to learning. For general convex smooth functions, our dependence on the smoothness also improves from the maximum to the average, leading to $O(n)$ improvement when the component smoothness constants are highly nonuniform. > Q2: Based on my knowledge, in coordinate descent algorithms it is common to use different step sizes for different coordinates, say $\eta_i = 1/L_i$, to improve dependencies on the $L$’s. Would it be possible to do something similar for this case if we take the primal-dual approach? This is a good point. 
We first note that our focus is not on proposing new algorithms but deriving tighter bounds for shuffled SGD, so we stick to the constant step size over all components/inner iterations, which agrees with previous works on shuffled SGD [1–3] and empirical practice. On the other hand, deploying different step sizes for each component gradient update based on the component smoothness would be an interesting approach. We believe it is possible to incorporate different step sizes in our analysis by carrying out the argument using a weighted $\ell_2$ norm, similar to what was done for coordinate methods in nonconvex settings [4]. However, this is out of scope of the present work, and we leave it for future research. > Q3: Can you elaborate a bit more on why the inequality (iii) of (4) (in Section 4) is loose by a factor of $n$ in most cases? My understanding of the linear predictor case is that the norm-trace gap corresponds to the largest VS sum of singular values, and hence if the singular values are even, then the gap can be as loose as a factor of n, while if we have a single large $\lambda_{\mathrm{max}}$ then the inequality can be tight. I wonder if (i) this interpretation is correct, and (ii) whether something similar could hold for the $\tilde{L}^g$, $\hat{L}^g$’s in the general convex case. The reviewer’s interpretation is correct. This is exactly what happens in our Gaussian example, but one can generalize. For general convex settings, there is no specific structure of data matrices in the optimization objective, so the best one may expect is the improvement from the max (in previous works [1–3]) to the average smoothness constant. We discuss those in Lines 493-501 in Appendix B. > Q4: Minor question: Is it true Algorithm 1 is completely equivalent to the standard SGD with random reshuffling (without the $\mathbf{y}$’s), and the dual variables are just there for illustrative purposes? 
Yes, the primal-dual version is provided for convenience of the primal-dual analysis; there is no difference in the actual algorithm. --- Rebuttal 2: Title: References for the rebuttal Comment: ### References [1] Jaeyoung Cha, Jaewook Lee, and Chulhee Yun. Tighter lower bounds for shuffling SGD: Random permutations and beyond. In Proc. ICML'23, 2023. [2] Konstantin Mishchenko, Ahmed Khaled, and Peter Richtárik. Random reshuffling: Simple analysis with vast improvements. In Proc. NeurIPS’20, 2020. [3] Lam M Nguyen, Quoc Tran-Dinh, Dzung T Phan, Phuong Ha Nguyen, and Marten Van Dijk. A unified convergence analysis for shuffling-type gradient methods. The Journal of Machine Learning Research, 22(1):9397–9440, 2021. [4] Xufeng Cai, Chaobing Song, Stephen J Wright, and Jelena Diakonikolas. Cyclic block coordinate descent with variance reduction for composite nonconvex optimization. In Proc. ICML'23, 2023. --- Rebuttal Comment 2.1: Comment: Thank you for your detailed and precise answers. --- Reply to Comment 2.1.1: Title: A small request for clarification Comment: Thank you for reading through our rebuttal. We are glad to learn you found our answers detailed and precise. We noticed that you kept your score as 'borderline' and wanted to kindly ask if you could provide some additional insight into your reasoning. Understanding your perspective would be valuable for us to address any remaining concerns and improve the work further. Thank you in advance, Authors
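The operator-to-Frobenius gap invoked in the Gaussian example above can also be illustrated with a quick numerical check. The sketch below assumes i.i.d. Gaussian rows scaled to unit norm (as in the discussion of Lines 251-260); it is an illustration of the concentration claim, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
A = rng.normal(size=(n, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)  # unit-norm rows

# After normalization the squared row norms sum to n, so the
# Frobenius norm is exactly sqrt(n), while for weakly correlated
# rows the operator norm concentrates around a constant.
fro = np.linalg.norm(A, "fro")   # = sqrt(n)
op = np.linalg.norm(A, 2)        # O(1) for this random model

gap = fro / op                   # grows roughly like sqrt(n)
```

With all row norms equal to 1, the maximum and average component smoothness constants coincide, yet the data-dependent bound (driven by `op`) is smaller than the worst-case bound (driven by `fro`) by roughly a `sqrt(n)` factor.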
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their precious time and valuable feedback. In this top-level rebuttal, we reiterate the contributions and strengths of our work. --- We first summarize our main contribution by quoting the Review S4vW where they acutely pointed out > This paper aims at improving existing bounds on random reshuffling. While lower bounds are tight in the worst case, refined smoothness definitions allow to take a larger step-size in favorable cases, which in turns allows faster convergence. --- Further, we are encouraged by reviewers recognizing the following aspects of our work: - **Improved complexity**: All reviewers recognize that our fine-grained analysis provides improved complexity over previous worst-case analyses. - **Novel primal-dual view**: * > (Reviewer JY5g) “New to me and seems to be a potentially useful technique, … using component-wise structures for improved convergence rates could be one of the many meaningful future directions for the shuffling community”, * > (Reviewer S4vW) “Nice use of the finite-sum assumption, which allows to use a primal-dual reformulation”. - **Extensive study**: * > (Reviewer JY5g) “consider general types of shuffling-based algorithms (RR, SO, IG) which I appreciate”, * > (Reviewer Ua2v) “extend the results to non-smooth setting”, * > (Reviewer S4vW) “As a side note, results include mini-batching, which is nice to have”. - **Writeup**: Three reviews rate our soundness 3 and presentation 3 (Review JY5g, Review EeyJ, Review S4vW). It is also particularly mentioned that * > (Reviewer EeyJ) “The paper is well written and easy to follow, contributions are clearly stated (Table 1)”. --- We look forward to interacting with the reviewers and providing any further explanations as needed. Sincerely, Authors
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
IODA: Instance-Guided One-shot Domain Adaptation for Super-Resolution
Accept (poster)
Summary: The author uses CLIP to assist in extracting the representation of target domain samples and implements a one-shot domain adaptation framework. Strengths: The introduction of CLIP improves the performance of one-shot domain adaptation. Weaknesses: Although the method is effective, it does not appear to be the first time that CLIP has been used for domain adaptation. Moreover, this work seems to merely combine Alpha-CLIP with SR. In addition, the training scenario for CLIP may involve cross-domain scenarios of super-resolution (SR); however, the author did not explain this issue. Technical Quality: 3 Clarity: 3 Questions for Authors: Why can Alpha-CLIP enhance the performance of SR when other versions of CLIP cannot? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The proposed method exhibits slow inference speed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question1:** Why can Alpha-CLIP enhance the performance of SR when other versions of CLIP cannot? **Response:** Conventional CLIP-based SR network domain adaptation methods face limited target domain diversity when using a single target domain LR image. To address this problem, we propose an instance-guided target domain distribution expansion method. Similar to human perception of the environment, which achieves comprehensive understanding by repeatedly focusing on different targets within a scene, we generate multiple image features from a single LR image, each focusing on different regions. We argue that focusing on instances within the image, rather than randomly selected regions, can implicitly introduce high-level semantic information. Thus, we optimize the focus areas to target different instances. Alpha-CLIP, a new model introduced at CVPR24, enhances the standard CLIP by allowing specified focus areas. This makes it an ideal feature extractor for our approach. We first use the SAM model to construct an instance mask pool, then select a certain number of instance masks to guide Alpha-CLIP to focus on different instances in the image, ultimately expanding the target domain feature distribution. Therefore, alpha-CLIP's unique ability to specify focus areas makes it indispensable for our proposed method. **Question2:** Although the method is effective, it does not appear to be the first time that CLIP has been used for domain adaptation. Moreover, this work seems to merely combine Alpha-CLIP with SR. In addition, the training scenario for CLIP may involve cross-domain scenarios of super-resolution (SR). However, the author did not explain this issue. **Response:** The CLIP model, trained on millions of data samples, covers a wide range of scenarios, including various lighting conditions and degradation models, and possesses rich prior knowledge and strong generalization. 
This makes it widely used in downstream tasks such as domain adaptation for object detection and image generation, leveraging CLIP's extensive prior knowledge and strong generalization capabilities for efficient domain adaptation. Therefore, this paper introduces CLIP for the first time in the domain adaptation for low-level tasks like super-resolution, using its rich prior information for SR domain adaptation. However, existing CLIP-based domain adaptation methods cannot be directly applied to SR tasks, as SR focuses more on restoring low-level features like texture details, while existing tasks emphasize high-level semantic information. To address this, we propose an image-guided domain adaptation method for SR tasks. Additionally, to address the limited diversity in target domain distribution caused by single target domain sample scenarios, we introduce an instance-guided target domain distribution expansion strategy. Our innovative approach efficiently enhances distribution diversity by focusing on different instances. In the first innovation, we specifically optimize domain adaptation networks for super-resolution by proposing image-guided domain adaptation instead of text-guided approaches. In the second innovation, to address the limited diversity in target domain distribution, we introduce an instance-guided diversity expansion strategy. This is the first use of Alpha-CLIP for enhancing target domain feature distribution, with a designed instance region partitioning scheme rather than random mask shapes. **Question3:** Explanation and optimization of network adaptation training time consumption **Response:** Existing domain adaptation methods for SR can be broadly classified into test-time domain adaptation methods and adversarial generated domain adaptation methods. 
The test-time adaptation methods need to manually and explicitly model the degradation and perform domain adaptation training for each image individually, which can be time-consuming for relatively large target domain datasets (see Table 2 of the Rebuttal materials) and complex to model manually. As summarized in Section 2.1 of the paper, the proposed IODA method utilizes a single LR image from the target domain for adaptation training, enabling the network to adapt from the source domain to the target domain. IODA requires only one adaptation training session on a single LR image from the target domain to perform super-resolution inference on the entire target domain dataset, eliminating the need for repeated domain adaptation training on other target domain LR images. Additionally, the IODA method leverages the rich prior semantic information from CLIP, resulting in lower training time compared to adversarial generated domain adaptation methods for the SR task (DASR_oneshot and DASR (Domain)). Additionally, the domain adaptation training time of IODA can be optimized. When the source and target domains have similar distributions in high-level semantic information, the adaptation training time can be significantly reduced. For example, as shown in Table 2 of the Rebuttal materials, when the source domain is the daytime driving dataset Cityscapes and the target domain is the rainy driving dataset ACDC_rain, the training time is reduced to 1.16 minutes. In addition, selective fine-tuning of network parameters is also an effective strategy for speed improvement, which will be the focus of our future work. --- Rebuttal 2: Comment: Dear Reviewer ohtL, We sincerely appreciate the time and effort you have invested in reviewing our paper. We would like to inquire whether our response has addressed your concerns and if you have the time to provide further feedback on our rebuttal. We are more than willing to engage in further discussion. Best regards, Authors of paper 4658.
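The instance-guided distribution expansion described in this thread (a SAM-built mask pool steering Alpha-CLIP to focus on different instances of one LR image) can be sketched at a high level. Everything below is a stand-in: `stub_alpha_clip_encode` is a toy random-projection encoder, and the mask pool is random rectangles rather than SAM instance masks; none of this is the real Alpha-CLIP or SAM API.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, DIM = 16, 16, 32
proj = rng.normal(size=(H * W, DIM)) / np.sqrt(H * W)  # fixed stub "encoder"

def stub_alpha_clip_encode(image, mask):
    """Stand-in for Alpha-CLIP's mask-conditioned encoder: embed the
    mask-weighted image with a fixed random projection (illustration only)."""
    focused = (image * mask).reshape(-1)
    feat = focused @ proj
    return feat / (np.linalg.norm(feat) + 1e-8)

def random_rect_mask():
    """Stand-in for one SAM instance mask: a random 4x4 rectangle."""
    m = np.zeros((H, W))
    r, c = rng.integers(0, H - 4), rng.integers(0, W - 4)
    m[r:r + 4, c:c + 4] = 1.0
    return m

# a single target-domain LR image and a stub "instance mask pool"
image = rng.random((H, W))
mask_pool = [np.ones((H, W))] + [random_rect_mask() for _ in range(7)]

# instance-guided expansion: one feature per focused instance,
# rather than a single global feature for the whole image
features = np.stack([stub_alpha_clip_encode(image, m) for m in mask_pool])

# the expanded feature set is more diverse than repeating one embedding
spread = features.std(axis=0).mean()
```

The point of the sketch is the structure, not the encoders: a single image yields a pool of mask-conditioned features whose spread expands the target domain distribution, mirroring the "focus on different instances" mechanism the rebuttal describes.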
Summary: This paper proposes a framework of efficient instance-guided one-shot domain adaptation (abbr. IODA) with only one unlabeled target domain LR image for addressing image super-resolution (SR) issues. On top of that, it designs an instance-guided target domain distribution expansion strategy to expand the diversity of the domain distribution, thus enhancing one-shot DA performance. Extensive ablation studies across multiple datasets and networks have shown the effectiveness of IODA. Strengths: 1. The paper provides a new perspective for solving the problem of image super-resolution based on the domain adaptation method, especially in resource-constrained situations. 2. IODA achieves efficient domain adaptation by using a single unlabeled low-resolution (LR) image of the target domain. Moreover, the instance-guided target domain distribution expansion strategy prevents pattern collapse by expanding the diversity of the domain distribution. These facilitate IODA's practical application in the real world. Weaknesses: 1. In order to clearly illustrate the methodology of this paper, the description of the specific implementation of the image-guided domain adaptation and instance-guided target domain distribution expansion strategies needs to be further supplemented and refined, in particular with respect to the associated constraints. 2. The evaluation of SR results cannot be limited to pixel-oriented PSNR and SSIM; the evaluation of visual perceptual effects is equally important, especially in real-world applications. In addition, visual demonstrations should be provided in the main text, not just in an appendix. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. During model training, is the selection of LR images for unlabeled target domains random? Or does it need to be based on some criteria that are not stated in this manuscript? 2. 
In practice, is it also necessary to add some screening criteria or conditional restrictions to the distribution expansion strategy to limit its scope? If the selection is random, how to ensure its representativeness? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: While IODA provides inspiration for subsequent solutions to the task of domain adaption-based SR, it may encompass too many existing approaches such as CLIP, Alpha-CLIP, SAM, and it would further enhance the significance and value of this work if some specialized designs or optimizations could be proposed for the current research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question1:** During model training, is the selection of LR images for unlabeled target domains is random? Or does it need to be based on some criteria that are not stated in this manuscript? **Response:** To ensure the reliability of the experimental results, we repeated each experiment 5 times and took the average result, selecting different images from the dataset each time as the single LR image for training in the target domain. In the ablation studies, to avoid the impact of different target domain images on module performance, we kept the target domain images consistent across all ablation experiments, using the first 5 images from the target domain dataset. **Question2:** In practice, is it also necessary to add some screening criteria or conditional restrictions to the distribution expansion strategy to limit its scope? If the selection is random, how to ensure its representativeness? **Response:** The proposed target domain distribution expansion strategy primarily extends the CLIP spatial features by introducing Alpha-CLIP, which generates features focusing on different instances. This method does not introduce new objects into an image, unlike other data augmentation methods such as Cut-mix, which might place a cat into an image full of dogs. Instead, it ensures that the network, when generating multiple feature maps, has each feature map focus on different dogs within the image. For example, one feature map focuses on a standing dog, while the next focuses on a lying dog. This approach avoids disrupting the original feature distribution with newly introduced feature distributions. This method is similar to the human visual sensory mechanism, where one repeatedly focuses on different objects within a scene to achieve a comprehensive understanding of the scene. 
**Question3:** The evaluation of SR results cannot be limited to pixel-oriented PSNR and SSIM; the evaluation of visual perceptual effects is equally important, especially in real-world applications. In addition, visual demonstrations should be provided in the main text, not just in an appendix. **Response:** As shown in Figure 1 of the Rebuttal materials, we provide additional visualizations to demonstrate the effectiveness of the proposed method. **Question4:** Regarding the use of CLIP, Alpha-CLIP, and SAM. **Response:** In this paper, we address the specific challenges of domain adaptation for low-level tasks like super-resolution by proposing an image-guided domain adaptation method. This method leverages the focus on detailed textures in super-resolution tasks, using images to guide the adaptation process and overcoming the limitations of text-guided approaches in representing fine details. Additionally, to address the limited diversity of the target domain distribution in single-sample scenarios, we introduce an instance-guided target domain distribution expansion strategy, which efficiently enhances distribution diversity by focusing on different instances. In the first innovation, we specifically optimize domain adaptation networks for super-resolution by proposing image-guided rather than text-guided domain adaptation. In the second innovation, we introduce an instance-guided diversity expansion strategy; this is the first use of Alpha-CLIP for enhancing the target domain feature distribution, with a designed instance region partitioning scheme rather than random mask shapes. Many excellent works utilize large models like CLIP, Llama, and SAM as the foundation for further exploration; for example, current domain adaptation networks for object detection use CLIP for guidance, and we do not deny the contributions of their work. 
Similarly, this paper employs powerful models like Alpha-CLIP and SAM as tools to achieve our objectives. --- Rebuttal 2: Comment: Dear Reviewer G4LY, We sincerely appreciate the time and effort you have invested in reviewing our paper. We would like to inquire whether our response has addressed your concerns and if you have the time to provide further feedback on our rebuttal. We are more than willing to engage in further discussion. Best regards, Authors of paper 4658.
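The image-guided adaptation defended in this rebuttal aligns the direction between source and target domains in CLIP feature space, in the spirit of StyleGAN-NADA's directional loss. A minimal sketch of such a directional objective, with plain Python lists standing in for CLIP image embeddings (the actual encoders and feature dimensions are not reproduced here):

```python
import math

def cosine(u, v):
    """Cosine similarity of two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / (norm + 1e-12)

def directional_loss(e_src_lr, e_tgt_lr, e_src_sr, e_tgt_sr):
    """Encourage the SR outputs to move in embedding space along the same
    direction that separates the target-domain LR from the source-domain LR."""
    domain_dir = [t - s for s, t in zip(e_src_lr, e_tgt_lr)]   # LR-to-LR direction
    output_dir = [t - s for s, t in zip(e_src_sr, e_tgt_sr)]   # SR-to-SR direction
    return 1.0 - cosine(output_dir, domain_dir)
```

When the SR-output direction is parallel to the LR domain direction the loss approaches 0; orthogonal directions give a loss of 1, penalizing adaptation that drifts away from the target domain.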
Summary: The paper presents a novel approach to one-shot domain adaptation for super resolution. The key idea is to use the CLIP directional vector between low resolution source and target domain images to guide the SR image generation in the target domain. They further use occlusion masks to increase the performance of the model by providing pre-trained Alpha-CLIP with different region-range masks, enabling it to generate Alpha-CLIP spatial features focused on different areas of the image, thereby expanding the diversity of the target domain distribution. Strengths: - The paper has a well-organized structure and expresses its ideas clearly. - Ablation analysis is provided in the paper to study the effect of various components of the pipeline on the downstream performance. - The idea is pretty interesting and new and the motivation is sound. Weaknesses: - Given the high time complexity of the approach, the performance gains over the baseline seem to be small. - The SR performance of the model needs to be compared against stronger SR baselines to better analyze the performance gains. - More visualization analysis is needed in the paper to better support the performance gains. Technical Quality: 3 Clarity: 3 Questions for Authors: - Can you compare the performance of your models against the other domain adaptation-based SR approaches like [12] and [13]? - Can you provide a more comprehensive visualization analysis to better support your performance claims? - Can you provide a comparison of the efficiency between this method and other methods? A comparison that considers both efficiency and PSNR/SSIM would be more reasonable. [12] Yunxuan Wei, Shuhang Gu, Yawei Li, Radu Timofte, Longcun Jin, and Hengjie Song. Unsupervised real-world image super resolution via domain-distance aware training. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13380–13389, 2021. [13] Wei Wang, Haochen Zhang, Zehuan Yuan, and Changhu Wang. 
Unsupervised real-world super-resolution: A domain adaptation perspective. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 4298–4307, 2021. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The main limitation of this paper is the relatively low training efficiency, which would impose certain limitations in practical scenarios. Although the idea looks nice, as mentioned in the paper, it takes approx 10 mins to generate an SR version of a single image. Therefore, I'm not completely convinced about its complexity. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question1:** Can you compare the performance of your models against the other domain adaptation-based SR approaches like [12] and [13]? **Response:** As shown in Table 1 of the Rebuttal materials (Rebuttal.pdf), we have additionally included domain adaptive methods for super-resolution, DADA [1] and SRTTA [2], in our comparative experiments. Currently, we compared super-resolution domain adaptive methods DASR [3], DADA [1], SRTTA [2], and ZSSR [4]. Notably, we compared the performance of DASR with a single target domain LR image and with multiple LR images. Additionally, we also evaluated the performance of general domain adaptive methods on SR tasks (StyleGAN-NADA [5], PØDA [6]). Reference [13] did not release its source code, so it was not included in the comparison experiments. **Question2:** Can you provide a more comprehensive visualization analysis to better support your performance claims? **Response:** As shown in Figure 1 of the Rebuttal materials (Rebuttal.pdf), we provide additional visualizations to demonstrate the effectiveness of the proposed method. **Question3:** Can you provide a comparison of the efficiency between this method and other methods? A comparison that considers both efficiency and PSNR/SSIM would be more reasonable. **Response:** As shown in Table 1 of the Rebuttal materials (Rebuttal.pdf), we have additionally included inference efficiency metrics to demonstrate the effectiveness of the method. **Question4:** Explanation and optimization of network adaptation training time consumption **Response:** Existing domain adaptation methods for SR can be broadly classified into test-time domain adaptation methods and adversarial generated domain adaptation methods. 
Test-time adaptation methods need to explicitly model degradation by hand and perform domain adaptation training for each image individually, which can be time-consuming for relatively large target domain datasets (see Table 2 of the Rebuttal materials) and requires complex manual modeling. As summarized in Section 2.1 of the paper, the proposed IODA method utilizes a single LR image from the target domain for adaptation training, enabling the network to adapt from the source domain to the target domain. IODA requires only one adaptation training session on a single LR image from the target domain to perform super-resolution inference on the entire target domain dataset, eliminating the need for repeated domain adaptation training on other target domain LR images. Additionally, the IODA method leverages the rich prior semantic information from CLIP, resulting in lower training time compared to adversarial generated domain adaptation methods for the SR task (DASR_oneshot and DASR (Domain)). Additionally, the domain adaptation training time of IODA can be optimized. When the source and target domains have similar distributions in high-level semantic information, the adaptation training time can be significantly reduced. For example, as shown in Table 2 of the Rebuttal materials, when the source domain is the daytime driving dataset Cityscapes and the target domain is the rainy driving dataset ACDC_rain, the training time is reduced to 1.16 minutes. In addition, selective fine-tuning of network parameters is also an effective strategy for speed improvement, which will be the focus of our future work. --- Rebuttal 2: Comment: Dear Reviewer V6qe, We sincerely appreciate the time and effort you have invested in reviewing our paper. We would like to inquire whether our response has addressed your concerns and if you have the time to provide further feedback on our rebuttal. We are more than willing to engage in further discussion. Best regards, Authors of paper 4658.
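For context on the test-time adaptation pipeline contrasted in the rebuttal above: ZSSR-style methods build a training pair from a single target-domain LR image by downsampling it again, then train on (pseudo-LR, LR) so no HR label is needed, repeating this for every test image. A minimal sketch, with 2x2 mean pooling standing in for the bicubic or learned degradation the real methods use:

```python
def downsample2x(img):
    """2x2 mean pooling over an H x W image (H, W even).
    Stand-in for the bicubic downsampling used by ZSSR-style methods."""
    h, w = len(img), len(img[0])
    return [[(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) / 4.0
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

def make_training_pair(lr_image):
    """The target-domain LR image itself is the supervision target; its
    further-downsampled copy is the network input."""
    pseudo_lr = downsample2x(lr_image)
    return pseudo_lr, lr_image
```

This per-image pairing (and the training it feeds) must be rerun for every target-domain image, which is exactly the time cost the rebuttal contrasts with IODA's single adaptation session.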
Summary: This paper addresses one-shot domain adaptation (OSDA) in the field of super-resolution (SR). It leverages the fact that the content remains unchanged during super-resolution to propose an image-guided domain adaptation method, ensuring consistency by aligning the direction between the source domain and the target domain. The authors highlight the difficulty in learning the target domain's distribution during OSDA in SR and propose increasing data samples using random masking, similar to MAE. They also utilize SAM and Alpha-CLIP to obtain instance-aware representations. The method trains to align the direction between the source and target domains using the representation obtained from Alpha-CLIP. Various experiments demonstrate the effectiveness of the proposed method. Strengths: - They propose an OSDA method for real-world applications of super-resolution (SR). - They effectively utilize foundation models to address the domain adaptation (DA) challenges in the SR task. Weaknesses: There seems to be an insufficient survey of domain adaptation methods in the SR task, specifically those outlined in references [1-4]. Major revision is needed to emphasize the necessity and originality of the proposed method, based on an analysis of these existing methods. Particularly, since [4] addresses test-time adaptation, it is essential to highlight the advantages of one-shot domain adaptation over test-time adaptation. Additionally, the experimental tables predominantly report performance improvements over the baseline, but a comparison with existing domain adaptation methods is also required. [1] Wang, Wei, et al. "Unsupervised real-world super-resolution: A domain adaptation perspective." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. [2] Wei, Yunxuan, et al. "Unsupervised real-world image super resolution via domain-distance aware training." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021. 
[3] Xu, Xiaoqian, et al. "Dual adversarial adaptation for cross-device real-world image super-resolution." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [4] Deng, Zeshuai, et al. "Efficient test-time adaptation for super-resolution with second-order degradation and reconstruction." Advances in Neural Information Processing Systems 36 (2023): 74671-74701. Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the Weaknesses section Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: They've addressed the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question1**: There seems to be an insufficient survey of domain adaptation methods in the SR task, specifically those outlined in references [1-4]. Major revision is needed to emphasize the necessity and originality of the proposed method, based on an analysis of these existing methods. Particularly, since [4] addresses test-time adaptation, it is essential to highlight the advantages of one-shot domain adaptation over test-time adaptation. **Response:** In the paper, we describe domain adaptation for SR tasks in Appendix A1, "One/Zero-shot Domain Adaptation", where references 1 and 2 cited by the Reviewer correspond to references 12 and 13 in the paper. To provide a clearer and more comprehensive overview of domain adaptation methods in the SR task, we will add a section 'Domain Adaptation Methods in SR' in the related work. In this section, we add descriptions of references [3, 4], provide a detailed analysis of test-time domain adaptation networks, and highlight the advantages of one-shot domain adaptation. **Domain Adaptation Methods in SR** The feature distribution differences between the training and test sets led to significant performance degradation of SR networks that performed well on the training set when evaluated on the test set. To address this issue, test-time domain-adaptive SR networks were introduced, treating the training set as the source domain and the test set as the target domain. During inference on the target domain, the network simulated the degradation experienced by target domain LR images to generate additional training samples. Training the network with these simulated samples, which approximated the target domain's degradation, effectively reduced the negative impact of distribution discrepancies on network performance. Shocher et al. [28] can be seen as a test-time domain-adaptive SR network. 
During inference, it performed Bicubic downsampling on target domain LR images to generate pseudo-LR images, simulating the degradation of the target domain LR images. It then used paired target domain LR and pseudo-LR images for supervised training, achieving super-resolution without requiring labels for target domain HR images. Deng et al. [4] argued that the ZSSR network's consideration of only the Bicubic downsampling degradation model was insufficient to represent the more complex degradation models encountered by LR images in real-world scenarios. Therefore, they proposed the SRTTA network, which considered various degradation factors such as GaussianBlur, DefocusBlur, GlassBlur, and GaussianNoise. They used a pre-trained degradation classification network to identify the degradation category of target domain LR images and generated corresponding pseudo-LR images based on this classification. This more accurate degradation modeling enabled SRTTA to achieve better SR performance. Although test-time domain-adaptive SR networks had considered various degradation models, real-world scenarios involved highly complex degradation due to factors such as lighting and imaging devices, which manual degradation models could not fully represent. To address this issue, adversarial generated domain adaptation methods emerged, using generative adversarial networks for implicit modeling of degradation, thus avoiding complex manual modeling. Wang et al. [13] employed a generative network to generate fake LR images paired with high-resolution (HR) images and used a discriminator to constrain the generated LR images to align with the target domain distribution. Subsequently, Wei et al. [12] considered the impact of domain distance between the target domain and the source domain on network domain adaptation training. 
They optimized the network adaptation process based on the domain distance mapped by a discriminator, assigning higher learning weights to samples with higher domain similarity, further enhancing the network’s fit to the target domain. Xu et al. [3] introduced two adversarial adaptation modules to align source domain features with target domain features, achieving effective cross-device domain adaptive super-resolution performance. While adversarial generated domain adaptation networks achieved good performance, they required a large number of target domain samples for network adaptation, making deployment challenging in real-world scenarios. Testing-time domain adaptation methods could perform inference on individual test samples from the target domain, but they required complex manual modeling of the target domain’s degradation model and separate degradation modeling and training for each test image, which was time-consuming. The CLIP model, trained on millions of data samples, covers a wide range of scenarios, including various lighting conditions and degradation models, and possesses rich prior knowledge and strong generalization. Therefore, we proposed the IODA method, which leverages CLIP’s extensive prior knowledge to guide domain adaptation for SR networks. IODA performs domain adaptation using only a single LR image from the target domain without requiring HR image labels. Furthermore, when performing SR inference on a batch of data, domain adaptation training is required only for the first image, enabling efficient inference for subsequent images. **Question2**: Additionally, the experimental tables predominantly report performance improvements over the baseline, but a comparison with existing domain adaptation methods is also required **Response:** As shown in Table 1 of the Rebuttal materials, we have additionally included domain adaptive methods for super-resolution, DADA and SRTTA, in our comparative experiments. 
Currently, we compared super-resolution domain adaptive methods DASR, DADA, SRTTA, and ZSSR. Notably, we compared the performance of DASR with a single target domain sample and with multiple samples. Additionally, we also evaluated the performance of general domain adaptive methods (StyleGAN-NADA, PØDA). --- Rebuttal 2: Comment: Dear Reviewer wDY8, We sincerely appreciate the time and effort you have invested in reviewing our paper. We would like to inquire whether our response has addressed your concerns and if you have the time to provide further feedback on our rebuttal. We are more than willing to engage in further discussion. Best regards, Authors of paper 4658. --- Rebuttal Comment 2.1: Comment: Thank you for the responses, but I still believe that this paper requires a major revision, and my concerns are as follows: In the introduction section, it is stated that existing methods need to train on many low-resolution (LR) images from the target domain (L39) and that there is no work addressing this issue (L53). However, the authors should carefully survey and compare related studies, including those I mentioned in my initial review, to demonstrate the necessity and distinctiveness of this research. The current survey does not seem thorough enough, and the advantages of this work compared to the mentioned studies are not very convincing. The paper should be reorganized with a focus on addressing these issues. --- Reply to Comment 2.1.1: Title: Revised survey on SR domain adaptation [1/3] Comment: Dear Reviewer, 1. We have further optimized the section on "domain adaptation for SR." 2. It is worth noting that domain adaptation for SR primarily addresses the issue of unpaired SR, with the main approach based on adversarial networks for domain alignment. Due to the nature of adversarial networks, they require a substantial number of samples for training. 
While test-time domain adaptation networks have optimizations for training time, they still require repeated domain adaptation training for each image in the target domain, leading to relatively longer processing times. Our proposed IODA utilizes the rich prior knowledge of Alpha-CLIP for domain adaptation guidance, **requiring adaptation training on only a single LR image from the target domain and eliminating the need for repeated degradation modeling and training on all images.** ## **Domain Adaptation Methods in SR** The feature distribution differences between the training and test sets led to significant performance degradation of SR networks that performed well on the training set when evaluated on the test set. To address this issue, test-time domain-adaptive SR networks [1,2,3,4,5] were introduced, treating the training set as the source domain and the test set as the target domain. During inference on the target domain, the network simulated the degradation experienced by target domain LR images to generate additional training samples. Training the network with these simulated samples, which approximated the target domain's degradation, effectively reduced the negative impact of distribution discrepancies on network performance. Shocher et al. [1] can be seen as a test-time domain-adaptive SR network. During inference, it performed Bicubic downsampling on target domain LR images to generate pseudo-LR images, simulating the degradation of the target domain LR images. It then used paired target domain LR and pseudo-LR images for supervised training, achieving super-resolution without requiring labels for target domain HR images. Soh et al. [2] suggested that the ZSSR [1] network repeatedly performed domain adaptation training from the random initial weights, leading to long training times. Therefore, they attempted to find a universal initial weight parameter to reduce the duration of domain adaptation training. Deng et al. 
[3] argued that the ZSSR network's consideration of only the Bicubic downsampling degradation model was insufficient to represent the more complex degradation models encountered by LR images in real-world scenarios. Therefore, they proposed the SRTTA network, which considered various degradation factors such as GaussianBlur, DefocusBlur, GlassBlur, and GaussianNoise. They used a pre-trained degradation classification network to identify the degradation category of target domain LR images and generated corresponding pseudo-LR images based on this classification. This more accurate degradation modeling enabled SRTTA to achieve better SR performance. Rad et al. [4] constrained fine-tuning samples by actively selecting additional reference samples that optimize fine-tuning efficiency, thereby improving network performance. Additionally, Zhang et al. [5] and Cheng et al. [6] applied the concept of test-time domain adaptation to propose Light Field Super-Resolution and Hyperspectral Image Super-Resolution, respectively. --- Reply to Comment 2.1.2: Title: Revised survey on SR domain adaptation [2/3] Comment: Although test-time domain-adaptive SR networks had considered various degradation models, real-world scenarios involved highly complex degradation due to factors such as lighting and imaging devices, which manual degradation models could not fully represent. To address this, adversarial generated domain adaptation methods emerged [7,8,9,10,11,12,13,14,15,16], using generative adversarial networks for implicit modeling of degradation, thereby avoiding the need for complex manual modeling. Wang et al. [7], Sun et al. [8] and Cong et al. [9] employed a generative network to generate fake LR images paired with high-resolution (HR) images and used a discriminator to constrain the generated LR images to align with the target domain distribution. Subsequently, Fritsche et al. [10] separated high-frequency and low-frequency information for domain adaptation training. 
They considered that texture details correspond to high-frequency information, which is crucial for SR tasks. Therefore, they applied high-frequency filtering before feeding the features into the discriminator, using the discriminator to constrain the high-frequency information, effectively improving SR performance in reconstructing texture details. Ji et al. [11] similarly constrained generated images at the frequency level, using the discriminator for adversarial training on high-frequency information and introducing a Frequency Density Comparator to enable the network to perceive frequency differences at varying sampling rates, further improving SR performance. Huang et al. [12] proposed an RGB-image-guided infrared super-resolution network, effectively reducing the negative impact of RGB image noise on infrared super-resolution performance through frequency-domain constraints. Subsequently, Wang et al. [13] considered the impact of domain distance between the target domain and the source domain on network domain adaptation training. They optimized the network adaptation process based on the domain distance mapped by a discriminator, assigning higher learning weights to samples with higher domain similarity, further enhancing the network’s fit to the target domain. Yin et al. [14] also adopted the concept of distance awareness from [8] and achieved good performance in facial SR tasks. Xu et al. [15] introduced two adversarial adaptation modules to align source domain features with target domain features, achieving effective cross-device domain adaptive super-resolution performance. While adversarial generated domain adaptation networks achieved good performance, they required a large number of target domain samples for network adaptation, making deployment challenging in real-world scenarios. 
Testing-time domain adaptation methods could perform inference on individual test samples from the target domain, but they required complex manual modeling of the target domain’s degradation model and separate degradation modeling and training for each test LR image, which was time-consuming. The Alpha-CLIP model [17], trained on millions of data samples, covers a wide range of scenarios, including various lighting conditions and degradation models, and possesses rich prior knowledge and strong generalization. Therefore, we proposed the IODA method, which leverages Alpha-CLIP’s extensive prior knowledge to guide domain adaptation for SR networks. IODA performs domain adaptation using only a single LR image from the target domain without requiring HR image labels. Additionally, when performing SR inference on a batch of data, domain adaptation training on a single LR image suffices to achieve efficient super-resolution for all LR images in the target domain. --- Reply to Comment 2.1.3: Title: Revised survey on SR domain adaptation [3/3] Comment: [1] A. Shocher, N. Cohen and M. Irani, "Zero-Shot Super-Resolution Using Deep Internal Learning," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 3118-3126, doi: 10.1109/CVPR.2018.00329. [2] J. W. Soh, S. Cho and N. I. Cho, "Meta-Transfer Learning for Zero-Shot Super-Resolution," *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, Seattle, WA, USA, 2020, pp. 3513-3522, doi: 10.1109/CVPR42600.2020.00357. [3] Deng, Zeshuai, et al. "Efficient test-time adaptation for super-resolution with second-order degradation and reconstruction." Advances in Neural Information Processing Systems 36 (2023): 74671-74701. [4] M. S. Rad, T. Yu, B. Bozorgtabar and J. -P. 
Thiran, "Test-Time Adaptation for Super-Resolution: You Only Need to Overfit on a Few More Images," *2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)*, Montreal, BC, Canada, 2021, pp. 1845-1854, doi: 10.1109/ICCVW54120.2021.00211. [5] L. Zhang, J. Nie, W. Wei and Y. Zhang, "Unsupervised Test-Time Adaptation Learning for Effective Hyperspectral Image Super-Resolution With Unknown Degeneration," in *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 46, no. 7, pp. 5008-5025, July 2024, doi: 10.1109/TPAMI.2024.3361894. [6] Z. Cheng, Z. Xiong, C. Chen, D. Liu and Z. -J. Zha, "Light Field Super-Resolution with Zero-Shot Learning," *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, Nashville, TN, USA, 2021, pp. 10005-10014, doi: 10.1109/CVPR46437.2021.00988. [7] Y. Wei, S. Gu, Y. Li, R. Timofte, L. Jin and H. Song, "Unsupervised Real-world Image Super Resolution via Domain-distance Aware Training," *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, Nashville, TN, USA, 2021, pp. 13380-13389, doi: 10.1109/CVPR46437.2021.01318. [8] W. Sun, D. Gong, Q. Shi, A. van den Hengel and Y. Zhang, "Learning to Zoom-In via Learning to Zoom-Out: Real-World Super-Resolution by Generating and Adapting Degradation," in *IEEE Transactions on Image Processing*, vol. 30, pp. 2947-2962, 2021, doi: 10.1109/TIP.2021.3049951. [9] S. Cong , K. Cui , Y.Yang , Y. Zhou, X. Wang, H Luo, Y Zhang, X Yao, "DDASR: Domain-Distance Adapted Super-Resolution Reconstruction of MR Brain Images," [J]. medRxiv, 2023: 2023.06. 29.23292026. [10] M. Fritsche, S. Gu and R. Timofte, "Frequency Separation for Real-World Super-Resolution," *2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)*, Seoul, Korea (South), 2019, pp. 3599-3608, doi: 10.1109/ICCVW.2019.00445. [11] X. Ji, “Frequency Consistent Adaptation for Real World Super Resolution”, *AAAI*, vol. 35, no. 2, pp. 1664-1672, May 2021. 
[12] Huang Y, Miyazaki T, Liu X, et al. Target-oriented domain adaptation for infrared image super-resolution[J]. arXiv preprint arXiv:2311.08816, 2023. [13] Wang, Wei, et al. "Unsupervised real-world super-resolution: A domain adaptation perspective." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. [14] Z. Yin, M. Liu, X. Li, H. Yang, L. Xiao and W. Zuo, "MetaF2N: Blind Image Super-Resolution by Learning Efficient Model Adaptation from Faces," *2023 IEEE/CVF International Conference on Computer Vision (ICCV)*, Paris, France, 2023, pp. 12987-12998, doi: 10.1109/ICCV51070.2023.01198. [15] Xu, Xiaoqian, et al. "Dual adversarial adaptation for cross-device real-world image super-resolution." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [16] P. Albert *et al*., "Unsupervised domain adaptation and super resolution on drone images for autonomous dry herbage biomass estimation," *2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)*, New Orleans, LA, USA, 2022, pp. 1635-1645, doi: 10.1109/CVPRW56347.2022.00170. [17] Sun Z, Fang Y, Wu T, et al. Alpha-clip: A clip model focusing on wherever you want[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 13019-13029. --- Rebuttal 3: Comment: Thank you for your response. 1. I have included the four papers you mentioned in the newly added related work section. Our original related work, which covers SR and zero/one-shot domain adaptation, is in the appendix of the original paper. With our previous reply, we have now included related work for all three branches. 2. Regarding domain adaptation in SR, the reference [2] you mentioned is the first to introduce domain adaptation into super-resolution. Most domain adaptation work in SR uses a Cycle-GAN-like architecture, which, due to its adversarial nature, requires a large number of target domain samples for training. 
As a result, there has been little work focusing on single-target domain samples and incorporating CLIP-based models to guide SR domain adaptation. 3. Compared to existing multi-sample-based SR domain adaptation networks, the proposed method demonstrates better adaptability and performance for single target domain samples (Rebuttal.pdf, Table 1: DASR_oneshot, DASR_Domain, DADA). Unlike test-time domain adaptation networks, our IODA method leverages CLIP's rich prior knowledge to perform domain adaptation training on a single target domain sample and efficiently infer SR for all remaining samples, without requiring repeated domain adaptation training for each LR image in the target domain (Rebuttal.pdf, Tables 1 and 2: SRTTA, ZSSR). Additionally, in the newly added experiments (Rebuttal.pdf), we validate this point, showing that IODA performs well in both domain adaptation training time and performance metrics.
Rebuttal 1: Rebuttal: 1. ### **Additional visual demonstrations** As shown in Figure 1 of the Rebuttal materials (Rebuttal.pdf), we provide additional visualizations to demonstrate the effectiveness of the proposed method. 2. ### **Additional comparisons with domain adaptive methods for Super-resolution** As shown in Table 1 of the Rebuttal materials (Rebuttal.pdf), we have additionally included domain adaptive methods for super-resolution, DADA [1] and SRTTA [2], in our comparative experiments. Currently, we compared super-resolution domain adaptive methods DASR [3], DADA [1], SRTTA [2], and ZSSR [4]. Notably, we compared the performance of DASR with a single target domain LR image and with multiple LR images. Additionally, we also evaluated the performance of general domain adaptive methods on SR tasks (StyleGAN-NADA [5], PØDA [6]). 3. ### **Explanation and optimization of network adaptation training time consumption** Existing domain adaptation methods for SR can be broadly classified into test-time domain adaptation methods and adversarial generated domain adaptation methods. Test-time adaptation methods need to explicitly model degradation by hand and perform domain adaptation training for each image individually, which can be time-consuming for relatively large target domain datasets (see Table 2 of the Rebuttal materials) and requires complex manual modeling. As summarized in Section 2.1 of the paper, the proposed IODA method utilizes a single LR image from the target domain for adaptation training, enabling the network to adapt from the source domain to the target domain. IODA requires only one adaptation training session on a single LR image from the target domain to perform super-resolution inference on the entire target domain dataset, eliminating the need for repeated domain adaptation training on other target domain LR images. 
Additionally, the IODA method leverages the rich prior semantic information from CLIP, resulting in lower training time compared to adversarial-generation-based domain adaptation methods for the SR task (DASR_oneshot and DASR (Domain)). Moreover, the domain adaptation training time of IODA can be further optimized. When the source and target domains have similar distributions of high-level semantic information, the adaptation training time can be significantly reduced. For example, as shown in Table 2 of the Rebuttal materials, when the source domain is the daytime driving dataset Cityscapes and the target domain is the rainy driving dataset ACDC_rain, the training time is reduced to 1.16 minutes. In addition, selective fine-tuning of network parameters is also an effective strategy for speed improvement, which will be the focus of our future work. ### **References** [1] Xu, Xiaoqian, et al. "Dual adversarial adaptation for cross-device real-world image super-resolution." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. [2] Deng, Zeshuai, et al. "Efficient test-time adaptation for super-resolution with second-order degradation and reconstruction." Advances in Neural Information Processing Systems 36 (2023): 74671-74701. [3] Wang, Wei, et al. "Unsupervised real-world super-resolution: A domain adaptation perspective." Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021. [4] Shocher, Assaf, Nadav Cohen, and Michal Irani. "Zero-shot super-resolution using deep internal learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018. [5] Gal, Rinon, et al. "StyleGAN-NADA: CLIP-guided domain adaptation of image generators." ACM Transactions on Graphics 41.4 (2022): 1-13. [6] Fahes, Mohammad, et al. "PØDA: Prompt-driven zero-shot domain adaptation." Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023. Pdf: /pdf/0e5b3cc1ada3afc937cdbe2da28106cab69f8c84.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Protected Test-Time Adaptation via Online Entropy Matching: A Betting Approach
Accept (poster)
Summary: This paper proposes a novel test-time adaptation method based on martingales and online learning. It detects whether testing samples need to be adapted based on the sequential entropy values. Then, if a sample needs to be adapted, a pseudo-entropy value is computed for the adaptation. Overall, the idea of this paper is reasonable and interesting. Strengths: 1. The authors replace entropy minimization with entropy matching, which is interesting. Under this main idea, online drift detection and online model adaptation are naturally proposed and make sense. 2. This paper is well written and easy to follow. Weaknesses: 1. The experiments are relatively weak as the authors only conduct experiments on the ImageNet-C dataset, ignoring the CIFAR10-C and CIFAR100-C datasets. It would be better to present the results on datasets with a small number of classes. 2. The proposed method estimates the pseudo-entropy at test time. However, I wonder whether this can be done when the label distribution [1, 2] also shifts, because a shifted label distribution also affects the sequential entropy values. This should be discussed in detail, along with the related papers. 3. The “Protected” in the title of this paper and the name of this method should be carefully considered, as the overall method seems not to explicitly ensure the safety of performance. [1] NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation. NeurIPS 2022 [2] ODS: Test-Time Adaptation in the Presence of Open-World Data Shift. ICML 2023 Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the `Weaknesses` section. 1. The experiments show that the proposed method can achieve a lower ECE value. A further discussion of how these results are achieved, and of their practical benefit, would be helpful. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations at the end of this paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your positive feedback and valuable suggestions. We are pleased that the reviewer found that “online drift detection and online model adaptation are naturally proposed and make sense.” We appreciate the positive feedback regarding the clarity of our writing. Thank you! > The experiments are relatively weak as the authors only conduct experiments on the ImageNet-C dataset, ignoring the CIFAR10-C and CIFAR100-C datasets. Kindly refer to the global response to all reviewers. > The proposed method estimated the pseudo-entropy at testing time. However, I wonder whether this can be done when the label distribution [1, 2] also shifts because the shifted label distribution also affects the sequential entropy values. Thank you for raising this important issue. Extending our proposal to the label shift setting remains an open question for us. The idea in [1] may serve as a promising starting point. Specifically, we found the idea of prediction-balanced reservoir sampling appealing, as it can be used to approximately simulate an i.i.d. data stream from a non-i.i.d. stream in a class-balanced manner. This can potentially reduce the sensitivity of the martingale process to label shifts. In turn, we anticipate that under label shift and in the absence of covariate shift, our adaptation would be minimal; and in the presence of the latter, the adaptation would be more substantial, as desired. Another possible approach would be to work with a weighted source CDF rather than the vanilla source CDF, where the weights should correspond to the likelihood ratio $P_t(Y)/P_s(Y)$. The use of such a weighted CDF was suggested in the conformal prediction literature to adjust for label shift between the source holdout data and test points [3], making the test loss “look exchangeable” with the source losses. 
This challenge becomes more pronounced when faced with both a covariate and label shift. It may be the case that reference [2] pointed out by the reviewer could be a valuable starting point for us to explore this path. We will include a discussion on this matter in the revised version of the paper. [1] T. Gong, et al. "NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation," NeurIPS, 2022. [2] Z. Zhou, et al. "ODS: Test-Time Adaptation in the Presence of Open-World Data Shift," ICML, 2023 [3] A. Podkopaev and A. Ramdas, “Distribution-free uncertainty quantification for classification under label shift,” Uncertainty in artificial intelligence, 2021. > The “Protected” in the title of this paper and name of this method should be carefully considered. We are now seriously considering removing the word "protected" from the title. We initially used this word for three main reasons. First, our monitoring tool rigorously alerts for distribution shifts, and the ability to raise such a warning is crucial for communicating with users that the model is encountering new environments. Second, our approach has been shown to have no harmful effect when the test data follows the same distribution as the source domain, which is a significant concern in test-time adaptation. Third, we wanted to acknowledge the protected regression method [1] that inspired our proposal. [1] Vladimir Vovk, “Protected probabilistic regression,” technical report, 2021. > The experiments show that the proposed method can achieve a lower ECE value. Recall that the ECE is presented when applying self-training on in-distribution test data (Figure 3, left panel). This experiment demonstrates that in this in-distribution scenario, our method maintains the ECE of the source model—we are not improving the ECE of the source model. 
Importantly, this stands in contrast to entropy minimization methods that, by design, drive the model to make over-confident predictions, as reflected by the increased ECE value in Figure 3. This emphasizes that when the test samples do not shift, we preserve the calibration property of the model and avoid making over-confident predictions, which is desired in practice. We will clarify the exposition surrounding Figure 3 in the text accordingly. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I would like to keep my positive score for this paper. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for your engagement and for your positive feedback! We sincerely appreciate your thoughtful comments, which have helped improve our work.
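To make the ECE discussion above concrete, here is a minimal sketch of the standard binned expected calibration error; the binning scheme and toy data are our own illustration, not the paper's evaluation code:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard binned ECE: weighted mean |accuracy - confidence| over bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # Weight each bin's accuracy/confidence gap by its sample fraction.
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# A well-calibrated toy case: confidence 0.8, 80% of predictions correct.
conf = np.full(10, 0.8)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
print(expected_calibration_error(conf, corr))  # ~0 for this calibrated toy case
```

An entropy-minimization method that drives confidences toward 1 while accuracy stays fixed inflates exactly this gap, which is the over-confidence effect described above.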
Summary: This paper introduces a novel method for test-time domain adaptation using online self-training. It combines a statistical framework for detecting distribution shifts with an online adaptation mechanism to dynamically update the classifier's parameters. The approach, grounded in concepts from betting martingales and optimal transport, aligns test entropy values with those of the source domain, outperforming traditional entropy minimization methods. Experimental results demonstrate improved accuracy and calibration under distribution shifts. Strengths: 1. It is interesting to see betting and martingales appear in test-time adaptation, especially for modeling the CDF for better prediction on shifted test samples. 2. The paper is overall easy to follow, with rich experiments and visualizations. The algorithms clearly explain how the framework works. 3. The experiments on two TTA settings (single domain and continual TTA) confirm its effectiveness. Weaknesses: 1. The use of betting in TTA is not very intuitive. Although it might work for modeling the CDF, a martingale itself is not naturally suited to the entropy CDF in TTA. 2. The term "domain adaptation" is not very suitable in this context. Domain adaptation typically allows multiple epochs for adaptation, even in source-free settings, and uses target training samples while evaluating on target testing samples. In TTA, the same set of test data is used for adaptation and testing. 3. In Figure 1, the comparison with entropy minimization shows peaks, while entropy matching shows valleys when facing data in the tail for both classes, indicating they are both good indicators of class boundaries regardless of the changed data distribution. However, as described in lines 176-181, it seems the optimization will follow either the black line (entropy minimization) or the red line (entropy matching), making the meaning of this figure a bit vague. 4. The model requires calculating the source CDF, which takes extra time. 
Additionally, if there are source privacy concerns, it may not be possible to perform such calculations if the source model is only made available. 5. The experiments do not include an efficiency study, which is one of the motivations for doing TTA. 6. The paper does not include commonly used TTA datasets such as CIFAR10-C, CIFAR100-C, or any domain adaptation datasets such as OfficeHome, DomainNet, etc. 7. There is no sensitivity study. 8. This paper does not compare with state-of-the-art TTA baselines such as ROID [1]. [1] Marsden, R. A., Döbler, M., & Yang, B. (2024). Universal test-time adaptation through weight ensembling, diversity weighting, and prior correction. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 2555-2565). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can you explain how betting and martingales specifically contribute to the modeling of entropy CDF and why you believe this is a suitable approach for TTA? 2. In Figure 1, the comparison between entropy minimization and entropy matching is somewhat unclear. Can you elaborate on the intended interpretation of this figure and how it supports your claims about optimization following either the black or red lines? 3. The requirement to calculate the source CDF adds extra computational overhead and potential privacy concerns. How do you propose mitigating these issues, especially in scenarios where source model access is restricted or where computational resources are limited? --- Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. The paper uses the term "domain adaptation," which traditionally implies multiple epochs of adaptation and separate target training and testing samples. In the context of test-time adaptation (TTA), this terminology may cause confusion. A clearer distinction between these methodologies is suggested to avoid misinterpretation. 2. 
The requirement to calculate the source CDF introduces additional computational overhead, which has not been thoroughly discussed. In practical scenarios, especially where computational resources are limited or where source model access is restricted due to privacy concerns, this could pose significant challenges. An analysis of the method's computational efficiency and potential solutions to mitigate these issues would be beneficial. 3. The experiments conducted do not include widely recognized TTA datasets such as CIFAR10-C, CIFAR100-C, or domain adaptation datasets like OfficeHome and DomainNet. Including these datasets in the evaluation would provide a more comprehensive understanding of the method's generalizability and robustness. 4. The paper does not compare its results with state-of-the-art TTA baselines. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and the constructive review. We are pleased that the reviewer appreciates the novelty and clarity of our writing. We very much value the reviewer’s comment that “the experiments on two TTA settings (single domain and continual TTA) confirm its effectiveness.” Thank you! > Why is betting martingale a suitable approach for TTA? A betting martingale is a powerful tool to monitor distribution shifts in an online manner. We designed a monitoring tool that tests whether the distribution of the classifier’s entropy values drifts at test time. This naturally leads us to ask: if a martingale can detect that the model “behaves” differently at test time, why not use this knowledge to correct the model’s behavior? Our work shows how the evidence for distribution drift encapsulated in the betting martingale can be used to adapt the model at test time. The idea is to use the martingale to transform the test entropies to “look like” the source entropies—essentially matching the distribution of the source and the self-trained model entropy distributions. This, in turn, builds invariance to distribution drifts, which is a key principle to improve model robustness. In the interest of space, for a more technical reply, we kindly refer the reviewer to the response we provided to Reviewer Utu5 (“Clarification on the entropy matching procedure”). > The use of the term "domain adaptation". Your point is well taken! If given the opportunity, we will fix this issue in the revised paper and use “test-time adaptation” instead. We will also remove the word “Adaptation” from the title of the paper. Thank you for your constructive comment. > Can you elaborate on the intended interpretation of Figure 1? Each curve in Figure 1 presents a different risk (black for entropy minimization, red for entropy matching) as a function of the weight $w$ of the classifier $f_w$. 
Since the optimization procedure aims to minimize a given risk function by changing the value of the weight $w$, the curves help clarify the optimal value that should be obtained. By minimizing the entropy risk, the optimization ends with a self-trained classifier $f_w$ that achieves the smallest value of the black curve. This results in a trivial classifier that always predicts $+1$ (or always $-1$), regardless of the value of $X$. By minimizing the entropy matching risk, the optimization ends with $f_w$ that achieves the smallest value of the red curve. This results in a classifier that separates the two classes as much as possible. Indeed, our online method obtained a self-trained classifier whose accuracy (nearly) matches the accuracy of the Bayes optimal classifier both under an in-distribution setting (top panel) and under an out-of-distribution setting (bottom panel). > The model requires calculating the source CDF, which requires extra time. Recall that this is a CDF of 1-dimensional variables (the source entropies), which is computed only once and offline. The complexity of computing this CDF is dominated by the evaluation of the pre-trained model on a relatively small holdout set of unlabeled samples from the source domain. At test time, we only need to compute the value of the pre-computed CDF at a single point (the test point’s entropy value), which amounts to accessing a small, pre-computed 1-dimensional array. > What if computational resources are limited? Following the global response to all reviewers, the new experiments show that the runtime of our method is comparable to TENT and EATA and even lower than that of SAR. Moreover, our monitoring tool can be used to decide whether the model should be updated or not at test time (which can further reduce runtime), as the martingale process detects distribution shifts. Notably, our monitoring tool can be applied in a “black-box” manner as it only requires access to the output of the softmax layer. 
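The offline CDF construction and cheap test-time lookup described above can be sketched in a few lines of NumPy; the Dirichlet draws below stand in for softmax outputs and are purely illustrative:

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of each row of softmax probabilities."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log(p)).sum(axis=1)

# Offline, once: sort the 1-D source entropies to form an empirical CDF.
rng = np.random.default_rng(0)
source_probs = rng.dirichlet(np.ones(10), size=5000)  # stand-in for softmax outputs
sorted_src = np.sort(entropy(source_probs))           # the small pre-computed 1-D array

def F_s(z):
    """Empirical source CDF evaluated at entropy value(s) z via binary search."""
    return np.searchsorted(sorted_src, z, side="right") / len(sorted_src)

# Test time: a single O(log n) lookup per test point's entropy value.
u = F_s(1.3)
assert 0.0 <= u <= 1.0
```

This also illustrates the "black-box" property mentioned above: only the softmax outputs are needed, not the model internals.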
> Additionally, it may not be possible to perform such calculations if the source model is only made available. Indeed, we require access to a pre-trained source model and a *pre-computed* source CDF. Importantly, given the two, we do not require any additional access to samples from the source data at test time. In that respect, our work does not differ significantly from EATA, which also assumes access to holdout source examples. > Limited evaluation; there is no sensitivity study. Kindly refer to the global response to all reviewers. > This paper does not compare with state-of-the-art TTA baselines such as ROID Our goal is to highlight why we believe it is important to transition from entropy minimization to online entropy matching. Therefore, we deliberately compared our approach to strong baseline methods that are based on entropy minimization. The ROID method builds on a weighted version of the soft likelihood ratio loss as a self-supervised loss. This loss departs from the line of baseline entropy-minimization methods we focus on, but it opens an interesting future direction. Broadly speaking, our paper offers an online mechanism for matching source and target distributions of any given self-supervised loss. In the context of ROID, it will be illuminating to explore how our matching paradigm would perform in combination with the weighted soft likelihood ratio loss instead of the entropy loss. We will include this idea for future work in the text! Moreover, ROID highlights that self-training can fail to improve or even deteriorate performance. This motivated the authors of ROID to include several complex components within the test-time training scheme. Naturally, a SOTA method would include various components to enhance performance, and this set of ideas can also be valuable for our method. 
However, given that we introduce a new concept to the ever-growing test-time adaptation literature, we believe such explorations go beyond the scope of the current paper. These will divert attention away from our central message. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed reply and additional experiments. I've also read the reviews and comments from other reviewers. I will increase the score. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for your engagement and for raising your score! We sincerely appreciate your thoughtful comments, which have helped improve our work.
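As a rough illustration of the betting-based monitoring discussed in this thread, here is a toy wealth process that bets against uniformity of $u_t = F_s(Z_t)$. This uses a fixed linear bet for simplicity; the actual betting strategy in the paper may differ, and the stream below is simulated:

```python
import numpy as np

def log_wealth(u_vals, lam=1.5):
    """Cumulative log-wealth of the bets b(u) = 1 + lam * (u - 0.5).

    If there is no shift, u = F_s(Z) is approximately Uniform(0, 1), so each
    bet has expectation 1 and the wealth process is a martingale; by Ville's
    inequality it rarely grows large. A drift toward higher entropies pushes
    u toward 1 and makes the wealth grow exponentially (for lam > 0).
    |lam| <= 2 keeps every bet nonnegative."""
    bets = 1.0 + lam * (np.asarray(u_vals) - 0.5)
    return np.cumsum(np.log(bets))

rng = np.random.default_rng(0)
calm = log_wealth(rng.uniform(0.0, 1.0, size=2000))   # in-distribution stream
drift = log_wealth(rng.uniform(0.5, 1.0, size=2000))  # entropies drifted upward
print(calm[-1], drift[-1])  # log-wealth stays low vs. grows steadily
```

Large wealth is exactly the "evidence for distribution drift" that the rebuttal describes reusing for adaptation.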
Summary: The paper addresses the problem of adapting a classifier to a new domain at test-time. It proposes a framework that first detects a distribution shift and, based on the detection results, adapts the classifier. The distribution shift detector employs a sequential test evaluating if the distribution of the test entropy deviates from the distribution of the source entropy. In the adaptation step, the normalization parameters of the model are updated via an entropy matching objective. Experiments are conducted on ImageNet-C. Strengths: - The idea of using test martingales for test-time adaptation is novel to my knowledge and seems like a very natural application and interesting research direction. - The paper claims it only adapts when the distribution actually shifts, which is a desirable property of a TTA method, leveraging the trade-off between agile adaptation and keeping the source knowledge. - The paper is very well written overall. The reader is provided with intuitive explanations of the testing-by-betting framework - a very technical and not easy-to-explain theory - which makes the paper pleasant and insightful to read. - The experiments on ImageNet-C show encouraging results. Weaknesses: **In short:** The paper proposes a very interesting approach, but it seems to me more work is required to round out the paper. In particular, the paper requires more empirical validation and clarifications. Some of the main weaknesses I see are: - The experimental results are quite limited. By showing results on only one dataset against three baselines, it is a bit unclear how the method performs across different datasets compared to existing methods. Comparing on other standard TTA benchmarks (such as CIFAR-10-C, CIFAR-100-C for corruptions, or Office-Home for domain adaptation) could help determine in which settings the method provides most gains and also its limitations. - The paper would benefit from an illustration visualizing the entropy matching procedure. 
In particular, it would be helpful to illustrate how $u_j, \tilde{u_j}, Z^t_j$, and $\tilde{Z}_j^t$ connect via the functions $F_s$ and $Q$. - More space and explanation could be dedicated to the actual entropy matching procedure. How can we match the two entropy distributions given $u_j$? This seems currently concentrated in lines 265-271 (see questions below). Technical Quality: 2 Clarity: 2 Questions for Authors: - I’d appreciate some more clarification regarding section 3.4 (adaptation mechanism). Could you please elaborate on the role and interpretation of $Q$. I understood from lines 260-264 that $Q$ can be thought of as the distribution that approximates the unknown target’s entropy CDF, which makes sense to me given equations (3) and (6). However, in line 266 $Q$ seems to be used as a function to transform $u_j$ to $\tilde{u}_j$ by $\tilde{u_j} = Q(u_j)$. Could you explain the link between $Q$ being the target’s entropy CDF as well as a transformation? - I think the entropy matching could potentially be a good alternative to the dominant paradigm of entropy minimization, particularly since the latter has been shown to collapse eventually [1]. How does entropy matching perform on long-range adaptation? My understanding from the experiments is that only 1000 samples per corruption are used and not even the entire ImageNet-C dataset. [1] Press, Ori, et al. "The entropy enigma: Success and failure of entropy minimization." arXiv preprint arXiv:2405.05012 (2024). Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The limitations are rather vague, and it seems currently unclear which limitations the method encounters. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your valuable feedback and suggestions. We are glad that the reviewer found our approach to be novel and an interesting research direction. It is gratifying to see that the reviewer thinks that “the entropy matching could potentially be a good alternative to the dominant paradigm of entropy minimization.” Additionally, we appreciate the positive feedback regarding the clarity of our writing and that the reviewer found the experiments on ImageNet-C show encouraging results. Thank you! > The experimental results are quite limited Kindly refer to the global response to all reviewers. > An illustration visualizing the entropy-matching procedure Thank you for this great suggestion, we will include such an illustration in the revised paper. We also discuss the role of each component below. > Clarification on the entropy matching procedure. What is the role and interpretation of $Q$? How can we match the entropy distributions? Thank you for raising this question, as it touches on one of the more nuanced aspects of our work. Indeed, as the reviewer mentioned, one can interpret $Q$ as the distribution approximating the unknown CDF of the target entropy $Z_t$. We understand that this might be confusing as $Q$ is a function of $u$. However, recall that $u$ is a function of $Z_t$, as $u := F_s(Z_t)$. Observe also that $Z_t = F_s^{-1}(u)$. To better clarify the role of $Q$, consider a case where the betting is optimal in the sense of Proposition 2. The right-hand side of Eq. (8) gives the explicit form of the ideal $Q$ being $\tilde{u} = Q(u) = F_t(F_s^{-1}(u))$. In turn, $Q(u) = F_t(Z_t)$. In practice, the test entropy CDF $F_t$ is unknown, yet the above relation highlights why the $Q$ we formulate via the betting function can be intuitively viewed as the distribution approximating the unknown target entropies CDF. This is due to the fact that any valid betting martingale is a likelihood ratio process, aligning with Eq. (8). 
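The ideal-case relation $Q(u) = F_t(F_s^{-1}(u))$ can be checked numerically. In the toy sketch below, empirical CDFs play the roles of $F_s$ and the oracle $F_t$ (which the actual method does not have at test time), and gamma distributions are our own stand-ins for the entropy distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
z_src = rng.gamma(2.0, 0.5, size=20000)   # stand-in source entropies
z_tst = rng.gamma(2.0, 0.9, size=20000)   # drifted test entropies

src_sorted = np.sort(z_src)
tst_sorted = np.sort(z_tst)

def F(sorted_vals, z):                    # empirical CDF
    return np.searchsorted(sorted_vals, z, side="right") / len(sorted_vals)

def F_inv(sorted_vals, u):                # empirical quantile function
    return np.quantile(sorted_vals, u)

# Ideal-case pipeline: u = F_s(Z_t), then u~ = Q(u) = F_t(F_s^{-1}(u)),
# and the pseudo-entropy Z~_t = F_s^{-1}(u~) = F_s^{-1}(F_t(Z_t)).
z = z_tst[:5000]
u = F(src_sorted, z)
u_tilde = F(tst_sorted, F_inv(src_sorted, u))   # Q(u) with the oracle F_t
z_pseudo = F_inv(src_sorted, u_tilde)

# The transported pseudo-entropies now match the source distribution.
print(np.mean(z_src), np.mean(z_pseudo))  # the two means should nearly coincide
```

This is the optimal-transport view: pushing the test entropies through $F_t$ and back through $F_s^{-1}$ maps the target entropy distribution onto the source one.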
As for the matching property, observe that in the ideal case, the pseudo-entropy value $\tilde{Z_t} = F_s^{-1}( \tilde{u}) = F_s^{-1}(Q(u)) = F_s^{-1}(F_t(Z_t))$. This argument reveals the tight relation between our adaptation scheme and optimal transport: in the ideal case, the pseudo-entropy $\tilde{Z_t}$ is obtained by applying the optimal transport map from the target entropy distribution to the source entropy distribution. Our experiment in Figure 6 in the appendix demonstrates that we indeed achieve distribution matching via the online-estimated $Q(u)$ in practice, although we do not have access to $F_t$ that varies over time. We will include this discussion and clarification around Proposition 2 in the text. > How does entropy matching perform on long-range adaptation? My understanding from the experiments is that only 1000 samples per corruption are used and not even the entire ImageNet-C dataset. The experiments we conducted also involved a long-range adaptation on a test set of size $\approx 30,000$ for ImageNet; $\approx 15,000$ for CIFAR; and for OfficeHome we used the entire test data. Focusing on ImageNet-C, we highlight that in the continual setup, we also used $2000$ samples per corruption (Figure 2 bottom right panel), resulting in a test set of size $2000 \cdot 15=30,000$ samples in total. In the single corruption experiments, we use $37,500$ test samples for each corruption. Notably, we had to reserve a subset of the test set to implement both EATA and our method as the two need access to unlabeled holdout data from the source domain. > The limitations are rather vague, and it seems currently unclear which limitations the method encounters The key limitations are the following: 1. To implement our method, we assume access to the source CDF, evaluated on holdout unlabeled samples from the source domain. 2. The choice of hyper-parameters, in particular the learning rate, can be challenging as it depends on the data and model used. 
This is akin to related TTA methods. 3. The lack of theory that reveals when entropy matching is guaranteed to improve performance. This issue is one of our future research directions. We will update accordingly the discussion on the limitations in the text. We thank the reviewer for this point. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I read all the responses and appreciate the additional clarifications. > CIFAR-10C and CIFAR-100C experiments / Office Home I appreciate the additional results, particularly including a domain adaptation dataset that provides more diversity in the distribution shifts tested. > Clarification on the entropy matching procedure. What is the role and interpretation of Q? How can we match the entropy distributions? Thanks for the clarifications. > How does entropy matching perform on long-range adaptation? My understanding from the experiments is that only 1000 samples per corruption are used and not even the entire ImageNet-C dataset. Thank you for detailing the lengths of the adaptation streams. I'm still wondering why the entire test set is not included in the experiments. For example, on CIFAR-10/100-C, the test set contains 10,000 samples per corruption type, and constructing a stream with 15 corruptions results in 150,000 test samples (instead of ~15,000 as used in Figure 1, rebuttal). To my knowledge, using all test samples is the standard evaluation setting (see e.g. [1]). Could you explain why you are diverging from the standard evaluation setting and subsampling? I understand you need a subset from the source domain for EATA and POEM, but this can be small and taken from the source data, right? Related to the above question, I am not sure if I am entirely convinced by the experimental evaluation. 1. If the focus of the paper is to propose a SOTA method, the evaluation against three baselines is too limited in my opinion. This concern of mine has not been addressed. 2. 
If instead the focus of the paper is to propose an alternative to entropy minimisation, I think three standard entropy minimisation methods as baselines seem sufficient. However, in this case, I am not entirely convinced of entropy matching being a robust alternative to entropy minimisation. In particular, I am not sure which of the cons of entropy minimisation entropy matching addresses. The proposed approach seems to address the overconfidence issue on the source data, which the paper nicely shows does not occur with entropy matching. I think this is a promising result. However, I see one of the major limitations of entropy minimisation as that of model collapse after a long range of adaptation, and it is unclear if this alternative addresses that important limitation, since the evaluated test streams seem to be even shorter than those in standard settings. --- Reply to Comment 1.1.1: Title: Follow-up to Reviewer Utu5 [1/2] Comment: We thank the reviewer for their comments and for acknowledging our previous response. > I'm still wondering why the entire test set is not included in the experiments The test sets of ImageNet-C, CIFAR10-C, and CIFAR100-C consist of 15 different corrupted versions of the original test set of each dataset—these 15 corrupted versions represent different variations of the same “clean” test images. Therefore, we found it more natural to form an out-of-distribution test set that contains a single instance of a specific image, rather than using all 15 versions of the same original image. Additionally, we used shorter adaptation streams to demonstrate our approach’s ability to achieve faster adaptation compared to baseline methods, as shown in Figure 2 (bottom right). We apologize for any confusion and hope this explanation clarifies our initial choice. 
To address the reviewer's concern, we have now conducted experiments using the entire test set of CIFAR10-C and CIFAR100-C, which includes all 15 corrupted versions of each test image. The results are described hereafter. > I understand you need a subset from the source domain for EATA and POEM, but this can be small and taken from the source data, right? Yes, the holdout set can be small and should include unlabeled source samples. Since we use an off-the-shelf pre-trained model, we selected these holdout samples from the test set of the original dataset, representing the source domain. To ensure a fair out-of-distribution test set, we made sure our method (and EATA) does not have knowledge of the clean versions of the corrupted images in the test set. This is why we removed the holdout images from the corrupted test data, as the corrupted images are merely variations of the clean ones. **Long-range adaptation experiments on CIFAR-10C and CIFAR100-C:** In the following experiment, we reserved 2,500 images to form a holdout set for EATA and our method. We followed the same experimental protocol described in the global response to all reviewers and ran each adaptation method on a test set containing 112,500 samples (15 versions of 7,500 images). The results are summarized in two tables, presented in a separate comment below. These tables show that our proposed method is competitive with the baseline methods in terms of adaptation accuracy. Notably, the runtime of our method is twice as fast as SAR and comparable to EATA and TENT. Importantly, we **do not** use a model-reset mechanism (as done by SAR) or anti-forgetting loss (as done by EATA), highlighting the stability of our approach. By contrast, we found that TENT is highly sensitive to the choice of learning rate. We sincerely thank the reviewer for raising this point. 
> I am not entirely convinced of entropy matching being a robust alternative to entropy minimisation Beyond the theoretical aspects of our work and the monitoring capabilities we introduce, our experiments demonstrate several practical advantages over entropy minimization: 1. The proposed method maintains the performance of the source model while avoiding overconfident predictions under in-distribution settings, a crucial advantage over entropy minimization methods. 2. Short-term adaptation: Our approach achieves faster adaptation than entropy minimization methods, e.g., as shown in Figure 2 (bottom right). This is attributed to our betting scheme that quickly reacts to distribution shifts. While the reviewer emphasizes the problem of long-range adaptation, it is important to recognize the critical role of adaptation speed as well. Rapid adaptation is especially valuable given that existing strategies for stabilizing long-range adaptation rely on heuristics, such as resetting the self-trained model to its original state when specific conditions are met (as employed in SAR) or incorporating an anti-forgetting component into the entropy loss (as used in EATA). 3. Long-term adaptation: In extended test periods, our new experiments with 112,500 test examples show comparable adaptation performance to strong baseline methods, demonstrating the robustness of our method. Moreover, if stable long-range adaptation is the main concern, we could integrate model-resetting or anti-forgetting mechanisms. Notably, our monitoring tool can detect when unfamiliar corrupted data arrives, allowing for rigorous decisions on model resetting, for example, to prevent aggressive adaptation from a diverged state. This capability highlights another unique and practical advantage of our method. Once again, we apologize for any confusion and hope this discussion resolves the reviewer’s concerns. Please let us know if there are any questions, comments, or concerns left.
--- Reply to Comment 1.1.2: Title: Follow-up to Reviewer Utu5 (Long-range adaptation experiment) [2/2] Comment:

# Long-range adaptation: detailed results

## CIFAR-10C Accuracy Table

| Method | Shot Noise | Motion Blur | Snow | Pixelate | Gaussian Noise | Defocus Blur | Brightness | Fog | Zoom Blur | Frost | Glass Blur | Impulse Noise | Contrast | Jpeg Compression | Elastic Transform | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No adapt | 17.81 ± 0.05 | 11.71 ± 0.07 | 29.11 ± 0.09 | 13.67 ± 0.04 | 15.93 ± 0.07 | 12.54 ± 0.07 | 42.06 ± 0.12 | 11.14 ± 0.06 | 11.81 ± 0.08 | 17.33 ± 0.08 | 18.94 ± 0.10 | 16.24 ± 0.10 | 13.80 ± 0.09 | 19.96 ± 0.06 | 16.11 ± 0.07 | 17.87 ± 0.02 |
| TENT | 50.09 ± 0.25 | 67.02 ± 0.15 | 62.30 ± 0.31 | 61.34 ± 0.14 | 53.96 ± 0.18 | 69.00 ± 0.16 | 72.44 ± 0.21 | 67.72 ± 0.16 | 68.55 ± 0.22 | 63.76 ± 0.30 | 51.18 ± 0.20 | 51.36 ± 0.25 | 66.67 ± 0.26 | 58.88 ± 0.18 | 60.67 ± 0.22 | 61.91 ± 0.06 |
| EATA | 48.95 ± 0.15 | 66.65 ± 0.20 | 60.75 ± 0.28 | 58.59 ± 0.30 | 47.62 ± 0.13 | 67.95 ± 0.19 | 70.94 ± 0.24 | 65.91 ± 0.20 | 65.99 ± 0.21 | 58.99 ± 0.16 | 45.69 ± 0.20 | 42.78 ± 0.22 | 67.25 ± 0.18 | 52.48 ± 0.21 | 55.97 ± 0.10 | 58.29 ± 0.06 |
| SAR | 50.02 ± 0.15 | 67.10 ± 0.17 | 61.86 ± 0.20 | 60.84 ± 0.16 | 52.28 ± 0.11 | 68.73 ± 0.18 | 72.48 ± 0.21 | 67.52 ± 0.18 | 68.23 ± 0.19 | 63.21 ± 0.15 | 50.21 ± 0.14 | 49.85 ± 0.28 | 67.81 ± 0.26 | 57.75 ± 0.19 | 60.15 ± 0.18 | 61.22 ± 0.06 |
| POEM (ours) | 51.80 ± 0.10 | 67.69 ± 0.16 | 63.68 ± 0.20 | 63.33 ± 0.17 | 56.60 ± 0.23 | 69.06 ± 0.19 | 72.69 ± 0.17 | 67.82 ± 0.22 | 69.25 ± 0.21 | 64.72 ± 0.16 | 52.01 ± 0.21 | 52.29 ± 0.29 | 64.07 ± 0.32 | 58.09 ± 0.27 | 59.07 ± 0.23 | 62.12 ± 0.06 |

## CIFAR-100C Accuracy Table

| Method | Shot Noise | Motion Blur | Snow | Pixelate | Gaussian Noise | Defocus Blur | Brightness | Fog | Zoom Blur | Frost | Glass Blur | Impulse Noise | Contrast | Jpeg Compression | Elastic Transform | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No adapt | 4.81 ± 0.03 | 5.44 ± 0.05 | 13.14 ± 0.07 | 6.97 ± 0.05 | 4.18 ± 0.04 | 4.36 ± 0.04 | 25.99 ± 0.08 | 5.91 ± 0.04 | 5.01 ± 0.04 | 7.50 ± 0.04 | 4.01 ± 0.02 | 3.52 ± 0.04 | 1.42 ± 0.02 | 10.64 ± 0.06 | 7.22 ± 0.05 | 7.33 ± 0.01 |
| TENT | 14.34 ± 0.13 | 29.09 ± 0.15 | 24.18 ± 0.19 | 24.69 ± 0.16 | 16.29 ± 0.19 | 30.63 ± 0.23 | 34.24 ± 0.20 | 27.05 ± 0.18 | 30.63 ± 0.24 | 25.72 ± 0.25 | 18.99 ± 0.27 | 15.13 ± 0.18 | 26.17 ± 0.33 | 19.51 ± 0.24 | 22.22 ± 0.19 | 23.97 ± 0.05 |
| EATA | 14.05 ± 0.11 | 28.82 ± 0.18 | 23.50 ± 0.12 | 22.89 ± 0.12 | 13.98 ± 0.14 | 29.08 ± 0.14 | 33.06 ± 0.21 | 25.85 ± 0.16 | 29.14 ± 0.14 | 23.38 ± 0.23 | 16.34 ± 0.15 | 11.47 ± 0.09 | 26.55 ± 0.18 | 16.97 ± 0.13 | 21.76 ± 0.14 | 22.42 ± 0.05 |
| SAR | 14.29 ± 0.11 | 29.08 ± 0.19 | 24.09 ± 0.16 | 24.47 ± 0.17 | 16.28 ± 0.13 | 30.19 ± 0.18 | 33.74 ± 0.15 | 26.79 ± 0.15 | 30.19 ± 0.24 | 25.17 ± 0.17 | 18.41 ± 0.24 | 14.94 ± 0.13 | 25.25 ± 0.37 | 19.31 ± 0.17 | 21.89 ± 0.15 | 23.61 ± 0.05 |
| POEM (ours) | 13.98 ± 0.11 | 28.79 ± 0.20 | 23.62 ± 0.16 | 23.35 ± 0.19 | 14.55 ± 0.15 | 29.91 ± 0.19 | 33.35 ± 0.14 | 26.48 ± 0.13 | 29.65 ± 0.16 | 24.40 ± 0.16 | 17.35 ± 0.23 | 12.77 ± 0.14 | 28.03 ± 0.22 | 18.35 ± 0.18 | 23.18 ± 0.14 | 23.19 ± 0.05 |
null
null
Rebuttal 1: Rebuttal: We appreciate the reviewers' engagement with our paper and their valuable comments and suggestions. We will integrate their feedback into the revised paper and have conducted a new set of experiments, detailed below. The reviewers acknowledged that the paper is well-written and introduces a novel approach for test-time adaptation. The main criticism raised was that the experiments, though conducted on ImageNet-C, are quite limited. To address this, we have conducted additional experiments on CIFAR10-C, CIFAR100-C, and Office-Home datasets. In short, the new experiments show that **our approach is competitive with strong baseline test-time adaptation methods that are based on entropy minimization (TENT, SAR, and EATA)**. This conclusion aligns with the ImageNet-C experiments presented in the paper. ### **CIFAR-10C and CIFAR-100C experiments** We focus on the continual setting where the corruption type is changing over time. Using a pre-trained ResNet32 model, we applied online self-training with a batch size of 4 (due to batch-normalization layers). Each corruption type had $1024$ samples, resulting in a test set of $15 \cdot 1024 = 15{,}360$ samples. To ensure a fair comparison, we tuned the learning rate for each method using a pre-specified grid; see the sensitivity study below. Notably, we did not change the hyper-parameters of the monitoring tool and used the same values as those employed in our ImageNet-C experiments. Results are summarized in Figure 1 (attached PDF): * Our method is competitive and often outperforms baseline methods in terms of accuracy. * Runtime comparisons (relative to the no-adapt model) are also presented, demonstrating that our method's complexity is similar to TENT and EATA, and lower than SAR. * A sensitivity study for the learning rate parameter reveals our method’s robustness to this choice, particularly when compared to SAR and TENT.
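For context, the entropy-minimization objective that the TENT-style baselines discussed above optimize can be illustrated with a minimal NumPy sketch. This is not the authors' POEM method nor TENT's actual parameter update (TENT updates normalization-layer parameters of a network); here we simply descend on a single logit vector to show how minimizing prediction entropy drives the model toward confident outputs:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Shannon entropy of a discrete distribution
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1))

def entropy_grad(z):
    # Hand-derived gradient of H(softmax(z)) w.r.t. the logits z:
    # dH/dz_j = -p_j * (log p_j + H)
    p = softmax(z)
    return -p * (np.log(p + 1e-12) + entropy(p))

z0 = np.array([0.5, 0.2, -0.1])   # illustrative logits for 3 classes
z = z0.copy()
for _ in range(50):
    z -= 0.5 * entropy_grad(z)    # gradient step that minimizes entropy
# the prediction becomes increasingly confident: entropy shrinks
```

This also illustrates why entropy minimization can produce overconfident predictions on in-distribution data, the failure mode that entropy matching avoids.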
### **Office-Home experiments** We focused on adaptation from the “Real World” domain to the “Art”, “Clipart”, and “Product” domains. A continual setting was deemed less natural here. We fine-tuned the last layer of a ResNet50 model with group-norm layers (pre-trained on ImageNet1K) on Office Home's real-world images, reserving 20% as a holdout set for our method and EATA. Test-time adaptation was applied to the entire test data of each target domain. Learning rates were tuned for fair comparison, similar to the CIFAR experiments. For consistency, we kept the same hyper-parameters for the monitoring tool as those used in our ImageNet-C experiments. As such, the same hyper-parameters for the monitoring tool are used across all experiments, regardless of the model or dataset. Results are summarized in Figure 2 (attached PDF): * Overall, all the methods demonstrated modest accuracy gains compared to the ‘no-adapt’ case. Our proposed method slightly outperformed TENT and EATA in terms of accuracy, while achieving results comparable to SAR. * In terms of computational efficiency, our method's runtime was on par with TENT and EATA, and notably faster than SAR. * Regarding sensitivity to the choice of the learning rate, our approach displayed superior robustness compared to TENT and SAR, and a similar robustness to that of EATA. We provide individual replies to specific comments from each reviewer. Pdf: /pdf/dbe512426d5c9297a8ad4ef38b9416250d5f8f1d.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains
Accept (poster)
Summary: This work introduces a simple “in-context learning” Markov chain modelling task and studies how transformer models tackle it. The task consists of inferring the transition probability matrix of the Markov chain, which is sampled from a prior Dirichlet distribution. The authors consider the task to pertain to “in-context learning” because each data point (that is, each sequence in the dataset) is sampled from a different transition probability matrix. The prior over the matrices is however global and thus shared by both training and test datasets. More in detail, the authors focus on Markov chains with only two states, and sample each sequence from the stationary distribution of the process. Given this dataset, the authors investigate the training dynamics of a two-layer attention-only transformer model both empirically and theoretically. The theoretical investigation involves the introduction of a simplified model, which is defined to mimic the main features of the transformer model. The authors report three different training stages, during which the model very quickly changes from predicting random transitions, to predicting samples similar to the stationary distribution of the chain, and lastly, predicting transition from the inferred transition probability matrix. The authors further connect their findings with previous work. Strengths: Deep learning architectures are remarkably complex systems and designing both tractable models and modelling tasks that mimic the behaviour of these systems, prior, during or after the training process, is an important approach to unveil their inner workings. This paper attempts to carry one such analysis, by introducing a simple “in-context learning” Markov chain modelling task and studying how a two-layer attention-only transformer model and a simplification thereof solve it, thereby adding another interesting contribution to our understanding of transformer models. 
One strength of the paper is that the authors first demonstrate that both their models can indeed find the optimal solution of their Markov chain task, and then empirically verify this, which gives soundness to their claims. In particular the results in Figure 3, which demonstrate the similarity in behaviour of both models, are very compelling (see however the questions below). A second strength is that the authors identify different phenomena within their setting which have also been observed before, like multiple distinct stages of training, the latter of which is also connected to induction head formation, or the order in which the layers in the network are learned. Finally, the authors demonstrate that some of their findings are also present in a second-order Markov chain modelling task. Putting aside some minor details with some notation and content organisation (see below), the paper is overall well written. Weaknesses: My first issue with the paper is: - how much does the proposed learning task really concern “in-context learning”? especially in the context of large language models, which the authors use as motivation. As framed by e.g. Xie et al. (2022) who focus on LLMs, "in-context learning" deals with sequences which have low probability under the training distribution. The authors do not really elaborate in their understanding or interpretation of "in-context learning", nor do they explain how it relates to "in-context learning" in the setting of language models. In short, I believe the paper contributes more to our understanding of transformers than to our understanding of in-context learning. A second major issue is that the authors do not explain their reasoning behind nor the limitations of their dataset/task definition. - First, the authors consider Markov chains with only two states and do not comment on why they restricted their study to that case nor how their findings extend to Markov chains with more states, if at all. 
- Second, the authors choose the initial distribution of the chain to be its stationary distribution but, again, did not explain why. Note that a stationary Markov chain corresponds to a very special case, and it's not very clear how the observations made by the authors extend beyond it. Indeed, choosing a Markov chain with more states, initialised from a distribution far from stationarity, can generate long sequences which are “out-of-equilibrium”. One can't but wonder whether one would still find multiple training stages in this case. See e.g. question 8 below. Despite its merits, I think the issues above require some revision, or more detailed explanation, before the paper can be published. *Other comments*: The comment in line 205 “We observe that training a 1-layer transformer fails to undergo a phase transition or converge to the right solution”, could be better highlighted as evidence for the emergence of induction heads. It feels somewhat buried in between other comments and statements. Also there’s a typo in line 239 and another in line 280. *References:* - An explanation of in-context learning as implicit Bayesian inference. Xie et al. (2022) Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why did you consider Markov chains with two states only? 2. Can you please elaborate on your comment in line 145, page 4: “the stationary distribution of this random Markov chain does not admit a simple analytical characterization when there is a finite number of state”? 3. Why did you choose the stationary distribution as the initial distribution? 4. Is the ground-truth unigram strategy then computed by estimating the stationary distribution with the count histogram? Is this the one used to compute the KL distance in Figure 3? 5. Similarly, do you use the bigram strategy of section 2.1 to compute the corresponding KL distance in Figure 3? Or do you use the instance-dependent ground-truth transition probability matrix instead? 6. 
Is the difference in KL between the unigram and bigram strategy due to finite sampling? 7. Why does the bigram strategy have a smaller loss? Is it because of the inductive bias of the model? 8. In the setting with an initial condition far from the stationary distribution, would the model first fit the step-dependent marginal distribution of the chain (a unigram strategy) and later fit the bigram strategy of section 2.1? Isn't it simpler to fit the latter first (as opposed to the stationary case)? 9. Is the 2-layer transformer model initialised to the matrices in Appendix C.1? 10. The sequence length is first labelled by $t$ (e.g. after eq. 1) and then by $T$ (e.g. before eq. 4). Is this change of notation intentional? Or am I misunderstanding something? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, the authors did address (some of) the limitations of their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for critically reading our work and helping us improve its presentation. We greatly value your in-depth review! We will first discuss your most significant concerns with our work: ## In-Context Learning You're right that we don't spend much time in our paper discussing what "in-context learning" means, and indeed, there isn't a single consensus definition! The term "in-context learning" was (we believe) introduced in the GPT-3 paper ([Brown et al., 2020]) to refer specifically to the phenomenon where LLMs have improved few-shot performance with increasingly many examples in their context. Xie et al., meanwhile, does indeed explore scenarios where there is a distribution shift between training and inference; however, in the broader literature, ICL is not consistently tied to distribution shifts (see, e.g. [Garg et al., 2022] for an influential study of ICL that does not require a distribution shift in its definition). Meanwhile, [Elhage et al., 2021], which was roughly concurrent with Xie et al., and the follow-up work [Olsson et al., 2022] -- use ICL very broadly to refer to LLMs being better at predicting tokens that appear later in a sequence. The way we use the term is somewhat narrower than this; as we state in the Introduction: we think of ICL as when language models are "incorporating patterns from their context into their predictions." In other words, it is when models *learn* from patterns in their context; few-shot learning being a special case of this. The task we focus on, ICL-MC, is an in-context version of a classic learning task, Markov chain inference. Each test-time Markov chain has *zero* probability of having appeared during training, because of the continuous nature of the distribution over chains. During pre-training, the transformer learns to perform the learning task in-context.
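The ICL-MC data-generating process described above can be sketched as follows (our reading of the setup, not the paper's code; the function name and the eigendecomposition route to the stationary distribution are illustrative choices):

```python
import numpy as np

def sample_icl_mc_sequence(k=2, T=100, seed=0):
    """One ICL-MC sequence: a fresh transition matrix per sequence, with
    rows drawn i.i.d. from a uniform Dirichlet prior, and the first token
    drawn from the chain's stationary distribution."""
    rng = np.random.default_rng(seed)
    P = rng.dirichlet(np.ones(k), size=k)          # row-stochastic k x k matrix
    # stationary distribution = left eigenvector of P for eigenvalue 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = np.abs(pi) / np.abs(pi).sum()
    x = [int(rng.choice(k, p=pi))]
    for _ in range(T - 1):
        x.append(int(rng.choice(k, p=P[x[-1]])))   # one Markov step
    return np.array(x), P

x, P = sample_icl_mc_sequence()
```

Because the prior over transition matrices is continuous, each sampled chain `P` is new at test time; the learner only ever observes the sequence `x`.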
We don't only focus on the case where there is a distribution shift between train and test -- as discussed above, we think ICL is interesting even in the absence of distribution shift. (Though we do explore distribution shifts in the paper: see Figures 4 and 11.) We would be happy to elaborate on our interpretation of ICL in the camera-ready version of the paper. ## Answers to questions 1\. This choice was made to facilitate a theoretical analysis of the observed phenomena. In particular, results such as Lemma C.2 become feasible only when the number of states is 2, or other simplifying assumptions are made. However, in the experiments, we have verified that our insights transfer to Markov Chains with a larger number of states (see, for example, Figure 7 in the Appendix). We will add a discussion of Markov chains with more states, including a reference to figure 7. Based on your concerns, we have decided to add back in a proof we decided not to include, which means that **by making only small changes to the submitted version, we are able to extend our main results to any number of states**. This requires using a distribution like that in the experiment on the left of Figure 4. Due to lack of space, we defer the details of this to a comment. 2\. Thank you for pointing out this line as being out of place. The line (edited for clarity) will be moved to the discussion of limitations. It was intended to communicate the difficulties with the distribution of the stationary distribution for $k>2$. Due to lack of space, we defer further elaboration to a comment. 3\. We chose this to make the setting closer to the next token prediction seen in practice. The marginal distributions of the tokens generated from a Markov chain approach the stationary distribution exponentially fast no matter the initial distribution. We have run experiments for other somewhat natural starting distributions (such as uniform) and found no meaningful differences. 4\. Yes.
We will add the following mathematical definition: the unigram strategy's probability of the token at position $t+1$ being $j$ is $\frac{1}{T}\sum_{i=1}^{T} \mathbb{1}[x_i=j].$ 5\. The bigram strategy is computed by counting the frequency of pairs of states in the context of each sequence. This corresponds to the formula in line 157. The graphs on the left in figure 3 show the KL-div loss using the instance-dependent ground-truth transition probability matrix. Because the context size is long (100 tokens), and we average over 1024 test samples, these two measures are almost identical (as we would expect). 6\. No, the KL div between the unigram and bigram strategies is not due to finite sampling. The unigram strategy is a flawed strategy (it ignores the information we get from the relative order of the tokens); for almost all Markov chains it will get higher expected loss than the bigram strategy, even for small context sizes. 7\. The bigram strategy is a near-optimal strategy (it approaches the optimal strategy exponentially fast as the context length grows). This is true regardless of the model used. 8\. We are not entirely sure how to interpret this question and would appreciate clarification. Sampling the first token by the stationary or uniform distribution does not meaningfully affect any empirical observations. The marginal distributions of the tokens after the first will always approach the stationary distribution exponentially fast. Empirically, the model starts by representing the unigram strategy and later the bigram strategy. In both cases it is not obvious that the model needs to fit the unigram strategy at any point (after all, the unigram strategy performs worse than the bigram strategy), and so it must be only due to the inductive biases of the model and optimizer that this happens. 9\. No. The transformers were initialized with the default PyTorch initializations (Gaussian with mean 0); we will clarify this in Section B. 10\. This is a typo.
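The unigram strategy defined in answer 4, and the bigram pair-counting of answer 5, can be sketched as follows (a minimal illustration; the add-one pseudo-counts stand in for a uniform Dirichlet prior and are our hedged reading of the formula referenced above, not a verbatim transcription of it):

```python
import numpy as np

def unigram_strategy(x, k=2):
    # probability of the next token being j = (1/T) * sum_i 1[x_i = j]
    counts = np.bincount(x, minlength=k)
    return counts / counts.sum()

def bigram_strategy(x, k=2):
    # posterior-mean transition estimate from in-context pair counts;
    # the +1 pseudo-counts correspond to a uniform Dirichlet prior
    counts = np.ones((k, k))
    for a, b in zip(x[:-1], x[1:]):
        counts[a, b] += 1
    return counts[x[-1]] / counts[x[-1]].sum()

x = np.array([0, 1, 1, 0, 1, 1, 1, 0])
```

For this context the unigram prediction is `[0.375, 0.625]` (raw token frequencies), while the bigram prediction conditions on the last token being 0 and yields `[0.25, 0.75]` — illustrating how the bigram strategy uses the relative order of tokens that the unigram strategy ignores.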
*** We hope we have managed to address your concerns and would be grateful if you adjusted your score accordingly. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response. Before I update my score, I have a couple more questions/comments, if possible. I apologize in advance for these additional inquiries; my intention is simply to gain a better understanding of some of the claims made in the paper. I hope these questions also help the authors present their results more clearly. @3: When you write: *"The marginal distributions of the tokens generated from a Markov chain approach the stationary distribution exponentially fast no matter the initial distribution"* do you mean for $k=2$ states? Because one can construct transition probability matrices which yield Markov chains that exhibit slow convergence to stationarity. A random walk with periodic boundary conditions is an example, where the mixing times are of the order of $k^2$, for $k$ the number of states. Or am I missing something? @6: Could you please remind me how you compute the KL divergence in your experiments? Meaning, you compute the KL with respect to what? @8: Let's assume that we are dealing with a Markov chain that exhibits slow convergence to stationarity. In such a case, the marginal unigram distribution changes at every step. Let's also assume one trains your model on the *out-of-equilibrium* sequences sampled from said Markov chain. Do you still think the model will first find such a unigram strategy? I hope this rephrasing makes my question clearer. --- Reply to Comment 1.1.1: Title: Response to Reviewer Comment Comment: @3: To clarify, in our setting $T$ is taken to be large, while $k$ is effectively a constant. This is similar to natural language settings, where the number of letters or tokens is some constant, but the context can be arbitrarily long. When we wrote that the "marginal distributions...
approach the stationary distribution exponentially fast", we were referring to Lemma C.4, which states that the distribution of the token $n$ tokens after the current has distance to the stationary distribution bounded by $\alpha^n$, for some $\alpha<1$ depending on the specific chain. @6: In our experiments, we used KL divergence to measure the difference between the probabilities predicted by the model and other probability distributions. For test loss, this other distribution was the appropriate rows of the transition matrices used to generate the test examples. Formally, let $f(x_{1:T-1})$ be the softmax distribution of the transformer's output, given the input sequence $x_{1:T-1}$. In our standard setting, we measured $$d_{KL}(\mathcal{P}\_{x_{T-1}} || f(x_{1:T-1}))$$ where $\mathcal{P}\_{x_{T-1}}$ is the true distribution of the next state $x_T$ given the previous state, under the true Markov chain $\mathcal{P}$. Note that $\mathcal{P}$ varies from sequence to sequence (it is drawn from a prior over transition matrices) and is not directly observable by the learner—this is what needs to be learned in-context. For measuring how close the model was to various strategies, we computed the predicted probabilities given by said strategies, and used those as the base distribution. Note that the output of the bigram strategy (which is Bayes-optimal for our base setting) is different from the aforementioned ground-truth $\mathcal{P}\_{x_{T-1}}$. Instead, as described in Section 2.1, it is a Bayesian posterior distribution of the next state given the observed sequence, with the prior determined by the prior distribution of transition matrices. Formally: $$ \mathbb{E}[\mathcal{P}\_{x_{T-1}} | x_{1:T-1}] $$ where the expectation is taken over the draw of the Markov chain transition matrix.
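The KL-based test loss described above can be sketched as follows (the distributions below are illustrative numbers, not values from the paper):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # d_KL(p || q) for discrete distributions, with a small epsilon
    # to avoid log(0)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# test loss: KL from the ground-truth transition row P_{x_{T-1}}
# to the model's softmax output f(x_{1:T-1})
true_row = np.array([0.7, 0.3])     # illustrative ground-truth row
model_out = np.array([0.6, 0.4])    # illustrative model prediction
loss = kl(true_row, model_out)
```

To measure closeness to a strategy (e.g. unigram or bigram), the same function would be applied with that strategy's predicted probabilities as the base distribution in place of `true_row`.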
@8: Setting aside the mixing time issue discussed in @3, your question seems to suppose that all the training sequences are drawn from a single Markov chain, whereas in our work there is a new chain for each sequence. Do you have a more precise formulation of your question in mind? Regardless, we agree it could be interesting to explore what happens when the number of states is comparable to the sequence length, and the prior distribution over chains is crafted to favor chains that mix slowly, though this is beyond the scope of this work. --- Rebuttal 2: Title: New Lemma for $k>2$ Comment: We can prove a version of Lemma 3.1 for any number of states (k) as long as we consider a different distribution for transition matrices: specifically, a mixture of the distribution where the unigram strategy is optimal, and the distribution where the unigram strategy is as bad as guessing randomly. The proof only requires different versions of lemmas C.1 and C.2, and actually adds more intuition to our stage-wise learning story. Specifically, in the first phase of learning, the contribution to the gradient from the unigram-optimal distribution dominates, but in the second phase, the other component (from the distribution where unigrams are useless) is dominant. We would like to add this variation of Lemma 3.1 to our camera-ready version. --- Rebuttal 3: Title: Stationary Distribution Results Comment: For two states, the transition matrix can be represented as $\mathcal{P}=\begin{pmatrix}1-\alpha&\alpha\\\beta&1-\beta\end{pmatrix}$, where $\alpha,\beta$ are iid random variables from the uniform distribution on $[0,1]$. The stationary distribution is $\pi = \begin{pmatrix} \frac{\beta}{\alpha + \beta}, & \frac{\alpha}{\alpha + \beta} \end{pmatrix}$, which is feasible to analyze. 
While there exist results such as [Chafai et al](https://arxiv.org/abs/0808.1502) that characterize the stationary distribution in the limit as the number of states approaches infinity, we do not believe analogous results exist for any constant number of states greater than two.
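The two-state formulas above are easy to verify numerically (a quick sketch; the mixing-rate comment uses the fact that the second eigenvalue of this chain is $1-\alpha-\beta$):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = rng.uniform(size=2)              # iid uniform on [0, 1]
P = np.array([[1 - alpha, alpha],
              [beta,      1 - beta]])
pi = np.array([beta, alpha]) / (alpha + beta)  # closed-form stationary dist.

# any initial distribution mixes toward pi at geometric rate |1 - alpha - beta|
mu = np.array([1.0, 0.0])
gap = [float(np.abs(mu @ np.linalg.matrix_power(P, n) - pi).sum())
       for n in range(5)]
```

Here `pi @ P == pi` confirms stationarity, and `gap` shrinks geometrically, which is the $k=2$ instance of the exponential-mixing claim discussed earlier in the thread.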
Summary: The paper studies in-context learning with transformer models in a simple Markov Chain sequence modelling task. The authors empirically show the formation of statistical induction heads which correctly compute the posterior probabilities given bigram statistics. Moreover, they observe that during training the model undergoes phase transitions where the complexity of the n-gram model increases. They also propose a simplified theoretical model of a two-layer transformer to analyse these phenomena. Strengths: 1. The paper addresses a relevant topic within a simplified setting, facilitating the interpretability of transformer model solutions and an understanding of their training dynamics. 1. Utilising Markov chains is an effective approach to studying sequence-to-sequence modelling. 1. The study seeks to balance empirical investigation and theoretical analysis. 1. The observation that models may progress from simple solutions, like unigrams, to more complex structures during training is interesting. 1. It is relevant that transformers can learn an algorithm to estimate the transition matrix in context by gradient descent. Weaknesses: The major weakness of this work is that the abstract and introduction suggest a primary focus on analysing and uncovering the mechanisms behind the simplicity bias and the phase transition from simple unigram solutions to more complex ones. However, upon reading the main body of the paper, it seems that the experiments mainly reveal the existence of such biases and transitions, while the theoretical section addresses a different phenomenon: how the two layers are learned in different training phases. The plateau behaviour is particularly intriguing, such as the model learning unigrams, bigrams, trigrams, and so on, but their theory does not seem to describe or explain this observation. 1. 
**Unigram strategy:** Can the authors clarify this statement: "*the stationary distribution of this random Markov chain does not admit a simple analytical characterization when there is a finite number of states*"? In this context, what does "*this Markov chain*" refer to? 1. **Simplified transformer:** the proposed simplified transformer seems to be able to capture all the phenomenology of a real transformer. Nevertheless, it is not clear to me if the proposed model is a good proxy for an attention-only transformer or not. In particular: 1. The first attention layer does not use the interaction between tokens but represents the attention through a learnable matrix that could in principle learn the same structure. Could the authors elaborate on this choice? 1. The second attention instead captures the interaction between tokens, but between the input and the output of the first layer, which is unusual. I understand that this is based on the construction of the real transformer, where the output of the first attention is copied in the second block of the embedding and therefore can interact with the input, but it still means that the simplified version doesn't need to learn this mechanism. Could the authors elaborate on this choice? 1. The model seems to be composed of a non-linear attention for the first layer (softmax is present) and linear attention for the second. Could the authors elaborate on this particular choice? Is it a way to simplify the analysis while maintaining the properties of the softmax where needed? 1. **Data generation:** in section 2, it is explained how the transition matrices are generated according to a Dirichlet distribution; nevertheless, in the theory and some of the experiments there is the additional requirement for the matrix to be doubly stochastic, which is not mentioned when the setup is described. Could the authors clarify this point and highlight the importance of doubly stochastic transition matrices? 1.
**Proof of Proposition 2.2:** the proof of this proposition appears disorganised and not easy to follow. In particular: 1. Setting the internal dimension d=3k seems fundamental to ensure that the model has the correct number of dimensions to copy the tokens from one layer to the next. I do not necessarily have a problem with this choice but it would be useful to see it discussed in the main text. 2. Is the definition of $v^{(1)}$ correct? If I am not mistaken, it appears that the matrix is of dimensions $3k \times 3k$, whereas by the main text it should be of dimensions $t \times 3k$; could the authors clarify this, together with the definition of $\delta_2$ and $1_k$? 3. There are multiple quantities used in the text, such as $e_{x_i}$, $e_i$, or $e_{{i-1},j}$; could the authors clarify their meaning? 4. In the expression $\text{softmax}(\text{mask}(A))_{i,j} \approx \mathbb{1}[j=i-1]$ shouldn't it be $[i=j-1]$? Why is the indicator function comparing the indexes instead of the values $x_i$? Moreover, could the authors clarify why it is only the indicator function and not $\frac{1}{\text{count}([x_i=x_j-1])}$ given the softmax? 5. In the expression of $\text{Attn}_2(e)$ why is the summation from $h=1$ to $h=3k$ if $h$ is the index of the element in the sequence? Furthermore, a new index $g$ appears which is not used anywhere. 1. **Experiments in Figure 8:** are the experiments in Figure 8 using $d=3k$? Do the authors observe that the parameters converge to the construction given in Proposition 2.2? 1. **Proof of Proposition 2.3:** The proof for the unigram construction seems to be missing. 1. **Unigram and bigram constructions:** For both the two-layer transformer and the simplified model it is possible to show constructions for the unigram and bigram models. Nevertheless, besides giving the weights for such constructions, the authors do not discuss the relationship between them and the objective functions. Are they stationary points of the dynamics?
Does this help explain the plateau observed in the experiments? Is the unigram a saddle point? 1. **Minimal model:** In line 241 the authors state that "*the minimal model converges to the bigram solution spending however significantly less time at the unigram solution*". By looking at Figure 3, I understand how the minimal model reaches the same KL value in almost half the time, but it also has fewer parameters, and some of the structure is already enforced by construction (for example, the fact that the second attention is between the input and output of the first layer). Could the authors explain why the comparison still makes sense? 1. **Varying the data distribution:** I find this part unclear. In line 231 the authors state that they define distributions "*we define distributions over Markov chains that are in between the distribution where unigrams is Bayes optimal, and the distribution where unigrams is as good as uniform.*" Can the authors provide a mathematical definition of such a distribution? Are doubly stochastic transition matrices used to create Markov chains with a uniform stationary distribution, such that the latter would be the only unigram possible? 1. **Two phases of learning in Lemma 3.1:** The model is capable of reproducing the effect of learning the second layer first; nevertheless, I am not convinced that this is a property of the simplified transformer rather than a consequence of the initialization and step size. Could the authors clarify this point? Technical Quality: 2 Clarity: 2 Questions for Authors: See above. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: - Simplified transformer architecture - Synthetic task only; not clear if similar phenomena appear in larger models trained on natural language data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the in-depth review and feedback! You point out that there is some misalignment between our experimental story and our proofs, and you see that as our paper's major weakness. We agree that there is misalignment, due to the difficulty of analyzing SGD on multi-layer Transformers. However, we believe our analysis is very relevant to the experimental observations about simplicity bias and phase transitions. In particular, Lemma 3.1 exhibits simplicity bias by showing that the parameters after the first phase compute a solution that is a weighted version of the unigram solution, and the second phase computes a mixture of unigram and bigram. Also, the magnitude of the signal for the second phase is a factor $T$ lower than the first step, indicating that the second phase requires more time for "signal accumulation". We agree that this does not explain the sharpness of the phase transition, but it does provide theoretical intuition for the existence of multiple phases. We are happy to add these aspects of the story to the discussion following Lemma 3.1 in the text; for these reasons, we don't think the extent of misalignment is a serious weakness in the paper. We address your various minor concerns below: 1\. Thank you for pointing out this line as being out of place. A version of the line (edited for clarity) will be moved to our discussion of limitations. The line intended to communicate the difficulties with understanding the stationary distribution when $k>2$. 2.1 and 2.2. The goal of the minimal model is to capture the dynamics of the full transformer, while being amenable to theoretical analysis. Essentially all analyses of multilayer transformers have relied on simplifying out many parts of the transformer. While simplifying the transformer we carefully checked that the dynamics we wished to understand were preserved as much as possible. 
Due to space limitations, we have deferred a more in-depth explanation of the choices in the minimal model to a comment. 2.3. This is correct. Unfortunately, the analysis does not seem to be feasible with softmax in both layers. 3 and 9. Generating each row of the transition matrix i.i.d. from a flat Dirichlet distribution is equivalent to the uniform distribution over all transition matrices. We chose to focus primarily on this distribution because it is the most natural. The only mention of doubly stochastic matrices in the theory (Lemma C.3) was mistakenly left in after the surrounding context was removed. Due to space constraints, we defer definitions of the other distributions to a comment. 4\. Thanks for your careful proofreading. Due to space constraints, we defer the specific changes and improvements made in response to a comment. 5\. The trained transformers used an internal dimension of 16. The specific parameters after training always vary randomly due to initialization and training samples. Figure 8 shows that the transformers are implementing the same algorithm as our construction, with each layer performing the same functions. 6\. Thanks for pointing out this oversight; we have added the full proof (it's rather short and simple). 7\. That's an interesting question! Unfortunately, the details of the loss landscape for even simplified transformers are hard to precisely characterize with current theoretical tools. Among other difficulties, a barrier to transformer optimization results is analyzing cases where the input to the softmax isn't close to zero or one-hot. Our intuition is that the unigram is not exactly a saddle point: during the plateau period, there is very slow continual improvement in the loss. We suspect the gradient at the unigram solution is simply small relative to the change needed to move towards the bigram solution. 8\. 
Line 241 is a holdover from a previous version and no longer applies to the experiments in the paper; we apologize that it made it into this version, and we will remove it. On the left in Figure 3, both the minimal model and the transformer model reach their minimum loss after seeing around 80,000 training sequences. We do believe, though, that one shouldn't put too much weight on the fact that these took the same time to train. We believe this comparison still makes sense. The minimal model is designed to be able to express everything the transformer expresses when being trained on the task. Experimentally, it goes through the same phases. In the rightmost graphs in Figure 3, both models quickly learn to use the unigram strategy, before eventually adopting the bigram strategy. Since this task does not require the full expressive power of a transformer, it is not too surprising that a model with many fewer parameters is sufficient to gain insight into the training dynamics. 10\. The most compelling piece of evidence that the minimal model actually goes through these phases is the experimental results shown in Figures 3 and 4. Experimentally, this also works for standard small Gaussian initializations. In the conclusion, we will add a mention of the limitation of this being a two-step analysis. Unfortunately, existing tools are insufficient to analyze dynamics involving softmaxes (specifically when the inputs to the softmax are neither small nor dominated by a single index). For Lemma 3.1 we were forced to limit our analysis to two steps. However, as we mention on lines 579-580, the uniform component of $W$ does not contribute to the gradient of $v$, so even if we initialized $W$ to any uniform non-zero initialization, $v$ would still not change in the first step. We chose the specific step sizes to allow the theory to show how the model can go through the unigram and bigram phases, which happens in practice over the course of many small steps. 
*** We hope we have managed to address your concerns. If you think we have adequately done so, we would be grateful if you adjusted your score accordingly. Thank you! --- Rebuttal 2: Title: Minimal Model Design Comment: To create our minimal model, we started from a two-layer attention-only disentangled transformer (see [Elhage et al](https://transformer-circuits.pub/2021/framework/index.html)) using relative positional embeddings, and iteratively simplified parts that empirically did not affect the training dynamics. In our construction, the first layer only attends to positional embeddings, and the second layer ignores positional embeddings, so we set the first-layer key matrix and the positional embeddings of the second layer to zero. Then we set the value matrices and query matrices to the identity in both layers. In the experiments, all optimizations such as layer norms and weight decay were removed, and the optimizer used was SGD. With all of these changes, the overall training dynamics were hardly changed at all; depending on hyperparameters, the changes could speed up training, but the same loss curve and phases were observed. To make analysis of gradient descent on the minimal model feasible, the softmax on the second layer had to be removed, which did make the phase transition less sharp. --- Rebuttal 3: Title: Doubly stochastic and 'unigram' distributions Comment: Doubly stochastic transition matrices are those for which the stationary distribution is the uniform vector, or equivalently, those for which the unigram strategy and the uniform strategy get the same loss. If we want to observe the inductive biases of the model when there is no signal encouraging the unigram strategy, then sampling from doubly stochastic transition matrices is natural. Finally, we also considered the distribution where each row in the transition matrix is the same, resulting in the unigram and bigram strategies having the same loss. 
The mathematical definition of the mixed distribution in the graph on the left of Figure 4 is as follows: with 75% chance, choose a uniformly random doubly stochastic transition matrix; otherwise, make each row of the transition matrix the same vector, chosen from a flat Dirichlet distribution. --- Rebuttal 4: Title: Question 4 Comment: 4.1. We use a dimension of $3k$ to have a simple and intuitive construction. In practice, models can learn with a far smaller internal dimension. We can add discussion of this to the main text. 4.2. We apologize for the confusing notation with regard to the matrix definitions; it will be improved. $v^{(1)}$ is of dimension $t\times d$, where $d=3k$. We will make sure that the dimensions of each submatrix used to define $v^{(1)}$ and all other matrices are specified in the notation. 4.3. $x_i$ refers to the token at position $i$. $e$ is used to represent the $1$-hot embeddings: $e_i$ is a vector that is $0$ everywhere except at position $i$, where it is $1$. Similarly, $e_{x_i}$ is all zeros except at position $x_i$, where it has a $1$. $e_{i-1,j}$ is the $j$th index of the vector $e_{i-1}$, hence $e_{i-1,j}=\mathbb{1}[i-1=j]$. We will do an extra pass over all of this notation to clarify and improve it. 4.4. and 4.5. We thank you for pointing out these typos. It should say: $$\text{softmax}(\text{mask}(A))\_{i,j}\approx \frac{\mathbb{1}[x_{j-1}=x_i]}{\sum_{h=1}^i \mathbb{1}[x_{h-1}=x_i]}$$ Fixing these typos, we get the result $$Attn_2(e)\_{i,j+2k} =\frac{\sum_{h=1}^{i}\mathbb{1}[x_{h-1}=x_i]\mathbb{1}[x_h=j]}{\sum_{g=1}^i\mathbb{1}[x_{g-1}=x_i]}$$ which is more correct, since this gives the bigram probabilities instead of just the bigram statistics. That is, this is the empirical approximation of $P_{x_i, j}$. --- Rebuttal Comment 4.1: Comment: Following up to see what you think about our response to your review. Let us know if you have any further questions or need any further clarifications.
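To make the data-generating distributions discussed in this rebuttal thread concrete, here is a minimal NumPy sketch of (a) sampling a transition matrix with flat-Dirichlet rows plus a sequence rollout, and (b) the 75/25 mixed distribution. Caveats: sampling a *uniformly* random doubly stochastic matrix is nontrivial, so the Sinkhorn iteration below is a stand-in that produces an approximately doubly stochastic matrix (not uniform over them), and all function names are illustrative, not from the paper.

```python
import numpy as np

def sample_transition_matrix(k, rng):
    """Rows i.i.d. from a flat Dirichlet -- equivalent to the uniform
    distribution over k x k row-stochastic transition matrices."""
    return rng.dirichlet(np.ones(k), size=k)

def sample_sequence(P, t, rng):
    """Roll out a length-t token sequence from transition matrix P."""
    k = P.shape[0]
    x = np.empty(t, dtype=int)
    x[0] = rng.integers(k)  # uniform initial state
    for i in range(1, t):
        x[i] = rng.choice(k, p=P[x[i - 1]])
    return x

def sinkhorn_doubly_stochastic(k, rng, iters=500):
    """Stand-in sampler (NOT uniform): alternately normalize rows and
    columns of a positive random matrix until approximately doubly
    stochastic (Sinkhorn iteration)."""
    M = rng.random((k, k)) + 1e-6
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)  # rows sum to 1
        M /= M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M

def sample_mixed(k, rng, p_ds=0.75):
    """Mixed distribution from the rebuttal: with probability p_ds a
    doubly stochastic chain (unigram no better than uniform); otherwise
    all rows equal one Dirichlet draw (unigram as good as bigram)."""
    if rng.random() < p_ds:
        return sinkhorn_doubly_stochastic(k, rng)
    row = rng.dirichlet(np.ones(k))
    return np.tile(row, (k, 1))

rng = np.random.default_rng(0)
P = sample_transition_matrix(3, rng)
seq = sample_sequence(P, 100, rng)
Q = sample_mixed(4, rng)
```

In the identical-rows branch, the next-token distribution is independent of the current token, which is why the in-context unigram and bigram strategies coincide there.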
Summary: This paper introduces a task to investigate how in-context learning capabilities are learnt by transformer models. They show that models trained on this task go through a phase transition: the models start by modeling unigram statistics and then act as a bigram model. The authors further extend their work to the case of n=3 and show similar behavior. Strengths: 1. Nice presentation, with clear figures exhibiting the key behavior in question (sudden "emergence" of the correct behavior). 2. Good attempt to form a theoretical/mathematical foundation, which extends to the appendix. 3. Attempt to understand ICL in LLMs using a good toy task. Weaknesses: 1. Questionable impact, since this topic has been explored heavily in the past years, with other similar tasks and toy models existing and showing similar results. 2. Questionable how well the toy setting can actually transfer to real-world ICL, albeit interesting. Technical Quality: 3 Clarity: 3 Questions for Authors: What is the key differentiating contribution of the work in comparison with the myriad of other works in this space? What is the conclusion of the work that you believe is transferable to the real-world ICL setting? What n-gram statistics would that follow? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: It would potentially be useful to also discuss the limitations of the toy task to explain the full-fledged LLM ICL setting. A lot has been said in the paper about conclusions made, and I understand that this is the same with any work employing a toy task, but an attempt at making more direct comparisons between the real vs. toy setting may enhance the value of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and questions. Here we address the main weaknesses and questions raised by you. ## Impact and related works > Questionable impact since this topic has been explored heavily in the past years, with other similar tasks and toy models existing, showing similar results... What is the key differentiating contribution of the work in comparison with the myriad of other works in this space? As we discuss in our related work section, there are indeed prior works which introduced synthetic settings for exploring ICL: - [Garg et al., 2022](https://arxiv.org/abs/2208.01066) (and follow-ups) train models to learn linear functions (and other simple function classes) in a few-shot setting. - [Xie et al. 2022](https://arxiv.org/pdf/2111.02080) describe a few-shot learning setting in which each document consists of examples generated by a hidden Markov model. - Finally, most related to our work is [Bietti et al. 2023](https://arxiv.org/abs/2306.00802), in which all sequences are generated from a single Markov chain (which doesn't need to be learned in-context), but certain 'trigger' tokens are automatically followed by sequence-specific tokens which need to be learned in-context. We believe that our task, in-context learning of Markov chains (and the higher-order generalizations thereof), is a valuable contribution in the context of this rich literature. (No single synthetic task is going to capture all of the scientifically interesting aspects of in-context learning.) In particular: - It is very natural and simple to describe. - It elicits the formation of *induction heads* in networks trained on the task. (It is a particularly natural setting for studying their emergence.) - There are multiple strategies of various levels of sophistication which can be used to solve the task to varying degrees of success (unigram, bigram, etc.), and which networks pass through in stages during training. 
Moreover, the results we obtained by studying this task experimentally and theoretically are novel. Our key findings -- the formation of statistical induction heads, the stage-wise phase transitions between in-context solutions (unigram, bigram, trigram, ...) on the way to success, the arguments that simplicity bias may delay the formation of the correct in-context solution, and the finding that the second layer of the model is learned before the first layer, rather than the other way around -- are all original to our work. Given that you say there are lots of very similar works, **we suspect you may be mistakenly assuming that some concurrent works were actually prior works.** Indeed, at the same time we released our preprint, there were several simultaneous papers (all with pre-prints posted within a single month) that shared aspects of our setup and/or scientific focus. Please see the "Concurrent Works" paragraph in our Related Work section for a discussion of these papers and their relation to our work: [Akyürek et al., 2024](https://arxiv.org/abs/2211.15661), [Hoogland et al., 2024](https://arxiv.org/abs/2402.02364), [Makkuva et al., 2024a](https://arxiv.org/abs/2402.04161), and [Nichani et al., 2024](https://arxiv.org/abs/2402.14735). There have also been subsequent works in this space, including [Rajaraman et al., 2024a](https://arxiv.org/abs/2407.17686), [Rajaraman et al., 2024b](https://arxiv.org/abs/2404.08335), and [Makkuva et al., 2024b](https://arxiv.org/abs/2406.03072). We want to make sure, following scientific best practice, that you are judging our work's significance in relation to *prior works*, not concurrent works. If you mistakenly thought the above concurrent works were prior, we understand! There has indeed been a burst of activity on this topic recently. In that case, we hope you will reassess our work in its proper context and adjust your score accordingly. 
If you were referring only to actual prior works, we hope you find the above contextualization of our work convincing and reassess accordingly; if not, we would appreciate some elaboration on the works you are referring to. ## Real-world relevance > What is the conclusion of the work that you believe is transferable to the real-world ICL setting? What n-gram statistics would that follow? N-gram statistics are useful for predicting real-world natural language text. Indeed, historically, language models were often simply n-gram predictors! It is also useful for models to learn in-context bigram (and more generally, n-gram) statistics. For instance, the writing style of a particular document is connected to the n-gram statistics of that document. The most concrete connection the toy MC-ICL setting has to real-world ICL settings is induction heads, which have been shown to be important for in-context learning in LLMs in past work ([Olsson et al, 2022](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html)). In the MC-ICL setting, the emergence of induction heads corresponds to the phase transition in loss, just like the phase transition in the ICL ability of transformers that has been hypothesized to be caused by emerging induction heads. Additionally, compared to other ICL tasks studied in the past (such as linear or logistic regression), we believe that n-grams are a better model for natural language specifically. --- Rebuttal Comment 1.1: Title: Link correction Comment: A minor fix to our rebuttal: the reference to Akyürek et al., 2024 should point to [this paper](https://arxiv.org/abs/2401.12973). --- Rebuttal 2: Comment: Thank you for answering my questions. I admit that some, but definitely not all, of the papers I had in mind may be concurrent works. 
I have read the other reviewers' comments, and in general it seems that, while there is agreement that this is a well-done piece of scientific literature, the impact is questionable, as the toy-setting field, particularly when it pertains to ICL, is definitely crowded. Finding new angles to attack the problem is a worthwhile pursuit, but at times it may distract from perhaps other more important and less-studied issues. As I see it, there are better reviewers than me to assess this work, so I will raise my score from a 4 to a 5 while keeping my confidence low, and leave it up to them to reach a consensus.
Summary: In the paper, the authors investigate the phenomenon of in-context learning exhibited by Transformers with the help of a simplified architecture and Markovian synthetic data. The authors show experimentally that the attention layers form statistical induction heads that help the model to implement an add-constant estimator based on the empirical counts of the input. They also show that a simplified transformer architecture can effectively represent such an estimator, and they provide an SGD convergence analysis for a minimal model. Strengths: The paper joins a line of works that aim at studying transformers with the help of synthetic data generated according to Markov distributions. Even if the model considered is significantly simplified, I believe that the theoretical and experimental insights provided by the paper are intriguing. The paper is also well written and the results seem correct to me, even if the notation can be improved, especially for the proofs in the appendix, which are not easy to follow. Weaknesses: The main limitations of the paper are: (1) the transformer architecture is heavily simplified (even if I believe that the results obtained in the paper should extend to more complex architectures); (2) the model used for the SGD analysis is, indeed, minimal. While limitation (1) is not so important to me, I am a bit concerned about limitation (2). I am not sure what the actual utility of an SGD convergence analysis on such a minimal model is. While I think that the representation result of Proposition 2.2 can be extended to a more complex model, I don't think that the SGD analysis of the minimal model would work for a more complex architecture, which would then need completely different and more sophisticated techniques. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Do you think that an SGD analysis similar to the one carried out for the minimal model would carry over to a more complex architecture, closer to the actual transformer? 
If not, what are the main issues with working out such an analysis for, for example, the simplified transformer model of Equation (1)? 2. Your model architecture uses relative positional embeddings, as opposed to most state-of-the-art transformer models. Do you think that your analysis would be easily extended to a model with absolute positional embeddings? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
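The add-constant estimator over in-context counts that this review's summary refers to can be sketched as follows. This is a hypothetical illustration (the function names and the smoothing constant `alpha` are ours, not from the paper), contrasting the unigram and bigram strategies the reviews discuss:

```python
from collections import Counter

def add_constant_unigram(seq, k, alpha=1.0):
    """Unigram strategy: smoothed in-context token frequencies."""
    counts = Counter(seq)
    row = [counts[j] + alpha for j in range(k)]
    total = sum(row)
    return [c / total for c in row]

def add_constant_bigram(seq, k, alpha=1.0):
    """Bigram strategy: smoothed in-context transition counts out of the
    last token -- the computation attributed to a statistical induction head."""
    counts = Counter(zip(seq[:-1], seq[1:]))  # empirical bigram counts
    last = seq[-1]
    row = [counts[(last, j)] + alpha for j in range(k)]
    total = sum(row)
    return [c / total for c in row]

seq = [0, 1, 0, 1, 0]
p_uni = add_constant_unigram(seq, k=2)  # -> [4/7, 3/7]
p_bi = add_constant_bigram(seq, k=2)    # -> [0.25, 0.75]
```

On this toy sequence the unigram strategy only tracks overall token frequencies, while the bigram strategy conditions on the last token and correctly concentrates mass on the observed transition.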
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful review. Here we address the questions/concerns raised by the reviewer. 1. > Do you think that an SGD analysis similar to the one carried out for the minimal model would carry over to a more complex architecture, closer to the actual transformer? If not, what are the main issues with working out such an analysis for, for example, the simplified transformer model of Equation (1)? We believe that the analysis would mostly carry over to a more complicated architecture, with one exception. Specifically, the softmax in the second-layer attention proved intractable to analyze at the same time as the softmax in the first layer. Analyzing softmax is a common difficulty in transformer analyses and is why most theoretical results apply only to linear attention; we are happy to have managed to include the softmax in the first layer. Empirically, the training dynamics of the minimal model are very similar to those of the full model, the primary difference being that the phase transition isn't as sharp in the minimal model. With the softmax in the second layer added in, the phase transition becomes sharp like in the full model, but the tools do not currently exist to analyze this (specifically nested softmaxes on inputs not close to 0 or infinity). To create our minimal model, we started from a two-layer attention-only disentangled transformer (using relative positional embeddings) and iteratively simplified parts that empirically did not affect the training dynamics. In this context, a disentangled transformer (introduced in [Elhage et al](https://transformer-circuits.pub/2021/framework/index.html)) is like a normal transformer, except that the output of each layer is appended after the residual. 
In our construction, the first layer only attends to positional embeddings, and the second layer ignores positional embeddings, so we set the first-layer key matrix, $W_k^{(1)}$, to $0$, and removed positional embeddings from the second layer ($v^{(2)}=0$). Then we set the value matrices and query matrices to the identity in both layers. In the experiments, all optimizations such as layer norms and weight decay were removed, and the optimizer used was SGD. With all of these changes, the overall training dynamics were hardly changed at all; depending on hyperparameters, the changes could speed up training, but the same loss curve and phases were observed. To make analysis of gradient descent on the minimal model feasible, the softmax on the second layer had to be removed, which did make the phase transition less sharp. We believe an analysis that includes this softmax could confirm additional insights into why the phase transition is so sharp, and is a potential direction for future work. 2. > Your model architecture uses relative positional embeddings, as opposed to most state-of-the-art transformer models. Do you think that your analysis would be easily extended to a model with absolute positional embeddings? Empirically, the absolute positional embeddings add noise, but do not fundamentally change any of the results. The analysis might be a bit messier, but we would not expect it to be fundamentally different. Additionally, many state-of-the-art models use non-absolute positional embeddings, including the Llama series of models ([1](https://arxiv.org/abs/2302.13971), [2](https://arxiv.org/abs/2307.09288), and [3](https://arxiv.org/abs/2407.21783)). --- Rebuttal Comment 1.1: Comment: Thank you for your comments. I will maintain my positive score.
NeurIPS_2024_submissions_huggingface
2024
Towards Safe Concept Transfer of Multi-Modal Diffusion via Causal Representation Editing
Accept (poster)
Summary: The paper addresses the potential misuse of VL2I diffusion models, such as copying artistic styles without permission, which could lead to legal and social issues. The paper introduces an early exploration of safe concept transfer in MLLM-enabled diffusion models using a novel framework called Causal Representation Editing (CRE). CRE allows for effective inference-time removal of unsafe concepts from noisy images while retaining other generated content. This is achieved through fine-grained editing based on identifying the causal period during which unsafe concepts appear. The approach reduces the editing overhead by nearly half compared to existing methods. The paper presents extensive evaluations demonstrating that CRE outperforms existing methods in terms of effectiveness, precision, and scalability, even in complex scenarios with incomplete or blurred features of unsafe concepts. Strengths: 1. The paper introduces an innovative approach to address the emerging concern of safe concept transfer in multimodal diffusion models. The originality is evident in its novel application of CRE to selectively remove unsafe concepts from generated images. 2. The research is methodologically sound, with comprehensive evaluations and rigorous testing of the proposed CRE framework. 3. The paper is well-written and clearly structured, making it accessible to both experts and those new to the field. 4. The significance of this work lies in its potential to influence the future development of safe AI-generated content. As AI models become increasingly integrated into creative industries, the ability to safely and efficiently remove unsafe or unwanted concepts is critical. Weaknesses: 1. The paper lacks a user study or feedback from practitioners who might use the CRE framework in their workflows. 2. Although the paper provides extensive evaluations, it could benefit from a more diverse set of evaluation metrics and benchmarks. 
The effectiveness, precision, and scalability are well-documented, but additional metrics such as computational efficiency, user satisfaction, or real-world applicability could provide a more comprehensive assessment of the method's performance. Furthermore, comparisons with a wider range of existing techniques would strengthen the argument for the superiority of the proposed approach. 3. While the paper highlights the importance of safe content generation, it could delve deeper into the ethical and legal implications of using CRE. For example, the criteria for defining "unsafe concepts" are not fully explored, which could lead to subjective or inconsistent applications of the method. Moreover, the potential misuse of the technology for censorship or manipulation of content raises important ethical concerns that the paper does not adequately address. A more detailed discussion on these aspects would enhance the comprehensiveness of the research. Technical Quality: 4 Clarity: 4 Questions for Authors: Please see weaknesses. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have adequately described the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below we address your concerns. Please feel free to post additional comments if you have further questions. 1. Thanks for your suggestion. We are applying our technology to more deployed generative models and considering inviting users to participate in testing. For computational efficiency, we report the comparison of inference time on generating 100 images as follows: | Kosmos-G | SLD | ProtoRe | CRE | | ---- | ---- | ---- | ---- | | 226 s | 228 s | 257 s | 246 s | We also compare with two unlearning-based methods, CA [1] and UCE [2]. Experimental results are shown in the anonymized PDF. https://anonymous.4open.science/r/Exp-E7E7/ If an error is displayed online, please download the PDF file. [1] Kumari, Nupur, et al. "Ablating concepts in text-to-image diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [2] Gandikota, Rohit, et al. "Unified concept editing in diffusion models." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024. 2. The concept of "unsafe content" is not yet fully defined and may require further clarification in line with laws and regulations. Our approach is designed for generation service providers to prevent the creation of such content within their models. However, there remains a potential risk of misuse of representation editing techniques. For instance, adversaries could exploit this technology to conceal a model's ability to generate specific unsafe concepts, thereby evading reviews by third-party platforms (such as Hugging Face). Additionally, they could intentionally introduce or remove certain concepts in the images provided by regular users, leading to biased generation outcomes. --- Rebuttal Comment 1.1: Comment: I acknowledge having read the authors' rebuttal. My overall assessment of the paper remains unchanged, and I continue to support my current rating.
Summary: The authors study an important problem: the misuse of text-to-image (T2I) diffusion models, which can lead to legal and social issues. They propose a causal representation editing (CRE) method, which extends representation editing from large language models to diffusion-based models. CRE improves safe content generation by intervening at diffusion timesteps linked to unsafe concepts, allowing precise removal of harmful content while preserving quality. Extensive experimental results show the effectiveness of their model. Strengths: 1. The authors focus on an important problem: the misuse of text-to-image (T2I) diffusion models, which can lead to legal and social issues. 2. The writing is clear and easy to follow. 3. The authors conduct extensive experiments to verify the effectiveness of their proposed method, CRE. Weaknesses: 1. The authors may need an experiment comparing inference time with other inference-time safe concept transfer models to further verify the efficiency of their model. 2. It would be better to provide more related works about representation editing in the related works section. 3. The contribution of the design for selecting editing timesteps to the performance of the model is unclear compared with random selection. Technical Quality: 3 Clarity: 3 Questions for Authors: ProtoRe also has a similar editing function in its Eq (7). The contribution in this paper is about adding the term related to the discriminator. Can the authors explain more about the contribution compared with ProtoRe? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Can be found in the weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below we address your questions and concerns. Please feel free to post additional comments if you have further questions. **For weakness 1:** We report the comparison of inference time on generating 100 images as follows: | Kosmos-G | SLD | ProtoRe | CRE | | ---- | ---- | ---- | ---- | | 226 s | 228 s | 257 s | 246 s | **For weakness 2:** Thanks for your suggestion. We include more related works to comprehensively introduce the advancements of representation editing for LLMs. Representation editing involves creating steering vectors that, when added during the forward passes of a frozen large language model (LLM), produce desired changes in the output text [1,2]. It is based on the idea that LLMs encode knowledge linearly [3]. By editing these steering vectors, which are derived from the model's activations, users can modify the model's behavior [4]. Some previous studies have used gradient descent to search for these steering vectors [5,6]. Current studies on inference-time intervention (ITI) [7] in LLMs indicate that many LLMs exhibit interpretable directions in their activation spaces, which influence their inference processes. For instance, by introducing carefully designed steering vectors to specific layers for particular tokens, the output can be significantly biased, regardless of the user prompt's topic [8,9]. Developing a training-free editing method to mitigate unsafe concepts in generative models offers two key advantages. Firstly, it allows the model to retain its strong zero-shot generation ability by preserving the knowledge from pre-training. Secondly, as unsafe concepts may change dynamically due to legal or copyright factors, a plug-and-play editing method can efficiently add or remove types of unsafe concepts under governance. [1] Dathathri, Sumanth, et al. "Plug and play language models: A simple approach to controlled text generation." 
arXiv preprint arXiv:1912.02164 (2019). [2] Zou, Andy, et al. "Representation engineering: A top-down approach to AI transparency." arXiv preprint arXiv:2310.01405 (2023). [3] Burns, Collin, et al. "Discovering latent knowledge in language models without supervision." arXiv preprint arXiv:2212.03827 (2022). [4] Li, Kenneth, et al. "Emergent world representations: Exploring a sequence model trained on a synthetic task." arXiv preprint arXiv:2210.13382 (2022). [5] Subramani, Nishant, Nivedita Suresh, and Matthew E. Peters. "Extracting latent steering vectors from pretrained language models." arXiv preprint arXiv:2205.05124 (2022). [6] Hernandez, Evan, Belinda Z. Li, and Jacob Andreas. "Inspecting and editing knowledge representations in language models." arXiv preprint arXiv:2304.00740 (2023). [7] Li, Kenneth, et al. "Inference-time intervention: Eliciting truthful answers from a language model." Advances in Neural Information Processing Systems 36 (2024). [8] Turner, Alex, et al. "Activation addition: Steering language models without optimization." arXiv preprint arXiv:2308.10248 (2023). [9] Liu, Sheng, Lei Xing, and James Zou. "In-context vectors: Making in-context learning more effective and controllable through latent space steering." arXiv preprint arXiv:2311.06668 (2023). **For weakness 3:** We conduct a comparison of CRE and random selection. For fairness, the total number of edits in random selection is similar to that in CRE. Experimental results are shown in Figure 5 in the anonymized PDF. https://anonymous.4open.science/r/Exp-E7E7/ If an error is displayed online, please download the PDF file. **For question:** Given the limitations of ProtoRe, our proposed CRE method introduces improvements in two key areas: **Scalability:** ProtoRe struggles with scalability; its performance deteriorates as the number of unsafe concepts increases, particularly when editing multiple concepts simultaneously. 
To address this, CRE incorporates a discriminator that focuses on editing the representation of only one unsafe concept at a time. This approach helps maintain stable performance. **Efficiency:** ProtoRe applies representation editing throughout the entire diffusion process, leading to unnecessary computational overhead. In contrast, CRE targets specific diffusion steps associated with unsafe concepts, based on an understanding of the different information generated at successive stages of the diffusion process. Typically, CRE restricts editing to no more than half of the diffusion steps, thereby reducing overhead.
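As a rough, hypothetical sketch of the mechanism described in this rebuttal (discriminator-gated editing confined to a causal timestep interval), the following pure-Python function projects an unsafe-concept direction out of each row of a representation only when both conditions hold. All names and the exact editing rule are assumptions for illustration, not the paper's implementation:

```python
def causal_edit(A, unsafe_dir, t, t_s, t_e, prompt_is_unsafe):
    """Hypothetical CRE-style edit: remove the component of each row of the
    representation A (list of lists) along the unsafe-concept direction, but
    only when the discriminator flags the prompt AND t lies in [t_s, t_e]."""
    if not prompt_is_unsafe or not (t_s <= t <= t_e):
        return A  # outside the causal interval: leave the representation untouched
    norm = sum(x * x for x in unsafe_dir) ** 0.5
    u = [x / norm for x in unsafe_dir]  # unit vector for the unsafe direction
    edited = []
    for row in A:
        proj = sum(a, * 1) if False else sum(a * b for a, b in zip(row, u))
        edited.append([a - proj * b for a, b in zip(row, u)])
    return edited
```

Because edits are skipped entirely outside the interval, the per-step overhead is zero for most timesteps, matching the rebuttal's claim that editing touches no more than half of the diffusion steps.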
Summary: The paper introduces Causal Representation Editing (CRE) for vision-language-to-image (VL2I) models to prevent undesirable concept generation. CRE prevents unsafe image generation through the following process: 1. Using a discriminator, it detects whether an unsafe concept is present in the user prompt. 2. If the discriminator identifies an unsafe concept, it projects out the representation $\tilde{A}$ corresponding to the unsafe concept from the existing attention representation $A$. 3. The projection is not performed at every diffusion timestep but only within the specific interval $[t_s, t_e]$ identified as causally influential for unsafe-concept generation through assess-with-exclusion. The authors demonstrated the efficacy of CRE through object, style, and multiple-style censoring. Strengths: (Here, I will refer to "preventing unsafe image generation" as censoring.) 1. Ability to Handle Unsafe Image Prompts Most existing safety methods are designed for T2I models and focus on preventing the generation of unsafe text concepts. The method proposed in this paper has the advantage of being applicable regardless of the prompt domain, whether text or image. 2. Censoring Performance Table 1 shows that CRE performs better in censoring compared to existing methods. Weaknesses: 1. Using an external discriminator is highly inefficient. For example, let's say we need to remove K unsafe concepts. Using an external discriminator adds two types of overhead: (1) training a discriminator for each concept and (2) performing discriminator inference for each concept. While (1) can be somewhat mitigated with models like CLIP that have good zero-shot classification performance, (2) remains problematic. In my opinion, the advantage of test-time guidance methods like ActAdd and SLD is that they don't require training. The use of a discriminator diminishes the benefits they offer over inference-time refusal or machine unlearning methods. 2. 
Inadequate experiments The current paper lacks significant experiments needed in the safety unlearning field. There are no experiments showing the impact of CRE on image fidelity. While a perfect discriminator would be ideal, real-world discriminators are not perfect. Can you report the COCO 30k dataset FID? Considering limited computational resources, knowing the results when censoring "cassette player" and "Disney" would be sufficient. This paper uses a T2I diffusion model as an image decoder, so it could compare machine unlearning techniques for T2I diffusion models as baselines. Are there any comparisons with safety unlearning techniques for T2I diffusion models? State-of-the-art methods include SPM [1] and MACE [2], but considering the submission time, comparing with CA [3] and UCE [4] would suffice. There are no experiments on NSFW concept removal. I'm curious if CRE can robustly prevent adversarial prompt attacks like Ring-A-Bell [5] or concept inversion [6] through red teaming. 3. Methodological novelty (minor) Directly using the representation editing method ActAdd from LLMs is relatively less novel. --- [1] : One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications, https://arxiv.org/abs/2312.16145 [2] : MACE: Mass Concept Erasure in Diffusion Models, https://arxiv.org/abs/2403.06135 [3] : Ablating Concepts in Text-to-Image Diffusion Models, https://arxiv.org/abs/2303.13516 [4] : Unified Concept Editing in Diffusion Models, https://arxiv.org/abs/2308.14761 [5] : Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?, https://arxiv.org/abs/2310.10012 [6] : Circumventing Concept Erasure Methods For Text-to-Image Generative Models, https://arxiv.org/abs/2308.01508 Technical Quality: 2 Clarity: 3 Questions for Authors: Q1: Can you provide the specific values of $[t_s, t_e]$ obtained through assess-with-exclusion? 
The current paper includes the method but lacks actual causal tracing results or precise values for each concept. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The author describes the limitations caused by discriminator. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below we address your questions and concerns. Please feel free to post additional comments if you have further questions. **For weakness 1:** We emphasize the necessity of the discriminator by illustrating the disadvantages of existing inference-time refusal methods. Taking ProtoRe as an example: ProtoRe uses CLIP to cluster unsafe features, which is efficient but not scalable. As the number of unsafe concepts increases, the absence of a discriminator necessitates editing the representations of multiple unsafe concepts simultaneously, even during safe content generation, which significantly degrades image quality. The experimental results presented in Table 3 and Figure 3 demonstrate that the effectiveness of simultaneous representation editing for multiple concepts is limited when conducted without the assistance of a discriminator. To solve this problem, we introduce a discriminator to perform representation editing on only a single unsafe concept at a time, ensuring stable performance. Compared with unlearning, CRE has two outstanding advantages: 1. It does not require full fine-tuning of the generative model; 2. It can flexibly add and delete the types of unsafe concepts currently supervised. **For weakness 2:** Thanks for your suggestion. We report the COCO 30k dataset FID of the model after introducing CRE for cassette player and Mickey Mouse (see Figure 2 in the anonymized PDF): https://anonymous.4open.science/r/Exp-E7E7/ If an error is displayed online, please download the PDF file. | Model | Kosmos-G | Kosmos-G w. CRE (cassette player) | Kosmos-G w. CRE (Mickey Mouse) | | ---- | ---- | ---- | ---- | | FID | 10.99 | 13.83 | 11.34 | It is important to note that CRE is only activated when the discriminator detects that the input prompt contains either the concept cassette player or Mickey Mouse. 
CRE does not alter the reasoning process for regular users, and thus does not impact the quality of image generation for them. We compare with CA and UCE. Experimental results of CA are shown in Figure 3 in the anonymized PDF. | ImageNet Category | Cassette Player | Chain Saw | Church | Gas Pump | Tench | Garbage Truck | English Springer | Golf Ball | Parachute | French Horn | Avg | | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | | UCE | 0.0 | 0.0 | 8.4 | 0.0 | 0.0 | 14.8 | 0.2 | 0.8 | 1.4 | 0.0 | 2.6 | | CRE | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | We conduct experiments on the NSFW Inappropriate Image Prompts (I2P) benchmark dataset, which contains 4703 toxic prompts, each assigned to at least one of the following categories: hate, harassment, violence, self-harm, sexual, shocking, illegal activity. See Figure 1 in the anonymized PDF for examples. Numerical results are as follows: | I2P Category | Hate | Harassment | Violence | Self-harm | Sexual | Shocking | Illegal activity | Avg | | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | | SLD | 0.2 | 0.17 | 0.23 | 0.16 | 0.14 | 0.30 | 0.14 | 0.19 | | ProtoRe | 0.1 | 0.07 | 0.09 | 0.09 | 0.08 | 0.1 | 0.11 | 0.09 | | CRE | 0.04 | 0.07 | 0.07 | 0.06 | 0.07 | 0.06 | 0.04 | 0.06 | Due to time limitations, the defense experiments against adversarial prompt attacks, such as Ring-A-Bell and concept inversion, will be included in a later version of the paper. **For weakness 3:** Compared with direct representation editing (ActAdd and ProtoRe), CRE makes improvements in two aspects. First, we introduced the discriminator to solve the scalability problem (as described above). Second, ProtoRe performs representation editing throughout the diffusion process, causing unnecessary overhead. 
Based on an understanding of the information generated at different stages of the diffusion process, we edit only in the diffusion steps related to specific unsafe concepts. In most cases, CRE edits in no more than half of the steps. **For question Q1:** We show the editing intervals corresponding to four concepts in Figure 4 in the anonymized PDF. --- Rebuttal Comment 1.1: Comment: According to the Author's Guidelines (sent by email), > 4. All the texts you post (rebuttal, discussion and PDF) should not contain any links to external pages. This incident will be redirected to the SAC. --- Reply to Comment 1.1.1: Comment: We apologize for any inconvenience caused. Our aim is to address the reviewer's concerns as thoroughly as possible within the constraints of a double-blind review. The anonymous link provided contains only an anonymized PDF, ensuring no personal information is disclosed. The images included in the PDF are supplementary experiments intended to address the reviewer's concerns. We regret any adverse effects this may have caused. --- Rebuttal 2: Comment: Reviewers, please refrain from opening the link. I will inform you of any updates. In the meantime, feel free to continue the discussion as usual.
Summary: This paper proposes a framework called Causal Representation Editing (CRE) to address the ethical and copyright concerns in vision-language-to-image (VL2I) diffusion models. CRE enhances safe content generation by intervening at diffusion timesteps linked to unsafe concepts, effectively removing harmful content while preserving quality. The approach is more effective, precise, and scalable than existing methods. CRE also offers a solution for complex scenarios, providing insights into managing harmful content in diffusion-based models. Comprehensive evaluations demonstrate CRE's superiority in various benchmarks. Strengths: 1. Innovative Causal Representation Editing Framework: The paper introduces a novel framework called Causal Representation Editing (CRE) that effectively extends representation editing techniques from language models to diffusion-based generative models. This framework enhances the efficiency and flexibility of safe content generation, providing a new approach to addressing ethical and copyright concerns in vision-language-to-image (VL2I) models. 2. Comprehensive Handling of Unsafe Concepts: CRE demonstrates superior effectiveness, precision, and scalability compared to existing methods. It can handle complex scenarios, including incomplete or blurred representations of unsafe concepts, ensuring that harmful content is precisely removed while maintaining acceptable content quality. 3. Detailed Experimental Validation: The paper provides extensive evaluations and experiments to validate the effectiveness of the proposed method. The results show that CRE surpasses existing methods in various benchmarks, highlighting its potential for managing harmful content generation in diffusion-based models. Weaknesses: 1. Dependency on Discriminator Accuracy: The effectiveness of CRE is heavily reliant on the accuracy of the unsafe concept discriminator. 
If the discriminator fails to accurately identify unsafe concepts, CRE might incorrectly edit safe content, leading to unnecessary modifications and potentially impacting the quality of the generated images. 2. Additional Inference Overhead: Compared to safe generation methods that use fine-tuned diffusion models, CRE introduces additional inference overhead. This can lead to increased computation time and resource usage, which might be a significant drawback in practical applications where efficiency is crucial. 3. Limited Applicability to Complex Unsafe Concepts: While CRE is effective for well-defined unsafe concepts, its performance might degrade when dealing with more complex or nuanced unsafe concepts that are difficult to categorize or describe. This limitation restricts its applicability in real-world scenarios where unsafe content is not easily defined. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you explain how your method handles cases where the discriminator accuracy is low, and CRE might perform representation editing even for safe prompts? Are there any mechanisms to mitigate this issue? 2. The paper mentions that the additional overhead introduced by representation editing is within a tolerable range. Could you provide more details on how this overhead might scale with larger datasets or more complex models? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below we address your questions and concerns. Please feel free to post additional comments if you have further questions. **For Weakness 1 and Question 1:** As the reviewer mentioned, the discriminator may not always make accurate judgments. To assess the impact of our CRE on safe generation when the discriminator fails, we conducted an experiment using ImageNet, as shown in Table 1. Specifically, we examined the effect of consistently applying representation editing to other safe categories. For instance, we introduced CRE for the category "cassette player" and then measured the average image generation accuracy for the remaining nine categories. The results are as follows: | Model | Kosmos-G | Kosmos-G w. ProtoRe (cassette player) | Kosmos-G w. CRE (cassette player) | | ---- | ---- | ---- | ---- | | Avg Acc | 40.44 | 24.69 | 33.84 | When the discriminator incorrectly classifies a safe concept as unsafe, the generation of safe content is only minimally impacted. Additionally, we report the FID score on the COCO 30k dataset after applying CRE to "cassette player" and "Mickey Mouse". | Model | Kosmos-G | Kosmos-G w. CRE (cassette player) | Kosmos-G w. CRE (Mickey Mouse) | | ---- | ---- | ---- | ---- | | FID | 10.99 | 13.83 | 11.34 | While occasional errors by the discriminator do not significantly affect safe content generation, we emphasize its importance for scalability. As the number of unsafe concepts increases, the absence of a discriminator necessitates editing the representations of multiple unsafe concepts simultaneously, even during safe content generation, which significantly degrades image quality. The experimental results presented in Table 3 and Figure 3 demonstrate that the effectiveness of simultaneous representation editing for multiple concepts is limited when conducted without the assistance of a discriminator. 
**For Weakness 2 and Question 2:** As demonstrated, increasing the number of unsafe concepts gradually decreases the accuracy of safe content generation. Therefore, the discriminator effectively prevents image quality loss by avoiding simultaneous editing of multiple concepts. We shift the cost of managing additional unsafe concepts from simultaneous editing during inference to the pre-training phase of the discriminator. The inference cost of CRE remains comparable to that of the standard model. We report the comparison of inference time on generating 100 images as follows: | Kosmos-G | SLD | ProtoRe | CRE | | ---- | ---- | ---- | ---- | | 226 s | 228 s | 257 s | 246 s | It is important to note that CRE can be easily extended to larger models, even as the model structure becomes more complex and the number of parameters increases. This is because CRE only edits the intermediate outputs of the cross-attention layer in the diffusion model, and the parameters in this layer account for just **5.11%** of the total model parameters. **For Weakness 3:** The effectiveness of representation editing depends on aligning the encoding space of unsafe concepts with the feature space of the generative model, which can be achieved using a general encoder like CLIP. However, there is a limitation: if unsafe concepts are not clearly defined, they cannot be correctly encoded by the encoder. We will discuss this limitation in the paper.
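To illustrate the design point in this rebuttal that only the cross-attention layer's output is touched, here is a minimal, hypothetical wrapper (the function names are ours; the rebuttal does not show how CRE actually hooks into the diffusion model):

```python
def wrap_cross_attention(cross_attn, edit_fn, edit_timesteps):
    """Wrap a cross-attention call so its output is edited only at the
    selected diffusion timesteps; every other layer runs unmodified."""
    def wrapped(hidden, context, t):
        out = cross_attn(hidden, context)
        # apply the representation edit only inside the causal interval
        return edit_fn(out) if t in edit_timesteps else out
    return wrapped
```

Because the edit touches only this one layer's output, changing the set of supervised unsafe concepts amounts to swapping `edit_fn`, with no retraining of the generator itself.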
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Kaleido Diffusion: Improving Conditional Diffusion Models with Autoregressive Latent Modeling
Accept (poster)
Summary: Diffusion models generate high-quality images from text but often lack diversity, especially with high classifier-free guidance. Kaleido addresses this by using autoregressive latent priors, which generate diverse latent variables from captions. It enriches input conditions, resulting in more diverse outputs while maintaining quality. Experiments confirm Kaleido's effectiveness in increasing image diversity and adherence to guidance. Strengths: 1. Since samples tend to converge towards the direction indicated by the condition under high CFG settings, the approach of first generating abstractions to expand the diversity of conditions and then proceeding with generation seems very reasonable. Moreover, these abstractions are controllable, enhancing interpretability and customization possibilities, which is also favorable. 2. The qualitative results also appear promising. 3. The approach of constructing a fine-grained prior via autoregression is novel to my knowledge. Weaknesses: 1. The formulation in Section 3.1 seems somewhat unintuitive. From my perspective, both text descriptions and the additional autoregressive priors you construct are forms of condition signals for diffusion models. Therefore, I do not see a compelling reason to modify or complicate the original classifier-free guidance formulation. Why not simply regard your method as an extension of the condition signal for classifier-free guidance, while still adhering to the existing CFG formulation? 2. The quantitative results are somewhat limited, with MDM being the only baseline. More comparisons with state-of-the-art diffusion models are encouraged. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the motivation behind introducing visual tokens in the autoregressive prior? From the overall narrative of your paper, it seems that the autoregressive prior is intended to provide more customized control for users. 
However, visual tokens are essentially uninterpretable, making them unusable for user control. 2. The generation process involves an autoregressive process at the beginning. How does this process affect the efficiency of your method? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: More results on the efficiency and quantitative performance of your method are encouraged. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **W1: Explanation of the formula in Sec. 3.1** Thank you for your valuable feedback. - In our practical implementation, we adhere to the existing classifier-free guidance (CFG) formulation. Both text descriptions and our introduced autoregressive priors serve as conditioning signals for the diffusion models, so our sampling method can indeed be seen as an extension of the conditioning signal in CFG, specifically through the incorporation of autoregressive priors. - The formulation presented in Section 3.1 is designed to elucidate how these autoregressive priors effectively address diversity issues when operating under high CFG settings. It is therefore a mathematical motivation rather than an actual implementation. We will ensure these points are more clearly articulated in the revised version. ## **W2: More quantitative results and comparison with SOTA diffusion models / L1: More quantitative results** Thank you for your feedback! - Our proposed framework integrates an autoregressive prior with a diffusion model to enhance image generation. Theoretically, this approach is compatible with a variety of backbone diffusion models. In this paper, we focused on comparing our model with the MDM baseline in pixel space to achieve a fair comparison and clearly demonstrate the specific improvements our framework offers. As part of future experiments, we will also apply our approach to other model types, such as latent diffusion and flow matching. - **Please refer to the general response to all reviewers for detailed results and a discussion** of the comparison between our method and CADS, a state-of-the-art, training-free approach that enables diverse generation from diffusion models. 
## **Q1: Motivation Behind Introducing Visual Tokens** - The primary motivation for incorporating the autoregressive prior (i.e., latent tokens) is to explicitly model the mode selection distribution $p(z \mid c)$, enabling the generation of diverse image samples from the same condition, even under high CFG settings. This approach also introduces an explainable and editable mechanism into the image generation process. Our experiments demonstrate that visual tokens (vokens) complement text tokens and are particularly effective at capturing visual details difficult to convey through text, such as artistic style, as shown in Fig. 6 of the main paper. - We would also like to clarify that vokens are **interpretable** and **can be manipulated by users for image editing**. The training dataset of visual tokens is constructed by encoding the images into discrete image tokens using SEED [1], a VQ-VAE-based image tokenizer [2] (L748-756). Therefore, we can interpret the generated visual tokens by decoding them back into images using the corresponding image de-tokenizer (i.e., decoder). User control over the image editing process can be achieved by replacing the vokens with different visual tokens. In Fig. R.6 of our rebuttal PDF, we show that by replacing the visual tokens, users can alter the style and characteristics of the re-generated image. [1] Planting a Seed of Vision in Large Language Models [2] Neural Discrete Representation Learning ## **Q2: Model Efficiency / L1: Results on Efficiency** AR sampling is performed only once before the diffusion steps, making its running cost negligible. - Specifically, in text-to-image settings for generating 256x256 images with classifier-free guidance (batch-size=32), the AR part takes **6 seconds** while the MDM part takes **110 seconds** on a single H100 GPU when using DDPM with 250 steps, which dominates the majority of the time. 
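The sampling order described in this rebuttal (one AR pass for the latent tokens, then CFG-guided denoising conditioned on both the caption and the sampled latents) can be sketched as follows. This is a simplified illustration with a placeholder update rule and hypothetical function names, not the paper's DDPM implementation:

```python
def kaleido_sample(ar_prior, eps_model, x_init, c, w, timesteps):
    """Sketch: sample latent tokens z once, then run classifier-free-guided
    denoising conditioned on (c, z). The x-update is a placeholder; a real
    sampler would follow the DDPM noise schedule."""
    z = ar_prior(c)  # AR prior runs only once, before any diffusion step
    x = x_init
    for t in timesteps:
        eps_cond = eps_model(x, t, c, z)          # conditional prediction
        eps_uncond = eps_model(x, t, None, None)  # unconditional prediction
        # standard CFG combination with guidance weight w
        eps = eps_uncond + w * (eps_cond - eps_uncond)
        x = x - eps  # placeholder denoising step
    return x
```

The structure makes the cost claim concrete: `ar_prior` is called once, while `eps_model` is called twice per diffusion step, so the AR overhead is negligible relative to the 250-step denoising loop.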
--- Rebuttal 2: Title: Discussion Period Comment: Dear Reviewer, As the discussion period deadline nears, we would greatly appreciate it if you could review our rebuttal and share any further feedback. If there are still concerns, we would welcome a list of the specific changes you would need to reconsider your rating. Thank you for your time and consideration. Best regards, --- Rebuttal Comment 2.1: Comment: Thanks for your clarification. I would like to keep my positive rating. --- Reply to Comment 2.1.1: Comment: Thank you for your valuable feedback! We hope our responses have clarified your concerns!
Summary: This paper introduced Kaleido Diffusion, which leverages an autoregressive model to first model the latent mode and then generate latents based on the sampled mode. The proposed method is reasonable. The authors explain the insight from a classifier-free guidance perspective. Several experimental results support the authors' claim. In addition, a paper with the same contents was published as a workshop paper at ICML 2024. Strengths: - This paper investigates how to improve the diversity of generated images with additional mode controls, which is interesting. - The writing is clear and the mathematical explanation seems reasonable. Weaknesses: - The quantitative results are very limited (only Figure 5). - What is the context extractor (MLLM) used in the experiments? Although the authors claimed the mode selection is the major contribution of this work, the additionally introduced pseudo labels perhaps also contribute to the performance improvements. - I am not convinced that the proposed approach is closely connected to the CFG explanation. Several previous studies, e.g., ControlNet (ICCV23) and MaskComp (ICML24), have proven that dense controls will improve image quality. The proposed method is more like distilling knowledge from the pre-trained MLLM to obtain more detailed semantic information about the original dataset. - It would be better to discuss more implementation details to enhance reproducibility. Technical Quality: 3 Clarity: 3 Questions for Authors: - The dataset section claimed the usage of ImageNet and CC12M. However, the results on ImageNet are missing in the entire paper. - Which dataset was used for the results reported in Fig. 5? - Can you quantitatively measure the diversity? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **W2: Use of MLLM/context extractor; Introduction of pseudo labels** The context (latent) extractor is employed to extract different types of abstract latents given the condition-image pair (Sec. 3.2). In practice, for different types of latents, we utilize different methods as the context (latent) extractors. The construction of the training dataset for these latent tokens is explained in Appendix A. We appreciate the reviewer's concern regarding the potential impact of pseudo labels on our results. However, it is crucial to clarify that the improvements in our work are not solely due to the introduction of pseudo labels. Simply introducing pseudo labels, without modeling them as latents, would not alone lead to the observed improvements in image diversity. - To illustrate this point, we have conducted additional training with the MDM (baseline) model, using the generated synthetic textual description (i.e., pseudo tokens) as the condition for training. In Fig. R.3 of the rebuttal PDF, we demonstrate that merely using pseudo labels in this manner does not enhance the diversity, underscoring that the effectiveness of pseudo labels depends significantly on their thoughtful integration into the modeling process. For both models, all images are generated using DDPM with 250 steps and a CFG of 5. ---- ## **W3: The CFG explanation & knowledge distillation** We appreciate the reviewer's reference to prior studies, such as ControlNet [1] and MaskComp [2]. While these contributions are significant, they are not designed to improve image diversity given the same input condition $c$. - In ControlNet, the text condition and dense control (e.g., canny edge) together form the user input $c$. Although using different additional controls (considered as different modes $z'$) can produce diverse outputs, this approach differs from our focus. Our objective is to enhance diversity without requiring additional user-provided information. 
- Similarly, MaskComp [2] addresses the object completion task through iterative generation and mask segmentation but does not prioritize diverse generation of partial masks. Our design is closely connected to the CFG explanation: - In a standard diffusion model, increasing CFG sharpens the conditional distribution $p_\theta(c \mid x)$, leading to a reduction in diversity. To address this, we introduce an additional variable $z$ representing the various generation “modes”. We propose explicitly modeling “mode selection” $p(z \mid c)$ before applying CFG in diffusion steps, ensuring that the mode distribution is not distorted by guidance (see Equation 8). - We propose using an AR model to learn mode selection $p(z \mid c)$, leveraging a synthetic condition-latent pair $(c, z)$ dataset. This can be viewed as **a form of knowledge distillation from pre-trained models**. However, we kindly ask the reviewer to consider the broader design and intent of our entire framework. Introducing the abstract latent variable $z$ strategically enhances the utilization of distilled knowledge. Without the latent modeling and sampling in our proposed framework, merely distilling from pre-trained models would not alone lead to the observed improvements in diversity. Our work provides a theoretical explanation from the CFG perspective on how explicitly modeling "mode selection" helps mitigate the issue of mode collapse under high CFG (Sec. 3.1). To address concerns about the reliance on distilled knowledge, we have included an alternative approach that constructs latent tokens without relying on additional knowledge. - Specifically, we train a model using **color clusters** as latent tokens. For each color channel (R, G, and B) within the range of 0-255, we equally segment it into eight clusters, resulting in a total of $8 \times 8 \times 8 = 512$ color clusters. Given an image, we resize it to 4x4 pixels and assign a color cluster ID to each pixel based on its RGB value. 
The image is then encoded into a sequence of color cluster IDs (e.g., "$C_1$#$C_2$#...#$C_{16}$"), with each $C_i$ representing a color cluster ID. This sequence serves as the condition for training our Kaleido diffusion. - In Fig.R.4 of our rebuttal PDF, we showcase images generated using color clusters as latent tokens on ImageNet. Our results demonstrate that, compared to the baseline MDM, our Kaleido diffusion can generate much more diverse images with latent tokens derived purely from color clustering. This highlights that Kaleido diffusion's capability to generate diverse images is **independent of distilled external knowledge**, confirming that our approach can produce varied images without the aid of any other pretrained models. [1] Adding Conditional Control to Text-to-Image Diffusion Models [2] Completing Visual Objects via Bridging Generation and Segmentation ---- ## **W1: Quantitative Results / Q3: Quantitative measure of diversity** (**See general response**) ---- ## **W4: Implementation Details** Thank you for your valuable feedback. To enhance the reproducibility of our work, we will provide more implementation details in the revised version of the paper, including the model parameters, training/inference hyperparameters, and any additional setup information necessary to replicate our results. ---- ## **Q1: Missing results on ImageNet?** The results on ImageNet can be found in Fig. 5, the top row of Fig. 7, and Figs. 10 and 11 in the Appendix. We will clarify in the revised version that these figures illustrate the performance on ImageNet. ---- ## **Q2: The dataset used in Fig. 5** The dataset utilized for the results presented in Fig. 5 is ImageNet. We will ensure that this information is explicitly stated in the revised version. --- Rebuttal 2: Title: Dual submission policy Comment: We would like to direct the reviewer’s attention to the dual submission policy outlined for NeurIPS [1].
According to this policy, “Papers previously presented at workshops are permitted, so long as they did not appear in a conference proceedings (e.g., CVPRW proceedings), a journal or a book.” Presentations at ICML workshops are not considered part of the archival proceedings of the ICML conference. [1] https://neurips.cc/Conferences/2024/CallForPapers --- Rebuttal Comment 2.1: Title: Post-rebuttal response Comment: I thank the authors for providing their rebuttal. Most of my concerns are addressed and I will increase my rating to 6. --- Reply to Comment 2.1.1: Title: Thank you! Comment: Thank you, and we are glad our rebuttal addressed your concerns. Please let us know if anything remains unclear. Best regards,
Summary: This paper improves the diversity of diffusion generation by incorporating autoregressive latent priors. It leverages an autoregressive model to generate specific discrete latent features, and then concatenates them with the original extracted text features to serve as the condition of the diffusion model. Experiments show that the method can achieve high quality and diversity even with high CFG. Strengths: The investigated problem is interesting and critical. By incorporating the intermediate "mode" representation, the method can apply CFG after the "mode" is sampled, which alleviates the "mode collapse" phenomenon. The authors introduce four specific modes which can also support fine-grained editing. Weaknesses: (1) The writing lacks training and inference details. During training, are all the modes used in each step, or are they randomly selected? For the visual tokens, are the latent features from SEED used directly, or are the sequence IDs re-embedded? During inference, the autoregressive model is responsible for generating the latent modes, if I understand correctly. Then how are different latent modes generated and controlled, especially in combination? (2) Evaluation is poor. In the experiment section, only a figure is provided, without numerical results. The main experiment setting is unclear (class-conditioned or text-conditioned). The construction of the toy example is also not clearly stated. (3) Comparison with related works is not sufficient, making the position of this work unclear. There are related works also trying to improve the generation diversity of diffusion models, such as [a]. [a] CADS: Unleashing the diversity of diffusion models through condition-annealed sampling. Technical Quality: 2 Clarity: 4 Questions for Authors: I hope the authors can answer my questions in the Weakness section point by point. I am willing to raise my score if my concerns are well addressed.
Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: See Weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **W1: Training and Inference details** We appreciate your inquiry into the specifics of our training and inference methodologies. - During training, we investigate both isolated and combined uses of different latent tokens, including text, bounding boxes (bbox), blobs, and vokens, to highlight the unique contributions of each token type. Specifically, we train separate models focusing on individual latent types and a combined model that integrates text, bbox, and voken tokens for text-to-image generation. - Regarding the visual tokens, the latent tokens are formulated as a sequence of discrete image token IDs ("$I_1$#$I_2$#...#$I_{32}$"), where each $I_i$ denotes an image token ID (L754-756). Our autoregressive (AR) model re-embeds these image token IDs, resizing its vocabulary to accommodate the special image tokens (L203-204). The original SEED features are not used in our formulation. - In the combined setting, we include all latent tokens in a shared vocabulary and train them jointly. The AR model predicts **text | bbox | voken** sequentially, allowing later latents to be controlled by earlier ones. We observed that the "text" token first expands the semantic aspects (e.g., objects, behaviors) of the generation, the "bbox" specifically controls the spatial allocation of described objects, and the visual tokens control the global styles of the image. We plan to explore other combinations in future work. We will provide more detailed descriptions of the training and inference details in the revised manuscript to eliminate any ambiguity. ## **W2: Quantitative results and Toy examples** - The main experiment setting for the quantitative experiments is class-conditional generation on ImageNet 256x256, which aligns better with existing works. We also explored training text-to-image models with our methods with mainly qualitative comparisons. 
- In response to the concerns regarding quantitative evaluation, we expand our quantitative evaluation to include metrics like the Mean Similarity Score (MSS) and Vendi score, which are used for measuring image diversity following CADS. (**See general response**) - Regarding the toy example, we construct the toy dataset with two primary classes, each comprising two subclasses with a predefined weight (30% of samples in the first subclass and 70% in the second subclass). Each subclass is sampled from a Gaussian distribution. We train two models for comparison: a standard conditional diffusion model that uses the major class ID as conditions, and a latent-augmented conditional diffusion model that takes both the major class ID and subclass ID as conditions, with the subclass ID serving as latent priors. Both models are trained with classifier-free guidance. We design this toy experiment to show the benefit of latent priors for improving diversity under high guidance. We will include these elaborated details and the results from these models in the revised version of our paper. ## **W3: Comparison with related works** Thank you for suggesting a comparison of our work with relevant methodologies like CADS. - We have now included a comparison with CADS in our study for both class-to-image and text-to-image settings. - Our results demonstrate that both Kaleido and CADS can effectively enhance the generation diversity of diffusion models, particularly for class-to-image generation tasks like ImageNet, in a relatively orthogonal manner. Additionally, we show that the improvements from Kaleido and CADS can be complementary. - **Please refer to the general response to all reviewers for detailed results and discussion.** --- Rebuttal 2: Title: Discussion Periods Comment: Dear Reviewer, As the discussion period deadline nears, we would greatly appreciate it if you could review our rebuttal and share any further feedback. 
If there are still concerns, we would greatly appreciate a list of specific changes you would need to reconsider your rating. Thank you for your time and consideration. Best regards, --- Rebuttal 3: Comment: Thanks for the clarification and new results. I would like to keep my positive rating. --- Rebuttal Comment 3.1: Comment: Thank you for your valuable feedback! We hope our responses have clarified your concerns, and we would be grateful if you would consider raising your score to reflect this. In any case, thank you again!
Summary: In this paper, the authors propose a principled pipeline called Kaleido Diffusion for text-to-image generation with better mode coverage and diversity. The main intuition is that conventional DMs require a large CFG to place samples at high-likelihood modes, which constrains diversity by generating only those limited modes. To solve this problem, the authors propose to first use AR to capture the distribution of such latent tokens, then use a diffusion model to take them as extra conditions for generation. Experiments with both quantitative numerical results and qualitative visualizations are used to evaluate the effectiveness of this work. Strengths: 1) The intuition of this work makes sense to me. The excessively large CFG would limit generated samples to certain modes, where using an extra model to capture such text-irrelevant information is a good practice. 2) The visual quality and especially the teaser of this work can well demonstrate the effectiveness of this paper. 3) The experiments are thorough to me. Weaknesses: 1) I still don't fully comprehend why AR was chosen as the first-stage distribution learner. One reason I can guess is that the CFG in an AR model only modulates the predicted logits, so that we can still sample across multiple reasonable modes for the next-stage DM through the temperature parameter or top-k/top-p sampling. However, I'm still a little bit confused by this part: what if we use a diffusion model w/o CFG sampling to learn such an intermediate distribution? Is the model architecture design choice mainly due to the fact that the authors use detailed textual descriptions as the latent token, so that it's natural to choose AR? If so, then this is a little bit ad-hoc to me. 2) Choosing these four specific types of latent tokens is also kind of ad-hoc to me. For example, somebody may also say the excessive CFG could result in similar artistic styles in generation.
In this way, we have to additionally extract the style token as ground truth, then laboriously train the whole pipeline again to enable it. Another drawback of such a design is that the training cost is large and it's hard to scale up: every time we want to solve the mode collapse of new categories, we have to first extract/tokenize that new aspect, then jointly train the diffusion model and AR part, which is too ad-hoc and hard to scale up to me. 3) I hope the authors could also discuss the diffusion decoder idea and the difference between that and this work. Specifically, AR + diffusion decoder would similarly take the latent tokens predicted by AR and feed them into the DM decoder as conditions. The major difference is that their setting doesn't need the ground truth for those latent tokens, which means the latent tokens are implicitly learned via end2end training. My question is what's the benefit of Kaleido compared to their pipeline, and won't their pipeline be better at scaling up? Technical Quality: 3 Clarity: 3 Questions for Authors: They are mainly elaborated in the weakness section. Overall I really like the intuition of this work. I'd like to see the authors' responses to my questions. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Title: Request for clarification for W3 "the diffusion decoder idea" Comment: Dear Reviewer, We are currently in the process of drafting our rebuttal response and would greatly appreciate your clarification on a point mentioned in Weakness 3. Specifically, we are seeking clarification on the "diffusion decoder idea" referenced in your feedback. At present, we believe you might be referring to approaches like multimodal large language models (MLLMs) that employ diffusion models as decoders for image generation, similar to the approach used in EMU [1] [2]. Could you please confirm if our understanding is correct? If not, we would be grateful if you could provide a reference to the specific paper or work related to the "diffusion decoder idea" so that we can address this point more accurately in our response. Thank you very much for your assistance! [1] EMU: Generative Pretraining in Multimodality [2] Generative Multimodal Models are In-Context Learners --- Rebuttal Comment 1.1: Comment: Dear Authors: Thanks for letting me know about the unclarified point. Yes, this is exactly what I mean by the "diffusion decoder idea", which in general uses the output from the first-stage model (e.g., AR in your setting) as input to a diffusion model for image/video generation. Hope this can make my concerns clear. Best --- Reply to Comment 1.1.1: Comment: Thank you for your prompt response! We will carefully compare our approaches with theirs in our rebuttal. --- Rebuttal 2: Rebuttal: ## **W1: Reason for choosing AR in modeling mode selection distribution** We appreciate your insights and queries regarding our choice of employing an AR model to model $p_\theta(z | c)$. Our choice is grounded in several pivotal considerations: - **Why discrete?** The modes $z$ that humans can perceive from an image are largely categorical, abstract, semantic, and high-level information. Such abstract semantics are more easily represented in **discrete** symbols.
Moreover, modeling the modes as discrete latents further provides an explainable and editable mechanism for the image generation process. It allows the user to adjust the discrete latent codes before final image production, granting greater flexibility and control over the output. This capability is particularly beneficial in scenarios requiring detailed customization or iterative design processes. We demonstrate the impact of sequential latent editing in Fig. 8. - **Why autoregressive?** Given our objective to model $z$ as abstract discrete tokens, an AR model emerges as the most suitable and convenient method for handling such discrete structures. The inherent design of AR models, which sample one token at a time conditioned on previous tokens, naturally supports the generation of diverse modes. Using a diffusion model w/o CFG to learn $p_\theta(z \mid c)$ could also serve as an alternative for learning intermediate distributions; however, it presents several challenges. - First, employing a diffusion model to model discrete abstract latents is a challenging and ongoing research area. Alternatively, representing these abstract semantics (i.e., modes) with continuous tokens raises fundamental questions about the characteristics and true distribution of these abstract continuous latents. The challenge lies in accurately defining a distribution that authentically captures the complex, abstract semantics underlying the mode using continuous tokens. - Moreover, even if such a distribution could be defined, diffusion models typically demand high CFG to model it effectively, which circles back to our original challenge with high CFG. Nevertheless, exploring the potential of using diffusion models without CFG as the mode-selection learner remains an intriguing avenue for future research. Additionally, we would like to clarify that the AR model does not employ CFG during the sampling of latent tokens.
The integration of $p_\theta(z \mid c)$ in Equation 8 works to push the updating direction towards the sampled modes $z$ at each step. ## **W2: Reason for choosing the four specific types of latent tokens** - We would like to clarify that we employ various types of latent tokens in order to explore the best ways of representing the modes; no single type of latent token serves as a restricted representative of a particular "subset" of modes. It is important to clarify that our goal is **not** to exhaustively cover every conceivable category of diversity for image generation. In our experiments, we choose text, bbox/blob, and vokens because they are useful for showing different controls. - Sometimes, a single representative type of latents like text is sufficient to generate samples that are diverse enough in terms of various aspects; see, for instance, Fig. 12 in the Appendix. Our proposed model is a general tool that can cover most aspects of diversity. - If users seek to create images that focus on the diversity of specific artistic styles, they can manually adjust the discrete latent tokens to reflect desired styles. - Alternatively, like other general text-to-image generation models, users may opt to employ techniques such as LoRA to fine-tune the model to achieve enhanced diversity within specific artistic styles. We will further clarify these points in the revised paper. ## **W3: Difference from AR + diffusion decoder; Scalability** Comparing our work with MLLMs that use diffusion models as decoders, such as EMU [1][2] and MiniGPT5 [3]: - A fundamental difference is that their "autoregressive" generation of latent tokens is based on regression, which is **deterministic** and typically produces similar images from the same input $c$. This deterministic nature in EMU and MiniGPT5 arises from their training objectives.
EMU predicts visual representations $z’$ from the text input $c$, applying an image regression loss with encoded visual embeddings of image $x$ as the ground truth. MiniGPT5 uses an MSE loss to minimize the distance between generated image features $z’$ and the encoded caption feature of text input $c$. - In contrast, our Kaleido diffusion allows for greater variability by explicitly modeling $p_\theta(z \mid c)$ for diverse mode selection. This ability to generate diverse latents distinguishes our work, addressing challenges in generating varied, high-quality images under high CFG. Fig. R.5 in our rebuttal PDF shows that, unlike EMU, which produces nearly identical images for a given $c$, Kaleido generates varied samples from the same text condition, demonstrating superior diversity. - Unlike models that jointly train the encoder and diffusion model decoder, Kaleido uses pretrained discrete encoders, offering flexibility and efficiency by reducing training costs and complexity. Regarding scalability, - the total parameter count for our AR model and diffusion model is approximately 1.5B and 500M, respectively, which enables low computational cost. - Both AR and diffusion can be trained in parallel jointly, with ground-truth latents pre-extracted. The training cost is similar to the standard language model and diffusion model training. [1] EMU: Generative Pretraining in Multimodality [2] Generative Multimodal Models are In-Context Learners [3] MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens --- Rebuttal Comment 2.1: Comment: I thank authors for the detailed reply. Overall I agree with the choice of AR to model the first-stage mode distribution. The reason for choosing those four specific types of latent tokens is mainly for paper demonstration. So this rebuttal has addressed most of my concerns. The only point is that I still think training separate tokenizer + accompanied AR model + diffusion makes the pipeline too complex to me. 
Even though I understand the model size is reasonable w/o need for too much compute, we still have to train the whole set of these three components every time we have a new requirement. But indeed this paper is clearly above the bar of this venue. I hence increase my score and advocate for acceptance of this work. --- Reply to Comment 2.1.1: Comment: Thank you for your valuable feedback and for increasing your ratings. We completely agree that further simplifying the pipeline to enhance scalability is crucial. This will be a key focus in our future work as well. Stay tuned! --- Rebuttal 3: Title: Discussion Periods Comment: Dear Reviewer, As the discussion period deadline nears, we would greatly appreciate it if you could review our rebuttal and share any further feedback. If there are still concerns, we would greatly appreciate a list of specific changes you would need to reconsider your rating. Thank you for your time and consideration. Best regards,
Rebuttal 1: Rebuttal: ## **Quantitative Comparison with CADS:** In response to the request for more quantitative results and comprehensive baseline comparisons, we have conducted additional experiments, specifically comparing our Kaleido diffusion model with CADS [1]. - **Condition Annealed Diffusion Sampler (CADS)** is a general sampling strategy that enhances the diversity of diffusion models by annealing the conditioning signal during inference. - Following CADS, we employ two additional quantitative assessments of diversity: Mean Similarity Score (MSS) and Vendi scores. We use SSCD [2] as the pretrained feature extractor for calculating both MSS (SSCD) and Vendi (SSCD). Additionally, we utilize DiNOv2 [3] as the feature extractor for Vendi (DiNOv2), based on evidence from [4] suggesting that DiNOv2 provides a richer evaluation of generative models. - Given that CADS is a training-free strategy applicable to different model architectures, we integrate CADS with both the baseline model MDM and our Kaleido-MDM. We emphasize that the contribution of CADS is **orthogonal** to our work, and its application is independent and complementary to the core methodologies in our research. ----- ## **Results** We report the results for class-conditional generation on ImageNet 256×256 in Table 1 and 2, and for text-conditional generation on the MSCOCO [5] validation set in Table 3. All models use DDPM sampling with 250 steps. - Table 1 presents a quantitative comparison based on evaluations from 50K samples. Our Kaleido diffusion outperforms the MDM + CADS combination in terms of FID-50K and precision, demonstrating that our method more effectively maintains high image quality while generating diverse samples. Furthermore, when we integrate CADS with our model, we achieve the best FID-50K results. Note that Precision cannot accurately evaluate models with diverse outputs since a model producing high-quality but non-diverse samples could artificially achieve high Precision [1]. 
- In Table 2 and 3, following CADS, we assess the diversity of the generated images using 10K samples. For Table 2, we select 1,000 random classes from ImageNet and generate 10 samples per class. For Table 3, we use 1,000 random text prompts from the MSCOCO validation set and generate 10 samples for each prompt. Our findings indicate that **both our Kaleido model and CADS significantly enhance sample diversity**. Although CADS achieves better performance in diversity, our model maintains superior image quality, as shown in Table 1. - Additionally, **the methodologies used in CADS are complementary to ours, suggesting potential benefits from integrating CADS with our Kaleido model**. In fact, incorporating CADS into our model not only further improves image quality but also improves diversity, achieving the best scores in FID-50K, MSS (SSCD), and Vendi (DiNOv2) in class-conditioned image generation, and best Vendi (DiNOv2) in text-conditioned image generation. - Lastly, our **rebuttal PDF** includes visual comparisons of these models for class- and text-conditioned image generation in Fig.R.1 and 2, respectively. All images are generated using DDPM with 250 steps. Specifically, in Fig.R.2, we observe that MDM + CADS fails to generate cats of diverse breeds from the prompt "a cat sleeping on the bed." In contrast, our Kaleido diffusion model excels, producing images of cats from various breeds with more diverse surrounding environments, showcasing its superior diversity capabilities. This observation contrasts with the trend of diversity scores in Table 2, suggesting that these diversity metrics may not fully capture certain aspects of diversity. 
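As background on the two diversity metrics used above, here is a minimal sketch with generic embeddings — our own illustration, not the authors' evaluation code. The reported numbers use SSCD/DINOv2 features and may follow slightly different conventions (e.g., whether the diagonal of the similarity matrix is included in MSS):

```python
import numpy as np

def mss_and_vendi(features: np.ndarray):
    """features: (n, d) array of per-sample embeddings (e.g., SSCD/DINOv2).

    Returns (MSS, Vendi): MSS is the mean pairwise cosine similarity over
    distinct pairs (lower = more diverse); the Vendi score is the exponential
    of the Shannon entropy of the eigenvalues of the scaled similarity matrix
    (higher = more diverse, equal to n for n mutually orthogonal samples).
    """
    n = features.shape[0]
    # L2-normalize so inner products are cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    K = f @ f.T  # (n, n) cosine-similarity matrix
    # MSS: average off-diagonal similarity.
    mss = (K.sum() - n) / (n * (n - 1))
    # Vendi: exp(-sum lam_i log lam_i) over eigenvalues lam_i of K/n.
    lam = np.linalg.eigvalsh(K / n)
    lam = lam[lam > 1e-12]  # drop numerically-zero eigenvalues
    vendi = np.exp(-(lam * np.log(lam)).sum())
    return mss, vendi
```

Sanity checks: identical samples give MSS ≈ 1 and Vendi ≈ 1; mutually orthogonal samples give MSS ≈ 0 and Vendi ≈ n.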
----- **Table 1: Comparison on 50K samples of ImageNet, CFG=5.0** | Model | FID-50K ↓ | Precision ↑ | Recall ↑ | |----------|----------|----------|----------| | MDM | 15.5 | **0.93** | 0.22 | | MDM + CADS | 10.6 | 0.60 | **0.62** | | Kaleido (ours) | 9.0 | 0.85 | 0.42 | | Kaleido (ours) + CADS | **5.9** | 0.76 | 0.52 | ----- **Table 2: Diversity Comparison on 1K x 10 Samples of ImageNet** | Model | MSS (SSCD) ↓ | Vendi (SSCD) ↑ | Vendi (DiNOv2) ↑ | |------------------|----------|----------|----------| | MDM | 0.21 | 8.42 | 3.04 | | MDM + CADS | **0.12** | **9.28** | 4.72 | | Kaleido (ours) | 0.16 | 8.82 | 3.79 | | Kaleido (ours) + CADS | **0.12** | 9.21 | **4.83** | ----- **Table 3: Diversity Comparison on 1K x 10 Samples of COCO Val** | Model | MSS (SSCD) ↓ | Vendi (SSCD) ↑ | Vendi (DiNOv2) ↑ | |------------------|----------|----------|----------| | MDM | 0.29 | 7.55 | 3.39 | | MDM + CADS | **0.18** | **8.65** | 4.60 | | Kaleido (ours) | 0.20 | 8.52 | 4.59 | | Kaleido (ours) + CADS | 0.19 | 8.61 | **4.75** | ------ ------ [1] CADS: Unleashing the diversity of diffusion models through condition-annealed sampling. [2] A Self-Supervised Descriptor for Image Copy Detection. [3] DINOv2: Learning Robust Visual Features without Supervision. [4] Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models. [5] Microsoft COCO: Common Objects in Context Pdf: /pdf/e94cdb60f26f8613940986acbcb86afebe660c75.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense
Accept (spotlight)
Summary: This paper studies the post-purification robustness of backdoor defenses. The authors show that backdoors purified by existing defenses can be recovered via Retuning Attacks, and they propose the Query-based Reactivation Attack to recover the backdoor. The authors address such a vulnerability by proposing a Path-Aware Minimization defense. Evaluations are performed on four different attacks over CIFAR-10/100 and Tiny-ImageNet across various deep neural networks. Results show that the proposed PAM defense achieves better post-purification robustness. Strengths: 1. It is novel to study post-purification robustness, and this paper reveals an often-neglected vulnerability of DNNs and limitations of existing defenses. 2. The paper progressively demonstrates that models can relearn backdoors after purification via Retuning Attacks, and based on this observation, the authors propose a Query-based attack that is more practical under real-world threat models. 3. The finding that the inadequate deviation of purified models from the backdoored model along the backdoor-connected path is the root cause of poor post-purification robustness is both instructive and insightful. The authors successfully develop an effective defense upon this finding. Weaknesses: 1. My primary concern is that the attacks evaluated in the paper are not SOTA. BadNets, Blended, SSBA and LC are attacks developed years ago. The authors should also provide results on recent attacks such as Sleeper Agent [1] and Adaptive Blended [2]. 2. The authors mentioned there is a trade-off between post-purification robustness and clean accuracy. However, it is unclear how to determine the hyperparameter $\rho$ to achieve the balance. An algorithm for $\rho$ selection would be necessary. 3. The presentation needs to be improved. For example, the order of Fig. 2 and Fig. 3 should be interchanged; it is very hard to see the results in Fig. 5; Algorithm 1 should be placed close to where it is described. 4. Some typos.
For example, line 142: we following -> we follow; in Figure 1, shouldn't P-ASR be R-ASR? [1]. Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch, NeurIPS, 2022 [2]. REVISITING THE ASSUMPTION OF LATENT SEPARABILITY FOR BACKDOOR DEFENSES, ICLR 2023 Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In Table 2, the clean accuracy of all these models are lower than usual, why is that? Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations has been addressed in "Conclusions and Limitations". Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We are grateful to you for your time and effort in reviewing our work, as well as acknowledging our contributions.** ### **Response to Weakness 1** Thanks for your suggestion! As you suggested, we test our method PAM on more backdoor attacks: Adaptive-Patch, Adaptive-Blend, and All-to-All attacks, which insert multiple backdoors into models [1]. The results are shown in the Table of the Global Response. We can observe that our method still achieves satisfying post-purification robustness against them. We also attempted SleeperAgent but found unstable attack performance and low ASR with the public implementations [2, 3]. In this work, we aim to determine whether achieving a low ASR through current purification methods truly signifies the complete removal of inserted backdoor features. To investigate this phenomenon, we select several classic and practical attack methods in the field of backdoor learning. We demonstrate that even under these well-explored attack paradigms, current state-of-the-art defenses still suffer from our Retuning Attack and thus fail to achieve post-purification robustness. We are greatly appreciative of the suggestions put forth by the reviewer. We will add these experiments in the revised version. ### **Response to Weakness 2** Thanks for your instructive comments. We showcase the model performance across various $\rho$ values in the Table below. As $\rho$ rises, there is a slight decrease in clean accuracy alongside a significant enhancement in robustness against RA. Additionally, we can observe that the performance is not sensitive to $\rho$ when it is larger than 0.3. Considering that we only observe the C-Acc (with a val set) in practice and need to achieve a good trade-off between these two metrics, we follow FST [4] and choose $\rho$ to ensure that the C-Acc doesn’t fall below a predefined threshold like 92%.
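The threshold-based selection rule described above can be sketched as follows — a hypothetical illustration, where `purify_with_pam` and `clean_accuracy` are placeholder names for the actual PAM purification and validation-set evaluation routines, not functions from the paper's code:

```python
def select_rho(model, val_set, purify_with_pam, clean_accuracy,
               candidates=(0.1, 0.3, 0.5, 0.7, 0.9), acc_floor=0.92):
    """Pick the largest rho whose purified model keeps C-Acc above the floor.

    Larger rho pushes the purified model further from the backdoored one
    (better post-purification robustness) at some cost in clean accuracy,
    so we scan from small to large and keep the last admissible value.
    """
    best_rho = None
    for rho in sorted(candidates):
        purified = purify_with_pam(model, val_set, rho=rho)
        if clean_accuracy(purified, val_set) >= acc_floor:
            best_rho = rho
    return best_rho
```

In practice one would evaluate on the small held-out clean validation set only, since the attack success rate is not observable to the defender.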
We demonstrate the performance of PAM with diverse $\rho$ and evaluate the Blended attack on CIFAR-10 with ResNet-18. The O-Robustness metric represents the purification performance of the defense method, and the P-Robustness metric denotes the post robustness after applying RA. |Evaluation Mode|$\rho=0.1$ (C-Acc/ASR)|$\rho=0.3$ (C-Acc/ASR)|$\rho=0.5$ (C-Acc/ASR)|$\rho=0.7$ (C-Acc/ASR)|$\rho=0.9$ (C-Acc/ASR)| |-----------|--------------|--------------|--------------|--------------|--------------| |O-Robustness |94.03/6.33|93.64/2.07|93.34/1.67|92.12/0.50|91.99/1.00| |P-Robustness |93.60/33.29|93.61/10.06|93.38/2.69|92.17/2.62|92.54/0.30| ### **Response to Weakness 3 and 4** We greatly appreciate the reviewer for the invaluable suggestions and for pointing out our typos. We are committed to enhancing the presentation. We will follow your suggestions to adjust the layout and colors, and rectify the typos in our revised version. ### **Response to Question 1** Thanks for your question. In our work, we adopt the ResNet-18 model (the checkpoint pretrained on ImageNet from Torchvision) for CIFAR-100 and Tiny-ImageNet. After backdoor poisoning, the clean accuracies of backdoor models (averaged over attacks) are 78.8% and 73.1%, respectively. These numbers are higher than those from the BackdoorBench paper [1], where the clean accuracies are 70.51% and 57.28%, respectively. The results in our paper are also aligned with the results from [PaperwithCode](https://paperswithcode.com/), where the accuracy on CIFAR-100 is 75% (from ResNet-164 [1], without using any tricks) and the accuracy on Tiny-ImageNet is 74% (from ResNeXt-50 [1], using advanced augmentations). After conducting purification, there are slight decreases in clean accuracy. The clean accuracies of PAM are 75.56% and 68.23% on CIFAR-100 and Tiny-ImageNet, which is comparable to the original BTI and outperforms FST (as exhibited in Appendix Tables 8 and 9).
We reorganize the results from Tables 2, 8, and 9 into the Tables below for a convenient comparison. We observe that our PAM achieves a better trade-off between clean accuracy and backdoor robustness. We will present a better-organized version of the results in our revision. The slight drops across all purification methods may be attributed to our practice of using small clean datasets to adjust backdoored models, a limitation potentially mitigated by utilizing augmentations or distillation with backdoor models as teachers. We leave this for future exploration. **CIFAR-100** |Evaluation Mode|Clean (C-Acc/ASR)|EP (C-Acc/ASR)|SAM (C-Acc/ASR)|FST (C-Acc/ASR)|BTI (C-Acc/ASR)|PAM (C-Acc/ASR)| |-----------|--------------|--------------|--------------|--------------|--------------|--------------| |O-Backdoor|78.83/97.30|78.83/97.30|78.83/97.30|78.83/97.30|78.83/97.30|78.83/97.30| |O-Robustness |79.70/0.04|76.78/0.04|76.38/2.20|72.99/0.73|75.61/2.68|75.56/0.21| |P-Robustness |78.75/0.85|76.37/0.50|76.43/90.52|72.42/84.54|75.69/58.61|75.53/0.95| **Tiny-ImageNet** |Evaluation Mode|Clean (C-Acc/ASR)|EP (C-Acc/ASR)|SAM (C-Acc/ASR)|FST (C-Acc/ASR)|BTI (C-Acc/ASR)|PAM (C-Acc/ASR)| |-----------|--------------|--------------|--------------|--------------|--------------|--------------| |O-Backdoor|73.10/98.68|73.10/98.68|73.10/98.68|73.10/98.68|73.10/98.68|73.10/98.68| |O-Robustness |73.88/0.07|70.75/0.01|70.54/7.05|65.92/1.71|68.70/0.73|68.23/1.56| |P-Robustness|72.86/1.36|70.5/0.88|70.94/87.30|65.22/77.06|68.43/61.22|67.74/7.48| [1]. BackdoorBench: A Comprehensive Benchmark of Backdoor Learning, NeurIPS 2022 [2]. https://github.com/vtu81/backdoor-toolbox [3]. https://github.com/hsouri/Sleeper-Agent [4]. Towards Stable Backdoor Purification through Feature Shift Tuning, NeurIPS 2023. 
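For intuition about the interpolation coefficient $\rho$ examined in the response to Weakness 2 above, here is a minimal, purely illustrative sketch of the interpolated-gradient idea behind a PAM-style update on a one-dimensional toy loss. All names, values, and the toy objective are hypothetical, not the paper's actual objective or models:

```python
# Toy 1-D quadratic standing in for the purification objective; everything
# here is illustrative, not the paper's actual setup.
def grad_loss(w, target):
    return 2.0 * (w - target)  # gradient of (w - target)**2

def pam_step(w, w_backdoor, rho, target, lr=0.1):
    """One hypothetical PAM-style step: evaluate the gradient at a point
    interpolated toward the backdoored weights and apply it to the current
    weights, pushing the solution further off the backdoor-connected path
    than plain gradient descent would."""
    w_interp = w + rho * (w_backdoor - w)  # point on the path toward the backdoor
    return w - lr * grad_loss(w_interp, target)

w, w_backdoor, target = 0.0, 1.0, -1.0
for _ in range(300):
    w = pam_step(w, w_backdoor, rho=0.5, target=target)
# Fixed point: w + rho*(w_backdoor - w) = target, i.e. w = 2*target - w_backdoor = -3,
# overshooting the plain optimum (target = -1) away from the backdoored weights;
# larger rho pushes the solution further away.
print(round(w, 3))
```

In this toy, raising $\rho$ moves the final solution further from the backdoored weights, matching the qualitative trend in the $\rho$ table above where larger $\rho$ trades a little clean accuracy for much better post-purification robustness.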
--- Rebuttal Comment 1.1: Title: Thanks for your recognition of our work Comment: We sincerely appreciate your constructive feedback throughout the review process and will incorporate your suggestions as we revise the paper. We are delighted that our responses have addressed your concerns. Thanks for your recognition of our work! The Authors.
Summary: This paper investigates the effectiveness of current purification-based backdoor defenses and tries to uncover whether purified DNNs are truly free from backdoor vulnerabilities. The authors identify the “post-purification robustness” of DNNs and propose the Retuning Attack (RA) and the Query-based Reactivation Attack (QRA), respectively, to assess the susceptibility of purified DNNs to backdoor reactivation. Additionally, the paper proposes Path-Aware Minimization (PAM) to improve post-purification robustness. Strengths: The work makes a contribution to backdoor robustness evaluation by shifting the focus from merely achieving low ASR to evaluating the post-purification robustness of backdoor defenses. The paper introduces a pipeline of methods: RA, QRA, and PAM, for the assessment of this vulnerability. Weaknesses: The proposed notion of post-purification robustness relies on the assumption that the purified DNN will encounter further fine-tuning. As such, the entire workflow proposed is not practical in a real-world scenario where the purified model is kept frozen with no further updates. Technical Quality: 2 Clarity: 3 Questions for Authors: Can the authors further justify the proposed threat model’s real-world implications? How can the proposed workflow be enabled in a real-world attack setting? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 4 Limitations: The paper falls short in addressing practical implementation challenges. These limitations suggest that the proposed methods might not be as universally practical as claimed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We are grateful to you for your time and effort in reviewing our work, as well as acknowledging our contributions.** ### **Response to Weakness and Questions**: Thanks for your kind question! Due to the space limitation, we only briefly discuss practical implications for real-world settings in Lines 109-116 of Section 3. We will further explain the threat model as follows. We also mention this in the global response. Large Models such as CLIP, ChatGPT, LLaMa, and Stable Diffusion have become essential bases supporting a wide variety of AI applications. These Large Models (after completing safety tuning) provide powerful pre-trained capabilities that can be fine-tuned for a wide range of specific use cases. In practice, further customization of these models via fine-tuning is often desirable to tailor their performance for particular applications [1]. For open-source models like the LLaMa series and Stable Diffusion, the model providers explicitly encourage further fine-tuning to specialize these models' capabilities for specific applications [2]. For closed-source models like GPT-4 and Claude 3, providers offer APIs that allow users to upload their specific datasets and fine-tune these models accordingly [3,4]. Meanwhile, the pretraining datasets for Large Models have grown to web-scale datasets with billions of samples crawled from the internet [5,6]. At this scale, it is infeasible to manually curate each example, which leaves a viable opportunity for attackers to launch actual poisoning attacks [7]. To ensure the safety of models in practical use, numerous methods have been proposed to purify models before releasing them [8,9,10]. Despite substantial efforts in this area, it is not clear whether, even if a purified model’s initial safety performance is impeccable (nearly 0 ASR), this robustness will still be preserved after further fine-tuning on possibly poisoned data. 
Similar to works [11,12] that studied safety alignment against fine-tuning on harmful prompts, **we take the first attempt to consider this practical issue for backdoor poisoning threats.** Our study finds that backdoors could be very easily reactivated by further tuning on only an extremely small number of poisoned samples (e.g., a single sample for the Blended attack). This reveals that current purification methods cannot truly eliminate poisoning backdoor features learned during pretraining. This will undoubtedly pose a greater threat to the real world. Attackers could potentially bypass “firewalls” from safety purifications. Leveraging the stronger capabilities of Foundation models, they could then generate more threatening and disruptive content (more realistic counterfeit images [13] and more potent malicious code [14]), further harming others’ productive activities. We will add this detailed explanation as a separate section in the revised version. We hope our response addresses your concerns. -------- [1]. https://llama.meta.com/responsible-use-guide/ [2]. https://llama.meta.com/docs/how-to-guides/fine-tuning/ [3]. https://platform.openai.com/docs/guides/fine-tuning [4]. https://www.anthropic.com/news/fine-tune-claude-3-haiku [5]. Laion-5b: An open large-scale dataset for training next generation image-text models, NeurIPS 2022. [6]. Exploring the limits of transfer learning with a unified text-to-text transformer, JMLR 2020. [7]. Poisoning Web-Scale Training Datasets is Practical, IEEE S&P 2024. [8]. Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback, arxiv 2022. [9]. Backdoor Learning: A Survey, arxiv 2022. [10]. https://openai.com/index/openai-safety-update/ [11]. Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! ICLR 2024 [12]. Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models, arxiv 2023. [13]. How to Backdoor Diffusion Models?, CVPR 2023. [14]. 
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, arxiv 2024. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their detailed responses. However, the mention of large-scale datasets and models in this response does not align with the evaluations in the paper (such as using CIFAR and ResNets). I suggest the authors further consider the real-world implications of the proposed notion and include discussions in the paper. I maintain my original rating of "Weak Accept". --- Rebuttal 2: Title: Thanks for your feedback and recognition of our work. Comment: We sincerely value your supportive feedback during the review process along with the acknowledgement of our work. Due to the substantial computational costs associated with large-scale experiments, we initially examine and verify this crucial issue, post-purification robustness, using smaller datasets and models in this study. We will follow your suggestion and incorporate these discussions into both a separate section and the Limitation section in the revised version. Following your suggestions, we will dedicate future work to exploring our proposed method for backdoor safety issues on LLMs [1,2,3]. Our work pioneers the idea that an attacker could bypass existing safety purifications merely by fine-tuning purified models with an extremely small number of poisoned samples. As pointed out by the Reviewer, it brings a new angle for evaluating backdoor robustness, known as post-purification robustness, instead of solely depending on ASR. This is vital since more models are becoming available for users' further fine-tuning after undergoing safety tuning. Our research initially demonstrates this feasibility with small-scale datasets and models. This potential vulnerability emphasizes the necessity for more faithful evaluations and stable defense methods against backdoor threats, to ultimately develop safer and more reliable systems. We once again extend our heartfelt thanks to the Reviewer. 
We are delighted to discuss these points with you and sincerely appreciate that your comments have helped us clarify our contribution and improve the quality of our paper. The Authors. ------- [1]. Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! ICLR 2024 [2]. Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, arxiv 2024. [3]. Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs, arxiv 2024.
Summary: This paper reveals a phenomenon in backdoor defense: the purified backdoor can be reactivated by fast retuning on a few backdoor samples. Building upon this observation, the paper explores both attacks and defense measures for more reliable backdoor research. On the attack side, a Retuning Attack (RA) is proposed and generalized to the black-box setting as a Query-based Reactivation Attack (QRA). On the defense side, a Path-Aware Minimization (PAM) method is proposed to force more deviation from the backdoor-connected path. Experiments verify the effectiveness of PAM compared to exact purification (EP). Strengths: 1. The observed phenomenon that the purified model can more easily restore the trigger is interesting. 2. Both attacks and defense were explored. 3. The defense results look promising. Weaknesses: 1. The restoration of the backdoor was conducted on a poisoned subset, which is not a surprise in this case, as the model will surely relearn the backdoor. The authors should prove the phenomenon via tuning on purely clean training data. 2. The threat models of the two proposed attacks are problematic. 1) RA will require a post-purification poisoning of the defense model, which means that it can poison again after the defense, which does not make sense to me; 2) the QRA attack is very similar to an adversarial attack and requires access to both the purified model and the RA model; why the defender would expose these models (or their APIs) to the attacker is questionable. 3. The authors should clearly define what Post-purification Robustness is. In the current version, it appears the same as standard backdoor robustness, i.e., how to guarantee the PAM-purified model is 100% robust? 4. The authors seem to confuse a backdoor attack with adversarial perturbation in proposing the QRA attack. A backdoor attack does not make changes to the input during the inference stage; otherwise, it becomes an adversarial attack. 
These two attack types assume different capabilities and flexibility of the adversary. 5. The proposed PAM is very much like using a moving average to force larger updates of the weights. PAM requires reverse-engineered backdoor samples $\mathcal{D_r}$, I wonder if one could simply use unlearning to achieve the same effect. 6. The proposed method was not compared with existing defense methods. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Can PAM deal with multi-trigger attacks?[1] [1] Li, Yige, et al. "Multi-Trigger Backdoor Attacks: More Triggers, More Threats." arXiv preprint arXiv:2401.15295 (2024). 2. How does PAM work, compared to applying the same existing defense twice, e.g., ANP or SAM? 3. Can the problem be addressed by simply adjusting the hyperparameters of the Mode Connectivity defense [2]? [2] Zhao, Pu, et al. "Bridging mode connectivity in loss landscapes and adversarial robustness." arXiv preprint arXiv:2005.00060 (2020). Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: 1. Problematic threat model. 2. Limited technical novelty. 3. Missing systematic comparison with existing methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thanks for your time and effort in reviewing our work!** ### **Response to W1**: 1. For "surely relearn the backdoor": Sorry for the possible confusion. First, we emphasize that **the reviewer’s statement “...model will surely relearn the backdoor.” is not correct**. **As emphasized in Lines 169-173 of Section 3.2 and Figure 1**, exact purification (EP) doesn't relearn backdoors, maintaining low ASR after RA. 2. For "tuning on purely clean training data": Thanks for this interesting question. Simply tuning on clean data does not relearn backdoors. Instead, it is widely used as a baseline defense [1,2]. Additionally, it would be interesting to explore whether fine-tuning on carefully selected clean data, based on certain metrics, may lead to relearning backdoors. We will explore it in the future. 3. We also discuss why our work is surprising in the Global Response. ### **Response to W2**: 1. For 1): Please refer to the practical implications of our threat models in the Global Response. 2. For "QRA attack is very similar to an adversarial attack" in 2): Sorry for the possible confusion. In the main submission, **we have specifically emphasized the differences between QRA and adversarial perturbations (ADV) in Lines 212-215 and derived the final objective of QRA, Eq. 2.** As shown in Figure 3, QRA from Eq. 2 only works when added to backdoored examples and applied to purified models. Applying QRA to clean images with purified models or attacking clean models will not succeed. Instead, ADV works in all of these attack scenarios. This suggests our QRA is different from ADV. We also discuss their differences in Weakness 4. 3. For "that requires access both the..." in 2): As mentioned in response to 1), nowadays, after safety tuning, the model providers open-source their models or release APIs to enable further fine-tuning for specific usage. We also demonstrate that QRA can successfully transfer across unknown purification methods in Figure 2. 
This doesn’t need any queries to targeted models, which highlights the practicality of QRA. ### **Response to W3**: 1. For "it appears the same as standard backdoor robustness": As we've discussed throughout the article, post-purification robustness (P-) has a significant difference from standard backdoor robustness (S-). S- directly tests purified models’ ASR on a backdoored testing set. In contrast, P- evaluates purified models’ ASR against the RA used in the paper. Although they both evaluate the ASR, P- takes a further step than S-. It reveals that current defenses with nearly 0 ASR cannot truly eliminate backdoor features, whose ASR quickly recovers after RA (Sec. 3). This emphasizes that, instead of solely depending on S-, we need more faithful and comprehensive evaluations to ensure lasting protection against backdoor attacks. 2. For "guarantee the PAM purified model is 100% robust?": In practice, no current defense method can guarantee it is 100% robust, since none achieves 0 ASR empirically [2,4,5]. PAM is also an empirical defense and doesn’t offer certified guarantees. ### **Response to W4**: Sorry for the possible confusion. First, our QRA is different from ADV and is specific to the inserted backdoor. **We believe the reviewer’s statement about the difference between backdoor and ADV is not accurate**. For a successful backdoor attack, it is necessary to activate a pre-embedded backdoor to conduct an attack [3]. In contrast, ADV does not need to embed a backdoor and can mislead any model. The difference is not about modifying samples during inference, but whether the perturbation exploits a previously inserted backdoor [3]. ### **Response to W5**: Sorry for the possible confusion. We have conducted unlearning with reversed backdoored samples in our paper. All numbers referencing "BTI" pertain to this unlearning. We follow the source code to implement it (BTI-U in the original paper). We give a detailed description of it in Lines 522-525 of the Appendix. 
We could observe that BTI cannot achieve the same effect as PAM. We will clarify this point in the revised version. ### **Response to W6 and Q2**: 1. Response to W6: We think **the reviewer may have missed our main experimental results in Figures 1-4**. We have conducted a thorough comparison between PAM and existing defense methods and observed that PAM outperforms the others in terms of post-purification robustness. 2. Response to Q2: We think the reviewer may misunderstand the process of PAM. Like existing defense methods, PAM is applied **once** to backdoor models. We don’t fully understand Question 2, since it appears there is no difference between applying them once or twice. Inspired by the suggestions, we have added another method for comparison. After obtaining reversed samples, we adopt the SAM method to purify models on them. We observe that this cannot maintain consistent robustness against RA, regaining over 75% ASR on average on CIFAR-10. ### **Response to Q1**: Thanks for this interesting question. Since we cannot find the source code of the referenced work, we utilize the All-to-All attacks [4], which also insert multiple backdoors into models, and present PAM’s performance in the Table of the Global Response. PAM still achieves good post-purification robustness. ### **Response to Q3**: Thanks for this interesting question. The method of MCR cannot achieve the same effect as PAM against RA. As shown in Figure 2 of MCR, we observe that to maintain good clean accuracy, the defender must select low-ASR solutions very close to the backdoored models, which are not robust to RA (see our Section 4.1). [1]. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks, arxiv 2018. [2]. Towards Stable Backdoor Purification through Feature Shift Tuning, NeurIPS 2023. [3]. Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses, arxiv 2021. [4]. Backdoorbench: A comprehensive benchmark of backdoor learning, NeurIPS 2022. [5]. 
Reconstructive neuron pruning for backdoor defense, ICML 2023. --- Rebuttal 2: Title: Thanks for the clarifications Comment: I want to thank the authors for the rebuttal. It has addressed most of my concerns. I have increased my rating accordingly. --- Rebuttal Comment 2.1: Title: Thanks for your feedback Comment: We appreciate your further comment and recognition of our responses. We are delighted to have addressed your concerns. The Authors
Summary: Backdoor attacks are a major threat to Deep Neural Networks (DNNs), as they allow attackers to manipulate model predictions with backdoor triggers. Existing purification methods reduce the Attack Success Rate (ASR) of these models, but it's unclear if they fully eliminate backdoor threats. This study investigates post-purification robustness by employing the Retuning Attack (RA) and finds that current methods are vulnerable, as models quickly relearn backdoor behaviors. To address this, the study proposes the Query-based Reactivation Attack (QRA) and a novel Path-Aware Minimization (PAM) technique. PAM enhances robustness by promoting deviation along backdoor-connected paths with extra model updates. Extensive experiments show PAM significantly improves robustness, maintaining low ASR and good accuracy, providing a new perspective on evaluating and improving backdoor defenses. Strengths: 1. The paper empirically verifies that poisoned nodes typically exhibit large prediction variance under edge dropping, providing an indicator for identifying poisoned nodes​​. 2. The proposed robust training strategy not only has theoretical guarantees but also shows practical effectiveness in defending against various types of backdoor attacks, maintaining clean accuracy while reducing the attack success rate​​. Weaknesses: 1. While the paper focuses on various backdoor attack types, the scope of attack types and defense mechanisms explored could be broadened to cover more diverse scenarios and settings. Technical Quality: 3 Clarity: 3 Questions for Authors: This is a good paper, discussing the details of the proposed defense method. One thing I'm curious about is whether the authors have plans to design an adaptive attack against this defense method, and how it could be achieved? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations have been discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We are grateful to you for your time and effort in reviewing our work, as well as acknowledging our contributions.** ### **Response to Weakness 1**: Thanks for your suggestion! Following your suggestion, we add evaluations on more poisoning attacks. **For more attacks:** We test our PAM on more backdoor attacks, including Adaptive-Patch [1], Adaptive-Blend [1], and the All-to-All attack [2]. Our experiments are shown in the Table of the Global Response. The results show that our method still achieves satisfying post-purification robustness against them. **For more scenarios:** We plan to apply our method to backdoor safety issues on LLMs [4,5,6]. We could first adopt current backdoor-reversing methods to reverse inserted trigger prompts [6]. Then we could utilize our PAM method with reversed trigger prompts to purify backdoored LLMs. Future work will be dedicated to these endeavors. ### **Response to Question 1**: Thanks for your interesting question. We have also considered designing possible adaptive attacks against our PAM method. PAM consists of two parts: 1) getting gradients of the interpolated model for further updates; and 2) data specification for the backdoor-connected path. Next, we discuss possible strategies for these two components, respectively. 1. Making the loss landscape around the poisoned solution smoother: PAM needs the gradients of the interpolated model for its further updates, to obtain a solution that deviates from the backdoored model. If attackers could control the training process and position the backdoored model at a smoothed local minimum (possibly using methods like SAM), they might impede the post-purification performance of PAM. We tried this possible adaptive attack. **We find our PAM still performs robustly against it.** We suspect that the small radius adopted by SAM limits its smoothing performance. 
However, enlarging the radius will also significantly sacrifice clean accuracy. We will explore other possible methods in the future. Nonetheless, such advanced attacker capabilities, which include controlling training procedures, exceed our work's scope and do not align with our experimental setting, since we mainly focus on practical data-poisoning attacks. 2. Making the reversal of backdoored samples harder: PAM needs to reverse backdoored samples to specify the backdoor-connected path. If attackers could make the reversal of backdoored samples fail, they might be able to defeat the PAM defense. In our work, we adopt the most advanced backdoor-reversing method, BTI [3], which can effectively handle current data-poisoning attack methods. ----------- [1]. Revisiting the Assumption of Latent Separability for Backdoor Defenses, ICLR 2023. [2]. Backdoorbench: A comprehensive benchmark of backdoor learning, NeurIPS 2022. [3]. Towards reliable and efficient backdoor trigger inversion via decoupling benign features, ICLR 2024. [4]. Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, arxiv 2024. [5]. Universal jailbreak backdoors from poisoned human feedback, ICLR 2024. [6]. Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs, arxiv 2024. --- Rebuttal Comment 1.1: Title: Seeking Your Valuable Feedback Comment: Dear Reviewer YFJE, We wish to express our gratitude for your dedicated time and insightful comments. We are awaiting your valuable feedback and insights regarding the points we addressed in the rebuttal. Ensuring your satisfaction with our rebuttal is of utmost importance to us. Your response is very helpful in further improving the quality of our work. Sincerely, Authors --- Rebuttal Comment 1.2: Comment: Thank you for the reply. It resolved my concerns. I will keep my rating positive. 
--- Reply to Comment 1.2.1: Title: Thanks for your recognition of our work Comment: We sincerely appreciate your invaluable feedback throughout the review process and will incorporate your suggestions as we revise the paper. We are delighted that our responses have resolved your concerns. Thanks for your support and recognition of our work! The Authors.
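As background for the SAM-style smoothing considered in the adaptive-attack discussion in the rebuttal above: sharpness-aware minimization perturbs the weights by a radius within the ascent direction, then descends using the gradient taken at that perturbed point. A self-contained toy sketch on a hypothetical double-well loss (not the paper's models or training code):

```python
def grad(w0, w1):
    # gradient of a toy double-well loss f(w0, w1) = w0**4 - 2*w0**2 + w1**2,
    # whose minima sit at (w0, w1) = (+/-1, 0)
    return 4 * w0**3 - 4 * w0, 2 * w1

def sam_step(w0, w1, rho=0.05, lr=0.01):
    """One SAM-style update: step to the local worst case within radius rho,
    then descend using the gradient evaluated there, which biases the
    trajectory toward flatter minima."""
    g0, g1 = grad(w0, w1)
    norm = (g0 * g0 + g1 * g1) ** 0.5 + 1e-12
    e0, e1 = rho * g0 / norm, rho * g1 / norm   # normalized ascent perturbation
    p0, p1 = grad(w0 + e0, w1 + e1)             # gradient at perturbed weights
    return w0 - lr * p0, w1 - lr * p1

w0, w1 = 0.5, 0.5
for _ in range(500):
    w0, w1 = sam_step(w0, w1)
# The iterate settles near the minimum at (1, 0).
```

The radius here plays the role discussed in the response: a small radius smooths only a small neighborhood around the minimum, while enlarging it (in the real setting) sacrifices clean accuracy.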
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to all reviewers for their time and efforts in reviewing our work. We will carefully revise our manuscript by adding suggested experiments and more detailed explanations and by fixing the typos. **Here, we provide a global response to questions from reviewers about the practical implications of our threat models and our work's contribution. We also provide evaluation results on more attacks.** ### **The Practical Implications of our threat models for real-world settings**: Large Models such as CLIP, ChatGPT, LLaMa, and Stable Diffusion have become essential bases supporting a wide variety of AI applications. These Large Models (after completing safety tuning) provide powerful pre-trained capabilities that can be fine-tuned for a wide range of specific use cases. In practice, further customization of these models via fine-tuning is often desirable to tailor their performance for particular applications [1]. For open-source models like the LLaMa series and Stable Diffusion, the model providers explicitly encourage further fine-tuning to specialize these models' capabilities for specific applications [2]. For closed-source models like GPT-4 and Claude 3, providers offer APIs that allow users to upload their specific datasets and fine-tune these models accordingly [3]. Meanwhile, the pretraining datasets for Large Models have grown to web-scale datasets with billions of samples crawled from the internet [4]. At this scale, it is infeasible to manually curate each example, which leaves a viable opportunity for attackers to launch actual poisoning attacks [5]. To ensure the safety of models in practical use, numerous methods have been proposed to purify models before releasing them [6,7]. 
Despite substantial efforts in this area, it is not clear whether, even if a purified model’s initial safety performance is impeccable (nearly 0 ASR), this robustness will still be preserved after further fine-tuning on possibly poisoned data. Similar to works [8] that studied safety alignment against fine-tuning on harmful prompts, **we take the first attempt to consider this practical issue for backdoor poisoning threats.** Our study finds that backdoors could be very easily reactivated by further tuning on an extremely small number of poisoned samples (e.g., a single sample for the Blended attack). This reveals that current purification methods cannot truly eliminate backdoor features learned during pretraining. This will undoubtedly pose a greater threat to the real world. Attackers could potentially bypass “firewalls” from safety purifications. Leveraging the stronger capabilities of Foundation models, they could then generate more threatening and disruptive content (more realistic counterfeit images [9] and more potent malicious code [10]), further harming others’ productive activities. **We want to reemphasize why our findings are surprising and our contributions are important:** Our work first proposes a new perspective toward evaluating the effectiveness of backdoor defense methods. Rather than simply focusing on the ASR, we investigate the post-purification robustness via **RA** (Sec. 3.2) and the more practical **QRA** (Sec. 3.3). We find that purified models with current defense methods still retain backdoor features that can be very easily reactivated. Our findings underscore the necessity for more faithful and comprehensive evaluations to ensure lasting protection against backdoor threats. We observe that EP (assuming knowledge of triggers and true labels) does not relearn the backdoor, maintaining low ASR after RA. This validates the possibility of achieving post-purification robustness. 
This contrast also emphasizes that current purification methods are surprisingly weak and superficial. Such insights guide our analysis of what leads to the robustness of EP (Sec. 4.1) and further lead us to propose our **PAM** defense technique (Sec. 4.2). Notably, our PAM significantly improves robustness against RA. ### **Additional evaluations against more attacks**: Following the suggestions from Reviewer YFJE, Reviewer JZci, and Reviewer jgTc, we have expanded our experiments on additional attack types, including the Adaptive-BadNet, Adaptive-Blend, and All-to-All attacks, as presented in the Table below. The results show that our PAM still achieves good post-purification robustness against them. Table: Experiments are conducted on CIFAR-10 with ResNet-18. The O-Backdoor row indicates the original performance of backdoor attacks, the O-Robustness metric represents the purification performance of the defense method, and the P-Robustness metric denotes the post-purification robustness after applying RA. |Evaluation Mode|Adaptive-BadNet (C-Acc/ASR)|Adaptive-Blend (C-Acc/ASR)|BadNet-All2All (C-Acc/ASR)|Blended-All2All (C-Acc/ASR)| |-----------|--------------|--------------|--------------|--------------| |O-Backdoor|94.54/86.83|94.70/94.91|94.25/90.21|94.65/77.73| |O-Robustness (BTI)|92.97/1.40|91.87/4.52|92.29/1.43|93.16/4.11| |P-Robustness (BTI)|93.13/56.82|92.08/45.79|92.41/88.41|93.41/58.84| |O-Robustness (PAM)|92.05/0.65|92.08/0.14|93.18/0.70|92.52/3.61| |P-Robustness (PAM)|91.83/0.53|92.51/4.73|92.51/0.90|92.26/6.43| ----- [1]. https://llama.meta.com/responsible-use-guide/ [2]. https://llama.meta.com/docs/how-to-guides/fine-tuning/ [3]. https://platform.openai.com/docs/guides/fine-tuning [4]. Laion-5b: An open large-scale dataset for training next generation image-text models, NeurIPS 2022. [5]. Poisoning Web-Scale Training Datasets is Practical, IEEE S&P 2024. [6]. Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback, arxiv 2022. [7]. 
Backdoor Learning: A Survey, arxiv 2022. [8]. Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! ICLR 2024 [9]. How to Backdoor Diffusion Models?, CVPR 2023. [10]. Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, arxiv 2024.
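To make the reactivation finding in the global response above concrete, here is a self-contained toy illustration of the Retuning Attack's mechanism: a linear model whose "trigger" weight has been zeroed by an idealized purification regrows that weight after a few SGD steps on a single poisoned sample. All dimensions, weights, and step sizes are hypothetical; the paper's experiments retune actual purified DNNs on small poisoned subsets.

```python
import math

D = 10                       # toy feature dimension; the last coordinate acts as a "trigger"
w_backdoor = [0.1] * D
w_backdoor[-1] = 3.0         # backdoored linear model: strong trigger weight
w_purified = list(w_backdoor)
w_purified[-1] = 0.0         # an idealized purification zeroes the trigger weight (ASR ~ 0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Retuning Attack sketch: a handful of logistic-regression SGD steps on a
# SINGLE poisoned sample (trigger present, label forced to the target class y = 1).
x_poison = [0.0] * D
x_poison[-1] = 1.0
w_ra = list(w_purified)
for _ in range(50):
    logit = sum(wi * xi for wi, xi in zip(w_ra, x_poison))
    p = sigmoid(logit)
    # gradient step on the logistic loss toward the attacker's target label
    w_ra = [wi - 0.5 * (p - 1.0) * xi for wi, xi in zip(w_ra, x_poison)]

# The zeroed trigger weight regrows from 0.0 to well above 2 with one sample.
print(w_purified[-1], w_ra[-1])
```

Because only the trigger coordinate is touched in this toy, clean behavior is barely affected, mirroring the pattern in the tables above where C-Acc stays high while ASR returns after RA.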
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Expert-level protocol translation for self-driving labs
Accept (poster)
Summary: This paper proposes an automated protocol translation framework, which takes natural language descriptions designed for human experimenters as input and outputs a structured representation that can be used for self-driving labs. The framework consists of a three-stage workflow. First, a domain-specific program is synthesized from the natural language description using classic action/entity extraction and an Expectation-Maximization approach. Then, reagent flow analysis, essentially a reaching-definitions analysis of the synthesized program, is performed. Third, constraints prohibiting undesired execution behaviors are inferred and will be used to monitor the execution. Results show that the synthesized protocol translations match manually written ones by human experimenters. Strengths: - The problem is well-motivated and of great significance in advancing AI applications in scientific discovery. - The paper is easy to follow, although the required background knowledge is non-trivial. - The empirical evaluation shows that the proposed approach outperforms pure LLM-based synthesis and matches the manual translation by human experimenters. Weaknesses: - The proposed solution is a portfolio of standard applications of existing tools or well-known algorithms, which is less interesting and novel from a machine learning perspective. - The targeted DSL is relatively simple, and the proposed solution consists of ad-hoc design choices (particularly spatial-temporal dynamics), which may not generalize well to DSLs with richer features. Technical Quality: 3 Clarity: 3 Questions for Authors: What off-the-shelf tools are used in the pre-processing to extract actions and entities? Are they LLMs? Are there important differences between the extraction of action/entities and the extraction of reagent entities (state-of-the-art LLMs are used in the latter)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors briefly discussed limitations in Appendix E. 
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 6 Code Of Conduct: Yes
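The summary above characterizes reagent flow analysis as reaching definitions over the synthesized program. Under that view, a minimal sketch (an assumption about the general technique, not the paper's implementation) links each use of an intermediate product to the most recent step that produced it:

```python
# Reagent flow viewed as reaching definitions over a linear protocol: each
# step consumes reagents and may define an output product; a use of a
# reagent is linked to the latest step that defined it.
def reagent_flow(steps):
    """Return edges (producer_index, consumer_index, reagent)."""
    last_def = {}   # reagent name -> index of the step that last produced it
    edges = []
    for i, step in enumerate(steps):
        for reagent in step.get("reagent", []):
            if reagent in last_def:             # use of an intermediate product
                edges.append((last_def[reagent], i, reagent))
        if step.get("output"):                  # this step defines a new product
            last_def[step["output"]] = i
    return edges

# Toy protocol in the structured step format used by the framework.
protocol = [
    {"action": "add", "reagent": ["XBP buffer"], "output": "sample/XBP mix"},
    {"action": "add", "reagent": ["sample/XBP mix"], "output": "flow-through"},
    {"action": "discard", "reagent": ["flow-through"], "output": ""},
]
print(reagent_flow(protocol))  # [(0, 1, 'sample/XBP mix'), (1, 2, 'flow-through')]
```

The resulting edges make the lifecycle of each intermediate product explicit, which is what downstream constraint inference operates on.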
Rebuttal 1: Rebuttal: > The proposed solution is a portfolio of standard applications of existing tools or well-known algorithms, which is less interesting and novel from a machine learning perspective. Thanks for the comment. In this work, we study the problem of translating experimental protocols designed for human experimenters into formats suitable for machine execution. Our primary motivation is to bridge the existing gap between machine learning algorithms in the field of AI for science, such as molecular design, and the grounded experimental verification facilitated by self-driving laboratories. We appreciate the reviewer's recognition that "the required background knowledge is non-trivial". Indeed, conventional workflows for setting up self-driving laboratories and conducting physical experiments necessitate deep integration with domain experts, significantly impeding the progress of machine learning researchers in verifying and iterating their findings. Consequently, our framework aims to provide an infrastructure that enables these researchers to advance their machine learning algorithms and seamlessly validate their findings, thereby closing the loop of automatic scientific discovery. To meet the requirements of such an infrastructure, we conduct a systematic study to identify existing gaps in protocol translation between human experimenters and automatic translators in self-driving labs. Accordingly, we develop a three-stage framework that integrates cognitive insights from human experts with approaches from program synthesis, automata construction, and counterfactual analysis. At the syntax level, we synthesize the operation dependence graph to transform natural-language-based protocols into structured representations, thereby making explicit the operation-condition mappings and the control flows. 
At the semantics level, we analyze the reagent flow graph to reconstruct the complete lifecycles of intermediate products, addressing the latent, missing, or omitted properties and values. At the execution level, we contextualize both the operation dependence graph and the reagent flow graph within spatial and temporal dynamics, resulting in the protocol dependence graph. This graph enables counterfactual reasoning to detect potential conflicts or shortages of execution resources and to identify inappropriate combinations of operations in execution sequences. > The targeted DSL is relatively simple, and the proposed solution consists of ad-hoc design choices (particularly spatial-temporal dynamics), which may not generalize well to DSLs with richer features Thanks for the comment. In this study, we aim to investigate the development of automatic translators for executing experimental protocols in self-driving labs. To execute the instructions on Internet-of-Things-connected hardware devices, such as valves, pumps, and reactors, protocols must ultimately be formatted in JSON-style configuration files, which represent a mainstream format in hardware-software communication (see C.3-1). Although DSLs with syntactic and semantic features differing from those of JSON-style DSLs for self-driving labs are beyond the scope of this paper, generalizing the framework to DSLs with other language features represents a significant direction for future research. We appreciate the reviewer's suggestion in this regard. We acknowledge that the core components of our framework are designed in an ad-hoc manner. The design guidelines are derived from both a systematic study of the required cognitive capabilities in protocol translation and established computer science theories. Rather than directly devising engineering solutions tailored to the specific problem, we decompose the problem into abstract subproblems, including symbolic regression, flow analysis, and counterfactual reasoning. 
We consider these subproblems as scientific challenges and develop solutions from the conceptual level to the implementation level in a top-down approach. Specifically, the scientific challenge behind the design of spatial-temporal dynamics lies in the extremely long-tail distribution of historical run-time error cases. To address this issue, we propose leveraging foresight simulation by contextualizing individual operations within both the spatial dimension, i.e., the specific assignment of resources according to capacity requirements, and the temporal dimension, i.e., the specific precondition and postcondition of resources according to their properties. The division of spatial and temporal dimensions is mutually exclusive and echoes the two major aspects of computer programs --- computation resources and control logic. Thus, although ad-hoc, the design of spatial-temporal dynamics targets the underlying scientific problem behind the superficial challenges and holds the potential for generalization to other DSLs. Extending the application scope of our framework is a significant direction for future work. > What off-the-shelf tools are used in the pre-processing to extract actions and entities? Are they LLMs? Are there important differences between the extraction of action/entities and the extraction of reagent entities? Thanks for the question. We employ the SpaCy Dependency Parser to analyze the syntactic structure of protocols, which allows for the extraction of verbs and the identification of associated objects and modifiers. After parsing, these verbs are aligned with corresponding operational actions in our DSL by maximizing the cosine similarity between their word2vec representations and those of the DSL operations. Furthermore, we utilize an LLM-based few-shot model to accurately identify and classify entities within the text. 
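The verb-to-operation alignment step just described can be sketched as follows. The vectors below are toy stand-ins for word2vec embeddings and the verb/operation names are illustrative; the actual pipeline uses SpaCy parses and learned embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical embeddings standing in for word2vec representations.
EMBEDDING = {
    "spin":       (0.90, 0.10, 0.00),
    "centrifuge": (0.85, 0.20, 0.05),
    "add":        (0.10, 0.90, 0.10),
    "dissolve":   (0.20, 0.10, 0.90),
}
DSL_OPERATIONS = ["centrifuge", "add", "dissolve"]

def align_verb(verb):
    """Align an extracted verb with the DSL operation of maximal cosine similarity."""
    return max(DSL_OPERATIONS, key=lambda op: cosine(EMBEDDING[verb], EMBEDDING[op]))

print(align_verb("spin"))  # "spin" maps to "centrifuge"
```

The point of the similarity-based alignment is that protocol verbs absent from the DSL (here "spin") still resolve to the semantically nearest supported operation.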
The rationale for integrating LLMs with classical parsing techniques lies in leveraging the advanced natural language processing capabilities of LLMs while mitigating their inherent uncertainties. Significant differences exist between the various stages of this pipeline (see C.3-2). --- Rebuttal 2: Title: C.3-1: XDL machine-executable code to drive hardware devices for automatic chemistry experiments Comment: Here, we provide a realistic example of XDL machine-executable code used to operate the devices in a self-driving laboratory for chemistry [1]. ``` Original protocol: 2,2'-dinitro-6,6'-dimethylbiphenyl (39 g, 0.14 mol) was dissolved in 100 ml ethyl acetate in a hydrogenation vessel. Palladium on Carbon (10%, 5.5 g) was added. The system was evacuated and H2 added to a pressure of 28 psi. The reaction was left until no further uptake of H2 could be detected. The solution was filtered through celite and the solvent evaporated to give the product diamine in 100% yield. Machine executable code: <?xdl version="1.0.0" ?> <XDL> <Synthesis> <Hardware> <Component id="cartridge_celite" type="cartridge" chemical="celite" /> <Component id="reactor" type="reactor" /> <Component id="rotavap" type="rotavap" /> </Hardware> <Reagents> <Reagent name="2,2'-dinitro-6,6'-dimethylbiphenyl" id="2,2'-dinitro-6,6'-dimethylbiphenyl" role="reagent" /> <Reagent name="H2" id="H2" role="reagent" /> <Reagent name="ethyl acetate" id="ethyl acetate" role="reagent" /> <Reagent name="palladium on Carbon (10 %)" id="palladium on Carbon (10 %)" role="reagent" /> </Reagents> <Procedure> <AddSolid vessel="reactor" reagent="2,2'-dinitro-6,6'-dimethylbiphenyl" mass="39 g" /> <Dissolve vessel="reactor" solvent="ethyl acetate" volume="100 mL" temp="25 °C" /> <AddSolid vessel="reactor" reagent="palladium on Carbon (10 %)" mass="5.5 g" stir="True" /> <EvacuateAndRefill vessel="reactor" /> <Add vessel="reactor" reagent="H2" volume="0" stir="True" speed="40.0" /> <FilterThrough from_vessel="reactor" 
to_vessel="rotavap" through="celite" /> <Evaporate vessel="rotavap" time="30 min" /> </Procedure> </Synthesis> </XDL> ``` References: [1] S. Hessam M. Mehr et al., A universal system for digitization and automatic execution of the chemical synthesis literature. Science 370, 101-108 (2020). --- Rebuttal 3: Title: C.3-2: The preprocessing pipeline on real-world examples Comment: Below, we present several real-world examples to illustrate these distinctions. In our implementation of the pipeline, we employ a state-of-the-art dependency parser [1] alongside a state-of-the-art LLM-based NER model [2]. | original text | action extraction | entity extraction | classification with LLM | preprocess result | LLM-pure | |---------------|-------------------|-------------------|--------------|----------|----------| | Stain with DAPI nucleic acid stain for 30 seconds. | stain | ['DAPI nucleic acid stain', '30 seconds'] | [(property='reagent', value='DAPI nucleic acid stain'), (property='time', value='30 seconds')] | {"action": "stain", "output": "", "reagent": ["DAPI nucleic acid stain"], "time": ["30 seconds"]}; | {"action": "stain", "duration": ["30 seconds"], "reagent": ["DAPI nucleic acid stain"]}; | | Purify CD4+ by magnetic isolation using the Auto MACS sorter (Miltenyi Biotec) using POSSELD2 program. | purify | ['the Auto MACS sorter (Miltenyi Biotec)', 'POSSELD2 program', 'CD4+'] | [(property='reagent', value='CD4+'), (property='device', value='the Auto MACS sorter (Miltenyi Biotec)'), (property='device', value='POSSELD2 program')] | {"action": "purify", "device": ["the Auto MACS sorter (Miltenyi Biotec)", "POSSELD2 program"], "output": "", "reagent": ["CD4+"]}; | {"action": "purify", "device": ["the Auto MACS sorter (Miltenyi Biotec)", "POSSELD2"], "method": ["magnetic isolation"], "reagent": ["CD4+"]}; | | Measure baseline oxidative status every 20 s for at least 5 min, then add stimulating substances (e.g., thapsigargin). 
| measure, add | ['baseline oxidative status', 'every 20 s', '5 min'], ['stimulating substances'] | [Parameter(property='output', value='baseline oxidative status'), Parameter(property='time', value='every 20 s'), Parameter(property='time', value='5 min')], [(property='reagent', value='stimulating substances')] | {"action": "measure", "output": "baseline oxidative status", "time": ["every 20 s", "5 min"]}; {"action": "add", "reagent": ["stimulating substances"]}; | {"action": "measure", "output": "baseline oxidative status", "reagent": ["stimulating substances"], "time": ["every 20 s", "5 min"]}; | | Spin the crude extracts by ultracentrifugation at 55000 RPM to properly pellet residual insoluble proteins from the extract. | spin | ['ultracentrifugation', '55000 RPM', 'residual insoluble proteins', 'the extract'] | [Parameter(property='device', value='ultracentrifugation'), Parameter(property='force', value='55000 RPM'), Parameter(property='reagent', value='residual insoluble proteins'), Parameter(property='container', value='the extract')] | {"action": "spin", "device": ["ultracentrifugation"], "force": ["55000 RPM"], "output": "", "reagent": ["residual insoluble proteins"], "time": [""]} | {"action": "spin", "reagent": ["crude extracts"], "method": ["ultracentrifugation"], "purpose": ["to properly pellet residual insoluble proteins from the extract"], "speed": ["55000 RPM"]} | | Confirm positive colonies by transient transfection of sgRNAs analysis (SPH primers). | confirm | ['positive colonies', 'sgRNAs analysis (SPH primers)'] | [Parameter(property='output', value='positive colonies'), Parameter(property='reagent', value='sgRNAs analysis (SPH primers)')] | {"action": "confirm", "output": "positive colonies", "reagent": ["sgRNAs analysis (SPH primers)"]} | {"action": "confirm", "device": ["SPH primers"], "method": ["transient transfection of sgRNAs analysis"], "output": ["positive colonies"]} | References: [1] Honnibal, M. and Johnson, M. (2015). 
An improved non-monotonic transition system for dependency parsing. In Annual Conference on Empirical Methods in Natural Language Processing. [2] Xie, T., Li, Q., Zhang, Y., Liu, Z., and Wang, H. (2024). Self-improving for zero-shot named entity recognition with large language models. arXiv preprint arXiv:2311.08921.
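The "preprocess result" column in the table above combines the extracted action with the LLM-classified (property, value) pairs. A minimal sketch of that merge, using a hypothetical `build_step` helper (not the paper's code):

```python
def build_step(action, classified_params):
    """Assemble a structured protocol step from an action verb and the
    (property, value) pairs produced by entity classification."""
    step = {"action": action, "output": ""}
    for prop, value in classified_params:
        # Properties are multi-valued lists, matching the format in the table.
        step.setdefault(prop, []).append(value)
    return step

step = build_step("stain", [("reagent", "DAPI nucleic acid stain"),
                            ("time", "30 seconds")])
print(step)
# {"action": "stain", "output": "", "reagent": ["DAPI nucleic acid stain"], "time": ["30 seconds"]}
```

This reproduces the first row of the table; the LLM-pure column differs precisely because the pure-LLM baseline chooses its own property names (e.g., "duration" instead of "time").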
Summary: The paper presents a framework for translating experimental protocols from natural language (NL) to machine-interpretable formats, specifically designed for self-driving laboratories. The proposed framework automates the protocol translation process through a three-stage workflow that constructs Protocol Dependence Graphs (PDGs) incrementally at the syntax, semantics, and execution levels. The approach is validated through quantitative and qualitative evaluations, demonstrating its performance on par with human experts. Strengths: * The paper introduces a novel, automated approach to protocol translation for self-driving laboratories, addressing a critical gap in the transition from AI-driven discoveries to empirical experimentation. * The paper is well-structured and clearly written. Weaknesses: - The proposed method requires substantial computational resources for training and execution, which might limit its accessibility for some research teams. - The paper could benefit from a more detailed analysis of the types of errors made by the automated translator compared to human experts, which would help understand the limitations and areas for improvement. Technical Quality: 3 Clarity: 3 Questions for Authors: - Can the authors provide more details on how the system handles ambiguous or incomplete protocol instructions that may be common in real-world scenarios? - What specific optimizations could be applied to reduce the computational requirements of the proposed framework? - How does the system ensure the safety and correctness of translated protocols, especially in high-stakes domains such as medical and clinical research? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > What specific optimizations could be applied to reduce the computational requirements of the proposed framework? Thanks for the question. Computational efficiency is always a topic of interest when evaluating new computational frameworks. Let us consider an incoming protocol with $k$ steps, with each step configured by a constant number of parameters, denoted as $\epsilon$. At the syntax level, the primary computation bottleneck arises during DSL program synthesis, where the EM algorithm exhibits a worst-case complexity of $O(\epsilon^k)$. This is a highly conservative estimate, as mainstream optimization approaches can solve the EM much more efficiently. At the semantics level, the bottleneck occurs during reagent flow analysis, which has $O(k^2)$ complexity. Notably, only approximately 10% of the steps are included in the nested loop for reagent flow construction, as about 90% of the steps are linearly connected. At the execution level, the protocol execution model also exhibits $O(k^2)$ complexity, encompassing both forward and backward tracing. This can be optimized by replacing the full tracing strategy with a sliding window built upon the topological dependencies between steps. Although the complexities of the algorithms at these three levels are tractable, there is substantial room for improving the efficiency of the framework. Investigating methods to speed up the translation process for protocols with extremely high complexity would be a valuable area of research. We are committed to making the computational framework as accessible as possible for all research teams. In addition, our proposed framework functions as an auxiliary module for LLMs, supporting the use of off-the-shelf LLMs such as GPT and Llama without the need for domain-specific fine-tuning. Costs associated with calling commercial LLM APIs are quite affordable. We selected OpenAI's gpt-3.5-turbo-0125 model for our experiments. 
Across 75 test protocols, we executed 1816 queries to achieve syntax-level translation, resulting in structured protocols. At the semantic level, we conducted 4062 queries for completion tasks (including translating protocols retrieved from the training dataset). Consequently, our expenditures were approximately 17 USD in total. > How does the system ensure the safety and correctness of translated protocols, especially in high-stakes domains such as medical and clinical research? Thanks for the question. In general, ensuring the safety and correctness of translated protocols in high-stakes domains is an exceptionally challenging task. Several factors contribute to these challenges, including accurately mapping operations to their corresponding configuration parameters, precisely parsing control flows from natural language, completing latent semantics with domain-specific knowledge, inferring missing or omitted key information, tracking resource capacities, and verifying the safety of run-time execution of experiments. Even minor errors in these areas can significantly compromise the safety and correctness of translated protocols. Consequently, we have made specific efforts in response to these challenges. At the syntax level, we synthesize the operation dependence graph to transform natural-language-based protocols into structured representations. This approach makes the operation-condition mappings and control flows explicit. At the semantics level, we analyze the reagent flow graph to reconstruct the complete lifecycles of intermediate products, thereby addressing latent, missing, or omitted properties and values. At the execution level, we contextualize both the operation dependence graph and the reagent flow graph within spatial and temporal dynamics, resulting in a protocol dependence graph. 
This graph facilitates counterfactual reasoning to identify potential conflicts or shortages of execution resources and inappropriate combinations of operations within execution sequences. We provide several illustrative examples to demonstrate these concepts (see C.2-1). > The paper could benefit from a more detailed analysis of the types of errors made by the automated translator compared to human experts, which would help understand the limitations and areas for improvement. Thanks for the suggestion. Here we present a detailed analysis of the errors made by our proposed automatic translator compared to human experts. We discuss the potential improvements of the translator accordingly. At the syntax level, the major difference lies in the analysis of long sentences in natural language. Human experts analyze the parameters of events/actions or multiple actions in long sentences with ease, whereas our approach sometimes has problems with the correspondence between actions and parameters (see C.2-2-1). At the semantic level, when supplementing known unknowns, human experts tend to infer parameters based on established protocols outside their expertise; when supplementing unknown unknowns, human experts tend to transfer their knowledge from familiar domains to protocols in various fields. Our system, however, completes parameters based on all collected protocols, which is essentially the opposite of the transfer process used by human experts (see C.2-2-2). At the execution level, human experts track capacity primarily based on prior knowledge, subsequently using context to judge the appropriateness of the equipment used. In contrast, the machine extracts the entire flow process, enabling it to calculate each step and ensure that the capacity tracking is scientifically sound and reasonable (see C.2-2-3). > Can the authors provide more details on how the system handles ambiguous or incomplete protocol instructions that may be common in real-world scenarios? 
Thanks for the question. Here, we present a series of case studies to elucidate the specific behaviors of components within the proposed three-stage framework at the syntax, semantics, and execution levels (see C.2-3). --- Rebuttal 2: Title: C.2-1: Running examples on the measures to ensure the safety and correctness of translated protocols Comment: **Tab. 2-1-1: Syntax level - Operation-condition mapping** | original text | syntax level | action | conditions | |---------------|--------------|--------|------------| | Spin media at 500-1,000 x g for 10 min (optional), pre-x g for 10 min, filter with 0.22 µm PES membrane, freeze at -80°C. | {"action": "spin", "output": "filtered media", "speed": ["500-1,000 x g"], "time": ["10 min"]}, {"action": "filter", "output": "filtered media", "device": ["0.22 µm PES membrane"]}, {"action": "freeze", "output": "frozen media", "temperature": ["-80°C"]} | Spin, Filter, Freeze | Speed: 500-1,000 x g, Time: 10 min, Device: 0.22 µm PES membrane, Temperature: -80°C | | Thaw 4 ml supernatant on ice, add 4 ml XBP buffer. | {"action": "thaw", "output": "thawed supernatant", "volume": ["4 ml"], "reagent": ["supernatant"]}, {"action": "add", "output": "sample/XBP mix", "reagent": ["XBP buffer"], "volume": ["4 ml"]} | Thaw, Add | Volume: 4 ml, Temperature: On ice | | Add sample/XBP mix to exoEasy maxi spin column, centrifuge 1-3 min at 500 x g, discard flow-through. | {"action": "add", "output": "flow-through", "reagent": ["sample/XBP mix"], "container": ["spin column"]}, {"action": "centrifuge", "output": "flow-through", "speed": ["500 x g"], "time": ["1-3 min"]}, {"action": "discard", "output": "", "reagent": ["flow-through"]} | Add, Centrifuge, Discard | Container: Spin column, Speed: 500 x g, Time: 1-3 min | | Add 10 ml XWP to spin column, centrifuge 5 min at 5,000 x g, transfer column to fresh collection tube. 
| {"action": "add", "output": "", "reagent": ["XWP"], "volume": ["10 ml"]}, {"action": "centrifuge", "output": "", "speed": ["5,000 x g"], "time": ["5 min"], "container": ["spin column"]}, {"action": "transfer", "output": "", "container": ["fresh collection tube"]} | Add, Centrifuge, Transfer | Volume: 10 ml, Speed: 5,000 x g, Time: 5 min, Container: Spin column, Fresh collection tube | | Add 700 µL Qiazol to spin column, centrifuge 5 min at 5,000 x g, spin PLG tubes 30 s at 16,000 x g. | {"action": "add", "output": "", "reagent": ["Qiazol"], "volume": ["700 µL"]}, {"action": "centrifuge", "output": "", "speed": ["5,000 x g"], "time": ["5 min"], "container": ["spin column"]}, {"action": "spin", "output": "", "speed": ["16,000 x g"], "time": ["30 s"], "container": ["PLG tubes"]} | Add, Centrifuge, Spin | Volume: 700 µL, Speed: 5,000 x g, Time: 5 min, Speed: 16,000 x g, Time: 30 s, Container: Spin column, PLG tubes | | Add flow-through to PLG tube, vortex 5 s, incubate 5 min at RT. | {"action": "add", "output": "", "reagent": ["flow-through"], "container": ["PLG tube"]}, {"action": "vortex", "output": "", "time": ["5 s"]}, {"action": "incubate", "output": "", "time": ["5 min"], "temperature": ["RT"]} | Add, Vortex, Incubate | Container: PLG tube, Time: 5 s, Time: 5 min, Temperature: RT | | Add 90 µL chloroform. | {"action": "add", "output": "", "volume": ["90 µL"], "reagent": ["chloroform"]} | Add | Volume: 90 µL | | Shake vigorously for 15 s, incubate 2-3 min at RT. | {"action": "shake", "output": "", "time": ["15 s"]}, {"action": "incubate", "output": "", "time": ["2-3 min"], "temperature": ["RT"]} | Shake, Incubate | Time: 15 s, Time: 2-3 min, Temperature: RT | | Centrifuge 15 min at 12,000 x g, transfer upper aqueous phase to new tube. 
| {"action": "centrifuge", "output": "upper aqueous phase", "speed": ["12,000 x g"], "time": ["15 min"]}, {"action": "transfer", "output": "upper aqueous phase", "container": ["new tube"]} | Centrifuge, Transfer | Speed: 12,000 x g, Time: 15 min, Container: New tube | | Add 2 volumes 100% ethanol, mix. | {"action": "add", "output": "ethanol mixture", "volume": ["2 volumes"], "reagent": ["100% ethanol"]}, {"action": "mix", "output": "", "reagent": ["ethanol mixture"]} | Add, Mix | Volume: 2 volumes | | Add mix to MinElute spin column, centrifuge 15 s at 1,000 x g, discard flow-through, repeat until all sample is used. | {"action": "add", "output": "", "reagent": ["ethanol mixture"], "container": ["MinElute spin column"]}, {"action": "centrifuge", "output": "", "speed": ["1,000 x g"], "time": ["15 s"]}, {"action": "discard", "output": "", "reagent": ["flow-through"]}, {"action": "repeat", "output": "", "condition": ["until all sample is used"]} | Add, Centrifuge, Discard, Repeat | Container: MinElute spin column, Speed: 1,000 x g, Time: 15 s, Condition: Until all sample is used | | Wash column with 700 µL Buffer RWT, centrifuge 15 s at ≥8,000. | {"action": "wash", "output": "", "reagent": ["Buffer RWT"], "volume": ["700 µL"]}, {"action": "centrifuge", "output": "", "speed": [">=8,000"], "time": ["15 s"]} | Wash, Centrifuge | Volume: 700 µL, Speed: ≥8,000, Time: 15 s | | Wash twice with 500 µL Buffer RPE, centrifuge 15 s at ≥8,000. | {"action": "wash", "output": "RNase-free", "reagent": ["Buffer RPE"], "volume": ["500 µL"]}, {"action": "centrifuge", "output": "RNase-free", "speed": [">=8,000"], "time": ["15 s"]} | Wash, Centrifuge | Volume: 500 µL, Speed: ≥8,000, Time: 15 s | --- Rebuttal 3: Title: C.2-1 continued Comment: **Tab. 2-1-2: Syntax level - Operation control flows** | original text | syntax level | action | control flows | |---------------|--------------|--------|---------------| | Centrifuge the cell suspension at 200 x g at room temperature for 5 min. 
| {"action": "centrifuge", "output": "cell pellet", "temperature": ["room temperature"], "time": ["5 min"], "force": ["200 x g"]} | Centrifuge | Linear | | Remove the supernatant. | {"action": "remove", "output": "supernatant"} | Remove | Linear | | Suspend the cell pellet with 2 ml ACK lysing buffer for 1 min to deplete red blood cells. | {"action": "suspend", "output": "depleted cell suspension", "volume": ["2 ml"], "reagent": ["ACK lysing buffer"], "time": ["1 min"]} | Suspend | Linear | | If red blood cells are not completely depleted, repeat the ACK lysing buffer step until they are. | {"action": "repeat", "output": "", "reagent": ["ACK lysing buffer"], "condition": ["if red blood cells are not completely depleted"]} | Repeat | Non-linear | | Filter the cell suspension through a 40 μm nylon strainer. | {"action": "filter", "output": "filtered cell suspension", "container": ["40 μm nylon strainer"]} | Filter | Linear | | Wash the strainer with 2 ml 1x DPBS for 5 min. | {"action": "wash", "output": "", "container": ["strainer"], "reagent": ["1x DPBS"], "volume": ["2 ml"], "time": ["5 min"]} | Wash | Linear | | Wash the cell pellet with 1x DPBS with 20 ng/ml murine M-CSF in a 100 mm Petri dish. | {"action": "wash", "output": "", "reagent": ["1x DPBS with 20 ng/ml murine M-CSF"], "container": ["100 mm Petri dish"]} | Wash | Linear | | Suspend in 15 ml complete DMEM medium. | {"action": "suspend", "output": "cell suspension in DMEM", "volume": ["15 ml"], "reagent": ["complete DMEM medium"]} | Suspend | Linear | | Incubate at 37 °C, 5% CO2. | {"action": "incubate", "output": "incubated cells", "temperature": ["37 °C"], "environment": ["5% CO2"]} | Incubate | Linear | | After 3 days, replace half of the medium with fresh complete DMEM medium. | {"action": "replace", "output": "", "reagent": ["fresh complete DMEM medium"], "time": ["after 3 days"]} | Replace | Linear | | Repeat this step every 2 days. 
| {"action": "repeat", "output": "", "reagent": ["fresh complete DMEM medium"], "frequency": ["every 2 days"]} | Repeat | Non-linear | **Tab. 2-1-3: Semantic level - Latent semantics of known unknowns** | original text | semantic level | known unknowns | |-----|-----|------| | Transfer the sample (plasma, , cell suspension) into a glass centrifuge vial. | {"action": "transfer", "output": "", "reagent": ["the sample (plasma, , cell suspension)"], "container": ["a glass centrifuge vial"]} | | | Adjust the volume to 1 ml with PBS. | {"action": "modify", "output": "heparinized blood.1 ml medium", "volume": ["<<<1 ml>>>"], "length": [""], "device": [""]} | "1 ml" | | 50-200 µl plasma was taken from heparinized blood.1 ml medium. | {"action": "take", "output": "", "reagent": ["heparinized blood.1 ml medium"]} | | | Plasma was directly taken from cell culture. | {"action": "take", "output": "a plasma sample", "reagent": [""]} | | | Add 10 µl of the internal standard (10 μM C17-S1P in MeOH). Add 300 µl of 18.5% HCl. | {"action": "add", "output": "", "reagent": ["18.5% HCl"], "volume": ["<<<300 µl>>>"]} | "300 µl" | | As an example, S1P extraction from a plasma sample is shown in step A7. | {"action": "show", "output": "step A7", "reagent": ["a plasma sample"]} | | | The CHCl3-phase is extracted by directly pipetting through the upper aqueous phase. | {"action": "extract", "output": "the CHCl3", "container": ["the upper aqueous phase"], "reagent": ["step A7"]} | | | Add this CHCl3-phase to the transferred CHCl3-phase of step A7. | {"action": "add", "output": "", "reagent": ["this CHCl3-phase"]} | | | Vacuum-dry the CHCl3 in the vacuum rotator at 60 °C for 45 min. | {"action": "rinse", "output": "", "reagent": ["<<<the CHCl3>>>"], "temperature": ["60 °C"], "time": ["<<<45 min>>>"]} | "the CHCl3", "45 min" | | Alternatively, the samples can be dried under nitrogen gas flow. 
| {"action": "dry", "output": "", "reagent": ["samples"], "time": ["1-20 min"]} | | | Re-equilibrate with 90% solution A. | {"action": "equilibrate", "output": "S1P", "concentration": ["90% solution"], "volume": [""]} | | | S1P is analyzed with the mass transition 380 m/z -> 264 m/z. For quantitative analysis, a standard curve with S1P amounts of 1 pmol to 100 pmol as the internal standard is generated. | {"action": "examine", "output": "quantitative analysis", "reagent": ["S1P"]} | | --- Rebuttal 4: Title: C.2-1 continued Comment: **Tab. 2-1-4: Semantic level - Latent semantics of unknown unknowns** | original text | semantic level | unknown unknowns | |---------------|----------------|----------------| | Harvest approximately 1×10<sup>7</sup> cells by centrifugation for 5 min. | {"action": "harvest", "output": "", "device": ["centrifugation"], "force": ["<<<2000 RPM>>>"], "time": ["5 min"]} | "<<<2000 RPM>>>" | | Cell lysates are homogenized by passing through 22-gauge needles. | {"action": "homogenize", "output": "", "reagent": ["cell lysates"]} | | | Tubes are put on ice for 15 min to complete the lysis. | {"action": "incubate", "output": "", "container": ["tubes"], "time": ["15 min"], "temperature": ["on ice"]} | | | Crude extracts are then centrifuged. | {"action": "centrifuge", "output": "", "force": ["<<<2500 RPM>>>"], "time": ["<<<5 min>>>"]} | "<<<5 min>>>" | | Supernatants are transferred to fresh centrifuge tubes. | {"action": "transfer", "output": "", "container": ["fresh centrifuge tubes"]} | | | Cold 5 M NaCl is added to each sample to make a salt concentration of between 0.7 – 1.0 M to disrupt protein-protein interactions. | {"action": "add", "output": "sample with NaCl", "container": ["each sample"], "reagent": ["5 M NaCl"], "concentration": ["0.7 – 1.0 M"]} | | | Spin the crude extracts by ultracentrifugation to properly pellet residual insoluble proteins from the extract. 
| {"action": "spin", "output": "Hypotonic Buffer", "device": ["ultracentrifugation"], "force": ["<<<55000 RPM>>>"], "reagent": ["residual insoluble proteins"]} | "<<<55000 RPM>>>" | | Transfer supernatants into fresh centrifuge tubes. | {"action": "transfer", "output": "", "reagent": ["supernatants"], "container": ["fresh centrifuge tubes"]} | | | Rinse Protein A beads in Hypotonic Buffer until ready for use. | {"action": "rinse", "output": "use", "reagent": ["Hypotonic Buffer"]} | | | Take a volume of cell lysates (prepared as described above). | {"action": "take", "output": "Hypotonic Buffer", "volume": ["cell lysates"]} | | | Dilute with Hypotonic Buffer to 250 – 500 mM salt to enable protein-protein interactions. | {"action": "dilute", "output": "antibody", "reagent": ["Hypotonic Buffer"]} | | | Add 2 µg of preclearing antibody to the diluted lysate (e.g., anti), vortex, add 50 µL of Protein A beads. | {"action": "add", "output": "polyclonal anti-MEKK1", "reagent": ["antibody", "Protein A beads"]} | | | Add 2 µg of polyclonal anti-MEKK1 to the lysates, add 50 µL of Protein A beads at 4 °C for 1 h. | {"action": "add", "output": "", "reagent": ["polyclonal anti-MEKK1"], "container": ["the lysates"], "temperature": ["4 °C"], "time": ["1 h"]} | | | Touchspin beads, wash beads with hypotonic buffer (supplemented with NaCl). | {"action": "wash", "output": "", "reagent": ["hypotonic buffer"], "concentration": ["<<<300 mM>>>"]} | "<<<300 mM>>>" | | In total, 3 – 5 washes of the beads are performed. | {"action": "perform", "output": "", "reagent": ["hypotonic buffer"], "frequency": ["3 – 5"]} | | | Finally, wash once with Hypotonic Buffer. | {"action": "wash", "output": "", "reagent": ["Hypotonic Buffer"]} | | | Purified MEKK1 may be stored by snap-freezing in liquid nitrogen. 
| {"action": "store", "output": "M", "method": ["snap-freezing"], "reagent": ["liquid nitrogen"]} | | | Following preparation of MEKK1 immunoprecipitates (as above), incubate with 7 µg of JNKK1(K131M) along with 5 µCi of [γ-<sup>32</sup>P]ATP for 30 min. | {"action": "incubate", "output": "", "reagent": ["JNKK1(K131M)", "[γ-<sup>32</sup>P]ATP"], "container": ["<<<Kinase Assay Buffer>>>"], "temperature": ["<<<30 °C>>>"], "time": ["30 min"]} | "<<<Kinase Assay Buffer>>>", "<<<30 °C>>>" | --- Rebuttal 5: Title: C.2-1 continued Comment: **Tab. 2-1-5: Execution level - Capacity of resources** | original text | execution level | reagent flow graph | |-----|-----|------| | Prepare annealing solution of 50 µM RNA/DNA oligos with 50 mM NaCl in DNase/RNase-free water, aliquot 50 µl in PCR tube. | {"action": "prepare", "output": "annealing solution", "concentration": ["50 µM RNA/DNA oligos", "50 mM NaCl"], "reagent": ["DNase/RNase-free water"], "volume": ["50 µl"], "container": ["PCR tube"]} | in: DNase/RNase-free water (50 µl), RNA/DNA oligos (50 µM), NaCl (50 mM); out: annealing solution (50 µl) | | Dissolve inhibitor compound in DMSO to 10 mM, if needed, prepare serial dilutions in Milli-Q water. | {"action": "dissolve", "output": "inhibitor compound solution", "reagent": ["inhibitor compound", "DMSO"]} | in: inhibitor compound, DMSO; out: inhibitor compound solution (volume depends on dilution) | | Add water (20 µl in blanks, 10 µl in controls) to 96-well plate. | {"action": "add", "output": "water in wells", "reagent": ["water"], "container": ["96-well plate"]} | in: water (20 µl for blanks, 10 µl for controls); out: water in 96-well plate (20 µl in blanks, 10 µl in controls) | | Add 80 µl RT reaction mix (1.25x). | {"action": "add", "output": "RT reaction mix in wells", "volume": ["80 µl"]} | in: RT reaction mix (80 µl); out: RT reaction mix in 96-well plate (80 µl) | | Add 10 µl inhibitor dilution to samples, to each well. 
| {"action": "add", "output": "samples with inhibitor", "volume": ["10 µl"], "reagent": ["inhibitor dilution"]} | in: inhibitor dilution (10 µl); out: samples with inhibitor (10 µl) | | Stop reaction with 50 µl EDTA (0.5 M, pH 8.0). | {"action": "stop", "output": "stopped reaction", "reagent": ["EDTA"], "volume": ["50 µl"]} | in: EDTA (50 µl); out: stopped reaction with EDTA (50 µl) | | Quantify reaction with Victor 3 at 490/528 nm, report inhibitor values as percentage of control. | {"action": "quantify", "output": "quantified reaction", "device": ["Victor 3"]} | in: reaction; out: quantified reaction at 490/528 nm | | Subtract blank value from samples. | {"action": "subtract", "output": "corrected samples", "reagent": ["blank value"]} | in: blank value, samples; out: corrected sample values | | Calculate IC50 value as the concentration reporting 50% reduction of signal compared to control. | {"action": "calculate", "output": "IC50 value", "reagent": ["signal"]} | in: signal; out: IC50 value | **Tab. 2-1-6: Execution level - Safety of operations** | original text | execution level | reagent flow graph | |-----|-----|------| | Replace medium after 12 hours (Day 2). | {"action": "replace", "output": "medium replaced", "container": ["medium"], "volume": [""]} | in: old medium; out: new medium | | Digest mESCs with 0.05% trypsin, prepare for FACS into 96-well plates (Day 10). | {"action": "digest", "output": "mESCs", "reagent": ["0.05% trypsin"], "container": ["96-well plates"]} | in: mESCs, 0.05% trypsin; out: digested mESCs (ensure trypsin is neutralized to avoid over-digestion) | | Remove single colonies from 96-well plates to 24-well plates. | {"action": "remove", "output": "single colonies", "container": ["96-well plates", "24-well plates"]} | in: single colonies; out: single colonies in 24-well plates | | Confirm positive colonies by transient transfection of sgRNAs analysis (SPH primers) (Day 14-15). 
| {"action": "confirm", "output": "positive colonies", "reagent": ["SPH primers"]} | in: single colonies, SPH primers; out: positive colonies | | Replace medium after 12 hours (Day 2). | {"action": "replace", "output": "medium replaced", "container": ["medium"], "volume": [""]} | in: old medium; out: new medium | | Sort single cells into 96-well plates by FACS. | {"action": "sort", "output": "single cells", "device": ["FACS"], "container": ["96-well plates"]} | in: single cells; out: sorted single cells in 96-well plates (ensure proper calibration of FACS to avoid sorting errors) | | Confirm insertion by PCR (Day 18). | {"action": "confirm", "output": "insertion confirmed"} | in: single cells; out: confirmed insertion | | Remove single colonies from 96-well plates to 24-well plates. | {"action": "remove", "output": "single colonies", "container": ["24-well plates"]} | in: single colonies; out: single colonies in 24-well plates | | Confirm positive colonies by PCR (Day 22). | {"action": "confirm", "output": "positive colonies"} | in: single colonies; out: positive colonies | | Measure fluorescent intensity of colonies by FACS, take fluorescence images under confocal microscope (Day 27). | {"action": "take", "output": "fluorescence images", "device": ["confocal microscope"], "container": ["colonies"]} | in: colonies; out: fluorescence images (handle samples to avoid photobleaching) | --- Rebuttal 6: Title: C.2-2-1: Distinctions between human experts and the translator at the syntax level Comment: This series of examples demonstrates the superior performance of our system at the syntax level when processing relatively short sentences. | original text | syntax level | |-----|-----| | Transfer 200 µl of serum into a 1.5 ml microfuge tube. | {"action": "transfer", "output": "buffer", "reagent": ["serum"], "container": ["a 1.5 ml microfuge tube"], "volume": ["200 µl"]} | | Add 200 µl of buffer XBP. 
| {"action": "add", "output": "the sample/XBP mixture", "reagent": ["buffer"], "volume": ["200 µl"]} | | Add the sample/XBP mixture onto the exoEasy spin column. | {"action": "add", "output": "", "reagent": ["the sample/XBP mixture"]} | | Add 10 ml of buffer XWP to the exoEasy spin column. | {"action": "add", "output": "", "reagent": ["buffer XWP"], "volume": ["10 ml"]} | | Transfer the spin column to a fresh collection tube. | {"action": "transfer", "output": "Qiazol", "container": ["a fresh collection tube"]} | | Add 700 μl Qiazol to the membrane of the spin column. | {"action": "add", "output": "", "reagent": ["Qiazol"], "volume": ["700 μl"]} | | Transfer the flow-though, which is the lysate, to a PLG tube. | {"action": "transfer", "output": "flow-though", "container": ["PLG tube"]} | | Incubate at room temperature for 5 minutes. | {"action": "incubate", "output": "", "temperature": ["room temperature"], "time": ["5 minutes"]} | | Add 90 µl chloroform. | {"action": "add", "output": "", "volume": ["90 µl"]} | This series of examples illustrates the challenges faced with longer sentences due to the diversity of actions and the multiple parameters. | original text | syntax level | remarks | |-----|-----|------| | Rinse MSC with 2 ml PBS. | {"action": "rinse", "output": "MSC with 2 ml PBS", "reagent": ["PBS"], "volume": ["2 ml"]} | | | trypsinize with 0.5 ml trypsin, | {"action": "trypsinize", "output": "", "reagent": ["trypsin"], "volume": ["0.5 ml"]} | | | transfer to 15 ml tube, | {"action": "transfer", "output": "", "container": ["15 ml tube"]} | | | add 10 ml DMEM with 10% FBS | {"action": "add", "output": "", "reagent": ["DMEM", "FBS"], "volume": ["10 ml"]} | | | Incubate overnight at 37 °C, 5% CO2. | {"action": "incubate", "output": "", "temperature": ["37 °C"]} | Lack of parameter | | Wash cells twice with 2 ml PBS, add osteogenic differentiation medium. 
| {"action": "wash", "output": "", "volume": ["2 ml PBS"], "reagent": ["osteogenic differentiation medium"]} | Lack of action in single sentence | | change medium every 2 days for 10 days | {"action": "change", "output": "Alizarin red S", "time": ["every 2 days for 10 days"]} | | | At day 10, stain with Alizarin red S for 5 min. | {"action": "stain", "output": "", "reagent": ["Alizarin red S"], "time": ["5 min"]} | | --- Rebuttal 7: Title: C.2-2-2: Distinctions between human experts and the translator at the semantics level Comment: The example presents as follows — the completion of two types of parameters at the semantic level is included: for instance, determining the configuration parameter for an operation, where human experts rely on personal experimental experience; and inferring the required reagents for one step, where human experts use contextual reasoning. When the context is not sufficiently clear, human experts cannot infer the known unknowns within a single sentence. | original text | semantic level - machine result | remarks | |-----|-----|------| | Add 700 μl of buffer RWT to the RNeasy MinElute spin column. | {"action": "add", "output": "", "reagent": ["<<<buffer RWT>>>"], "volume": ["700 µl"]} | known unknown | | Discard the flow-through. | {"action": "discard", "output": "the flow-through", "volume": [""]} | | | Discard the collection tube with the flow-through. | {"action": "discard", "output": "the flow-through", "container": ["the collection tube"], "volume": [""], "reagent": ["the flow-through"]} | | | Transfer the RNeasy MinElute spin column into a new 2 ml collection tube (supplied). | {"action": "transfer", "output": "", "device": ["RNeasy MinElute"], "container": ["a new 2 ml collection tube (supplied)"], "volume": [""]} | | | Open the lid of the spin column. | {"action": "open", "output": "", "container": ["<<<spin column>>>"]} | known unknown | | Centrifuge at full speed (14,000 xg) to dry the membrane. 
| {"action": "centrifuge", "output": "", "speed": ["full speed (14,000 xg)"], "container": ["membrane"], "time": ["<<<5 min>>>"]} | unknown unknown | | Discard the collection tube with the flow-through. | {"action": "discard", "output": "the flow-through", "container": ["the collection tube"], "volume": [""], "reagent": ["the flow-through"]} | | | Transfer the RNeasy MinElute spin column into a new 1.5 ml collection tube. | {"action": "transfer", "output": "RNase-free water", "device": ["RNeasy MinElute"], "container": ["a new 1.5 ml collection tube (supplied)"], "volume": [""]} | | | Add 14 μl RNase-free water directly to the center of the spin column membrane. | {"action": "add", "output": "", "reagent": ["<<<RNase-free water>>>"], "volume": ["14 µl"]} | known unknown | | original text | semantic level - human result | remarks | |-----|-----|------| | Add 700 μl of buffer RWT to the RNeasy MinElute spin column. | {"action": "add", "output": "", "reagent": ["<<<NONE>>>"], "volume": ["700 µl"]} | known unknown | | Discard the flow-through. | {"action": "discard", "output": "the flow-through", "volume": [""]} | | | Discard the collection tube with the flow-through. | {"action": "discard", "output": "the flow-through", "container": ["the collection tube"], "volume": [""], "reagent": ["the flow-through"]} | | | Transfer the RNeasy MinElute spin column into a new 2 ml collection tube (supplied). | {"action": "transfer", "output": "", "device": ["RNeasy MinElute"], "container": ["a new 2 ml collection tube (supplied)"], "volume": [""]} | | | Open the lid of the spin column. | {"action": "open", "output": "", "container": ["<<<spin column>>>"]} | known unknown | | Centrifuge at full speed (14,000 xg) to dry the membrane. | {"action": "centrifuge", "output": "", "speed": ["full speed (14,000 xg)"], "container": ["membrane"], "time": ["<<<5 min>>>"]} | unknown unknown | | Discard the collection tube with the flow-through. 
| {"action": "discard", "output": "the flow-through", "container": ["the collection tube"], "volume": [""], "reagent": ["the flow-through"]} | | | Transfer the RNeasy MinElute spin column into a new 1.5 ml collection tube. | {"action": "transfer", "output": "RNase-free water", "device": ["RNeasy MinElute"], "container": ["a new 1.5 ml collection tube (supplied)"], "volume": [""]} | | | Add 14 μl RNase-free water directly to the center of the spin column membrane. | {"action": "add", "output": "", "reagent": ["<<<water>>>"], "volume": ["14 µl"]} | known unknown | --- Rebuttal 8: Title: C.2-2-3: Distinctions between human experts and the translator at the execution level Comment: This series of examples demonstrates how our system tracks the required capacities at each step of the protocol by contextualizing the step into the spatial dimension. | original text | execution level | key resources | |-----|-----|------| | Add 4 μl of 160 mM KMnO4 to radiolabeled DNA (40 ng, 5,000-10,000 cpm) in 40 μl total volume. | {"action": "add", "output": "reaction mixture", "reagent": ["160 mM KMnO4", "radiolabeled DNA"], "volume": ["4 μl", "40 μl"]} | "radiolabeled DNA" | | Precipitate with ethanol. | {"action": "precipitate", "output": "precipitate", "reagent": ["ethanol"]} | "reaction mixture" | | dissolve in 70 μl 10% piperidine, | {"action": "dissolve", "output": "dissolved DNA", "reagent": ["10% piperidine"], "volume": ["70 μl"]} | "precipitate" | | incubate at 90 °C for 30 min | {"action": "incubate", "output": "incubated DNA", "temperature": ["90 °C"], "time": ["30 min"]} | "dissolved DNA" | | Precipitate with ethanol | {"action": "precipitate", "output": "pellets", "reagent": ["ethanol"]} | "incubated DNA" | | Wash pellets with 70% ethanol, dry, dissolve in 5 μl electrophoresis loading buffer. 
| {"action": "rinse", "output": "non-labeled DNA", "reagent": ["70% ethanol", "electrophoresis loading buffer"], "volume": ["5 μl"]} | "pellets" | This series of examples illustrates how our system tracks the preconditions and postconditions at each step of the protocol by contextualizing the step into the temporal dimension. | original text | execution level | key resources | |-----|-----|------| | Freeze cells for 1 hour at -80°C, thaw at 37°C for 1 hour. | {"action": "freeze", "output": "DLF_R004", "reagent": ["cells"], "time": ["1 hour"], "temperature": ["-80°C"]}, {"action": "thaw", "output": "DLF_R004", "reagent": ["cells"], "time": ["1 hour"], "temperature": ["37°C"]} | "DLF_R004" | | If not using DLF_R004, lyse cells with lysis buffer. | {"action": "lyse", "output": "cell lysate", "reagent": ["lysis buffer"], "condition": ["not using DLF_R004"]} | "DLF_R004" | | Prepare serological pipette by cutting at the 3 mL mark, sealing bottom with parafilm. | {"action": "prepare", "output": "modified pipette", "device": ["serological pipette"], "modification": ["cutting at the 3 mL mark", "sealing bottom with parafilm"]} | "lysis buffer" | | Secure serological pipette to a vertical surface. | {"action": "secure", "output": "secured pipette", "device": ["serological pipette"]} | "modified pipette" | | Fill pipette with at least 2.5 mL cell lysate, measure distance from 2 mL to 1 mL mark. | {"action": "fill", "output": "filled pipette", "volume": ["at least 2.5 mL"], "reagent": ["cell lysate"]} | "secured pipette" | | Position cell phone camera to record pipette, drop a glass bead inside, repeat two more times. | {"action": "position", "output": "recorded experiment", "device": ["cell phone camera", "pipette"], "reagent": ["glass bead"]} | "filled pipette" | | remove parafilm seal. 
| {"action": "remove", "output": "", "container": ["parafilm seal"]} | "recorded experiment" | | rinse pipette | {"action": "rinse", "output": "cleaned pipette", "device": ["pipette"]} | "next sample" | | repeat with next sample to obtain triplicates | {"action": "repeat", "output": "triplicates"} | "triplicates" | --- Rebuttal 9: Title: C.2-3: Showcases on how each component within the framework handle ambiguity and incompleteness Comment: | original text | w/ stage1, w/o stage2&3 | utility of stage1 | w/ stage1&2, w/o stage3 | utility of stage2 | w/ stage1&2&3 | utility of stage3 | |-----|-----|------|------|------|------|------| | Kill most the contaminating spores that have germinated. Centrifuge the spore mixture at \<MASK> for 5 min. | {"action": "eliminate", "output": "", "reagent": ["the contaminating spores that have germinated"]}; {"action": "centrifuge", "force": ["\<MASK>"], "output": "", "reagent": ["the spore mixture"], "time": ["5 min"]}; | {"in": ["the contaminating spores that have germinated"], "out": []}; {"in": ["the spore mixture"], "out": []}; | {"action": "eliminate", "output": "", "reagent": ["the contaminating spores that have germinated"]}; {"action": "centrifuge", "force": ["1,200 x g"], "output": "", "reagent": ["the spore mixture"], "time": ["5 min"]}; | Latent semantics of unknown unknowns (force); | {"action": "eliminate", "output": "the spore mixture", "reagent": ["the contaminating spores that have germinated"]}; {"action": "centrifuge", "force": ["1,200 x g"], "output": "", "reagent": ["the spore mixture"], "time": ["5 min"]}; | Reagent: spore mixture; No specific volume provided | | Add pre-hybr soln directly to the hybridization reaction to get hybrid molecule. Incubate for \<MASK>. 
| {"action": "add", "output": "hybrid molecule", "reagent": ["pre-hybr soln", "the hybridization reaction"]}; {"action": "incubate", "output": "", "reagent": [""], "temperature": [""], "time": ["\<MASK>"]}; | {"in": ["pre-hybr soln", "the hybridization reaction"], "out": ["hybrid molecule"]}; {"in": ["hybrid molecule"], "out": []}; | {"action": "add", "output": "hybrid molecule", "reagent": ["pre-hybr soln", "the hybridization reaction"]}; {"action": "incubate", "output": "", "reagent": [""], "temperature": [""], "time": ["10 mins"]}; | Latent semantics of known unknowns (time); | {"action": "add", "output": "hybrid molecule", "reagent": ["pre-hybr soln", "the hybridization reaction"]}; {"action": "incubate", "output": "", "reagent": ["hybrid molecule"], "temperature": [""], "time": ["10 mins"]}; | Reagent: pre-hybridization solution, hybridization reaction; No specific volume provided | | Confirm positive colonies by PCR. Take fluorescence images under \<MASK>. | {"action": "confirm", "device": ["PCR"], "output": "positive colonies", "reagent": [""]}; {"action": "take", "device": ["\<MASK>"], "output": ["fluorescence images"]}; | {"in": ["PCR"], "out": ["positive colonies"]}; {"in": ["positive colonies"], "out": ["fluorescence images"]}; | {"action": "confirm", "device": ["PCR"], "output": "positive colonies", "reagent": [""]}; {"action": "take", "device": ["microscope"], "output": ["fluorescence images"]}; | Latent semantics of unknown unknowns (device); | {"action": "confirm", "device": ["PCR"], "output": "positive colonies", "reagent": ["RNAs"]}; {"action": "take", "device": ["microscope"], "output": ["fluorescence images"]}; | Reagent: RNAs; No specific volume provided | | Transfer the flow to a PLG tube. Incubate at \<MASK> for 5 minutes. Add 90 µl chloroform. 
| {"action": "transfer", "output": "", "container": ["a PLG tube"], "reagent": ["the flow"]}; {"action": "incubate", "output": "", "temperature": ["\<MASK>"], "time": ["5 minutes"]}; {"action": "add", "output": "", "volume": ["90 µl"], "reagent": ["chloroform"]}; | {"in": ["the flow"], "out": ["PLG tube"]}; {"in": ["PLG tube"], "out": []}; {"in": ["90 µl chloroform"], "out": []}; | {"action": "transfer", "output": "", "container": ["a PLG tube"], "reagent": ["the flow"]}; {"action": "incubate", "output": "", "temperature": ["room temperature"], "time": ["5 minutes"]}; {"action": "add", "output": "", "volume": ["90 µl"], "reagent": ["chloroform"]}; | Latent semantics of unknown unknowns (temperature); | {"action": "transfer", "output": "", "container": ["a PLG tube"], "reagent": ["the flow"]}; {"action": "incubate", "output": "", "temperature": ["room temperature"], "time": ["5 minutes"]}; {"action": "add", "output": "", "volume": ["90 µl"], "reagent": ["chloroform"]}; | Reagent: flow, chloroform (90 µl) | --- Rebuttal 10: Comment: | original text | w/ stage1, w/o stage2&3 | utility of stage1 | w/ stage1&2, w/o stage3 | utility of stage2 | w/ stage1&2&3 | utility of stage3 | |-----|-----|------|------|------|------|------| | Transfer the clear supernatant to \<MASK>. Incubate at 4 °C with rotation. 
| {"action": "transfer", "output": "the clear supernatant", "container": ["\<MASK>"]}; {"action": "incubate", "output": "", "temperature": ["4 °C"], "reagent":[]}; | {"in": ["the clear supernatant"], "out": ["\<MASK>"]}; {"in": ["\<MASK>"], "out": []}; | {"action": "transfer", "output": "the clear supernatant", "container": ["a new tube"]}; {"action": "incubate", "output": "", "temperature": ["4 °C"], "reagent":[]}; | Latent semantics of unknown unknowns (container); | {"action": "transfer", "output": "the clear supernatant", "container": ["a new tube"]}; {"action": "incubate", "output": "", "temperature": ["4 °C"], "reagent":["the clear supernatant"]}; | Reagent: clear supernatant; No specific volume provided | | Wash the cell pellet with 1x DPBS with 20 ng/ml murine M-CSF in a 100 mm Petri dish. Suspend in \<MASK> complete DMEM medium. | {"action": "wash", "output": "", "reagent": ["1x DPBS with 20 ng/ml murine M-CSF", "the cell pellet"], "container": ["a 100 mm Petri dish"]}; {"action": "suspend", "output": "", "volume": ["\<MASK>"], "reagent": ["complete DMEM medium"]}; | {"in": ["the cell pellet", "1x DPBS with 20 ng/ml murine M-CSF"], "out": []}; {"in": ["complete DMEM medium"], "out": ["suspended cells"]}; | {"action": "wash", "output": "", "reagent": ["1x DPBS with 20 ng/ml murine M-CSF", "the cell pellet"], "container": ["a 100 mm Petri dish"]}; {"action": "suspend", "output": "", "volume": ["15 ml"], "reagent": ["complete DMEM medium"]}; | Latent semantics of unknown unknowns (volume); | {"action": "wash", "output": "", "reagent": ["1x DPBS with 20 ng/ml murine M-CSF", "the cell pellet"], "container": ["a 100 mm Petri dish"]}; {"action": "suspend", "output": "", "volume": ["15 ml"], "reagent": ["complete DMEM medium"]}; | Reagent: 1x DPBS with 20 ng/ml murine M-CSF, complete DMEM medium (15 ml) | | Divide the supernatants (soluble fractions). ... ... 
Measure the total protein concentration in both positive cell lysates using the BCA protein assay kit according to the manufacturer’s instructions. | {"action": "divide", "output": "the supernatants (soluble fractions)"}; ... ... {"action": "measure", "concentration": ["total protein concentration"], "output": "", "reagent": ["the BCA protein assay kit", ""]}; | {"in": ["the supernatants (soluble fractions)"], "out": ["divided fractions"]}; {"in": ["positive cell lysates", "BCA protein assay kit"], "out": ["protein concentration"]}; | {"action": "divide", "output": "the supernatants (soluble fractions)"}; ... ... {"action": "measure", "concentration": ["total protein concentration"], "output": "", "reagent": ["the BCA protein assay kit", ""]}; | Latent semantics of unknown unknowns (concentration); | {"action": "divide", "output": "the supernatants (soluble fractions)"}; ... ... {"action": "measure", "concentration": ["total protein concentration"], "output": "", "reagent": ["the BCA protein assay kit", "the supernatants (soluble fractions)"]}; | Reagent: supernatants (soluble fractions), BCA protein assay kit; No specific volume provided | | Add 2.6 ml \<MASK>. Incubate cells at 37°C for 24-48 h. 
| {"action": "add", "output": "", "volume": ["2.6 ml"], "reagent": ["\<MASK>"]}; {"action": "incubate", "output": "", "reagent": ["cells"], "temperature": ["37°C"], "time": ["24-48 h"]}; | {"in": ["\<MASK>"], "out": ["treated cells"]}; {"in": ["treated cells"], "out": []}; | {"action": "add", "output": "", "volume": ["2.6 ml"], "reagent": ["fresh culture medium"]}; {"action": "incubate", "output": "", "reagent": ["cells"], "temperature": ["37°C"], "time": ["24-48 h"]}; | Latent semantics of unknown unknowns (reagent); | {"action": "add", "output": "cells", "volume": ["2.6 ml"], "reagent": ["fresh culture medium"]}; {"action": "incubate", "output": "", "reagent": ["cells"], "temperature": ["37°C"], "time": ["24-48 h"]}; | Reagent: fresh culture medium (2.6 ml) | Title: C.2-3 continued
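Abstracting from the staged examples in the tables above, the pipeline's record format and checks can be sketched in plain Python over the JSON-style step records. This is a hypothetical simplification for illustration only: the function names and the `defaults` lookup are not the paper's implementation.

```python
# Hypothetical sketch of the staged step records shown in the tables
# above. Stage 2 fills latent/<MASK> parameters; stage 3 checks the
# reagent flow across steps.

def fill_unknowns(step, defaults):
    """Stage 2 sketch: replace empty or <MASK> parameter values with
    values inferred elsewhere (e.g., from similar protocol steps)."""
    filled = dict(step)
    for key, value in step.items():
        if value in ("", "<MASK>", [], [""], ["<MASK>"]) and key in defaults:
            filled[key] = defaults[key]
    return filled

def check_reagent_flow(steps, raw_inputs):
    """Stage 3 sketch: a step may only consume reagents that are raw
    inputs or outputs of earlier steps; returns (step index, reagent)
    pairs for every violation."""
    available = set(raw_inputs)
    violations = []
    for i, step in enumerate(steps):
        for reagent in step.get("reagent", []):
            if reagent and reagent not in available:
                violations.append((i, reagent))
        if step.get("output"):
            available.add(step["output"])
    return violations

# Example mirroring the hybridization rows above: the incubation step
# consumes "hybrid molecule", which the preceding step produces.
steps = [
    {"action": "add", "output": "hybrid molecule",
     "reagent": ["pre-hybr soln", "the hybridization reaction"]},
    {"action": "incubate", "output": "",
     "reagent": ["hybrid molecule"], "time": ["<MASK>"]},
]
steps[1] = fill_unknowns(steps[1], {"time": ["10 mins"]})
raw = {"pre-hybr soln", "the hybridization reaction"}
assert check_reagent_flow(steps, raw) == []  # no flow violations
```

A real checker would also track volumes and container capacities, as in the execution-level tables; this sketch only captures the availability ordering.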
Summary: The work identifies the problem of translating natural language instructions for scientific experiments into machine-usable formats and frames it as a program synthesis problem. The proposed approach uses language models (along with other parsing techniques) to extract a structured sequence of instructions from natural language inputs, which are then verified using an execution model. This approach is compared against expert translations as well as constraint decoding and prompt engineering baselines. Strengths: The paper identifies an interesting problem in the AI for science domain. It shows that the decomposition of the problem into syntax and semantics can be mapped to operations and reagent flow, which is a useful insight. The paper further introduces useful formalism in the form of the PDG and algorithms to synthesise programs in DSLs. The paper evaluates on multiple datasets against reasonable benchmarks, showing the superiority of the proposed approach. Weaknesses: The use of BLEU and ROUGE scores as metrics to compare expert and machine-generated instructions is not properly justified. It may in fact be problematic considering that both metrics measure textual similarity; however, instructions that look similar could have very different semantics (for example: "... pour hot water ..." vs "... pour cold water ..."). The paper does not provide any insight into what each part of the system contributes to the final effectiveness. It would be especially valuable to understand (1) how much language model-based parsing makes a difference and (2) how the constraints imposed by the synthesis prevent the model from producing incorrect solutions (vs pure prompt engineering baselines) while also providing enough flexibility to do better than standard constraint decoding. Technical Quality: 3 Clarity: 3 Questions for Authors: Why is BLEU/ROUGE used as a metric? Can a more semantically aligned mode of comparison be used here?
Qualitatively, how does the approach behave differently from the baselines? What does each component of the system contribute? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The evaluation presented in the paper considers datasets where the input is descriptions of experiments in standardised scientific terminology / format. While this demonstrates the system's usefulness in translating such inputs, it is unclear how well it may generalise to inputs following looser terminology / formatting. Since the tool aims to save time for domain experts, it should be demonstrated how much easier it is to write down instructions in this format as compared to the machine-usable one directly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Why is BLEU/ROUGE used as a metric? Can a more semantically aligned mode of comparison be used here?

This is a very good question. The same concern was considered during the development of our evaluation methodology. Direct comparisons across entire sentences under BLEU/ROUGE scores would indeed pose a problem, as the reviewer mentioned. Therefore, to circumvent this issue, we convert all results into a standardized JSON-style format for data representation, and comparisons are made between key-value pairs rather than entire sentences, effectively resolving the metric concern. Let us consider the example mentioned by the reviewer: we represent the two sentences "... pour hot water ..." and "... pour cold water ..." in the following JSON-style format.

```
{"action": "pour", "reagent": ["water"], "temperature": ["hot"]}
{"action": "pour", "reagent": ["water"], "temperature": ["cold"]}
```

The comparison between the two sentences is then transformed into a comparison between two JSON code blocks. We calculate the similarity score cumulatively based on the similarity between the values of matched pairs of keys. For instance, for the key "temperature", the values "hot" and "cold" yield a low similarity score under the ROUGE, BLEU, and even the Exact Match metrics. As "temperature" is one of the major keys within configuration parameters, a high penalty in this dimension significantly affects the cumulative similarity score. With this fine-grained comparison metric, we can comprehensively track the distinctions and commonalities between results without losing expressivity regarding the quantities. We also acknowledge that there are advanced evaluation metrics, especially in recent works where LLMs are leveraged as external judges and achieve considerable performance in general testing cases.
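To make the key-wise comparison described above concrete, it can be sketched in a few lines of Python. The token-overlap `pair_similarity` below is a simplified stand-in for the per-value BLEU/ROUGE scoring, and all names are hypothetical, not the paper's actual evaluation code.

```python
# Sketch (hypothetical) of key-wise record comparison: instead of
# scoring whole sentences, score matched key-value pairs of the
# JSON-style records and accumulate, so a mismatch on a major key
# such as "temperature" is penalized directly.

def pair_similarity(ref, hyp):
    """Unigram Jaccard overlap as a stand-in for per-value BLEU/ROUGE."""
    ref_tokens, hyp_tokens = set(ref.split()), set(hyp.split())
    if not ref_tokens or not hyp_tokens:
        return 1.0 if ref_tokens == hyp_tokens else 0.0
    return len(ref_tokens & hyp_tokens) / len(ref_tokens | hyp_tokens)

def record_similarity(ref_rec, hyp_rec):
    """Average per-key similarity over the union of keys."""
    keys = set(ref_rec) | set(hyp_rec)
    scores = [pair_similarity(str(ref_rec.get(k, "")), str(hyp_rec.get(k, "")))
              for k in keys]
    return sum(scores) / len(scores)

hot = {"action": "pour", "reagent": "water", "temperature": "hot"}
cold = {"action": "pour", "reagent": "water", "temperature": "cold"}
record_similarity(hot, cold)  # 2/3: only "temperature" mismatches
```

Whole-sentence scoring would rate "pour hot water" and "pour cold water" as highly similar; the record-level score immediately exposes the "temperature" disagreement.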
Our choice of "less advanced" metrics is driven by the intention to focus specifically on domain-specific knowledge, which constitutes the primary scope of this paper and may be relatively sparse in general LLMs. Nonetheless, the exploration of more sophisticated evaluation metrics represents a promising avenue for future research, and we appreciate the reviewer's perceptive recommendation in this regard. > Since the tool aims to save time for domain experts, it should be demonstrated how much it is easier to write down instructions in this format as compared to the machine usable one directly. Thanks for the comment. The scope of our proposed framework, in the current stage, is to automatically translate human-oriented protocols into formats suitable for machine execution, rather than helping domain experts creating new ones in an easier format. The goal is to transfer knowledge in a conventional lab into a format suitable for self-driving labs. Therefore, the human-oriented protocols used in this translation are existing protocols previously designed for human operators in the conventional labs, thus coming with no extra cost. In contrast to conventional protocol translation processes, which require domain experts to manually develop rules and functions based on specialized knowledge, our proposed automatic translator attempts to eliminate the need for such expert intervention. Domain experts are not involved in either the development or the execution stages of our translator. Therefore, the evaluations in our paper mainly focus on translation performance rather than generation efficiency. We appreciate the reviewer's insightful suggestion and will make revisions accordingly for better clarification. > While this demonstrates the system's usefulness in translating such inputs, it is unclear how well it may generalise to inputs following looser terminology / formatting. Thanks for the comment. 
The general applicability of our proposed framework beyond experimental sciences can indeed be a common concern. The core value of translating natural-language-based protocols into formats suitable for machine execution lies substantially in facilitating experiments in self-driving labs, thereby accelerating scientific discovery. Experimental protocols come with unique properties and challenges, such as the fine-grained incorporation of domain-specific knowledge, the non-trivial dependency topology between operations, the long-horizon lifecycles of intermediate products, and the necessity for precise execution without run-time errors. These factors shape the scope of our research problem, emphasising the need to handle protocols with stringent terminology and formatting. Despite the specific scope of this paper, we are open to exploring the potential for generalizing our framework to other domains with similar challenges to those found in scientific experiments, such as cooking (see C.1-1).

> Qualitatively, how does the approach behave differently to the baselines? What does each component of the system contribute?

Thanks for the question. The rationales for the components within our proposed framework are grounded in both empirical and theoretical considerations. We develop a three-stage framework that integrates cognitive insights from human experts with approaches from program synthesis, automata construction, and counterfactual analysis. At the syntax level, we synthesize the operation dependence graph to transform natural-language-based protocols into structured representations, thereby making explicit the operation-condition mappings and the control flows. At the semantics level, we analyze the reagent flow graph to reconstruct the complete lifecycles of intermediate products, addressing the latent, missing, or omitted properties and values.
At the execution level, we contextualize both the operation dependence graph and the reagent flow graph within spatial and temporal dynamics, resulting in the protocol dependence graph. This graph supports counterfactual reasoning to detect potential conflicts or shortages of execution resources and to identify inappropriate combinations of operations in execution sequences (see C.1-2).

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed response. I can see how using BLEU/ROUGE over JSON structured outputs can alleviate the concerns mentioned in my review. The examples of outputs at each stage are also appreciated. My major remaining concern is with regards to the relevance of the paper to the ML community. As mentioned in my review, including a more detailed discussion of (1) how much language model-based parsing makes a difference and (2) how the constraints imposed by the synthesis prevent the model from incorrect solutions (vs pure prompt engineering baselines) but also provide enough flexibility to do better than standard constrained decoding might make the paper of interest to the broader ML community.

---

Rebuttal 2: Comment: Imagine a self-driving kitchen that automatically prepares all ingredients and executes all procedures for cooking a meal according to natural-language-based recipes. Such self-driving kitchens would also benefit significantly from translating human-oriented recipes into formats suitable for machine execution. In the following, we present a running example of such a translation, adapted from [1]. The protocol after pre-processing is as follows.
```
Pasta Bolognese
Yield: 2 plates

Ingredients:
- 8 [ounces] white fresh {pasta}
- 1 [floz] olive {oil}
- 1/4 [ounce] {garlic}; minced
- 4 [ounces] {onions}; chopped
- 4 [ounces] shallow fried {beef}; minced
- 1 - 1 1/2 [ounce] lean prepared {bacon}
- 1/3 [cup] red {wine}
- 150 [gram] raw {carrots}; thinly sliced
- 2/3 [ounce] concentrated {tomato puree}
- 4 [ounces] red {sweet pepper}; cut julienne
- 1 [ounce] {parmesan} cheese

Instructions:
Add the @oil@ to a large saucepan, heat to <300 F>, and saute the @onions@. After |2 minutes|, add the @garlic@. Keep on medium to high heat, and don't stir. After |2 minutes| more, add the @beef@.
Fry the @bacon@ in a separate pan, on high heat. Remove liquified fat when done.
Boil @pasta@ in a medium pan, until al dente (~|8 minutes|). Drain when done.
Once the @beef@ is done, add the @carrots@, @sweet pepper@ and @tomato puree@. Slowly add the @wine@ as well, to not lower the temperature. Let it simmer (but not boil) for |5-10 minutes|.
```
Given the protocol as the input of our framework, the resulting DSL program is as follows.
```
add(slot = "oil", target = "large saucepan", container = plate_1, emit = mixture_1);
heat(target = mixture_1, temperature = 300F, container = plate_1, postcond = stop());
saute(target = "onions", container = plate_2, duration = 2mins);
add(slot = "garlic", target = mixture_1, container = plate_1, emit = mixture_2);
heat(target = mixture_2, temperature = 325F, container = plate_1, duration = 2mins);
add(slot = "beef", target = mixture_2, container = plate_1, emit = mixture_3);
heat(target = mixture_2, temperature = 325F, container = plate_1, postcond = check_done(target = "beef"));
fry(target = "bacon", temperature = 350F, container = pan_1, postcond = remove(target = "liquified fat"));
boil(target = "pasta", temperature = 212F, container = pan_2, duration = 8mins, postcond = drain());
add(precond = check_done(target = "beef"), slot = ["carrots", "sweet pepper", "tomato puree"], target = mixture_3, container = plate_1, emit = mixture_4);
add(slot = "wine", target = mixture_4, container = plate_1, pace = 1mL/s);
simmer(target = mixture_4, temperature = 211F, duration = 7.5mins);
```
In this example, we observe that the natural-language-based recipe possesses ambiguities and omissions. Our translation framework addresses these challenges by structuring the recipe at the syntax level, completing the latent information at the semantics level, and linking the programs with necessary resources, such as the usage of plates, at the execution level.

References:
[1] Roorda, Auke. "Corel: A DSL for Cooking Recipes." Diss. 2021.

Title: C.1-1: Running example on translating cuisine recipe for execution by self-driving kitchen

---

Rebuttal 3: Title: C.1-2: The behaviors of the components within our proposed three-stage framework Comment: Here we provide a series of case studies to illustrate the distinctions between the behaviors of the components within our proposed three-stage framework and those of the baselines qualitatively.
| original text | w/ stage1, w/o stage2&3 | utility of stage1 | w/ stage1&2, w/o stage3 | utility of stage2 | w/ stage1&2&3 | utility of stage3 |
|-----|-----|------|------|------|------|------|
| Kill most the contaminating spores that have germinated. Centrifuge the spore mixture at \<MASK> for 5 min. | {"action": "eliminate", "output": "", "reagent": ["the contaminating spores that have germinated"]}; {"action": "centrifuge", "force": ["\<MASK>"], "output": "", "reagent": ["the spore mixture"], "time": ["5 min"]}; | {"in": ["the contaminating spores that have germinated"], "out": []}; {"in": ["the spore mixture"], "out": []}; | {"action": "eliminate", "output": "", "reagent": ["the contaminating spores that have germinated"]}; {"action": "centrifuge", "force": ["1,200 x g"], "output": "", "reagent": ["the spore mixture"], "time": ["5 min"]}; | Latent semantics of unknown unknowns (force); | {"action": "eliminate", "output": "the spore mixture", "reagent": ["the contaminating spores that have germinated"]}; {"action": "centrifuge", "force": ["1,200 x g"], "output": "", "reagent": ["the spore mixture"], "time": ["5 min"]}; | Reagent: spore mixture; No specific volume provided |
| Add pre-hybr soln directly to the hybridization reaction to get hybrid molecule. Incubate for \<MASK>. | {"action": "add", "output": "hybrid molecule", "reagent": ["pre-hybr soln", "the hybridization reaction"]}; {"action": "incubate", "output": "", "reagent": [""], "temperature": [""], "time": ["\<MASK>"]}; | {"in": ["pre-hybr soln", "the hybridization reaction"], "out": ["hybrid molecule"]}; {"in": ["hybrid molecule"], "out": []}; | {"action": "add", "output": "hybrid molecule", "reagent": ["pre-hybr soln", "the hybridization reaction"]}; {"action": "incubate", "output": "", "reagent": [""], "temperature": [""], "time": ["10 mins"]}; | Latent semantics of known unknowns (time); | {"action": "add", "output": "hybrid molecule", "reagent": ["pre-hybr soln", "the hybridization reaction"]}; {"action": "incubate", "output": "", "reagent": ["hybrid molecule"], "temperature": [""], "time": ["10 mins"]}; | Reagent: pre-hybridization solution, hybridization reaction; No specific volume provided |
| Confirm positive colonies by PCR. Take fluorescence images under \<MASK>. | {"action": "confirm", "device": ["PCR"], "output": "positive colonies", "reagent": [""]}; {"action": "take", "device": ["\<MASK>"], "output": ["fluorescence images"]}; | {"in": ["PCR"], "out": ["positive colonies"]}; {"in": ["positive colonies"], "out": ["fluorescence images"]}; | {"action": "confirm", "device": ["PCR"], "output": "positive colonies", "reagent": [""]}; {"action": "take", "device": ["microscope"], "output": ["fluorescence images"]}; | Latent semantics of unknown unknowns (device); | {"action": "confirm", "device": ["PCR"], "output": "positive colonies", "reagent": ["RNAs"]}; {"action": "take", "device": ["microscope"], "output": ["fluorescence images"]}; | Reagent: RNAs; No specific volume provided |
| Transfer the flow to a PLG tube. Incubate at \<MASK> for 5 minutes. Add 90 µl chloroform. | {"action": "transfer", "output": "", "container": ["a PLG tube"], "reagent": ["the flow"]}; {"action": "incubate", "output": "", "temperature": ["\<MASK>"], "time": ["5 minutes"]}; {"action": "add", "output": "", "volume": ["90 µl"], "reagent": ["chloroform"]}; | {"in": ["the flow"], "out": ["PLG tube"]}; {"in": ["PLG tube"], "out": []}; {"in": ["90 µl chloroform"], "out": []}; | {"action": "transfer", "output": "", "container": ["a PLG tube"], "reagent": ["the flow"]}; {"action": "incubate", "output": "", "temperature": ["room temperature"], "time": ["5 minutes"]}; {"action": "add", "output": "", "volume": ["90 µl"], "reagent": ["chloroform"]}; | Latent semantics of unknown unknowns (temperature); | {"action": "transfer", "output": "", "container": ["a PLG tube"], "reagent": ["the flow"]}; {"action": "incubate", "output": "", "temperature": ["room temperature"], "time": ["5 minutes"]}; {"action": "add", "output": "", "volume": ["90 µl"], "reagent": ["chloroform"]}; | Reagent: flow, chloroform (90 µl) |

---

Rebuttal 4: Title: C.1-2 continued Comment:

| original text | w/ stage1, w/o stage2&3 | utility of stage1 | w/ stage1&2, w/o stage3 | utility of stage2 | w/ stage1&2&3 | utility of stage3 |
|-----|-----|------|------|------|------|------|
| Transfer the clear supernatant to \<MASK>. Incubate at 4 °C with rotation. | {"action": "transfer", "output": "the clear supernatant", "container": ["\<MASK>"]}; {"action": "incubate", "output": "", "temperature": ["4 °C"], "reagent":[]}; | {"in": ["the clear supernatant"], "out": ["\<MASK>"]}; {"in": ["\<MASK>"], "out": []}; | {"action": "transfer", "output": "the clear supernatant", "container": ["a new tube"]}; {"action": "incubate", "output": "", "temperature": ["4 °C"], "reagent":[]}; | Latent semantics of unknown unknowns (container); | {"action": "transfer", "output": "the clear supernatant", "container": ["a new tube"]}; {"action": "incubate", "output": "", "temperature": ["4 °C"], "reagent":["the clear supernatant"]}; | Reagent: clear supernatant; No specific volume provided |
| Wash the cell pellet with 1x DPBS with 20 ng/ml murine M-CSF in a 100 mm Petri dish. Suspend in \<MASK> complete DMEM medium. | {"action": "wash", "output": "", "reagent": ["1x DPBS with 20 ng/ml murine M-CSF", "the cell pellet"], "container": ["a 100 mm Petri dish"]}; {"action": "suspend", "output": "", "volume": ["\<MASK>"], "reagent": ["complete DMEM medium"]}; | {"in": ["the cell pellet", "1x DPBS with 20 ng/ml murine M-CSF"], "out": []}; {"in": ["complete DMEM medium"], "out": ["suspended cells"]}; | {"action": "wash", "output": "", "reagent": ["1x DPBS with 20 ng/ml murine M-CSF", "the cell pellet"], "container": ["a 100 mm Petri dish"]}; {"action": "suspend", "output": "", "volume": ["15 ml"], "reagent": ["complete DMEM medium"]}; | Latent semantics of unknown unknowns (volume); | {"action": "wash", "output": "", "reagent": ["1x DPBS with 20 ng/ml murine M-CSF", "the cell pellet"], "container": ["a 100 mm Petri dish"]}; {"action": "suspend", "output": "", "volume": ["15 ml"], "reagent": ["complete DMEM medium"]}; | Reagent: 1x DPBS with 20 ng/ml murine M-CSF, complete DMEM medium (15 ml) |
| Divide the supernatants (soluble fractions). ... ... Measure the total protein concentration in both positive cell lysates using the BCA protein assay kit according to the manufacturer's instructions. | {"action": "divide", "output": "the supernatants (soluble fractions)"}; ... ... {"action": "measure", "concentration": ["total protein concentration"], "output": "", "reagent": ["the BCA protein assay kit", ""]}; | {"in": ["the supernatants (soluble fractions)"], "out": ["divided fractions"]}; {"in": ["positive cell lysates", "BCA protein assay kit"], "out": ["protein concentration"]}; | {"action": "divide", "output": "the supernatants (soluble fractions)"}; ... ... {"action": "measure", "concentration": ["total protein concentration"], "output": "", "reagent": ["the BCA protein assay kit", ""]}; | Latent semantics of unknown unknowns (concentration); | {"action": "divide", "output": "the supernatants (soluble fractions)"}; ... ... {"action": "measure", "concentration": ["total protein concentration"], "output": "", "reagent": ["the BCA protein assay kit", "the supernatants (soluble fractions)"]}; | Reagent: supernatants (soluble fractions), BCA protein assay kit; No specific volume provided |
| Add 2.6 ml \<MASK>. Incubate cells at 37°C for 24-48 h. | {"action": "add", "output": "", "volume": ["2.6 ml"], "reagent": ["\<MASK>"]}; {"action": "incubate", "output": "", "reagent": ["cells"], "temperature": ["37°C"], "time": ["24-48 h"]}; | {"in": ["\<MASK>"], "out": ["treated cells"]}; {"in": ["treated cells"], "out": []}; | {"action": "add", "output": "", "volume": ["2.6 ml"], "reagent": ["fresh culture medium"]}; {"action": "incubate", "output": "", "reagent": ["cells"], "temperature": ["37°C"], "time": ["24-48 h"]}; | Latent semantics of unknown unknowns (reagent); | {"action": "add", "output": "cells", "volume": ["2.6 ml"], "reagent": ["fresh culture medium"]}; {"action": "incubate", "output": "", "reagent": ["cells"], "temperature": ["37°C"], "time": ["24-48 h"]}; | Reagent: fresh culture medium (2.6 ml) |
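As a toy illustration of the dependence-graph idea behind these case studies — operations consume and emit reagents, and an edge links an operation to the operation that produced its input — a minimal sketch (the record format and field names are our own simplification, not the paper's actual data model):

```python
def build_edges(ops):
    """ops: list of dicts with 'action', 'in' (consumed), 'out' (emitted).

    Returns (i, j) edges meaning: operation j depends on operation i,
    because j consumes a reagent that i emitted earlier in the protocol.
    """
    producers = {}  # reagent name -> index of the op that last emitted it
    edges = []
    for j, op in enumerate(ops):
        for reagent in op["in"]:
            if reagent in producers:
                edges.append((producers[reagent], j))
        for reagent in op["out"]:
            producers[reagent] = j
    return edges

# Toy protocol mirroring the "pre-hybr soln" row above.
protocol = [
    {"action": "add",      "in": ["pre-hybr soln"],   "out": ["hybrid molecule"]},
    {"action": "incubate", "in": ["hybrid molecule"], "out": []},
]
print(build_edges(protocol))  # [(0, 1)]: incubate depends on add
```

The execution-level stage would then decorate such a graph with containers and timing before the counterfactual resource checks.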
Rebuttal 1: Rebuttal: We thank all reviewers for their time and valuable comments. The feedback is both substantial and helpful for improving our paper. In this work, we systematically study the problem of translating experimental protocols written for humans into those suitable for self-driving laboratories. Accordingly, we propose a three-stage workflow that incrementally constructs Protocol Dependence Graphs at the syntax, semantics, and execution levels. Our qualitative and quantitative results underscore the framework's potential to accelerate and democratize the process of scientific discovery. We would like to thank the reviewers for acknowledging our work to be:

1. The paper identifies "an interesting problem in the AI for science Domain" (reviewer #RqLD), "is well-motivated and of great significance in advancing AI applications in scientific discovery" (reviewer #uEvE), and addresses "a critical gap in the transition from AI-driven discoveries to empirical experimentation" (reviewer #AwgV).
2. The proposed method, which is a "novel, automated approach to protocol translation for self-driving laboratories" (reviewer #AwgV), provides "a useful insight" regarding "the decomposition of the problem into syntax and semantics" (reviewer #RqLD), and "further introduces useful formalism in the form of the PDG and algorithms to synthesise programs in DSLs" (reviewer #RqLD).
3. The evaluations are conducted on "multiple datasets against reasonable benchmarks" (reviewer #RqLD), showing that "the proposed approach outperforms pure LLM-based synthesis and matches the manual translation by human experimenters" (reviewer #uEvE).
4. The paper is "well-structured and clearly written" (reviewer #AwgV), and "is easy to follow, although the required background knowledge is non-trivial" (reviewer #uEvE).

Based on the reviewers' comments, we made revisions including:

1. Clarifying certain concepts to enhance the paper's accessibility for readers with a background outside experimental sciences.
2. Demonstrating running examples of the behaviors of different components within our proposed three-stage framework in detail to make the paper more comprehensive.
3. Conducting additional analyses and discussions regarding the computational complexity, safety, and theoretical foundation of our proposed framework to make the paper more rigorous and self-consistent.

In the following, we address specific questions for each reviewer.
NeurIPS_2024_submissions_huggingface
2,024
Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning
Accept (poster)
Summary: This paper implements multimodal emotion recognition and reasoning by fine-tuning the LLaMA model with instructions. It is trained on a large-scale dataset, fine-tuned, and tested on three datasets. Strengths: The global, temporal and local features of the video modality are considered, and LoRA fine-tuning (all tokens), Prompt fine-tuning (text modality) and supervised fine-tuning (linear layer) are used at the same time. Weaknesses: The innovation is limited and only existing methods are used. The key problems are not solved. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. The complex interaction relationship between modalities is not considered. 2. The process is complicated, and the computing resource requirements are high. 3. The effect of Prompt Tuning and Instruction Tuning is highly dependent on the design of prompt words and instructions. If the prompt words or instructions are not accurate or comprehensive enough, it may impact the performance of the model. 4. The lack of reproducibility: the anonymous Github link provided by the author is empty. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review. We've addressed your points carefully and will incorporate these clarifications in our revision.

**Q1: The complex interaction relationship between modalities is not considered.**

We appreciate your attention to the interaction between modalities. However, we respectfully suggest that the premise of this question may not fully align with our approach and findings:

1. Early works in multimodal large language models (MLLMs) often used the Q-Former proposed in BLIP-2, which involved complex two-stage pre-training and could lead to information loss.
2. Direct feature mapping: Emotion-LLaMA directly maps visual and audio features into the textual semantic space. This approach preserves unique characteristics from each modality (e.g., facial micro-expressions, vocal tones) while allowing for their interaction in the unified semantic space.
3. Experimental validation: Our experiments in the MER2024 competition demonstrate the effectiveness of this approach:

| Modality Combination | Pre-alignment F1 | Post-alignment F1 |
|----------------------|------------------|-------------------|
| Audio only (HuBERT) | 72.77 | 66.18 |
| Video only (CLIP) | 66.73 | 69.07 |
| Text only (Baichuan) | 54.29 | 56.85 |
| All modalities | 80.91 | 73.01 |

These results suggest that complex alignment methods may benefit weaker modalities but can also lead to information loss (a 7.9% decrease in fusion scores post-alignment).

**Q2: The process is complicated, and the computing resource requirements are high.**

We appreciate your concern about computational efficiency. However, we believe our method is relatively resource-efficient:

1. Parameter-efficient tuning: We use LoRA, resulting in only 34 million trainable parameters (0.495% of the total).
2. Hardware requirements: All work was completed using only 4 A100 (40G) GPUs, which is modest compared to many LLM pre-training efforts.
3.
Inference efficiency: Emotion-LLaMA requires only a single A10 or A100 GPU for rapid inference. **Q3: Dependence on Prompt and Instruction Design** A3: Designing effective prompts and instructions is indeed crucial for harnessing the powerful reasoning abilities of LLMs. In this work, we have made significant contributions by designing various training tasks and different instructions for Emotion-LLaMA to enhance its robustness and generalization capabilities. We continuously refine our prompts and instructions based on iterative testing and validation to ensure they are accurate and comprehensive. The demo in the anonymous repository showcases Emotion-LLaMA's excellent performance, demonstrating that it can provide correct answers regardless of whether the instructions are previously learned or new, highlighting the robustness and innovation of our design. **Q4: The lack of reproducibility: the anonymous GitHub link provided by the author is empty.** A4: We apologize for the inconvenience you experienced. This issue may have been due to a temporary network problem. We have verified that the anonymous repository is accessible and contains all necessary files for reproducibility. Our Emotion-LLaMA has been successfully reproduced by other researchers, who have praised its performance in the demo. We encourage you to access the repository again to review the open-source MERR dataset, the training process, and the code for Emotion-LLaMA. We are committed to ensuring that all materials are available and easily accessible for reproducibility. **Q5: The innovation is limited, and only existing methods are used. The key problems are not solved.** A5: We respectfully disagree with this assessment. Our work addresses critical challenges in multimodal emotion recognition and reasoning: 1. **Data scarcity**: Existing datasets predominantly consist of image-text pairs, lacking dynamic expression descriptions and audio components. 
Manual annotation of real-world multimodal samples is prohibitively expensive. To address this, we introduced the MERR dataset, comprising 28,618 coarse-grained and 4,487 fine-grained automatically annotated samples across diverse emotion categories. This dataset significantly advances the field by providing rich, multimodal emotion-related instruction-following data. 2. **Real-world applicability**: Existing MLLMs struggle with accurate emotion understanding in real-world scenarios. Emotion-LLaMA, instruction-tuned on the MERR dataset, has demonstrated excellent performance in both controlled and real-world conditions. Our first-place win in the MER2024 competition's MER-Noise track underscores its robustness in noisy, real-world environments. 3. **Generalization**: In the MER-OV track of the [MER2024 competition][1], Emotion-LLaMA significantly outperformed other MLLMs, including GPT-4V, by improving average accuracy and recall by 8.52%. This showcases its superior generalization capabilities across various emotion understanding tasks. 4. **Comprehensive evaluation**: Extensive experimental results demonstrate Emotion-LLaMA's outstanding multimodal emotion understanding capabilities across multiple benchmarks and real-world scenarios. 5. **Reproducibility and accessibility**: We have open-sourced both the MERR dataset and the code for Emotion-LLaMA, and provided an online demo in our [anonymous repository][2]. This facilitates further research and practical applications in the field of multimodal emotion recognition and reasoning. While we acknowledge that there is always room for improvement, we believe these contributions offer valuable insights into addressing challenges in multimodal emotion understanding. We welcome further discussion on how we can enhance our approach to better solve key problems in the field.
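As an illustration of the direct feature-mapping idea from Q1 — linear projections carry each modality into the textual embedding space before the tokens are spliced into one sequence — a minimal numpy sketch (all dimensions, token counts, and the projection setup are hypothetical, not Emotion-LLaMA's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_projector(in_dim, llm_dim):
    """A single linear layer mapping modality features into the text space."""
    W = rng.standard_normal((in_dim, llm_dim)) * 0.02
    return lambda feats: feats @ W

audio_proj = make_projector(1024, 4096)   # e.g. HuBERT-sized features
visual_proj = make_projector(768, 4096)   # e.g. ViT-sized features

audio = rng.standard_normal((8, 1024))    # 8 audio tokens
visual = rng.standard_normal((16, 768))   # 16 visual frame tokens
text = rng.standard_normal((32, 4096))    # 32 text token embeddings

# Project each modality into the textual semantic space, then splice all
# tokens into one sequence, keeping per-modality tokens rather than
# fusing them early (the point of Q1's "direct feature mapping").
sequence = np.concatenate([audio_proj(audio), visual_proj(visual), text], axis=0)
print(sequence.shape)  # (56, 4096)
```

Because each modality keeps its own token positions, the LLM can attend across facial, vocal, and textual cues without an intermediate alignment stage.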
[1]: https://zeroqiaoba.github.io/MER2024-website/ [2]: https://anonymous.4open.science/r/Emotion-LLaMA/ --- Rebuttal 2: Title: Response to Reviewer 646t Comment: Dear Reviewer, Thank you for your insights. We have made the following revisions to address your concerns: 1. We clarified the interaction between modalities in our approach, demonstrating the effectiveness of our method. 2. We discussed the computational efficiency of our approach and how we ensured that the process is resource-efficient. 3. We ensured the reproducibility of our work by verifying that all necessary files are accessible in our anonymous repository. 4. We highlighted the innovative aspects of our methodology, addressing the challenges in multimodal emotion recognition. We hope these updates address your concerns. We appreciate any further feedback you might have. Best regards, The Authors --- Rebuttal 3: Title: Follow-Up on Revisions and Inquiry on Additional Concerns Comment: Dear Reviewer 646t, Thank you for your thorough review and for raising the score, which we greatly appreciate. As we approach the rebuttal deadline, we wanted to check if there are any remaining concerns or questions that we could address. Your feedback has been invaluable, and we are committed to making any further necessary improvements. Please let us know if there is anything else we should consider. Best regards, The Authors --- Rebuttal 4: Title: Follow-Up on Revisions and Interaction Comment: Dear Reviewer 646t, Thank you for your thorough review and for raising the score, which we greatly appreciate. We have invested a significant amount of effort into this work, and your feedback has been instrumental in guiding our revisions. As we approach the rebuttal deadline, we wanted to ensure that all your concerns have been adequately addressed. We would also like to invite you to try out our demo, available in the anonymous repository. 
Our work has already attracted considerable attention, leading to a high number of visits to the demo, which has significantly increased the maintenance costs. Despite this, we have kept it running during the review process to provide valuable insights into the practical application of our methods. If there are any remaining questions or additional feedback you could provide, we would be more than happy to address them. Best regards, The Authors
Summary: The paper introduces a multimodal large language model, named Emotion-LLaMA for emotional state understanding. The authors use open-source tools to collect and annotate a dataset, named MERR for model pre-training. Then they perform instruction-tuning on downstream datasets for emotion recognition and emotion reasoning. Extensive experiments are conducted and demonstrate the promising performance of the proposed approach. Strengths: 1. Extensive experiments are conducted and the method achieves SOTA performance on various datasets for emotion recognition and reasoning. 2. Straightforward visualizations are presented. Weaknesses: 1. The paper claims the MERR dataset as a core contribution. However, there is no systematic evaluation of the label quality of the dataset. I understand that pre-training on this dataset improves the downstream performance and thus its validity can be shown to some extent. However, this may be due to the diversity of the unlabeled data instead of the automatically generated labels. 2. In Table 3, with audio and video inputs, Emotion-LLaMA's performance is worse than or close to the baseline (VAT). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How do you pre-train Emotion-LLaMA? Is it supervised learning with coarse-grained emotion labels? 2. Do you plan to release the MERR dataset? 3. Table 2 fine-tuning, why does Emotion-LLaMA get worse performance on Disgust than MAE-DFER and VideoMAE? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to weaknesses Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful review. Your feedback has been valuable in improving our work. We've addressed each point below and will incorporate these enhancements in our revision.

**Q1: How do you pre-train Emotion-LLaMA? Is it supervised learning with coarse-grained emotion labels?**

A1: Yes, we use automatically generated labels and descriptions as answers for supervised learning. Instruction tuning of Emotion-LLaMA involves two main stages, utilizing both coarse-grained and fine-grained data from our MERR dataset. Here's a detailed breakdown of our training process:

1. **Data Preparation:**
   - Coarse-grained data: 28,618 samples
   - Fine-grained data: 4,487 samples
2. **Feature Extraction:**
   - Audio features: Extracted using HuBERT-Chinese
   - Visual features: Extracted using a combination of MAE, VideoMAE, and EVA2
   - Text tokens: Processed directly by BERT's Tokenizer
3. **Prompt Construction:** We construct prompts relevant to the training task, examples of which can be found in **Tables 11 and 12** of our paper.
4. **Instruction Template:** We use the following template to form our instructions:
   ```
   [INST] < AudioFeature > < VideoFeature > [Task Identifier] Prompt [/INST]
   ```
   Where:
   - `< AudioFeature >` and `< VideoFeature >` are the extracted features
   - `[Task Identifier]` specifies whether it's a recognition or reasoning task
   - `Prompt` is the constructed prompt for the specific task
5. **Answer Preparation:** For each instruction, we prepare corresponding answers:
   - For recognition tasks: Emotion category labels
   - For reasoning tasks: Detailed emotion descriptions
6. **Training Procedure:**
   a. **Stage 1 - Coarse-grained Pretraining:**
      - Data: 28,618 coarse-grained samples
      - Epochs: 30
      - Batch size: 4
      - Learning rate: 1e-5 with cosine decay
   b. **Stage 2 - Fine-grained Tuning:**
      - Data: 4,487 fine-grained samples
      - Epochs: 10
      - Batch size: 4
      - Learning rate: 1e-6 with cosine decay
7.
**Implementation:** The specific code (train.py, train_configs/Emotion-LLaMA_finetune.yaml) and more implementation details (README.md: Setup, Training) are available in our anonymous repository (https://anonymous.4open.science/r/Emotion-LLaMA). Detailed descriptions of the training process and feature extraction can be found in **Sec. 4.2** of our paper. **Q2: Do you plan to release the MERR dataset?** A2: Yes, the MERR dataset is available on GitHub as part of our submission. You can access and view the MERR dataset (README.md: MERR Dataset) in the anonymous repository mentioned in the paper. **Q3: The improvement might be due to the diversity of unlabeled data rather than the automatically generated labels.** A3: While the MER2023 dataset contains many unlabeled video samples, MLLMs require instruction datasets for training and cannot be directly trained with unlabeled samples. We built the MERR dataset with 28,618 coarse-grained and 4,487 fine-grained annotated samples with rich emotional descriptions. Instruction tuning based on these labels and descriptions significantly enhanced Emotion-LLaMA's emotional understanding capability. The improvement is due not only to the diversity of the unlabeled data but also to the quality of the automatically generated labels and descriptions. **Q4: Why does Emotion-LLaMA perform worse on 'Disgust' than MAE-DFER and VideoMAE?** A4: As shown in **Tab. 2**, almost all existing MLLMs perform poorly in recognizing 'disgust', often with zero accuracy. This is likely due to the scarcity of 'disgust' samples in current datasets. We also suspect LLMs may have safety restrictions related to 'disgust', contributing to the low accuracy. We plan to collect more samples to enrich the MERR dataset and further explore this issue to improve Emotion-LLaMA's ability to recognize new categories. **Q5: In Table 3, with audio and video inputs, Emotion-LLaMA’s performance is worse/close to the baseline (VAT).** A5: **Tab. 
3** shows F1 Scores for MER2023-Baseline and VAT models. The MER2023-Baseline is the competition's baseline model, and VAT is the first-place winner. They found the text modality to be weaker in emotion prediction, so they focused on visual and auditory modalities. Their results showed that adding text for fusion lowered scores due to the lack of dialogue content and contextual background in the video samples. However, the textual modality is crucial in multimodal emotion recognition tasks. Emotion-LLaMA maps audio and visual features to the textual space, using them as contextual information. Even without the text modality, Emotion-LLaMA achieves scores close to the highest in the MER2023 competition, demonstrating robustness. More importantly, when the text modality is added, there is a significant improvement in the F1 Score, proving that Emotion-LLaMA can understand the emotional content in the subtitles. If you have further questions or need additional clarification, please let us know. We value your feedback and are committed to providing thorough responses. [MER2024]: https://zeroqiaoba.github.io/MER2024-website/ --- Rebuttal 2: Title: Response to Reviewer V8m9 Comment: Dear Reviewer, Thank you for your thoughtful review. We have made the following revisions to address your feedback: 1. We provided a detailed breakdown of our pre-training and fine-tuning processes. 2. We discussed the impact of the MERR dataset on the model’s performance and the challenges associated with recognizing the 'disgust' emotion category. 3. We ensured that the MERR dataset is now fully accessible and clarified its role in our methodology. We hope these revisions address your concerns. Please let us know if further clarification is needed. Best regards, The Authors --- Rebuttal 3: Title: Inquiry on Additional Concerns Comment: Dear Reviewer V8m9, Thank you for your thorough review and constructive feedback. 
We appreciate the time you have taken to assess our work and provide insights that have greatly contributed to improving our paper. As the rebuttal period comes to a close, we wanted to ensure that all your concerns have been adequately addressed. If there are any remaining questions or points that need further clarification, please let us know, and we will be happy to provide additional information. Your feedback is invaluable to us, and we are committed to making the necessary improvements to our submission. Best regards, The Authors --- Rebuttal Comment 3.1: Comment: Dear authors, Thank you for your hard work, and I apologize for the delayed response. My major concerns have been addressed, and I will maintain my ratings and vote to accept the paper. --- Rebuttal 4: Title: Consideration Request Comment: Dear Reviewer V8m9, Thank you for your thoughtful review and constructive feedback. We appreciate the time you have taken to assess our work and provide insights that have greatly contributed to improving our paper. We have thoroughly addressed all the issues you raised, and we believe these revisions have significantly enhanced our manuscript. Given that all your concerns have been carefully considered and rectified, we respectfully inquire if you could consider raising the score. Additionally, we encourage you to explore the demo available through our anonymous submission. Although maintaining the demo incurs significant daily costs—especially now that our work has gained some traction—we have decided to keep it running during the review process to ensure you and other reviewers have full access to it. If you have any remaining questions or additional suggestions that could further improve our submission, we would be grateful for your feedback. Once again, we would like to express our gratitude for your commitment to reviewing our paper and for the constructive comments that have guided our revisions. Best regards, The Authors
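To make the two-stage tuning description in A1 above concrete, the `[INST]`-style instruction string could be assembled as in the following sketch. This is an illustrative reconstruction, not the authors' code: the function name `build_instruction` and the exact placeholder-token spellings are assumptions.

```python
# Illustrative sketch of the instruction template described in A1:
#   [INST] <AudioFeature> <VideoFeature> [Task Identifier] Prompt [/INST]
# The function name and placeholder tokens are assumptions, not the authors' code.

def build_instruction(task_id: str, prompt: str,
                      audio_tok: str = "<AudioFeature>",
                      video_tok: str = "<VideoFeature>") -> str:
    """Wrap a task identifier and prompt in a LLaMA-style [INST] template."""
    return f"[INST] {audio_tok} {video_tok} [{task_id}] {prompt} [/INST]"

# A recognition-task instruction (the prepared answer would be an emotion
# category label); a reasoning task would pair with a detailed description.
print(build_instruction("emotion", "What emotion is the person expressing?"))
```

In this reading, only the task identifier and prompt vary between the recognition and reasoning stages, while the feature placeholders are filled with the projected audio/visual embeddings at training time.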
Summary: The paper presents a new multi-modal instruction tuning dataset for emotion recognition. The authors also present results of training on this dataset with a multi-modal architecture based on LLaMa 2. They show evaluation results on DFEW and MER2023. Strengths: Evaluating multi-modal emotion recognition approaches based on LLMs is a very relevant research topic. The results shown by the authors are convincing w.r.t. the performance on emotion recognition datasets. Weaknesses: ------------- The argument presented in lines 31 and 33 is not convincing. Why does an inability in methods lead to a lack of datasets? Related Work: ------------- As the dataset is one of the claimed contributions of the paper, there should be a discussion on previous emotion recognition datasets. It is important to lay out the reasons why these previous datasets cannot be used (or not easily be used) for instruction tuning of LLMs. Methodology: ------------ What videos is the MERR dataset based on? I could not find details in the paper on the selection process. Judging from the screenshots, it seems to be movie clips. This has important implications on the concept of "emotion" that is addressed in the paper and needs to be clarified. The chosen methodology lacks justification. For example, what is the reasoning behind selecting the frames with the highest sum of AU activations across all AUs included in OpenFace? It seems to me that there would be the danger of having a strong bias towards moments when the person is speaking, as this often leads to high AU activations, especially AUs related to the lower half of the face. The mapping of AUs to facial expression labels needs more explanation. E.g. is "happy" assigned if the combination of AU06, AU12, AU14 is active, or is it also assigned if only some of those AUs are active? Figure 1 does not show a facial expression description according to Table 7 at all, only descriptions according to Table 8. 
Concerning Table 8, there are several descriptions given for each AU. Are they all used at the same time, or is only one of them chosen? In Figure 1 it appears that only one of them is chosen, but how is this decided? Several further steps are not clearly defined, e.g. how does LLaMA-3 "refine" the annotations by aggregating the multimodal description? How is the instruction-following data constructed, of which a single example is presented in Table 9? Is it done manually? If yes, how and how many samples are created? In 3.1 it is not clear how the dataset is "auto-annotated" with emotion labels. It is also not clear how these annotations are refined (with human involvement). In general, the concept of "emotion" used in the paper remains unclear. Is it about emotional displays, about internal states,...? In later parts of the method section it seems the authors are targeting (internal?) emotional states. It would be important to know how they were annotated. Do the instructions for multimodal emotion recognition (Table 11) refer to different tasks? Some of these instructions seem to target displayed emotions, some target internal states. Training details are unclear. With the description provided in the paper it is difficult to understand the training procedure. Evaluation: ----------- The authors employ ChatGPT to evaluate emotion reasoning. It is not clear to what extent this approach leads to a valid evaluation. E.g. ChatGPT was shown to be biased on emotion-related tasks [1]. In the end, evaluating the emotion understanding capabilities of one language model using another language model is circular. The impact of using the proposed MERR dataset for pre-training needs to be evaluated. [1] R. Mao, Q. Liu, K. He, W. Li, and E. Cambria, “The biases of pre-trained language models: An empirical study on prompt-based sentiment analysis and emotion detection,” IEEE Transactions on Affective Computing, 2022. 
Technical Quality: 2 Clarity: 2 Questions for Authors: - Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations are not discussed in enough detail. There is no separate limitations section. The authors also do not lay out the limitations concerning e.g. (citing from the checklist): "The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated." "The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon." Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We've addressed your concerns as follows: **Q1: The argument presented in lines 31 and 33 is not convincing. Why does an inability in methods lead to a lack of datasets?** A1: The *issues* in line 33 refer to challenges faced by current multimodal large language models (MLLMs), particularly their difficulty in processing audio and recognizing micro-expressions. This stems from the lack of specialized multimodal emotion instruction datasets (line 31), crucial for training these tasks. Without these datasets, developing methods to integrate audio and recognize subtle expressions is difficult. **Q2: There should be a discussion on previous emotion recognition datasets and why they cannot be used for instruction tuning of LLMs.** A2: Previous emotion recognition datasets provide discrete emotion labels, unsuitable for MLLM instruction tuning: 1. *EmoVIT*: Lacks audio data crucial for comprehensive emotion recognition. Our MERR dataset includes audio features for robust multimodal analysis. 2. *EMER*: Only 100 annotated samples, insufficient for tuning. MERR offers 28,618 coarse-grained and 4,487 fine-grained samples, providing a larger, diverse set. 3. *Other datasets (e.g., AFEW, DFEW)*: Useful for traditional tasks but not for MLLM tuning. They lack the detailed, instruction-based annotations of MERR. We will highlight MERR's unique features - size, multimodal nature, and instruction-based annotations - that make it ideal for training MLLMs in emotion recognition. **Q3: What videos is the MERR dataset based on? Judging from the screenshots, it seems to be movie clips.** A3: The MERR dataset is indeed sourced from MER2023, which includes over 70,000 unannotated samples primarily derived from movies and TV series. These sources offer rich and diverse emotional expressions, more representative of real-world scenarios. 
Our team signed the relevant End User License Agreements (EULA) and obtained permission from the original data providers. **Q4 & Q5: The chosen methodology lacks justification. The mapping of AUs to facial expression labels needs more explanation.** A4 & A5: We extract the most expressive facial frame by summing the highest AU activations. This approach mitigates biases. For example, high values in AU05 (Upper Lid Raiser) and AU26 (Jaw Drop) indicate surprise or fear. Speaking often results in high AU activations, but the MERR dataset balances AUs between the upper and lower face (Fig. 4), addressing speech bias. For mapping AUs, 'happy' is assigned if AU06, AU12, and AU14 are active, or even if only some are active. Fig. 1 shows AU combinations from Tab. 7 and descriptions from Tab. 8. The top right of Fig. 1 shows the combination for 'surprise' (AU-05: 0.36, AU-26: 1.03). We will clarify this in the revised manuscript. **Q6 & Q7: It is not clear how the dataset is "auto-annotated" with emotion labels and how these annotations are refined.** A6 & A7: In **Sec. 3.1**, we explain the auto-annotation process. We use MiniGPT-v2 for Visual Objective Descriptions, Action Units (AUs) for Visual Expression Descriptions, and Qwen-Audio for Audio Tone Descriptions. By combining these multimodal descriptions with Lexical Subtitles, we generate coarse-grained descriptions. Then, LLaMA-3 refines these annotations to provide in-depth understanding of expressions and speech content, corresponding to internal states. Finally, we remove erroneous annotations and have four experts select annotations that align with human preferences. **Q8: Do the instructions for multimodal emotion recognition refer to different tasks?** A8: Yes, **Tab. 11** lists instructions for different tasks where the model outputs emotion category labels. 
Emotion-LLaMA integrates external cues (facial expressions, audio tones) and internal states to accurately determine and output the appropriate emotion category. **Q9: Training details are unclear.** A9: Please refer to our anonymous repository for training details. Further details about the tuning process can be found in our response to Reviewer *V8m9: Q1-A1*. **Q10: The authors employ ChatGPT to evaluate emotion reasoning. It is not clear to what extent this approach leads to a valid evaluation.** A10: Previous work [2, 3] shows ChatGPT can be used in emotion-related tasks. In our approach, ChatGPT evaluates the similarity between our model's outputs and the ground truth based on its reasoning capabilities. Tasks such as emotion reasoning [4] and open-vocabulary emotion recognition [5] benefit from ChatGPT’s reasoning skills. Similarly, in video understanding [6], ChatGPT’s reasoning is used for assessment. This mitigates circular evaluation by focusing on the model's ability to match outputs with the ground truth, not ChatGPT's direct emotional understanding. We will include this explanation in the revised manuscript. **Q11: The impact of using the proposed MERR dataset for pre-training needs to be evaluated.** A11: In **Tab. 6**, we compare Emotion-LLaMA's performance when trained on the MERR dataset versus other datasets. Due to the limited size of the MER2023 training dataset (3,373 samples), pre-training poses challenges for models with transformer structures, resulting in a low F1 score of 79.17%. Pre-training with larger pseudo-labeled datasets (73,148 and 36,490 samples) significantly improves performance. Tuning the model on our automatically annotated MERR dataset achieves the best performance, improving the F1 score by 11.19%, demonstrating the richness and quality of the MERR dataset's annotations. Detailed information about the MERR dataset is provided in Fig. 4 and Fig. 5, with comparisons to other datasets in **Tab. 10**. 
Additionally, Emotion-LLaMA, trained on the MERR dataset, excelled in the recent [MER2024] competition. Detailed information about the competition and results can be found in our response to Reviewer *3ouG: Q5-A5*. [MER2024]: https://zeroqiaoba.github.io/MER2024-website/ --- Rebuttal 2: Title: Response to Reviewer iNoC Comment: Dear Reviewer, Thank you for your constructive comments. We have revised the manuscript to address your concerns: 1. We included a detailed discussion on previous emotion recognition datasets and clarified why they are not suitable for instruction tuning of large language models. 2. We provided a clearer explanation of our methodology, including the MERR dataset’s annotation process and its role in enhancing our model. 3. We added more details about our training process and provided a rationale for using ChatGPT in the evaluation. We hope these changes address your concerns. We welcome any additional feedback you may have. Best regards, The Authors --- Rebuttal Comment 2.1: Title: Further Clarifications and Inquiry on Remaining Concerns Comment: Dear Reviewer iNoC, Thank you for your detailed review and the constructive feedback you provided. We have carefully addressed your concerns in the revised manuscript, including a comprehensive discussion of previous emotion recognition datasets, a clearer explanation of our methodology, and the rationale behind our use of ChatGPT for evaluation. We understand that your score reflects concerns, and we deeply appreciate your critical assessment. To ensure we have addressed all your points, we would like to know if there are any remaining misunderstandings or unclear aspects that we can further clarify before the rebuttal deadline. Your insights have been invaluable in improving our work, and we are committed to making any necessary adjustments. 
Best regards, The Authors --- Rebuttal 3: Title: Further Clarifications Comment: Dear Reviewer iNoC, Thank you for your detailed review and constructive feedback. We've carefully addressed your concerns in the revised manuscript, including a thorough discussion of previous datasets, a clearer explanation of our methodology, and the rationale for using ChatGPT in evaluation. We understand your score reflects some concerns, and we believe there may be some misunderstandings about our work. We have put significant effort into this project and are eager to clarify any remaining points. We invite you to explore our demo in the anonymous repository and are happy to address any further questions or concerns you might have. Best regards, The Authors --- Rebuttal Comment 3.1: Title: No change in evaluation Comment: I read the rebuttal and will remain with my score. Justification below. A2: DialogueLLM uses MELD, IEMOCAP and EmoryNLP. All three of these datasets are not mentioned in the authors' response but have been used for instruction tuning LLMs. A3: The selection procedure is still unclear - i.e. how was the dataset sampled from MER2023? A4 & A5: No references are given for the supposed connection between AUs and emotion expression. It is unclear how stable this connection is in different contexts. A6 & A7: The authors say "Llama-3 refines annotations", but how exactly this is done and how the quality of this refinement can be assured is unclear. What is the protocol for human annotators? What are human "preferences" here? A9: Training details that are needed to understand the approach should be part of the main paper. A10: There should be a proper human evaluation of this process, at least on a part of the dataset. The issue about internal emotional states that I raised is not mentioned in the rebuttal.
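The AU-to-expression mapping debated in A4 & A5 of this thread ('happy' from AU06/AU12/AU14, 'surprise' from AU05/AU26, assigned even when only some AUs in a combination are active) could be sketched roughly as below. This is purely illustrative: the activation threshold and the best-partial-match rule are assumptions, not the paper's documented procedure.

```python
# Illustrative sketch only: mapping OpenFace Action-Unit activations to a
# coarse expression label using the combinations named in the rebuttal.
# The threshold value and partial-match scoring are assumptions.

AU_COMBOS = {
    "happy":    ("AU06", "AU12", "AU14"),
    "surprise": ("AU05", "AU26"),
}

def coarse_label(activations: dict, threshold: float = 0.3) -> str:
    """Pick the combo with the largest fraction of active AUs; else 'neutral'."""
    best, best_frac = "neutral", 0.0
    for label, aus in AU_COMBOS.items():
        frac = sum(activations.get(au, 0.0) >= threshold for au in aus) / len(aus)
        if frac > best_frac:
            best, best_frac = label, frac
    return best

# The rebuttal's surprise example: AU05 = 0.36, AU26 = 1.03
print(coarse_label({"AU05": 0.36, "AU26": 1.03}))  # -> surprise
```

Allowing a combination to fire when only a fraction of its AUs is active reflects the authors' clarification that 'happy' is assigned "even if only some are active"; the reviewer's point that this rule and its thresholds need documentation still stands.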
Summary: The paper presents the Emotion-LLaMA model, a multimodal emotion recognition and reasoning system that integrates audio, visual, and textual inputs. - The authors constructed the MERR dataset, which includes 28,618 coarse-grained and 4,487 fine-grained annotated samples across diverse emotional categories, enabling models to learn from varied scenarios and generalize to real-world applications. - The Emotion-LLaMA model incorporates specialized encoders for audio, visual, and textual inputs, aligning the features into a modified LLaMA language model and employing instruction tuning to enhance both emotional recognition and reasoning capabilities. - Extensive evaluations show that Emotion-LLaMA outperforms other multimodal large language models, achieving top scores on the EMER, MER2023, and DFEW datasets. The main contributions are: - The MERR dataset, a valuable resource for advancing large-scale multimodal emotion model training and evaluation. - The Emotion-LLaMA model, which excels in multimodal emotion recognition and reasoning through the innovative use of instruction tuning. - Establishing Emotion-LLaMA as the current state-of-the-art model in public competitions for multimodal emotion analysis. Strengths: - The paper is well structured and well presented. The author organizes the article into five sections: introduction, related work, methodology, experiments, and conclusion. They clearly describe how they conduct data annotation and model design, provide a good introduction to the experimental setup and analysis of the experimental results, and present a relatively clear conclusion. - Model details are thorough. The authors have provided a good description of their model and training methods, allowing me to clearly understand how the model is designed and trained. I believe their results are reproducible. - This work is valuable. The authors provided a paradigm for emotion annotation of multimodal data and an annotated dataset. 
They also offered a clear explanation of the annotation process, which will contribute to the development of the related field. Weaknesses: - Insufficient experiments: Although the author has conducted some comparative and ablation experiments, it is obviously insufficient for such a complex multimodal LLM. - Missing details of experimental setup: The authors mentioned that they fine-tuned on several target datasets, but the details of the fine-tuning (including data volume, dataset division, and fine-tuning setup) were not included in the article. This can lead to a decrease in the credibility of their results. - Lack of result analysis: Although the proposed model surpasses existing models in many metrics, the authors only list their experimental results in the results section without further analysis and explanation of the results and some phenomena. The lack of proper explanation and analysis can make some results confusing. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In section 4.2, the author mentioned that they used the HuBERT-Chinese large model for audio modality input processing. Regarding this part, I would like to know the following questions: - Are the experimental results sensitive to language? I hope the author can provide more experimental results to illustrate this point. - In the ablation study section (Tab5), the author seems to have only conducted ablation on the visual encoder. Why didn't they further conduct ablation experiments on the audio encoder, considering that there are many alternatives to Hubert? - Based on the previous question, does the author believe that the audio modality is not important in this task? I would like to see more ablation results on the modality scale. 2. As I mentioned in the weaknesses part, could the author provide more details of fine-tuning on the target datasets? 3. Is Multimodal Emotion Recognition a classification task? How do the authors explain the Dis column in Table 2? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Although the author mentioned in the checklist that they discussed the limitations of the article, they did not explicitly discuss them in the text. Meanwhile, I noticed that the author did not mention their data sources in the article, and I am concerned whether this might involve data copyright issues. Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We have addressed each of your points below and will incorporate these improvements in our revised manuscript. **Q1: Are the experimental results sensitive to language? How does the choice of HuBERT-Chinese affect performance?** A1: Yes, Emotion-LLaMA is sensitive to language. This sensitivity stems from its foundation on LLaMA2-chat (7B), which processes input instructions and outputs in English. Given that most samples in the EMER, MER2023, and DFEW datasets are in Chinese, we translated all text subtitles to English. Our choice of HuBERT-Chinese as the audio encoder was informed by MERBench [1], showing that language-matching encoders achieve better performance. Our experiments confirmed this, with HuBERT-Chinese achieving the highest scores among single-modal models, underscoring the significant role of the audio modality in multimodal emotion recognition. We tested other audio models, including Whisper, Wav2Vec, and VGGish, but they performed poorly in multimodal fusion. Consequently, we selected HuBERT-Chinese and focused on different visual encoders. Our ablation experiments show the following results:

| **Audio Encoder** | **Visual Encoder** | **F1 Score** |
|--------------------|---------------------|--------------|
| Wav2Vec | - | 48.93 |
| Wav2Vec | MAE, VideoMAE, EVA | 71.92 |
| VGGish | - | 59.44 |
| VGGish | MAE, VideoMAE, EVA | 73.89 |
| Whisper | - | 53.24 |
| Whisper | MAE, VideoMAE, EVA | 70.38 |
| HuBERT-Chinese | - | 83.94 |
| HuBERT-Chinese | MAE, VideoMAE, EVA | 89.10 |

These results highlight the importance of using language-matching encoders for audio modalities in multimodal emotion recognition tasks. We will discuss these findings and their implications in the revised manuscript, particularly in the limitations section. **Q2: Details of Fine-Tuning on Target Datasets** A2: Please refer to **Sec. 4.2** of our submitted paper. 
Implementation details, including code (train.py, train_configs/Emotion-LLaMA_finetune.yaml) and setup instructions (README.md: Setup, Training), are available in our anonymous repository. Further tuning process details are in our response to Reviewer V8m9: Q1-A1. **Q3: Is Multimodal Emotion Recognition a classification task? How do the authors explain the Dis column in Table 2?** A3: Multimodal Emotion Recognition is a classification task, aiming to classify input samples into different emotional categories. In **Tab. 2**, the 'Dis' column represents the accuracy score for the 'Disgust' emotion category. Most existing MLLMs perform poorly in recognizing 'disgust', often with zero accuracy. This is likely due to the scarcity of multimodal samples for 'disgust' in current datasets. Additionally, LLMs may have safety restrictions related to 'disgust', contributing to the low accuracy. We plan to collect more samples to enrich the MERR dataset and further explore this issue. **Q4: The author did not explicitly discuss limitations in the text and raised concerns about data copyright issues.** A4: The MERR dataset is sourced from MER2023. Our team signed the relevant End User License Agreements (EULA) and obtained permission from the original data providers. We acknowledge the need for an ethics review concerning data privacy, copyright, and consent. We have followed all ethical guidelines and included a detailed statement on ethical considerations in the revised manuscript. **Q5: Insufficient experiments.** A5: We demonstrated Emotion-LLaMA's capabilities through extensive experiments, achieving SOTA scores on the EMER, MER2023, and DFEW datasets, and conducted ablation studies to validate its components and the MERR dataset. We also performed additional experiments: - **Audio Modality Ablation**: As detailed in Q1, we conducted extensive ablation studies on different audio encoders. 
- **MER2024 Competition Results**: Recently, we participated in the [MER2024] competition, widely regarded as one of the most authoritative benchmarks in the field of multimodal emotion recognition. Emotion-LLaMA excelled in two tracks: a) **Noise Robustness Track (MER-Noise)**: Emotion-LLaMA achieved the highest score of 85.30%, surpassing the second and third-place scores by 1.47% and 1.65%, respectively.

| **Anonymous team** | **F1 Score** |
|----------------|----------|
| team 6 | 80.66 |
| team 5 | 81.28 |
| team 4 | 82.71 |
| team 3 | 83.65 |
| team 2 | 83.83 |
| team 1 (ours) | 85.30 |

b) **Open-Vocabulary Track (MER-OV)**: Our application of Emotion-LLaMA for open-vocabulary annotation improved the average accuracy and recall by 8.52% compared to GPT-4V.

| **Model** | **Accuracy** | **Recall** | **Avg** |
|----------------|----------|--------|------|
| Video-LLaMA | 31.08 | 32.26 | 31.67 |
| Video-ChatGPT | 46.20 | 39.33 | 42.77 |
| mPLUG-Owl | 44.80 | 46.54 | 45.67 |
| AffectGPT | 66.14 | 46.56 | 56.35 |
| GPT-4V | 56.19 | 58.97 | 57.58 |
| Emotion-LLaMA | 69.61 | 62.59 | 66.10 |

These additional experiments demonstrate Emotion-LLaMA's robustness and effectiveness. In our revised manuscript, we will: 1. Include a comprehensive presentation of our experimental results. 2. Provide a deeper analysis of the results, discussing implications for noisy conditions and open-vocabulary tasks. 3. Elaborate on how these results contribute to multimodal emotion recognition and potential real-world applications. If you have further questions or need additional clarification, please let us know. We value your feedback and are committed to providing thorough responses. [MER2024]: https://zeroqiaoba.github.io/MER2024-website/ --- Rebuttal 2: Title: Response to Reviewer 3ouG Comment: Dear Reviewer, Thank you for your valuable feedback. We have carefully considered your comments and made the following revisions to address your concerns: 1. 
We added additional ablation studies on the audio modality to provide more comprehensive insights. 2. We clarified the fine-tuning process and included more details to improve transparency. 3. We expanded our analysis of the results, particularly focusing on the challenges and future improvements regarding the 'disgust' emotion category. We hope these revisions meet your expectations. Please let us know if there are any further issues or if additional clarification is needed. Best regards, The Authors --- Rebuttal 3: Title: Follow-Up on Revisions and Inquiry on Remaining Concerns Comment: Dear Reviewer 3ouG, Thank you for your valuable feedback and for taking the time to carefully review our submission. We have thoughtfully considered your comments and made several revisions to address the issues you raised, including additional ablation studies on the audio modality, clarifying the fine-tuning process, and expanding our analysis of the results. As we approach the rebuttal deadline, we would like to ensure that all your concerns have been adequately addressed. If there are any remaining issues or areas where you believe further clarification is needed, please let us know. We are committed to making any necessary improvements to our work. We appreciate your efforts in helping us enhance the quality of our paper. Best regards, The Authors --- Rebuttal 4: Title: Follow-Up on Revisions and Demo Interaction Comment: Dear Reviewer 3ouG, Thank you once again for your valuable feedback and for taking the time to carefully review our submission. We have made several revisions to address the issues you raised, including additional ablation studies on the audio modality, clarifying the fine-tuning process, and expanding our analysis of the results. As we approach the final stages of the rebuttal process, we wanted to ensure that all your concerns have been fully addressed. We also invite you to explore our demo, available in the anonymous repository. 
Our work has already gained some traction, leading to a high number of visits to the demo, which has significantly increased the maintenance costs. Despite this, we have kept it running to provide full access during the review process. We believe it could offer further insights into our work, and we would greatly appreciate any feedback you might have. We are eager to interact with you further and are committed to making any additional improvements necessary. Best regards, The Authors
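For reference, the F1 scores quoted throughout this thread (the audio-encoder ablation, the MER-Noise leaderboard) combine precision and recall as their harmonic mean. A minimal self-contained sketch of per-class F1 from raw counts, not the competition's official scoring code:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from raw true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# e.g. 80 correct, 20 spurious, 20 missed -> precision = recall = 0.8
print(round(f1_score(80, 20, 20), 3))  # -> 0.8
```

Competition metrics typically average such per-class scores (weighted by class support or uniformly); which averaging MER2023/MER2024 use is not specified in this thread.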
Rebuttal 1: Rebuttal: We appreciate the thoughtful feedback and constructive criticism from all reviewers. Your insights have been instrumental in refining our work. Below, we summarize the key changes and improvements we have made in response to your comments.

### Key Changes and Improvements

1. **Clarification of Dataset and Methodology**:
   - We have provided a detailed explanation of the MERR dataset, including its sources, annotation process, and the unique features that make it suitable for multimodal emotion recognition and instruction tuning.
   - We clarified how the dataset is auto-annotated with emotion labels and refined with expert input, ensuring high-quality annotations.
2. **Experimental Details**:
   - We added comprehensive details about our pre-training and fine-tuning processes, including specific datasets, sample sizes, feature extraction methods, prompt construction, and instruction templates.
   - We included additional ablation studies and experimental results, particularly focusing on the audio modality and its impact on performance.
3. **Model Evaluation**:
   - We elaborated on the evaluation metrics and provided a thorough analysis of the results, including the rationale behind using ChatGPT for emotion reasoning evaluation and how it mitigates circular evaluation issues.
   - We compared Emotion-LLaMA's performance with other state-of-the-art models across multiple benchmarks and real-world scenarios.
4. **Addressing Limitations**:
   - We explicitly discussed the limitations of our work, including potential biases, data privacy, and ethical considerations. We have also outlined the steps taken to address these issues.
   - We acknowledged the challenges related to the 'disgust' emotion category and our plans to enhance the MERR dataset with more diverse samples.
5. **Resource Efficiency and Innovation**:
   - We addressed concerns about computational efficiency, highlighting our use of parameter-efficient tuning methods and modest hardware requirements.
   - We emphasized the innovative aspects of our approach, including the design of effective prompts and instructions that enhance the robustness and generalization capabilities of Emotion-LLaMA.
6. **Reproducibility and Accessibility**:
   - We ensured that our anonymous repository is accessible and contains all necessary files for reproducibility, including the MERR dataset, training process, and code for Emotion-LLaMA.

### References and Links

[1] MERBench: A unified evaluation benchmark for multimodal emotion recognition, arXiv 2024.
[2] The biases of pre-trained language models: An empirical study on prompt-based sentiment analysis and emotion detection, Affective Computing 2022.
[3] GPT-4V with emotion: A zero-shot benchmark for generalized emotion recognition, Information Fusion 2024.
[4] Explainable multimodal emotion reasoning, arXiv 2023.
[5] MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition, arXiv 2024.
[6] TempCompass: Do Video LLMs Really Understand Videos? ACL 2024.

**Repository Links:**
- [MER2024 Competition Website](https://zeroqiaoba.github.io/MER2024-website/)
- [Anonymous Repository for Emotion-LLaMA](https://anonymous.4open.science/r/Emotion-LLaMA/)

We hope these changes address your concerns and effectively demonstrate the robustness and novelty of our work. Thank you once again for your valuable feedback and support.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Prune and Repaint: Content-Aware Image Retargeting for any Ratio
Accept (poster)
Summary: The paper presents an innovative method for image retargeting. It addresses two core challenges in image retargeting (preserving the main information and avoiding artifacts on key objects) simultaneously by carefully devising a content-aware seam-carving method and an adaptive repainting method respectively. Both quantitative and qualitative results demonstrate the advantages of the method over others. Strengths: 1. Significant improvement over prior works on both preserving semantic completeness and maintaining the harmony of the retargeted contents. 2. The paper is well-motivated and the method presented is well-structured. 3. The proposed idea is exciting. The semantic-guided local repainting solution is very reasonable and inspiring for researchers who follow up on this topic. The two key designs are effective. 4. The paper is well organized and written, with clear explanations of the method and effective visual representations. Weaknesses: 1. Detailed explanations for some key concepts are missing and adding them will further enhance the clarity of this paper, e.g., in Line 142: the process for computing x_0 should be presented. Is it the saliency center of the entire image or the saliency center of a row or column? 2. It will be better for this paper to demonstrate its advantages if the authors include recent deep learning works related to image editing/generation for comparison, especially those global generation methods. As stated, the proposed method selectively regenerates the abrupt pixels and preserves the foreground consistency and local smoothness over previous global generation methods without maintaining foreground consistency with the original image. Although Figure 1 shows an example, more comparisons will be highly beneficial to verify such advantages. 3. The authors perform subjective evaluation by designing a User Study Metric. 
However, it lacks explanations with corresponding visual samples to show the complementarity among the four different evaluation metrics in Table 2. For example, the authors should include some samples with different degrees of deformation or distortion for comparison and explanation. 4. The ARRD module in Figure 2 is not very clear to show the determination process for inpainting or outpainting. Expanding the mask map seems an optional choice and only works for some special cases when the foreground regions are large and direct cropping will lose too much object content. However, in Figure 2, it appears to be always required for any input and ratio. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. From Figure 6, in addition to addressing the drawback of BR, which is unable to handle discontinuities in foreground pixels, I also notice that the inclusion of AR can more faithfully restore the depth and proportion of the original image (such as the size relationship between the cylinder and the background building in architectural images). Can the authors briefly discuss the reasons behind this advantage? 2. I am very interested in the performance on videos. Intuitively, the proposed method will work well on each frame. However, inter-frame smoothness will be a new problem for videos. Without explicit inter-frame consistency regularization, will the proposed model result in cross-frame mutation and jitter? 3. In line 266, what is the detailed difference between BR and AR? Both of them regenerate background regions, but what are the reasons that make BR unable to address discontinuities in foreground pixels? 4. From Table 1, I notice that the performance for different target ratios varies a lot. Please explain this difference. 5. In line 178, how to decide the threshold η? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations and analyses are included by the authors. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Weakness 1**: _Some detailed concepts are missing, such as $x_0$ in line 142._ **Response**: Thanks for pointing it out. The center coordinate $x_0$ represents the center of the entire image. The height coordinate $H_{x_0}$ can be calculated as the average of the heights $H_i$ of all the salient pixels, where $i$ ranges from 1 to $n$, the total number of salient pixels: $$H_{x_0} = \frac{\sum_{i=1}^{n} H_i}{n}. $$ The width coordinate can be determined in a similar manner. * **Weakness 2**: _Lack of comparison with recent image edit/generation methods._ **Response**: Thanks for your suggestion. The results compared with full-image repainting (FR) and InGAN are included in Tab. R1, Tab. R2, Fig. R1 and Fig. R2 in the PDF. Both qualitative and quantitative results demonstrate that our method significantly outperforms existing approaches through our proposed CSC, which preserves the key information, and AR, which enhances the local smoothness. * **Weakness 3**: _Lack of explanations with corresponding visual samples to show the complementarity among the four different evaluation metrics in Table 2._ **Response**: An example illustrating different degrees of content loss, deformation, distortion and aesthetics is presented in Fig. R3 of the PDF. Although the overall deformations in the third and fourth images are quite noticeable, the third image is very smooth locally, so it belongs to deformation rather than distortion, whereas the fourth image has both deformation and distortion. Further examples and scoring criteria will be included in the appendix. * **Weakness 4**: _The ARRD module in Figure 2 is not very clear._ **Response**: Thanks for pointing out this issue. Expanding the mask map is an optional step that is only performed when the retargeted image cannot accommodate salient objects. We will revise the flowchart with a dashed box and a legend to avoid this confusion. 
* **Question 1**: _Why AR restores the depth and proportion of the original image better than BR._ **Response**: The expansion operation of AR reduces the seams that should have been removed from the foreground, thereby minimizing pixel displacement and alleviating misalignment of foreground objects. This allows for better preservation of the relative positions and depths of foreground objects. * **Question 2**: _Performance on videos._ **Response**: An example of video retargeting is presented in Fig. R4 in the PDF. As expected, the lack of inter-frame consistency in the PruneRepaint approach leads to inconsistencies in generating objects in the background area. * **Question 3**: _The detailed difference between BR and AR in line 266._ **Response**: To achieve background harmony and foreground preservation, BR identifies and repaints the background areas based on the saliency map, which does not affect the foreground and therefore fails to repair discontinuities in the foreground pixels. In contrast, AR is designed to achieve harmony across the entire image, which adaptively identifies areas where CSC removed more seams, thereby mitigating discontinuities in both the foreground and background. * **Question 4**: _Explain why the performance for different target ratios varies a lot._ **Response**: Typically, the larger the aspect ratio difference between the retargeted image and the original image, the greater the loss in image saliency. We analyzed the RetargetMe dataset and found that the average aspect ratio is 0.7405, which is closest to 9/16 (0.5625), indicating minimal saliency loss at this ratio. As the aspect ratio deviates further from the original ratio, the Saliency Discard Ratio (SDR) increases. * **Question 5**: _How to decide the threshold $\eta$ in line 178._ **Response**: The threshold $\eta$ is the mean value of the entire saliency map. --- Rebuttal Comment 1.1: Title: Final Rating Comment: Thank you for the detailed responses. 
I have reviewed the rebuttal and other reviews. The authors have sufficiently addressed all my concerns, and I am inclined to uphold the initial score.
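The saliency-center computation described in the Weakness 1 response above (the average row index of all salient pixels, with the threshold taken as the mean of the saliency map per the Question 5 response) can be sketched as follows. This is an illustrative toy, not the authors' implementation; the function name and the example map are made up:

```python
import numpy as np

def saliency_center_height(saliency_map, eta):
    # Hypothetical helper: average row index of all pixels whose saliency
    # exceeds the threshold eta (the rebuttal takes eta as the map's mean).
    rows, _ = np.nonzero(saliency_map > eta)
    return rows.mean()

# Toy 4x4 saliency map with a salient 2x2 block in rows 1-2.
s = np.array([[0., 0., 0., 0.],
              [0., 1., 1., 0.],
              [0., 1., 1., 0.],
              [0., 0., 0., 0.]])
print(saliency_center_height(s, s.mean()))  # salient rows are 1 and 2 -> 1.5
```

The width coordinate follows symmetrically by averaging the column indices of the same thresholded pixels.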
Summary: This paper proposes a new image retargeting framework that prunes background and repaints local connections. It improves the traditional seam-carving with semantic guidance to make the pruning content-aware, avoiding deformation and loss of important objects. Meanwhile, the authors introduce an adaptive repainting module using an image-conditioned diffusion model to selectively inpaint or outpaint local regions to achieve local smoothness. The authors also design two evaluation metrics, including the Saliency Discard Ratio (SDR) and a user study metric for evaluation. The proposed method shows superior performance over others and generalization to varying target ratios. Strengths: Good motivation and key problems are clearly summarized. The method is technically sound and novel. I am very interested in this task and like the proposed idea. It appears to be a pioneering work in the image retargeting community using diffusion models. Impressive results and a large improvement over previous works in preserving object completeness and coherence, and it generalizes better. Weaknesses: As pointed out by the authors, the inference speed has large room for improvement. I suggest the authors adopt some accelerated diffusion models to improve it. The authors argue that their method can work well on any ratio, such as 1:1 and 4:3, yet these visual results are not illustrated. Minor writing or grammar mistakes, such as missing the point in line 70, and missing “the” in the caption of Figure 3. How does the text prompt work? I am interested in its contributions to the retargeting results. Technical Quality: 3 Clarity: 3 Questions for Authors: In equation (5), what does W_s denote? Is it the same as W in equation (2), meaning the width of the input image? For the proposed metric SDR, the authors use the width change of the saliency map to measure semantic completeness. How to implement this measurement? 
If my understanding is correct, is it to sum the width of all lines in the saliency map and output the maximum one as the width of the saliency map? Why not use metrics from salient object detection, such as F-measure, or use IoU for evaluation? In Table 2, I think content completeness and aesthetic scores are more important than the other two scores as they measure the two key aspects of image retargeting. So, I wonder whether there is a better strategy to obtain an overall score rather than just averaging them. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors explicitly discussed the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Weakness 1**: _Inference speed to be improved._ **Response**: Thanks for your suggestion; we will further employ accelerated diffusion models to improve the inference speed. * **Weakness 2:** _Results on ratio 1:1 and 4:3 are not visualized._ **Response**: Thanks for pointing out this issue. We present visualization results for only two extreme ratios to more clearly demonstrate the advantages of PruneRepaint, since other ratios pose fewer difficulties. Due to the page limit, visualized results for the other two aspect ratios will be included in the appendix. * **Weakness 3**: _Minor writing or grammar mistakes._ **Response**: Thanks for your detailed review; we will carefully proofread the manuscript. * **Weakness 4**: _About the text prompts._ **Response**: Here we regard Image Retargeting as an image-to-image generation task and do not utilize additional text prompts to assist in the task. The positive text prompts used in the experiment are the default 'best quality, high quality,' while the negative prompts are 'monochrome, lowres, bad anatomy, worst quality, low quality.' The weight of the image prompt is set to 1, meaning that the text prompts contribute minimally. * **Question 1**: _The meaning of $W_s$ in equation (5)._ **Response**: $W_s$ is different from the image width $W$. $W_s$ denotes the saliency width which is defined in Eq. (4). We will add the description of $W_s$ in Eq. (2). * **Question 2**: _How to implement $W_s$ in metric SDR? Is it to sum the width of all lines in the saliency map and output the maximum one as the width of the saliency map? Why do not use the metrics in salient object detection such as F-measure or use IOU to evaluate?_ **Response**: * For the first and second questions, I'm afraid there might be a slight misunderstanding: $W_s$ takes the union of the salient columns across all rows rather than the maximum row width. 
In the implementation, we use a list of zeros with length $W$ to store whether each column has salient pixels. If a column has a salient pixel, we set the corresponding entry to 1. Finally, summing the list gives the salient width. * For the last question, both saliency detection metrics and IOU are only suitable for evaluating two images with completely identical resolutions. However, image retargeting changes the aspect ratio as well as the resolution of the original image. Therefore, we proposed a simple metric that provides a rough estimate of saliency preservation. Thank you for your inspiring question; we will explore more retargeting metrics in our future research. * **Question 3**: _The weights of different subjective metrics._ **Response**: We agree with you that different people and tasks have different biases towards subjective metrics, thus specific weights can be tailored to specific needs. We take averages just to compare the overall performance in a more intuitive and concise way.
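The salient-width computation described in the Question 2 response (a length-$W$ indicator list over columns, summed at the end) can be sketched directly from that description. This is an illustrative toy, not the authors' code; the example array and threshold are made up:

```python
import numpy as np

def salient_width(saliency_map, eta):
    # Per the rebuttal: a list of zeros of length W, where an entry is set
    # to 1 if that column contains any salient pixel; the sum of the list
    # is the salient width W_s (a union over columns, not a per-row width).
    H, W = saliency_map.shape
    has_salient = [0] * W
    for col in range(W):
        if np.any(saliency_map[:, col] > eta):
            has_salient[col] = 1
    return sum(has_salient)

s = np.array([[0., 1., 0., 0., 1.],
              [0., 0., 1., 0., 0.]])
print(salient_width(s, 0.5))  # columns 1, 2 and 4 contain salient pixels -> 3
```

Because the indicator is a union over rows, disconnected salient regions in different rows still each contribute their columns, which matches the response's distinction from taking the maximum per-row width.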
Summary: This work contributes a new image retargeting model named PruneRepaint, which is adaptive to work with any target ratio. The authors first improve the traditional seam-carving method with saliency priors to achieve content-aware pruning and protect important semantic regions. After that, they introduce an adaptive repainting module using the diffusion model to maintain local smoothness after pruning. The method is unique compared to previous ones and also effective. The newly proposed metrics are reasonable. Experiments have demonstrated the effectiveness of the key designs and the large advantages over other methods. Strengths: - I believe the perspective from which the authors address the image retargeting task is novel and very important. This work takes a step towards a spatially-variable diffusion model. In contrast, most diffusion models focus on the spatially-fixed setting. - The method is well-designed and reasonable. The proposed idea differs a lot from previous cropping-based or global generation methods. The authors provide a more reasonable and effective solution path. - The authors also introduce two reasonable evaluation metrics. - Ablation studies verify the effectiveness of each design of the proposed method. - The proposed method achieves fairly superior results compared to previous models. Weaknesses: - It will be more convincing to verify the superiority of the proposed method if more methods such as the ‘InGAN’ model in Figure 1 can be involved for comparison. The previous methods address this task from varying views, such as using cropping, scaling, seam-carving, and a generative model. This work seems to provide a new idea. So I expect a more comprehensive comparison to other solutions to show its advantages. - Although the proposed method has achieved large improvement over previous works on avoiding artifacts, the results still hold some distorted regions. In my view, strengthening local correlations will be helpful to solve this problem. 
- The dataset contains limited samples. Although the authors try to output different ratios and the results show the advantages and better generalization over previous works, it will be much better to enhance the results and comparisons with more samples. So, I suggest the authors collect a new high-quality dataset with diverse scenarios for this task and test the model with a larger dataset for evaluation in the future. Technical Quality: 3 Clarity: 3 Questions for Authors: - The authors point out that ‘the repainting region generated by ARRD is not complete enough and contains certain distorted regions’ in the limitation section. Please give more explanations for this limitation. - The result of ‘+CSC’ in Table 3 and ‘background repainting’ in Table 4 hold the same score. This should not be a coincidence; please explain the reason behind it. - For the IP-adapter, it seems that the proposed method does not rely on text prompts to fulfill or improve the retargeting result. However, I am still interested in what text prompt is used in your design. Have you tried different text prompts such as “keep key semantics” and how about the results? - The output ratio is set to 16:9, 1:1, 4:3, and 9:16. What are the reasons for selecting these ratios as the target output? In previous image retargeting works, I also notice some other ratios such as 2:3 and other extreme ratios. - The repainting choice between inpainting and outpainting is based on a hyperparameter and decided by comparing the target ratio with the foreground size. I expect an adaptive strategy for this determination. Can you provide some potential solutions for this problem? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discussed the limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Weakness 1**: _Lack of comparison with more retargeting methods._ **Response**: Thanks for your advice. We have added experiments on InGAN as well as full-image repainting (FR) in Fig. R1, R2 and Tab. R1 in the PDF. Both the quantitative and qualitative comparisons show the large superiority of our method. * **Weakness 2**: _Suggestions to solve the distorted regions with local correlations._ **Response**: Yes, thanks for your constructive suggestion. Although our method has achieved significant improvement on both preserving semantic completeness and avoiding local artifacts, it is indeed a promising solution to further improve local smoothness by strengthening local correlations. However, the spatial misalignment between the original and retargeted images makes this local consistency constraint difficult to implement. We will study this problem carefully in our future work. * **Weakness 3**: _Lack of a larger high-quality dataset._ **Response**: Thanks for your advice; we are actually working on a retargeting dataset. We collect images with a wider variety of objects, input ratios, foreground scales, and object layouts for comprehensive evaluation. * **Question 1**: _Explain the limitation of ARRD._ **Response**: The reason is that ARRD searches the local pixel displacement area without global understanding. For example, in comparing the 'Original image' and '+CSC+AR' images in the second row of Figure 6, some seams passing through the streetlight were removed, causing misalignment. Ideally, the entire streetlight should be repainted, but AR only repaints the pixels near the deleted seams in the middle, resulting in a streetlight that remains misaligned in the generated image. * **Question 2**: _Explain why the results of '+CSC' in Table 3 and ‘background repainting’ in Table 4 are the same._ **Response**: Both BR and AR are implemented on top of CSC. 
As BR identifies the background based on the saliency map for repainting, it will not change the saliency regions, thus they hold the same score. * **Question 3**: _About text prompts._ **Response**: * For the first question, we regard Image Retargeting as an image-to-image generation task. Moreover, designing specific prompts for each image is quite expensive thus we do not utilize additional text prompts to assist in the task. The positive text prompts used in the experiment are the default 'best quality, high quality,' while the negative prompts are 'monochrome, lowres, bad anatomy, worst quality, low quality.' The weight of the image prompt is set to 1, meaning that the text prompts contribute minimally. * For the second question, text prompts are supposed to be captions that describe an image. However, prompts such as 'keep key semantics, preserve the main structure' are typically not captions of images. As a result, they are meaningless in guiding the generation process. * **Question 4**: _The reasons for selecting the reported ratios._ **Response**: Most existing devices have aspect ratios ranging from 16:9 to 9:16. The models that handle extreme ratios tend to perform better on the more common ratios as well. Therefore, we only select 4:3 and 1:1 as the middle-ground aspect ratios to test. Among them, 16:9 and 4:3 are common aspect ratios for computers and televisions, 1:1 is a preferred image and video size for social media (such as Instagram), and 9:16 is commonly seen on smartphone screens. --- Rebuttal Comment 1.1: Comment: After checking the response and other reviews, I am inclined to increase the rating. --- Rebuttal 2: Title: Final Review Comment: Thank you for your detailed rebuttal and for thoroughly addressing all of my concerns. I appreciate the additional experiments and explanations you provided, particularly regarding the comparisons with retargeting methods, the handling of local correlations, and the dataset enhancement. 
Given the improvements and clarifications, I will maintain my initial positive rating.
Summary: The work presents an addon using Diffusion models to Seam Carving to perform Content Aware resizing of images. Strengths: + None Weaknesses: - The work seems to be a rehash of Seam Carving, and some diffusion models were added to perform retargeting. There is no novelty in the method. - One page of the paper is direct equations from Seam Carving. How can it be a contribution? - The application is quite dated and has not made any significant progress through the methods proposed, which are logical but lack impact. - The writing is hazy, and there is barely any motivation about the approach and the application. Technical Quality: 1 Clarity: 1 Questions for Authors: Please see above. Confidence: 5 Soundness: 1 Presentation: 1 Contribution: 1 Limitations: Limitations not well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **Weakness 1**: _No novelty in the method._ **Response**: 1) Seam-carving is a semantic-agnostic approach that often results in severe foreground loss and distortion (see Fig. 4, 5 and 7). In contrast, our proposed content-aware seam-carving (CSC) incorporates semantic awareness to preserve key objects, leveraging visual saliency cues and a careful integration strategy (see section 3.2). 2) Additionally, to address the inconsistent style and structure issues in previous global generation methods, we introduce adaptive repainting (AR). AR adaptively identifies discordant artifacts with a masking strategy and adjusts the mask to accommodate any aspect ratio. 3) The qualitative and quantitative comparisons in Sections 4.3 and 4.4 demonstrate the significant superiority of our method over previous approaches, highlighting the advantages of our designs. * **Weakness 2**: _One page of the paper is direct equations from Seam Carving._ **Response**: We respectfully disagree that CSC is merely a rehash of seam-carving (SC). SC determines the importance of different structures in an image using low-level gradient information, which often leads to key foreground loss or deformation. To address SC's lack of semantic awareness, we introduce high-level saliency priors alongside spatial priors (see lines 138 to 148), as presented in Eq. (2). Additionally, a tolerable saliency loss ratio $\lambda$ (see lines 154 to 161) is set to avoid excessive loss of the foreground in extreme ratios and to accommodate potential image expansion operations in AR. Our CSC successfully preserves key information. As demonstrated by the results in Tab. 3 and Figs. 6 and 7, our proposed CSC method achieves significant improvements over traditional SC. 
* **Weakness 3**: _The application is quite dated and has not made any significant progress through the methods proposed._ **Response**: 1) Regarding applications, the reviewer may refer to popular tools like Adobe Photoshop's "Content-Aware Scaling" and Instagram's "Auto Crop", which improve the work efficiency of media professionals and enhance the entertainment experience of the general public. 2) Our method significantly advances the two key aspects of retargeting: semantic completeness and local smoothness. This greatly enhances the user experience when deploying retargeting in real-world applications. * **Weakness 4**: _The writing is hazy, and there is barely any motivation about the approach and the application._ **Response**: Our motivation is to address two challenging and key issues in image retargeting tasks: preserving the main information and avoiding artifacts, as clearly stated in line 25 of the manuscript. CSC and AR are proposed to solve them respectively.
Rebuttal 1: Rebuttal: We thank all the reviewers for their efforts and are glad to receive their high recognition of the work's innovation, its suitability and importance for image retargeting, its significant outperformance of prior methods, and the writing. We gratefully thank the reviewers for their constructive remarks and useful suggestions, which have greatly helped us improve the quality of our manuscript. We are encouraged that they found our idea to be sound, clear and effective (**RHY7, HZC2, oiR7**), and our approach to be intuitive and superior in model performance (**RHY7, HZC2, oiR7**). We are pleased that reviewer **RHY7** recognizes the potential of our research and that **HZC2** and **oiR7** offer insights for future work. We address reviewer comments below. Pdf: /pdf/6a977bef380dd161e76e68d2bee7bc0a4feca496.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Most Influential Subset Selection: Challenges, Promises, and Beyond
Accept (poster)
Summary: The paper presents a theoretical analysis of why existing greedy additive methods fail to solve the most influential subset selection (MISS) problem, which aims to find the subset of training data with the largest collective influence. Greedy additive methods (first assigning individual influence scores, ordering and taking the top-k group for example) assume linearity of collective influence and fail to account for the non-linearity due to interactions between samples of the group. Building on the analysis, the paper proposes an adaptive greedy algorithm where the individual influence scores are dynamically updated to capture interactions among samples and ensure that samples of the subset have consistent influence (sign + order of influence are preserved). This algorithm is demonstrated using synthetic and real-world (MNIST) experiments. Strengths: - The paper is very well written. - The paper contributes a thorough theoretical analysis of failure modes in existing greedy approaches to solve the most influential subset selection (MISS) problem and therefore shows why these approaches often fail to find meaningful subsets. - The problem of subset influence is interesting and increasingly relevant given the increasing scale of datasets. The paper is therefore not only relevant for the area of data influence but also data-centric AI overall. Weaknesses: This paper is mainly a theory paper, focusing on the failure modes of influence-based greedy heuristics for most influential subset selection (MISS). The choice of datasets and experiments to run (synthetic data, MNIST on MLPs) fits the scope of the paper. Yet, if I understand correctly, the proposed alternative approach to solve MISS dynamically with the adaptive greedy algorithm is shown for a subset size of 2 and does not scale to larger subset sizes. 
In Remark 4.3 (line 288), the authors hypothesize that the method would scale to subsets larger than 2 if the found subsets are truly already belonging to the most influential subset. Given the fragility of influence scores (Basu et al., 2021), I doubt that the scenario in the authors' hypothesis is realistic. I am unsure though if I understood the claim in this remark correctly. Technical Quality: 4 Clarity: 4 Questions for Authors: The paper was very clear, and I only have a few questions to gain a better understanding of the potential impact: - See weaknesses. - You mention the second-order group influence functions by Basu et al. (2020) in your discussion as an alternative approach to compute group influence. How does the adaptive greedy approach you suggest compare to the second-order approach by Basu et al. (2020) in terms of computational efficiency (when considering group sizes >2, too)? Is it a faster alternative? - While I acknowledge that this is mainly a theory paper, I would like to understand the potential impact better. What would be potential application scenarios for finding the **most** influential subset? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: Yes, limitations are discussed in detail in the discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer Uyvf for taking the time to review our paper and their constructive feedback. Please find below our point-by-point response. **Remark 4.3.** We note that starting from Section 3.2, we adopt the closed-form of individual influences ($A_{-{i}}$) instead of the influence estimates. The purpose is to separate the two failure modes: the errors incurred by the influence estimates (discussed in Section 3.1 and in prior works such as Basu et al. (2021)), and the non-additive structure of the group influence. Since the errors of influence estimates are not intrinsic to the greedy heuristics as well as their adaptive variants, it is only reasonable for us to use the ground-truth individual effect in order to study their fundamental properties. Our positive result and the hypothesis in Section 4 also assume access to the ground-truth individual influence. In this context, the fragility of the influence estimates is a separate issue. The critical question is whether the adaptive greedy algorithm can effectively capture the interactions between samples and thereby solve the more general $k$-MISS problem. This is a challenging open problem, and an important first step is to formally define “cancellation” for more than two samples. We will clarify this in the revision, and leave this as future work. **Second-order group influence functions.** We note that the approach adopted by Basu et al. (2020) is a simplification of the actual second-order influence function, which is part of a more general framework known as the higher-order infinitesimal jackknife [1] in the literature. The actual second-order influence function involves computing a third-order tensor, which is infeasible for deep neural networks (and much more computationally expensive than the adaptive greedy algorithm). Consequently, Basu et al. 
(2020) ignored the third-order derivative in their calculations, making their computational complexity roughly on the same order as the vanilla influence function and more efficient than the adaptive greedy algorithm. However, there are two main caveats: 1) The third-order term might contain rich information for large-scale neural networks and classification tasks; 2) The subset selection problem is reframed as a discrete quadratic optimization problem, and while it can be solved efficiently via relaxation and projected gradient descent, the additional step makes it challenging to obtain provable guarantees even in simple linear models. Beyond computational efficiency, we believe these two approaches capture different types of interactions among samples. The second-order group influence function can detect clusters of samples, corresponding to the amplification effect, whereas the adaptive greedy algorithm can identify samples with cancellation effects, as demonstrated in our analysis. We believe a clever combination of these two approaches holds significant potential and leave this as a topic for future research. **Potential application scenarios.** We will take ZAMinfluence [2], one of the most prominent algorithms in MISS, to discuss the broader impact of MISS. ZAMinfluence was introduced to assess the sensitivity of applied econometric conclusions to the removal of a small fraction of samples. For example, suppose a conclusion involves determining whether a particular coefficient in a linear regression model is positive. If the sign of the coefficient flips after removing a few samples, we might be concerned as the conclusion is excessively sensitive to a small portion of the data. Conversely, if the sign remains the same regardless of which size-$k$ subset is removed, we can be more confident in the robustness of the conclusion. 
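To make the sign-flip robustness check concrete, here is a minimal editorial sketch in Python (our illustration, not the actual ZAMinfluence implementation): it ranks samples by the plain influence-function estimate of their effect on a regression coefficient (no leverage correction), drops the $k$ samples whose removal most opposes the coefficient's sign, refits, and reports whether the sign flipped.

```python
import numpy as np

def sign_flip_check(X, y, k, coef=0):
    """Sketch of a ZAMinfluence-style robustness check (illustrative only):
    does removing the k samples whose influence-function estimates most
    oppose the sign of coefficient `coef` flip that sign after refitting?"""
    XtX_inv = np.linalg.inv(X.T @ X)
    theta = XtX_inv @ X.T @ y
    resid = y - X @ theta
    # First-order estimate of the parameter change from deleting sample i:
    # theta_{-i} - theta ~= -(X^T X)^{-1} x_i r_i  (no 1/(1-h_ii) factor).
    delta = -(X @ XtX_inv) * resid[:, None]
    # Drop the k samples whose removal most pushes coef toward the opposite sign.
    drop = np.argsort(np.sign(theta[coef]) * delta[:, coef])[:k]
    keep = np.setdiff1d(np.arange(len(y)), drop)
    theta_new, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return bool(np.sign(theta[coef]) != np.sign(theta_new[coef]))
```

If the check returns `True` for small `k`, the conclusion about the coefficient's sign is sensitive to a handful of samples, which is exactly the diagnostic use case described above.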
ZAMinfluence has been applied to many disciplines, including but not limited to applied econometrics, economics, and social sciences (see Appendix A for a detailed summary). For instance, the following sentence is quoted from [3], a paper in economics: *“Therefore, we use an approach proposed by Broderick et al. (2020) to test if sign and significance of our estimates could conceivably be overturned by removing small fractions of the data with the potentially largest influence on size and sign of estimated effects.”* In experimental studies, the process of collecting samples is often not fully random, and the conclusions drawn from these samples might not be robust. In this case, it is necessary to apply MISS to assess the robustness of conclusions and identify potential sources of sampling bias. In summary, MISS is framed as a machine learning problem, but it shines through its applications that extend far beyond machine learning, enhancing the reliability of analytical conclusions across a wide range of scientific domains. **References** [1] Giordano, Ryan, Michael I. Jordan, and Tamara Broderick. "A higher-order swiss army infinitesimal jackknife." arXiv preprint arXiv:1907.12116 (2019). [2] Broderick, Tamara, Ryan Giordano, and Rachael Meager. "An automatic finite-sample robustness metric: when can dropping a little data make a big difference?." arXiv preprint arXiv:2011.14999 (2020). [3] Finger, Robert, and Niklas Möhring. "The adoption of pesticide-free wheat production and farmers' perceptions of its environmental and health effects." Ecological Economics 198 (2022): 107463. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I appreciate the author's suggestion to clarify that their presented method addresses the 2-MISS problem and offers a starting point for the solution for the more general k-MISS problem and believe it will be useful for readers to assess the scope of the paper. 
My main concern was the potential application impact of this work, where the authors' response convinced me of practical scenarios. Hence, I raise my score from 6 -> 7. --- Reply to Comment 1.1.1: Comment: Thank you again for reviewing and acknowledging our work!
Summary: The authors investigate the Most Influential Subset Selection (MISS) problem, which aims to identify a subset of training samples with the greatest collective influence on machine learning model predictions. They discuss limitations of prevailing approaches in MISS and highlight issues with influence-based greedy heuristics. The paper proposes a new method to greedily select the k most influential samples, utilizing adaptive iterative updates to the importance weights of the remaining samples that have not been selected by the greedy procedure. Strengths: - The concept of adaptively updating the influence weight per sample is reasonable. - The notion of influence is sensible. - The submodularity approach to studying data subset selection is reasonable. - The authors discuss the limitations of non-adaptive greedy heuristics for MISS. - The paper includes an in-depth example and visualizations of the shortcomings of regular greedy algorithms for MISS problems, clearly demonstrating the problems. - The implementation is publicly available. - The technical appendix appears to follow a sensible proof strategy; however, I haven't had the chance to formally check the appendix (disclaimer). Weaknesses: - By their nature, being greedy procedures that iteratively re-evaluate the influence of each individual sample, the proposed algorithms are too inefficient to be applicable to even moderately sized datasets or datasets where subset selection matters most. Consequently, the authors mostly consider small datasets in their experiments. - As reported in D.5, k-MISS-MLP-MNIST took 28 hours, which is quite wasteful. - The paper does not formally discuss potential problems and limitations with the applicability of k-MISS in the context of non-linear models; therefore, it remains unclear whether the influence function is truly applicable to non-linear models where samples may have unknown non-linear future influence.
- The influence of randomness from stochastic gradients and initializations (of the MLP) is not discussed or modeled. - The paper does not convincingly demonstrate that the proposed solution has practical relevance. For example, there is no case study that demonstrates the practical usefulness of the reported results. - The paper does not convincingly demonstrate that the proposed solution advances our theoretical understanding of MISS. - The work lacks contextualization regarding highly related research areas, including Data Subset-Selection, Active Learning, Data Shapley, Importance Sampling, Landmark Selection, and Core-set Selection. - The paper does not compare the proposed solution to established baselines and highly related work, such as SELCO [http://proceedings.mlr.press/v139/s21a/s21a.pdf], which employs submodularity-based approximation strategies to jointly estimate coefficients and the most relevant data subset. - The paper does not recognize highly influential prior work on Data Subset Selection [http://proceedings.mlr.press/v37/wei15.html]. - The dataset selection lacks representativeness. - The experimental assessment is based on too few datasets (Concrete Compressive Strength, Waveform Database Generator, and MNIST). - The test set sizes are very small. - It remains unclear how robustly the proposed algorithm deals with variations in hyperparameters. - It remains unclear how robustly the notion of influence handles random effects from initialization and gradient batching. - The experiment does not report any robustness checks, such as sensitivity analysis or cross-validation. - The experiments do not include a comparison with state-of-the-art methods from data subset selection or Data Shapley, which makes it difficult to assess its relative performance. Technical Quality: 3 Clarity: 2 Questions for Authors: ./.
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 1 Limitations: - The authors haven't discussed the implications of data sample selection from sensitive data: If the data is user-generated, non-private, and sensitive, this procedure might select and flag data records of certain individuals. - The experiments are limited and do not showcase practical limitations under different circumstances (see above) Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 3kwV for taking the time to review our paper. Before addressing the reviewer’s comments, we would like to clarify some misconceptions. - **Thesis and contributions.** Our work falls under the category of learning theory, and our thesis is to advance the theoretical understanding of MISS – an important research topic in the field of data attribution. We do not claim any algorithmic contributions; instead, our aim is to build the foundation for future algorithmic advancements through a comprehensive analysis of the pros and cons of the common practices in MISS. The experiments are designed to corroborate and extend our theoretical findings, instead of showcasing that a particular algorithm has beaten the state-of-the-art. - **Related work.** As stated in our general response, data selection and MISS are completely different research topics. They differ in objectives, applications and techniques. As for the relationship between MISS and Data Shapley, 1) influence functions and Data Shapley define the influence/contribution of individual samples in different ways; 2) MISS extends influence functions to modeling the influence of a set of samples. It is clear that MISS and Data Shapley are not comparable. Below is our point-by-point response: **Efficiency of the adaptive greedy algorithm.** - To be clear, we didn’t propose the adaptive greedy algorithm and certainly were not claiming that it is an all-round solution. One of the main findings of the paper is that the adaptive greedy algorithm trades computational efficiency for performance gain, and the experiments are designed to corroborate and extend this finding. - Although efficiency is not on our priority list, we have optimized our processes to the best of our available resources, which is detailed in Appendix D.4.
- We strongly believe that **the 28-hour experimentation is part of the scientific discovery process and it would be unfair to dismiss it as wasteful.** **Non-linear models.** Influence functions have been applied to (deep) neural networks ever since the seminal work by Koh and Liang [1]. While our theoretical results are restricted to linear models, the experimental results on MLP clearly demonstrate that our findings can generalize to non-linear models. **Practical relevance.** As stated above, we did not claim any algorithmic contributions, and our thesis is to advance the theoretical understanding of MISS. For use cases of these algorithms, we refer the reviewer to [2], which is the original paper that proposed ZAMinfluence. We have also summarized a few points in our response to Reviewer Uyvf under “Potential application scenarios”. **Theoretical understanding.** We do not take such an unjustified claim since our entire work is dedicated to the theoretical understanding of MISS. **Contextualization.** Please refer to our general response. **Established baseline and influential prior work.** With due respect, these works are irrelevant to our study. The reasons are detailed in the general response and our clarification on the misconceptions. **Comparison with mentioned “SOTAs”.** This is incompatible with the thesis of our work, plus they are not comparable. Please refer to our clarification on the misconceptions. **Robustness to hyperparameters.** We clarify that apart from the randomness of training in the MLP experiments, there is no hyperparameter that needs to be tuned in both the vanilla and adaptive greedy algorithm. For the randomness in training MLPs, please refer to “Randomness in experiments” and our general response. **Implication on sensitive data.** We believe that the implications of this very specific scenario are beyond the scope of a paper on learning theory that focuses on the fundamentals of the general MISS problem.
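For readers unfamiliar with the two procedures discussed above, the vanilla and adaptive greedy heuristics can be sketched as follows (an editorial illustration using exact leave-one-out effects on a test prediction in OLS, not the authors' code): the vanilla variant scores every sample once on the full data and picks the top-$k$; the adaptive variant rescores the remaining samples after each removal, which is where the extra computation goes.

```python
import numpy as np

def _fit(X, y):
    # Ordinary least squares via lstsq.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def greedy_miss(X, y, x_test, k, adaptive=True):
    """Illustrative sketch of the influence-based greedy heuristics for
    k-MISS in OLS. Scores are exact leave-one-out effects on the
    prediction at x_test (ground-truth individual influences)."""
    remaining = list(range(len(y)))
    selected = []
    scores = None
    for _ in range(k):
        if adaptive or scores is None:
            # (Re)score every remaining sample: effect on the test
            # prediction of removing it from the current training set.
            cur = list(remaining)
            pred_cur = x_test @ _fit(X[cur], y[cur])
            scores = {}
            for i in cur:
                rest = [j for j in cur if j != i]
                scores[i] = abs(x_test @ _fit(X[rest], y[rest]) - pred_cur)
        best = max(remaining, key=lambda i: scores[i])
        selected.append(best)
        remaining.remove(best)
        scores.pop(best)
    return selected
```

With `adaptive=False` the scores are frozen after the first pass, so interactions among removed samples (amplification or cancellation) are never seen; with `adaptive=True` each refit exposes them, at the cost of $O(nk)$ model refits.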
**Experiments.** - **Dataset.** - UCI and MNIST are standard datasets in the machine learning community, and MNIST is widely used in the study of influence functions such as [1]. If the reviewer has any “representative” datasets in mind, could the reviewer please share them with us? - We consider three different datasets in the experiment section, which we believe is sufficient for a paper on learning theory. This is further supported by Reviewer Uyvf: “The choice of datasets and experiments to run (synthetic data, MNIST on MLPs) fits the scope of the paper”. - **Size of the test set.** We want to emphasize that the purpose of the test samples in our setting is very different from their usage in standard machine learning. Here, the test samples serve as the *target function*, and the size could be arbitrary. In fact, we focus on a single test sample in Sections 3 and 4 — in this case, MISS measures the alteration of model behavior on this particular sample of interest. - **Randomness in experiments.** For MLP training, we have conducted additional experiments and we show that the results are consistent and robust across different random seeds. Please refer to the general response for details. - **Robustness checks such as sensitivity analysis and cross-validation.** For linear regression and logistic regression, there is no need for sensitivity analysis or cross-validation as there’s no hyperparameter tuning. For MLP, we added details of cross-validation in Table 1 of the attached PDF in the general response. We also included them in Appendix D.3 in the draft. **References** [1] Koh, Pang Wei, and Percy Liang. "Understanding black-box predictions via influence functions." International conference on machine learning. PMLR, 2017. [2] Broderick, Tamara, Ryan Giordano, and Rachael Meager. "An automatic finite-sample robustness metric: when can dropping a little data make a big difference?." arXiv preprint arXiv:2011.14999 (2020).
Summary: The paper explores the challenge of understanding the collective influence of subsets of training data on machine learning models, referred to as the Most Influential Subset Selection (MISS) problem. Traditional influence functions, which focus on individual data points, often miss the more complex interactions within subsets. The authors analyze existing influence-based greedy heuristics, revealing their potential failures, particularly in linear regression, due to errors in the influence function and the non-additive nature of collective influence. They propose an adaptive version of these heuristics, which iteratively updates sample scores to better capture interactions among data points. Their experiments on both synthetic and real-world datasets validate the theoretical findings, demonstrating that the adaptive approach can extend to more complex scenarios like classification tasks and non-linear models. Strengths: - Addresses a timely important problem of how to leverage pointwise influence estimates to remove datapoints (dataset selection is a very important topic imo, and a related work of [1] recently won a best paper at ICLR). Even if some of the results were known in the literature (see weaknesses), the paper provides a clear story and analysis. - Exceptional exposition for the most part; very lucid and clear narrative, and addresses both high-level and subtler points (for example, section 6 is a nice discussion of natural follow-up questions) Weaknesses: - My main concern is that the results presented in Section 3 are sort of a "folklore" in the IF literature (see, e.g., [1] and the references therein). The under-estimation of group effects is also analyzed in earlier work [2], for example. 
[1] https://arxiv.org/abs/2309.14563 [2] https://arxiv.org/abs/1905.13289 Technical Quality: 3 Clarity: 4 Questions for Authors: - I understand that re-training was modified for the MLP experiments due to computational constraints, but I'm very curious how doing the full correct version would further improve the results in Figure 4. In particular, given the nature of NNs to overfit, I would suspect that partial re-training would not be enough to fully "erase" the influences of deleted points. - Though the paper focuses on MISS, it would be nice to comment a bit on related problems of dataset selection and coreset selection (which is more of an LISS) - Figures 1 through 3 should have axes labels. Also, it would give more intuition if the examples in the original feature space were also shown. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Satisfactory Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 2zFQ for taking the time to review our paper and their constructive feedback. Please find below our point-by-point response. **Comparison to [1,2].** - We believe the main and the most fundamental difference between our work and [1] is the topics of study — our work focuses on MISS, whereas [1] concerns data selection. As explained in the general response, these two terms differ in objectives, applications, and techniques (while influence functions are used in both our work and [1], they are used for sub-sampling in [1], which is not a part of our work). Consequently, the conclusions in these works are not transferable (i.e., the sub-optimality of IF in data selection does not necessarily imply its failure in MISS). - Regarding [2], the underestimation of IF is only a by-product of our analysis in Section 3.1. In fact, we believe the main contribution of our work lies in Section 3.2, where we provide a comprehensive analysis of the non-additive structure of collective influence. The results in Section 3.1 are not particularly surprising given that influence functions are known to be fragile in prior works; we included it for the sake of completeness. - Finally, we have provided a detailed discussion of the theoretical research in MISS in Appendix A. **Re-training.** We conducted an additional experiment of full re-training following the reviewer’s suggestion. The results are demonstrated in Fig. 1 (right) of the attached PDF in the general response. The main takeaway is that switching from warm start to full re-training does not change our conclusion: the adaptive greedy algorithm consistently outperforms the vanilla one. As a side note, for the winning rate metric, the trend of full re-training does not fully match the one with warm start in Fig. 4 of the submission.
We believe randomness may have played a significant role in this discrepancy, particularly because full re-training involves more randomness compared to starting from fixed checkpoints. We will further investigate this and include the results of multiple trials with confidence intervals in the revision. **Data selection/coreset selection.** Please refer to our general response. We will make sure to clarify the differences between data selection and MISS in the revision. **Examples in the original feature space (Figs 1-3).** The samples in Figures 1 to 3 are 2d synthetic data that we crafted for demonstration purposes; they do not correspond to examples (e.g., images) in high-dimensional real-world datasets. The coordinates are generated from Eq. (7) plus some Gaussian noise. We hope this clarifies the reviewer’s question. **References** [1] Kolossov, Germain, Andrea Montanari, and Pulkit Tandon. "Towards a statistical theory of data selection under weak supervision." The Twelfth International Conference on Learning Representations. [2] Koh, Pang Wei W., et al. "On the accuracy of influence functions for measuring group effects." Advances in neural information processing systems 32 (2019).
Summary: The paper explores the problem of most influential subset selection, that is, selecting a subset of training data points whose removal would change a machine learning model the most. The paper develops further on previous works (most notably of Chatterjee and Hadi, and Kuschnig et al) and shows why the greedy heuristics do not work well in practice. Greedy heuristics operate by computing the influence (e.g., following Koh and Liang) and simply selecting the data points with highest individual influence. The paper shows how indirect interactions can lead to under- and over-estimation of influence. The paper then shows why a specific adaptive method works well in practice. The analysis of the paper is limited to OLS. While the paper does not propose a new influential subset selection algorithm, the analysis conducted in the paper adds important theoretical insights that are likely to be helpful to the research community. Strengths: 1. The paper does a very good job of setting up the problem and explaining the main results. The formalism is clean and easy to understand and the results are accompanied by intuitive explanations, e.g., in Figure 1. 2. The key limitation of greedy heuristic methods (considering the full training dataset) is explained quite well in Section 4. 3. While the paper does not quite add an algorithmic improvement, the theoretical analysis provided is an important building block in developing our understanding of the underlying mechanisms. Weaknesses: 1. Some of the analysis is limited to simpler models like OLS and takes assumptions like the invertibility of the $N=X^TX$ matrix (which may not be the case in real world datasets). This per se is not a big weakness. However, some discussion here on how we expect these results to behave on real world datasets where $N$ is non-invertible, or on models beyond OLS would be greatly helpful. 2. The result of Theorem 4.2 seems a bit limited and too tailored to the adaptive greedy algorithm.
Specifically, given the steps described in line 251 (remove the most influential sample, retrain the model, repeat), and the proposition 4.1 (order preservation between two points), it is not immediately clear if/how much Theorem 4.2 would extend to 3-MISS and beyond. 3. The title of the paper seems to be quite general and a bit mismatched with the content. While the content shows the weaknesses of existing influence estimation approaches (that too with an OLS model, in a 2-MISS setting and a very specific adaptive algorithm), reading the title feels as if the paper would address a much broader range of models and algorithms. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Section 3.1: To what extent is the finding in Section 3.1 limited to the specific target function used here, that is, the prediction on $x_{test}$? After all, reading the legend in Figure 1, there is not a huge difference on the target function value between points 1 and 8 (0.120 vs. 0.117). More broadly, do we expect the effect of high leverage points to be this large when we take “less noisy” functions like the sign of the prediction on $x_{test}$ or accuracy on the whole test set? 2. Line 171: About correcting the influence with its corresponding leverage score, what does that process look like precisely? 3. If the reviewer understood correctly, $\bar{A_{-S}}$ in Figure 3 shows the actual influence of the two approaches considered. Why not baseline this comparison with the “ground truth influence”, that is, the influence of the actual most influential data points? Clearly, this ground truth will be very difficult to compute as $k$ increases, but some understanding over smaller values of $k$ might be very helpful for the reader. 4. The results in Figure 4 are mostly as one would expect. However, in the case of MLP, the gap in $\bar{A_{-S}}$ and winning rate starts shrinking around $k=50$. Any reason why that might be happening?
How robust are the results to different choices of training data random seed? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The paper is generally quite open about the limitations. However, please see the point 3 in weaknesses about the generality of the title. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer btdJ for taking the time to review our paper and their constructive feedback. Please find below our point-by-point response. **Non-invertible $N$.** We assume $N$ to be invertible since influence functions rely on the uniqueness of the optimal solution. When this is violated, we can use $L_2$ regularization, which is a standard practice when applying influence functions to deep learning. Our analysis naturally extends to this case. More broadly, we have extended our analysis from OLS to weighted least squares and kernels. However, since these extensions did not provide additional insights, we have opted to present the analysis in its current form. We will ensure these potential generalizations are discussed in the revision. **Extension to $k$-MISS.** - We believe that our analysis being tailored to the adaptive greedy algorithm is not a weakness, as the purpose of Section 4 is to demonstrate that adaptivity helps in a non-trivial way. - We acknowledge that under the cancellation setup, our current results are restricted to 2-MISS. We highlight a few challenges: 1) Conceptually, unlike the amplification setup, where it is straightforward to accommodate an arbitrary number of samples, it is not immediately clear how to define “cancellation” for more than two samples. 2) Technically, proving the success of MISS is extremely difficult, as it requires considering all possible subsets, whose size grows exponentially with $k$. In fact, even proving the results for 2-MISS turns out to be highly non-trivial. We will clarify these challenges in the revision and leave them as future research. **Mismatch between title and content.** We fully understand that the reviewer was expecting a broader coverage of algorithms and a more general analysis beyond linear models. Nevertheless, - MISS is a relatively new and underdeveloped field.
To the best of our knowledge, all algorithms except the second-order group influence function (which we also discussed in Section 6), are based on influence-based greedy heuristics. In this regard, our work pioneers the study of MISS by systematically analyzing this dominant strategy. - While our theoretical results are limited to linear models and the cancellation setup only concerns 2-MISS, we have conducted experiments on non-linear models and general $k$’s to extend our theoretical analysis. Therefore, we believe the title is suitable for our work. However, if the reviewer has more concrete suggestions for the title, we would be very open to discussions. **Target functions.** - The numbers in Figure 1 are not the actual influences; they are the influence estimates computed by the influence function. We didn’t optimize these numbers hard as it is clear from the proof of Theorem 3.1 that the ratio between them can be arbitrary. - Regardless of the target function, the influence function underestimates the change in parameters by $1/(1-h_{ii})$ in linear regression. This is the main reason why samples with high leverage scores could incur a large effect when removed. Based on this fact, it is not hard to generalize our results to other target functions. - Taking the sign of the prediction as an example, we can slightly modify Figure 1 (by shifting the samples along the y-axis), so that the predictions of the original OLS and OLS without sample 8 are negative, but the prediction of OLS without sample 1 is positive. This means that removing sample 1 changes the sign of the prediction, but it does not have the largest influence estimate and is therefore not selected by the greedy algorithm. - We posit that the choice of target function is a very important consideration in influence functions, yet it has received insufficient attention in the literature. We have included a discussion in Section 6 in the hope of inspiring future research.
**Procedure of correcting the influence estimate.** In linear regression, it suffices to divide the influence estimate by $(1-h_{ii})$ to obtain the actual individual influence $A_{-\{i\}}$. Please refer to the paragraph under equation (9). **Comparison with ground truth influence.** We concur that comparing the influences achieved by vanilla/adaptive greedy algorithms to the ground truth is an interesting question. To this end, we conducted additional experiments on a small-scale synthetic dataset. More specifically, we focus on a binary classification task with logistic regression, and consider the target function $\phi(\theta) = p(z;\theta)$, where $z$ is the test sample and $p(z; \theta)$ is the softmax probability assigned to the correct class. The synthetic data is generated from a mixture of Gaussians: given $\sigma$, we sample $25$ training data points from $\mathcal{N}(c_i, \sigma I_2)$ for each $i\in \{0,1\}$, where $c_0 = (1, 0)$ and $c_1 = (-1, 0)$. The test sample $z$ is uniformly sampled from one of the Gaussians. We conducted the experiments for various $\sigma$’s, and for each $\sigma$, repeated the experiment 20 times using different random seeds to generate the train/test data. Finally, we report the averaged ratio of the obtained influence over the ground-truth influence. The results are demonstrated in Fig. 2 of the attached PDF in the general response. In summary, the adaptive greedy algorithm outperforms the vanilla counterpart on all $\sigma$’s, and recovers the ground truth subset when $\sigma$ is large. **Robustness of results to randomness in training.** We believe the drop around $k=50$ is due to randomness. To verify this hypothesis, we conducted two experiments targeting at different randomness in the process. We refer the reviewer to the general response for the detailed setting and the general takeaways of the experiments. Particularly, the first experiment (randomness in evaluation, Fig. 
1 top left) clearly demonstrates that the drop in Fig. 4 of the submission is only due to randomness, as we have fixed the selected subset but the new figure does not exhibit the same drop. --- Rebuttal Comment 1.1: Title: Raising my score Comment: Thank you for the detailed response. Most of my concerns were addressed and I am raising my score to accept. It would be great to add the additional discussion here on target functions and robustness to the final version of the paper.
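The leverage correction described in the rebuttal above — dividing the influence estimate by $(1-h_{ii})$ to recover the actual leave-one-out effect in linear regression — can be verified numerically. The following is an editorial sketch (not the authors' code) that compares the corrected influence-function estimates against exact refitting:

```python
import numpy as np

def loo_effects(X, y):
    """Illustrative check (OLS only) that dividing the influence-function
    estimate by (1 - h_ii) recovers the exact leave-one-out parameter
    change theta_{-i} - theta."""
    XtX_inv = np.linalg.inv(X.T @ X)
    theta = XtX_inv @ X.T @ y
    resid = y - X @ theta
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverage scores h_ii
    infl_est = -(X @ XtX_inv) * resid[:, None]    # first-order IF estimate
    corrected = infl_est / (1.0 - h)[:, None]     # divide by (1 - h_ii)
    # Exact LOO change, computed by refitting without each sample.
    exact = np.empty_like(infl_est)
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        theta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        exact[i] = theta_i - theta
    return corrected, exact
```

The agreement is exact (up to floating point) by the Sherman–Morrison identity; since $0 < h_{ii} < 1$, the uncorrected estimate always understates the magnitude of the true effect, which is the underestimation discussed in Section 3.1 of the paper.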
Rebuttal 1: Rebuttal: We express our sincere gratitude to the reviewers for their detailed review. It is encouraging to see that the reviewers acknowledged the significance of the topic of study: “Addresses a timely important problem”, and “The paper is therefore not only relevant for the area of data influence but also data-centric AI overall”. It is also great to hear that our writing was described as “Exceptional exposition for the most part”, “very lucid and clear narrative”, and “does a very good job of setting up the problem and explaining the main results”. The following comment addresses some common questions raised by multiple reviewers. We will attend to specific feedback from the reviewers in our individual responses. **Related literature.** (Reviewer 2zFQ, 3kwV) - We would like to clarify that data selection/coreset selection are distinct research areas compared to most influential subset selection (MISS), despite their similar names. - In terms of *objectives*, data selection aims to identify the most informative training examples for effective learning or estimation, whereas MISS aims to identify a set of training samples that will maximize the alteration of model behaviors. - In terms of *applications*, data selection mostly concerns data efficiency, whereas MISS is typically used for diagnosis (e.g., whether the inferential results based on machine learning models are robust to small variations in data). - In terms of *techniques*, MISS is largely built upon influence functions, whereas data selection is typically centered around sub-sampling. - Due to these differences, data selection/coreset selection are usually not mentioned in the literature of influence function/influential subset selection. Nevertheless, we are aware of the inspiring work [1] mentioned by Reviewer 2zFQ, and we have already discussed the broader context of MISS in Appendix A, including a brief discussion of data selection.
We will further clarify these differences and move the literature review to the main text in the camera-ready version.

- Finally, Data Shapley and influence functions are two different approaches to modeling the influence of individual training samples. The influence function serves as an approximation of leave-one-out (LOO) retraining, whereas Data Shapley satisfies the equitable valuation conditions. Both methods have their own merits and are not directly comparable.

**Robustness of the MLP experiment.** (Reviewer btdJ, 3kwV)

We appreciate the reviewers’ suggestion to analyze the robustness of the results to randomness in training. We conducted additional experiments using different random seeds, with the results presented in the attached PDF. Concretely, we examined the randomness in both the evaluation step (Fig. 1, left) and the selection step (Fig. 1, middle). In both cases, randomness arises from neural network training: to select the most influential subset, the adaptive greedy algorithm repeatedly retrains the MLP after selecting and excluding each new sample; to evaluate a selected subset, we train an MLP on the training set with that subset excluded. When examining the randomness in the evaluation step, we held the selected subsets fixed (the same ones used to produce Fig. 4 in the submission) and ran the evaluation with a different random seed. When examining the randomness in the selection step, we ran MISS with a different random seed and then evaluated the newly obtained subsets; the latter scenario involves more randomness. In summary, our results are generally consistent and robust across different random seeds. Specifically, the adaptive greedy algorithm consistently outperforms the vanilla greedy algorithm, though there are some fluctuations in the winning rate (e.g., Fig. 1, middle), which we attribute to the inherent randomness in model training.
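The vanilla vs. adaptive greedy distinction described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the real method retrains an MLP and scores samples with influence functions, whereas here `target` is a trivial stand-in (the mean of the remaining training values) and each sample is scored by exact leave-one-out recomputation.

```python
import numpy as np

def target(train_vals):
    # Stand-in for "train a model and evaluate a target function";
    # here simply the mean of the remaining training values.
    return float(np.mean(train_vals)) if len(train_vals) else 0.0

def loo_scores(vals, idx):
    # Leave-one-out influence of each remaining sample on the target.
    base = target(vals[idx])
    return {i: base - target(vals[[j for j in idx if j != i]]) for i in idx}

def vanilla_greedy(vals, k):
    # Score every sample once against the full set, then take the top-k.
    scores = loo_scores(vals, list(range(len(vals))))
    return sorted(scores, key=lambda i: abs(scores[i]), reverse=True)[:k]

def adaptive_greedy(vals, k):
    # Re-score (i.e., "retrain") after each selection and exclusion.
    remaining, chosen = list(range(len(vals))), []
    for _ in range(k):
        scores = loo_scores(vals, remaining)
        best = max(scores, key=lambda i: abs(scores[i]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

The adaptive variant recomputes the scores after every exclusion, mirroring the repeated retraining described above; the vanilla variant scores all samples once against the full set.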
However, due to limited time and computational resources, we could only run each experiment once during the rebuttal phase. We will include more results with multiple trials in the revision. Pdf: /pdf/990b41576f8a27b6392f48945caf6aac3d0320e1.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null